


Monday, February 4, 2013

How are unit and integration testing done in EiffelStudio?


- EiffelStudio provides a complete, well-integrated development environment.
- This environment is well suited to both unit testing and integration testing.
- EiffelStudio lets you create software systems and applications that are scalable, robust and, of course, fast.
- With EiffelStudio, you can model your application just the way you want.
- EiffelStudio has effective tools for capturing your thought process as well as the requirements.
- Once you are ready to implement your design, you can build directly on the model that you have already created.
- Both the creation and the implementation of the models can be done in EiffelStudio.
- There is no need to set one aside and start over.
- Further, you do not need any external tools to go back and modify the architecture.
- EiffelStudio provides all the tools.
- EiffelStudio provides round-trip engineering by default, in addition to productivity and test-metrics tools.
- EiffelStudio provides the facility for integration testing through its component called AutoTest.
- With AutoTest, software developers can build sophisticated unit and integration test suites that remain quite simple in their construction.
- With AutoTest, the developer can execute and test Eiffel class code at the feature level.
- At this level, the testing is considered unit testing.
- However, if the code is executed and tested against entire class systems, the testing is considered integration testing.
- Executing this code also executes the contracts of the features and attributes involved.
- AutoTest thereby serves as a means of testing the assumptions made in the design, as expressed by the conditions of the contracts.
- Therefore, unit and integration testing need not re-test, through separate test oracles or assertions, what is already specified as contracts in the class texts.
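The idea that contracts double as test oracles can be sketched in C with assertions. The routine and its conditions below are a hypothetical illustration, not EiffelStudio code; in Eiffel, the `require` and `ensure` clauses would live in the class text itself:

```c
#include <assert.h>

/* Hypothetical routine with an explicit precondition and postcondition,
   mimicking how Eiffel contracts are checked every time a test runs. */
int safe_div(int a, int b)
{
    assert(b != 0);               /* precondition ("require"): divisor non-zero */
    int q = a / b;
    assert(q * b + a % b == a);   /* postcondition ("ensure"): division identity */
    return q;
}
```

Any test that calls `safe_div` exercises both conditions automatically, so the test itself does not need to restate them.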

- AutoTest lays out three methods for creating test cases for unit and integration testing:
  1. Manually created tests: AutoTest creates a test class containing the test framework, so the user only needs to fill in the code for the test itself.
  2. Extracted tests: these are created from a failure of the application at run time. Whenever an unexpected failure occurs in the system under test, AutoTest uses the information provided by the debugger to produce a new test case that reproduces the calls and state that made the system fail. After the failure has been fixed, the extracted test is added to the complete suite to guard against a recurrence of the problem.
  3. Generated tests: the user supplies the classes for which tests are required, plus some additional information AutoTest needs to control the generation of the tests. The tool then calls the routines of the target classes with randomized argument values. Whenever a class invariant or postcondition is violated, a single new test is created that reproduces the failing call.
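The random-call idea behind generated tests can be sketched in C. The routine under test (`clamp`) and its postcondition are made-up examples; real AutoTest works on Eiffel classes and checks their actual contracts:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical routine under test: clamp a value into [lo, hi]. */
int clamp(int v, int lo, int hi)
{
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}

/* Call the routine repeatedly with randomized arguments, checking its
   "postcondition" after every call, as a generated test would. */
int generated_test(unsigned seed, int calls)
{
    srand(seed);
    for (int i = 0; i < calls; i++) {
        int lo = rand() % 100;
        int hi = lo + rand() % 100;      /* guarantees lo <= hi */
        int v  = rand() % 300 - 100;
        int r  = clamp(v, lo, hi);
        if (r < lo || r > hi)            /* postcondition violated: */
            return 0;                    /* this call would become a new test case */
    }
    return 1;                            /* every call satisfied the contract */
}
```

In AutoTest the violating call is captured as a reproducible test; here the sketch merely reports whether a violation occurred.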


Thursday, January 31, 2013

Explain EiffelStudio? What technology is used by EiffelStudio?


EiffelStudio provides the development environment for the Eiffel programming language. Both EiffelStudio and the Eiffel language itself have been developed by Eiffel Software. At the time of writing, version 7.1 is the current release.

- EiffelStudio consists of a number of development tools, namely:
  1. Compiler
  2. Interpreter
  3. Debugger
  4. Browser
  5. Metrics tool
  6. Profiler
  7. Diagram tool
- All these tools are integrated under the single user interface of EiffelStudio.
- This user interface is in turn based on a number of UI paradigms specific to EiffelStudio.
- In particular, browsing is done effectively through its 'pick-and-drop' mechanism.
- Eiffelstudio supports a number of platforms including the following:
  1. Windows
  2. Linux
  3. Mac OS
  4. VMS and
  5. Solaris
- This Eiffel Software product comes with a GPL license.
- However, a number of other licenses are also available.
- EiffelStudio is developed as an open source project.
- Beta versions of the next release are made available to the public at regular intervals.
- The Eiffel community participates quite actively in the development of the product.
- A list of the open projects has even been made available on the Origo website.
- That site is hosted at ETH Zurich.
- Along with the list, information about the discussion forums, the base source code available for checkout, etc. has also been put up.
- Version 7.1, the latest release, came out in June 2012, and successive beta releases were made available very soon after that.

Technology behind EiffelStudio

The compilation technology used by EiffelStudio, called Melting Ice, is unique to Eiffel Software and is their trademark.
- This technology integrates interpretation of the changed elements with proper compilation of the rest.
- It offers a very fast turnaround time.
- It also means that the time taken for recompilation depends on the size of the change to be made, not on the overall size of the program.
Although such "melted" programs can be delivered readily, a finalization step is considered important before the product is released.
- Finalization is a highly optimized compilation process; it takes longer, but the executable it generates is optimized.
- Interpretation in EiffelStudio is carried out through a bytecode-oriented virtual machine.
- The compiler generates either C or .NET CIL.

History of Eiffelstudio

- The roots of EiffelStudio date back to the first implementation of Eiffel by Interactive Software Engineering Inc., the company that preceded Eiffel Software.
- This first implementation appeared in 1986.
- The current technology used in EiffelStudio evolved from an earlier technology called 'EiffelBench', which saw its first use in 1990.
- EiffelBench was used along with version 3 of the Eiffel programming language.
- In 2001, EiffelBench was renamed to what we know now as 'EiffelStudio'.
- This was also the year when the environment gained compatibility with Windows and a number of other platforms.
- Originally, it was only available for Unix platforms.
- Since 2001, EiffelStudio has seen some major releases with new features:
  1. Version 5.0 (July 2001): the first proper version; it integrated the EiffelCase tool with EiffelBench as its diagram tool.
  2. Version 5.1 (December 2001): support for .NET applications; also called Eiffel#.
  3. Version 5.2 (November 2002): extended debugging capabilities, an improved mechanism for interfacing with C and C++, EiffelBuild, round-tripping abilities, etc.



Wednesday, January 30, 2013

Give an overview of The diagram Tool of EiffelStudio?


EiffelStudio is a rich combination of a number of development environment tools, namely:
  1. Compiler
  2. Interpreter
  3. Debugger
  4. Browser
  5. Metrics tool
  6. Profiler
  7. Diagram tool
In this article we shall discuss the last tool in this list, the diagram tool.

EiffelStudio's diagram tool provides a graphical view of the software structures. This tool can be used effectively in both:
  1. The forward engineering process: here it serves as a design tool that uses graphical descriptions to produce the software.
  2. The reverse engineering process: here it automatically produces graphical representations of program texts that already exist.
The diagram tool guarantees that changes made in either of the two processes above are integrated with the other; this is called round-trip engineering.
It uses either of the following two graphical notations:
  1. BON (Business Object Notation) and
  2. UML (Unified Modeling Language)
By default, the notation used is BON. EiffelStudio can display several views of the classes and their features.
It provides various types of views, such as:
1. Text view: displays the full text of the program.
2. Contract view: displays only the interface, but with the contracts.
3. Flat view: also displays the inherited features.
4. Clients view: displays all the classes and features that depend on a given class or feature.
5. Inheritance history: shows how a feature is affected as it goes up or down the inheritance structure.
There are a number of other views available as well. EiffelStudio relies heavily on a user interface paradigm based on 'holes', 'pebbles' and other development objects.

Software developers using Eiffelstudio have to deal with abstractions that represent the following:
- Classes
- Features
- Breakpoints
- Clusters
- Other development objects

- Developers deal with these abstractions in much the same way that object-oriented programs deal with objects at run time.
- In EiffelStudio, wherever a development object appears in the interface, it can be picked, irrespective of how it is visually represented (what name is given to it, what symbol, and so on).
- To pick a development object, you just right-click on it.
- The moment you click on it, the cursor changes to a 'pebble', a special symbol that corresponds to the type of the object, such as:
  1. A bubble (ellipse) for a class
  2. A dot for a breakpoint
  3. A cross for a feature, and so on.
- As the cursor moves, a line is displayed from the object's original position to its current position.
- The object can be dropped at any place whose symbol matches the pebble.
- An object can also be dropped in a window that is compatible with it.
- Multiple views can be combined to make it easy to browse through the complex structure of a system.
- This also makes it possible to follow transformations such as renaming, undefinition and redefinition that are applied to features during inheritance.
- The diagram tool of EiffelStudio is a major aid in creating applications that are robust, scalable and fast.
- It helps you to model your application just the way you want.
- It helps in capturing your requirements as well as your thought process.
- The tools of EiffelStudio ensure that you do not have to use separate tools to make changes to the architecture of the system while designing.



Friday, October 26, 2012

How to run the silk scripts?


Silk Test, as we know, is a test automation tool from Borland, launched in 2006. The tool facilitates automated regression testing and functional testing. However, the credit for its original creation goes to Segue Software.
Silk Test offers various clients, as described below:
  1. Silk Test Classic: this client uses a domain-specific language called 4Test for writing the test automation scripts. Like C++, 4Test is an object-oriented language and makes use of object-oriented concepts such as:
a)   Inheritance
b)   Classes and
c)   Objects
  2. Silk4J: this client enables test automation in Eclipse, using Java as the scripting language.
  3. Silk4NET: this client likewise enables test automation in Visual Studio, using VB.NET or C# as the scripting language.
  4. Silk Test Workbench: this client enables testers to carry out automation testing at a visual level, as well as by using VB.NET as the scripting language.

What kind of files does a Silk Test script consist of?

In this article we are going to see how test scripts are executed in Silk Test.
- A basic or typical Silk Test script consists of two files, namely:
  1. An include file and
  2. A script file
- The include file is saved with the extension .inc and is used in particular for declaring the following:
  1. Window names
  2. Window objects
  3. Variables
  4. Constants
  5. Structures
  6. Classes and so on.
- The second file, the script file, is where the scripts themselves are written.
- It is saved with the extension .t and is used for defining the body of the scripts.
- Test cases meeting the various test conditions are defined in this script.
- The two different file types are used to keep the code as clear as possible.
- If a script file does not contain any test cases, it may compile, but it cannot be executed.
- If you try to execute it, an error will be generated saying that there are no test cases.
- Only a script file that contains at least one test case can be run.
- Before you run the script, always make sure that you keep the declarations in separate files:
  1. One for the declaration of the objects, and
  2. One for the scripts, which use the declaration file.

Steps for running Test Scripts

After the two files have been prepared, you can proceed to compilation. Below are the steps to follow for running the test scripts:
  1. Launch the Silk Test automation tool.
  2. Open the script file of the script to be executed.
  3. Compile the script using the compile option on the menu bar.
  4. Once compilation of the file is complete, the status of the script can be checked via the progress status.
  5. Any errors present in the script will be displayed at the end of the compilation process.
  6. Now run the scripts by clicking the run option on the menu bar.
  7. For running the test scripts you have two options: you can run all of them in a single stretch, or you can run them selectively.
  8. If you opt for the latter, you need to specify which tests are to be executed.
  9. After the selection, you can again give the run command.


Thursday, March 22, 2012

Loop testing is a white box testing technique - Explain?

Loop testing is one of the white box testing techniques, and it therefore requires deep knowledge of the software system or application under test. The loop testing methodology has been designed exclusively for checking the validity of iterative constructs, which are nothing but loops.

Types of Loop Constructs
These loop constructs are of 4 types, as mentioned below:
1. Unstructured loops
2. Simple loops
3. Nested loops and
4. Concatenated loops

Tests applied to different Loop Constructs
Now we shall define some of the tests that can be applied to the above mentioned types of loop constructs under the context of the loop testing:

1. For unstructured loops only one thing is possible which is that they should be redesigned in order to form a structured construct and then can be tested accordingly.

2. For simple loops, the maximum number of allowable passes (n) through the loop is specified first, and then the following tests are applied:

(a) Skipping the loop entirely.
(b) Making only one pass through the loop.
(c) Making two passes through the loop.
(d) Making m passes through the loop, where m < n.
(e) Making n-1, n and n+1 passes through the loop.

3. For nested loops, the testing approach for simple loops is simply extended; however, the number of test cases grows geometrically with the number of nested loops and the level of nesting. Usually the following steps are followed:

(a) Testing starts at the innermost loop.
(b) All other loops are set to their minimum possible values.
(c) Simple loop tests are conducted for the innermost loop, while the outer (nesting) loops are held at their minimum values until the testing of the innermost loop is complete.
(d) More tests are added for out-of-range and excluded values.
(e) Once testing of the innermost loop is complete, that loop is set to typical values and the testing moves outwards, while the remaining nesting loops are still held at their minimum values.
(f) Testing continues in this manner until all the loops have been tested.

4. For concatenated loops, the approach defined for testing simple loops can also be used, but only if the loops are independent of each other. If, for example, the final loop-counter value of the first loop is used as the initial value of the second, then the two loops are dependent on each other, and the simple loop approach cannot be followed for them.
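The simple-loop tests (a) to (e) above can be sketched in C. The loop under test below, and the pass counts chosen, are a hypothetical example; the variable n controls how many passes the loop makes:

```c
#include <assert.h>

/* Hypothetical loop under test: sum the integers 1..n.
   The argument n is the number of passes through the loop. */
int sum_to(int n)
{
    int total = 0;
    for (int i = 1; i <= n; i++)
        total += i;
    return total;
}
```

A simple-loop test suite would then call it with n = 0 (skip the loop), n = 1 (one pass), n = 2 (two passes), some m below the maximum, and values around the maximum allowable pass count. A pass count one beyond the real maximum is exactly the case that exposes boundary bugs such as off-by-one errors or out-of-range accesses.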

More about Loop Testing

- It has been observed many times that most semantic bugs reside in loops.

- Path testing also becomes difficult, since a great many paths are generated via a loop, and a faulty loop leads to faulty paths, which makes it even harder to track down the bug.

- Some testers believe it is enough to test a loop only twice, but this is not good practice.

- A loop should be tested at the following three points:
a) At the entry of the loop,
b) During the execution of the loop, and
c) At the exit of the loop.

- Loop testing aims at exercising a resource multiple times by executing it inside a loop, with the whole process controlled by a diagnostic controller.

- However, one rule has been defined for loop testing: the user can interact only at the entry and exit of the loop, and nowhere in between.


Friday, December 16, 2011

What are different types of white box testing? Part 2

White-box testing (also known as clear box testing, transparent box testing, glass box testing or structural testing) can be defined as a method for testing software applications or programs. It includes techniques used to test the program or algorithmic structure and the internal working of a particular software application, as opposed to its functionality or the results of its black box tests.

White-box testing can be defined as a methodology for verifying whether the source code of the software system works as expected or not. White box testing is a synonym for structural testing.

Unit testing and integration testing were already discussed in the previous post.

Types of White Box Testing



- Function level testing:
This white box testing is carried out to check the flow of control of the program. Adequate test cases are designed to check the control flow and coverage. During function-level white box testing, simple input values can be used.

- Acceptance level testing:
This type of white box testing is performed to determine whether all the specifications of a software system have been fulfilled or not. It can involve various other kinds of tests, such as performance tests.

- Regression level testing:
This type of white box testing can also be called retesting. It is done after modifications have been made to the software and hardware units. Regression-level white box testing ensures that the modifications have not altered the working of the software and have not given rise to new bugs and errors.

- Beta level testing:
Beta testing is the phase of software testing in which a selected audience tries out the finished software application or product. It is also called pre-release testing.


Thursday, December 15, 2011

What are different types of white box testing? Part 1

White-box testing (also known as clear box testing, transparent box testing, glass box testing or structural testing) can be defined as a method for testing software applications or programs. It includes techniques used to test the program or algorithmic structure and the internal working of a particular software application, as opposed to its functionality or the results of its black box tests.

White-box testing can be defined as a methodology for verifying whether the source code of the software system works as expected or not. White box testing is a synonym for structural testing.

There are only certain levels at which white box testing can be applied. These levels are listed below:
- Unit level
- Integration level
- System level
- Acceptance level
- Regression level and
- Beta level

- Unit level testing:

This type of white box testing is used for testing the individual units or modules of the software system; sometimes it also tests a group of modules. A unit is the smallest part of a program and cannot be divided further into smaller parts. Units form the basic structure of a software system. Unit-level white box testing is performed to check whether or not a unit is working as expected, so that later it can be integrated with the other units of the system. It is important to test units at this level because after integration it becomes much harder to find errors. Often only the software engineer who wrote the code knows where the potential bugs are hiding; others cannot easily track them down, so such flaws remain largely private to the writer. Unit-level white box testing can find up to 65 percent of the total flaws.
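A minimal unit test in C might look like the sketch below. The unit (`count_char`) and its cases are hypothetical; the point is that the unit is exercised in isolation, with normal, boundary and negative inputs:

```c
#include <assert.h>

/* Hypothetical unit under test: count occurrences of c in string s. */
int count_char(const char *s, char c)
{
    int n = 0;
    for (; *s; s++)
        if (*s == c)
            n++;
    return n;
}

/* The unit test exercises the unit in isolation:
   a normal input, a boundary input (empty string),
   and a character that never occurs. */
void test_count_char(void)
{
    assert(count_char("banana", 'a') == 3);
    assert(count_char("", 'a') == 0);
    assert(count_char("banana", 'z') == 0);
}
```

Because the white box tester can see the loop and the branch inside the unit, the cases are chosen to drive both outcomes of the `if` and the zero-iteration case of the loop.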

- Integration level testing:
In this type of white box testing, the software components and the hardware components are integrated and the program is executed. This is done mainly to determine whether the software and hardware units work together in harmony. It includes designing test cases that check the interfaces between the two components.


Friday, November 25, 2011

What are different characteristics of visual testing?

Visual testing is a frequently used technique for testing software. But what is it, actually?

Visual testing is categorized as a non-destructive testing technique. Non-destructive testing includes several other techniques as well. As the name suggests, non-destructive testing techniques do not involve invasive probing of the software structure, and the same holds for visual testing, or "VT" as it is commonly abbreviated.

As its name suggests, visual testing has everything to do with the visual examination of the program or its source code. Anything that is to be tested is first examined visually; the other operations are carried out later. The same procedure is used for software systems and programs: they are first checked visually, and later they are tested with white box testing, black box testing and so on. Even though visual testing sounds like an unsophisticated method of testing, it is quite effective.

Many errors and flaws in the source code can be spotted during visual testing. It is more effective when a larger number of professionals carry out the visual examination. Visual testing checks how sound the program or software application is before it is put to use. Visual testing sounds very simple, but it requires quite a lot of knowledge. Even though it is a very primitive kind of testing, it has a lot of advantages.

A few have been listed below:

- Simplicity
Visual testing is very easy to carry out. One does not require any complex techniques or software.

- Rapidity
Apart from being simple, it is faster than other kinds of testing techniques. One does not require any extra effort.

- Low cost
Visual testing is priced very low. You pay only for hiring professionals to examine your software and nothing else. If it were examined by the author of the code, there would be virtually no cost at all.

- Minimal training
The individuals or professionals testing the software visually need only minimal training, just enough to spot big blunders in the program.

- Equipment requirement
It requires no special equipment.

Visual testing can be performed at almost any time. You can visually examine a program while modifying, manipulating or executing it.

In contrast to these advantages, there are also limitations to visual testing. They are discussed below:

- Visual testing can detect only the mistakes that appear on the surface of the program. It cannot discover discrepancies hidden in the internal structure of the program.
- The quality of visual testing also depends on the eyesight of the tester.
- The extent of visual testing depends on the fatigue of the person inspecting. With prolonged testing, the examiner may start getting headaches and eye strain.
- During visual examination there is also a lot of distraction from the surrounding environment. It is impossible for a person to devote himself entirely to visual testing without paying any attention to what is happening around him.

Visual testing works well when it comes to checking the size or length of the program, determining its completeness, making sure the correct number of units or modules are present, and inspecting the format of the program; basically, ensuring that the presentation of the program is good.

Visual testing spots the big mistakes. They can be corrected at a very low level of testing, which in turn reduces the future workload. Requirements include:
- A vision test for the inspector
- Measurement of light with a light meter.

The inspector only needs to establish visual contact with the part of the program that is to be tested. Visual testing also gives an idea of how to make a program better.



Tuesday, November 22, 2011

What are different characteristics of white box testing?

White-box testing (also known as clear box testing, transparent box testing, glass box testing or structural testing) can be defined as a method for testing software applications or programs.

White box testing includes techniques used to test the program or algorithmic structure and the internal working of a particular software application, as opposed to its functionality or the results of its black box tests. White-box testing includes the design of test cases from an internal perspective of the software system.

Expert programming skills are needed to design test cases and to understand the internal structure of the program; in short, to perform white box testing. The tester performing white box tests feeds certain specified data to the code and checks whether the output is as expected or not. There are only certain levels at which white box testing can be applied.

These levels are listed below:
- Unit level
- Integration level
- System level
- Acceptance level
- Regression level and
- Beta level

Even though there is no problem in applying white box testing at all six levels, it is usually performed at the unit level, which is the basic level of software testing.

White box testing is required to test paths through source code, between systems and subsystems, and also between different units during the integration of the software application.

White box testing can effectively reveal hidden errors and grave problems. But it is incapable of detecting missing requirements and unimplemented parts of the given specifications. White box testing basically includes four kinds of basic and important testing, listed below:

- Data flow testing
- Control flow testing
- Path testing and
- Branch testing
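Branch testing, the last technique listed above, can be illustrated with a small C sketch. The function and its thresholds are hypothetical; the point is that the tester, seeing the decisions in the code, designs inputs so that every true and false outcome of every decision is taken at least once:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical function with two decision points. */
const char *grade(int score)
{
    if (score < 0 || score > 100)   /* decision 1: reject invalid input */
        return "invalid";
    if (score >= 50)                /* decision 2: pass or fail */
        return "pass";
    return "fail";
}
```

Full branch coverage here needs at least four cases: a score below 0 and one above 100 (decision 1 true via each operand), a score of 50 or more (decision 2 true), and a valid score below 50 (both decisions false).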

In the field of penetration testing, white box testing can be defined as a methodology in which the attacker has total knowledge of the system being attacked. So we can say that white box testing is based on the question "how does the system work?". It analyzes the flow of data, information and control, coding practices, and the handling of errors and exceptions in the software system.

White box testing is done to check whether the system is working as intended; it also validates the implemented source code for its control flow, design and security functionality, and checks for the vulnerable parts of the program.

White box testing cannot be performed without access to the source code of the software system. It is recommended that white box testing be performed at the unit testing phase.

White box testing requires knowledge of the insecurities, vulnerabilities and strengths of a program.

- The first step in white box testing is to analyze and comprehend the software documentation, the software artifacts and the source code.
- The second step requires the tester to think like an attacker, i.e. to consider in what ways he or she could exploit and damage the software system.
- In the third step, the white box testing techniques are implemented.

These three steps need to be carried out in harmony with each other; otherwise, the white box testing will not be successful.

White box testing is used to verify the source code. To carry it out, one requires full knowledge of the logic and structure of the system's code. Using white box testing, one can develop test cases that exercise logical decisions and paths through a unit, operate loops as specified, and ensure the validity of the internal structure of the software system.



Thursday, October 6, 2011

Some details about Pointers in C...

There was a need for a kind of variable that could store the address of another variable, so that the value of the variable could be accessed directly through its memory address and manipulated more easily and quickly. The "pointer" is the variable invented for this purpose. It can be defined as a variable which stores the address of another variable. A pointer can be declared easily as shown below:

int *ptr1;

The "int" keyword tells the compiler that the pointer "ptr1" will store the memory address of an integer-type variable. The "*" symbol (asterisk) tells the compiler that the variable "ptr1" is actually a pointer, and that it will store the address of the variable it points to, irrespective of the number of bytes needed to store that address. The pointer "ptr1" is said to point to an integer variable. In the above declaration we did not provide ptr1 with a value, i.e. it holds nothing meaningful yet. If the declaration is made outside a function, the pointer "ptr1" will be initialized to a value that does not point to any variable or object in a C program. Pointers initialized in this manner are said to have a null value, and are called "null" pointers. Null pointers are very useful in many C programs, as they help prevent system crashes. A null pointer is written using a macro, the "NULL" macro. You set the value of a pointer using this macro through an assignment statement as shown below:

ptr1 = NULL;

then the pointer now has a null value, i.e., it has become a null pointer. A null pointer can be tested using the statement given below:

if (ptr1 == NULL)

Now suppose we want to store the address of an integer variable a in the pointer "ptr1" declared above. We use the following statement:

ptr1 = &a;

Before proceeding further, we should know that every variable has two values attached to it: the "l-value" and the "r-value". The l-value is the address at which the variable is stored; the r-value is the value stored there. The function of the "&" operator is to retrieve the l-value (address) of the variable a, and the assignment operator copies that address into the pointer "ptr1". The pointer "ptr1" is now said to point to the variable "a".

The "*" operator is also called the dereferencing operator and is used for dereferencing as follows:

*ptr1 = 10;

The above statement copies the value 10 into the variable a, whose address is held by the pointer "ptr1". The value can then be read back through the pointer as shown below:

printf("%d\n", *ptr1);

The above statement prints on the screen the value stored at the address held by the pointer.

Pointers are essential in modern programs. They solve two basic problems while avoiding many others that may follow. First, they make it easy to share information and data between different sections of memory. Second, they make complex structures possible: linked data structures such as linked lists, queues and binary trees are built from pointers. Pointers reduce the complexity of a program, and there are many things that can be done only with pointers. Pointers are without doubt a powerful C construct.


Wednesday, October 5, 2011

Some details about Pointers to Arrays in C

A pointer is a variable that holds a memory address, usually of another variable in memory. Pointers are one of the most powerful features of the C language, and their correct understanding and use is critical to successful programming in C. Pointers support C's dynamic memory allocation routines. They provide the means through which the memory location of a variable can be directly accessed and hence manipulated in the required way. Lastly, pointers can improve the efficiency of certain routines. Arrays and pointers are closely linked: C treats the name of an array as if it were a pointer to its first element.
Consider the following code snippet:


int *a;       // a is a pointer to an integer
int age[10];  // age is an array holding ten integers
a = age;      // makes a point to age[0], since the array name
              // acts as a pointer to the first element


In the above code, a is a pointer and age is an array holding 10 integers. The pointer a is made to point where age points. Since the name of an array is a pointer to its first element, age + 1 gives the address of the second element of the array, age + 2 gives the address of the third element, and so forth.

Pointers also may be arrayed like any other data type. To declare an array holding 10 integer pointers, the declaration would be as follows:

int *ip[10]; // array of 10 int pointers

After this declaration, contiguous memory is allocated for 10 pointers that can point to integers. Each of the pointers, i.e., the elements of the pointer array, may now be initialized. We can use the following statement:

ip[3] = &a;

To find the value of a, you can use the statement given below:
*ip[3];

The name of an array is actually a pointer to the first element of the array, and the same holds true for an array of pointers. Most often, an operation is carried out on successive elements of an array using a loop over the array indices. Consider the following code fragment that initializes an array to 0:


const int abc = 20;
int arr[abc];
for (int i = 0; i < abc; i++)
    arr[i] = 0;


To execute the above code snippet, the compiler computes the address of arr[i] every time by multiplying the index i by the size of an array element. That is, the compiler performs a multiplication for each element. A faster alternative is to use a pointer, as shown below:


const int abc = 20;
int arr[abc];
int *arr2;
for (arr2 = arr; arr2 < &arr[abc]; arr2++)
    *arr2 = 0;


Now the compiler only needs to evaluate the subscript once, when setting up the loop, and so saves 19 multiplications. It is therefore faster to use an element pointer than an index when you need to scan arrays in a program. Pointers in C are defined by their data type and value: the data type determines by how much the pointer moves when incremented or decremented, and the value is the address of the memory location to which the pointer points. If you are using array notation, you do not need to pass the dimensions.


Thursday, September 15, 2011

Some details about Pointers and Structures in C...

A pointer can be defined as a variable pointing to another variable, i.e., a variable storing the location or address of another variable. Pointers can be used to point to any data type: there are object pointers, function pointers, structure pointers and so on. Like any other pointer, structure pointers are a very useful tool in C programming. Structure pointers are easy to declare, and are declared like any other kind of pointer by putting a "*" sign before the name of the variable. See the example below:

struct address *a1, *a2;
a1 = &b2;
a2 = &b3;

where b2 and b3 are actual variables of type struct address. Using the assignment statement given below, you can copy the contents of the structure pointed to by a2 into the structure pointed to by a1:

*a1 = *a2;

Any member of a structure can be accessed using a pointer in the following way:
A->b

Here A is a pointer pointing to a structure and b is a data member of the structure being pointed to. There is another way to access a member of a structure through a pointer, given below:

(*A).b;

Both statements are equivalent. The use of structure pointers is very common and useful. Be careful while dealing with structure pointers, as with any other pointer: the precedence of operators matters a lot in C. If we omit the parentheses around *A, an error will be generated and the code will not compile, since the "." operator has higher precedence than the "*" operator. Using so many parentheses can be tedious, so C provides a shorthand notation: "(*A)." can also be written as given below:

A ->

This is equivalent to (*A). but takes fewer characters. We can also create pointers to structure arrays. A lot of space can be saved by declaring an array of pointers instead of an array of structures, since a structure is then created only when its values are actually entered. Structures can even contain pointers themselves, as shown below:

typedef struct
{
    char a[10];
    char *b;
} c;

c d;
char e[10];
fgets(e, sizeof e, stdin);  /* gets() is unsafe and takes no length */
d.b = (char *) malloc(strlen(e) + 1);
strcpy(d.b, e);


This method is used when only a few records need to be stored. When the size of a structure is large, it becomes expensive to pass structures to functions and return them by value. To overcome this problem we can pass structure pointers to the functions, and the structures can then be accessed indirectly via the pointers. Given below is a small program illustrating the passing of structure pointers to functions and accessing them:

struct product
{
    int pno;
    float price;
};

void inputpno(struct product *pnoptr);
void outputpno(struct product *pnoptr);

int main(void)
{
    struct product item;
    printf("product details\n\n");
    inputpno(&item);
    outputpno(&item);
    return 0;
}

void outputpno(struct product *pnoptr)
{
    printf("product no. = %d, price = %5.2f\n",
           (*pnoptr).pno, (*pnoptr).price);
}

void inputpno(struct product *pnoptr)
{
    int x;
    float y;
    printf("product number: ");
    scanf("%d", &x);
    (*pnoptr).pno = x;
    printf("price of the product: ");
    scanf("%f", &y);
    (*pnoptr).price = y;
}


In this program code, the prototypes, function calls and definitions have been written to work with structure pointers. Dereferencing a structure pointer is very common in programs, so C provides the "->" (arrow) operator for accessing a member of a structure through a pointer. The two statements given below are equivalent:

pnoptr->pno;
(*pnoptr).pno;


Wednesday, March 9, 2011

How is data designed at architectural and component level?

Data Design at Architectural Level


Data design translates the data objects defined during the analysis model into data structures at the software component level and, when necessary, a database architecture at the application level.
Both small and large businesses contain a lot of data, with dozens of databases serving many applications. The aim is to extract useful information from this data environment, especially when the desired information is cross-functional.
Techniques like data mining are used to extract useful information from raw data. However, data mining is made difficult by some factors:
- Existence of multiple databases.
- Different structures.
- Degree of detail contained within the databases.
An alternative solution is the concept of data warehousing, which adds an additional layer to the data architecture. A data warehouse encompasses all the data used by a business: it is a large, independent database that serves the set of applications the business requires, and it forms a separate data environment.

Data Design at Component Level


It focuses on the representation of data structures that are directly accessed by one or more software components. The principles applicable to data design are:
- Systematic analysis principles applied to function and behavior should also be applied to data.
- All data structures and operations to be performed on each should be identified.
- A mechanism should be established for defining the content of each data object.
- Low-level data design decisions should be deferred until late in the design process.
- A library of data structures and operations that are applied to them should be developed.
- The representation of data structure should only be known to those modules that can directly use the data contained within the structure.
- Software design and programming language should support the specification and realization of abstract data types.


Tuesday, March 8, 2011

Software Architecture Design - why is it important?

The architecture is not the operational software; rather, it is a representation that enables a software engineer to analyze the effectiveness of the design in meeting its stated requirements, consider architectural alternatives at a stage when making design changes is still relatively easy, and reduce the risks associated with the construction of the software.

- Software architecture enables and shows communication between all parties interested in the development of a computer based system.
- Early design decisions that have a profound impact on the software engineering work are highlighted through the architecture.
- Architecture constitutes a relatively small, intellectually graspable model of how the system is structured and how its components work together.

The architectural design model and the architectural patterns contained within it are transferable. Architectural styles and patterns can be applied to the design of other systems and represent a set of abstractions that enable software engineers to describe architecture in predictable ways.

Software architecture considers two levels of the design pyramid: data design and architectural design. The software architecture of a program or computing system is the structure or structures of the system, which comprise the software components, the externally visible properties of those components, and the relationships among them.


Sunday, February 6, 2011

Control Structure Testing - Condition Testing, Data Flow Testing, Loop Testing

Control structure testing is a group of white-box testing methods.
CONDITION TESTING
- It is a test case design method.
- It works on the logical conditions contained in a program module.
- It involves testing of both relational expressions and arithmetic expressions.
- If a condition is incorrect, then at least one component of the condition is incorrect.
- Types of errors in condition testing are boolean operator errors, boolean variable errors, boolean parenthesis errors, relational operator errors, and arithmetic expression errors.
- Simple condition: a Boolean variable or relational expression, possibly preceded by a NOT operator.
- Compound condition: It is composed of two or more simple conditions, Boolean operators and parentheses.
- Boolean expression: a compound condition that contains no relational expressions.

DATA FLOW TESTING
- The data flow testing method is effective for error detection because it is based on the relationships between statements in the program according to the definitions and uses of variables.
- Test paths are selected according to the location of definitions and uses of variables in the program.
- It is unrealistic to assume that data flow testing will be used extensively when testing a large system. However, it can be used in a targeted fashion for areas of the software that are suspect.

LOOP TESTING
- Loop testing method concentrates on validity of the loop structures.
- Loops are fundamental to many algorithms and need thorough testing.
- Loops can be defined as simple, concatenated, nested, and unstructured.
- In simple loops, the test cases that can be applied are: skip the loop entirely; only one or two passes through the loop; m passes through the loop where m is less than n; and (n-1), n, and (n+1) passes through the loop, where n is the maximum number of allowed passes.
- In nested loops, start with the innermost loop and set all other loops to minimum values; conduct simple loop testing on the inner loop, then work outwards and continue until all loops are tested.
- In concatenated loops, if loops are independent, use simple loop testing. If dependent, treat as nested loops.
- In unstructured loops, redesign the class of loops.


Thursday, January 27, 2011

Introduction to Architecture Design - Content Architecture

The design process for identifying the subsystems making up a system and the
framework for sub-system control and communication is architectural design. Architectural design:
- is an early stage in the system design process.
- is conducted in parallel with other design activities.
- establishes a link among the goals established for the web application, its content, the users visiting it, and the navigation criteria.
- identifies system components and their communications.

Content architecture emphasizes how the content objects are structured for presentation and navigation, focusing on the overall hypermedia structure of the web application. It involves
- identifying links and relationships among content and documents.
- defining the structure of content.
- specifying consistent document requirements and attributes.

The design can choose from the following content structures:
- Linear Structures : a predictable sequence of interactions is common. The sequence of content presentation is predefined and linear in nature.
- Grid Structures : applied when the web application content can be organized categorically in two dimensions. This web application architecture is useful when highly regular content is encountered.
- Hierarchical Structures : it is the most common web application architecture. It is designed in a manner that enables flow of control horizontally, across vertical branches of the structure.
- Networked Structures : architectural components are designed so that they may pass control to virtually every other component in the system. It provides navigational flexibility but at the same time it can be a bit confusing to a user.
- Composite Structures : the overall architecture of the web application may be hierarchical, but part of the structure may exhibit linear characteristics, while another part of the architecture may be networked.


Friday, October 8, 2010

What are the limitations and tools used for white box testing ?

In white box testing, exhaustive testing of the code presents certain logistical problems: even for small programs, the number of possible logical paths can be very large. For example, consider a hundred-line C program which, after some basic data declarations, contains two nested loops executing 1 to 20 times each depending on some initial input, with four if-then-else constructs inside the interior loop. There are then approximately 10^14 logical paths to exercise in order to test the program exhaustively, which means that a magic test processor able to develop a single test case, execute it and evaluate the results in one millisecond would still require 3,170 years of continuous work. Such exhaustive testing is certainly impractical, and exhaustive WBT is impossible for large software systems. But that does not mean WBT should be considered impractical. Limited WBT, in which a limited number of important logical paths are selected and exercised and important data structures are probed for validity, is both practical and effective. It is suggested that white and black box testing techniques be coupled to provide an approach that validates the software interface and selectively ensures the correctness of the internal workings of the software.

Tools used for white box testing:
A few test automation tool vendors offer white box testing tools which:
- Provide run-time error and memory leak detection.
- Record the exact amount of time the application spends in any given block of code for the purpose of finding inefficient code bottlenecks.
- Pinpoint areas of application that have and have not been executed.

