- For manually created tests, AutoTest generates a test class that already contains the testing framework, so the user only needs to supply the code for the test itself.
- The second method of creating tests is based on failures of the application at run time; such a test is known as an 'extracted' test. Whenever an unexpected failure occurs while the system under test is running, AutoTest uses the information provided by the debugger to produce a new test case. This test reproduces the calls and the states that caused the system to fail. After the failure has been fixed, the extracted test is added to the complete suite as a safeguard against recurrence of the problem.
- The third method produces what are known as generated tests. The user provides the classes for which tests are required, plus any additional information AutoTest may need to control the generation of the tests. The tool then calls the routines of the target classes with randomized argument values. Whenever a class invariant or a postcondition is violated, a single new test is created that reproduces the failing call (a rough C analogue of this idea is sketched below).
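AutoTest itself works on Eiffel classes and their contracts, so the following is only a rough C analogue of the "generated tests" idea; every name here (clamp_non_negative, the postcondition, the seed) is a hypothetical illustration. The sketch calls a routine with randomized arguments and reports the first argument that violates the stated postcondition, which is exactly the call a generated test would then reproduce.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical routine under test; its intended postcondition is
   result >= 0 and result >= x. The bug below breaks it for x <= 0. */
static int clamp_non_negative(int x)
{
    return (x > 0) ? x : -1;   /* bug: should return 0 for x <= 0 */
}

int main(void)
{
    srand(42);                                  /* fixed seed for repeatability */
    for (int i = 0; i < 1000; i++) {
        int x = (rand() % 200) - 100;           /* randomized argument          */
        int result = clamp_non_negative(x);
        if (!(result >= 0 && result >= x)) {    /* postcondition check          */
            printf("postcondition violated for x = %d; keep this call as a new test\n", x);
            return 1;
        }
    }
    printf("no contract violation found\n");
    return 0;
}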
Monday, February 4, 2013
How are unit and integration testing done in EiffelStudio?
Thursday, January 31, 2013
Explain EiffelStudio? What technology is used by EiffelStudio?
- Compiler
- Interpreter
- Debugger
- Browser
- Metrics tool
- Profiler
- Diagram tool
- Windows
- Linux
- Mac OS
- VMS and
- Solaris
Technology behind EiffelStudio
History of EiffelStudio
- Version 5.0 (July 2001): the first proper version; it saw the integration of the EiffelCase tool with EiffelBench as its diagram tool.
- Version 5.1 (December 2001): support for .NET applications; also called Eiffel#.
- Version 5.2 (November 2002): the debugging capabilities were extended, an improved mechanism for C++ and C was introduced, and EiffelBuild, roundtripping abilities, etc. were added.
Wednesday, January 30, 2013
Give an overview of the Diagram Tool of EiffelStudio?
- Compiler
- Interpreter
- Debugger
- Browser
- Metrics tool
- Profiler
- Diagram tool
- Forward engineering process: in this process the tool is used as a design tool that produces the software from graphical descriptions.
- Reverse engineering process: in this process the tool automatically produces graphical representations of program texts that already exist.
- BON (Business Object Notation) and
- UML (Unified Modeling Language)
- Bubble or ellipse for a class
- Dot for a breakpoint
- Cross for a feature, and so on.
Friday, October 26, 2012
How to run the silk scripts?
- Silk Test Classic: this client of Silk Test uses a domain-specific language called "4Test" for scripting the test automation scripts. Like C++, 4Test is an object-oriented language and makes use of object-oriented concepts.
- Silk4J: this client of Silk Test enables test automation using Java as the scripting language in Eclipse.
- Silk4NET: this client of Silk Test also enables test automation, using VBScript or sometimes C# as the scripting language in Visual Studio.
- Silk Test Workbench: this client of Silk Test enables testers to carry out automation testing using VB.NET as the scripting language, as well as at a visual level.
What kind of files does a Silk script consist of?
- Include file and
- Script file
- Window names
- Window objects
- Variables
- Constants
- Structures
- Classes and so on.
- For the declaration of objects, and
- For the creation of scripts using the declaration file.
Steps for running Test Scripts
- Launch the silk test automation tool.
- Open the script file of the script to be executed.
- Compile the script using the compile option from the menu bar.
- Once compilation of the file is complete, the status of the script can be checked from the progress indicator.
- Any errors present in the script will be displayed at the end of the compilation process.
- Now run the silk scripts by clicking the run option in the menu bar.
- For running the test scripts you have two options: either run all of them in a single stretch or run them selectively.
- If you opt for the latter, you need to specify which tests are to be executed.
- After selection you can again give the run command.
Thursday, March 22, 2012
Loop testing is a white box testing technique - Explain?
Loop testing is one of the white box testing techniques and therefore requires deep knowledge of the software system or application under test. The loop testing methodology is designed exclusively to check the validity of iterative constructs, which are nothing but loops.
Types of Loop Constructs
These loop constructs are of four types, as mentioned below:
1. Unstructured loops
2. Simple loops
3. Nested loops and
4. Concatenated loops
Tests applied to different Loop Constructs
Now we shall define some of the tests that can be applied, in the context of loop testing, to the types of loop constructs mentioned above:
1. For unstructured loops only one option is possible: they should be redesigned into structured constructs and can then be tested accordingly.
2. For simple loops, the maximum number of allowable passes (n) through the loop is specified first and then the following tests are applied (a runnable sketch of these tests follows the list below):
(a) Skipping of the entire loop.
(b) Making only one pass through the loop.
(c) Making two passes through the loop.
(d) Making m passes through the loop, where m is less than n (the maximum number of allowable passes).
(e) Making n-1, n, and n+1 passes through the loop.
3. For nested loops the testing approach of simple loops is simply extended, but the number of test cases increases geometrically with the number of nested loops and the level of nesting. Usually the following steps are followed:
(a) The inner most loop is the starting point for the testing.
(b) All other loops are set to minimum possible values.
(c) Simple loop tests are conducted for the inner most loop and the outer loops or the nesting loops are kept in their minimum values only till the testing of the inner most loop is complete.
(d) More tests are added for out-of-range or excluded values.
(e) Once the testing of the innermost loop is complete, the testing moves outwards to the next loop: the already tested inner loops are set to typical values while the remaining outer loops are kept at their minimum values.
(f) Testing continues in this manner until all the loops have been tested.
4. For concatenated loops the approach defined for simple loops can also be used, but only if the loops are independent of each other. If, for example, the loop counter of the first loop is used as the initial (executing) value for the second loop, then the two loops are dependent on each other, the simple loop approach cannot be followed, and they should be treated as nested loops.
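As a concrete illustration of the simple loop tests in point 2, here is a minimal sketch in C. The routine sum_first() and the chosen array are hypothetical; the comments map each call to one of the boundary tests listed above.

#include <assert.h>

/* Hypothetical routine with a simple loop: sums the first n elements of a[].
   'n' controls the number of passes through the loop.                       */
static int sum_first(const int a[], int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}

int main(void)
{
    int a[5] = {1, 2, 3, 4, 5};     /* maximum number of allowable passes n = 5 */

    assert(sum_first(a, 0) == 0);   /* skip the loop entirely                   */
    assert(sum_first(a, 1) == 1);   /* one pass                                 */
    assert(sum_first(a, 2) == 3);   /* two passes                               */
    assert(sum_first(a, 4) == 10);  /* n - 1 passes                             */
    assert(sum_first(a, 5) == 15);  /* n passes (the maximum)                   */
    /* n + 1 passes would read past the array, so here that case is a design
       review check rather than an executable test.                            */
    return 0;
}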
More about Loop Testing
- It has been observed many times that most semantic bugs reside in loops.
- Path testing also becomes difficult to carry out, since so many paths are generated by a loop, and a defective loop leads to defective paths, which makes it even harder to track down the bug.
- Some testers believe that it is enough to test a loop only twice, but this is not a good practice.
- A loop should be tested at the following three instances:
a) At the entry of the loop
b) During the execution of the loop and
c) At the exit of the loop
- Loop testing aims at exercising a resource multiple times by executing it inside a loop, and this whole process is controlled by a diagnostic controller.
- However, one rule has been defined for loop testing: the user can interact only at the entry and exit of the loop, and nowhere in between.
Friday, December 16, 2011
What are different types of white box testing? Part 2
White-box testing, also known as clear box testing, transparent box testing, glass box testing, or structural testing, is a method for testing software applications or programs. It includes techniques used to test the program's algorithmic structures and internal working, as opposed to its functionality or the results of its black box tests.
White-box testing can also be defined as a methodology for verifying whether the source code of the software system works as expected. White box testing is a synonym for structural testing.
Unit testing and Integration testing is already discussed in previous post.
Types of White Box Testing
- Function level testing:
This type of white box testing is carried out to check the flow of control of the program. Adequate test cases are designed to check the control flow and its coverage. During function level white box testing, simple input values can be used.
- Acceptance level testing:
This type of white box testing is performed to determine whether all the specifications of a software system have been fulfilled. It may involve various other kinds of tests, such as physical, chemical, and performance tests.
- Regression level testing:
This type of white box testing can also be called retesting. It is done after modifications have been made to the software and hardware units. Regression level white box testing ensures that the modifications have not altered the working of the software and have not given rise to more bugs and errors.
- Beta level testing:
Beta testing is the phase of software testing in which a selected audience tries out the finished software application or product. It is also called pre-release testing.
Thursday, December 15, 2011
What are different types of white box testing? Part 1
White-box testing, also known as clear box testing, transparent box testing, glass box testing, or structural testing, is a method for testing software applications or programs. It includes techniques used to test the program's algorithmic structures and internal working, as opposed to its functionality or the results of its black box tests.
White-box testing can also be defined as a methodology for verifying whether the source code of the software system works as expected. White box testing is a synonym for structural testing.
White box testing can be applied only at certain levels. These levels are listed below:
- Unit level
- Integration level and
- System level
- Acceptance level
- Regression level
- Beta level
- Unit level testing:
This type of white box testing is used for testing the individual units or modules of a software system, and sometimes a group of modules. A unit is the smallest part of a program and cannot be divided further into smaller parts; units form the basic structure of a software system. Unit level white box testing is performed to check whether or not the unit works as expected, so that it can later be integrated with the other units of the system. It is important to test units at this level because after integration it becomes difficult to find errors. Often only the software engineer who wrote the code knows where the potential bugs can be found; others cannot easily track them down, so such flaws remain private to the writer. Unit level white box testing can find up to 65 percent of the total flaws. (A minimal sketch of a unit-level test is given at the end of this post.)
- Integration level testing:
In this type of white box testing the software components and the hardware components are integrated and the program is executed. This is done mainly to determine whether the software units and hardware units work together in harmony. It includes the design of test cases that check the interfaces between the two components.
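Here is a minimal sketch of what a unit-level white box test can look like in C, assuming a hypothetical unit absolute_value() and plain assert() as the test harness; the point is that the tester, knowing the internal branch, exercises both paths and the boundary between them.

#include <assert.h>

/* Hypothetical unit under test */
static int absolute_value(int x)
{
    return (x < 0) ? -x : x;
}

int main(void)
{
    /* The tester knows the internal branch on x < 0, so both paths are exercised. */
    assert(absolute_value(-3) == 3);   /* negative branch                 */
    assert(absolute_value( 7) == 7);   /* non-negative branch             */
    assert(absolute_value( 0) == 0);   /* boundary between the two paths  */
    return 0;
}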
Friday, November 25, 2011
What are different characteristics of visual testing?
Visual testing is a frequently used technique for testing software. But what is it actually?
Visual testing is categorized under non destructive testing, which includes several other techniques as well. As the name suggests, non destructive testing techniques do not involve vigorous checking of the internal structure of the software, and neither does visual testing, or "VT" as it is commonly abbreviated.
The name itself suggests that visual testing has everything to do with visual examination of the program or the source code. Anything that is to be tested is first examined visually, and the other operations are carried out later. The same procedure is used for software systems and programs: they are first checked visually and later tested with white box testing, black box testing, and so on. Even though visual testing sounds like an unsophisticated method of testing, it is quite effective.
Many errors and flaws in the source code and programs can be spotted during visual testing, and it is more effective when a large number of professionals carry out the visual examination. Visual testing checks how sound the program or software application is before it is brought into use. It sounds very simple, but it requires quite a lot of knowledge. Even though it is a very primitive kind of testing, it has a lot of advantages.
Few have been listed below:
- Simplicity
Visual testing is very easy to carry out. One doesn’t require any complex techniques or software.
- Rapidity
Apart from being simple, it is faster than other kinds of testing techniques and requires no extra effort.
- Low cost
Visual testing is priced very low: you are charged only for hiring professionals to examine your software and nothing else. If it were tested by the writer of the code, there would be no cost at all.
- Minimal training
The individuals or professionals testing the software visually need only minimal training, just enough to spot big blunders in the program.
- Equipment requirement
It requires no special equipment.
Visual testing can be performed almost anytime. You can visually examine a program simultaneously while making some modifications or manipulating or executing the program.
In contrast to these advantages there are limitations to visual testing. They have been discussed below:
- Visual testing can detect only the mistakes that appear on the surface of the program. It cannot discover the discrepancies hidden in the internal structure of the program.
- The quality of visual testing also depends on the eyesight of the tester.
- The extent of visual testing depends on the fatigue of the person who is inspecting; due to prolonged testing, the examiner may start getting headaches and eye strain.
- Also, during visual examination there is a lot of distraction from the surrounding environment. It is impossible for a person to devote himself entirely to visual testing without paying attention to what is happening around him.
Visual testing holds good when it comes to checking the size or length of the program, to determine its completeness, to make sure the correct number of units or modules are there in the program, and to inspect the format of the program; basically to ensure that the presentation of the program is good.
Visual testing spots the big mistakes. They can be corrected at a very low level of testing and this in turn reduces the future workload. Requirements include:
- A vision test of the inspector
- Measurement of light with a light meter.
The inspector only needs to establish visual contact with the part of the program that is to be tested. Visual testing also gives an idea of how to make a program better.
Tuesday, November 22, 2011
What are different characteristics of white box testing?
White-box testing, also known as clear box testing, transparent box testing, glass box testing, or structural testing, is a method for testing software applications or programs.
White box testing includes techniques used to test the program's algorithmic structures and internal working, as opposed to its functionality or the results of its black box tests. White-box testing includes the design of test cases from an internal perspective of the software system.
Expert programming skills and knowledge of the internal structure of the program are needed to design the test cases, in short, to perform white box testing. The tester performing white box tests feeds certain specified input data to the code and checks whether the output is as expected. White box testing can be applied only at certain levels.
The levels have been given below in the list:
- Unit level
- Integration level and
- System level
- Acceptance level
- Regression level
- Beta level
Even though there is no problem in applying white box testing at all six levels, it is usually performed at the unit level, which is the most basic level of software testing.
White box testing is used to test paths within the source code, between systems and subsystems, and between different units during the integration of the software application.
White box testing can effectively reveal hidden errors and grave problems. However, it is incapable of detecting missing requirements and unimplemented parts of the given specifications. White box testing basically includes four kinds of basic and important testing, listed below:
- Data flow testing
- Control flow testing
- Path testing and
- Branch testing
In the field of penetration testing, white box testing can be defined as a methodology in which the attacker has total knowledge of the system under attack. So we can say that white box testing is based on the question "how does the system work?": it analyzes the flow of data, the flow of information, the flow of control, the coding practices, and the handling of errors and exceptions in the software system.
White box testing is done to check whether the system is working as intended, and it also validates the implemented source code for its control flow and design, for its security functionality, and for vulnerable parts of the program.
White box testing cannot be performed without access to the source code of the software system. It is recommended that white box testing be performed during the unit testing phase.
White box testing requires the knowledge of insecurities and vulnerabilities and strengths of a program.
- The first step of white box testing is analyzing and comprehending the software documentation, the software artifacts, and the source code.
- The second step of white box testing requires the tester to think like an attacker, i.e., to consider in what ways he or she could exploit and damage the software system.
- In the third step, the white box testing techniques are implemented.
These three steps need to be carried out in harmony with each other; otherwise, the white box testing will not be successful.
White box testing is used to verify the source code. Carrying it out requires full knowledge of the logic and structure of the code of the system software. Using white box testing one can develop test cases that exercise logical decisions, cover paths through a unit, operate loops as specified, and check the validity of the internal structure of the software system.
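As a small illustration of branch testing, one of the four techniques listed above, here is a hedged C sketch; the function classify() and its discount rules are purely hypothetical. Branch coverage requires each decision to evaluate to both true and false at least once.

#include <assert.h>

/* Hypothetical function with two decisions */
static int classify(int age, int member)
{
    int discount = 0;
    if (age >= 65)          /* decision 1 */
        discount = 20;
    if (member)             /* decision 2 */
        discount += 10;
    return discount;
}

int main(void)
{
    assert(classify(70, 1) == 30);  /* decision 1 true,  decision 2 true  */
    assert(classify(30, 0) == 0);   /* decision 1 false, decision 2 false */
    return 0;
}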
Thursday, October 6, 2011
Some details about Pointers in C...
There was a need for a kind of variable that could store the address of another variable, so that the value of that variable could be accessed directly through its memory address and manipulated more easily and quickly. The "pointer" is the variable invented for this: it can be defined as a variable that stores the address of another variable. A pointer can be declared easily as shown below:
int *ptr1;
The "int" keyword tells the compiler that the pointer "ptr1" will store the memory address of an integer type variable. The symbol "*", called an asterisk, tells the compiler that the variable "ptr1" is actually a pointer and that it will store the address of the variable it points to, irrespective of the number of bytes required to store that memory address. The pointer "ptr1" is said to point to an integer variable. In the above declaration we did not provide ptr1 with a value, i.e., it is empty for now. If the declaration is made outside a function, the pointer "ptr1" is initialized to a value that does not point to any of the variables or objects in the C program. Pointers initialized in this manner are said to have a null value and are called "null" pointers. Null pointers are very useful in many C programs, as they help prevent system crashes. A null pointer is written using a macro called "NULL". If you set the value of a pointer using this macro through an assignment statement, as shown below:
ptr1 = NULL;
then the pointer is assured of having a null value, i.e., it has become a null pointer. A null pointer can be tested using the statement given below:
if (ptr1 == NULL)
Now suppose we want to store the address of an integer a in the pointer "ptr1" declared above; we will use the following statement:
ptr1 = &a;
Before proceeding further, we should know that a variable has two values attached to it: the "lvalue" and the "rvalue". The lvalue is the address at which the variable is stored, and the rvalue is the value stored there. The function of the "&" operator is to retrieve the lvalue (address) of the variable a, and the assignment operator copies that address into the pointer "ptr1". The pointer "ptr1" is now said to point to the variable "a".
The “*” operator is also called the dereferencing operator and is used for de-referencing as follows:
*ptr1 = 10;
The above statement copies the value 10 to the address of the variable a pointed to by the pointer "ptr1". The value stored there can then be printed as shown below:
printf("%d\n", *ptr1);
The above statement prints the value stored at the address held by the pointer on the screen as output.
Pointers are essential in today's programs. They basically solve two problems, while avoiding many other problems that may follow. First, they make it easy to share information and data between different sections of memory. Secondly, they solve the problem of building complex structures: they make it easy to have linked data structures, namely linked lists, queues, and also binary trees. Pointers reduce the complexity of a program, and there are many things that one can do only with pointers. Pointers are without doubt a powerful C construct.
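Putting the statements above together, here is a minimal, compilable sketch of the whole discussion; the variable names match the ones used in the text.

#include <stdio.h>

int main(void)
{
    int a = 0;
    int *ptr1 = NULL;          /* ptr1 starts out as a null pointer        */

    if (ptr1 == NULL)
        printf("ptr1 does not point to anything yet\n");

    ptr1 = &a;                 /* ptr1 now stores the address of a         */
    *ptr1 = 10;                /* dereferencing: writes 10 into a          */

    printf("%d\n", *ptr1);     /* prints 10, the value stored in a         */
    return 0;
}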
Wednesday, October 5, 2011
Some details about Pointers to Arrays in C
A pointer is a variable that holds a memory address, usually that of another variable in memory. Pointers are one of the most powerful and strongest features of the C language, and the correct understanding and use of pointers is critical to successful programming in C. Pointers support C's dynamic memory allocation routines, they provide the means through which the memory location of a variable can be directly accessed and manipulated in the required way, and, lastly, they can improve the efficiency of certain routines. Arrays and pointers are closely linked: C treats the name of an array as if it were a pointer.
Consider the following code snippet:
int *a;               /* a is a pointer to an integer                          */
int age[10];          /* age is an array holding ten integers                  */

for (int i = 0; i < 10; i++)
    age[i] = i;       /* fill the array                                        */

a = age;              /* makes a point to the location where age points to;
                         the array name age acts as a pointer to age[0]        */
...
In the above code a is a pointer and age is an array holding 10 integers. The pointer a is made to point where age is pointing to. Since the name of an array is a pointer to its first element, the array name + 1 gives the address of the second element of the array, array name + 2 gives the address of the 3rd element, and so forth.
Pointers also may be arrayed like any other data type. To declare an array holding 10 integer pointers, the declaration would be as follows:
int *ip[10];          /* array of 10 pointers to int */
After this declaration, contiguous memory is allocated for 10 pointers that can point to integers. Now each of the pointers, i.e., each element of the pointer array, may be initialized. We can use the following statement:
ip[3] = &a;
To obtain the value of a, you can use the expression given below:
*ip[3];
The name of an array is actually a pointer to the first element of the array, and the same holds true for an array of pointers. Most often, an operation is carried out on successive elements of an array, typically using a loop and the element indices. Consider the following code fragment, which initializes an array to 0:
const int abc = 20;
int arr[abc];

for (int i = 0; i < abc; i++)
    arr[i] = 0;

To execute the above code snippet, the compiler computes the address of arr[i] every time by multiplying the index i by the size of an array element; that is, the compiler performs a multiplication for each element. A faster alternative is to use a pointer, as shown below:

const int abc = 20;
int arr[abc];
int *arr2;

for (arr2 = arr; arr2 < &arr[abc]; arr2++)
    *arr2 = 0;

Now the compiler needs to evaluate a subscript only once, when setting up the loop, and so saves 19 multiplication operations. So it is faster to use an element pointer than an index when you need to scan arrays in a program. Pointers in C are defined by their data type and their value: the data type determines the increment or decrement of the pointer value, and the value is the address of the memory location to which the pointer is pointing. If you are using array notation, you do not need to pass the dimensions.
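The fragments above can be combined into one compilable sketch; the only addition is a printf() that demonstrates the array-name-plus-offset arithmetic mentioned earlier.

#include <stdio.h>

#define ABC 20

int main(void)
{
    int arr[ABC];
    int *arr2;

    /* pointer scan: the address is incremented instead of recomputing
       arr[i] from the index on every iteration                         */
    for (arr2 = arr; arr2 < &arr[ABC]; arr2++)
        *arr2 = 0;

    /* arr + 1 is the address of the second element, arr + 2 the third  */
    printf("%p %p %p\n", (void *)arr, (void *)(arr + 1), (void *)(arr + 2));
    return 0;
}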
Thursday, September 15, 2011
Some details about Pointers and Structures in C...
A pointer can be defined as a variable pointing to another variable, i.e., a variable storing the location or address of another variable. Pointers can be used to point to any data type: there are object pointers, function pointers, structure pointers, and so on. Like any other pointers, structure pointers are a very useful tool in C programming. Structure pointers are easy to declare, and are declared like any other kind of pointer by putting a "*" sign before the name of the pointer variable. See the example below:
struct address *a1, *a2;
a1 = &b2;
a2 = &b3;
where b2 and b3 are actual variables of type struct address. Using the assignment statement given below, you can copy the contents of the structure pointed to by a2 into the structure pointed to by a1:
*a1 = *a2;
Any member of a structure can be accessed using a pointer in the following way:
A->b
Here A is a pointer pointing to a structure and b is a data member of the structure being pointed to. There is another way to access a member of a structure, which is given below:
(*A).b;
Both statements are equivalent. The use of structure pointers is very common and useful, but be as careful while dealing with structure pointers as with any other pointer: the precedence of operators matters a lot while programming in C. If we do not enclose *A in parentheses, an error is generated and the code will not compile, since the "." operator has higher precedence than the "*" operator. Writing so many parentheses can be quite tedious, so C allows us to use a shorthand notation: "(*A)." can also be written as given below:
A ->
This is equivalent to (*A). but takes fewer characters. We can also create pointers to structure arrays. A lot of space can be saved if we declare an array of pointers instead of an array of structures: only one structure is created at a time, and values are subsequently entered and disposed of. Structures themselves can even contain pointers, as shown below:
/* requires <stdio.h>, <stdlib.h>, <string.h> */
typedef struct
{
    char a[10];
    char *b;
} c;

c d;
char e[10];

fgets(e, sizeof e, stdin);                 /* read a line into e (safer than gets) */
d.b = (char *) malloc(strlen(e) + 1);      /* allocate room for a copy of e        */
strcpy(d.b, e);
This method is used when only a few records are required to be stored. When the size of a structure is large, it becomes difficult and expensive to pass and return structures to and from functions. To overcome this problem we can pass structure pointers to the functions, and the structures can then be accessed indirectly via the pointers. Given below is a small program that illustrates the passing of structure pointers to functions and accessing them:
#include <stdio.h>

struct product
{
    int pno;
    float price;
};

void inputpno(struct product *pnoptr);
void outputpno(struct product *pnoptr);

int main(void)
{
    struct product item;

    printf("product details\n\n");
    inputpno(&item);
    outputpno(&item);
    return 0;
}

void outputpno(struct product *pnoptr)
{
    printf("product no. = %d, price = %5.2f\n", (*pnoptr).pno, (*pnoptr).price);
}

void inputpno(struct product *pnoptr)
{
    int x;
    float y;

    printf("product number: ");
    scanf("%d", &x);
    (*pnoptr).pno = x;
    printf("price of the product: ");
    scanf("%f", &y);
    (*pnoptr).price = y;
}
In this program code, the prototypes, function calls, and definitions have been written to work with structure pointers. Dereferencing a structure pointer is very common in programs, and "->" (the arrow) is an operator provided by C for accessing a member of a structure through a pointer. The two statements given below are equivalent:
pnoptr->pno;
(*pnoptr).pno;
Wednesday, March 9, 2011
How is data designed at architectural and component level?
Data Design at Architectural Level
Data design translates the data objects defined in the analysis model into data structures at the software component level and, when necessary, into a database architecture at the application level.
Both small and large businesses contain a lot of data, often spread across dozens of databases serving many applications. The aim is to extract useful information from this data environment, especially when the desired information is cross-functional.
Techniques like data mining are used to extract useful information from raw data. However, data mining becomes difficult because of some factors:
- Existence of multiple databases.
- Different structures.
- Degree of detail contained within the databases.
An alternative solution is the concept of data warehousing, which adds an additional layer to the data architecture. A data warehouse encompasses all the data used by a business: it is a large, independent database that serves the set of applications required by the business, and it is a separate data environment.
Data Design at Component Level
It focuses on the representation of data structures that are directly accessed by one or more software components. The set of principles applicable to data design is:
- Systematic analysis principles applied to function and behavior should also be applied to data.
- All data structures and operations to be performed on each should be identified.
- A mechanism for defining the content of each data object should be established.
- Low level data design decisions should be deferred until late in design process.
- A library of data structures and operations that are applied to them should be developed.
- The representation of a data structure should only be known to those modules that directly use the data contained within the structure (a small C sketch of this principle follows the list).
- Software design and programming language should support the specification and realization of abstract data types.
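The last two principles can be sketched in C with an opaque type: the representation lives in one module, while other modules see only the abstract interface. All names here (stack_t, stack_create, stack_push) are hypothetical illustrations, not from the text.

/* stack.h -- what other modules see: an abstract data type              */
typedef struct stack stack_t;          /* opaque: no fields visible here */
stack_t *stack_create(void);
void     stack_push(stack_t *s, int value);

/* stack.c -- the only module that knows the representation              */
#include <stdlib.h>

struct stack {
    int items[64];
    int top;
};

stack_t *stack_create(void)
{
    return calloc(1, sizeof(struct stack));
}

void stack_push(stack_t *s, int value)
{
    if (s != NULL && s->top < 64)
        s->items[s->top++] = value;
}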
Tuesday, March 8, 2011
Software Architecture Design - why is it important?
The architecture is not the operational software; rather, it is a representation that enables a software engineer to analyze the effectiveness of the design in meeting its stated requirements, to consider architectural alternatives at a stage when making design changes is still relatively easy, and to reduce the risks associated with the construction of the software.
- Software architecture enables and shows communication between all parties interested in the development of a computer based system.
- Early design decisions that have a profound impact on the software engineering work that follows are highlighted by the architecture.
- Architecture constitutes a relatively small, intellectually graspable model of how the system is structured and how its components work together.
The architectural design model and the architectural patterns contained within it are transferable. Architectural styles and patterns can be applied to the design of other systems and represent a set of abstractions that enable software engineers to describe architecture in predictable ways.
Software architecture considers two levels of the design pyramid: data design and architectural design. The software architecture of a program or computing system is the structure or structures of the system, which comprise the software components, the externally visible properties of those components, and the relationships among them.
Sunday, February 6, 2011
Control Structure Testing - Condition Testing, Data Flow Testing, Loop Testing
Control structure testing is a group of white-box testing methods.
CONDITION TESTING
- It is a test case design method.
- It works on the logical conditions contained in a program module.
- It involves testing of both relational expressions and arithmetic expressions.
- If a condition is incorrect, then at least one component of the condition is incorrect.
- Types of errors in condition testing are boolean operator errors, boolean variable errors, boolean parenthesis errors, relational operator errors, and arithmetic expression errors.
- Simple condition: a Boolean variable or a relational expression, possibly preceded by a NOT operator.
- Compound condition: composed of two or more simple conditions, Boolean operators, and parentheses.
- Boolean expression: a condition without relational expressions.
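Here is a small C sketch of condition testing on a compound condition; the function in_range() and its bounds are hypothetical. Each simple condition inside the compound condition is driven to both true and false, which is what exposes Boolean and relational operator errors.

#include <assert.h>

/* Hypothetical function with the compound condition (a > 0 && b <= max) */
static int in_range(int a, int b, int max)
{
    return (a > 0) && (b <= max);
}

int main(void)
{
    assert(in_range( 1,  5, 10) == 1);  /* both simple conditions true  */
    assert(in_range(-1,  5, 10) == 0);  /* first simple condition false */
    assert(in_range( 1, 50, 10) == 0);  /* second simple condition false */
    /* A relational operator error (e.g. '<' written instead of '<=')
       would be exposed by also testing the boundary:                   */
    assert(in_range( 1, 10, 10) == 1);
    return 0;
}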
DATA FLOW TESTING
- The data flow testing method is effective for error detection because it is based on the relationships between the statements in a program according to the definitions and uses of variables.
- Test paths are selected according to the location of definitions and uses of variables in the program.
- It is unrealistic to assume that data flow testing will be used extensively when testing a large system. However, it can be used in a targeted fashion for areas of the software that are suspect.
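A tiny C illustration of the definition-use idea behind data flow testing; the function scale() and its flag are hypothetical. Test paths are chosen so that every definition of y reaches its use.

#include <assert.h>

/* y is defined twice (d1, d2) and used once; data flow testing selects one
   test per definition-use path.                                            */
static int scale(int x, int flag)
{
    int y = 0;          /* definition d1 of y */
    if (flag)
        y = x * 2;      /* definition d2 of y */
    return y + 1;       /* use of y           */
}

int main(void)
{
    assert(scale(5, 0) == 1);   /* exercises the du-path d1 -> use */
    assert(scale(5, 1) == 11);  /* exercises the du-path d2 -> use */
    return 0;
}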
LOOP TESTING
- The loop testing method concentrates on the validity of loop constructs.
- Loops are fundamental to many algorithms and need thorough testing.
- Loops can be defined as simple, concatenated, nested, and unstructured.
- In simple loops, the test cases that can be applied are: skip the loop entirely; only one or two passes through the loop; m passes through the loop, where m is less than n; and (n-1), n, and (n+1) passes through the loop, where n is the maximum number of allowed passes.
- In nested loops, start with inner loop, set all other loops to minimum values, conduct simple loop testing on inner loop, work outwards and continue until all loops tested.
- In concatenated loops, if loops are independent, use simple loop testing. If dependent, treat as nested loops.
- Unstructured loops should be redesigned into structured constructs and then tested.
Thursday, January 27, 2011
Introduction to Architecture Design - Content Architecture
Architectural design is the design process for identifying the subsystems that make up a system and the framework for subsystem control and communication. An architectural design:
- is an early stage in the system design process.
- is conducted in parallel with other design activities.
- establishes a link among the goals established for the web application, its content, the users visiting it, and the navigation criteria.
- identifies the system components and their communications.
Content architecture emphasizes how the content objects are structured for presentation and navigation. It focuses on the overall hypermedia structure of the web application, in particular on:
- identifying links and relationships among content and documents.
- defining the structure of content.
- specifying consistent document requirements and attributes.
The designer can choose from four basic content structures, which can also be combined into composite structures:
- Linear Structures : used when a predictable sequence of interactions is common. The sequence of content presentation is predefined and linear in nature.
- Grid Structures : applied when the web application content can be organized categorically in two dimensions. This web application architecture is useful when highly regular content is encountered.
- Hierarchical Structures : it is the most common web application architecture. It is designed in a manner that enables flow of control horizontally, across vertical branches of the structure.
- Networked Structures : architectural components are designed so that they may pass control to virtually every other component in the system. It provides navigational flexibility but at the same time it can be a bit confusing to a user.
- Composite Structures : the overall architecture of the web application may be hierarchical, but part of the structure may exhibit linear characteristics, while another part may be networked.
Friday, October 8, 2010
What are the limitations and tools used for white box testing ?
In white box testing, exhaustive testing of the code presents certain logistical problems: even for small programs, the number of possible logical paths can be very large. Consider, for example, a hundred-line C program that, after some basic data declarations, contains two nested loops executing 1 to 20 times each depending on some initial input, with four if-then-else constructs required inside the interior loop. There are then approximately 10^14 logical paths to be exercised to test the program exhaustively, which means that a magic test processor able to develop a single test case, execute it, and evaluate the results in one millisecond would require 3,170 years of continuous work, which is certainly impractical. Exhaustive WBT is impossible for large software systems. But that does not mean WBT should be considered impractical: limited WBT, in which a limited number of important logical paths are selected and exercised and important data structures are probed for validity, is practical. It is suggested that white and black box testing techniques can be coupled to provide an approach that validates the software interface and selectively ensures the correctness of the internal workings of the software.
Tools used for white box testing:
A few test automation tool vendors offer white box testing tools which:
- Provide run-time error and memory leak detection.
- Record the exact amount of time the application spends in any given block of code for the purpose of finding inefficient code bottlenecks.
- Pinpoint areas of application that have and have not been executed.