
Wednesday, May 22, 2013

What are Address Binding, Dynamic Loading and Dynamic Linking?

In this article we shall discuss three interrelated concepts: address binding, dynamic loading, and dynamic linking.

1. Address Binding: 
- Computer memory uses two types of addresses: the physical address and the logical address. 
- The logical address is sometimes also referred to as the virtual address. 
- Address binding is the process of allocating a physical memory location to a logical address, i.e., associating the physical and logical addresses with each other. 
- This concept is an important part of memory management. 
- The operating system carries out address binding on behalf of the applications and programs that need access to memory. 
- A program cannot be executed without bringing it into main memory, and the instructions of the program have to be bound to the right address spaces in physical memory. 
- Address binding is simply the scheme for performing this job; it can be thought of as a form of address mapping. 
- Address binding can be carried out at any of the following times:
Ø  Compile time
Ø  Load time
Ø  Execution time

- In execution-time binding, whenever the program accesses memory, the logical address passes through a register called the relocation register (similar to a base register), and the offset is added to it to produce the physical address. 
- In load-time binding the same mapping is performed, but this register need not be consulted on every access: the addresses are mapped once, at the time the program is loaded into memory. 
- The drawback is that if the base address changes, the whole program has to be reloaded.

2. Dynamic Loading: 
- This mechanism is very useful for a program, as it lets the program do the following things:
Ø  Load a library into main memory.
Ø  Retrieve the addresses of the variables and routines contained in the library.
Ø  Access those variables and execute those routines.
Ø  Unload the library.
- Dynamic loading is very different from load-time linking and static linking. 
- Dynamic loading allows a system to start up even if some libraries are absent; the missing libraries can be discovered later and the additional functionality gained then. 
- Dynamic loading is a largely transparent process, since it is the operating system that handles it. 
- Its main advantages are, firstly, that patches can be applied at once without the need for re-linking, and secondly, that it protects the libraries against unauthorized modification. 
- Dynamic loading finds its major use in the implementation of software plugins.
- It is also used in programs where the requisite functionality is supplied by different libraries and the user has the freedom to select which libraries he/she wishes to provide.

3. Dynamic Linking: 
- This is an important part of the binding process. 
- The purpose of dynamic linking is to resolve references or symbols and link them to the library modules. 
- This process is carried out by a linker program, which searches a set of library modules in some given sequence. 
- The process takes place during the creation of the executable file. 
- The resolved references may be the addresses of jump calls and routines; these may lie in different modules or in the main program.
- Dynamic linking resolves them into relocatable or fixed addresses by allocating memory to each memory segment of the referenced module. 


Tuesday, May 15, 2012

How does a definition use association play a role in data flow testing?


Definition use association is one of the terms that appears on the scene of data flow testing, and quite a few of us are unaware of it. This article is all about the concept of definition use associations and the role they play in data flow testing. 
Definition use associations form quite an important part of data flow testing. Let us see how! 
First we are going to discuss some concepts of data flow testing in regard to definition use associations, and then we will discuss the role the definition use associations play in data flow testing. 

About Data Flow Testing


- A control flow graph is an important tool used by data flow testing to explore anomalies related to the data. 
- A proper path selection strategy is required for the detection of such anomalies. 
- The path strategy to be used can be decided on the basis of the data flow anomalies discovered earlier. 
- Data flow testing is nothing but a family of path testing strategies applied over the control flow of a software program or application. 
- The path testing is required so that the sequence of possible events associated with an object's status can be explored.
- It is necessary to keep the number of paths sufficient and sensible, so that no object is left uninitialized, and none goes unused at least once in its lifetime, without carrying out unnecessary testing. 
Data flow testing comprises two types of anomaly detection, namely:
1. Static analysis: carried out on the program code without actually executing it. It involves finding syntax errors.
2. Dynamic analysis: carried out on a program while it is executing. It involves finding logical errors.
- The data objects have been categorized into several categories so as to make data flow testing easier:
  1. Defined, created, initialized (d)
  2. Killed, undefined, released (k)
  3. Used in calculations (c)
  4. Used in predicates (p)

Anomalies discovered by static analysis are meant to be handled by the compiler itself. But static analysis and dynamic analysis do not suffice altogether; rigorous path testing is also required. 

About Definition Use Associations


- The definition use associations, or "du segments", are path segments whose last link contains a use of a variable x and which are simple and definition-clear. 
- Typically a definition use association is a triple (x, d, u) where:
  1. x is the variable
  2. d is the node containing a definition of variable x
  3. u is either a predicate node or a statement node, depending on the case, and contains a use of x.
- The flow graph also includes a sub-path from d to u, with no definition of variable x occurring between d and u. 
- Below are some examples of def use associations:
  1. (x, 3, 4)
  2. (x, 1, 4)
  3. (y, 2, (4, t))
  4. (z, 2, (3, t)) etc.
- Some of the most common data flow testing strategies are:
  1. All uses (AU)
  2. All DU paths (ADUP) and many more.
- The first advice for effective data flow testing would be to resolve all the data flow anomalies discovered above. 
- Keeping data flow operations on the same variable within the same routine can also reap good results. 
- It is advisable to use defined types and strong typing wherever possible in the program. 


Wednesday, October 5, 2011

Some details about Pointers to Arrays in C

A pointer is a variable that holds a memory address, usually of another variable in memory. Pointers are one of the most powerful features of the C language, and their correct understanding and use is critical to successful programming in C. Pointers support C's dynamic memory allocation routines. They provide the means through which the memory location of a variable can be directly accessed and hence manipulated in the required way. Lastly, pointers can improve the efficiency of certain routines. Arrays and pointers are closely linked: C treats the name of an array as if it were a pointer.
Consider the following code snippet:


int *a ;       // a is a pointer to an integer
int age [10] ; // age is an array holding ten integers
a = age ;      // makes a point to where age points; the array name acts as a pointer to age [0]
.
.
.


In the above code, a is a pointer and age is an array holding 10 integers. The pointer a is made to point where age is pointing. Since the name of an array is a pointer to its first element, the array name + 1 gives the address of the second element of the array, the array name + 2 gives the address of the third element, and so forth.

Pointers also may be arrayed like any other data type. To declare an array holding 10 integer pointers, the declaration would be as follows:

int *ip [10] ; // array of 10 int pointers

After this declaration, contiguous memory would be allocated for 10 pointers that can point to integers. Now each of the pointers, the elements of pointer array, may be initialized. We can use the following statement:

ip [3] = &a ;

To find the value of a, you can use the statement given below:
*ip [3] ;

The name of an array is actually a pointer to the first element of the array, and the same holds true for an array of pointers. Most often, an operation is carried out on successive elements of an array, using a loop over the array indices. Consider the following code fragment that initializes an array to 0:


const int abc = 20 ;
int arr [ abc ] ;
for ( int i = 0 ; i < abc ; i++ )
    arr [ i ] = 0 ;


To execute the above code snippet, the compiler computes the address of arr [ i ] every time by multiplying the index i by the size of an array element. That is, the compiler performs a multiplication for each element. A faster alternative would be to use a pointer, as shown below:


const int abc = 20 ;
int arr [ abc ] ;
int * arr2 ;
for ( arr2 = arr ; arr2 < &arr [ abc ] ; arr2++ )
    *arr2 = 0 ;


Now the compiler only needs to evaluate a subscript once, when setting up the loop, and so saves the per-element multiplications. It is therefore faster to use an element pointer than an index when you need to scan arrays in a program. Pointers in C are defined by their data type and value: the data type determines by how much the pointer value is incremented or decremented, and the value is the address of the memory location to which the pointer is pointing. Note that when passing a multi-dimensional array to a function using array notation, all dimensions except the first must still be specified.


Monday, June 6, 2011

What are the different testing mechanisms used to test the software?

After deciding the inputs and outputs for the test cases, another issue that comes into the picture is writing the code that actually tests the software. The following mechanisms can be used to write such code:

- TEST DRIVERS
The lower-level modules can be tested by using a test driver program. The approach used is to write a program that passes input data to the unit under test and compares the output to the truth. Input is selected via uniform testing, Monte Carlo testing, selected input conditions, or from manufactured data. Output is compared against trusted results using an inverse function or a file containing correct data.

- WHITE BOX TESTING
Test drivers can also be used to test several modules at once to save time if you are doing white box testing. White box testing takes advantage of the internal workings of the module under test. This approach saves time, but it has a disadvantage: when testing several things together, a right answer indicates that everything is right, yet if you do not get the right answer, you are not sure what went wrong.

- BLACK BOX TESTING
This testing does not depend on the internal workings of the module under test. It only depends on the inputs and outputs of the system.

- TEST STUBS
While test drivers are high-level routines that call lower-level subprograms, test stubs can be used to test the higher levels of a program. A stub is a simple routine that takes the place of a real routine. It may be a null procedure, or it may print a simple message. Test stubs need not be limited to fixed data or user-supplied data, and they need not be just input simulators: stubs can also display or record the data sent to them.

- TEST AND DEMO PROGRAMS
A test program usually does not involve much operator intervention. A demo program is a quick confidence check.


Tuesday, June 22, 2010

Testing Approaches: Top-down Approach versus Bottom-up Approach

In a large software project, it is impractical to integrate all the modules of the project together and test the software as a whole. It should be built and tested in stages. There are different testing approaches.
The test approach is the strategy that describes the test team's approach to testing the software, both overall and in each phase. It gives the team a better idea of how to plan and execute the testing phase with perfection.

Top-down Approach versus Bottom-up Approach


In the top-down approach, testing is done from the top of the hierarchy downwards. Dummy routines called stubs, which simulate a module, are introduced.
Advantages:
• Easy to visualize functionality.
• Sense of completeness in the requirements.
• Easy to show the progress of development.
Disadvantages:
• A UI-driven approach, hence a high possibility of redundant business logic.
• Since a UI is readily available, developers tend not to write unit test cases.
• No concrete layer to rely on, as both the presentation and the business logic keep evolving.
• Lack of concrete test suites to ensure that one layer is tied up.

In the bottom-up approach, testing is done from the bottom of the hierarchy upwards. Dummy routines called drivers are introduced that invoke the module under test.
Advantages:
• Solid business logic, hence zero redundancy.
• Good unit test cases can be written to validate changes.
• With no UI available, the developer's only option is to use unit testing tools to test the logic.
• Easy to manage changes and modifications.
Disadvantages:
• Effort involved in writing test cases.
• Progress of implementation cannot be shown very effectively.

