
Friday, September 13, 2013

What is Portability Testing?

- Portability Testing is the testing of a software/component/application to determine the ease with which it can be moved from one machine platform to another. 
- In other words, it is the process of verifying the extent to which software behaves the same way on platforms other than the one it was developed on.  
- It can also be understood as the amount of work or effort needed to move software from one environment to another without making any changes or modifications to the source code; in the real world, a move with no changes at all is seldom possible.
For example, moving a computer application from a Windows XP environment to a Windows 7 environment, and measuring the effort and time required to make the move, determines whether the software can be reused with ease or not.

- Portability testing is also considered a part of system testing, since system testing covers the complete testing of the software, including its reusability across different computer environments such as different operating systems and web browsers.

What needs to be done before portability testing is performed (prerequisites/preconditions)? 
1.   Keep portability requirements in mind before designing and coding the software.
2.   Unit and integration testing must have been performed.
3.   The test environment has been set up.

Objectives of Portability Testing
  1. To validate the system partially, i.e., to determine whether the system under consideration fulfills the portability requirements and can be ported to environments with different:
a). RAM and disk space
b). Processor and processor speed
c). Screen resolution
d). Operating system and its version in use
e). Browser and its version in use
This includes ensuring that the look and feel of the web pages is similar and functional across the various browser types and their versions.

2.   To identify the causes of failures against the portability requirements; this in turn helps in identifying flaws that were not found during unit and integration testing.
3.   To report failures to the development teams so that the associated flaws can be fixed.
4.   To determine the extent to which the software is ready for launch.
5.   Help in providing project status metrics (e.g., percentage of use case paths that were successfully tested).
6.   To provide input to the defect trend analysis effort.
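As a concrete illustration, part of an automated portability check can be scripted before the full suite runs. The supported-platform set and version threshold below are invented for the example, not taken from any real project:

```python
import platform
import sys

# Environments the team has committed to supporting (illustrative list).
SUPPORTED_OS = {"Windows", "Linux", "Darwin"}
MIN_PYTHON = (3, 8)

def check_portability():
    """Return a list of portability problems found on this machine."""
    problems = []
    if platform.system() not in SUPPORTED_OS:
        problems.append("unsupported OS: " + platform.system())
    if sys.version_info[:2] < MIN_PYTHON:
        problems.append("interpreter too old: " + str(sys.version_info[:2]))
    return problems

print(check_portability())
```

A check like this only covers the environmental preconditions; the functional portability tests themselves still have to be run on each target platform.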



Monday, June 17, 2013

Explain the Round Robin CPU scheduling algorithm

There are a number of CPU scheduling algorithms, each with different properties that make it appropriate for different conditions. 
- Round robin, or RR, is a scheduling algorithm commonly used in time-sharing systems. 
- It is the most appropriate scheduling algorithm for time-sharing operating systems. 
- This algorithm shares many similarities with the FCFS (first come, first served) scheduling algorithm, but with one additional feature. 
- This feature is preemption, which forces a context switch between two processes. 
- In this algorithm a small unit of time is defined, termed the time slice or the time quantum. 
- These time slices or quanta typically range from 10 ms to 100 ms.
- The ready queue in round robin scheduling is implemented in a circular fashion. 

How to implement Round Robin CPU scheduling algorithm

Now we shall see about the implementation of the round robin scheduling:
  1. The ready queue is maintained as the FIFO (first in first out) queue of the processes.
  2. Addition of new processes is made at the rear end of the ready queue and selection of the process for execution by the processor is made at the front end.
  3. The process at the front of the ready queue is thus picked by the CPU scheduler. A timer is set that will interrupt the processor when the time slice elapses; when this happens, the running process is preempted.
  4. In some cases the CPU burst of a process may be shorter than the time slice. If this is the case, the process releases the CPU voluntarily. The scheduler then moves to the next process in the ready queue and fetches it for execution.
  5. In other cases the CPU burst of a process may be longer than the time slice. In this case the timer sends an interrupt to the processor; the process is preempted and put at the rear end of the ready queue, and the scheduler moves on to the next process in the queue.
The average waiting time under the round robin scheduling algorithm is often observed to be quite long. 
- In this algorithm, no process is allocated more than one time slice in a row under any conditions. 
- However, there is an exception to this if there is only one process to be executed.
- If a process exceeds its time slice, it is preempted and put back at the tail of the queue.
- Thus, this algorithm can also be called a preemptive algorithm. 
- The size of the time quantum greatly affects the performance of the round robin algorithm.
- If the time quantum is kept too large, the algorithm degenerates into FCFS. 
- On the other hand, if the quantum is too small, the RR approach is called the processor sharing approach. 
- An illusion is created in which every process seems to have its own processor running at a fraction of the speed of the actual processor. 
- Further, context switching affects the performance of the RR scheduling algorithm.
- A certain amount of time is spent in switching from one process to another. 
In this time the registers and the memory maps are loaded, a number of lists and tables are updated, the memory cache is flushed and reloaded, etc.
- The smaller the time quantum, the more often context switching occurs. 
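The implementation steps above can be sketched as a small simulation. The burst lengths and quantum in the example are classic textbook values, chosen only for illustration, and all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR for processes that all arrive at time 0.

    bursts: list of CPU burst lengths, indexed by process id.
    Returns a list of waiting times, one per process.
    """
    remaining = list(bursts)
    queue = deque(range(len(bursts)))   # FIFO ready queue
    time = 0
    finish = [0] * len(bursts)
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])  # burst may be shorter than the slice
        time += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            queue.append(pid)               # preempted: back to the tail
        else:
            finish[pid] = time              # finished within this slice
    # waiting time = completion time - burst time (arrival is 0)
    return [finish[p] - bursts[p] for p in range(len(bursts))]

print(round_robin([24, 3, 3], 4))  # [6, 4, 7]
```

With bursts of 24, 3 and 3 and a quantum of 4, the short processes finish early while the long one cycles through the queue, which is exactly the behavior described above.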


Wednesday, June 5, 2013

Explain the various techniques for Deadlock Prevention

Deadlocks are a nightmare for the programmers who design and write programs for multitasking or multiprocessing systems. For them it is very important to know how to design programs in such a way as to prevent deadlocks. 

Deadlocks are a more common problem in distributed systems, which make use of concurrency control and distributed transactions. The deadlocks that occur in these systems are termed distributed deadlocks. 

It is possible to detect them using either of the following means:
1. Building a global wait-for graph from local ones through a deadlock detector.
2. Using distributed algorithms such as edge chasing.
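The first approach reduces deadlock detection to finding a cycle in a wait-for graph, which can be sketched as a depth-first search. The process names here are illustrative:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {process: set of processes it waits on}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:   # back edge: a cycle, hence a deadlock
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

# P1 waits on P2, P2 on P3, P3 on P1: circular wait, so a deadlock.
print(has_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}))  # True
print(has_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": set()}))   # False
```

In a distributed system, the extra difficulty is assembling this graph from the local graphs of each node; the cycle check itself stays the same.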

- An atomic commitment protocol similar to two-phase commit can be used for automatically resolving distributed deadlocks. 
- In that case there is no need for any other resolution mechanism or a global wait-for graph. 
- But this is possible only in commitment-ordering-based distributed environments. 
- In environments that use two-phase locking, a similar automatic global deadlock resolution takes place.
- There is another class of deadlocks called phantom deadlocks. 
- These are deadlocks detected in the system because of internal delays, but which do not actually exist at detection time.
- Today, there exist a number of ways to increase parallelism where recursive locks might otherwise cause severe deadlocks. 
- But like everything else, this comes at a price.
- One has to accept either data corruption or a performance overhead, or both.
- Preemption with lock-reference counting and the wait-for graph (WFG) are some examples of this. 
- These can work either by allowing data corruption during preemption or by using versioning.
Apart from these, heuristic algorithms and algorithms that track all deadlock-causing cycles can be used for preventing deadlocks.
Even though these algorithms do not offer 100 percent parallelism, they prevent deadlocks at an acceptable trade-off of performance overhead versus parallelism. 

This example will make it clearer: 
- Consider two trains approaching each other at a crossing junction. 
Their collision can be prevented by some just-in-time prevention means. 
- One such means is a person at the crossing with a switch; pressing it allows only one of the trains onto the shared track, bypassing the other trains that are also waiting. 
- Locks come in the following two types:
  1. Recursive locks: such a lock can be acquired repeatedly by the thread that already holds it, while any other thread or process entering the lock has to wait for the holder to finish its task and release it.
  2. Non-recursive locks: here a thread can enter the lock only once. If the same thread tries to enter the lock again without unlocking it, a deadlock occurs.

- There are issues with both of these. 
- The former does not provide distributed deadlock prevention, and the latter makes no provision for deadlock prevention at all. 
- With recursive locks, if the number of threads trying to enter the lock equals the number of locked threads, one of the threads is designated the super thread, and only it is allowed to execute until completion. 
- After the execution of the super thread is complete, the lock reverts from the recursive condition; the super thread gives up its super status and notifies the locker that the condition has to be re-checked. 


Tuesday, June 4, 2013

Explain briefly Deadlock Avoidance and Detection?

Deadlocks are a serious issue that needs to be avoided since it can cause the whole system to hang or crash.

What is Deadlock Avoidance?


- Avoiding a deadlock is possible only if certain information regarding the processes is available to the operating system.
- This information has to be made available to the OS before the resources are allocated to the processes.
- It concerns the resources that each process will consume in its lifetime.
- For every resource request made by a process, the system checks for potential threats, i.e., whether granting the request will send the system into an unsafe state or not.
- If so, there is a possibility that the system could enter a deadlock.
- Therefore, only those requests are granted that keep the system in a safe state.
- It is important for the system to determine whether its next state will be safe or unsafe.
- There are 3 things that the operating system must know at any point before or during the execution of the processes:
1. The currently available resources.
2. The resources currently allocated to the processes.
3. The resources that will be requested and released in the future by these processes.

- It is possible that a system might be in an unsafe state and still not end up in a deadlock.
- The notion of safe and unsafe states refers to the system's ability to enter a deadlock.
An example will make it clearer:
- Consider a resource A requested by a process, where granting the request would make the system state unsafe.
- At the same time the process releases another resource, say B, preventing a circular wait on the resources.
- In such a situation, the system is said to be in an unsafe state, though not necessarily in a deadlock.
- Various algorithms have been designed for deadlock avoidance, and one such algorithm is the banker's algorithm.
- To use this algorithm, knowledge of each process's maximum resource usage is required in advance.
- For most systems it is impossible to know in advance what a process will request.
- This implies that deadlock avoidance of this kind is not possible there.
- There are two other algorithms for achieving this task, namely the wound/wait and wait/die algorithms.
- Each of them makes use of a symmetry-breaking technique.
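The banker's algorithm mentioned above rests on a safety check: given each process's declared maximum demand, is there some order in which every process can still run to completion? A minimal sketch follows; the allocation figures are a standard textbook-style example, chosen only for illustration:

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: can every process still run to completion?

    available: free units of each resource type.
    allocation[i]: units currently held by process i.
    maximum[i]: declared maximum demand of process i.
    """
    n = len(allocation)
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Process i can finish with what is free; reclaim its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# 5 processes, 3 resource types: this state is safe.
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe(available, allocation, maximum))  # True
```

A request is granted only if the state that would result still passes this check; otherwise the process is made to wait.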

What is Deadlock Detection?


- Under this approach, deadlocks are allowed to occur.
- The state of the system is then examined to confirm the occurrence of a deadlock, which is subsequently mended.
- Here, certain algorithms track the resource allocation activities along with the process states.
- These algorithms are then used for removing the detected deadlock.
- Deadlock detection is comparatively easy, since the OS scheduler knows which resources have been locked by which processes.
- Model checking is one of the techniques used for deadlock detection.
- In model checking, a finite state model is created, on which a progress analysis is carried out and all the terminal sets of the model are found.
- Each terminal set stands for a deadlock.
- Correction of the deadlock can be done by any of the below mentioned methods after the deadlock has been detected:
1. Process termination: This is about aborting one or more of the processes that cause the deadlock thus ensuring a certain and speedy removal of the deadlock. But this method might prove to be a little expensive because of the loss of the partial computations.
2. Resource preemption: This is about a successive preemption of the allocated resources until the breakdown of the deadlock.


Saturday, May 11, 2013

What is meant by Deadlock? List the necessary conditions for arising deadlocks?


Consider two competing processes or actions in a situation where both of them wait for each other to be done and so neither of them ever finish. Such a situation is called a deadlock. 
- When the number of competing processes is exactly two, then it is said to be a ‘deadly embrace’. 
- The two involved competing actions tend to move towards some sort of tragedy which might be mutual extinction or mutual death. 

"In operating systems a situation occurs where two threads or processes enter the waiting state at the same time because of the resource that they both want is being used by some other process that also in waiting state for some resource being held by another process in waiting state and so on". 

- A process is then unable to change its state, since the resources it requires are being used by other processes, which keeps the process in the waiting state indefinitely. 
- The system is now in a deadlock. 
- Systems such as the distributed systems, parallel computing systems, multi-processing systems face the problem of being in a deadlock quite often. 
- This is so because in these systems hardware and software locks are used for handling shared resources and implementing process synchronization. 
- Deadlocks may also occur in telecommunication systems because of the corrupt signals and their loss rather than resource contention. 
- A deadlock situation can be compared to problems such as that of the catch-22 or chicken or egg problem. 
- A deadlock can also occur in a circular chain pattern. 
For example, consider a computer having 3 processes and corresponding 3 CD drives i.e., one held by each process. 
- Now all the three processes would be in a deadlock if they all request another drive.

Conditions for a Deadlock to arise

There are certain conditions that should be there for a deadlock to arise:
  1. Mutual exclusion: there has to be at least one resource that cannot be shared, so that only one process can use it at any given time.
  2. Resource holding (or hold and wait): there should be at least one process holding a resource while requesting more resources that are being held by other processes.
  3. No preemption: once resources have been allocated, they cannot be de-allocated by the operating system; the process holding a resource must release it voluntarily.
  4. Circular wait: a circular chain of processes must be formed, as explained in the earlier example.
"These 4 conditions for deadlock are collectively called the ‘Coffman conditions’. If any of these conditions is not met, a deadlock can’t occur".

- Handling a deadlock is an important capability of the operating systems. 
However, there are many modern operating systems that still cannot avoid deadlocks. 
- On occurrence of a deadlock many non-standard approaches are followed by different operating systems for handling it. 
- Many of these approaches try to avoid at least one of the Coffman conditions (generally the 4th one). 
- Below we discuss some of the approaches:
  1. Ignoring deadlock
  2. Detection
  3. Prevention
  4. Avoidance
- There is a second kind of deadlock, called the 'distributed deadlock', which is common in distributed systems where concurrency control is used. 


Sunday, May 5, 2013

What is DRAM? In which form does it store data?


Random access memory is of two types: dynamic random access memory (DRAM) and static random access memory (SRAM). 
Here we shall focus on the first type, i.e., dynamic RAM. 

What is Dynamic Random Access Memory (DRAM)?

- In dynamic RAM, each bit of the data is stored in a separate capacitor. 
- All these capacitors are housed within an IC (integrated circuit).
- These capacitors can be in either of the two states:
  1. Charged and
  2. Discharged
- The two values of a bit, 0 and 1, are represented by means of these two states. 
- However, there is a disadvantage to dynamic RAM. 
- The capacitors tend to leak charge and may therefore lose the stored information. 
- Therefore, it is very important to top the capacitors up with fresh charge. 
- They are refreshed at regular intervals of time. 
- It is because of this refresh requirement that this type of RAM is called dynamic. 
- The main memory, or physical memory, of a computer is constituted of this dynamic RAM.
- Apart from desktops, DRAM is also used in workstation systems, laptops, video game consoles etc. 
- The structural simplicity is one of the biggest advantages of the DRAM. 
- For each bit it only requires one capacitor and one transistor, whereas SRAM requires 4 to 6 transistors for the same purpose. 
- This enables the dynamic RAM to attain very high density. 
- DRAM is a volatile memory unlike the flash memory and so it loses data whenever the power supply is cut.
- The capacitors and transistors it uses are extremely small, so billions of them can easily be integrated into one single memory chip.
- DRAM consists of an array of charge storage cells arranged in a rectangular grid. 
- Each of the cells consists of one transistor and one capacitor. 
- Word lines are the horizontal lines that connect the cells of a row with each other. 
- Each column of cells is composed of two bit lines. 
- These are called the + and - bit lines.
- The manufacturers specify the rate at which the storage cell capacitors are to be refreshed. 
- Typically, the refresh interval is less than or equal to 64 ms. 
- The DRAM controller contains the refresh logic that is responsible for automating the periodic refresh. 
- This job is normally not left to any other software or hardware. 
- Thus, the circuit of the controller is quite complicated. 
- The capacity of DRAM per unit surface is greater than that of the SRAM. 
Some systems may refresh one row at one instant while others may refresh all the rows simultaneously every 64 ms.  
- Some systems use an external timer based up on whose timing they refresh a part of the memory. 
- Many of the DRAM chips come with a counter that keeps track of which row is to be refreshed next.
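The per-row refresh budget follows from simple arithmetic. The row count below is an illustrative figure, since real chips vary:

```python
# Distributed refresh: spreading one full refresh of all rows
# evenly across the 64 ms refresh window.
rows = 8192            # illustrative row count; real chips vary
window_ms = 64.0       # typical refresh window from the text
per_row_us = window_ms * 1000 / rows
print(round(per_row_us, 2))  # 7.81
```

So under these assumptions the controller must issue a row refresh roughly every 7.8 microseconds to stay within the 64 ms window.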
- However, there are some conditions under which the data can be recovered even if the DRAM has not been refreshed for a few minutes. 
- Bits of the DRAM might flip to the opposite state spontaneously because of electromagnetic interference in the system. 
- Background radiation is the major cause of the majority of these soft errors.
- Because of these errors the contents of the memory cells may change, and the circuitry might be harmed. 
- Redundant memory bits together with memory controllers are one potential solution to this problem. 
- These bits are located within the RAM modules. 
- They record parity, which enables the reconstruction of missing data via an error-correcting code (ECC).


Wednesday, March 20, 2013

What are components of autonomic networking?


The concept of autonomic systems has been derived from a biological entity called the autonomic nervous system (ANS). In the human body this system is responsible for functions such as regulating blood pressure and circulation, respiration, and emotive response. 
In this article we discuss about the various components of the autonomic networking.

Components of Autonomic Networking

Autognostics: 
- This category of autonomic components includes capabilities such as that of awareness, self – discovery and self – analysis. 
- With all these capabilities, an autonomic system is capable of having a high – level view. 
- In other words, we can say that perceptual sub–systems are represented by it which serves the purpose of gathering, analyzing and reporting on the conditions and states of the system. 
- These components provide a basis to the system for responding and validating its decisions. 
- In simple words, autognostics provide self – knowledge. 
- If this component is rich, it might provide various perceptual senses. 
- In autonomic systems, models of both the external and internal environments are embedded, through which perceived threats and states can be assigned some relative value. 
- When it comes to autonomic networking, inputs from the following are taken for defining the state of the network:
a) Various network elements such as network interfaces and switches (inclusive of the current state, specification and configuration).
b) End – host
c)  Traffic flows
d) Logical diagrams
e) Design specifications
f)   Application performance data
- This component inter operates with the other components of the autonomic system.

Configuration management: 
- This component is responsible for the interactions that take place among the interfaces and the elements.
- It includes an accounting capability with which it is possible to track the configurations over time under various circumstances. 
- Metaphorically, it acts as the memory of the autonomic system. 
- Provisioning and remediation of a network can be applied through the configuration settings.
- In addition to these, two other things that can be applied are selective performance and implementation-affecting access.
- This category covers the actions that are traditionally taken by human engineers. 
- There are very few exceptional cases in which the interface settings are configured manually using automated scripts. 
- The dynamic population of the devices is maintained implicitly.
- This component must have the capability of operating on all devices and of recovering old configuration settings. 
- There can be some situations where the states may become unrecoverable. 
Therefore, the sub-system must be capable of assessing the consequences of changes before they are issued.

Policy management: 
- This component is inclusive of the following:
a)   Policy specification
b)   Deployment
c)   Reasoning over the policies
d)   Update of policies
e)   Maintenance of the policies
f)    Enforcement
- The reasons for including this component are:
a)  Configuration management
b)  Definition of the roles and relationships
c)  Establishment of trust and reputation
d)  Description of business processes
e)  Definition of performance
f) Constraints on behavior issues such as privacy, resource access, collaboration and security.
- It represents a model of ideal behavior and environment representing effective interaction.
- For defining the constituents of a policy it is important to know what all is involved in its management.

Autodefense: 
- The mechanism presented by this component is both dynamic and adaptive in nature.
- This mechanism has been developed to keep the network infrastructure safe from the malicious attacks. 
- Further, it also prevents the illegal use of the infrastructure for attacking the various technological resources. 
- This component has the capability of striking a balance between the various performance objectives and the corresponding threat management actions. 
- This component can be compared to the immune system of the human body.

Security: 
The structure provided by the security component is responsible for defining and enforcing the relationships between the following:
a)   Roles
b)   Content
c)   Resources


Saturday, January 19, 2013

What is meant by Statistical Usage Testing?


Statistical usage testing is a testing process aimed at assessing the fitness for use of a software system or application.
The test cases chosen for statistical usage testing mostly consist of usage scenarios, which is why the approach is called statistical usage testing. Software quality is ensured by extensive testing, but that testing has to be quite efficient: testing typically accounts for about 20-25 percent of the overall cost of a software project. To reduce the testing effort, the available testing tools should be deployed, since they can create automated tests. Usually, however, the important tests require manual intervention, with the tester having to think about the usage as well as the behavior of the software. This largely repeats work that was done during the requirements analysis phase.

About Statistical Usage Testing

- A usage model forms the basis for the creation of tests in statistical usage testing.
- A usage model is a directed graph, much like a state machine, consisting of various states and transitions. 
- Every transition has a probability associated with it: the probability that the transition is traversed when the system is in the state at which the transition arc begins. 
- Therefore, for every state, the probabilities of the outgoing transitions sum to unity.
- Every transition can be associated with an event, possibly with parameters, that is known to trigger that particular transition. 
- Such event-associated transitions can further be subject to certain conditions called guard conditions. 
- These conditions imply that the transition occurs only if the value of the event parameter satisfies the condition.
- For assigning probabilities to the transitions, 3 approaches have been defined as follows:
  1. Uninformed approach: in this approach, the same probability is assigned to all exit arcs of a state.
  2. Informed approach: in this approach, a sample of user event sequences is used for calculating suitable probabilities. The sample is captured from either an earlier version of the software or its prototype.
  3. Intended approach: this approach is used for shifting the focus of the test to certain state transitions and for modeling hypothetical users.
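Once probabilities are assigned, a test case is simply a random walk through the usage model from its start state to a terminal state. The model below is invented for illustration, not taken from any real specification:

```python
import random

# A small usage model: each state maps to its outgoing
# (target state, probability) transitions. Illustrative only.
MODEL = {
    "Start":  [("Login", 1.0)],
    "Login":  [("Browse", 0.7), ("Exit", 0.3)],
    "Browse": [("Browse", 0.5), ("Buy", 0.2), ("Exit", 0.3)],
    "Buy":    [("Exit", 1.0)],
    "Exit":   [],  # terminal state
}

def generate_test_case(model, rng):
    """Random walk from Start to a terminal state; the visited states form one test case."""
    state, path = "Start", ["Start"]
    while model[state]:
        targets, probs = zip(*model[state])
        state = rng.choices(targets, weights=probs)[0]
        path.append(state)
    return path

rng = random.Random(42)
print(generate_test_case(MODEL, rng))
```

Note that for every non-terminal state the outgoing probabilities sum to unity, as required above; generating many such walks yields test cases whose frequencies mirror the expected usage.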
- According to a property termed the Markov property, the transition probabilities depend only on the actual (current) state. 
- By the same property, they are independent of the history. 
- This implies that the probabilities must be fixed numbers. 
- A system based upon this property is termed a Markov chain, and it allows some analytical descriptions to be derived. 
- The usage distribution is one such description. 
- It gives, for every state, its steady-state probability, i.e., its expected appearance rate.
- All the states are associated with one part or another of the software system or application, and the usage distribution shows which part of the software attracts the most attention from the tests. 
- Some other important descriptions are:
  1. Expected test case length
  2. Number of test cases required for the verification of the desired reliability of the software system or application.
- The idea of the usage model generation can be extended by handling guard conditions and enabling the non–deterministic behavior of the system depending on the state of the system’s data. 
- All this helps towards the application of the statistical usage testing to systems over a wide range. 
- The top-level structure of the usage model can be defined by the use cases of the Unified Modeling Language (UML). 


Wednesday, January 16, 2013

What kinds of functions are used by Cleanroom Software Engineering approach?


Harlan Mills and his colleagues Linger, Poore, and Dyer developed, in the 1980s at IBM, a software process that promised to build zero-error software. This process is now popularly known as Cleanroom software engineering. The process was named by analogy with the manufacturing process of semiconductors. 

The Cleanroom software engineering process makes use of statistical process control and its features. The software systems and applications thus produced have certified software reliability. Productivity is also increased, as the software has no known defects at delivery. 
Below mentioned are some key features of the Cleanroom software engineering process:
  1. Usage scenarios
  2. Incremental development
  3. Incremental release
  4. Statistical modeling
  5. Separate development
  6. Acceptance testing
  7. No unit testing
  8. No debugging
  9. Formal reviews with verification conditions
Basic technologies used by the CSE approach are:
  1. Incremental development
  2. Box structured specifications
  3. Statistical usage testing
  4. Function theoretic verification
- The incremental development phase of CSE involves overlapping increments; from the beginning of specification to the end of test execution, an increment takes around 12-18 weeks.
- Partitioning the increments is critical as well as difficult. 
Formal specification of the CSE process involves the following:
  1. Box structured Designing: Three types of boxes are identified namely black box, state box and clear box.
  2. Verification properties of the structures and
  3. Program functions: These are one kind of functions that are used by the clean room approach.
- State boxes are the description of the state of the system in terms of data structures such as sequences, sets, lists, records, relations and maps. 
- Further, they include the specification of operations and state invariants.
- Each and every operation that is carried out needs to preserve the invariant. 
- In Cleanroom, a constructed program is checked for syntax errors by a parser, but it is not run by the developer.
- Verification is performed by a team review driven by a number of verification conditions. 
- The verification process increases productivity by 3-5 times as compared to the debugging process. 
- Formally proving the program is always an option for developers, but it calls for a lot of math-intensive work.
- As an alternative to this, the Cleanroom software engineering approach prefers a team code inspection in terms of two things, namely:
  1. Program functions and
  2. Verification conditions
- After this, an informal review is carried out which confirms whether all conditions have been satisfied or not. 
- Program functions are simply functions that describe what a prime program computes.

- Functional verification steps are:
1.    Specifying the program by pre- and post-conditions.
2.    Parsing the program into prime programs.
3.    Determining the program functions for SESEs (single-entry single-exit regions).
4.    Defining verification conditions.
5.    Inspecting all the verification conditions.
- Program functions also define the conditions under which a program can be executed legally. Such conditions are called pre-conditions.
- Program functions can also express the effect the program execution has upon the state of the system. Such conditions are called post-conditions.
- Program functions are mostly expressed in terms of the input arguments, instance variables, and return values of the program. 
- However, they cannot be expressed in terms of local program variables. 
- The concept of nested blocks is supported by a number of modern programming languages, and structured programs always require well nesting. 
- The process of determining SESEs also involves parsing, not just program functions.
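The idea of specifying a program by pre- and post-conditions can be sketched in code. The function and its conditions below are an illustrative example, not drawn from the Cleanroom literature:

```python
def isqrt_floor(n):
    """Integer square root.

    Pre-condition:  n is a non-negative integer.
    Post-condition: r * r <= n < (r + 1) * (r + 1).
    """
    assert isinstance(n, int) and n >= 0, "pre-condition violated"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "post-condition violated"
    return r

print(isqrt_floor(10))  # 3
```

In Cleanroom the pair of conditions would be verified by team inspection rather than by runtime assertions, but the assertions make the intended function explicit in the same way.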

