
Friday, April 26, 2013

What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?


- Thrashing takes place when the virtual memory subsystem of the computer is caught in a constant state of paging.
- Data in memory is rapidly exchanged with data on disk, to the exclusion of most application-level processing.
- Thrashing degrades the performance of the computer and may even bring it to a standstill.
- The problem keeps getting worse until the issue is identified and addressed.
- If there are not enough page frames available for the job, the system is very likely to thrash, since the job then requires heavy paging.
- This also leads to a high page-fault rate.
- This in turn cuts down the utilization of the CPU.
- Modern systems rely on paging to execute many programs at once.
- However, this is exactly what makes them prone to thrashing.
- It occurs only if the system does not currently have as much memory as the application requires, or if disk access times are too long.

- Thrashing is also quite common in communication systems where conflicts over internal bus access are frequent.
- The degree by which the latency and throughput of a system degrade depends on the algorithms and the configuration being used.
- In systems using virtual memory, workloads and programs exhibiting insufficient locality of reference can lead to thrashing.
- Thrashing occurs when the physical memory of the system cannot hold the working set of the program or workload.
- Thrashing is also referred to as constant data swapping.
- Older systems were low-end computers whose RAM was insufficient for modern usage patterns.
- Thus, when their memory was increased they became noticeably faster.
- This happened because the additional memory reduced the amount of swapping and thus increased processing speed.
- The IBM System/370 mainframe faced this kind of situation.
- In it, a certain instruction sequence consisted of an execute instruction pointing to a move instruction.
- Both instructions crossed a page boundary, and the source from which the data was to be moved and the destination where it was to be placed each crossed a page boundary as well.
- Thus, this one instruction required 8 pages to be in memory at the same time.
- If the operating system allocated fewer than 8 page frames, a page fault was certain to occur.
- Every attempt to restart the failing instruction then faulted again, so the instruction thrashed.
- This can reduce CPU utilization to almost zero, as the small simulation after this list illustrates.
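As a rough illustration (not from the original post), here is a minimal Python sketch that simulates FIFO page replacement for a workload that repeatedly touches 8 distinct pages, echoing the System/370 example above. The page numbers and frame counts are made up; the point is only that once the frame allocation drops below what the reference pattern needs, almost every reference becomes a page fault.

from collections import deque

def count_page_faults(reference_string, num_frames):
    """Simulate FIFO page replacement and count page faults."""
    frames = deque()              # resident pages, oldest first
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()  # evict the oldest resident page
            frames.append(page)
    return faults

# A workload that cycles through 8 distinct pages, 100 times over.
workload = list(range(8)) * 100

for num_frames in (8, 7, 4):
    faults = count_page_faults(workload, num_frames)
    print(f"{num_frames} frames -> {faults} faults in {len(workload)} references")

# Typical output: 8 frames give 8 faults, but with 7 or 4 frames every
# single reference faults -- the simulated process is thrashing.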

How can a system handle thrashing?

For resolving the problem of thrashing, the following things can be done:
1. Increasing the amount of main memory (RAM) in the system. This is the most effective remedy and helps in the long term as well.
2. Decreasing the number of programs the system executes at the same time.
3. Replacing programs that use a lot of memory with less memory-hungry equivalents.
4. Improving the spatial locality of programs (see the sketch after this list).
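As a hedged illustration of point 4, the sketch below (using NumPy, which is assumed to be available; the array size is arbitrary) times row-wise versus column-wise traversal of a row-major matrix. Row-wise access touches memory sequentially and so has good spatial locality; column-wise access strides across rows and is noticeably slower on most machines.

import time
import numpy as np

a = np.zeros((4000, 4000), dtype=np.float64)   # C-ordered (row-major) matrix

start = time.perf_counter()
row_total = sum(a[i, :].sum() for i in range(a.shape[0]))   # contiguous rows
row_time = time.perf_counter() - start

start = time.perf_counter()
col_total = sum(a[:, j].sum() for j in range(a.shape[1]))   # strided columns
col_time = time.perf_counter() - start

print(f"row-wise   : {row_time:.3f} s")   # good spatial locality
print(f"column-wise: {col_time:.3f} s")   # poor spatial locality, more cache/page traffic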

- Thrashing can also occur in the cache memory, i.e., the faster storage used for speeding up data access.
- It is then called cache thrashing.
- It occurs when the cache is accessed in a pattern that yields no benefit from caching.
- When this happens, many main-memory locations compete for the same cache lines, which in turn leads to a large number of cache misses, as the sketch below shows.
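The following toy simulator (hypothetical parameters: a direct-mapped cache with 256 lines of 64 bytes) shows that competition: a loop that reuses a small set of lines hits almost every time, while two addresses that map to the same line evict each other on every access.

def simulate_direct_mapped(addresses, num_lines=256, line_size=64):
    """Count hits and misses for a direct-mapped cache (tags only, no data)."""
    lines = [None] * num_lines                  # one stored tag per cache line
    hits = misses = 0
    for addr in addresses:
        index = (addr // line_size) % num_lines
        tag = addr // (line_size * num_lines)
        if lines[index] == tag:
            hits += 1
        else:
            misses += 1
            lines[index] = tag
    return hits, misses

reuse = [i * 64 for i in range(128)] * 100      # working set fits: 128 misses, rest hits
conflict = [0, 64 * 256] * 6400                 # both map to line 0 and evict each other

print("reuse    (hits, misses):", simulate_direct_mapped(reuse))
print("conflict (hits, misses):", simulate_direct_mapped(conflict))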


Wednesday, March 21, 2012

Cause-Effect Graphing is a black box testing - Explain?

Many testing techniques are categorized as black box testing, and cause-effect graphing is one of them; it is what this article is all about.

- A cause-effect graph is simply a directed graph created to map a set of causes to a set of effects.
- The causes in the graph are the inputs to a software system or application, and the effects can be thought of as the corresponding outputs.
- The right side of a cause-effect graph holds all the effects and their corresponding nodes, while the left side holds all the causes and their corresponding nodes.
- A graph representing causes and effects in this way is a typical cause-effect graph.
- It may also use intermediate nodes to represent the relation between inputs and outputs through logical operators such as AND and OR (see the sketch after this list).
- Constraints can be added to the causes and effects in the graph; these are drawn as labelled dashed edges annotated with the constraint's symbol.
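A minimal sketch of the idea, with made-up causes and effects (C1 "order total over 100", C2 "customer is a member", C3 "coupon supplied"): each effect is a boolean function of the causes, combined with AND/OR exactly as the intermediate nodes of the graph would combine them.

# Hypothetical cause-effect graph:
#   E1 (free shipping) = C1 AND C2
#   E2 (discount)      = C1 OR  C3
effects = {
    "E1_free_shipping": lambda c: c["C1"] and c["C2"],
    "E2_discount":      lambda c: c["C1"] or c["C3"],
}

def evaluate(causes):
    """Map one combination of causes to the resulting effects."""
    return {name: rule(causes) for name, rule in effects.items()}

print(evaluate({"C1": True, "C2": False, "C3": True}))
# -> {'E1_free_shipping': False, 'E2_discount': True}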

Constraint Symbols for the Causes:
1. E – exclusive
2. I – inclusive (at least one)
3. O – one and only one

- The exclusive constraint states that, at any instant, two causes (say cause 1 and cause 2) cannot both be true simultaneously.
- The inclusive constraint states that at least one of the two or more causes it covers must be true.
- The "one and only one" constraint states that exactly one of the causes it covers must be true.

Constraint Symbols for Requires and Mask

1. R – requires
2. M – mask

- The requires constraint applies to causes: if one of the causes is true, then the other cause must also be true.
- The mask constraint applies to effects: if one of the effects is true, then the other effect must be false.

"One point to be noted here is that the mask constraint is the only one that relates to the effects; all of the other constraints relate to the causes."

The direction of the graph is represented as shown below:
Causes -> Intermediate nodes -> Effects

Normal Forms of Cause Effect Graph

The cause-effect graph is always rearranged in such a way that, between any input and output, there lies only one node. Two normal forms of the cause-effect graph have been identified:

- Conjunctive normal form
- Disjunctive normal form

When is Cause Effect Graphing performed?

One of the main purposes of a cause-effect graph is the generation of a reduced decision table. Cause-effect graphing is performed after the following tasks have been completed:

1. All the requirements have been reviewed to check for any ambiguity.
2. All the requirements have been reviewed for their content.
3. It has been ensured that the requirements are complete and correct.

Cause-effect graphing was originally used for hardware testing, but it has since been adopted for use in software testing.

It considers only the desired external behaviour of the system, which is why it is categorized as a black box testing technique, and it selects only those test cases that represent a logical relation between the causes and the effects (the sketch below derives a small decision table in this way).
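To make the "reduced decision table" idea concrete, here is a hedged sketch that reuses the hypothetical three-cause graph from earlier: it enumerates all cause combinations, drops those forbidden by an example E (exclusive) constraint between C2 and C3, and prints one decision-table column per remaining combination.

from itertools import product

causes = ["C1", "C2", "C3"]
effects = {
    "E1": lambda c: c["C1"] and c["C2"],
    "E2": lambda c: c["C1"] or c["C3"],
}

def satisfies_constraints(c):
    """Example E (exclusive) constraint: C2 and C3 must not both be true."""
    return not (c["C2"] and c["C3"])

print("C1 C2 C3 | E1 E2")
for values in product([False, True], repeat=len(causes)):
    combo = dict(zip(causes, values))
    if not satisfies_constraints(combo):
        continue                                  # the constraint removes this column
    outcome = [effects[e](combo) for e in effects]
    cells = [combo[c] for c in causes] + outcome
    print("  ".join("1" if v else "0" for v in cells[:3]), "|",
          " ".join("1" if v else "0" for v in cells[3:]))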


Tuesday, February 7, 2012

What are common programming bugs every tester should know?

A programming bug, as we all know, is a common or catch-all term for a flaw, error, or mistake in a software system or program. A bug produces unexpected results or causes the software system or program to behave abnormally.

CAUSES OF BUGS
- The root causes of bugs are faults or mistakes introduced into the program's source code, its design and structure, or its implementation.
- A program, or a piece of a program, that is badly affected by bugs is commonly termed a "buggy" program or code.
- Bugs can be introduced unknowingly into the software system or program during coding, specification, data entry, design, and documentation.
- Bugs can also arise from complex interactions between the components of a complex computer program or system.
- This happens because software programmers or developers have to combine large amounts of code, and they may therefore fail to track minor bugs.
- Discovered bugs are documented, and such documents or reports are called bug reports or trouble reports.

HOW DO BUGS ACTUALLY AFFECT A PROGRAM?
- A single bug can trigger a number of faults or errors within the program, which can affect the program in many ways.
- The degree of impact depends on the nature of the bug.
- It can affect the program very badly, causing it to crash or hang, or it may have only a subtle effect on the system.
- Some bugs are never detected during the entire software testing process.
- Some bugs cause a chain effect: one bug causes an error, that error causes other errors, and so on.
- Some bugs may even shut down the whole software system or application.
- Bugs can have serious impacts.
- Bugs can bring down an entire machine.
- Bugs are, after all, mistakes made by human programmers.

TYPES OF BUGS
Bugs are of many types. There are certain common types of bugs that every programmer should be familiar with.

First, here are some security vulnerabilities (a short SQL injection sketch follows the list):
- Improper encoding
- SQL injection
- Improper validation
- Race conditions
- Memory leaks
- Cross site scripting
- Errors in transmission of sensitive data
- Information leak
- Controlling of critical data
- Improper authorization
- Security checks performed only on the client side
- Improper initialization
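Since SQL injection appears in the list above, here is a minimal sketch using Python's built-in sqlite3 module (the users table and the input string are made up): splicing user input into the SQL text lets a crafted value return every row, while a parameterized query treats the same input purely as data.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "' OR '1'='1"     # malicious input

# Vulnerable: the input is spliced directly into the SQL string.
vulnerable = f"SELECT name FROM users WHERE name = '{user_input}'"
print("vulnerable   :", conn.execute(vulnerable).fetchall())   # returns every row

# Safer: a parameterized query keeps the input out of the SQL text.
safe = "SELECT name FROM users WHERE name = ?"
print("parameterized:", conn.execute(safe, (user_input,)).fetchall())   # returns nothing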

SOME COMMON BUGS ARE:

1. Memory leaks
- This bug is catastrophic in nature.
- It is most common in languages such as C and C++, i.e., languages without automatic garbage collection.
- Memory is consumed faster than it is released; in a leak, the rate of deallocation is effectively zero.
- In such a situation the executing program eventually comes to a halt because no free memory is available (see the sketch below).
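The post is about C and C++, but for consistency the sketch below shows a Python analogue (the request handler and cache are hypothetical): allocations are kept reachable by a cache that is never pruned, so memory consumption only ever grows, the same "deallocation rate of zero" described above.

import tracemalloc

_cache = []                        # never pruned: every entry stays reachable forever

def handle_request(payload):
    result = payload * 200         # stand-in for some expensive computation
    _cache.append(result)          # the "leak": nothing ever removes old entries
    return result

tracemalloc.start()
for i in range(10_000):
    handle_request(f"request-{i}")
current, peak = tracemalloc.get_traced_memory()
print(f"still allocated: {current / 1e6:.1f} MB")   # grows with every call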

2. Freeing a resource that has already been freed
- This bug occurs quite frequently.
- Normally a resource is freed once after it has been allocated; here a resource that has already been freed is freed again, which causes an error.

3. De-referencing a NULL pointer
- This bug is caused by an improper or missing initialization.
- It can also be caused by incorrect use of reference variables (a small Python analogue follows).
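Python has no NULL pointer, but the closest analogue is shown in this small hypothetical sketch: a lookup that returns None when no match is found, and a caller that dereferences the result without checking for that case.

def find_user(users, name):
    for user in users:
        if user["name"] == name:
            return user
    return None                      # the case the caller forgets to handle

user = find_user([{"name": "alice"}], "bob")
try:
    print(user["name"].upper())      # user is None: the "null dereference" moment
except TypeError as exc:
    print("dereferenced a missing value:", exc)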

4. References
- Sometimes unexpected or dangling references are created during execution, which may lead to problems at de-allocation.

5. Deadlocks
- Though rare, these bugs are catastrophic; they occur when two or more threads each hold a lock the other needs, so none of them can proceed (see the sketch below).
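A minimal sketch of the situation (thread names and timings are arbitrary): two threads acquire the same two locks in opposite orders, so each ends up waiting for a lock the other holds; a timeout is used only so the demonstration terminates instead of hanging.

import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(first, second, name):
    with first:
        time.sleep(0.1)                        # give the other thread time to take its lock
        got_both = second.acquire(timeout=2)   # without the timeout this would hang forever
        print(f"{name} acquired both locks: {got_both}")
        if got_both:
            second.release()

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))   # takes A, then B
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))   # takes B, then A
t1.start(); t2.start()
t1.join(); t2.join()
# Typically both threads print False: each was stuck waiting on the other's lock.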

6. Race conditions
- These are frequent; they occur when two threads try to access or update the same resource at the same time without synchronization.
- The two threads are said to be racing (see the sketch below).
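A minimal sketch of a lost-update race (counts and thread numbers are arbitrary): several threads increment a shared counter with no synchronization, and because the read-modify-write is not atomic, some updates are often lost. Wrapping the increment in a threading.Lock makes the result deterministic.

import threading

counter = 0

def increment(times):
    global counter
    for _ in range(times):
        counter += 1          # read-modify-write: not atomic, so updates can be lost

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("expected 400000, got", counter)   # frequently less, depending on interpreter and timing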


Tuesday, October 4, 2011

Concept of Project Scheduling - What is the root cause for late delivery of software?

After all the important elements of a project are defined, it is time to connect them. This means creating a network of all the engineering tasks that will enable you to get the job done on time. Responsibility for each task is assigned to make sure it is done, and the network is adapted as the work proceeds. The software project manager does this at the project level, and software engineers do it at an individual level for their own work.

Project scheduling is important because many tasks run in parallel in a complex system, and the result of each task has a strong effect on work performed by other tasks. These interdependencies are very difficult to understand without project scheduling.

The basic reasons why software is delivered late are:
- An unrealistic deadline established by someone outside the software group.
- Changing customer requirements that are not reflected in schedule changes.
- An honest underestimate of the effort and the number of resources required for the job.
- Predictable and/or unpredictable risks that were not considered when the project began.
- Technical difficulties that could not have been foreseen.
- Human difficulties that could not have been foreseen.
- Miscommunication, or lack of communication, among project staff.
- A failure by project management to recognize that the project is falling behind schedule.

Estimation and scheduling techniques, applied under the constraint of a defined deadline, yield a best estimate. If this best estimate indicates that the deadline is unrealistic, the project manager should guard against undue pressure to accept it.

If management insists on a deadline that the estimate shows to be unrealistic, the following steps should be taken:
- Perform a detailed estimate and evaluate the estimated effort and duration.
- Develop a software engineering strategy using an incremental process model.
- Explain to the customer the reasons why the deadline is unrealistic.
- Offer the incremental development strategy as an alternative.


Wednesday, July 27, 2011

Introduction to Debugging? What strategies include debugging?

When software testing has been carried out successfully, the next step is debugging. Debugging is the process of removing the errors that were uncovered during testing. The debugging process starts with the execution of a test case. The results that are obtained are assessed, and the actual and expected values are compared. Debugging is the process that matches symptom with cause.

In the debugging process, there are two possible outcomes:
- the cause is found and corrected.
- the cause is not found.

Debugging sounds difficult, and here are some reasons why it is:
- The cause and the symptom may be remote from each other.
- The symptom may disappear when some other error is corrected.
- A symptom can be caused by human error.
- A symptom can be caused by a timing problem.
- Symptoms can be caused by non-errors.
- Symptoms can be intermittent.
- The causes may be distributed across different tasks running on different processors.

The debugging strategy consists of finding and correcting the cause of a software error using three approaches (a small tracing sketch follows this list):
- Brute force uses the philosophy of "let the computer find the error": memory dumps are taken, run-time traces are invoked, and the program is loaded with output statements.
- Backtracking starts at the site where the symptom is uncovered and traces the source code backward until the cause is found.
- In cause elimination, a hypothesis about the cause is devised and data is used to prove or disprove it; alternatively, a list of possible causes is developed and tests are conducted to eliminate each one.
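A tiny, hypothetical illustration of the brute-force approach: the average function below is instrumented with run-time trace output, so when the failing input is replayed the trace shows exactly how far the computation gets before the bad element is hit.

import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("trace")

def average(values):
    total = 0
    for v in values:
        total += v
        log.debug("after adding %r, total=%r", v, total)   # run-time trace
    result = total / len(values)
    log.debug("returning %r", result)
    return result

# Replaying the failing input: the trace shows two clean additions,
# so the third element is the culprit.
try:
    average([1, 2, "3"])
except TypeError as exc:
    log.error("failure reproduced: %s", exc)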


Thursday, January 21, 2010

Thrashing and its Causes

Although it is technically possible to reduce the number of frames allocated to a process to a minimum, there is some number of pages that are in active use. If the process does not have this number of frames, it will very quickly page-fault. At this point, it must replace some page. However, since all its pages are in active use, it must replace a page that will be needed again right away. Consequently it very quickly faults again, and again, and again. The process continues to fault, replacing pages that it must then fault back in right away. This high paging activity is called thrashing. A process is thrashing if it is spending more time paging than executing.
Thrashing results in severe performance problems. Consider the scenario below to see how early paging systems behaved (a toy simulation of this feedback loop follows):
CPU utilization is monitored by the operating system. If the system finds that CPU utilization is too low, the degree of multiprogramming is increased by adding a new process; the page replacement algorithm, however, replaces pages globally, without considering which process they belong to. A process that needs more frames then takes pages away from other processes, causing them to fault. The processes whose pages were taken in turn pull pages from yet other processes, increasing the amount of faulting. As processes queue up for the paging device and wait for pages, CPU utilization decreases, which prompts the system to increase the degree of multiprogramming even further. This keeps happening, with CPU utilization dropping further and the CPU scheduler trying to increase multiprogramming still more. The result is thrashing and a consequent collapse in system throughput, accompanied by a large increase in the page-fault rate.
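To make the feedback loop above concrete, here is a deliberately simplified numeric sketch (all parameters are invented): a fixed pool of frames is shared by all processes, each process needs a fixed working set, and whenever measured CPU utilization looks low the controller admits another process, which is exactly the policy that drives the system into thrashing.

TOTAL_FRAMES = 100
FRAMES_PER_PROCESS = 12            # hypothetical working-set size

def cpu_utilization(num_processes):
    """Toy model: utilization collapses once total demand exceeds physical frames."""
    demand = num_processes * FRAMES_PER_PROCESS
    if demand <= TOTAL_FRAMES:
        return min(1.0, 0.09 * num_processes)    # more processes -> more useful work
    overcommit = demand / TOTAL_FRAMES
    return max(0.02, 1.0 / overcommit ** 3)      # paging dominates and utilization falls

processes = 2
for step in range(10):
    util = cpu_utilization(processes)
    print(f"step {step}: {processes:2d} processes, CPU utilization {util:.2f}")
    if util < 0.80:
        processes += 1    # the naive response: "the CPU looks idle, admit another process"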

