


Sunday, August 25, 2013

What is the concept of flow control?

- Flow control is an important concept in the field of data communications. 
- It involves managing the rate of data transmission between two communicating nodes. 
- Flow control is important to prevent a slow receiver from being outrun by a fast sender. 
- It gives the receiver a mechanism through which it can regulate how fast the sender transmits.
- This prevents the receiving node from being overwhelmed with traffic from the transmitting node.
- Do not confuse flow control with congestion control; they are different concepts. 
- Congestion control comes into play when the network itself is actually congested, whereas flow control is concerned only with the two communicating endpoints. 

On the other hand, flow control mechanisms can be classified in the following two ways:
  1. The receiving node sends feedback to the sending node.
  2. The receiving node does not send feedback to the sending node.
- The sending computer may transmit data at a faster rate than the other computer can receive and process it. 
- This is why we require flow control. 
- This situation arises when the traffic load on the receiving computer is too high compared with that on the sending computer. 
- It can also arise when the receiving computer has less processing power than the one sending the data.

Stop and Wait Flow Control Technique 
- This is the simplest flow control technique. 
- Here, the message is broken down into a number of frames, and a frame is sent only when the receiver is ready to receive it. 
- After sending each frame, the sending system waits a specific time for an acknowledgement (ACK) from the receiver. 
- The purpose of the acknowledgement is to confirm that the frame was received properly. 
- If a packet or frame gets lost during transmission, it has to be re-transmitted. 
- This process is called automatic repeat request (ARQ); a small sketch of it follows this list. 
- The problem with this technique is that it can transmit only one frame at a time. 
- This makes the transmission channel very inefficient. 
- Until the sender gets an acknowledgement, it will not proceed to transmit the next frame. 
- Both the transmission channel and the sender sit idle during this period. 
- Simplicity is this method's biggest advantage. 
- Its disadvantage is the inefficiency that results from that simplicity. 
- The sender's waiting state creates the inefficiency, which is most pronounced when the transmission delay is shorter than the propagation delay. 
- Long transmissions are another cause of inefficiency, and they also increase the chance of errors creeping into the protocol. 
- With short messages, it is quite easy to detect errors early. 
- Yet breaking one big message into many separate smaller frames increases the inefficiency too, because the pieces together take a long time to be transmitted, with a wait after every frame.
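To make the mechanics concrete, here is a minimal Python sketch of stop-and-wait with ARQ. It is an illustration only: the simulated channel, the loss rate, and the helper names (unreliable_send, stop_and_wait) are hypothetical stand-ins for a real link, not part of any particular protocol stack.

    import random

    def unreliable_send(frame, loss_rate=0.2):
        """Simulated channel: return an ACK unless the frame (or its ACK) is 'lost'."""
        if random.random() < loss_rate:
            return None
        return ("ACK", frame["seq"])

    def stop_and_wait(message, frame_size=4, max_retries=5):
        """Send `message` one frame at a time, waiting for each ACK before the next."""
        frames = [message[i:i + frame_size] for i in range(0, len(message), frame_size)]
        for seq, payload in enumerate(frames):
            frame = {"seq": seq % 2, "data": payload}   # 1-bit alternating sequence number
            for attempt in range(max_retries):
                if unreliable_send(frame) is not None:  # ACK arrived: move on
                    break
                print(f"frame {seq}: no ACK, retransmitting")
            else:
                raise RuntimeError(f"frame {seq}: gave up after {max_retries} attempts")
        print("all frames delivered")

    stop_and_wait(b"hello, stop and wait")

With transmission time Tt and one-way propagation delay Tp, the sender is busy for Tt out of every Tt + 2*Tp, so link utilization is roughly Tt / (Tt + 2*Tp); this is why the technique fares worst when the propagation delay dominates.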


Sliding Window Flow Control Technique 
- This is another flow control method, in which the receiver gives the sender permission to transmit data continuously until a window is filled up. 
- Once the window is full, the sender stops transmitting until a new window is advertised. 
- This method makes good use of a receiver whose buffer size is limited, because the window can be matched to the buffer. 
- During the transmission, buffer space for, say, n frames is allocated at the receiver. 
- This means the receiver can accept n frames without having to wait for an ACK. 
- After n frames, an ACK is sent carrying the sequence number of the next frame expected; the sketch after this list shows the sender-side bookkeeping.
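A rough Python sketch of that sender-side bookkeeping, assuming cumulative ACKs and a fixed window; the names (sliding_window_send, in_flight) are illustrative, not from any real networking library.

    from collections import deque

    def sliding_window_send(frames, window_size=4):
        """Sender-side sliding-window bookkeeping with cumulative ACKs."""
        base = 0                     # oldest unacknowledged frame
        next_seq = 0                 # next frame to transmit
        in_flight = deque()
        while base < len(frames):
            # Keep transmitting while the window has room.
            while next_seq < len(frames) and next_seq - base < window_size:
                print(f"send frame {next_seq}")
                in_flight.append(next_seq)
                next_seq += 1
            # Pretend the receiver ACKs everything in flight at once; the ACK
            # carries the sequence number of the next frame it expects.
            ack = in_flight[-1] + 1
            print(f"recv ACK {ack}")
            while in_flight and in_flight[0] < ack:
                in_flight.popleft()
            base = ack               # the window slides forward

    sliding_window_send(list(range(10)), window_size=4)

Unlike stop-and-wait, up to window_size frames are in flight at once, so the channel stays busy while acknowledgements are still in transit.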


Sunday, April 28, 2013

What is fragmentation? What are different types of fragmentation?


In the field of computer science, fragmentation is an important factor in system performance: it plays a great role in bringing down the performance of a computer. 

What is Fragmentation?

- Fragmentation can be defined as a phenomenon involving inefficient use of storage space, which in turn reduces the capacity of the system and also brings down its performance.  
- This phenomenon leads to wasted memory; the term itself means ‘wasted space’.
- Fragmentation takes three different forms, as mentioned below:
  1. External fragmentation
  2. Internal fragmentation
  3. Data fragmentation
- These forms of fragmentation might be present in conjunction with each other or in isolation. 
- In some cases, fragmentation might be accepted in exchange for simplicity and speed of the system. 

Basic Principle behind the Fragmentation Concept 
- The system allocates memory in the form of blocks or chunks whenever some computer program requests it. 
- When the program has finished executing, the allocated chunks are returned to system memory. 
- The size of the memory chunks required by each program varies.
- In its lifetime, a program may request any number of memory chunks and free them after use. 
- When a program begins its execution, the memory areas that are free to be allocated are contiguous and long. 
- After prolonged usage, these contiguous memory areas get fragmented into smaller and smaller parts. 
- Eventually a stage comes when it becomes almost impossible to serve the large memory demands of a program. 

Types of Fragmentation


1. External Fragmentation: 
- This type of fragmentation occurs when the available memory is divided into smaller blocks that are interspersed among the allocated regions. 
- Certain memory allocation algorithms have the drawback that they are at times unable to arrange the memory used by programs in a way that minimizes waste. 
- This leads to an undesired situation where, even though we have free memory, it cannot be used effectively, since it is divided into pieces so small that no single piece can satisfy a program's memory demand (see the sketch after this list).  
- Since the unusable storage here lies outside the allocated memory regions, this type of fragmentation is called external fragmentation. 
- This type of fragmentation is also very common in file systems, since many files with different sizes are created as well as deleted. 
- The effect is worse if a deleted file was stored in many small pieces, because this leaves behind similarly small free chunks which might be of no use.
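A toy illustration of that situation in Python, using a hypothetical first-fit free list; the addresses and sizes are made up for the example.

    def first_fit(free_list, request):
        """Return the index of the first free hole big enough for `request`, else None."""
        for i, (start, size) in enumerate(free_list):
            if size >= request:
                return i
        return None

    # Three 50-byte holes left behind by earlier allocations and frees:
    free_list = [(0, 50), (120, 50), (300, 50)]

    print(sum(size for _, size in free_list))   # 150 bytes free in total...
    print(first_fit(free_list, 100))            # ...yet a 100-byte request fails: None
    print(first_fit(free_list, 40))             # a small 40-byte request still fits: 0

150 bytes are free overall, yet the 100-byte request cannot be satisfied, because no single hole is large enough.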

2. Internal Fragmentation: 
- Certain rules govern the process of memory allocation. 
- These rules can lead to the allocation of more memory than is required. 
- For example, suppose the rule is that memory allocated to a program must be divisible by 4, 8, or 16. If a program actually requires 19 bytes, it gets 20 bytes. 
- This wastes the extra 1 byte of memory. 
- That memory is unusable yet contained within the allocated region itself, which is why this type of fragmentation is called internal fragmentation (a small sketch follows this list).
- In computer forensic investigation, this slack space is a most useful source of evidence. 
- However, it is often difficult to reclaim internal fragmentation; making a change in the allocator's design is the most effective way of preventing it. 
- Memory pools in dynamic memory allocation are among the most effective methods for cutting down internal fragmentation, since they spread the space overhead over a large number of objects.
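The 19-byte example is easy to reproduce. A minimal Python sketch, assuming a hypothetical allocator that rounds every request up to a 4-byte granularity:

    def allocate(requested, granularity=4):
        """Round a request up to the allocator's granularity; the excess is waste."""
        granted = -(-requested // granularity) * granularity   # ceiling division
        return granted, granted - requested

    for need in (19, 33, 64):
        granted, waste = allocate(need)
        print(f"need {need:2d} bytes -> granted {granted:2d}, {waste} byte(s) wasted")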

3. Data Fragmentation: 
- This occurs when a piece of data is broken up into many pieces that are stored far apart from one another.


Monday, April 9, 2012

What are different aspects of error seeding?

There are so many issues associated with the so-called bugs and errors! A good tester needs to be well aware of all the terms and issues associated with errors and bugs.

Errors are the worst nightmare a tester or developer can face, since an error may have great potential to disrupt the functioning of the whole software system or application, and it may also introduce a chain of new errors into the program, making them even more cumbersome to track down.

This article focuses on one of the terms associated with errors and bugs, namely “error seeding”.

You are probably quite familiar with what actual seeding is: the process of sowing seeds that grow up to become plants. Similarly, from the term “error seeding” itself we can make out that it is the process of intentionally adding, or sowing, faults in a software program so that the rate of detection and removal of errors can be evaluated.

About Error Seeding



- One thing should be kept in mind: the errors to be injected into the program must be known ones, otherwise they will themselves become a problem for the tester.

- In many cases the error seeding methodology is employed to estimate the number of errors still remaining in the program code.

- Errors are intentionally injected into the source code of the software system or application in order to determine the rate at which errors are discovered, which is crucial for the software testing process.

- Knowing the rate of error detection can help a tester see what is wrong with the testing methodologies he or she is using and how they can be improved upon.

Uses of Error Seeding Method


Apart from being used to determine the detection rate, this methodology is helpful in the tasks mentioned below as well:

1. It is used to evaluate the tester's error-finding skills.
2. It is used to evaluate the application's ability to survive the errors present in it.
3. It is used to determine the ease with which discovered bugs can be fixed without blocking the workflow of the software system or application.

Advantages and Disadvantages of Error seeding



- Error seeding involves deliberately planting errors and bugs in the code.

- After the errors are seeded, several test cases are executed, and the ratio between the artificial (seeded) errors and the actual errors is calculated from the total number of errors detected; a small sketch of this estimate follows this list.

- Normal test cases designed for any other error detection methodology can also be used for error seeding.

- The error seeding methodology is quite inefficient compared with other methodologies such as mutation testing, but it takes less time to complete and is very economical to carry out.

- However inefficient it may be, it is often the better option for programs with a lengthier source code, provided the defects injected into the code have non-trivial severity.

- The percentage of seeded defects detected during testing can be thought of as a reasonable predictor of the effectiveness of the testing methodology.
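One common way to turn that seeded-to-actual ratio into a number is a capture-recapture style estimate: assume testing finds the same fraction of the real errors as of the seeded ones, and extrapolate. A small Python sketch with hypothetical figures (the function name and the counts are made up for the illustration):

    def estimate_remaining_errors(seeded, seeded_found, real_found):
        """Estimate how many real errors remain after an error-seeding run.

        Assumes testing uncovers the same fraction of real errors as of
        seeded ones: real_found / total_real ~= seeded_found / seeded.
        """
        if seeded_found == 0:
            raise ValueError("no seeded errors were found; estimate undefined")
        total_real = real_found * seeded / seeded_found
        return total_real - real_found

    # Hypothetical run: 25 errors seeded, 20 of them found, 60 real errors found.
    print(estimate_remaining_errors(seeded=25, seeded_found=20, real_found=60))  # 15.0

In this example, 60 real errors were found alongside 20 of the 25 seeded ones, so roughly 75 real errors are estimated in total, leaving about 15 still undiscovered.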

