
Tuesday, August 20, 2013

When is a situation called congestion?

- Network congestion is a common problem in queuing theory and data networking.
- Sometimes, a node or a link carries so much data that its QoS (quality of service) starts deteriorating.
- This situation is known as network congestion, or simply congestion.
This problem typically has the following effects:
  1. Queuing delay
  2. Packet loss
  3. Blocking of new connections


- The last two effects lead to two further problems.
- As the offered load increases incrementally, the throughput of the network either actually decreases or increases only by very small amounts.
- Network protocols use aggressive re-transmissions to compensate for packet loss.
- These protocols thus tend to keep the system in a state of network congestion even after the initial load has dropped to a level that, by itself, could not cause congestion.
- Networks that use such protocols therefore exhibit two stable states under the same level of load.
- The stable state with low throughput is called congestive collapse.
- Congestive collapse is also called congestion collapse.
- In this condition, a packet-switched computer network reaches a point where, because of congestion, little or no useful communication is happening.
- Even the little communication that does happen is of no use.
- Congestion usually occurs at certain points in the network called choke points.
- At these points, the outgoing bandwidth is less than the incoming traffic.
- Choke points are usually the points where a local area network connects to a wide area network.
- When a network falls into such a condition, it settles into a stable state.
- In this state, the traffic demand is high but the useful throughput is quite low.
- Packet delay is also very high.
- The quality of service becomes extremely poor, and the routers cause packet loss because their output queues are full and the excess packets are discarded.
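As an illustration only, the small Python sketch below models a choke point: a bounded router output queue that drains more slowly than traffic arrives, so once the queue is full every additional packet is discarded (tail drop). The queue limit and the arrival/departure rates are invented values, not taken from any real router.

from collections import deque

QUEUE_LIMIT = 8          # packets the output queue can hold (illustrative value)
ARRIVALS_PER_TICK = 3    # incoming packets per time step (offered load)
DEPARTURES_PER_TICK = 2  # packets the outgoing link can forward per time step

queue = deque()
dropped = delivered = 0

for tick in range(100):
    # Incoming traffic: enqueue until the queue is full, then drop (tail drop).
    for _ in range(ARRIVALS_PER_TICK):
        if len(queue) < QUEUE_LIMIT:
            queue.append(tick)
        else:
            dropped += 1
    # The outgoing link drains the queue more slowly than packets arrive.
    for _ in range(min(DEPARTURES_PER_TICK, len(queue))):
        queue.popleft()
        delivered += 1

print(f"delivered={delivered}, dropped={dropped}, still queued={len(queue)}")

Because the offered load exceeds the outgoing capacity, the queue stays full, delay stays at its maximum, and roughly one packet in three is lost.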
- The problem of network congestion was first identified in 1984.
- It first came into the picture when the throughput of the NSFNET phase-I backbone dropped far below its actual capacity.
- The problem continued to occur until Van Jacobson's congestion control method was implemented at the end nodes.

Let us now see what causes this problem.
- When the number of packets sent to a router exceeds its packet-handling capacity, many packets are discarded by the intermediate routers.
- These routers expect the end points to re-transmit the discarded information.
- Early TCP implementations had very poor re-transmission behavior.
- Whenever a packet was lost, the end points sent in extra packets repeating the lost information.
- This doubled the data rate.
- That is exactly the opposite of what should be done during congestion.
- The entire network is thus pushed into a state of congestive collapse, resulting in a huge loss of packets and a drop in the network's throughput.
Modern networks use congestion control as well as congestion avoidance techniques to prevent congestive collapse.
Various congestion control algorithms are available that can be implemented to avoid network congestion.
- These algorithms are classified according to various criteria, such as the amount of feedback, deployability, and so on; a small sketch of one such algorithm follows.
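As a concrete illustration (a minimal sketch, not any particular protocol's actual implementation), the Python snippet below shows the additive-increase/multiplicative-decrease (AIMD) idea behind TCP-style congestion control, which grew out of Van Jacobson's work: the sender grows its congestion window slowly while no loss is seen and cuts it sharply when a loss signals congestion. The constants and the loss pattern are invented values.

def aimd_step(cwnd: float, loss_detected: bool,
              increase: float = 1.0, decrease_factor: float = 0.5) -> float:
    # Return the next congestion window size (in segments).
    if loss_detected:
        # Multiplicative decrease: back off sharply when the network signals congestion.
        return max(1.0, cwnd * decrease_factor)
    # Additive increase: probe gently for more bandwidth while no loss is seen.
    return cwnd + increase

cwnd = 1.0
loss_rounds = {8, 9, 17}   # hypothetical rounds in which a packet loss is detected
for rnd in range(20):
    cwnd = aimd_step(cwnd, rnd in loss_rounds)
    print(f"round {rnd:2d}: cwnd = {cwnd:.1f} segments")

The resulting sawtooth pattern (slow growth, sharp back-off) is what keeps modern senders from re-creating the aggressive re-transmission behavior described above.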


Tuesday, January 15, 2013

What is the Cleanroom approach?


In this article we discuss the cleanroom approach in detail. The team is usually small and is divided into the following three sub-teams:
  1. Specification team: This team is responsible for the development and maintenance of the specifications.
  2. Development team: This team is responsible for the development and verification of the software.
  3. Certification team: This team is responsible for the development of statistical tests and reliability growth models. 
Incremental development is always carried out under statistical quality control so that performance can be assessed at the end of every iteration using the following measures:
  1. Errors per KLOC
  2. Rate of growth in MTTF
  3. Number of consecutive error-free tests.
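As an illustration (the field names and figures below are invented, not taken from any cleanroom reference), a short Python snippet can compute these measures from per-increment data:

# Hypothetical per-increment quality data for a cleanroom-style project.
increments = [
    {"errors": 12, "kloc": 4.0, "mttf_hours": 20.0,  "error_free_tests": 3},
    {"errors": 6,  "kloc": 5.5, "mttf_hours": 45.0,  "error_free_tests": 9},
    {"errors": 2,  "kloc": 6.0, "mttf_hours": 120.0, "error_free_tests": 25},
]

prev_mttf = None
for i, inc in enumerate(increments, start=1):
    errors_per_kloc = inc["errors"] / inc["kloc"]   # measure 1
    growth = f"{inc['mttf_hours'] / prev_mttf:.2f}x" if prev_mttf else "baseline"  # measure 2
    prev_mttf = inc["mttf_hours"]
    print(f"Increment {i}: {errors_per_kloc:.2f} errors/KLOC, "
          f"MTTF growth = {growth}, "
          f"consecutive error-free tests = {inc['error_free_tests']}")  # measure 3

Falling errors per KLOC and rising MTTF from one increment to the next are the kind of statistical evidence the certification team looks for.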
In the cleanroom approach, software development is based purely on mathematical principles, whereas testing is based on statistical principles.
- First, the system to be developed is formally specified and an operational profile is created. The profile and the formal specifications are then used to define the software increments, which are used for two purposes:
  1. Construction of a structured program
  2. Designing of statistical tests: These tests also contribute to the first purpose.
- The constructed program is then formally verified and integrated with the increment.
The flow of the cleanroom approach is as follows:
  1. Software requirements specification
  2. Software design and development
  3. Incremental software delivery
  4. Incremental statistical testing
  5. Regression testing
  6. Software reliability measurement
  7. Process error diagnosis and correction
- Incremental development planning is divided into two parts:
  1. Functional specification: It involves formal design correctness verification.
  2. Usage specification: It involves statistical test case generation.
- Both of these processes then merge into statistical testing, which is followed by the quality certification model and MTTF estimates.
- The whole cleanroom project is built around the incremental strategy.
- Requirements are gathered from the customers, then elicited and refined via traditional methods.
- The definitions of data, behavior, and procedures are isolated and encapsulated by box structures at every level of refinement.
- Specifications, or black boxes, are iteratively refined into state boxes (architectural designs) and clear boxes (component-level designs).
- Formal inspections are carried out to make sure that the code conforms to standards, is syntactically correct, and has had its correctness verified.
- Statistical usage planning involves creating test cases that match the probability distribution of the usage pattern.
- Instead of exhaustive testing, a sample of all possible test cases is employed (a small sketch of this idea follows this list).
- Once the programmers are done with verification, inspection, usage testing, and defect removal, the increment is considered certified and ready to be integrated.
- Customer feedback and involvement throughout the process are necessary for developing the right system.
- Increment planning is required so that the customer's system requirements can be clarified.
- Resource management and control of complexity are also achieved through incremental planning.
- Developing a quality product very much requires control over the software development cycle and process measurement.
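Purely as an illustration (the usage classes, probabilities, and sample size below are invented, not taken from any cleanroom reference), the following Python sketch shows the statistical usage testing idea: test cases are drawn at random so that their frequencies match the operational profile.

import random

# Invented operational profile: probability of each usage class in the field.
operational_profile = {
    "create_order": 0.50,
    "query_order":  0.35,
    "cancel_order": 0.10,
    "admin_report": 0.05,
}

def sample_test_cases(profile, n, seed=42):
    # Draw n test cases so their frequencies match the expected usage pattern.
    rng = random.Random(seed)
    classes = list(profile)
    weights = [profile[c] for c in classes]
    return rng.choices(classes, weights=weights, k=n)

sample = sample_test_cases(operational_profile, n=20)
for usage_class in sorted(set(sample)):
    print(usage_class, sample.count(usage_class))

Testing according to the profile means the most frequently used functions receive the most test effort, which is what makes the reliability figures (such as MTTF) obtained from the sample statistically meaningful.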
- Following are the benefits of concurrent planning:
  1. Concurrent engineering
  2. Stepwise integration
  3. Continuous quality feedback
  4. Continuous customer feedback
  5. Risk management
  6. Change management
- All of the above benefits are achieved respectively by:
  1. Certification and scheduling parallel development
  2. Testing cumulative increments
  3. Statistical process control
  4. Through actual use
  5. Treatment of the high risk elements in early phases
  6. Systematic accommodation of the changes
The design verification advantage allows cleanroom teams to verify each and every line of code.


Monday, July 11, 2011

What is the Incremental Model in Software Engineering? What are its advantages and disadvantages?

When the elements of the waterfall model are applied in an iterative manner, the result is the Incremental Model. In this model, the product is designed, implemented, integrated, and tested as a series of incremental builds. The model is most applicable where software requirements are well defined and basic software functionality is required early.

In the incremental model, a series of releases called 'increments' is delivered, with each increment progressively providing more functionality to the customer.
The first increment is known as the core product. The core product is used by customers, a plan is developed for the next increment, and modifications are made to meet the customer's needs. The process is repeated for each subsequent increment; a small illustrative sketch follows.
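Purely as an illustration (the product, feature names, and increments below are invented), this sketch models how an incremental plan grows the product from the core build, with each new increment integrated on top of the functionality already delivered:

# Invented example of an incremental delivery plan for a small editor product.
increments = [
    ("Increment 1 (core product)", ["create document", "save document"]),
    ("Increment 2",                ["spell check", "print"]),
    ("Increment 3",                ["export to PDF", "track changes"]),
]

delivered = []
for name, features in increments:
    delivered.extend(features)   # integrate the new build with the earlier increments
    print(f"{name}: delivers {features}")
    print(f"  total functionality so far: {delivered}")

Each printed line corresponds to one customer release; feedback on a release feeds the plan for the next increment.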


ADVANTAGES OF INCREMENTAL MODEL IN SOFTWARE ENGINEERING


- It generates working software quickly and early during the software life cycle.
- It is more flexible and less costly.
- Testing and debugging become easier during a smaller iteration.
- Risks can be managed more easily because they can be identified during each iteration.
- Early increments can be implemented with fewer people.

DISADVANTAGES OF INCREMENTAL MODEL IN SOFTWARE ENGINEERING


- Each phase of an iteration is rigid, and the phases do not overlap each other.
- Problems may arise pertaining to system architecture because not all requirements are gathered up front for the entire software life cycle.

