
Sunday, December 1, 2013

Testing: A brief summary of white box testing

White box testing is known by many other names as well, such as glass box testing, clear box testing, structural testing, and transparent box testing. This methodology tests the internal structure of the software, or to be precise, how the application actually works and whether it works as desired. It is the opposite of black box testing, which tests only the system's functionality without looking into the internal structure. Test cases for white box testing are designed based on programming skills and an internal perspective of the system. From the various possible inputs, some are selected to exercise all paths through the program, and the tester checks whether they produce the desired outputs.
A number of errors can be discovered using this methodology, but it cannot detect parts of the specification that are missing or have not been implemented. The following testing techniques are included under white box testing:

- Data flow testing
- Path testing
- Decision coverage
- Control flow testing
- Branch testing
- Statement coverage

This methodology tests the source code of the application using test cases derived from the techniques listed above. These techniques act as guidelines for creating an error-free environment and as the building blocks of white box testing; its essence lies in carefully testing the source code so that errors do not surface later on. Any fragile piece of code in the program can be examined this way. Using these techniques, every visible path through the source code can be exercised, but the tester must know which path of the code is being tested and what the output should be. Now we shall discuss the various levels of white box testing:

- Unit testing: White box testing done during this phase ensures that each piece of code works as desired before it is integrated with the units that have already been tested. This helps in catching errors at an early stage, before the units are integrated and the rising system complexity makes them harder to find.
- Integration testing: Test cases at this level test how the interfaces of the various units interact with each other, i.e., the system's behaviour is tested in an open environment. During such testing, any interaction that is not familiar to the programmer comes to light.
- Regression testing: This phase reuses recycled white box test cases from the earlier levels.

For white box testing to be effective, the tester should understand the purpose of the source code so that it is tested well. The programmer must understand the application in order to create effective and appropriate test cases.
The person preparing the test cases needs to know the application very well, so that a case can be prepared for every visible path. Once the source code is understood, that analysis is used to prepare the test cases. The steps involved in white box testing are the following (a small test-design sketch follows these steps):
- Input, which may include functional requirements, specifications, source code, etc.
- Processing unit, which carries out the risk analysis, prepares a test plan, executes the tests and reports the results.
- Output, which provides the final result.
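To make this concrete, here is a small illustrative sketch in Python. The classify_discount function and its values are invented for this example; the point is that the test cases are written with knowledge of the internal structure, so every branch (every visible path) of the code gets exercised.

```python
import unittest

# Hypothetical function under test; the tests below are designed from its
# internal structure, which is what makes them white box tests.
def classify_discount(order_total, is_member):
    """Return the discount rate for an order."""
    if order_total <= 0:
        raise ValueError("order total must be positive")
    if is_member and order_total >= 100:
        return 0.15          # members with large orders
    if is_member:
        return 0.05          # members with small orders
    return 0.0               # non-members get no discount


class TestClassifyDiscountBranches(unittest.TestCase):
    # One test per path through the code, so every branch is executed.
    def test_invalid_total_raises(self):
        with self.assertRaises(ValueError):
            classify_discount(0, True)

    def test_member_large_order(self):
        self.assertEqual(classify_discount(150, True), 0.15)

    def test_member_small_order(self):
        self.assertEqual(classify_discount(40, True), 0.05)

    def test_non_member(self):
        self.assertEqual(classify_discount(40, False), 0.0)


if __name__ == "__main__":
    unittest.main()
```

Running this suite with a coverage tool would confirm that every statement and branch of the function is hit, which is exactly the statement and branch coverage mentioned above.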


Thursday, October 17, 2013

What happens when you find a serious defect right before you release?

The end game of a software development schedule can be very critical. It is the timeframe when you are hoping that no critical problems pop up, since the time available for turning around a fix is short, and the tension such a problem causes can be enough to give everyone involved a coronary. Once we had a defect come up two days before we were supposed to release the product, and the complications were severe in terms of decision making: we had to decide whether the defect needed to be taken, get somebody very reliable to diagnose it, evaluate the fix to see that nothing else could get broken, roll the fix into the product, and then test the fix thoroughly to ensure that nothing was broken by it. All of this caused a huge amount of tension, and we had management on our heads wanting to know the progress and, more worryingly, why this defect was not caught earlier, and whether we were confident that we had done enough testing to catch all other defects like this one.
Typically, when you reach a situation like this, you need to make sure you are thinking through all the options. It is possible to brazen it out, hope that everything goes well, and say that you are fine with releasing the product, but without a proper analysis that is not the correct option. If you release the product in haste, you may reach a situation where users find serious defects after the release, and that is something no one wants. If it happens more than once, it can cause a loss of user confidence in the product, and to some extent in the organization, with serious consequences. Even so, in most cases I know of, teams tend to brazen it out even if they run into many more problems later.
On the other hand, you cannot suddenly decide to delay the product release date to get some extra confidence in the testing (which would have been shaken by the recently found serious defect). This sounds good, but there are costs involved with such a decision. A delay causes a loss in revenue, can cause customer confidence problems if the organization suddenly has to announce a slipped release, and can hit team morale hard because of the management involvement. However, the impact may still be smaller than releasing the product and having customers find many problems.
So how do you make such a decision? Well, that is the million dollar question, and there are no easy answers. Whatever decision is made carries a number of risks, but it is important to get genuine feedback from the testing team about what they feel, especially from the test managers (and this needs to be done in an environment with few recriminations). Finally, the team manager needs to own the decision and be able to justify it in front of management.


Friday, September 27, 2013

What are the parameters of QoS - Quality of Service?

With the arrival of new technologies, applications and services in the field of networking, competition is rising rapidly. Each of these technologies, services and applications is developed with the aim of delivering QoS (quality of service) that is at least as good as, or better than, that of the legacy equipment. Network operators and service providers operate as trusted brands, and maintaining these brands is critically important to their business. The biggest challenge is to put the technology to work in such a way that all the customers' expectations for availability, reliability and quality are met, while at the same time giving network operators the flexibility to adopt new techniques quickly.

What is Quality of Service?

- The quality of service is defined by certain parameters, which play a key role in the acceptance of new technologies.
- One organization working on several QoS specifications is ETSI.
- The organization has been actively participating in organizing inter-operability events regarding speech quality.
- The importance of the QoS parameters has been increasing with the growing inter-connectivity of networks and the interaction between many service providers and network operators in delivering communication services.
- It is quality of service that lets you specify parameters on multiple queues in order to increase the performance and throughput of wireless traffic such as VoIP (voice over IP) and streaming media, including audio and video of different types.
- This is also done for ordinary IP traffic over the access points.
- Configuring quality of service on these access points involves setting many parameters on the queues that already exist for the various types of wireless traffic.
- The minimum as well as the maximum wait times for transmission are also specified.
- This is done through the contention windows.
- The flow of traffic from the access point to the client station is affected by the AP EDCA (enhanced distributed channel access) parameters.
- The traffic flow from the client station to the access point is controlled by the station EDCA parameters.

Below we mention some parameters:
Ø  QoS preset: The preset options listed are WFA defaults, optimized for voice, and custom.
Ø  Queue: Different queues are defined for the different types of data transmission from the AP to the client station:
- Voice (data 0): Queue with minimum delay and high priority. Time-sensitive data such as VoIP and streaming media is automatically put into this queue.
- Video (data 1): Queue with minimum delay and high priority. Time-sensitive video data is automatically put into this queue.
- Best effort (data 2): Queue with medium delay and throughput and medium priority. This queue holds all the traditional IP data. 
- Background (data 3): Queue with high throughput and lowest priority. Data which is bulky, requires high throughput and is not time sensitive such as the FTP data is queued up here.

Ø AIFS (arbitration inter-frame space): This sets the wait time for data frames. The wait time is measured in slots, and valid values lie in the range of 1 to 255.
Ø Minimum contention window (cwMin): This QoS parameter is supplied as input to the algorithm that determines the initial random backoff wait time for re-transmission (a small sketch of this calculation follows the list).
Ø cwMax (maximum contention window)
Ø Maximum burst
Ø Wi-Fi multimedia (WMM)
Ø TXOP limit
Ø Bandwidth
Ø Variation in delay
Ø Synchronization
Ø Cell error ratio
Ø Cell loss ratio
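To make the contention-window parameters a little more concrete, here is a minimal sketch (not taken from any particular access point's firmware) of how a station could derive a random backoff wait time from AIFS, cwMin and cwMax. The slot duration, the example queue values and the doubling of the window after each failed attempt are illustrative assumptions.

```python
import random

SLOT_TIME_US = 9  # slot duration in microseconds (illustrative value)

def edca_backoff_us(aifs_slots, cw_min, cw_max, retries=0):
    """Return an illustrative backoff wait time in microseconds.

    aifs_slots : AIFS expressed in slots (valid range 1-255 per the text above)
    cw_min/cw_max : contention window bounds; the window doubles after each
                    failed transmission attempt but never exceeds cw_max.
    """
    cw = min(cw_max, (cw_min + 1) * (2 ** retries) - 1)
    backoff_slots = random.randint(0, cw)
    return (aifs_slots + backoff_slots) * SLOT_TIME_US

# Voice queue (high priority, small window) vs. background queue (low priority).
print("voice wait:", edca_backoff_us(aifs_slots=2, cw_min=3, cw_max=7), "us")
print("background wait:", edca_backoff_us(aifs_slots=7, cw_min=15, cw_max=1023), "us")
```

The smaller AIFS and contention window of the voice queue are what give it, on average, earlier access to the medium than the background queue.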



Monday, September 23, 2013

What is meant by Quality of Service provided by network layer?

- QoS, or quality of service, is a parameter that refers to a number of aspects of computer networks, telephony, etc.
- This parameter allows traffic to be transported according to some specific requirements.
- Technology has advanced so much that computer networks can now also double up as telephone networks for carrying audio conversations.
- The technology even supports the applications which have strict service demands. 
- The ITU defines the quality of service in telephony. 
- It covers the requirements concerning all aspects of a connection, such as the following:
Ø  Service response time
Ø  Loss
Ø  Signal – to – noise ratio
Ø  Cross – talk
Ø  Echo
Ø  Interrupts
Ø  Frequency response
Ø  Loudness levels etc.  

- The GoS (grade of service) requirement is one subset of the QoS and consists of those aspects of the connection that relate to its coverage as well as capacity. 
- For example, outage probability, maximum blocking probability and so on. 
- In the case of the packet switched telecommunication networks and computer networking, the resource reservation mechanisms come under the concept of traffic engineering. 
- QoS can be defined as the ability to provide different priorities to different applications, data flows and users.
- It is important to have QoS guarantees when the capacity of the network is insufficient.
- For example, voice over IP, IP-TV and so on. 
- All these services are sensitive to delays, have fixed bit rates and have limited capacities.
- A protocol or network supporting QoS may agree on a traffic contract with the application software and reserve capacity in the network nodes.
- However, quality of service is not supported by best-effort services.
- An alternative to complex QoS control mechanisms is to provide high quality communication over a best-effort network.
- This is done by over-provisioning the capacity so much that it is sufficient for the expected peak traffic load.
- With the network congestion problems eliminated, the QoS mechanisms are not required either.
- QoS might sometimes be taken as the level of the service's quality, i.e., the GoS.
- For example, low bit error probability, low latency, high bit rate and so on.
- QoS can also be defined as a metric that reflects the experienced quality of the service.
- In that sense it is the cumulative effect of the service as experienced by the user.
Certain types of network traffic require a defined QoS, such as the following:
Ø  Streaming media such as IPTV (internet protocol television), audio over Ethernet, audio over IP etc.
Ø  Voice over IP
Ø  Video conferencing
Ø  Telepresence
Ø  Storage applications such as iSCSI and FCoE
Ø  Safety-critical applications
Ø  Circuit emulation service
Ø  Network operations support systems
Ø  Industrial control systems
Ø  Online games

- All the above mentioned services are examples of inelastic services, which require a certain level of latency and bandwidth to operate properly.
- On the other hand, the opposite kind of services, the elastic services, can work with any level of bandwidth and latency.
- An example of this type of service is a bulk file transfer application based on TCP.
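As one small, hedged illustration of asking the network for priority, the sketch below marks a UDP socket's outgoing packets with the DSCP Expedited Forwarding code point via the IP TOS byte, a common convention for VoIP-style inelastic traffic. Whether routers along the path honour the marking depends entirely on their QoS configuration; the address and port are placeholders, and socket.IP_TOS is available on Linux-style platforms.

```python
import socket

# DSCP "Expedited Forwarding" (46) sits in the upper six bits of the TOS byte.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the kernel to set the TOS/DSCP field on outgoing packets.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Placeholder destination; real VoIP traffic would go to the call peer.
sock.sendto(b"voice payload", ("192.0.2.10", 4000))
sock.close()
```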
- A number of factors affect the quality of service in the packet switched networks. 
- These factors can be broadly classified into two categories, namely human factors and technical factors.
The following are counted among the technical factors:
Ø  reliability
Ø  scalability
Ø  effectiveness
Ø  maintainability
Ø  grade of service and so on.

- Voice transmissions in circuit-switched networks such as ATM (asynchronous transfer mode) or GSM have QoS in the core protocol.


Wednesday, September 4, 2013

What is a choke packet?

- The networks often experience problems with congestion and flow of the traffic. 
- While implementing flow control a special type of packet is used throughout the network. 
- This packet is known as the choke packet. 
- The congestion in the network is detected by the router when it measures the percentage of the buffers that are actually being used. 
- It also measures the utilization of the lines and average length of the queues. 
- When congestion is detected, the router transmits choke packets throughout the network.
- These choke packets are meant for the data sources spread across the network that are associated with the congestion problem.
- These data sources in turn respond by cutting down the amount of data they are transmitting.
- A choke packet has been found to be very useful in the maintenance tasks of the network.
- It also helps in maintaining the quality to some extent. 
- In both of these tasks, it is used for informing the specific transmitters or the nodes that the traffic they are sending is resulting in congestion in the network. 
- Thus, the transmitters or the nodes are forced to decrease the rate at which they are generating traffic.
- The main purpose of the choke packets is controlling the congestion and maintaining flow control throughout the network. 
- The router directly addresses the source node, thus causing it to cut down its data transmission rate. 
- The source node acknowledges this by reducing its transmission rate by some percentage.
- An example of a choke packet commonly used by most routers is the ICMP (internet control message protocol) source quench packet.
- The technique of using the choke packets for congestion control and recovery of the network involves the use of the routers. 
- The whole network is continuously monitored by the routers for any abnormal activity.
- Factors such as the space in the buffers, queue lengths and the line utilization are checked by the routers. 
- In case the congestion occurs in the network, the choke packets are sent by the routers to the corresponding parts of the network instructing them to reduce the throughput. 
- The node that is the source of the congestion has to reduce its throughput rate by a certain percentage that depends on the size of the buffer, bandwidth that is available and the extent of the congestion. 
- Sending the choke packets is the way of routers telling the nodes to slow down so that the traffic can be fairly distributed over the nodes. 
- The advantage of using this technique is that it is dynamic in nature. 
- The source node may send as much data as it needs to, while the network informs it when it is sending too much traffic.
- The disadvantage is that it is difficult to know by what factor the node should reduce its throughput.
- The amount of the congestion being caused by this node and the capacity of the region in which congestion has occurred is responsible for deciding this. 
- In practice, this information is not instantly available.
- Another disadvantage is that after the node has received the choke packet, it should be capable of rejecting the other choke packets for some time. 
- This is so because many additional choke packets might be generated during the transmission of the other packets. 

The question is: for how long is the node supposed to ignore these packets?
- This depends on some dynamic factors such as the delay time.
- Not all congestion problems are the same; they vary over the network depending on its topology and the number of nodes it has.
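The behaviour described above can be summarised in a small, purely illustrative simulation: a sender cuts its rate by a fixed percentage whenever it sees a choke packet, and ignores further choke packets for a hold-off interval so that packets already in flight do not trigger repeated reductions. The class name and numbers are invented for this sketch.

```python
class ChokeAwareSender:
    """Illustrative sender that reacts to choke packets from the network."""

    def __init__(self, rate_kbps=1000, reduction=0.25, holdoff=2.0):
        self.rate_kbps = rate_kbps      # current sending rate
        self.reduction = reduction      # fraction to cut on each choke packet
        self.holdoff = holdoff          # seconds during which further chokes are ignored
        self.ignore_until = 0.0

    def on_choke_packet(self, now):
        if now < self.ignore_until:
            return  # still in the hold-off window: ignore the duplicate choke
        self.rate_kbps *= (1.0 - self.reduction)
        self.ignore_until = now + self.holdoff


sender = ChokeAwareSender()
for t in (0.0, 0.5, 3.0):           # choke packets arriving at these times
    sender.on_choke_packet(t)
    print(f"t={t:.1f}s rate={sender.rate_kbps:.0f} kbit/s")
# The choke at t=0.5s is ignored; only the ones at t=0.0s and t=3.0s reduce the rate.
```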


Tuesday, August 20, 2013

When is a situation called congestion?

- Network congestion is quite a common problem in the queuing theory and data networking. 
- Sometimes, the data carried by a node or a link is so much that its QoS (quality of service) starts deteriorating. 
- This situation or problem is known as the network congestion or simply congestion. 
This problem has the following typical effects:
Ø  Queuing delay
Ø  Packet loss
Ø  Blocking of new connections


- The last two effects lead to further problems.
- As the offered load increases incrementally, either the throughput of the network is actually reduced, or it increases only by very small amounts.
- Network protocols use aggressive re-transmissions to compensate for packet loss.
- These protocols thus tend to keep the system in a state of network congestion even after the initial load has been reduced to a level that would not, by itself, cause congestion.
- Thus, two stable states are exhibited by the networks that use these protocols under similar load levels. 
- The stable state in which the throughput is low is called the congestive collapse. 
- Congestive collapse is also called congestion collapse.
- In this condition, a packet-switched network reaches a state where, because of congestion, there is little or no useful communication happening.
- In such a situation even if a little communication happens it is of no use. 
- There are certain points in the network, called choke points, where congestion usually occurs.
- At these points, the outgoing bandwidth is less than the incoming traffic.
- Choke points are usually the points that connect a wide area network and a local area network.
- When a network falls in such a condition, it is said to be in a stable state. 
- In this state, the demand for the traffic is high but the useful throughput is quite less.
- Also, the levels of packet delay are quite high. 
- The quality of service gets extremely bad and the routers cause the packet loss since their output queues are full and they discard the packets. 
- The problem of network congestion was identified in 1984.
- The problem first showed up in practice when the throughput of the NSFNET phase-I backbone collapsed to a small fraction of its capacity.
- This problem continued to occur until the Van Jacobson’s congestion control method was implemented at the end nodes.

Let us now see what causes this problem.
- When the number of packets being sent to a router exceeds its packet handling capacity, many packets are discarded by the intermediate routers.
- These routers expect the re-transmission of the discarded information. 
- Earlier, the re-transmission behavior of the TCP implementations was very bad. 
- Whenever a packet was lost, the extra packets were sent in by the end points, thus repeating the lost information. 
- But this doubled the data rate. 
- This is just the opposite of what should be done during congestion.
- The entire network is thus pushed in a state of the congestive collapse resulting in a huge loss of packets and reducing the throughput of the network. 
- Congestion control as well as congestion avoidance techniques are used by modern networks to avoid the congestive collapse problem.
- Various congestion control algorithms are available that can be implemented to avoid the problem of network congestion.
- There are various criteria based up on which these congestion control algorithms are classified such as amount of feedback, deploy-ability and so on. 
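The end-node behaviour introduced by Van Jacobson's congestion control is often summarised as additive increase, multiplicative decrease (AIMD). The toy loop below is a hedged sketch of that idea only, not an implementation of TCP; the window sizes and loss rounds are made up.

```python
def aimd_window(rounds, loss_rounds, increase=1.0, decrease=0.5, start=1.0):
    """Illustrative AIMD congestion window evolution (in segments)."""
    window = start
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            window = max(1.0, window * decrease)  # multiplicative decrease on loss
        else:
            window += increase                    # additive increase per round trip
        history.append(window)
    return history

# Losses in rounds 5 and 8 halve the window; other rounds grow it by one segment.
print(aimd_window(rounds=10, loss_rounds={5, 8}))
```

Backing off sharply on loss and probing for bandwidth only gradually is what keeps all senders from re-creating the congestive collapse described above.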


Saturday, August 10, 2013

Shortest Path Routing - a type of routing algorithm

- The usage of re-configurable logic has been increasing day by day, both in scope and in number.
- Re-configurable computing combines hardware speed with the flexibility of software.
- This is the result of combining high-speed computing with re-configurability.
- Increased QoS (quality of service) demands place tough requirements on routing in a network.
- The computational capability required bears a roughly exponential relation to the increase in QoS.
- However, additional computational resources are needed for achieving a network performance level that is acceptable. 
- Re-configurable computing offers a promising solution to the issues of the computations in the routing process.
There are 3 major aspects of the shortest path routing as mentioned below:
Ø Path selection: This involves various algorithms such as Dijkstra's and the Bellman-Ford algorithm, as well as shortest path and minimum-hop routing.
Ø Topology change: Changes in the topology are detected using the beacons.
Ø  Routing protocols: This involves routing protocols such as the link state routing protocols and distance vector protocols.
- Forwarding and routing are two different things. 
- In forwarding, the data packet is directed towards an outgoing link and an individual router is used that also maintains a forwarding table.
- Routing computes the paths that have to be followed by the packets. 
- Routers exchange the path information between themselves and the forwarding table is created by each and every router in the chain.

Routing is important for the following three main reasons:
Ø  End-to-end performance: The user performance is affected by the path quality, throughput, packet loss and delay in propagation.
Ø  Use of the network resources: The traffic has to be balanced between the several links and routers. The traffic is directed towards the links that are lightly loaded for avoiding the congestion.
Ø  Transient disruptions during changes: These disruptions include load balancing problems, maintenance, failures, etc. Packet loss as well as delay has to be limited while the changes take effect.


- Shortest path routing is based up on a path selection model that gives more preference to the destination. 
- This type of routing is insensitive to load, since it uses static link weights.
- Here, either the sum of the link weights or the minimum hop count is considered.
- In a shortest path problem, the link costs are given for a network topology.
- For example, C(x,y) denotes the cost of the node x to node y. 
- If the two nodes x and y are not adjacent to each other the cost is taken to be infinity. 
- The least cost paths linking all the nodes are computed from a node taken as the source. 
- Dijkstra’s shortest path algorithm is one of the algorithms used in the shortest path routing. 
- A central role is played by problems involving finding shortest paths in the design and analysis of networks.
- A majority of the routing problems can be taken as the shortest path problems and solved if each link in the network has appropriate cost assigned to it. 
- This cost can even reflect the bandwidth as well as the bit error ratio if required.
- A number of algorithms are available for computing the shortest path.
- But these algorithms are applicable only if every edge in the network is characterized by a single non-negative additive metric.
- Out of these algorithms, Dijkstra's algorithm is the most famous one.
- This algorithm finds its use in the OSPF (open shortest path first) routing procedure of the internet.
- In this algorithm, the number of operations carried out is proportional to the number of nodes in the network, and the iteration is carried out n-1 times.
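For reference, here is a compact, standard implementation of Dijkstra's algorithm in Python using a priority queue. The example topology and its link costs C(x, y) are invented for illustration; the only requirement, as noted above, is that every edge carries a single non-negative additive metric.

```python
import heapq

def dijkstra(graph, source):
    """Return the least-cost distance from source to every reachable node.

    graph: dict mapping node -> list of (neighbour, cost) with cost >= 0.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, cost in graph.get(node, []):
            candidate = d + cost
            if candidate < dist.get(neighbour, float("inf")):
                dist[neighbour] = candidate
                heapq.heappush(heap, (candidate, neighbour))
    return dist

# Illustrative topology: C(x, y) given as adjacency lists.
network = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(network, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```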


Tuesday, August 6, 2013

What is meant by an optimal route?

- For selecting a path or route, a routing metric has to be applied to a number of routes so as to select the best out of them. 
- This best route is called the optimal route with respect to the routing metric used. 
- This routing metric is computed with the help of the routing algorithms in computer networking.
- It consists of information such as hop count, network delay, load, MTU, path cost, communication cost, reliability and so on.
- Only the best or the optimal routes are stored in the routing tables that reside in the memory of the routers. 
- The other information is stored in either the topological or the link state databases. 
- There are many types of routing protocol and each of them has a routing metric specific to it. 
- Some external heuristic is required to be used by the multi-protocol routers for selecting between the routes determined using various routing protocols. 
For example, the administrative distance is a value that is attributed to all routes in Cisco routers.
- Here, a smaller distance means the route came from a more trusted protocol (a small selection sketch follows this list).
- Host specific routes to a certain device can be set up by the local network admin. 
- This will offer more control over the usage of the network along with better overall security and permission for testing. 
- This advantage comes in handy especially when the routing tables and the connections need to be debugged.
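Here is the small selection sketch referred to above: a hedged illustration of how a multi-protocol router could break ties between routes to the same prefix, preferring the lowest administrative distance first and then the lowest metric. The distances shown are the commonly cited Cisco defaults, and the routes themselves are invented.

```python
# Illustrative (protocol, administrative distance, metric, next hop) candidates
# for the same destination prefix; lower administrative distance wins first,
# then the lower metric breaks ties within a protocol.
candidate_routes = [
    {"protocol": "OSPF",   "admin_distance": 110, "metric": 20, "next_hop": "10.0.0.1"},
    {"protocol": "RIP",    "admin_distance": 120, "metric": 3,  "next_hop": "10.0.0.2"},
    {"protocol": "static", "admin_distance": 1,   "metric": 0,  "next_hop": "10.0.0.3"},
]

best = min(candidate_routes, key=lambda r: (r["admin_distance"], r["metric"]))
print("installed route:", best["protocol"], "via", best["next_hop"])
```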

In this article, we discuss optimal routes.
- With the growing popularity of IP networks as mission critical tools for business, the need for methods and techniques to monitor the network's routing posture is increasing.
- Many routing issues, or even incorrect routing, can lead to undesirable effects on the network such as downtime, route flapping or performance degradation.
- Route analytics refers to the techniques and tools used for monitoring the routing in a network.

The performance of the network is measured using the following 2 factors:
  1. Throughput or the Quantity of service: This includes the amount of data that is transmitted and time it takes to transfer.
  2. Average packet delay or Quality of service: This includes the time taken by a packet to arrive at its destination and the response of the system to the commands entered by the user.
- There is always a constant battle between the fairness and optimality or we can say between quantity of service and quality of service. 
- To optimize throughput, the paths between the nodes have to be kept saturated, while the response time from the source point to the destination point must also be watched.

For finding the optimal routes, we have two types of algorithms namely:
  1. Adaptive Algorithms: These algorithms are meant for networks in which the routes change dynamically. Here the information about the route to be followed is obtained at run time itself, from adjacent routers as well as from all the other routers. The routes change whenever the load changes, whenever the topology changes, and every delta-T seconds.
  2. Non-adaptive algorithms: These algorithms do not base their routing decisions on measurements or estimates of the current traffic and topology, so measurements made for a previous condition are not used for the current condition. The routes thus obtained are called static routes and are computed in advance, at boot time.

Finding optimal routes requires following the principle of optimality, according to which, if a router lies on the optimal path from a source to a destination, then the optimal path from that router to the destination falls along the same route.


Thursday, March 28, 2013

What is the basic principle behind Dynamic synchronous transfer mode (DTM)?


- Dynamic synchronous transfer mode or DTM is one of the most interesting of all the networking technologies. 
- The basic objective behind implementing this technology is to achieve high speed networking along with the transmissions of top quality.
- It also possesses the ability to adapt the bandwidth quickly to varying traffic conditions.
- DTM was designed with the purpose of being used in integrated service networks including both the one to one communication and distribution.
- Furthermore, it can be used in application to application communication. 
- Nowadays, it has also found its use as a carrier for IP protocols (i.e., high layer protocols). 
- DTM is a combination of 2 basic technologies namely packet switching and circuit switching. 
- It is because of this that the DTM has many advantages to offer. 
- It also comes with a number of service access solutions for the following fields:
Ø  City networks
Ø  Enterprises
Ø  Residential as well as other small offices
Ø  Content providers
Ø  Video production networks
Ø  Mobile network operators

Principles of Dynamic synchronous transfer mode (DTM)

 
- This mode has been designed to work up on a unidirectional medium. 
- This medium also supports multiple access i.e., all the connected nodes can share it. 
- It can be built up on various topologies such as:
  1. Ring
  2. Double ring
  3. Point – to – point
  4. Dual bus and so on.
- DTM is based upon TDM, or time division multiplexing.
- Here, a fiber link's transmission capacity is broken down into smaller units of time.
- The total link capacity is divided into frames of a fixed duration of 125 microseconds.
- The frames are then further divided into 64-bit time slots.
- The number of time slots in one frame is determined by the link's bit rate (see the short calculation after this list).
- These time slots consist of many separate control slots and data slots. 
- In some cases more control slots might be required; the data slots can then be turned into control slots, or vice versa.
- The nodes that are attached to the link possess the right to write to both kinds of slots.
- As a consequence of this, a slot occupies the same position within every frame.
- Each node possesses the right to at least one slot which can be used by the node for transmitting control messages to the other nodes. 
- These messages can also be sent when requested by the user as a response to messages sent by the other nodes or for some purpose of network management.
- A small fraction of the whole capacity is constituted by the control slots, while a major part is taken by the data slots that carry payload. 
- The signaling overhead in DTM varies with the number of control slots, though it is usually very low.
- Whenever a communication channel is established, a portion of the available data slots is allocated to the channel by the node. 
- There has been an increasing demand for network transfer capacity because of the globalization of network traffic and the integration of audio, video and data transmission.
- The transmission capacity of optical fibers is increasing by great margins when compared with processing power.
- DTM still holds the promise of providing full control over the network resources.
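As the short calculation promised above, the number of 64-bit slots in one 125-microsecond DTM frame is simply the number of bits the link can carry in that time divided by 64. The link rates below are illustrative only.

```python
FRAME_SECONDS = 125e-6   # fixed DTM frame duration
SLOT_BITS = 64           # fixed slot size

def slots_per_frame(link_bit_rate):
    """Number of 64-bit slots that fit in one 125-us frame at the given rate."""
    return int(link_bit_rate * FRAME_SECONDS // SLOT_BITS)

# Illustrative link rates only.
for rate in (155.52e6, 622.08e6, 2.4883e9):
    print(f"{rate/1e6:8.2f} Mbit/s -> {slots_per_frame(rate)} slots per frame")
```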

