


Friday, September 27, 2013

What are the parameters of QoS - Quality of Service?

With the arrival of new technologies, applications and services in the field of networking, competition is rising rapidly. Each of these technologies, services and applications is developed with the aim of delivering QoS (quality of service) that at least matches, and preferably exceeds, that of the legacy equipment. Network operators and service providers trade on trusted brands, and maintaining those brands is of critical importance to their business. The biggest challenge is to put the technology to work in such a way that all customer expectations for availability, reliability and quality are met, while at the same time giving network operators the flexibility to adopt new techniques quickly.

What is Quality of Service?

- Quality of service is defined by certain parameters, which play a key role in the acceptance of new technologies.
- ETSI is the organization working on several QoS specifications.
- The organization has also been actively participating in organizing inter-operability events regarding speech quality.
- The importance of the QoS parameters has grown with the increasing inter-connectivity of networks and the interaction between many service providers and network operators in delivering communication services.
- Quality of service gives you the ability to specify parameters on multiple queues in order to raise the performance and throughput of wireless traffic such as VoIP (voice over IP) and streaming media, including audio and video of different types.
- This is also done for ordinary IP traffic over the access points.
- Configuring quality of service on these access points involves setting many parameters on the queues that already exist for various types of wireless traffic.
- The minimum as well as the maximum wait times for transmission are also specified.
- This is done through the contention windows.
- The flow of traffic from the access point to the client station is affected by the AP EDCA (enhanced distributed channel access) parameters.
- The traffic flow from the client station to the access point is controlled by the station EDCA parameters.

Below we mention some parameters:
Ø  QoS preset: The options offered are WFA defaults, optimized for voice, and custom.
Ø  Queue: For the different types of data transmission between the AP and the client station, different queues are defined:
- Voice (data 0): Queue with minimum delay and the highest priority. Time-sensitive data such as streaming media and VoIP is automatically put into this queue.
- Video (data 1): Queue with minimum delay and high priority. Time-sensitive video data is automatically put into this queue.
- Best effort (data 2): Queue with medium delay, medium throughput and medium priority. This queue holds most traditional IP data.
- Background (data 3): Queue with high throughput and the lowest priority. Bulky data that requires high throughput but is not time sensitive, such as FTP transfers, is queued up here.

Ø AIFS (arbitration inter-frame space): This specifies the time that data frames must wait before transmission. The time is measured in slots, and valid values lie in the range 1 to 255.
Ø Minimum contention window (cwMin): This QoS parameter is an input to the algorithm that determines the initial random back-off wait time for re-transmission (see the sketch after this list).
Ø Maximum contention window (cwMax)
Ø Maximum burst
Ø Wi-Fi Multimedia (WMM)
Ø TXOP limit
Ø Bandwidth
Ø Variation in delay
Ø Synchronization
Ø Cell error ratio
Ø Cell loss ratio
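
To make the relationship between these parameters more concrete, here is a minimal C sketch (not tied to any vendor's configuration interface) of how AIFS, cwMin and cwMax combine into a waiting time before a frame is sent. The per-queue numbers are illustrative assumptions modelled on common WMM-style defaults, not values read from a real access point.

/*
 * Illustrative sketch only: how the EDCA parameters AIFS, cwMin and
 * cwMax translate into a waiting time (in slots) before a frame is
 * transmitted.  The per-queue values are assumed defaults.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct edca_params {
    const char *queue;   /* access category (voice, video, ...)      */
    int aifs;            /* arbitration inter-frame space, in slots   */
    int cw_min;          /* minimum contention window                 */
    int cw_max;          /* maximum contention window                 */
};

/* Pick a random backoff and add the fixed AIFS, all in slot units. */
static int wait_slots(const struct edca_params *p, int retry)
{
    int cw = p->cw_min;
    /* The window roughly doubles on each retransmission, capped at cwMax. */
    for (int i = 0; i < retry && cw < p->cw_max; i++)
        cw = cw * 2 + 1;
    if (cw > p->cw_max)
        cw = p->cw_max;
    return p->aifs + rand() % (cw + 1);
}

int main(void)
{
    struct edca_params queues[] = {
        { "voice (data 0)",       2,  3,    7    },
        { "video (data 1)",       2,  7,    15   },
        { "best effort (data 2)", 3,  15,   1023 },
        { "background (data 3)",  7,  15,   1023 },
    };

    srand((unsigned)time(NULL));
    for (size_t i = 0; i < sizeof queues / sizeof queues[0]; i++)
        printf("%-22s first attempt waits %d slots\n",
               queues[i].queue, wait_slots(&queues[i], 0));
    return 0;
}

Note how the higher-priority queues get a shorter AIFS and a smaller contention window, which is exactly what lets voice and video frames reach the air sooner than best-effort or background traffic.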



Friday, July 12, 2013

Sliding Window Protocols? – Part 1

- There are many types of data transmission protocols, one type being the packet-based data transmission protocols.
- These protocols make use of a technique called the sliding window protocol.
- Sliding window protocols are a great help wherever reliable, in-order delivery of data packets is required.
- For example, the data link layer of the OSI model and TCP (transmission control protocol) at the transport layer demand such reliability and therefore use a sliding window protocol.
- According to the concept of the sliding window protocols, a unique, consecutive number is assigned to each and every portion of the transmission, i.e., to each packet.
- These numbers are used by the receiver to place the packets it receives in their correct order.
- Also, with the help of these numbers, missing packets can be identified and duplicate packets can be removed.
- One problem with the description so far is that it places no limit on the size of the sequence numbers that are required.

- By placing a limit on the number of packets that may be in transmission or awaiting reception at any instant, an unlimited number of data packets can still be communicated over time.
- By this, we mean using sequence numbers of a fixed size.
- The term window refers to the transmission side.
- It represents the logical boundary, or limit, on the number of packets that may still be awaiting acknowledgement at any time.
- The receiver informs the transmitter, in each ACK (acknowledgement) packet, of the maximum size, or window boundary, of its current receive buffer.
- A 16-bit field in the TCP header is used to report the window size of the receive buffer.
- The maximum window boundary we can therefore have is 2^16 bytes, i.e., 64 KB.
- When operating in slow-start mode, the transmitter begins with a low packet count.
- The number of packets in flight then increases gradually as ACK packets are received.
- Whenever an ACK packet is received, the window logically slides forward by one packet, allowing one new packet to be transmitted.
- On reaching the window threshold, the transmitter sends exactly one packet for every ACK packet it receives.
- Suppose the window limit is 10 packets and the transmitter is in slow-start mode.
- First one packet is transmitted; once its ACK arrives, two packets are sent, and so on.
- This growth continues until the limit of 10 outstanding packets is reached.
- After that, the transmission is restricted, i.e., for every ACK packet received only one new data packet is transmitted.
- When viewed in a simulation, it looks as if the window shifts by a distance of one packet whenever an ACK packet is received.
- The sliding window protocol also does a great deal to avoid traffic congestion.
- In this way the application layer does not have to worry about transmitting the next set of data packets.
- It can simply keep handing data over, since TCP implements the sliding window packet buffers on both sides, the receiver's and the sender's.
- However, network traffic dynamically influences the window size to a great extent.
- To achieve the highest possible throughput, care should be taken that the sliding window protocol does not force the transmitter to stop sending before one RTT (round-trip time) has elapsed.
- The bandwidth-delay product of the links in the communication path should be less than the amount of data that can be sent before an ACK packet is required.
- If this condition is not met, the protocol limits the links' effective bandwidth. (A sketch of the sender-side window behaviour follows this list.)
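
The behaviour described above can be illustrated with a short, self-contained C sketch of the sender side. It assumes an ideal channel in which every packet is acknowledged in order and nothing is lost, and it uses a deliberately small window of 4 packets; a real implementation also has to handle timeouts, retransmissions and a dynamically advertised window. As a sizing note, on a 10 Mbit/s link with a 50 ms round-trip time the bandwidth-delay product is 1,250,000 bytes/s x 0.05 s = 62,500 bytes, so the 64 KB maximum TCP window only just keeps such a link busy.

/*
 * Minimal sketch of sender-side sliding-window behaviour, assuming an
 * ideal channel (every packet is acknowledged in order, nothing is
 * lost).  It illustrates the "window slides by one packet per ACK"
 * behaviour described above; it is not a complete protocol.
 */
#include <stdio.h>

#define WINDOW_SIZE   4      /* max unacknowledged packets in flight  */
#define TOTAL_PACKETS 10     /* packets the application wants to send */

int main(void)
{
    int next_seq = 0;        /* next sequence number to transmit      */
    int base     = 0;        /* oldest unacknowledged sequence number */

    while (base < TOTAL_PACKETS) {
        /* Fill the window: transmit while fewer than WINDOW_SIZE
         * packets are outstanding. */
        while (next_seq < TOTAL_PACKETS &&
               next_seq - base < WINDOW_SIZE) {
            printf("send  packet %d (in flight: %d)\n",
                   next_seq, next_seq - base + 1);
            next_seq++;
        }

        /* Receive the ACK for the oldest outstanding packet; the
         * window slides forward by one, allowing one new packet. */
        printf("ack   packet %d (window slides forward by one)\n", base);
        base++;
    }
    return 0;
}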


Friday, December 28, 2012

What is the difference between Purify and traditional debuggers?


IBM Rational Purify gives developers the power to deliver a product whose quality, reliability and performance match the expectations of its users. PurifyPlus combines the following and provides three benefits:
  1. The bug-finding capabilities of Rational Purify,
  2. The performance-tuning features of Rational Quantify, and
  3. The testing rigor of Rational PureCoverage.
Together these three things make Purify a different kind of debugger from the traditional debuggers we have. The above-mentioned benefits show up as faster development times, fewer errors and better code.

About IBM Rational Purify

- Purify is essentially a memory debugger, used in particular for detecting memory access errors in programs written in languages such as C and C++.
- The software was originally developed by Reed Hastings at Pure Software.
- Rational Purify offers functionality similar to that of Valgrind, BoundsChecker and Insure++.
- Like a debugger, Rational Purify supports dynamic verification, a process by which a program discovers errors as they occur during execution.
- Rational Purify also supports static verification, which, in contrast to dynamic verification, examines the program without running it.
- That process works by digging out inconsistencies present in the program logic.
- Whenever a program is linked with Purify, verification code is automatically inserted into the executable by parsing the object code and adding the instrumentation to it.
- So whenever an error occurs, the tool prints out the location of the error, its memory address and other relevant information.
- Similarly, whenever a memory leak is detected, Purify generates a leak report at program exit.

Difference between Rational Purify and Traditional Debuggers

- The major difference between Rational Purify and traditional debuggers is its ability to detect non-fatal errors.
- Traditional debuggers only point to the sources of fatal errors, such as de-referencing a null pointer, which can crash a program; they are not effective at finding non-fatal memory errors.
- However, there are certain things for which traditional debuggers are more effective than Rational Purify, for example:
- Debuggers can be used to step through the code line by line and to examine the program's memory at any particular instant.
- It would not be wrong to say that the two tools are complementary and, together, work well for a skilled developer.
- Purify also comes with functionality that can be used for more general purposes, whereas debuggers are restricted to working on the code itself.
- One thing to note about Purify is that it is most effective for programming languages in which memory management is left to the program developer.
- Memory leaks therefore occur less often in programs written in languages such as Java, Visual Basic and Lisp, which manage memory automatically.
- It is not that these languages never have memory leaks; they do occur, caused by objects being kept referenced unnecessarily, which prevents the memory from being reclaimed.
- IBM provides a solution for this kind of error as well, in the form of another of its products, Rational Application Developer.
- Errors such as the following are covered by Purify (a small example of each appears after this list):
  1. Array-bounds violations
  2. Access to un-allocated or freed memory
  3. Freeing memory that was never allocated or has already been freed
  4. Memory leaks, and so on.
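
The short, deliberately buggy C fragment below shows one instance of each error class from the list above; it is illustrative only, and it is exactly the kind of code a run-time memory debugger such as Purify (or Valgrind) flags, while a traditional debugger would only become useful once the program had already misbehaved.

/*
 * Deliberately buggy C fragment showing the error classes listed above.
 */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(8);
    if (buf == NULL)
        return 1;

    strcpy(buf, "too long for 8");   /* array-bounds write past the block    */

    free(buf);
    buf[0] = 'x';                    /* access to freed (un-allocated) memory */
    free(buf);                       /* freeing memory that is already freed  */

    char *leak = malloc(32);         /* never freed: reported as a leak       */
    (void)leak;                      /* at program exit                       */
    return 0;
}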


Tuesday, July 10, 2012

What Tools are used for code coverage analysis?


Code coverage analysis is an essential part of a complete and efficient software testing process. 
The analysis consists of the following three basic activities:
  1. Finding the areas of the software system or application that have not been exercised by the set of tests performed so far.
  2. Creating additional test cases so that the code coverage can be increased.
  3. Determining a quantitative measure of code coverage, which provides an indirect measure of the quality of the software system or application.
Apart from this, there is one more optional aspect of code coverage analysis: it helps identify redundant test cases that add to the test suite but do not increase the coverage measure.
In this article we discuss the tools that make the whole process of code coverage analysis much easier.

Tools Used for Code Coverage Analysis


- Code coverage analysis is quite an effort- and time-consuming process and is therefore nowadays automated using tools known as code coverage analyzers. 
- However, a code coverage analyzer cannot always be used, for example when the tests have to be run against the release candidate build.
- For different languages, many different tools are available for code coverage analysis.

  1. For C++ and C programming languages:
a)  tcov
b)  BullseyeCoverage
c)  gcov
d)  LDRA Testbed
e)  NuMega TrueCoverage
f)  Tessy
g)  Trucov
h)  Froglogic's Squish Coco
i)  Parasoft C++test
j)  Testwell CTC++
k)  McCabe IQ
l)  Insure++
m)  Cantata

  2. Tools for C#:
a)  McCabe IQ
b)  JetBrains dotCover
c)  NCover
d)  Visual Studio 2010
e)  Parasoft dotTEST
f)  TestDriven.NET
g)  Kalistick
h)  DevPartner

  3. Tools for Java:
a)  McCabe IQ
b)  Clover
c)  EMMA
d)  Kalistick
e)  JaCoCo
f)  JMockit Coverage
g)  Code coverage
h)  LDRA Testbed
i)  Jtest
j)  DevPartner
k)  Cobertura

  4. Tools for JavaScript:
a)  McCabe IQ
b)  JSCoverage
c)  Code coverage
d)  ScriptCover
e)  Coveraje

  5. Tools for Perl:
a)  McCabe IQ
b)  Devel::Cover

  6. Tools for Haskell:
a)  HPC (Haskell Program Coverage) toolkit

  7. Tools for Python:
a)  McCabe IQ
b)  Figleaf
c)  Pester
d)  Coverage.py

  8. Tools for PHP:
a)  McCabe IQ
b)  PHPUnit

  9. Tools for Ruby:
a)  Rcov
b)  McCabe IQ
c)  SimpleCov
d)  CoverMe

  10. Tools for Ada:
a)  GNATcoverage
b)  McCabe IQ
c)  RapiCover

Out of all the above-mentioned tools for C and C++, BullseyeCoverage has proven to be the best code coverage analyzer in terms of reliability, usability, platform support, etc. 
This coverage analyzer differs from the other analyzers in the following ways:
  1. Better coverage measurement
  2. Wide platform support
  3. Rigorously tested
  4. Efficient technical support
  5. Quite easy to use.
- Using this tool you can determine how much of the software system's or application's code was tested, and this information can later be used to focus your testing efforts on the areas that require improvement.
- With BullseyeCoverage, more reliable code can be created and time can be saved. 
- The function coverage provided by BullseyeCoverage gives you very high precision.

You can include or exclude the parts of the code of your choice. And what is more, you can merge the results obtained from distributed testing, and run-time code from custom environments can also be included. 
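
As a concrete illustration of the basic instrument-run-report workflow, here is a tiny C example using gcov, one of the open-source C/C++ tools listed above; the file name and the commands shown in the comments are illustrative, and commercial analyzers such as BullseyeCoverage follow a broadly similar pattern.

/*
 * Small C example for measuring coverage with gcov.
 *
 *   gcc --coverage grade.c -o grade    # instrument the build
 *   ./grade                            # run the program / tests
 *   gcov grade.c                       # produces grade.c.gcov
 *
 * In the .gcov report, lines that were never executed are marked with
 * '#####' -- exactly the "unexercised areas" the analysis looks for.
 */
#include <stdio.h>

static const char *grade(int score)
{
    if (score >= 90)
        return "A";
    if (score >= 75)
        return "B";
    return "C";          /* never reached by the run below */
}

int main(void)
{
    /* Only two of the three branches are exercised, so the coverage
     * report points at the missing path as a gap needing a new test. */
    printf("%s\n", grade(95));
    printf("%s\n", grade(80));
    return 0;
}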


Thursday, June 21, 2012

Explain Automation of Smoke Testing?


Smoke testing, though a very useful and quick software testing methodology, can take up a lot of time and effort if it is not carried out in an automated way. 

Need for Automation


There is one more problem with carrying out smoke testing manually: if you later come to know that the testing was being carried out in the wrong direction, it is a real headache to perform the whole of the testing over again, and again if required. 

Why is automation of smoke testing useful?


- Automation of smoke testing proves very useful in saving time and effort.
- Smoke testing is considered one of the most effective ways of validating, at a high level, the changes made to the software system or application code before detailed testing is undertaken on the new build. 
- Carrying out smoke tests helps a lot in stabilizing the builds and verifying that there are no major problems in the software system or application.
- The reliability factor determines the success of the smoke test, i.e., the more you can rely on the smoke test to find major problems, the more you can cut down costs and raise the quality of the software product or artifact.
- Smoke testing provides a reliable, scalable solution that can be quickly and easily automated. 
- One of the best things about automating the smoke test is that it need not require any programming.
- Smoke tests can be written and automated within a few minutes by testers of any skill level.

What functions are provided by the tools used for automating smoke tests?


There are many tools available for automating smoke tests. With these tools you can perform the following functions:
  1. Smoke tests can be written and automated within a few minutes, without requiring any programming.
  2. All builds can be validated before any further changes are incorporated into them.
  3. These tools are quite helpful when it comes to stabilizing the whole build process and verifying a build's readiness for further full-scale QA testing.
  4. Quick tests can be conducted to ensure that the basic functionality is still intact and working properly.
  5. The overall quality of the product is improved, since problems are detected early.
  6. Costs are reduced while ensuring that the team has sufficient time for developing the product.

How is a smoke test automated?


- Smoke testing usually precedes detailed quality assurance testing.
- The sooner errors are discovered, the less effort and money it takes to fix them.
- In an automated smoke test, a continuous build process is deployed that automatically performs a smoke test each time a build finishes. 
- With such a methodology, the developers easily come to know whether the most recently developed build has caused a problem. 
- Depending on the configuration of the build tool and the software system or application, the process of implementing automated smoke tests will vary, even though the same basic steps are followed. 
- After the successful completion of the build, some set-up steps are performed before the application is tested. The steps may include:
1. Copying files to the appropriate places.
2. Setting up the data base tables.
3. Starting a server.
4. Installing licenses.
The next step is to obtain all the QA files required for the smoke test and to run the smoke test. 
- The report of the smoke test is saved, and the final step involves clean-up, including the following steps (a skeleton of such a runner is sketched after this list):
  1. Stopping the server
  2. Emptying database tables, and
  3. Deleting files.
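
As a rough illustration of how such an automated smoke test could be wired into a continuous build, here is a minimal C sketch of a test runner. The script paths are hypothetical placeholders for the set-up, test and clean-up steps listed above; a real project would substitute its own commands, and many teams would use a build tool or shell script rather than a C program, but the structure is the same.

/*
 * Skeleton of an automated smoke-test step that a continuous build
 * could invoke after every successful compile.  All shell commands
 * below are placeholders (assumptions), not real scripts.
 */
#include <stdio.h>
#include <stdlib.h>

/* Run one step and report whether it succeeded. */
static int run_step(const char *description, const char *command)
{
    printf("[smoke] %s\n", description);
    int status = system(command);
    if (status != 0)
        fprintf(stderr, "[smoke] FAILED: %s (exit status %d)\n",
                description, status);
    return status == 0;
}

int main(void)
{
    int ok =
        run_step("copy files into place",  "./scripts/deploy_files.sh")   &&
        run_step("set up database tables", "./scripts/setup_db.sh")       &&
        run_step("start the server",       "./scripts/start_server.sh")   &&
        run_step("run the smoke tests",    "./scripts/run_smoke_tests.sh");

    /* Clean-up runs whether or not the tests passed. */
    run_step("stop the server",            "./scripts/stop_server.sh");
    run_step("empty database tables",      "./scripts/teardown_db.sh");
    run_step("delete temporary files",     "./scripts/cleanup_files.sh");

    return ok ? EXIT_SUCCESS : EXIT_FAILURE;
}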


What ingredients are necessary to achieve component composition?


Component-based software development, or CBSD, is nowadays becoming a very common practice when it comes to re-using existing components that have already been validated. This practice has gradually shortened the development cycle and has enhanced the quality of software products and artifacts. This approach to software development focuses on building new software systems and applications by selecting and assembling pre-existing software components. 

Furthermore, the CBSD methodology helps to:
  1. Accelerate the productivity of software development
  2. Reduce the overall development cost
  3. Reduce maintenance efforts
  4. Enhance flexibility
  5. Enhance maintainability
  6. Assemble systems rapidly
  7. Reduce the time to market
Software does not usually suffer physical wear and tear, but it does need to be changed or modified in line with changes in business needs and complexity. As the system keeps acquiring more and more complexity, more and more errors are introduced. 
This whole concept of component-based software development is built on components, and the success of the approach is highly affected by how the software components being used are composed.

What is a Component?


- A component can be defined as a re-usable unit of deployment that is accessed through a well-defined interface. 
- A component can be thought of as an independent entity that possesses its own complete functionality, can be distributed separately, and presents no problem in being upgraded from time to time. 
- Certain standards have been defined according to which components are developed and re-used. 
- Smaller pieces of software are aggregated at a higher level, and this aggregation is what forms a component.

Types of ingredients in Component Based Software Development



  1. An ideal component hides its implementation details from the environment and is re-used only through its interfaces. This is essential, as re-usability of the components is very important for component-based software development.
  2. The second most important ingredient is the service or functionality that is to be provided by the component.
  3. Quality aspects such as predictability, component reliability, usability and so on.
  4. The composition of components requires meta-information about the components' interfaces and properties, so as to support the tools during the process of component composition. This meta-information is obtained from implementation/interface repositories, type libraries, introspection, dedicated info classes, etc.

Different Means of Component Composition


- Although implementation inheritance works for most object-oriented frameworks, it does not prove useful for component composition.
- The composition of the components takes place at a binary level. 
- There are 4 means of component composition:
  1. Scripting or glue languages
  2. Component frame works
  3. System languages
  4. Visual programming
All the above-mentioned means have their own advantages and disadvantages, but they are all considered useful when used in combination. 

- Mostly, scripting or glue languages like Tcl and Visual Basic are used for component composition, rather than system languages like Java, Pascal or C++. 
- The scripting languages are found convenient for this purpose because they are intended for plugging components together and operate at a higher level of abstraction than the system languages. 
- Component composition also requires an equivalent of an object-oriented framework, called a component framework, in which the glue code between the classes is predefined for a specific application domain. (A minimal sketch of interface-based composition follows.)
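
As a minimal illustration of interface-based composition, the C sketch below defines an invented logger component behind an interface of function pointers and composes it with a second component through a small piece of glue code; the names and structure are assumptions for the example, not part of any particular component framework.

/*
 * Sketch of binary-level component composition in C: each component is
 * used only through an interface (a table of function pointers), and a
 * thin piece of "glue" wires concrete components together without
 * knowing their implementation details.
 */
#include <stdio.h>

/* The published interface: the only thing a client may depend on. */
struct logger {
    void (*log)(const char *message);
};

/* One concrete component implementing the interface. */
static void console_log(const char *message)
{
    printf("LOG: %s\n", message);
}
static const struct logger console_logger = { console_log };

/* A second component, composed with any logger via its interface. */
struct greeter {
    const struct logger *log;   /* dependency supplied from outside */
};

static void greet(const struct greeter *g, const char *name)
{
    g->log->log("greeting requested");
    printf("Hello, %s\n", name);
}

int main(void)
{
    /* Glue code: select and assemble the components. */
    struct greeter g = { &console_logger };
    greet(&g, "component user");
    return 0;
}

Swapping in a different logger, for example one that writes to a file, only requires changing the glue code in main; neither component needs to be modified, which is the essential property that component composition relies on.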

