
Friday, September 27, 2013

What are the parameters of QoS - Quality of Service?

With the arrival of new technologies, applications, and services in the field of networking, competition is rising rapidly. Each of these technologies, services, and applications is developed with the aim of delivering a QoS (quality of service) that is at least on par with legacy equipment, if not better. Network operators and service providers trade on trusted brands, and maintaining those brands is of critical importance to their business. The biggest challenge here is to put the technology to work in such a way that all customer expectations for availability, reliability, and quality are met, while at the same time giving network operators the flexibility to adapt quickly to new techniques. 

What is Quality of Service?

- Quality of service is defined by certain parameters, which play a key role in the acceptance of new technologies. 
- ETSI is the organization working on several specifications of QoS.
- The organization has been actively participating in organizing inter-operability events regarding speech quality.
- The importance of the QoS parameters has been growing with the increasing inter-connectivity of networks and the interaction between many service providers and network operators in delivering communication services.
- It is quality of service that grants you the ability to specify parameters on multiple queues, in order to boost the performance and throughput of wireless traffic such as VoIP (voice over IP) and streaming media, including audio and video of different types. 
- The same can be done for ordinary IP traffic over the access points.
- Configuring quality of service on these access points involves setting many parameters on the queues that already exist for various types of wireless traffic. 
- The minimum as well as the maximum wait times for transmission are also specified. 
- This is done through the contention windows. 
- The flow of traffic from the access point to the client station is governed by the AP EDCA (enhanced distributed channel access) parameters. 
- The traffic flow from the client station to the access point is controlled by the station EDCA parameters.

Below we mention some parameters:
Ø  QoS preset: The options listed are WFA defaults, optimized for voice, and custom.
Ø  Queue: For the different types of data transmission between the AP and the client station, different queues are defined:
- Voice (data 0): Queue with minimum delay and highest priority. Time-sensitive data such as streaming media and VoIP is automatically put in this queue.
- Video (data 1): Queue with minimum delay and high priority. Time-sensitive video data is put into this queue automatically.
- Best effort (data 2): Queue with medium delay and throughput and medium priority. This queue holds all traditional IP data. 
- Background (data 3): Queue with high throughput and lowest priority. Bulky data that requires high throughput and is not time sensitive, such as FTP data, is queued up here.

Ø AIFS (arbitration inter-frame space): This sets the waiting time for data frames. The time is measured in slots, and valid values lie in the range 1 to 255.
Ø Minimum contention window (cwMin): This QoS parameter is supplied as input to the algorithm that determines the initial random back-off wait time for re-transmission.
Ø cwMax (maximum contention window)
Ø Maximum burst
Ø Wi-Fi multimedia (WMM)
Ø TXOP limit
Ø Bandwidth
Ø Variation in delay (jitter)
Ø Synchronization
Ø Cell error ratio
Ø Cell loss ratio
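
To make the queue parameters above concrete, here is a minimal sketch in Python of how an AP's per-queue EDCA settings and the cwMin/cwMax back-off rule fit together. The field names and values are illustrative only (they loosely follow common WMM defaults); real access points expose these settings through their own management interfaces.

```python
import random

# Hypothetical per-queue EDCA settings; the numbers loosely follow common
# WMM defaults and are for illustration only (slot counts and milliseconds).
EDCA_QUEUES = {
    "voice":       {"aifs": 2, "cw_min": 3,  "cw_max": 7,    "max_burst_ms": 1.5},
    "video":       {"aifs": 2, "cw_min": 7,  "cw_max": 15,   "max_burst_ms": 3.0},
    "best_effort": {"aifs": 3, "cw_min": 15, "cw_max": 1023, "max_burst_ms": 0.0},
    "background":  {"aifs": 7, "cw_min": 15, "cw_max": 1023, "max_burst_ms": 0.0},
}

def backoff_slots(queue: str, retries: int) -> int:
    """Random back-off for a retransmission: the contention window starts
    at cw_min and doubles on each retry, capped at cw_max (cf. cwMin and
    cwMax above); the AIFS slots are waited in any case."""
    q = EDCA_QUEUES[queue]
    cw = min((q["cw_min"] + 1) * 2 ** retries - 1, q["cw_max"])
    return q["aifs"] + random.randint(0, cw)

print(backoff_slots("voice", retries=0), backoff_slots("background", retries=2))
```

Note how the high-priority voice queue waits fewer slots and draws its back-off from a much smaller window than the background queue, which is exactly what gives it lower delay.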



Monday, September 23, 2013

What is meant by Quality of Service provided by the network layer?

- QoS, or quality of service, is a parameter that refers to a number of aspects of computer networks, telephony, etc. 
- This parameter allows traffic to be transported according to specific requirements. 
- Technology has advanced so much that computer networks can now double as telephone networks for carrying audio conversations. 
- The technology even supports applications that have strict service demands. 
- The ITU defines quality of service in telephony. 
- It covers the requirements concerning all aspects of a connection, such as the following:
Ø  Service response time
Ø  Loss
Ø  Signal-to-noise ratio
Ø  Cross-talk
Ø  Echo
Ø  Interrupts
Ø  Frequency response
Ø  Loudness levels etc.  

- The GoS (grade of service) requirement is a subset of QoS and consists of those aspects of a connection that relate to its coverage as well as its capacity, 
- for example, outage probability, maximum blocking probability, and so on. 
- In the case of packet-switched telecommunication networks and computer networking, the resource reservation mechanisms come under the concept of traffic engineering. 
- QoS can be defined as the ability by virtue of which different applications, data flows, and users can be given different priorities. 
- QoS guarantees are important when the capacity of the network is insufficient, 
- for example, for voice over IP, IPTV, and so on. 
- All these services are sensitive to delay, require fixed bit rates, and must share limited capacity.
- A protocol or network supporting QoS may agree on a traffic contract with the application software and reserve capacity in the network nodes. 
- Best-effort services, however, do not support quality of service. 
- An alternative to complex QoS control mechanisms is to provide high-quality communication over a best-effort network. 
- This works when capacity is over-provisioned so generously that it is sufficient for the expected peak traffic load. 
- With the network congestion problems thus eliminated, QoS mechanisms are not required. 
- QoS may sometimes be taken to mean the level of the service's quality, i.e., the GoS, 
- for example, low bit error probability, low latency, high bit rate, and so on. 
- QoS can also be defined as a metric that reflects the experienced quality of the service.
- In this sense, it is the cumulative effect on the acceptability of the service as experienced by the user. 
Certain types of network traffic require a defined QoS, such as the following:
Ø  Streaming media such as IPTV (internet protocol television), audio over Ethernet, audio over IP, etc.
Ø  Voice over IP
Ø  Video conferencing
Ø  Telepresence
Ø  Storage applications such as iSCSI, FCoE, etc.
Ø  Safety-critical applications
Ø  Circuit emulation service
Ø  Network operations support systems
Ø  Industrial control systems
Ø  Online games

- All the above-mentioned services are examples of inelastic services; a certain level of latency and bandwidth is required for them to operate properly. 
- On the other hand, the opposite kind of services, the elastic services, can work with any level of bandwidth and latency. 
- An example of this type of service is a bulk file transfer application based on TCP.
- A number of factors affect the quality of service in packet-switched networks. 
- These factors can be broadly classified into two categories, namely technical factors and human factors. 
The following are counted among the technical factors:
Ø  reliability
Ø  scalability
Ø  effectiveness
Ø  maintainability
Ø  grade of service and so on.

- Voice transmission in circuit-switched networks such as ATM (asynchronous transfer mode) or GSM has QoS in the core protocol. 
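
At the packet level, the usual way an application asks the network layer for a given treatment is by marking the DSCP/ToS byte of its packets; whether routers honor the marking depends on network policy. A minimal sketch in Python (works on platforms that expose IP_TOS, such as Linux; the address and port are placeholders):

```python
import socket

# DSCP 46 ("expedited forwarding") is the marking commonly used for voice.
# The ToS byte carries the 6-bit DSCP in its upper bits, hence the shift.
DSCP_EF = 46
tos = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)  # mark outgoing packets
sock.sendto(b"voice payload", ("192.0.2.10", 5060))     # placeholder peer
```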


Wednesday, August 28, 2013

What are different policies to prevent congestion at different layers?

- Many times it happens that the demand for a resource exceeds what the network can offer, i.e., its capacity. 
- Too much queuing then occurs in the network, leading to a great loss of packets. 
- When the network is in a state of congestive collapse, its throughput drops to zero while the path delay increases by a great margin. 
- The network can recover from this state by following a congestion control scheme.
- A congestion avoidance scheme enables the network to operate in a regime where the throughput is high and the delay is low. 
- In other words, these schemes prevent a computer network from falling prey to the network congestion problem. 
- The recovery mechanism is implemented through congestion control, and the prevention mechanism through congestion avoidance. 
- Network and user policies are modeled for the purpose of congestion avoidance. 
- These act like a feedback control system. 

The following are defined as the key components of a general congestion avoidance scheme:
Ø  Congestion detection
Ø  Congestion feedback
Ø  Feedback selector
Ø  Signal filter
Ø  Decision function
Ø  Increase and decrease algorithms

- The problem of congestion control gets more complex when the network is using a connection-less protocol. 
- Avoiding congestion rather than simply controlling it is the main focus. 
- A congestion avoidance scheme is designed by comparing a number of alternative schemes. 
- During the comparison, the algorithm with the right parameter values is selected. 
For doing so, a few goals have been set, each with an associated test for verifying whether the scheme meets it:
Ø  Efficiency: If the network is operating at the "knee" point, where throughput is high but delay has not yet risen sharply, it is said to be working efficiently.
Ø  Responsiveness: The configuration and the traffic of the network vary continuously, so the point of optimal operation also moves continuously; the scheme must track it.
Ø Minimal oscillation: Only those schemes are preferred that have a small oscillation amplitude.
Ø Convergence: The scheme should bring the network to a stable operating point whenever the workload and the network configuration are stable. Schemes that satisfy this goal are called convergent; divergent schemes are rejected.
Ø Fairness: This goal aims at providing a fair share of resources to each independent user.
Ø  Robustness: This goal concerns the capability of the scheme to work in any random environment. Schemes that are capable of working only for deterministic service times are therefore rejected.
Ø  Simplicity: Schemes are accepted in their simplest version.
Ø Low parameter sensitivity: The sensitivity of a scheme is measured with respect to its various parameter values. A scheme found to be overly sensitive to a particular parameter is rejected.
Ø Information entropy: This goal is about how the feedback information is used. The aim is to get maximum information from the minimum possible feedback.
Ø Dimensionless parameters: A parameter with dimensions such as mass, time, or length is a function of the network configuration or speed. A parameter that has no dimensions has wider applicability.
Ø Configuration independence: The scheme is accepted only if it has been tested for various different configurations.

A congestion avoidance scheme has two main components:
Ø  Network policies: These consist of the following algorithms: congestion detection, feedback filter, and feedback selector.
Ø  User policies: These consist of the following algorithms: signal filter, decision function, and the increase/decrease algorithm (sketched below).
These algorithms also decide whether the network feedback is conveyed via a packet header field or as source quench messages.
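
The classic instance of the increase/decrease algorithm mentioned above is AIMD (additive increase, multiplicative decrease), the policy TCP congestion control is built on. A minimal sketch in Python, with illustrative constants:

```python
def aimd_update(window: float, congested: bool,
                increase: float = 1.0, decrease: float = 0.5) -> float:
    """One round of additive-increase/multiplicative-decrease: grow the
    load window by a fixed step while feedback says the network is below
    the knee, and cut it multiplicatively when congestion is signalled."""
    if congested:
        return max(1.0, window * decrease)
    return window + increase

# Toy trace: the network signals congestion every fifth round.
window = 1.0
for rnd in range(1, 11):
    window = aimd_update(window, congested=(rnd % 5 == 0))
    print(rnd, window)
```

The sawtooth this produces is the "minimal oscillation" trade-off in action: the additive probe keeps pushing toward the knee, and the multiplicative cut backs off quickly once congestion feedback arrives.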




Friday, August 9, 2013

What are the applications of the flooding algorithm?

- The flooding algorithm and its many variants are used as message distribution algorithms.
- The algorithm distributes messages to all the hosts in the entire graph. 
- The algorithm has been named so because it acts like a flood. 
- In this simple yet useful distribution or routing algorithm, every packet that a node receives is transmitted on every other outgoing link. 
The algorithm is available in many variants, but the following two things are common to all of them:
1. Each node acts as both receiver and transmitter.
2. Each node is responsible for forwarding a received message to all its neighboring nodes except the one from which the message came. 
- Thus, the messages are eventually delivered to the hosts spread across the network. 
- The flooding algorithm would be more useful if it were a little more sophisticated. 
- It would then be possible to avoid the duplicate messages, and the infinite loops that occur because of them, as the sketch below shows. 
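
A minimal sketch of the idea in Python. A seen set is the usual small refinement that suppresses duplicates and thereby breaks the infinite loops mentioned above; the graph is a toy example:

```python
from collections import deque

def flood(graph: dict, source, message):
    """Flood `message` from `source` over an undirected graph given as an
    adjacency dict. Each node forwards to every neighbor except the sender,
    and the `seen` set drops duplicates (preventing infinite loops)."""
    seen = {source}
    queue = deque((source, nbr) for nbr in graph[source])
    deliveries = []
    while queue:
        sender, node = queue.popleft()
        if node in seen:          # duplicate: drop instead of re-forwarding
            continue
        seen.add(node)
        deliveries.append((node, message))
        queue.extend((node, nbr) for nbr in graph[node] if nbr != sender)
    return deliveries

graph = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"], "D": ["B"]}
print(flood(graph, "A", "hello"))   # every host receives the message once
```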

Applications of Flooding Algorithm

In this article we list some of the applications of the flooding algorithm.
1. Used in computer networking
2. Used in graphics
3. These algorithms are quite useful for solving numerous mathematical problems such as the maze problems.
4. Used for solving problems in the graph theory. 
5. Used in systems which make use of bridging. 
6. Used in systems like Usenet.
7. Implemented in peer-to-peer file sharing.
8. Flooding algorithms are often implemented as a part of some of the routing protocols as in OSPF, DVMRP and so on. 
9. It is also used in the protocols used in the ad hoc wireless networks.
10. There is a variant of the flooding algorithm called selective flooding, which partially addresses various issues of the flooding algorithm by sending packets only in the appropriate direction. The packets are not sent on each and every line. 
11. Another variant of the flooding algorithm, called the similarity flooding algorithm, is used as a graph matching algorithm. 
- This variant of the flooding algorithm is quite versatile and has found an application in schema matching. 
- Matching the contents of two data schemas plays an important role in many biochemical applications, e-business, data warehousing applications, etc. 
- The similarity flooding algorithm is based on a fixpoint computation that can be reused across a number of scenarios. 
- Two graphs are passed to the algorithm as input parameters. 
- These graphs might represent catalogs, schemas, or even data structures. 
- The algorithm then produces a mapping between the corresponding nodes of the two graphs as output. 
- Which subset of the mapping is chosen, using filters, depends on the goal of the matching. 
- After the algorithm has been executed, a human tester is expected to check and verify the results. 
- The results may be adjusted by the tester if required. 
- In this method, the accuracy of the algorithm is evaluated by the number of adjustments that are necessary.
- In some cases, an accuracy metric may be used to estimate the labor savings users obtain by using the similarity flooding algorithm to get a first matching pair.
- Finally, this algorithm can be deployed as a very high-level operator in a test bed implemented for the management of output mappings and information models. 
- There are different types of matching problems, and each type requires a different approach. 
- For example, relational schemas may be matched using SQL data types.
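
A heavily simplified sketch of that fixpoint computation in Python. The real algorithm (Melnik et al.) propagates similarity over a pairwise connectivity graph with weighted coefficients; this toy version keeps only the flavor: a node pair's similarity is repeatedly reinforced by its neighbor pairs' similarities and then normalized.

```python
import itertools

def similarity_flooding(g1: dict, g2: dict, iters: int = 10) -> dict:
    """Toy similarity flooding: the similarity of a node pair (x, y) is
    repeatedly reinforced by the similarity of its neighbor pairs, then
    normalized. g1 and g2 are adjacency dicts (node -> list of neighbors)."""
    pairs = list(itertools.product(g1, g2))
    sigma = dict.fromkeys(pairs, 1.0)           # uniform initial similarity
    for _ in range(iters):
        nxt = {}
        for x, y in pairs:
            # Similarity "flows" in from every pair of neighbors.
            spill = sum(sigma[nx, ny] for nx in g1[x] for ny in g2[y])
            nxt[x, y] = sigma[x, y] + spill
        top = max(nxt.values())
        sigma = {p: v / top for p, v in nxt.items()}   # normalize to [0, 1]
    return sigma

g1 = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
g2 = {"x": ["y"], "y": ["x", "z"], "z": ["y"]}
scores = similarity_flooding(g1, g2)
print(sorted(scores, key=scores.get, reverse=True)[:3])  # best candidate pairs
```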


Wednesday, November 21, 2012

How to call tests with parameters in Test Director?


Testing an application is a complicated process. Test Director helps with many of its tasks, such as the organization and management of the different phases of the testing process, which include activities such as test planning, requirements analysis, defect tracking, and so on. 
Test Director basically provides the tester with a well-organized framework for testing applications just before their deployment. Test plans typically evolve around requirements that are new or have been modified; therefore, there is a need for a central data repository for the management and organization of the testing activities. 

Test Director proves to be an effective guide as you go through the following tasks:
  1. Requirements specification
  2. Test planning
  3. Test execution
  4. Defect tracking etc.
The above-mentioned four tasks also represent the four major phases of the Test Director testing process. In the test planning phase you have the following tasks:
  1. Defining the testing strategy
  2. Defining test subjects
  3. Defining tests
  4. Creation of the requirements coverage
  5. Designing the test steps
  6. Automation of the tests
  7. Analysis of the test plan
A test plan tree is constructed and then the tests are executed for locating the defects and assessing the quality. The following tasks are involved:
  1. Creation of the test sets
  2. Scheduling the runs
  3. Running the tests
  4. Analysis of the test results

How to call tests with parameters in Test Director?

- When the test steps are designed, you have the option of including a call to a manual test. 
- When the test is executed, this manual test is called as part of the execution sequence. 
- The test that is called is actually a template test, which has the characteristic that it can be reused as a test called by other tests. 
- It is possible to pass parameters to a template test. 
- A parameter is a variable that replaces a fixed value. 
- The parameter values can be changed by passing the desired values from the test that calls the template. 

Example:
- Suppose there is a test in which you need to log in with a specific user id and password. 
- There might be two users for the same test, i.e., administrator and public. 
- To achieve this, two parameters are created for the test, namely user id and password. 
- The values of these two parameters can then be modified according to whether the log-in has to be public or administrative, as the sketch below illustrates.
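
Test Director's template tests are manual, but the idea is the same parameterization found in code-level frameworks. A minimal sketch in Python using pytest; check_login and the credential pairs are hypothetical placeholders:

```python
import pytest

def check_login(user_id: str, password: str) -> bool:
    """Hypothetical stand-in for the application's log-in check."""
    return (user_id, password) in {("admin", "admin_pw"), ("public", "guest_pw")}

# One reusable ("template") test; the framework runs it once per value set.
@pytest.mark.parametrize("user_id,password", [
    ("admin", "admin_pw"),    # administrative log-in
    ("public", "guest_pw"),   # public log-in
])
def test_login(user_id, password):
    assert check_login(user_id, password)
```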

Follow the steps below to call a test with parameters:
  1. First, the design steps tab needs to be displayed for the test, so click on the design steps tab.
  2. Select the test with parameters that you want to call by clicking on the 'call to test' button. A 'select a test' dialog box will open up. Type in the name of the test to be found and click the find button. Test Director will highlight that particular test. Click OK.
  3. Next you will get a 'parameters of test …' dialog box, which displays all the parameters associated with that particular test.
  4. Modify the parameter values in the value column according to your requirements. Values can be assigned to the parameters whenever a test is created to call other tests, whenever a test is added to the test set, or whenever the test is run. Click OK. The selected step will be added to the design steps.
  5. Re-order the steps
  6. Adjust the step size.


Monday, October 8, 2012

In how many ways can we parameterize data in QTP?


Parameterization is one of the important provisions we have in QuickTest Professional; it has made passing values to tests very simple. This parameterization feature enables one to pass multiple values to a test at a time.
And what's more? 
The process of parameterization has proven to be a great helping hand while carrying out data-driven testing. Data-driven testing is the kind of testing that involves running the same tests with multiple sets of data. 
QuickTest Professional comes with various ways of carrying out the process of parameterization:
  1. Parameterization via loop statements
  2. Parameterization via data table
  3. Dynamic test data submission
  4. Obtaining test data via front-end objects
  5. Getting test data directly from external files, i.e., spreadsheets or flat files.
  6. Getting test data directly from databases such as Oracle or MS Access.
Now we shall discuss the above six ways of parameterizing tests in detail.
1. Parameterization via loop statements
In this method sequential or logical numbers can be passed via the loop statements; however, you cannot pass strings to the tests this way, as the sketch below illustrates.
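
QTP scripts are actually written in VBScript; for brevity, this sketch shows the same looping idea in Python. place_order stands in for a recorded action and is hypothetical:

```python
# Hypothetical action under test; in QTP this would be a recorded action.
def place_order(order_id: int) -> None:
    print(f"placing order {order_id}")

# Loop-statement parameterization: the sequential numbers come from the
# loop counter itself, which is why this style cannot supply strings.
for order_id in range(1, 6):
    place_order(order_id)
```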

2. Parameterization via data table: 
A data table, or spreadsheet, is provided along with every test in QuickTest Professional. This data table can be put to good use for data-driven testing. Furthermore, the following three purposes are served by the data table:
a)   To import test data from external spreadsheets: open the data table and place the pointer, then right-click and select the import-from-file option. Enter the path of the spreadsheet to import from and hit OK. Later, connect it to the test.
b)   To import test data from external flat files: open the data table and place the pointer, then right-click and select the import-from-file option. Browse to the path of the file to be imported and press OK. Later, connect it to the test.
c)   To import test data from databases: the user first needs to create a DSN (data source name) for the database, using SQL commands. This can be done by creating a test database and saving it with the .mdb extension in MS Access. The next step involves creating a data table and filling it up with data. The last step is to create the DSN, after which you can start importing the data.

    3. Dynamic test data submission: 
   This also involves the use of loop statements; however, the data has to be entered again and again by the user.

    4. Obtaining test data via front-end objects.

   5. Getting test data directly from external files, i.e., spreadsheets or flat files.

   6. Getting test data directly from databases such as Oracle or MS Access.

There is one more, less commonly followed method for parameterizing data apart from those mentioned above. This method makes use of the Dictionary object for the purpose of parameterization. There are several types of parameters, namely:
  1. Data table
  2. Environment variable
  3. Random number
  4. Test and action
The data table consists of the following parameters:
  1. Input parameter
  2. Output parameter
  3. Random number
  4. Environment variable. 


Wednesday, October 3, 2012

How would you export a script from one PC to another in QTP? Can we launch two instances of QTP on the same machine?


While working on a collaborative project in QuickTest Professional, it sometimes becomes necessary to move scripts from one machine to another. This article discusses how that is done, and also whether two instances of QuickTest Professional can be launched on the same machine.

Exporting a script from one PC to another

- As you may have noticed, QuickTest Professional comes with a tab titled "export". 
- To export, just mark the code that you want to export, copy it, and paste it into a text file. 
- The copied code may consist of references to repository items, parts of tests, parts of settings, parameters, environment variables, and so on. 
- However, the export option is not directly available in Quality Center's test script lab.
- A suitable version of QuickTest Professional needs to be installed. 
- Then, you need to connect it to the Quality Center project. 
- The next step is to open the test via QuickTest Professional. 
- Once you are done with these three basic steps, you can export the file via the "export to zip file" feature of QuickTest Professional, either into the QuickTest Professional database or some other location, and either on the same machine or on a different machine. 
- One may have a bulk of QuickTest Professional test automation scripts on a local machine that need to be exported from the local machine to QC. 
- Exporting sometimes becomes necessary when there are too many scripts and it takes a whole lot of time to open each and every script and then save it in Quality Center.
- Some third-party tools are available which serve the purpose of uploading many QuickTest Professional scripts at once from one PC to another. 
- There is one more method, which makes use of the "generate script" function that comes built in with QuickTest Professional. 
- This function is available in the object identification section, under the test settings, tools, and options tabs. 
- Using this function of QuickTest Professional, a zip file of the scripts to be exported can be created. 
- The zip file of the specified scripts is created at the source computer itself. 
- Later, this zip file can easily be transferred to QuickTest Professional on the destination computer.

Now let us answer the second question of this article, regarding launching two instances on the same machine. 
- The answer is no, since QuickTest Professional only has the ability to work with multiple instances of the AUT (application under test). 
- An example of this is multiple windows of the Internet Explorer browser, which QuickTest Professional can handle. 
- Therefore, only a single instance of QuickTest Professional can be launched on a machine at a time. 
- So while two instances cannot actually be launched on the same machine, you can go for the virtualization option. 
- Today, there are many tools available for virtualization, such as Sandboxie, Altiris SVS, and so on. 
- These two tools have been used for running two Mozilla Firefox profiles on the same PC and for testing software in an isolated PC environment, but they have not been put to use for launching two instances of QuickTest Professional on the same machine. 
- The possibility remains, however, that these two tools could be used for running multiple instances of QuickTest Professional on the same machine. 


Sunday, September 16, 2012

What is a Test Fusion Report in QTP?


This article gives a detailed description of what a test fusion report is. Once a test has been executed, all the aspects of that particular test run are displayed by a report termed the test fusion report. 

What are Test Fusion Reports?

Basically, the below-mentioned aspects of a test run are displayed by the test fusion report:
  1. A high-level overview of the results.
  2. An expandable tree view of the particular test, specifying the exact points where the application software actually encountered failures.
  3. The data that was used during the test run.
  4. Screenshots of the steps of the test run where the application software showed some discrepancy.
  5. A detailed explanation of each checkpoint, along with its pass and fail criteria.
- These test fusion reports can be combined with QuickTest Professional. 
- The main advantage of this is that, on combining these reports with QuickTest Professional, they can be shared among the whole development and quality assurance team. 
- The test fusion report displays the exact points at which the application software failed. 
- The test fusion report contains a data table. 
- This data table gives a record of the whole test data. 
- A screen view is also presented in the test fusion report, which shows the actual screens. 
- With the aid of the test fusion report, even the success or failure of the checkpoints can be determined. 
- The test fusion report proves to be quite a handy tool when the developer has to report to a client or customer. 
- There are some other components of the test fusion report as mentioned below:
  1. Name of the test
  2. Test case
  3. Start of the test run
  4. End of the test run
  5. Name of the result
  6. Iteration
  7. Status
- The test fusion report displays the different aspects of the test run and can be considered a kind of compilation of the whole application testing process. 
- The screenshots provided in the test fusion report give solid evidence of any inconsistencies present in a software application. 
- The test fusion report is represented in a tree format and displays all the aspects of the test run in that format. 
- You can find details for each and every step of the iterations in the test fusion report. 
- All the results of a test are collected by the test fusion report. 
- The test fusion report can also be seen as an instance of the collecting-parameter pattern, accumulating results as the test runs. 
- There is a big difference between errors and failures, and the test framework distinguishes between them. 
- Anticipated problems are called failures, whereas unanticipated ones are errors; a sketch of the distinction follows below. 
- Both are listed by the test fusion report. 
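
The failure/error distinction is the same one made by code-level test frameworks. A minimal Python unittest sketch: the first test fails (an assertion we anticipated is violated), the second errors (an exception nobody checked for):

```python
import unittest

class DemoReport(unittest.TestCase):
    def test_failure(self):
        # Anticipated problem: the condition we asserted is false -> FAILURE
        self.assertEqual(2 + 2, 5)

    def test_error(self):
        # Unanticipated problem: an exception we never checked for -> ERROR
        _ = 1 / 0

if __name__ == "__main__":
    unittest.main()  # the summary reports failures and errors separately
```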
- The results of the testing have to be reported at various stages like unit testing, integration testing and system testing and so on. 
- The test fusion report shows the state of the software product under development from different angles. 
- Producing this test fusion report takes around 5-10 percent of the total development effort. 
- Though this percentage is quite small, it is of great significance; after all, an organization cannot certify the readiness of a software product in just one line. 
- Producing a test fusion report becomes much more important especially when the software encounters some problem.
- There are several other aspects of the software system or application that the customer would like to know about, apart from the functional requirements, such as the few mentioned below:
  1. Platform
  2. Dependencies
  3. Performance, and so on. 

