
Tuesday, September 3, 2013

What is meant by load shedding?

A network is watched over by network monitoring systems. These systems need to be robust and must be able to cope with the overload situations that inevitably occur. A network becomes overloaded when its nodes generate large volumes of data at high rates; overload can also result from the burstiness of traffic in its normal course of operation. Load shedding techniques are applied to reduce this load.

- Load shedding techniques are applied when the network is under heavy stress. 
- This must be done while monitoring the network, to avoid packet loss that might otherwise become uncontrollable. 
- Load shedding involves sampling the incoming traffic. 
- CoMo (Continuous Monitoring) is a system developed to serve this purpose. 
- It uses a load shedding scheme that can infer a query's cost from the relation between a set of traffic features and the actual resource usage, without any knowledge of the plug-ins' internals. 
- Here, a traffic feature is a counter that describes a particular property of the incoming traffic. 
The property might be any of the following:
Ø  Number of packets
Ø  Number of bytes
Ø  Number of flows
Ø  Number of unique IP destination addresses, and so on


- CoMo includes a prediction and load shedding subsystem that intercepts packets after the filter, before they are sent on to the plug-in.
- Each plug-in implements a traffic query. 
- The system completes this process in four phases. 
- In the first phase, it forms a batch of the packets arriving in each 100 ms of traffic.
- It then processes each batch to extract a large, predefined set of traffic features. 
- From these, the feature selection subsystem picks the most relevant subset, based on the query's recent CPU usage history. 
- The selected subset is then fed to the multiple linear regression subsystem, 
- which predicts the number of CPU cycles the query will need to process the whole batch. 
- If the prediction exceeds the system's capacity, the load shedding subsystem pre-processes the batch and discards a portion of its packets. 
- The excess portion is discarded through packet or flow sampling, as the sketch below illustrates. 
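
To make this pipeline concrete, below is a minimal sketch in Python of the prediction step. It is purely illustrative, not CoMo's actual C implementation: the three features, the sample numbers, and the ordinary least-squares fit stand in for CoMo's much larger feature set and its multiple linear regression subsystem.

    import numpy as np

    # One row per 100 ms batch: (packets, bytes, flows) -- a tiny
    # stand-in for CoMo's much larger predefined feature set.
    features = np.array([
        [1000,  620000,  80],
        [1500,  880000, 120],
        [ 700,  430000,  55],
        [2000, 1150000, 160],
    ], dtype=float)

    # Measured CPU cycles the query actually used for each batch.
    cycles = np.array([2.1e6, 3.2e6, 1.5e6, 4.3e6])

    # Multiple linear regression: fit cycles ~ w . features + b.
    X = np.hstack([features, np.ones((len(features), 1))])
    w, *_ = np.linalg.lstsq(X, cycles, rcond=None)

    def predict_cycles(batch_features):
        """Predict the CPU cycles needed to process a new batch."""
        return float(np.append(batch_features, 1.0) @ w)

    print(f"{predict_cycles([1800, 1000000, 140]):.3e}")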

Load shedding is now seen as an effective method for curbing overload situations even in real-time systems. 
- It involves shedding the excess load in such a way that the system's stability is not disturbed and its buffers do not overflow. 
- The idea of applying load shedding to networking was borrowed from electric power management,
- where current is intentionally disconnected on particular lines when the demand for power exceeds what is being supplied.
- CoMo is an open-source system that can be deployed quickly and used as a base for building other network monitoring applications. 
- The system is written in C and exposes a feature-rich API. 
- The system works by predicting its own CPU usage, thereby anticipating bursts in resource requirements before they occur. 
- The load shedding scheme used by CoMo can automatically identify the features that best model each monitoring application's resource usage.
- This identification is made from previous resource usage measurements. 
- These measurements are then used to determine the system's overall load and the percentage by which the load must be shed; a small sketch follows. 
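
And a hedged sketch of that last step: converting the predicted excess into a fraction of packets to shed. The simple proportional rule and the names are assumptions made for illustration, not CoMo's exact policy.

    def shed_fraction(predicted_cycles, capacity_cycles):
        """Fraction of the batch to discard so the load fits capacity."""
        if predicted_cycles <= capacity_cycles:
            return 0.0                        # no overload: keep everything
        return 1.0 - capacity_cycles / predicted_cycles

    # Predicted demand of 4.5M cycles against a 3M-cycle budget means
    # roughly a third of the batch is sampled away.
    print(shed_fraction(4.5e6, 3.0e6))        # ~0.333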


Thursday, August 29, 2013

How can traffic shaping help in congestion management?

- Traffic shaping is an important part of the congestion avoidance mechanism, which in turn comes under congestion management. 
- If the traffic can be controlled, network congestion can obviously be kept under control as well. 
A congestion avoidance scheme can be divided into the following two parts:
  1. The feedback mechanism and
  2. The control mechanism
- The feedback mechanism is also known as the network policy, and the control mechanism as the user policy.
- There are other components as well, but these two are the most important. 
- While analyzing one component, the others are simply assumed to be operating at optimal levels. 
- In the end, it has to be verified that the combined system works as expected under various conditions.

The network policy consists of the following three algorithms:

1. Congestion Detection: 
- Before any feedback can be sent, the network's load level or state must be determined. 
- In general, the network can be in any of n possible states. 
- At a given time, the network is in exactly one of these states. 
- The congestion detection algorithm maps these states onto the possible load levels. 
- In the simplest case there are two load levels, namely underload and overload. 
- Underload means operating below the knee point of the throughput curve; overload means operating above it. 
- A k-ary version of this function would produce k load levels. 
- The congestion detection function can be based on three criteria: link utilization, queue length, and processor utilization. A sketch of such a mapping follows. 
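
As an illustration, here is a minimal Python sketch of a binary (two-level) detection function driven by two of those criteria. The thresholds are invented for the example; a real scheme would derive them from the knee of the throughput curve.

    def detect_load_level(link_utilization, queue_length,
                          knee_utilization=0.8, knee_queue=50):
        """Map the raw network state onto one of two load levels."""
        if link_utilization > knee_utilization or queue_length > knee_queue:
            return "overload"
        return "underload"

    print(detect_load_level(0.65, 12))   # underload
    print(detect_load_level(0.92, 75))   # overload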

2. Feedback Filter: 
- After the load level has been determined, it must be verified that the state persists for a sufficiently long duration before it is signaled to the users. 
- Only then is feedback about the state actually useful, 
- because the state lasts long enough to be acted upon. 
- A rapidly changing state, on the other hand, creates confusion: 
- the state has already passed by the time the users learn of it. 
- Such states produce misleading feedback. 
- A low-pass filter function serves to retain only the persistent, desirable states; a sketch follows. 
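
A minimal sketch of such a filter, here an exponentially weighted moving average (EWMA); the smoothing weight is an assumption for illustration. A brief spike barely moves the filtered value, while sustained overload shifts it decisively.

    class FeedbackFilter:
        """Low-pass (EWMA) filter over instantaneous load observations."""

        def __init__(self, weight=0.1):
            self.weight = weight          # small weight => heavy smoothing
            self.filtered = 0.0

        def update(self, load):
            self.filtered += self.weight * (load - self.filtered)
            return self.filtered

    f = FeedbackFilter()
    for load in [0.2, 0.9, 0.2, 0.2]:     # one short-lived spike
        print(round(f.update(load), 3))   # output barely reacts to it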

3. Feedback Selector: 
- After the state has been determined, this information has to be passed to the users so that they can contribute to cutting down the traffic. 
- The purpose of the feedback selector function is to identify the users to whom the information has to be sent.

The user policy consists of the following three algorithms: 

1. Signal Filter: 
- The users to whom the network sends feedback signals interpret them only after accumulating a number of them. 
- The network is probabilistic in nature, so the signals may not agree: 
- according to some signals the network might be underloaded, and according to others it might be overloaded. 
- These signals have to be combined to decide the final action. 
- Based on the percentages, an appropriate weighting function may be applied; a sketch follows. 
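
A hedged sketch of one possible combining rule: compute the share of accumulated signals that report overload and compare it with a cutoff. Both the window of five signals and the 50 percent cutoff are assumptions made for the example.

    def combine_signals(signals, cutoff=0.5):
        """Decide the final network state from accumulated signals."""
        overload_share = signals.count("overload") / len(signals)
        return "overload" if overload_share >= cutoff else "underload"

    window = ["underload", "overload", "overload",
              "underload", "overload"]
    print(combine_signals(window))   # overload (3 of 5 signals)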

2. Decision Function: 
- Once the user knows the network's load level, it must be decided whether or not to increase the load.
- This function has two parts: the first determines the direction of the change and the second decides its amount. 
- The first part is the decision function proper; the second is the increase/decrease algorithm. 

3. Increase/Decrease Algorithm: 
- Control forms the major part of the control scheme.
- The control measure to be taken is based on the feedback obtained. 
- A well-chosen increase/decrease algorithm helps achieve both fairness and efficiency; the sketch below shows the classic example. 
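
The post does not name a particular increase/decrease algorithm, so as an illustration here is the classic one: additive increase/multiplicative decrease (AIMD), which Chiu and Jain showed converges to a fair and efficient operating point. The constants are illustrative.

    def aimd_step(load, overloaded, increase=1.0, decrease=0.5):
        """One AIMD update of a user's offered load (or window size)."""
        if overloaded:
            return load * decrease   # multiplicative decrease on overload
        return load + increase       # additive increase otherwise

    load = 10.0
    for overloaded in [False, False, True, False]:
        load = aimd_step(load, overloaded)
        print(load)                  # 11.0, 12.0, 6.0, 7.0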


Tuesday, August 20, 2013

When is a situation called congestion?

- Network congestion is a common problem in queuing theory and data networking. 
- Sometimes a node or link carries so much data that its quality of service (QoS) starts deteriorating. 
- This situation or problem is known as network congestion, or simply congestion. 
This problem has the following typical effects:
Ø  Queuing delay
Ø  Packet loss, and
Ø  Blocking of new connections


- The last two effects lead to further problems. 
- As the offered load increases incrementally, the network's throughput either actually falls or rises only by very small amounts. 
- Network protocols use aggressive retransmissions to compensate for packet loss. 
- These protocols thus tend to keep the system in a state of congestion even when the initial load is too small to cause congestion by itself. 
- Networks using such protocols therefore exhibit two stable states under the same level of load. 
- The stable state in which the throughput is low is called congestive collapse. 
- Congestive collapse is also called congestion collapse.
- In this condition, a packet-switched network settles into a state in which, because of congestion, little or no useful communication is taking place.
- Even the little communication that does happen is of no use. 
- Congestion usually occurs at certain points in the network called choke points.
- At these points, the outgoing bandwidth is less than the incoming traffic. 
- Choke points are usually the points that connect a wide area network to a local area network. 
- When a network falls into this condition, it settles into a stable state. 
- In this state, the demand for traffic is high but the useful throughput is quite low.
- Packet delays are also quite high. 
- Quality of service becomes extremely poor, and routers cause packet loss because their output queues are full and they must discard packets. 
- The problem of network congestion was identified as early as 1984. 
- It was first observed in practice in October 1986, when the throughput of the NSFNet phase-I backbone dropped three orders of magnitude below its capacity.
- The problem continued to occur until Van Jacobson's congestion control methods were implemented at the end nodes.

Let us now see what causes this problem. 
- When the number of packets being sent to a router exceeds its packet-handling capacity, the intermediate routers discard many packets. 
- These routers then expect the discarded information to be retransmitted. 
- Early TCP implementations had very poor retransmission behavior. 
- Whenever a packet was lost, the end points sent in extra packets repeating the lost information. 
- But this doubled the data rate. 
- This is just the opposite of what should be done during congestion. 
- The entire network is thus pushed into a state of congestive collapse, resulting in heavy packet loss and reduced throughput. 
- Modern networks use congestion control as well as congestion avoidance techniques to avoid the problem of congestive collapse. 
- Various congestion control algorithms are available that can be implemented to avoid network congestion. 
- These algorithms are classified by various criteria, such as the amount of feedback, deployability, and so on. 


Saturday, June 15, 2013

What are the CPU scheduling criteria?

Scheduling is an essential concept in multitasking, multiprocessor, and distributed systems. Several schedulers are available for this purpose, but these schedulers also require criteria on which to decide how to schedule processes. In this article we discuss these scheduling criteria. A number of scheduling algorithms are available today, and they all have different properties, which is why they may work on different scheduling criteria. The chosen algorithm may also favor one class of processes over another.

What criteria do scheduling algorithms use?


Listed below are some of the criteria these algorithms use for scheduling:
1. CPU utilization:
- A good system keeps the CPU as busy as possible at all times.
- Conceptually, this utilization ranges from 0 percent to 100 percent.
- In practice, it is around 40 percent in lightly loaded systems and around 90 percent in heavily loaded ones.

2. Throughput:
- Work is said to be done when the CPU is busy executing processes.
- Throughput is one measure of CPU performance: the number of processes completed in a given unit of time.
- For example, for short transactions throughput might be around 10 processes per second;
- for longer transactions it may be around only one process per hour.

3. Turnaround time:
- This is an important criterion from the point of view of a process.
- It tells how much time the processor has taken to execute a process.
- Turnaround time is defined as the time elapsed from the submission of a process until its completion.

4. Waiting time:
- CPU scheduling algorithms do not affect the amount of time a process actually spends executing;
- rather, they affect only the time a process spends in the waiting state.
- The time for which a process waits is called its waiting time.

5. Response time:
- Turnaround time is not a good criterion in all situations.
- Response time is preferable in the case of interactive systems.
- A process can often produce some output fairly quickly, well before it finishes.
- The process can then continue with its next instructions.
- The time from a process's submission until it produces its first response is the response time, another criterion for CPU scheduling algorithms; a worked example follows.
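
To make these definitions concrete, here is a small Python sketch that computes turnaround and waiting times for three hypothetical processes under first-come, first-served scheduling; the arrival and burst times are invented for the example.

    # (name, arrival time, CPU burst) for three hypothetical processes
    processes = [("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]

    clock = 0
    for name, arrival, burst in processes:   # FCFS: run in arrival order
        start = max(clock, arrival)
        finish = start + burst
        turnaround = finish - arrival        # submission -> completion
        waiting = turnaround - burst         # time spent ready but idle
        print(f"{name}: turnaround={turnaround}, waiting={waiting}")
        clock = finish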

All these are the primary performance criteria, of which a typical CPU scheduler may select one or more. The scheduler might rank these criteria depending on their importance. One common problem in the selection of performance criteria is the possibility of conflict between them.
For example, increasing the number of active processes increases CPU utilization but at the same time lengthens response times. It is often desirable to reduce waiting time and turnaround time as well. In many cases the average measure is optimized, but there are certain cases where it is more beneficial to optimize the maximum or minimum values.
A scheduling algorithm that maximizes throughput will not necessarily reduce turnaround time. Given a mix of short and long jobs, a scheduler that runs only the short jobs produces the best throughput, but the turnaround time of the long jobs becomes undesirably high.


Wednesday, May 22, 2013

What are Address Binding, Dynamic Loading and Dynamic Linking?

In this article we shall discuss three interrelated concepts: address binding, dynamic loading, and dynamic linking.

1. Address Binding: 
- There are two types of addresses for the computer memory. 
- These are called the physical address and the logical address. 
- The address binding process allocates a physical memory location to a logical pointer.
- This is nothing but associating the physical address and the logical address with each other. 
- The logical address is sometimes also referred to as the virtual address. 
- This concept is an important part of memory management. 
- The operating system carries out address binding on behalf of the applications and programs that need access to memory. 
- A program cannot be executed without bringing it into main memory. 
- The program's instructions have to be bound to the right address spaces in physical memory. 
- Address binding is simply a scheme for performing this job. 
- It can be thought of as something similar to address mapping. 
- Address binding can be carried out at any of the following times:
Ø  Compile time
Ø  Loading time
Ø  Execution time

- In execution-time binding, whenever the program accesses memory, the address passes through a register called the relocation register, which is similar to a base register. 
- The offset is then added to its contents. 
- In load-time binding the same mapping is performed, but this register need not be consulted on every access. 
- The addresses are mapped once, at the time the program is loaded into memory. 
- If the base address changes, the whole program has to be reloaded; a small sketch of execution-time relocation follows.
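
A minimal sketch of execution-time binding: each logical address the program issues is checked against a limit and then added to the relocation register's contents. The base and limit values are invented for illustration.

    RELOCATION_REGISTER = 0x4000   # base physical address of the process
    LIMIT = 0x1000                 # size of the logical address space

    def translate(logical_address):
        """Map a logical address to a physical one at execution time."""
        if not 0 <= logical_address < LIMIT:
            raise MemoryError("logical address out of bounds")
        return RELOCATION_REGISTER + logical_address

    print(hex(translate(0x01A4)))  # 0x41a4 = base + offset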

2. Dynamic Loading: 
- This mechanism is very useful to a program, as it lets it do the following things:
Ø  Load a library into main memory.
Ø  Retrieve the addresses of the variables and routines contained in the library.
Ø  Access those variables and execute those routines.
Ø  Unload the library.
- Dynamic loading is very different from load-time linking and static linking. 
- Dynamic loading allows a system to start up even if some libraries are absent.
- It also helps in discovering the absent libraries later and then gaining their additional functionality. 
- Dynamic loading is a very transparent process, since the operating system handles it. 
- Its main advantages are, first, that patches can be applied at once without the need for re-linking, and second, that libraries are protected against unauthorized modification. 
- Dynamic loading finds its major use in the implementation of software plug-ins.
- It is also used in programs where the required functionality is supplied by different libraries and the user is free to choose which libraries to provide; the sketch below illustrates the pattern.
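
Here is a hedged sketch of the four steps listed above using Python's ctypes module, which wraps the platform's dynamic loader (dlopen/dlsym on Unix-like systems). The C math library is just an example, and its file name is looked up first because it differs across platforms.

    import ctypes
    import ctypes.util

    # 1. Load the library into main memory.
    libm = ctypes.CDLL(ctypes.util.find_library("m"))

    # 2. Retrieve the address of a routine contained in the library.
    cos = libm.cos
    cos.argtypes = [ctypes.c_double]
    cos.restype = ctypes.c_double

    # 3. Access the routine and execute it.
    print(cos(0.0))   # 1.0

    # 4. Unloading is implicit here: the library is released when the
    #    CDLL object is garbage-collected.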

3. Dynamic Linking: 
- This is an important part of the binding process. 
- The purpose of dynamic linking is to resolve references (symbols) and links to library modules. 
- This process is carried out by a linker program. 
- The linker searches a set of library modules in some given sequence. 
- This takes place during the creation of the executable file. 
- The resolved references may be the addresses of jump calls and routines.
- These may reside in different modules or in the main program.
- Dynamic linking resolves them into relocatable or fixed addresses by allocating memory to each memory segment of the referenced modules. 


Tuesday, April 16, 2013

What are the basic functions of an operating system?


The operating system is the program that takes care of all of a computer's operations. It acts as a software link between the computer hardware and you: the link it provides is an interface through which all other programs are managed. Computer systems come with an OS pre-installed, stored on the computer's hard disk drive. As soon as you turn on the computer, the operating system is the first thing loaded into memory. The bootstrap loader is the program responsible for carrying out this task, and the whole process is termed booting. The bootstrap loader resides permanently in the computer's electronic circuitry, on the ROM chip to be precise. An operating system has various functions, which we discuss in this article. 

Every system has an OS, and every OS has some basic functions that do not depend on its size or complexity.

1. Management of the resources: 
- Every OS manages all the resources attached to the computer, such as the keyboard, mouse, and monitor (at which you are looking presently), and it also manages the memory. 
- A file structure is created on the system's hard drive, which becomes the place for storing and retrieving data. 
- Whenever a file is created, the OS names it and assigns it an address in order to remember where it has been stored.
- This makes the file easy to access in the future. 
- This arrangement is called the file system, and it is usually hierarchical in nature.
- Here the files are organized in directories or folders.

2. Provides a user interface: 
- Through user interface, the user is able to interact with the hardware resources and other software applications in a system.
- Almost all the operating systems that we have today come with a GUI or graphical user interface.
- In such an interface, icons are the graphical objects that represent most of the features.

3. Execution of the processes: 
- It is the operating system that is responsible for the execution of the applications. 
- Multitasking is a major feature of today's operating systems.
- Multitasking is the ability of an OS to run a number of tasks simultaneously. 
- Whenever the user requests a program, the OS locates it and loads it into the system's main memory (RAM). 
- As more and more programs are requested, the OS allocates resources to them.

4. Provides support for the utility programs: 
- Utilities are the programs that perform the repair and maintenance tasks on a computer system. 
- With these programs, data can be backed up, damaged files can be repaired, lost files can be located, and other problems can be identified. 
- One example of such utility is the disk de-fragmenter.

5. Controls the hardware: 
- The operating system lies between the application software and the basic input/output system (BIOS). 
- It is the layer that maintains control over the hardware resources and their functioning. 
- All hardware operations pass through the OS. 
- Device drivers help the OS access the hardware via the BIOS.

The nature of the OS required depends on the application for which it is needed. For example, the OS required to run an airline seat reservation system differs from that required for scientific experiments. Its design is thus also defined by its application. 


Monday, December 17, 2012

Give an overview of the process of system performance validation with IBM Rational Performance Tester.


IBM introduced Rational Performance Tester in an effort to automate the rigorous process of performance testing. Automated performance testing was needed to make it possible to deliver high-quality software to end users.

- Rational Performance Tester is a convenient means of accelerating a software system's performance work without impacting its quality. 
- Software performance bottlenecks can be identified very easily with the help of IBM Rational Performance Tester. 
- Not only the presence of bottlenecks but also their causes can be identified.  
- It is important to predict the behavior and performance of software systems and applications under extreme load conditions. 
- Making wrong predictions can put the whole organization, its revenue, and its customers at risk. 
- This can prove devastating in today's software engineering world, where competition is already fierce. 

- Hence, a proper testing solution is required: one that helps you not only validate the performance of your product but also optimize and verify it under a number of load conditions. 
- IBM Rational Performance Tester gives you more than just testing. 
- Using it, the scalability of many applications can be measured. 
- Rational Performance Tester supports code-free test creation, meaning that little programming knowledge is required. 

In this article we provide an overview of system performance validation using Rational Performance Tester. 

- Performance tests are contained within performance test projects.
- Instead of creating a new project, a test can simply be recorded, which in turn creates a project immediately. 
- After a test has been recorded, it can be edited to include datapools, data correlation, and verification points. 
- Datapools are required for supplying variable data. 
- At verification points, it is confirmed whether the test is running as expected. 
- Lastly, data correlation ensures that appropriate data is returned for each request. 
- Furthermore, protocol-specific elements can also be added to a test.
- Whenever a test is edited, the items you modify appear in italics. 
- After the test is saved, they change back to regular type. 
- A workload can be emulated by adding user groups and other elements to a new schedule.
- Once all additions to the schedule are done, it can be run. 
- You can run it either locally or with a default launch configuration. 
- Playback might be slowed down by delays in HTTP requests.
- The application's performance results are generated dynamically during execution and can be evaluated afterwards. 
- After execution, the results can be regenerated for analysis and viewing. 
- Coding problems known to disturb performance can be found and fixed using the problem analysis tools. 
- Response time breakdown data can be obtained from an application deployed in a production environment. 
- Another means of collecting this data is from a distributed application implemented in a testing environment. 
- Resource monitoring data, including available network capacity, disk usage, network throughput, processor usage, and so on, can also be collected and imported. 
- Resource monitoring data is important because it gives a comprehensive view of the system or application, which helps in determining problems.
- All the collected and imported data can be analyzed and later used to locate the exact cause of the problem hampering system performance.



Monday, August 27, 2012

Is load testing possible using WinRunner? Does WinRunner help you in web testing?


WinRunner, apart from serving as a test automation tool, has also proved to be quite an effective tool for load testing. However, WinRunner can function as a load testing tool only at the level of the graphical user interface layer.  
Why is this so?
- Because at this level, recording and playback work exactly as if the actions were being carried out by a real-world human user. 

How is load testing possible using WinRunner?

- LoadRunner, WinRunner's counterpart, is the proper load testing tool, but WinRunner is sometimes used as one in addition to it. 
- First, a user session, such as a web browsing session, is simulated. 
- The user actions that take place are recorded by WinRunner and used for load testing. 
- WinRunner takes no action at the protocol layer except recording and playing back events, so it all appears as if some invisible real-world human user were performing the actions. 
- For WinRunner to perform load testing, it must be given control of the PC so that it can execute the previously recorded test scripts. 
- At the same time, a single WinRunner PC cannot by itself generate the load for a load test. 
- The number of PCs required is directly proportional to the load that has to be applied to the software system or application. 
- In spite of this disadvantage, WinRunner will always be valued as a good load testing technology, because it provides the only means of determining the actual user response time. 
- The response time it calculates includes the processing that takes place on the client hardware.

How does WinRunner help in web testing?

- The context-sensitive operations on the web (HTML) objects in a website can be recorded and run by WinRunner when it is loaded with the WebTest add-in support. 
- This works when the website is viewed in browsers such as Internet Explorer and Netscape. 
- With the help of the WebTest add-in, the properties of web objects can be viewed and information about the web objects in the website can be retrieved. 
- Checkpoints can be created on the web objects in the website to test the website's functionality. 
- Apart from Internet Explorer and Netscape, the AOL web browser can be used for running tests and recording objects in the website, but tests cannot be run or recorded on the following browser elements:
  1. The Back button
  2. The Forward button
  3. Navigation buttons, and so on.
- When tests are created using the WebTest add-in, WinRunner recognizes the objects mentioned below:
  1. Text links
  2. Frames
  3. Images
  4. Web form objects
  5. Tables and so on.
- Every object possesses different properties. 
- These properties are key to the following tasks:
  1. Identification of the objects
  2. Retrieval and checking of property values
  3. Performing web functions
- Together, these three tasks verify whether your website is working correctly. 
- Take care to start WinRunner with the WebTest add-in loaded before you open your web browser to begin web testing. 
- The Recorded tab of the GUI Spy can be used to view the properties, and their values, that WinRunner recorded for the selected GUI objects.
- This is how WinRunner makes web testing possible. 


Sunday, August 12, 2012

What's the use of GUI Map Editor? How do you load GUI Map Editor?


First of all, to identify any object, WinRunner learns all the information related to it and stores it in a map called the GUI map. 

GUI Map

- The GUI map is an extremely important asset for WinRunner's identification of objects. 
- While tests run, objects are located and identified via this GUI map alone. 
- The GUI map consists of descriptions of the objects that are to be identified. 
- WinRunner reads these descriptions and matches them to the corresponding objects.
- You can even get a comprehensive view of the objects that constitute your software system or application.
- The complete GUI map is actually composed of many smaller GUI map files. 

In this article we are going to learn about the use of the GUI Map Editor and how it can be loaded.

GUI Map Editor

- The contents of a GUI map can be easily viewed with the help of the GUI Map Editor. 
- You can view either the contents of the whole GUI map or the contents of the small individual GUI map files. 
- GUI objects are grouped according to the window in which they reside in the software system or application. 
- With the GUI Map Editor you can manually edit your GUI map at any time.

How do you load GUI Map Editor?

- To load the GUI Map Editor, go to the Tools menu and select the GUI Map Editor option. 
- The GUI Map Editor offers two types of views, both of which let you examine the contents of either of the following:
  1. The individual GUI map files, or
  2. The entire GUI map
- You can even load two GUI map files simultaneously in the GUI Map Editor.
- To view the contents of two individual GUI map files at once, you just need to expand the GUI Map Editor. 
- This feature lets you easily move or copy object descriptions between the files. 
- The Windows/Objects column in the GUI Map Editor displays all the windows and objects present in that particular GUI map. 
- The objects that appear in a window are indented beneath it. 
- The GUI Map Editor also offers the option of viewing the physical description of the object or window the user has selected. 
- To view the contents of the individual GUI map files, go to the View menu and select the "GUI Files" option. 
- The GUI File drop-down list shows all the GUI map files that are open at that time.
- In the GUI Map Editor, objects are displayed in a tree under the icon and name of the window in which they reside or appear. 
- Double-clicking a window icon shows the objects appearing in it.
- For a concurrent view of all the objects in the tree, expand the tree by choosing the "Expand Objects Tree" option from the View menu.
- To go back to the windows-only view, select the "Collapse Objects Tree" option from the View menu.
- The physical description of the contents of a single GUI map file is displayed automatically, but to view the physical description for the entire GUI map you need to select the Show Physical Description check box. 
- Be careful, however: if the logical name of an object is modified in the GUI map, it must be modified in the test script as well, so that WinRunner has no problem locating that object.  

