
Wednesday, August 28, 2013

What are different policies to prevent congestion at different layers?

- Often the demand for a resource is more than what the network can offer, i.e., its capacity. 
- Too much queuing then occurs in the network, leading to a great loss of packets. 
- When the network is in a state of congestive collapse, its throughput drops toward zero while the path delay increases by a great margin. 
- The network can recover from this state by following a congestion control scheme.
- A congestion avoidance scheme enables the network to operate in a region where throughput is high and delay is low. 
- In other words, these schemes prevent a computer network from falling prey to the network congestion problem. 
- The recovery mechanism is implemented through congestion control, and the prevention mechanism is implemented through congestion avoidance. 
- Network and user policies are modeled for the purpose of congestion avoidance. 
- These act like a feedback control system. 

The following are defined as the key components of a general congestion avoidance scheme:
Ø  Congestion detection
Ø  Congestion feedback
Ø  Feedback selector
Ø  Signal filter
Ø  Decision function
Ø  Increase and decrease algorithms

- The problem of congestion control becomes more complex when the network uses a connectionless protocol. 
- Avoiding congestion, rather than simply controlling it, is the main focus. 
- A congestion avoidance scheme is designed by comparing it against a number of alternative schemes. 
- During the comparison, the algorithm with the right parameter values is selected. 
To do so, a few goals have been set, each with an associated test for verifying whether a scheme meets it:
Ø  Efficiency: If the network is operating at the “knee” of the throughput-delay curve, it is said to be working efficiently.
Ø  Responsiveness: The configuration and the traffic of the network vary continuously, so the point of optimal operation also varies continuously; the scheme must track it.
Ø  Minimum oscillation: Schemes with a smaller oscillation amplitude around the operating point are preferred.
Ø  Convergence: When the workload and the network configuration are stable, the scheme should bring the network to a stable operating point. Schemes that satisfy this goal are called convergent; divergent schemes are rejected.
Ø  Fairness: This goal aims at providing a fair share of resources to each independent user.
Ø  Robustness: This goal concerns the capability of the scheme to work in arbitrary (random) environments. Schemes that work only for deterministic service times are therefore rejected.
Ø  Simplicity: The simplest version of a scheme that meets the other goals is accepted.
Ø  Low parameter sensitivity: The sensitivity of a scheme is measured with respect to its various parameter values. A scheme found to be too sensitive to a particular parameter is rejected.
Ø  Information entropy: This goal concerns how the feedback information is used; the aim is to get the maximum information from the minimum possible feedback.
Ø  Dimensionless parameters: A parameter with dimensions such as mass, time, or length is effectively a function of the network configuration or speed. A parameter with no dimensions has wider applicability.
Ø  Configuration independence: A scheme is accepted only if it has been tested across a variety of configurations.

A congestion avoidance scheme has two main components:
Ø  Network policies: These consist of the following algorithms: feedback filter, feedback selector, and congestion detection.
Ø  User policies: These consist of the following algorithms: increase/decrease algorithm, decision function, and signal filter.
- The network feedback these algorithms act upon can be carried either in a packet header field or as separate source quench messages.
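
To make the user-policy side concrete, below is a minimal sketch of an additive-increase/multiplicative-decrease (AIMD) rule, the classic increase/decrease algorithm shown by Chiu and Jain to converge to a fair and efficient operating point. The parameter values and feedback trace are illustrative assumptions, not part of any specific scheme.

```python
def aimd_update(window, congested, a=1.0, b=0.5):
    """One step of a user increase/decrease algorithm (AIMD):
    additive increase when feedback reports no congestion,
    multiplicative decrease when it does. a and b are illustrative values."""
    if congested:
        return max(1.0, window * b)  # back off multiplicatively
    return window + a                # probe gently for spare capacity

# Illustrative run: the load oscillates with small amplitude near capacity.
window = 10.0
for congested in [False, False, False, True, False, False, True]:
    window = aimd_update(window, congested)
    print(round(window, 1))
```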




Monday, August 5, 2013

What is optimality principle?

A network consists of nodes that need to communicate with one another for various reasons. This communication is established via the communication channels that exist between them and involves data transfer. In a network, a node may or may not have a direct link to every other node. 
Applications that require communicating over a network include:
1. Telecommunication network applications such as POTS/ PSTN, local area networks (LANs), internet, mobile phone networks and so on.
2. Distributed system applications
3. Parallel system applications

- As mentioned above, every node might not be linked to every other node, since doing so would require a great many wires and cables and make the whole network more complicated. 
- Therefore, the concept of intermediate nodes is introduced. 
- The data transmitted by the source node is forwarded toward the destination by these intermediate nodes. 
- The problem that then arises is which path or route is best to use, i.e., which path has the least cost. 
- This is determined by the routing process. 
- The best path thus obtained is called the optimal route. 
- Today, a number of algorithms are available for determining the optimal path. 

These algorithms are classified into two major types:
  1. Non – adaptive or static algorithms
  2. Adaptive or dynamic algorithms

Concept of Optimality Principle

- This is the principle followed while determining the optimal route between two nodes. 
The general statement of the principle of optimality is given below:
“An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.”

- This means that if an optimal policy passes through a state P and later reaches a state Q, then the portion of the original policy between those states, i.e., from P to Q, must itself be optimal. 
- In other words, the optimality of every part of an optimal policy is preserved. 
- The initial state and the final state are the most important parts in defining the optimum. 
- Consider an example: suppose we have a problem with 3 inputs and 26 states. 
- Here, each state is associated with an optimum value, and the total cost is associated with the optimal policy.
- If a brute-force method is used for 3 inputs and 100 stages, the total number of candidate input sequences is 3^100.
- That means solving the problem this way is infeasible even for a supercomputer.
- Therefore, the approach used for solving this problem is dynamic programming. 
- Here, for each state, the least-cost step is computed and stored as the computation proceeds stage by stage. 
- This reduces the number of possibilities and hence the amount of computation.
- The problem becomes more complex if the initial and the final states are undefined. 
- It is necessary for the problem to obey the principle of optimality in order to use dynamic programming. 
- This implies that whatever the state may be, the decisions that follow must be optimal with regard to the state resulting from the previous decision. 
- This property is also found in combinatorial problems, but since dynamic programming on them takes a great deal of time and memory, the method is inefficient for them. 
- These problems can be solved efficiently if some form of best-first search and pruning is applied.
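
As a rough sketch of this stage-by-stage idea (the inputs, cost function, and state transition below are illustrative stand-ins, not from the original example), dynamic programming keeps only the least cost per state at each stage instead of enumerating all 3^100 input sequences:

```python
def least_total_cost(inputs, n_stages, cost, next_state, start):
    """Dynamic programming over stages: keep only the least cost per state."""
    best = {start: 0.0}                       # least-cost way to reach each state
    for _ in range(n_stages):
        new_best = {}
        for s, c in best.items():
            for u in inputs:                  # 3 inputs -> 3 candidates per state
                t, c2 = next_state(s, u), c + cost(s, u)
                if c2 < new_best.get(t, float("inf")):
                    new_best[t] = c2          # only the optimal sub-policy survives
        best = new_best
    return min(best.values())

# Illustrative toy instance: 3 states on a ring, 3 inputs, 100 stages.
print(least_total_cost(
    inputs=(0, 1, 2), n_stages=100,
    cost=lambda s, u: (s + u) % 3 + 1,
    next_state=lambda s, u: (s + u) % 3,
    start=0))
```

Per stage, the work is proportional to the number of states times the number of inputs, instead of the 3^100 total that brute force would require.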

- Applied to routing in networks, the optimality principle says that if a router B lies on the optimal path from router A to router C, then the optimal path from B to C falls along that same route. 
- The set of all optimal routes to a given destination forms a tree rooted at that destination, called the sink tree; constructing it is the ultimate goal of all routing algorithms.
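
As a hedged illustration of the sink tree (the four-node graph and its weights below are made up), the following sketch runs Dijkstra's algorithm from the destination over reversed links; each node's parent pointer is then its next hop on the optimal path, exactly as the optimality principle predicts:

```python
import heapq

def sink_tree(graph, dest):
    """Shortest paths from every node *to* dest (Dijkstra on reversed edges).
    graph: {node: {neighbor: weight}} with non-negative weights (assumed)."""
    rev = {u: {} for u in graph}
    for u, nbrs in graph.items():
        for v, w in nbrs.items():
            rev.setdefault(v, {})[u] = w
    dist, parent = {dest: 0.0}, {dest: None}
    heap = [(0.0, dest)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in rev.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u   # v's next hop toward dest
                heapq.heappush(heap, (d + w, v))
    return parent  # edges of the sink tree rooted at dest

# Illustrative 4-node network.
g = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
print(sink_tree(g, "D"))  # each node's parent is its next hop to D
```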


Friday, July 19, 2013

What are the goals and properties of a routing algorithm?

Routing requires the use of routing algorithms for the construction of the routing tables.
A number of routing algorithms are available today, such as:
1.   Distance vector routing (the Bellman-Ford algorithm)
2.   Link state routing
3.   Optimized Link State Routing (OLSR)
- In a number of networked applications, many nodes need to communicate with each other via communication channels. 
- A few examples of such applications are telecommunication networks (such as POTS/PSTN, the internet, mobile phone networks, and local area networks), distributed applications, multiprocessor computers, etc. 
- All nodes cannot be connected directly to each other, since doing so would require many high-powered transceivers, wires, and cables. 
- Therefore, the implementation is such that the transmissions of nodes are forwarded by other nodes until the data reaches its correct destination. 
- Thus, routing is the process of deciding where packets have to be forwarded, and forwarding them accordingly.

Properties of Routing Algorithm
- The packets must reach their destination, provided no factors such as congestion prevent this.
- The transmission of data should be quick.
- There should be high efficiency in the data transfer.
- All the computations involved must not be long. They should be as easy and quick as possible.
- The routing algorithm must be capable of adapting to two factors: changing load and changes in topology (including newly added and deleted channels).
- All the different users must be treated fairly by the routing algorithm.
The second and third properties can be achieved using fastest-route or shortest-route algorithms. 
- Graphical representation of the network is a crucial part of the routing process.
- Each network node is represented by a vertex in the graph whereas an edge represents a connection or a link between the two nodes. 
- The cost of each link is represented as the weight of the edge in the graph. 
- There are three typical weight functions, as mentioned below:
1.   Minimum hops: The weight of every edge in the graph is the same.
2.   Shortest path: Each edge carries a constant non-negative weight.
3.   Minimum delay: The weight of every edge depends upon the traffic on its link and is a non-negative value.
However, in real networks the weights are always positive.
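
To connect these weight functions with the distance vector (Bellman-Ford) algorithm listed earlier, here is a minimal single-source sketch; the example graph and its weights are made up:

```python
def bellman_ford(edges, nodes, source):
    """Single-source shortest paths; edges are (u, v, weight) triples.
    Works with any non-negative weight function (hops, distance, delay)."""
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0.0
    for _ in range(len(nodes) - 1):          # relax every edge |V|-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Minimum hops is the special case where every weight is 1.
edges = [("A", "B", 1), ("B", "C", 1), ("A", "C", 3), ("C", "D", 1)]
print(bellman_ford(edges, {"A", "B", "C", "D"}, "A"))
# e.g. {'A': 0.0, 'B': 1.0, 'C': 2.0, 'D': 3.0}
```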

Goals of Routing Algorithms
- The goal of these routing algorithms is to find the shortest path, based upon some specified criteria, whose use results in maximum routing efficiency. 
- Another goal is to use as little information as possible.
- A further goal is to keep the routing tables updated with alternative paths, so that if one path fails another can be used.
- The channel or path that fails is removed from the table. 
- Routing algorithms need to be stable in order to produce meaningful results, but at the same time it is quite difficult to detect the stable state of an algorithm. 
- Choosing a routing algorithm is like choosing different horses for different courses. 
- The frequency of the changes in the network is one thing to be considered. 
Other things to be considered include the cost function to be minimized and whether the routing tables are calculated in a centralized fashion.
- For static networks the routing tables are fixed and therefore they require only simple routing algorithms for calculation. 
- On the other hand, networks that are dynamic in nature require distributed routing algorithms, which are of course more complex.



Monday, April 29, 2013

What is cache memory?


Cache memory is a memory aid that speeds computers up considerably. 
- In a cache, data is stored transparently so that future requests for it can be served faster. 
- A cache may store values that have already been computed, or duplicates of values stored elsewhere in memory. 
- Whenever some data is requested, it is first looked up in the cache memory.
- If the data is found there, it is returned to the processor; this is called a ‘cache hit’. 
- In this case the time taken to access the data is reduced. 
- Such an access is thus faster than one to main memory. 
- The other case is a cache miss, when the required data is not found in the cache.
- The data then has to be fetched or computed from its original source or storage location, which is obviously slower. 
- The overall performance of the system increases in proportion to the number of requests that can be served from the cache.
- To maintain cost efficiency as well as efficient data usage, the cache is kept relatively small compared to main memory. 
- Even so, caches have proven themselves time and again because of their ability to exploit the access patterns of applications that have locality of reference. 
- References exhibit temporal locality if data that was previously requested is requested once again.
- References also exhibit spatial locality if the requested data is stored close to data that was previously requested.

How is cache implemented?

- The cache is implemented by hardware as a block of memory used for temporary storage. 
- Only data that is likely to be accessed again and again is stored here. 
- Caches are used not only by hard drives and CPUs but also by web servers and browsers. 
- A cache is made up of a pool of entries. 
- Each entry holds a datum that is a copy of a datum in the backing store. 
- Each entry is also tagged to specify the identity of that datum in the backing store.
- When a cache client (an operating system, CPU, or web browser) needs a datum that it thinks may be in the backing store, the cache is checked first. 
- If the desired entry is found, it is returned for use; this is a cache hit.
- Similarly, a web browser might look in its local on-disk cache to see if it has the contents of a web page. 
- In this case the URL serves as the tag and the page contents are the datum. 
- The rate of successful cache accesses is known as the hit rate of the cache.
- In the case of a cache miss, the datum is copied into the cache so that future requests for it can be served from there. 
- To make space for this datum, some existing entry in the cache is removed. 
- Which entry is removed is determined by a replacement algorithm. 
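
As a small sketch of these mechanisms (the backing store and its contents are illustrative stand-ins), the snippet below implements an LRU (least recently used) policy, one common choice of replacement algorithm:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny cache with LRU replacement; keys act as tags, values as data."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.store = backing_store      # slow source of truth (a dict here)
        self.entries = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.entries:               # cache hit: fast path
            self.hits += 1
            self.entries.move_to_end(key)     # mark as most recently used
            return self.entries[key]
        self.misses += 1                      # cache miss: slow fetch, then insert
        value = self.store[key]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry
        self.entries[key] = value
        return value

# Temporal locality: repeated keys hit after the first miss.
cache = LRUCache(2, backing_store={"a": 1, "b": 2, "c": 3})
for k in ["a", "b", "a", "c", "a"]:
    cache.get(k)
print(cache.hits, cache.misses)  # 2 hits, 3 misses
```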


Thursday, April 25, 2013

What is the difference between Hard and Soft real-time systems?


- Real-time operating systems are systems developed to serve the real-time requests of applications. 
- They are capable of processing data as it comes in. 
- They do not introduce buffering delays. 
- Thus, the time taken for processing is quite small.
- The scheduling algorithms used by real-time operating systems are quite advanced and dedicated to a small set of applications. 
- Minimal thread-switching latency and interrupt latency are two key characteristics of these kinds of operating systems. 
- For these systems, the amount of time they take to respond matters more than the amount of work they do.
- These systems are expected to produce output with consistent timing. 

Real-time operating systems can be divided into two categories, namely:
  1. Hard real-time operating systems and
  2. Soft real-time operating systems
In this article we discuss these two systems and the common differences between them.
  1. Hard real-time operating systems produce little jitter in their outputs. The jitter produced by a soft real-time operating system is considerably greater than that of its hard real-time counterpart.
  2. What distinguishes them is thus not the main goal but the kind of timing guarantee provided, i.e., hard or soft.
  3. Soft real-time operating systems are designed so that they can usually meet their deadlines, whereas hard real-time operating systems are designed to meet their deadlines deterministically.
  4. Hard real-time systems are also called immediate real-time systems. They are bound to work within strict deadlines. If an application is unable to complete its task in the allotted time, it is said to have failed. Some examples of hard real-time systems are anti-lock brakes, aircraft control systems, and pacemakers.
  5. Hard real-time operating systems are bound to adhere to the deadlines assigned to them; missing a deadline can incur a great loss. For soft real-time operating systems, it is acceptable if a deadline is occasionally missed, as in the case of online databases.
There is also a third, less well known category of real-time operating systems, called the ‘firm RTOS’. These too need to keep to their deadlines; missing one won’t cause any catastrophic effect, but the late result is useless or undesirable.
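
As a purely illustrative sketch (the function, task, and deadline below are assumptions, not taken from any particular RTOS), this is how the three categories treat a missed deadline:

```python
import time

def run_with_deadline(task, deadline_s, system_type):
    """Run a task and react to a deadline miss according to the system type."""
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    if elapsed <= deadline_s:
        return result                  # deadline met in every category
    if system_type == "hard":
        # A miss is a system failure (think anti-lock brakes, pacemakers).
        raise RuntimeError("hard deadline missed: treat as failure")
    if system_type == "firm":
        return None                    # late result is useless, but no catastrophe
    return result                      # "soft": degraded but still usable

# Example: a task measured against a 1 ms deadline in a soft system.
print(run_with_deadline(lambda: sum(range(100000)), 0.001, "soft"))
```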

More about Real time Operating System

- Embedded systems have evolved rapidly and are now present all around us: in digital homes, cell phones, air conditioners, cars, and so on. 
- We rarely recognize the extent to which they have eased our day-to-day lives. 
- Safety is another aspect of our lives for which we depend on these embedded systems. 
- What controls these systems is the operating system. 
- A real-time operating system is what most of these gadgets use. 
- The tasks assigned to a real-time OS always have deadlines. 
- The OS adheres to these deadlines while completing the tasks. 
- If these systems miss a deadline, the results can be very dangerous and even catastrophic. 
- With each passing day the complexity of these systems is increasing, and so is our dependence on them.
- Some examples of real – time operating systems are:
Ø  OSE
Ø  RTLinux
Ø  Windows CE
Ø  LynxOS
Ø  QNX
Ø  VxWorks
- An RTOS is built never to compromise on its deadlines. 


Saturday, January 26, 2013

What are features of QF-Test?


QF-Test is rich in features. It was made available in 2001 and has since gained over 600 customers worldwide in around 50 countries.  


Features of QF-Test

- It is counted among the most professional tools for carrying out regression and functional testing of web applications. 
- The foundation of the tool is well established, and it has earned high efficiency ratings from its users. 
- The tool has been a breakthrough in the automated testing of Java-based applications and web applications that have a graphical user interface (GUI).
- Reusable, modular tests combined with user-friendly handling and a competitive price yield a high ROI, or return on investment. 
- Its power in automated testing and cross-platform testing is well established. 
- In addition, it is quite robust, which makes it suitable for cross-browser testing. 
- Automated load tests and regression tests can be created without much hard work. 
- The documentation of the tests and their reports can be configured very easily. 
- QF-Test makes the recognition of complex dynamic objects reliable and quick.
- It supports all Unix systems as well as Windows platforms. 
- Swing, SWT, Internet Explorer, and Mozilla Firefox are also supported. 
- The interface of QF-Test is quite user-friendly and the documentation is comprehensive. 
- The tool is supported directly and promptly by its authors.
- The tool comes with a user interface that is quite intuitive, and its capture-and-playback feature is excellent. 
- The documentation of QF-Test is quite extensive, with both a manual and a tutorial. 
- Evaluation of the tool can be done free of cost.
Currently, three GUI technologies are supported by QF-Test, namely:
  1. Swing
  2. SWT and
  3. Web
- These three technologies can be combined with one another. 
- Some of the reasons why people prefer QF-Test are:
  1. Clear concept used.
  2. Logical
  3. Comprehensiveness
  4. Ease of use
  5. Good price
  6. Good customer care
  7. A product of Germany
- QF-Test has proved to be a great tool for the creation and maintenance of Swing tests. 
- QF-Test has provided timely support to the IT industry.
- Testing time can be greatly reduced, for example from 21 hours to 3 hours. 
- It supports a number of technologies for testing:
  1. Swing: Web Start, applets, CaptainCasa, ULC
  2. SWT: Eclipse’s Standard Widget Toolkit, including Rich Client Platform (RCP) applications and plug-ins
  3. Web GUIs: AJAX (ExtGWT, RAP, RichFaces, Qooxdoo, GWT, and so on), Web 2.0, and so on
- The Java version required by QF-Test is 1.5 or higher. Platforms supported are:
  1. Windows: Windows 7, Vista, XP, 2000, Server 2003 and 2008
  2. Linux platforms
  3. Solaris
  4. AIX
  5. HP-UX
  6. Mac OS X
- Also supported are:
  1. 32- and 64-bit JDKs: IBM, Excelsior JET, etc.
  2. SWT version 3.3 and above


Friday, January 25, 2013

Explain QF-Test?


QF-Test is the successor of qftestJUI and was first made available in 2001. 

About QF-Test

- QF-Test was developed by QFS (Quality First Software).
- It is a software tool for cross-platform testing and automation of GUI tests. 
- QF-Test is focused on the following:
  1. Java
  2. Swing
  3. SWT
  4. Eclipse plug-ins
  5. RCP applications
  6. Java applets
  7. Java Web Start
  8. ULC
  9. Cross browser test automation
- It also handles dynamic as well as static web-based applications built with HTML, GWT, Qooxdoo, Vaadin, RichFaces, and so on. 
- QF-Test also provides assistance with load testing and regression testing of web applications, and supports all Unix systems and Windows platforms.
- Its first commercial use was in the field of quality assurance, and it is used extensively by software testers. 
- QF-Test is among the most popular capture-and-playback and scripting tools. 
- It has been developed so that the testers and QA people find it very easy to use. 
- The product is quite reliable and robust and also has the ability to support system testing. 
- QF-Test is estimated to have around 600 customers all over the world. 
- It is a professional testing tool for Java and web applications and copes efficiently with automated testing. 
- Modular as well as reusable tests are supported well by the QF–test in various combinations. 
- The product offers a high ROI, i.e., return on investment, because of its user-friendly GUI and affordable price. 
- Cross-browser testing is supported by QF-Test for Mozilla Firefox and Internet Explorer, on both Unix and Windows platforms. 
- The dynamic UI components that have very high complexity can also be reliably recognized by the QF–test. 
- Tests developed with QF-Test can tolerate GUI changes and so do not require much maintenance. 
- QF–test has various in–built modularization and sequential control mechanisms that allow the testers to create sophisticated tests. 
- The test documentation and the reports produced by the QF–test are highly configurable. 
- The tool has been comprehensively documented and provides the perfect support. 
- Because of its intuitive user interface, both the testers and the software developers find it very easy to use it.
- The documentation of the tool is extensive and consists of various components, such as:
  1. Manual
  2. Tutorial
  3. Mailing list
  4. Archive and so on.
- Training for QF-Test is available on the QFS website in the form of webinars. 
- The modularization mechanism of QF-Test enables testers to create large test suites in a concise arrangement. 
- Some users require more advanced control over the application.
- For them, the tool provides access to the program’s internal structures via scripting languages such as Jython and Groovy. 
- Another feature, batch mode, allows a tester to run a group of tests unattended, generating reports in a number of formats such as HTML, XML, and JUnit. 
- This lets the tool be integrated into various frameworks such as Maven, Jenkins, and so on. 


Sunday, December 30, 2012

What are main features of TestComplete?


Many software applications are now written as web-based applications that run in a browser. How effectively these applications are tested varies from organization to organization. 

- The TestComplete automated testing tool offers an answer to this demand. 
- For tests such as regression tests, the required responsiveness can be achieved only through such automated testing tools. 
- Automated testing brings many benefits, including repeatability and speed of test execution. 
- Test automation is known to bring long-term efficiency to a software system or application.
- Developers also get rapid feedback and can carry out unlimited iterations of the tests.
- Reporting can be customized, and finding defects that were missed during manual testing becomes easy.
- However, automation does not always prove to be advantageous.

Features of TestComplete

The TestComplete testing tool comes with certain features, which we shall state now:
  1. Keyword testing: This tool comes with a built-in, keyword-driven test editor whose keyword operations correspond to the appropriate automated testing actions.
  2. Test record and playback: This tool records the key actions required to play back the test. All other actions are discarded.
  3. Full-featured script editor: This is another built-in editor with which test scripts can be written manually. It comes with special plug-ins that provide further assistance.
  4. Script debugging features: This feature lets you pause before every statement to be executed so that you can keep track of what is going on and make changes accordingly.
  5. Access to properties and methods of objects: The names of all visible elements can be read by this tool, including internal elements of applications built with:
a)   Delphi
b)   C++Builder
c)   .NET
d)   WPF
e)   Java
f)    Visual Basic, etc.
This tool also exposes these values to test scripts so that they can be verified and used in the tests.
  6. Unicode support: The tool supports the Unicode character set, enabling the user to test non-ASCII applications that use scripts such as Hebrew, Greek, Arabic, Katakana, and so on.
  7. Issue-tracking support: This tool comes with issue-tracking templates that can be used to create and modify items residing in issue-tracking systems. It currently provides support for the following:
a)   Microsoft Visual Studio 2005, 2008, and 2010 Team System
b)   Bugzilla
c)   AutomatedQA AQdevTeam
  8. Open architecture (COM-based): An open COM-based API forms the basis of TestComplete’s engine. This makes the tool independent of the source language and enables it to read debugger information at run time via TestComplete’s Debug Info Agent.
  9. Test visualizer: This feature captures screenshots during test recording and playback, allowing you to compare the expected and actual screens at run time.
  10. Support for plug-ins: This feature allows third-party vendors to integrate their software systems and applications with TestComplete. 

