
Thursday, August 22, 2013

What is a spanning tree?

The spanning tree is an important concept in both mathematics and computer science. Mathematically, we define a spanning tree T of an undirected, connected graph G as a tree consisting of all the vertices and a subset of the edges of G.
- A spanning tree is a selection of edges from G that forms a tree spanning every vertex.
- This means that every vertex of graph G is present in the spanning tree but there are no loops or cycles. 
- Also, every bridge of the given graph must be present in its spanning tree. 
- Equivalently, a spanning tree is a maximal set of edges of G containing no cycle, or a minimal set of edges that connects all the vertices.
- In graph theory, it is common to compute the MST, or minimum spanning tree, of a weighted graph; a minimal code sketch of this appears below.
- A number of other optimization problems also require minimum spanning trees or other types of spanning trees.
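
Here is a minimal Python sketch of Kruskal's algorithm, one standard way of computing a minimum spanning tree; the graph and edge weights used in the example are hypothetical.

```python
# Minimal sketch of Kruskal's algorithm for a minimum spanning tree.
# Assumes a connected graph given as a list of (weight, u, v) edges.

def find(parent, x):
    # Find the root of x's component (with simple path compression).
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def kruskal_mst(num_vertices, edges):
    parent = list(range(num_vertices))
    mst = []
    for weight, u, v in sorted(edges):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:              # adding this edge creates no cycle
            parent[ru] = rv       # merge the two components
            mst.append((u, v, weight))
    return mst                    # V - 1 edges for a connected graph

# Hypothetical weighted graph on 4 vertices: (weight, u, v)
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal_mst(4, edges))      # [(0, 1, 1), (1, 3, 2), (1, 2, 3)]
```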

The other types of spanning trees include the following:
Ø  Maximum spanning tree
Ø  A minimum spanning tree that spans at least k vertices (the k-minimum spanning tree).
Ø  A spanning tree with at most k edges per vertex, i.e., the degree-constrained spanning tree.
Ø  The spanning tree with the largest number of leaves (this type of spanning tree bears a close relation to the "smallest connected dominating set").
Ø  The spanning tree with the fewest leaves (this spanning tree bears a close relation to the "Hamiltonian path problem").
Ø  Minimum diameter spanning tree.
Ø  Minimum dilation spanning tree.

- One characteristic property of spanning trees is that they do not have any cycles. 
- This means that adding any single edge to the tree creates a cycle. 
- Such a cycle is called a fundamental cycle. 
- For each edge not in the spanning tree there exists a distinct fundamental cycle, so there is a one-to-one correspondence between the edges that are not present in the tree and the fundamental cycles. 
- For a graph G that is connected and has V vertices, there are V-1 edges in its spanning tree. 
- Therefore, for a connected graph with E edges, there are E-V+1 fundamental cycles with respect to any of its spanning trees.
- These fundamental cycles form a basis of the cycle space of the graph. 
- The notion of the fundamental cut set is dual to that of the fundamental cycle.  
- If we delete even one edge from the spanning tree, the vertices are partitioned into two disjoint sets. 
- The set of edges of G whose removal partitions the vertices into the same two disjoint sets is defined as the fundamental cut set. 
- For a given graph G there are V-1 fundamental cut sets, i.e., one corresponding to each spanning tree edge. 
- The relationship between cycles and cut sets follows from the fact that a tree edge belongs to the fundamental cycle of a non-tree edge exactly when that non-tree edge belongs to the tree edge's fundamental cut set. A small sketch of a fundamental cycle follows below.
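
As an illustrative sketch, the fundamental cycle of a non-tree edge can be found by taking the unique tree path between its endpoints and closing it with that edge; the small 4-vertex graph below is hypothetical.

```python
# Illustrative sketch: the fundamental cycle of a non-tree edge is that edge
# plus the unique tree path between its endpoints. A connected graph with
# V vertices and E edges has E - V + 1 such cycles.

def tree_path(tree_adj, start, goal):
    # Depth-first search for the unique path between two vertices of a tree.
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        for nxt in tree_adj[node]:
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return None

# Hypothetical example: spanning tree edges plus one non-tree edge (E = 4, V = 4).
tree_edges = [(0, 1), (1, 2), (2, 3)]
non_tree_edge = (0, 3)

tree_adj = {v: [] for v in range(4)}
for u, v in tree_edges:
    tree_adj[u].append(v)
    tree_adj[v].append(u)

cycle = tree_path(tree_adj, *non_tree_edge)
print(cycle)                 # [0, 1, 2, 3], closed by the non-tree edge (0, 3)
print("cycles:", 4 - 4 + 1)  # E - V + 1 = 1 fundamental cycle for this graph
```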

What is Spanning Forest?

- The spanning forest is the subgraph that generalizes the spanning tree concept to graphs that need not be connected. 
- A spanning forest can be defined as a subgraph containing a spanning tree of each connected component of the graph G; equivalently, it is a maximal cycle-free subgraph.
- The number of spanning trees of a complete graph on n vertices is counted by Cayley's formula, n^(n-2); for example, the complete graph on 4 vertices has 4^2 = 16 spanning trees.
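
Cayley's formula can also be checked numerically with Kirchhoff's matrix-tree theorem, which says that the number of spanning trees equals any cofactor of the graph Laplacian; here is a small Python sketch of that check.

```python
# Sketch: verify Cayley's formula n^(n-2) for the complete graph K_n using
# Kirchhoff's matrix-tree theorem (the number of spanning trees equals any
# cofactor of the graph Laplacian).
import numpy as np

def spanning_tree_count_complete(n):
    # Laplacian of K_n: L = n*I - J (degree n-1 on the diagonal, -1 elsewhere).
    laplacian = (n - 1) * np.eye(n) - (np.ones((n, n)) - np.eye(n))
    # Delete one row and column, then take the determinant of what remains.
    cofactor = laplacian[:-1, :-1]
    return round(np.linalg.det(cofactor))

for n in range(2, 7):
    assert spanning_tree_count_complete(n) == n ** (n - 2)
print("Cayley's formula holds for n = 2..6")
```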



Friday, July 19, 2013

What are the goals and properties of a routing algorithm?

Routing requires the use of routing algorithms for the construction of the routing tables.
A number of routing algorithms are available today, such as:
1.   Distance vector algorithm (Bellman-Ford algorithm)
2.   Link state algorithm
3.   Optimized link state routing algorithm (OLSR)
- In many networked applications, a number of nodes need to communicate with each other via communication channels. 
- A few examples of such applications are telecommunication networks (such as POTS/PSTN, the Internet, mobile phone networks, and local area networks), distributed applications, multiprocessor computers, etc. 
- Not every node can be connected directly to every other node, since doing so would require many high-powered transceivers, wires, and cables. 
- Therefore, the implementation is such that a node's transmissions are forwarded by other nodes until the data reaches its correct destination. 
- Thus, routing is the process of determining where packets have to be forwarded, and then forwarding them.

Properties of Routing Algorithm
- The packets must reach their destination if there are no factors preventing this such as congestion.
- The transmission of data should be quick.
- There should be high efficiency in the data transfer.
- All the computations involved must not be long. They should be as easy and quick as possible.
- The routing algorithm must be capable of adapting to two factors, i.e., changing load and changes in topology (this includes newly added channels as well as deleted ones).
- All the different users must be treated fairly by the routing algorithm.
- The second and third properties can be achieved using fastest-route or shortest-route algorithms. 
- Graphical representation of the network is a crucial part of the routing process.
- Each network node is represented by a vertex in the graph whereas an edge represents a connection or a link between the two nodes. 
- The cost of each link is represented as the weight of the edge in the graph. 
- There are 3 typical weight functions as mentioned below:
1.   Minimum hops: The weight of every edge in the graph is the same.
2.   Shortest path: The weight of each edge is a fixed non-negative value.
3.   Minimum delay: The weight of each edge depends upon the traffic on its link and is a non-negative value.
However, in real networks the weights are always positive.
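
As an illustration of the shortest-path computation used in link-state routing, here is a minimal Python sketch of Dijkstra's algorithm; the 4-node network and its weights are hypothetical, and non-negative weights are assumed as stated above.

```python
# Minimal sketch of Dijkstra's shortest-path algorithm, the computation at
# the heart of link-state routing. Assumes non-negative edge weights and a
# network given as an adjacency dict {node: {neighbor: cost}}.
import heapq

def shortest_paths(graph, source):
    dist = {source: 0}
    prev = {}                                # predecessor map for building routes
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                         # stale queue entry, skip it
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    return dist, prev

# Hypothetical 4-node network; weights could model hop count or delay.
network = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 6},
    "C": {"A": 4, "B": 2, "D": 3},
    "D": {"B": 6, "C": 3},
}
dist, prev = shortest_paths(network, "A")
print(dist)   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```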

Goals of Routing Algorithms
- The goal of these routing algorithms is to find the shortest path, based on some specified cost relationship, so as to achieve the maximum routing efficiency. 
- Another goal is to use as little information as possible.
- A further goal of the routing algorithm is to keep the routing tables updated with alternative paths so that if one path fails, another can be used.
- The channel or the path that fails is removed from the table. 
- The routing algorithms need to be stable in order to provide meaningful results, but at the same time it is quite difficult to detect the stable state of an algorithm. 
- Choosing a routing algorithm is like choosing different horses for different courses. 
- The frequency of the changes in the network is one thing to be considered. 
Other things to be considered include the cost function that needs to be minimized and whether the routing tables are calculated in a centralized fashion.
- For static networks the routing tables are fixed and therefore they require only simple routing algorithms for calculation. 
- On the other hand, networks that are dynamic in nature require distributed routing algorithms, which are of course more complex.



Thursday, June 27, 2013

What is the difference between a passive star and an active repeater in fiber optic network?

There are two important components of a fiber optic network, namely the passive star coupler and the active repeater. 

Passive Star in Fiber Optic Network
- Passive star couplers are single mode fiber optic couplers with reflective properties.  
- These couplers are used for optical local area networking at very high speeds. 
- These couplers are made from very simple components such as mirrors and 3 dB couplers. 
- Besides this, these star couplers save a lot of optical fiber when compared to their transmissive counterparts. 
- They are free of multi-path effects, which avoids interference. 
- A fiber optic network may consist of any number of passive star couplers and each of them is capable of connecting a number of users. 
- The input and output from every passive star coupler is given to the output and input of an active coupler. 
- The round trip transmission time is stored by the active star coupler. 
- When it receives a signal from a passive star coupler, it stops the output to that coupler for the duration of the signal.
- It also inhibits the incoming data from all the other passive star couplers for the round trip transmission delay plus signal duration. 
- The purpose of a star coupler is to take one input signal and split it into a number of output signals. 
- In the telecommunications industry and in fiber optic communication, this coupler is used as a passive optical device in network applications. 
- If an input signal is introduced to one of the input ports, it is distributed to all of the output ports of the coupler. 
- As per the construction of the passive star coupler, the number of ports it has is a power of 2. 
- For example, a two-port coupler, also called a directional coupler or splitter, has 2 input ports and 2 output ports.
- A four-port coupler has 4 input ports and 4 output ports, and so on. 
- Digital Equipment Corporation also sold a device by the name of star coupler, which was used for interconnecting links and computers through coaxial cable instead of optical fiber. 

Active Repeater in Fiber Optic Network 
- An active repeater is an important telecommunications device used for retransmitting the signal it receives at a higher level or higher power, or onto the other side of an obstruction, so that long distances can be covered. 
- Historically, a repeater was an electro-mechanical device that helped regenerate telegraphy signals. 
- Today it may be defined as a device that amplifies the input signal, reshapes it, and re-times it for retransmission. 
- A re-generator is a repeater that can perform the re-timing operation. 
Repeaters just tend to amplify the physical signal without interpreting the data transmitted by the signal. 
- Repeaters operate at the 1st layer, i.e., the physical layer. 
Repeaters are employed for boosting the signals in optical fiber lines as well as in twisted pair and coaxial cables. 
- When a signal travels through a channel, it gets attenuated with the distance and time because of the energy loss (dielectric losses, conductor resistance etc.). 
- When light travels in optical fibers, it is scattered and absorbed and hence attenuated. 
- Therefore, in long fiber lines, repeaters are installed at proper intervals for regenerating and strengthening the signal. 
Repeater in optical communication performs the following functions:
Ø  Takes the input signal
Ø  Converts it into an electrical signal
Ø  Regenerates it
Ø  Converts it back into an optical signal
Ø  Re-transmits it

- These repeaters are usually employed in submarine as well as transcontinental communication cables, as the loss over such distances would otherwise be unacceptable.  


Saturday, January 19, 2013

What is meant by Statistical Usage Testing?


Statistical usage testing is a testing process aimed at demonstrating the fitness for use of a software system or application.
The test cases chosen for carrying out statistical usage testing mostly consist of usage scenarios, which is why the technique is named statistical usage testing. Software quality is ensured by extensive testing, but that testing has to be quite efficient. Testing expenditure covers about 20 – 25 percent of the overall cost of a software project. To reduce the testing effort, the available testing tools can be deployed, since they can create automated tests. But usually the important tests require manual intervention, with the tester needing to think about the usage as well as the behavior of the software. This is largely a repetition of the work that was done during the requirements analysis phase.

About Statistical Usage Testing

- A usage model forms the basis for the creation of tests in statistical usage testing.
- A usage model is actually a directed usage graph, much like a state machine, and it consists of various states and transitions. 
- Every transition has a probability associated with it: the probability that the transition is traversed when the system is in the state at the beginning of the transition arc. 
- Therefore, the probabilities of the outgoing transitions sum to unity for every state.
- Every transition can be associated with an event, and further with parameters, that are known to trigger the particular transition. 
- Such event associated transitions can be further related to certain conditions called the guard conditions. 
- These conditions imply that the transition occurs only if the value of the event parameter satisfies the condition.
- For assigning probabilities to the transitions, 3 approaches have been defined as follows:
  1. Uninformed approach: In this approach, the same probability is assigned to all exit arcs of a state.
  2. Informed approach: In this approach, a sample of user event sequences is used for calculating suitable probabilities. The sample is captured from either an earlier version of the software or its prototype.
  3. Intended approach: This approach is used for shifting the focus of the test to certain state transitions and for modeling the hypothetical users.
- According to a property termed the Markov property, the transition probabilities depend only on the current state. 
- By the same property, they are independent of the history. 
- This implies that the probabilities must be fixed numbers. 
- A system based upon this property is termed a Markov chain, and it allows some analytical descriptions to be derived. 
- The usage distribution is one such description. 
- It gives, for every state, its steady-state probability, i.e., its expected rate of appearance.
- Each state is associated with one or another part of the software system or application, and the usage distribution shows which part of the software attracts more attention from the tests. 
- Some other important descriptions are:
  1. Expected test case length
  2. Number of test cases required for the verification of the desired reliability of the software system or application.
- The idea of usage model generation can be extended by handling guard conditions and enabling non-deterministic behavior of the system depending on the state of the system's data. 
- All this helps in applying statistical usage testing to a wide range of systems. 
- The use cases of the Unified Modeling Language (UML) can define the top-level structure of the usage model. A small sketch of test generation from a usage model follows below.
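
Here is a minimal Python sketch of how a test case might be generated from such a usage model by a random walk; the states, events, and probabilities are hypothetical, and the check that each state's outgoing probabilities sum to unity mirrors the constraint described above.

```python
# Sketch of statistical test case generation from a usage model treated as a
# Markov chain. The states, events, and probabilities below are hypothetical.
import random

# Usage model: state -> list of (next_state, event, probability)
usage_model = {
    "Start":    [("LoggedIn", "login", 1.0)],
    "LoggedIn": [("Browsing", "open_catalog", 0.7), ("End", "logout", 0.3)],
    "Browsing": [("Browsing", "view_item", 0.5),
                 ("LoggedIn", "back", 0.3),
                 ("End", "logout", 0.2)],
    "End":      [],
}

# The outgoing probabilities of every non-final state must sum to unity.
for state, transitions in usage_model.items():
    if transitions:
        assert abs(sum(p for _, _, p in transitions) - 1.0) < 1e-9

def generate_test_case(model, start="Start", max_steps=50):
    # Random walk over the usage model; the sequence of events is one test case.
    state, events = start, []
    for _ in range(max_steps):
        transitions = model[state]
        if not transitions:
            break
        r, cumulative = random.random(), 0.0
        for next_state, event, prob in transitions:
            cumulative += prob
            if r <= cumulative:
                events.append(event)
                state = next_state
                break
    return events

print(generate_test_case(usage_model))  # e.g. ['login', 'open_catalog', 'view_item', 'logout']
```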


Saturday, December 29, 2012

What is a TestComplete automated testing tool?


SmartBear Software has come up with a complete testing solution which aims at providing a platform on which testers can create software tests of high quality. This testing solution is popularly known by the name of the TestComplete automated testing tool. This tool allows you to carry out the following tasks with the tests:
  1. Record the tests
  2. Manually script the tests using the keyword operations
  3. Automated playback
  4. Error logging
The TestComplete automated testing tool can be used with a number of application types, a few of which are mentioned below:
  1. Web
  2. Windows
  3. WPF
  4. HTML5
  5. Flash
  6. Flex
  7. Silverlight
  8. .NET
  9. Java and so on.
- It is used to automate testing such as functional testing, front-end user interface testing, database or back-end testing, and so on. 
- The TestComplete tool is effectively used for the creation as well as automation of a number of different software test types. 
- The tests can be recorded and playback can be done whenever required. 
- A test being performed manually is recorded and played over and over again in the form of an automated test. 
- The users have the option to modify the recorded tests whenever they want to so as to create new tests and make enhancements to the existing ones by adding use cases to them. 
- The following are the operating systems supported by TestComplete:
  1. Windows 2000, XP
  2. Server 2003
  3. Server 2008
  4. Vista
  5. Windows 7
- The following are the testing types that can be carried out using the TestComplete testing tool:
  1. GUI or functional testing
  2. Regression testing
  3. Unit testing
  4. Web testing
  5. Keyword testing
  6. Load testing and functional testing of the web services
  7. Distributed testing
  8. Manual testing
  9. Data driven testing
  10. Coverage testing
- It comes with support for the following languages:
  1. JScript
  2. VBScript
  3. C++Script
  4. DelphiScript
  5. C#Script
- TestComplete is compatible with both 32-bit and 64-bit Windows applications. 
- Extended support and access is provided for internal objects, their properties, and their methods for the following application types:
  1. .NET: this further includes VB.NET, VCL.NET, JScript.NET, C#, C# Builder, Perl, Python, etc.
  2. Java: this is inclusive of SWT, AWT, WFC, Swing, etc.
  3. WPF
  4. Sybase PowerBuilder
  5. Microsoft FoxPro
  6. Microsoft Access
  7. Microsoft InfoPath
  8. Web browsers such as Netscape Navigator, Mozilla Firefox, Internet Explorer, etc.
  9. Visual C++
  10. Visual Basic
  11. C++ Builder
  12. Delphi
  13. Adobe Flex
  14. Adobe Flash
  15. Microsoft Silverlight
  16. Adobe AIR
- The TestComplete tool has received several awards for its efficient performance, a few of which we list below:
  1. Software Development Jolt awards by Software Development magazine
  2. ASP.NET Pro readers' choice award
  3. Delphi Informant readers' choice awards
  4. ATI automation honors
  5. Windows IT Pro editors' best and community choice award
- The TestComplete automated software testing tool aims at the identification of all the defects existing in a software system or application. 
- It works by exercising and evaluating a software product's components through manual or automated means so that it can be verified whether or not the product satisfies the end users' requirements. 
- TestComplete has been judged based upon factors such as recording efficiency, capability of script generation, data-driven testing, test result reports, execution speed, re-usability, playback of the scripts, ease of learning, and cost.


Tuesday, November 27, 2012

How to generate graphs for analyzing the testing process in test director?


The graphs that are created during the TestDirector testing process let you keep track of the progress of the test plan, test runs, defect tracking, requirements, and so on. Such graphs can be generated at any point of time during the process and also from any of the TestDirector modules. The graphs created by TestDirector are based upon the default settings; however, they can be customized by the user.
A project consists of data of different types. The graphs that you create using TestDirector can help you a great deal in analyzing the relationships between these different types of data. Each of the TestDirector modules comes with a number of graph generating options. After you are done with generating the graph, you can customize its various properties so that it comes out exactly as per your specifications and displays the information you want, in the way you want. 

Now we shall mention the steps by which you can generate a defects graph showing a summary of the defects by status as well as priority level. 

Steps for generating Defects Graph

Follow the steps:
  1. Click on the Defects tab to open the Defects module of TestDirector. The defects will be displayed in the defects grid.
  2. Now, to choose a graph, go to the Analysis menu, then Graphs, then Summary, then the 'Group by Status' option. This will open up a defects summary graph, which is grouped by status by default.
  3. Next you need to clear the default filter. Clicking on the Filter button will do this for you. The Filter dialog box will open up. You will see that the 'Detected By' field is set to the current user name by default. Here, click the Clear button and the applied filter will be removed by TestDirector.
  4. If you want to define a filter for viewing the defects with high to urgent priority, click the filter condition box for the Priority field in the Filter dialog box. Clicking on the browse button will open up the Select Filter Condition dialog box. Then select the required logical expression in the right pane and select the level in the left pane. Click OK to save the settings and close this dialog box.
  5. Next, to define a filter for viewing the defects that are not closed, click the filter condition box for the Status field. Again open the Select Filter Condition dialog box by clicking on the browse button. Select the 'Not' logical expression and select Closed in the left pane. Click OK to close this box and once again click OK to close the Filter dialog box.
  6. To set the X axis of the graph, select Priority on the right side of the window to view the number of defects according to priority.
  7. Clicking on the refresh button will refresh the graph i.e., a new graph will be displayed.
  8. For displaying additional defect details click on a bar segment of the graph. A drill down results dialog box will display the defects related to that bar segment. Close this dialog box by clicking on the close button.
  9. There are various graph views available such as the data grid view and pie chart. Clicking on the corresponding options will display the graph as a pie chart, grid and so on.
  10. Close the graph and click on the back button to go back to the defects module. 


Tuesday, October 9, 2012

What is a Test Object Model in QTP?


The test object model is an important concept of QuickTest Professional to be understood. In this article we focus on the test object model of QuickTest Professional itself. 


What is a Test Object Model?

- The test object model is a large set of object types, or classes, which are used by QuickTest Professional to represent the objects present in the software system or application.
- There is a list of properties associated with every test object class. 
Using the properties from this property list, the objects belonging to that particular class can be uniquely identified. 
- Also, each test object class defines a set of relevant methods that QuickTest Professional can record for the object. 

First let us clear up a few terms associated with the test object model, one by one:

1. Test object: 
This is the object that is created by QuickTest Professional in the test or test component as a representation of the actual object present in the AUT, or application under test. The information regarding the object is stored by QuickTest Professional since it is required later for many purposes, such as identifying the object and checking its behavior during the run session.

    2. Run-time object: 
    This is the actual object present in the application under test. It is on this object that the various methods are performed during the run session.
Whenever the user carries out an operation on an object in the application, the following steps are taken by QuickTest Professional:
a) Identification of the QuickTest Professional test object class that represents the object upon which the user performed the desired operation.
b) Creation of an appropriate test object based upon the identification in the previous step.
c) Capturing of the current values of the properties of the object residing in the application under test, and preparing a list accordingly. This list is then saved along with the test object.
d) Giving of a unique name to the test object, on the condition that it should reflect the value of one of the object's prominent properties.
e) Recording of the operations that the user carried out on the test object, by making use of the appropriate QuickTest Professional test object method.

There are certain points about the test object model which are always helpful:
  1. Each and every test object method that is executed during the recording session forms a separate step in the recorded test. When the command for execution of the test is encountered, this recorded test object method is played upon the run-time object.
  2. The source from which the properties of the object are captured is the object itself. These properties are important since their values are used for the identification of the run-time objects while a run session is in progress.
  3. The properties of the objects have a tendency to change during the run session, and so this can present some difficulty while matching the objects with the description. To avoid such a situation you have the option to manually modify the test object properties while designing the test component or during a run session. Sometimes even regular expressions can be used as a substitute for the identification of the property values.
  4. The test object property values can be viewed as well as modified and stored through the object repository dialog box. 


Sunday, October 7, 2012

How does QTP recognize Objects in AUT?


QuickTest Professional, unlike WinRunner, comes with two types of object identification mechanisms, as mentioned below:
  1. The usual or normal object identification mechanism, and
  2. The smart identification mechanism

What is the Usual or Normal Object Identification Mechanism?

- In the usual object identification routine, the first step of QuickTest Professional is to learn the description of the object as provided by the user before the start of the test. 
- The description of the object provided by the user consists of the properties of the object. 
- All the objects present in the software system or AUT (application under test) are matched one by one with this physical description.
- It is checked how many properties of each object match the properties mentioned in the description. 
- This method is the simpler of the two above-mentioned object identification routines. 

Now what if this usual object identification mechanism fails to identify the object? 
The alternative here is the second object identification mechanism i.e., the smart identification mechanism. 

Why does the normal method fail?

- The normal method fails when the values of the object properties start changing dynamically, which makes it difficult for QuickTest Professional to track the object. 
- Another situation in which the normal identification mechanism can fail is when QuickTest Professional finds not one but several objects in the application under test matching the properties mentioned in the description. 
- In such a case QuickTest Professional erases the learned description of the object from its memory and calls upon the smart identification mechanism for the identification of that particular object. 

Let us see a comparison between the smart identification mechanism and the normal identification mechanism:
  1. Smart identification is more complex than the usual one.
  2. Smart identification is more flexible than the usual one.

What is the Smart Identification Mechanism?

- Smart identification mechanism is so reliable that it can work even if the currently provided description of the object fails.
- To get the best out of the smart identification mechanism, one needs to configure it properly in a logical way. 
- Smart identification mechanism is driven by two different sets of properties as described below:
  1. Base filter properties: As the name suggests, these are the base or fundamental properties belonging to a particular test object class. The values of these fundamental properties cannot change unless the properties of the original object are changed.
  2. Optional filter properties: The remaining properties, i.e., all the properties other than the base filter properties, are grouped under this category of optional filter properties. These properties also do not change frequently, but they can sometimes be ignored, i.e., when they no longer hold or are not applicable, which is why they have been named optional filter properties.

How does the smart identification process work?

- The description given by the user is erased from the memory of QuickTest Professional and a list of objects called the candidate list is created.
- Objects or candidates in this list match at least one property in the property list. 
- Now, the base filter properties are used for cutting down on the list of the object candidates. 
- The list is narrowed down until only one object remains, the one matching the greatest number of properties in the property list. 
- Sometimes it may happen that even after reaching this stage QuickTest Professional does not find a single matching object. 
- In such a case QuickTest Professional makes use of an ordinal identifier in addition to the learned description; a conceptual sketch of this kind of candidate filtering follows below. 
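
The following is a conceptual Python sketch, not QuickTest Professional's actual code or API, of how a smart-identification style mechanism might narrow a candidate list using base and optional filter properties; the property names and candidate objects are hypothetical.

```python
# Conceptual illustration (not QTP's actual implementation) of narrowing a
# candidate list with base and optional filter properties. All names here
# are hypothetical.

def best_match(candidates, base_props, optional_props):
    # Keep only candidates that satisfy every base filter property.
    filtered = [c for c in candidates
                if all(c.get(k) == v for k, v in base_props.items())]
    if not filtered:
        return None

    # Among those, score each candidate by how many optional properties match.
    def score(candidate):
        return sum(candidate.get(k) == v for k, v in optional_props.items())

    best = max(score(c) for c in filtered)
    top = [c for c in filtered if score(c) == best]
    # A single best match identifies the object; a tie would need something
    # like an ordinal identifier (e.g. on-screen order) to break it.
    return top[0] if len(top) == 1 else None

candidates = [
    {"class": "WebButton", "name": "Submit", "html_tag": "INPUT"},
    {"class": "WebButton", "name": "Cancel", "html_tag": "INPUT"},
]
print(best_match(candidates,
                 base_props={"class": "WebButton"},
                 optional_props={"name": "Submit"}))
# -> {'class': 'WebButton', 'name': 'Submit', 'html_tag': 'INPUT'}
```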

