
Friday, March 22, 2013

What is an Artificial Neural Network (ANN)?


- An artificial neural network, or ANN (often simply called a neural network), is a mathematical model inspired by biological neural networks. 
- The network consists of several interconnected artificial neurons. 
- The model follows a connectionist approach to computation and processes information accordingly. 
- In many cases, the neural network acts as an adaptive system that can change its structure during a learning phase. 
- These networks are particularly useful for finding patterns in data and for modeling the complex relationships that exist between inputs and outputs. 
- An analogy to an artificial neural network is the network of neurons in the human brain. 
- In an ANN, the artificial nodes are termed neurons, or sometimes neurodes, units or ‘processing elements’. 
- They are interconnected in such a way that they resemble a biological neural network. 
- To date, no formal definition of the artificial neural network has been agreed upon. 
- These processing elements, or neurons, exhibit a complex global behavior. 
- The connections between the neurons and their parameters are what determine this behavior.
- Certain algorithms are designed to alter the strength of these connections in order to produce the desired flow of signals. 
- The ANN operates on these algorithms. 
- As in biological neural networks, functions in an ANN are performed in parallel and collectively by the processing units.
- There is no strict delineation of the tasks assigned to different units. 
- These neural networks are employed in various fields such as:
  1. Statistics
  2. Cognitive psychology
  3. Artificial intelligence
- There are other neural network models that emulate the biological central nervous system (CNS) and are part of the following:
  1. Computational neuroscience
  2. Theoretical neuroscience
- Modern software implementations of ANNs favor a more practical approach over the biologically inspired one. 
- This practical approach is based on signal processing and statistics; the biologically inspired approach has been largely abandoned. 
- Often, parts of these neural networks serve as components in larger systems that combine adaptive and non-adaptive elements.
- While the practical approach is better suited to solving real-world problems, the biologically inspired approach has more in common with the connectionist models of traditional artificial intelligence. 
- What the two share is the principle of distributed, non-linear, local and parallel processing and adaptation. 
- The use of neural networks marked a paradigm shift during the late eighties. 
- This shift was from high-level artificial intelligence (expert systems) to low-level machine learning (dynamical systems). 
- These models are very simple and define functions such as:
f: X → Y
- Three types of parameters are used for defining an artificial neural network:
a)   The interconnection pattern between neuron layers
b)   The learning process
c)   The activation function
- The second parameter updates the weights of the connections, and the third converts the weighted input into an output. 
- Learning is the aspect of neural networks that has attracted the most interest. 
- There are three major learning paradigms offered by ANNs:
  1. Supervised learning
  2. Unsupervised learning
  3. Reinforcement learning
- Training a network means selecting, from the set of allowed models, the one that best minimizes the cost.
- A number of algorithms are available for training; most of them employ some form of gradient descent.
- Other available methods include simulated annealing, evolutionary methods and so on.
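
As a concrete illustration, below is a minimal sketch in C of a single artificial neuron: a weighted sum of inputs passed through an activation function. The sigmoid activation and all numeric values are assumptions chosen for the example; a learning algorithm such as gradient descent would normally adjust the weights and bias during training.

#include <math.h>
#include <stdio.h>

/* Sigmoid activation: converts the weighted input into an output in (0, 1). */
static double sigmoid(double x) {
    return 1.0 / (1.0 + exp(-x));
}

/* A single artificial neuron: weighted sum of inputs plus a bias,
   passed through the activation function. */
static double neuron(const double *inputs, const double *weights,
                     double bias, int n) {
    double sum = bias;
    for (int i = 0; i < n; i++)
        sum += weights[i] * inputs[i];
    return sigmoid(sum);
}

int main(void) {
    /* Illustrative values only; a learning algorithm would adjust
       the weights and bias during a training phase. */
    double inputs[3]  = { 0.5, -1.2, 0.3 };
    double weights[3] = { 0.8,  0.4, -0.6 };
    double output = neuron(inputs, weights, 0.1, 3);
    printf("neuron output: %f\n", output);
    return 0;
}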


Wednesday, March 20, 2013

What are components of autonomic networking?


The concept of autonomic systems is derived from a biological entity called the autonomic nervous system (ANS). In the human body, this system is responsible for functions such as blood pressure regulation, circulation, respiration and emotive response. 
In this article, we discuss the various components of autonomic networking.

Components of Autonomic Networking

Autognostics: 
- This category of autonomic components includes capabilities such as awareness, self-discovery and self-analysis. 
- With these capabilities, an autonomic system is able to maintain a high-level view of itself. 
- In other words, it represents the perceptual sub-systems that gather, analyze and report on the states and conditions of the system. 
- These components give the system a basis for responding to events and validating its decisions. 
- In simple words, autognostics provides self-knowledge. 
- A rich autognostic component may provide various perceptual senses. 
- In autonomic systems, models of both the external and internal environments are embedded, through which perceived threats and states can be assigned a relative value. 
- In autonomic networking, inputs from the following are taken to define the state of the network:
a) Various network elements such as network interfaces and switches (including their current state, specification and configuration)
b) End hosts
c)  Traffic flows
d) Logical diagrams
e) Design specifications
f)   Application performance data
- This component interoperates with the other components of the autonomic system.

Configuration management: 
- This component is responsible for the interactions that take place among the elements and interfaces.
- It includes an accounting capability that makes it possible to track configurations over time and under various circumstances. 
- Metaphorically, it acts as the memory of the autonomic system. 
- Provisioning and remediation of the network are applied through configuration settings.
- The settings can also be applied selectively to affect performance and to restrict or grant access.
- Traditionally, these actions have been taken by human engineers. 
- With few exceptions, interface settings are configured either manually or through automated scripts. 
- The dynamic population of devices is maintained implicitly.
- This component must be capable of operating on all devices and of recovering old configuration settings. 
- There can be situations where a state becomes unrecoverable. 
Therefore, the sub-system must be capable of assessing the consequences of changes before they are issued.

Policy management: 
- This component is inclusive of the following:
a)   Policy specification
b)   Deployment
c)   Reasoning over the policies
d)   Update of policies
e)   Maintenance of the policies
f)    Enforcement
- The reasons for including this component are:
a)  Configuration management
b)  Definition of the roles and relationships
c)  Establishment of trust and reputation
d)  Description of business processes
e)  Definition of performance
f) Constraints on behavioral issues such as privacy, resource access, collaboration and security.
- It represents a model of ideal behavior and of the environment, describing what constitutes effective interaction.
- To define the constituents of a policy, it is important to know everything that is involved in its management.

Autodefense: 
- The mechanism presented by this component is both dynamic and adaptive in nature.
- It has been developed to keep the network infrastructure safe from malicious attacks. 
- Further, it prevents illegal use of the infrastructure for attacking other technological resources. 
- This component is capable of striking a balance between performance objectives and threat management actions. 
- It can be compared to the immune system of the human body.

Security: 
The structure provided by the security component is responsible for defining and enforcing the relationships between the following:
a)   Roles
b)   Content
c)   Resources


Tuesday, July 3, 2012

What are the advantages and disadvantages of random testing?



Ever heard about random testing? It would not be surprising if your answer is no, since this software testing methodology is rarely used. This article is dedicated to random testing, its advantages as well as its disadvantages. 

Random testing is a kind of functional testing. It is used when the time needed to write and run tests is too long, or when the problem is too complex for every test case to be executed.

Advantages and Disadvantages of Random Testing


- One advantage of random testing is that you can rely on the assertions in the program code.
- Another advantage is that, if you have a selection of randomly generated tests, you can make inferences about the reliability of the application in production. 
- One of the big disadvantages of random testing is that you need a way to know when a test fails, as the system gives no indication by itself. 
- To carry out random testing you therefore require an oracle. 
An oracle is a mechanism that tells you, for a given input, whether the output produced by the system is correct. Without one, you can only throw random inputs at the application code, possibly from multiple threads, and if no error or fault occurs you know that the system did not crash, not that it behaved correctly.
Another disadvantage of random testing is that very often you will have two distinct implementations of the same specification, namely:
  1. The golden model, and
  2. The implementation under test.
- In such cases the test is declared a pass if and only if both implementations agree to a defined accuracy (see the sketch after this list). 
- When you decide to carry out random testing, you first need to make sure that the tests you are going to use are sufficiently random and that they cover the overall functionality of the software system or application.

- Another disadvantage is that random testing is not as efficient as directed testing. The advantage, however, is that generating test cases for random testing takes far less time than creating a set of directed tests. 
Once you have programmed your random test generator, it can work 24 hours a day generating a whole lot of new tests. 
- Testers often face a conflict over whether to choose random testing or functional testing. 
- Here, it becomes necessary to know how many defects a technique can dig out. 
However, random testing proves useful even in situations where few defects are discovered per time interval, since it can work without any manual intervention. 
- Usually, the above-mentioned processes, i.e. random testing and functional testing, are found together in combination rather than alone. 
- The usage also depends on the software system or application under test. 
- The test cases used in such combined testing are termed directed random tests, since test cases can be classified on the basis of their randomness or functionality. 
- Of all the tests, very few are 100 percent random, and those usually are not of the interesting kind. 
- Whenever a random test contains a large number of mutually constrained random elements, it becomes difficult to avoid thrashing, which also counts as a disadvantage. 
- Certain languages, such as e and Vera, have been developed for defining test cases for random testing. 
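
To make the golden-model idea concrete, here is a hedged sketch in C: a slow but obviously correct reference implementation of integer square root is compared against a faster implementation under randomly generated inputs. The function names and the fixed seed are choices made for this example only.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Golden model: a simple, trusted reference implementation. */
static unsigned golden_isqrt(unsigned n) {
    unsigned r = 0;
    while ((r + 1) * (r + 1) <= n)   /* slow but obviously correct */
        r++;
    return r;
}

/* Implementation under test (here just the library routine). */
static unsigned fast_isqrt(unsigned n) {
    return (unsigned)sqrt((double)n);
}

int main(void) {
    srand(12345);                    /* fixed seed keeps the run repeatable */
    for (int i = 0; i < 100000; i++) {
        unsigned input = (unsigned)rand() % 1000000;
        unsigned expected = golden_isqrt(input);
        unsigned actual   = fast_isqrt(input);
        if (expected != actual) {    /* the two implementations must agree */
            printf("FAIL: input=%u expected=%u actual=%u\n",
                   input, expected, actual);
            return 1;
        }
    }
    printf("all random tests passed\n");
    return 0;
}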


Thursday, June 28, 2012

What is meant by decision table testing and when it is used?


Heard of decision table testing before? The concept is rarely mentioned, since it is not used by testers very often. This article focuses on decision table testing and when it is used. 

"Decision table testing proves to be a very handy software testing methodology which comes to the tester’s rescue whenever a combination of inputs is to be dealt with and different results are produced". 

To understand this concept, take the example of two binary inputs A and B. You will get 4 different combinations of these two inputs, which will produce up to 4 different results depending on the operation performed on them. If you observe that some of these outputs are the same, then you can select any one of them, along with the outputs that differ, for testing. 

With a small number of inputs you won't realise the importance of this testing technique, since a normal testing technique feels sufficient. But with a large number of inputs, the significance of decision table testing becomes quite clear. The expression below gives the possible number of input combinations:
2^n,  where n stands for the number of inputs.
Let us take n = 10: the number of possible input combinations is already 1024. 
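
A short sketch in C of how these combinations can be enumerated: each input is treated as one bit of an integer, so counting from 0 to 2^n - 1 visits every combination. This is only an illustration of the growth of 2^n, not a specific decision-table tool.

#include <stdio.h>

int main(void) {
    /* 2^n grows quickly: for n = 10 there are already 1024 combinations. */
    int n = 10;
    printf("%d binary inputs -> %lu combinations\n", n, 1UL << n);

    /* Each integer from 0 to 2^n - 1 encodes one combination of n binary
       inputs: bit i gives the value of input i. Shown here for n = 3. */
    int small_n = 3;
    for (unsigned c = 0; c < (1U << small_n); c++) {
        for (int i = small_n - 1; i >= 0; i--)
            printf("%u", (c >> i) & 1U);
        printf("\n");
    }
    return 0;
}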

What is Decision Table Testing?


- A decision table is a table that shows all the different possible combinations of the supplied inputs along with their corresponding outputs. 
- Decision table testing is one of the black box testing techniques. 
- The technique is widely used in web applications; however, it has limited scope when it comes to equivalence partitioning and boundary value analysis.
- In boundary value analysis and equivalence partitioning, decision table testing can be applied only under specific conditions.
- Mostly, decision table testing is used for testing rules and logic. 
- Sometimes, it is also used to evaluate complex business rules. 
- These complex rules are broken down into simple decision tables. 

Advantages of Decision Table Testing


Below mentioned are some of the advantages of the decision table testing:
  1. With decision table testing you get a framework that facilitates complete and accurate processing of rules and logic.
  2. Decision table testing helps identify test scenarios faster because of its simple and accurate tabular representation.
  3. Decision tables are quite easy to understand.
  4. Decision tables require little maintenance, and updating their contents is also very easy.
  5. With a decision table you can verify whether or not you have checked all possible test combinations.

What portions are defined for decision table testing?


- Of all the black box testing methods, decision table testing is quite rigorous. 
- Nonetheless, decision tables provide a compact and precise way of modelling complex logic. 
- The 4 portions below are defined for a typical decision table:
  1. Stub portion,
  2. Entry portion,
  3. Condition portion, and lastly
  4. Action portion.
- A “rule” is a column in the entry portion; it indicates which actions are to be taken for the combination of conditions indicated in the condition portion of the table.
- In some decision tables all the conditions are binary; such decision tables are called “limited entry decision tables”. 
- On the contrary, if the conditions can have several values, the table is known as an “extended entry decision table”. 
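
As an illustration, below is a hedged sketch in C of a limited-entry decision table. The conditions (valid user, valid password) and the actions are invented for this example; a real table would come from the business rules under test.

#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

/* One rule (one column of the entry portion): a combination of binary
   conditions and the action to take when that combination occurs. */
struct rule {
    bool valid_user;      /* condition 1 */
    bool valid_password;  /* condition 2 */
    const char *action;   /* action portion */
};

/* Limited-entry decision table: all conditions are binary, so with two
   conditions there are 2^2 = 4 rules. */
static const struct rule table[] = {
    { true,  true,  "grant access"        },
    { true,  false, "show password error" },
    { false, true,  "show unknown user"   },
    { false, false, "show unknown user"   },
};

static const char *decide(bool valid_user, bool valid_password) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (table[i].valid_user == valid_user &&
            table[i].valid_password == valid_password)
            return table[i].action;
    return "no rule";  /* would indicate an incomplete table */
}

int main(void) {
    /* One test case per rule verifies that every combination is covered. */
    printf("%s\n", decide(true,  true));
    printf("%s\n", decide(true,  false));
    printf("%s\n", decide(false, false));
    return 0;
}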

There is one disadvantage of decision table testing: 
It is very difficult to scale up the decision tables. 


Thursday, April 5, 2012

Explain empirical vs defined & prescriptive process?

To make any process successful, it has to be controlled in a way that achieves its predefined goals; in other words, it should be guided along the path to success. This holds good for any type of process, and so it does for processes in the field of software engineering.

In the field of software engineering, two approaches have been identified for keeping a control over the development and other related processes namely:


1. The empirical process control method and
2. The defined process control method.


In this article, the two approaches mentioned above are compared so that you get a better understanding of both. So let us see how these two approaches control processes.

The Empirical Process Control Method



- The empirical process control model was defined to exercise control over a process through frequent inspections and frequent adaptations.

- It is meant for processes that generate unpredictable and unrepeatable outcomes or results and that can be thought of as imperfectly defined.

- For many years, the various software development methodologies were controlled by the latter approach, i.e. the defined process control method.

- But we all know that the same output or outcome cannot be expected every time from a software development process.

- Therefore, most agile software development methodologies are controlled by the empirical process control model, the most famous example being the “scrum” agile software development methodology.

- The term itself is self-explanatory: “empirical” refers to information acquired by means of experimentation and observation.

- Here the information is acquired by means of inspection and adaptation, which serve as the means of observation and experimentation.

- The empirical control process consists of a continuous cycle of inspecting the process for correct working and adapting it as required.

- As can be made out from its definition, the empirical process control model has three pillars, without which it cannot be called an empirical process control method:
1. Transparency
2. Inspection and
3. Adaptation.

- The first pillar, transparency, indicates that the outcomes of the empirical process control model, and the aspects affecting those outcomes, should be visible to the programmers and developers responsible for controlling the whole process.

- The second pillar, inspection, indicates that all aspects of the controlled process should be monitored quite frequently, to enable fast and early detection of unacceptable variances.

- The third pillar, adaptation, indicates the adjustment of one or more aspects of the controlled process if the software system or application being processed is observed to lie outside acceptable limits, implying that the result would also be unacceptable.

- The defined process control approach, by contrast, is adopted when the underlying mechanisms of the software system or application are well understood by the programmers and developers.

The Defined Process Control Model



- The defined process control model can be thought of as a theoretical approach.

- When a well-defined set of inputs is given, the same outcomes are expected to be generated every time the process executes.

- With well-understood technologies and stable requirements, one can predict a whole software project quite well.

- Even today, the empirical process control model remains the essence of agile software development processes.

- The empirical process holds good for complex development processes that have difficulty producing repeatable outcomes.


Tuesday, November 22, 2011

What are different characteristics of white box testing?

White-box testing (also known as clear box testing, transparent box testing, glass box testing or structural testing) is a method for testing software applications or programs.

White box testing includes techniques that are used to test the program's algorithmic structure and inner workings, as opposed to its externally visible functionality or the results of its black box tests. White-box testing involves designing test cases from an internal perspective of the software system.

Expert programming skills and knowledge of the program's internal structure are needed to perform white box testing. The tester feeds specified data to the code and checks whether the output is as expected. White box testing can be applied only at certain levels.

The levels have been given below in the list:
- Unit level
- Integration level and
- System level
- Acceptance level
- Regression level
- Beta level

Even though there is no problem in applying white box testing at all six levels, it is usually performed at the unit level, which is the basic level of software testing.

White box testing is required to test paths through the source code, between systems and subsystems, and between different units during integration of the software application.

White box testing can effectively reveal hidden errors and grave problems. But it is incapable of detecting missing requirements and unimplemented parts of the given specification. White box testing basically includes four kinds of important testing, listed below:

- Data flow testing
- Control flow testing
- Path testing and
- Branch testing
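
A minimal sketch of branch testing, one of the four kinds listed above: the inputs are chosen, by looking at the source, so that every branch of the function under test is exercised at least once. The discount function and its rule are invented purely for illustration.

#include <assert.h>
#include <stdio.h>

/* Function under test (hypothetical): two branches, one per outcome. */
static double discount(double amount) {
    if (amount >= 1000.0)
        return amount / 10.0;   /* branch 1: large orders get a 10% discount */
    else
        return 0.0;             /* branch 2: no discount otherwise */
}

int main(void) {
    /* Branch testing: choose inputs so that every branch is taken
       at least once — the tester needs the source to know this. */
    assert(discount(1500.0) == 150.0);  /* exercises branch 1 */
    assert(discount(200.0) == 0.0);     /* exercises branch 2 */
    printf("both branches covered\n");
    return 0;
}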

In the field of penetration testing, white box testing is a methodology in which the tester, acting as an attacker, has total knowledge of the system under attack. So we can say that white box testing is based on the question “how does the system work?”: it analyzes the flow of data, information and control, the coding practices, and the handling of errors and exceptions in the software system.

White box testing is done to check whether the system works as intended; it also validates the implemented source code for its control flow and design, checks its security functionality, and looks for vulnerable parts of the program.

White box testing cannot be performed without access to the source code of the software system. It is recommended that white-box testing be performed at the unit level testing phase.

White box testing requires the knowledge of insecurities and vulnerabilities and strengths of a program.

- The first step in white box testing is analyzing and comprehending the software documentation, software artifacts and the source code.
- The second step requires the tester to think like an attacker, i.e. to consider the ways in which he or she could exploit and damage the software system.
- In the third step, the white-box testing techniques are implemented.

These three steps need to be carried out in harmony with each other. Otherwise, the white box testing will not be successful.

White box testing is used to verify the source code. Carrying out white box testing requires full knowledge of the logic and structure of the system's code. Using white box testing, one can develop test cases that exercise logical decisions, traverse paths through a unit, operate loops as specified and verify the validity of the internal structure of the software system.



Saturday, October 8, 2011

Some details about Strings and Arrays of Strings in C

Multiple-character constants can be dealt with in two ways in C. If enclosed in single quotes, they are treated as character constants; if enclosed in double quotes, they are treated as string literals. A string literal is a sequence of characters surrounded by double quotes. A terminating character ‘\0’ is automatically appended to each string literal. Thus, the string “abc” will actually be represented as follows:

“abc\0” in memory, and its size is not 3 but 4 characters (inclusive of the terminator character).

Arrays refer to a named list of a finite number n of similar data elements. Each of the data elements can be referenced by a set of consecutive index numbers, usually 0, 1, 2, 3, ..., n-1. If the name of an array of 10 elements is ARR, then its elements will be referred to as shown below:

ARR[0], ARR[1], ARR[2], ARR[3], ..., ARR[9]

Arrays can be one dimensional, two dimensional or multi-dimensional. The functions gets() and puts() are string functions. The gets() function accepts a string of characters entered at the keyboard and places them in the string variable mentioned with it. For example:

char name[21];

The above code declares a string named name which can store 20 valid characters (the width 21 allows for the extra character ‘\0’ with which a string is always terminated). The function gets() reads a line of input and stores it in the memory pointed to by name; note that gets() performs no bounds checking, so input longer than 20 characters will overflow the buffer, which is why fgets() is preferred in modern C. As soon as the carriage return is pressed, a null terminator ‘\0’ is automatically placed at the end of the string. The function puts() writes a string on the screen and advances the cursor to a new line, so any subsequent output will appear on the line following the current output of puts().

Arrays are a way to group a number of items into a larger unit. Arrays can hold data items of simple types like int or float, or even of user-defined types like structures and objects. An array can also be an array of strings. An array of strings is a two-dimensional array of characters whose elements are themselves arrays (the individual strings).

A string is nothing but an array of characters. In fact, C does not have a string data type; rather, it implements strings as one-dimensional character arrays terminated by a null character ‘\0’. For this reason, character arrays (strings) are declared one character larger than the largest string they can hold.

Individual strings of a string array can be accessed easily using the first index. The end of a string is determined by checking for the null character. The size of the first index (rows) determines the number of strings, and the size of the second index (columns) determines the maximum length of each string. By specifying just the first index, an individual string can be accessed. You can declare and handle an array of strings just like a two-dimensional array. See the example below:

char name[10][20];

Here the first dimension declares how many strings there will be in the array and the second dimension declares the maximum length of each string. C has library functions for concatenating strings, checking string length and comparing two strings — namely strlen, strcmp, strcat, strrev etc. — declared in the header file string.h (strrev is a non-standard extension). Strings are used for holding long inputs.
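
A short, compilable sketch tying these pieces together (an array of strings plus strlen, strcmp and strcat); it avoids gets(), since that function performs no bounds checking. The names and sizes are chosen only for the example.

#include <stdio.h>
#include <string.h>

int main(void) {
    /* Array of strings: 3 strings, each at most 19 characters plus '\0'. */
    char name[3][20] = { "Ada", "Alan", "Grace" };

    printf("name[2] = %s, length = %zu\n", name[2], strlen(name[2]));

    /* strcmp returns 0 when two strings are equal. */
    if (strcmp(name[0], name[1]) != 0)
        printf("name[0] and name[1] differ\n");

    /* strcat appends one string to another (the destination must be
       large enough to hold the result, including the terminator). */
    char greeting[40] = "Hello, ";
    strcat(greeting, name[2]);
    puts(greeting);               /* prints the string and a newline */
    return 0;
}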


Sunday, May 15, 2011

Why is documentation necessary in QA? What steps are needed to develop and run software tests?

Documentation is very necessary in quality assurance. Everything should be documented: user manuals, test plans, bug reports, business reports, code changes, specifications, design documents and all other reports. Any changes to the process should also be documented.
A properly documented requirement specification is essential. Requirements are the details of what is to be done. Requirements should be clear, complete, detailed and testable. The details should be determined and organized in an efficient way, though this can be difficult to manage. Some form of documentation with detailed requirements is very important in order to properly plan and execute tests.

There are some steps that are needed to develop and run software tests:
- The requirements, design specifications are necessary.
- Budget and cost should be known.
- The people responsible, their responsibilities, and the applicable standards and processes should be listed.
- Risk aspects should be determined.
- Test approaches should be defined.
- Test environment should be defined.
- Tasks should be identified.
- Inputs should be determined.
- Test plan document should be prepared.
- Test cases should be written.
- Test environment and test ware should be prepared.
- Tests are performed and results are evaluated.
- Problems are tracked, re-testing is done and test plans are maintained and updated.


Friday, April 29, 2011

Explain Black box and White box testing? What are their advantages and disadvantages?

For complete testing of a software product both black and white box testing are necessary.

Black-box testing
This testing looks at the available inputs for an application and the expected outputs that should result from each input. It has no relation to the inner workings of the application, the processes it undertakes or any other internal aspect. A search engine is a very good example of a black box system: we enter the text that we want to search for, press “search”, and get the results. We are not aware of the actual process that is carried out to obtain the results; we simply provide the input and receive the output.

White-box testing
This testing looks into the complex inner workings of the application; it tests the processes undertaken and other internal aspects of the application. While black box testing is mainly concerned with the inputs and outputs of the application, white box testing helps us see beyond them, i.e. inside the application. White-box testing requires a degree of sophistication that black-box testing does not, as the tester must interact with the objects used to develop the application rather than simply using the user interface. In-circuit testing is a good example of white-box testing, where the tester looks at the interconnections between different components and verifies the proper functioning of each internal connection. We can also consider the example of an auto mechanic who examines the inner workings of a vehicle to ensure that all components are working correctly.

The basic difference between black-box and white-box testing is the area of focus each chooses. We can simply say that black-box testing is focused on results: if an action is performed and the desired result is obtained, the process that was actually used is irrelevant. White-box testing, on the other hand, focuses on the internal workings of an application and is considered complete only when all components have been tested for proper functioning.
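
To make the black-box view concrete, here is a small sketch in C: the test knows only the inputs and the expected outputs of a hypothetical count_matches function, never its body. The function and its test data are invented for this example.

#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical function under test; the black-box tester does not
   look at this body, only at its inputs and outputs. */
static int count_matches(const char *text, const char *word) {
    int count = 0;
    const char *p = text;
    while ((p = strstr(p, word)) != NULL) {
        count++;
        p += strlen(word);
    }
    return count;
}

int main(void) {
    /* Black-box test cases: known input, expected output, nothing else. */
    assert(count_matches("the cat sat on the mat", "the") == 2);
    assert(count_matches("hello world", "xyz") == 0);
    printf("black-box tests passed\n");
    return 0;
}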

Advantages of Black-box testing
- Since the tester does not have to focus on the inner workings of the application, creating test cases is easier.
- Test case development is faster, as the tester need not spend time identifying internal processes; the only focus is on the various paths a user may take through the GUI.
- It is simple to use, as it focuses only on valid and invalid inputs and ensures that correct outputs are obtained.

Drawbacks of Black-box testing
A constantly changing GUI makes script maintenance difficult, as the inputs may also be changing. Interacting with the GUI may make the test scripts fragile, so they may not execute consistently.

Advantages of White-box testing
- Since the focus is on the inner workings, the tester can identify objects programmatically. This can be useful when the GUI is frequently changing.
- It can improve the stability and reusability of test cases, provided the objects of the application remain the same.
- By testing each path completely, it is possible for a tester to achieve thoroughness.

Drawbacks of White-box testing
Developing test cases for white-box testing involves a high degree of complexity and therefore requires highly skilled people. Although fragility is overcome to a great extent in white-box testing, a change in an object's name may still break the test script.


Saturday, April 16, 2011

Model-View-Controller (MVC) Design Pattern

Model View Controller design pattern is used to support multiple types of users with multiple types of interfaces. The Model-View-Controller (MVC) pattern separates the modeling of the domain, the presentation, and the actions based on user input into three separate classes:

- Model: The model holds all of the data and information for the application. It does not care how you interpret or process that data. It divides functionality among the objects involved in maintaining and presenting the data so as to minimize the degree of coupling between the objects.

- View: The view manages the display of information. It is responsible for maintaining the consistency in its presentation when the underlying model changes.

- Controller: The controller interprets the mouse and keyboard inputs from the user, informing the model and/or the view to change as appropriate. The actions performed on the model can include activating a device, running a business process or changing the state of the model.
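
The pattern is usually illustrated in object-oriented languages (such as the Java-based strategies listed below), but the separation itself is language-independent. Purely as an illustration, here is a minimal sketch of the same split in C; the counter model and the key handling are invented for this example.

#include <stdio.h>

/* Model: holds the data; knows nothing about presentation. */
struct counter_model {
    int value;
};

/* View: renders the model; changes nothing. */
static void view_render(const struct counter_model *m) {
    printf("counter = %d\n", m->value);
}

/* Controller: interprets user input, updates the model,
   then asks the view to refresh. */
static void controller_handle(struct counter_model *m, char key) {
    if (key == '+') m->value++;
    if (key == '-') m->value--;
    view_render(m);
}

int main(void) {
    struct counter_model model = { 0 };
    controller_handle(&model, '+');   /* simulated user input */
    controller_handle(&model, '+');
    controller_handle(&model, '-');
    return 0;
}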

The MVC pattern allows any number of controllers to modify the same model. The strategies by which MVC can be implemented are as follows:
- For Web-based clients such as browsers, use Java Server Pages (JSP) to render the view, Servlet as the controller, and Enterprise JavaBeans (EJB) components as the model.
- For a centralized controller, a main servlet is used to make control more manageable.


Thursday, March 31, 2011

What is a System? What are general principles of system?

System is an:
- integrated set of inter-operable elements;
- it consists of group of entities or components;
- interacting together to form specific inter-relationships;
- organized by means of structure;
- working together to achieve a common goal.

In defining the system boundaries, a software engineer discovers the following:
- entities or group of entities that are related and organized in some way within the system, either they provide input, do activities or receive output;
- activities or actions that must be performed by the entities or group of entities in order to achieve the purpose of the system;
- a list of inputs;
- a list of outputs.
For example, in a club membership system, the entities involved might be the applicant, the club staff and the coach.

General Principles of Systems
- The more specialized a system is, the less able it is to adapt to different circumstances.
- The larger a system is, the more resources must be devoted to its everyday maintenance.
- Systems are always part of larger systems, and they can always be partitioned into smaller systems.

There are two types of systems, namely man-made systems and automated systems. Man-made systems will always have areas for correction and improvement. These areas can be addressed by automated systems. Automated systems consist of computer hardware, computer software, people, procedures, data and information, and the connectivity that allows one computer system to connect with another.


Monday, December 13, 2010

What is the purpose of load tests?

The purpose of any load test should be clearly understood and documented. A load test usually fits into one of the following categories:
- Quantification of risks :
Determine, through formal testing, the likelihood that system performance will meet the formally stated performance expectations of stakeholders, such as response time requirements under given levels of load. This is a traditional quality assurance (QA) type of test. Load testing does not mitigate risk directly, but through identification and quantification of risk, it presents tuning opportunities and an impetus for remediation that will mitigate risk.

- Determination of minimum configuration : Determine, through formal testing, the minimum configuration that will allow the system to meet the formally stated performance expectations, so that extraneous hardware, software and the associated cost of ownership can be minimized. This is a Business Technology Optimization (BTO) type of test.

Basis for determining the business functions/processes to be included in a test


- High Frequency Transactions : The most frequently used transactions have the potential to impact the performance of all of the other transactions if they are not efficient.
- Critical Transactions : The more important transactions that facilitate the core objectives of the system should be included, as failure under load of these transactions has the greatest impact.
- Read Transactions : At least one READ ONLY transaction should be included, so that performance of such transactions can be differentiated from other more complex transactions.
- Update Transactions : At least one update transaction should be included so that performance of such transactions can be differentiated from other transactions.


What are Load Tests - End to End performance tests

Load tests are end-to-end performance tests under anticipated production load. The objective of such tests is to determine the response times for various time-critical transactions and business processes and to ensure that they are within documented expectations. Load tests also measure the capability of an application to function correctly under load, by measuring transaction pass/fail/error rates. An important variation of the load test is the network sensitivity test, which incorporates WAN segments into a load test, since most applications are deployed beyond a single LAN.

Load tests are major tests, requiring substantial input from the business, so that anticipated activity can be accurately simulated in a test environment. If the project has a pilot in production, then logs from the pilot can be used to generate 'usage profiles' that can be used as part of the testing process, and can even be used to drive large portions of the load test.

Load testing must be executed on today's production-size database, and optionally on a projected database. If some database tables will be much larger in a few months' time, then load testing should also be performed against a projected database. It is important that such tests are repeatable and give the same results for identical runs. They may need to be executed several times in the first year of wide-scale deployment, to ensure that new releases and changes in database size do not push response times beyond the prescribed service level agreements.


Friday, December 10, 2010

Define Unit Test Case, Integration Test Case, System Test case

UNIT TEST CASES(UTC)
Unit test cases are very specific to a particular unit. The basic functionality of the unit is to be understood from the requirements and the design documents. Generally, the design document provides a lot of information about the functionality of a unit. The design document has to be consulted before a unit test case is written, because it describes how the unit must actually behave for given inputs.
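
As a minimal illustration, here is a hedged sketch of unit test cases in C for a hypothetical unit — a leap-year check — whose expected behavior would come from its design document.

#include <assert.h>
#include <stdio.h>

/* Unit under test (hypothetical): behavior taken from its design spec. */
static int is_leap_year(int year) {
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main(void) {
    /* Unit test cases: each checks one documented behavior of the unit. */
    assert(is_leap_year(2024) == 1);  /* divisible by 4 */
    assert(is_leap_year(1900) == 0);  /* divisible by 100 but not 400 */
    assert(is_leap_year(2000) == 1);  /* divisible by 400 */
    assert(is_leap_year(2023) == 0);  /* not divisible by 4 */
    printf("unit test cases passed\n");
    return 0;
}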

INTEGRATION TEST CASES
Before designing the integration test cases, the testers should go through the integration test plan. It gives a complete idea of how to write integration test cases. The main aim of integration test cases is to test multiple modules together. By executing these test cases, the user can find errors in the interfaces between the modules.
The tester has to execute the unit and integration test cases after coding.

SYSTEM TEST CASES
The system test cases are meant to test the system as per the requirements, end to end. This is basically to make sure that the application works as per the software requirement specification. In system test cases, the testers are supposed to act as end users. So, system test cases normally concentrate on the functionality of the system: inputs are fed through the system and every check is performed using the system itself. Verification by checking database tables directly or by running programs manually is normally not encouraged in system testing.
The system test must focus on functional groups rather than on identifying program units. When it comes to system testing, it is assumed that the interfaces between the modules are working fine.
Ideally, the system test cases are a union of the functionalities tested in unit testing and integration testing, except that instead of exercising inputs and outputs through the database or external programs, everything is tested through the system itself. In system testing, the tester mimics an end user and checks the application through its output.
Sometimes, some of the integration and unit test cases are repeated in system testing, especially when units were tested with test stubs rather than with the real modules; during system testing those cases are performed again with the real modules.


What are Test Case Documents and what is the general format of test cases?

The test cases have a generic format as below:
- Test Case ID : The test case id must be unique across the application.
- Test Case Description : The test case description should be very brief.
- Test Prerequisite : The test prerequisite clearly describes what should be present in the system before the test executes.
- Test Inputs : The test inputs are nothing but the test data prepared to be fed to the system.
- Test Steps : The test steps are the step-by-step instructions on how to carry out the test.
- Expected Results : The expected results say what the system must give as output, or how the system must react, based on the test steps.
- Actual Results : The actual results record the outputs of the actions for the given inputs, or how the system actually reacted.
- Pass/Fail : If the expected and actual results match, the test is a Pass; otherwise it is a Fail.
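
These fields can be captured as a simple record; below is a hedged sketch in C with one illustrative login test case (all field values are invented for the example).

#include <stdio.h>

/* One row of a test case document, mirroring the fields listed above. */
struct test_case {
    const char *id;
    const char *description;
    const char *prerequisite;
    const char *inputs;
    const char *steps;
    const char *expected;
    const char *actual;
    const char *status;        /* "Pass" or "Fail" */
};

int main(void) {
    /* Illustrative entry only; a real document holds many such rows. */
    struct test_case tc = {
        .id           = "TC_LOGIN_001",
        .description  = "Login with valid credentials",
        .prerequisite = "A registered user exists",
        .inputs       = "username=alice, password=secret",
        .steps        = "Open login page; enter credentials; press Login",
        .expected     = "User is taken to the home page",
        .actual       = "User is taken to the home page",
        .status       = "Pass",
    };
    printf("%s: %s -> %s\n", tc.id, tc.description, tc.status);
    return 0;
}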

Test cases are classified into positive and negative test cases. Positive test cases are designed to prove that the system accepts valid inputs and processes them correctly; a suitable technique for designing them is specification-derived testing. Negative test cases are designed to prove that the system rejects invalid inputs and does not process them; suitable techniques for designing them are error guessing, boundary value analysis, internal boundary value testing and state transition testing. The test case details must be specified very clearly, so that a new person can go through the test cases step by step and execute them.
In an online shopping application, at the user interface level, the client requests the web server to display the product details by supplying an email id and username. The web server processes the request and returns the response. For this application, we design the unit, integration and system test cases.


Monday, November 29, 2010

Step 4 To test API : Call Sequencing, Step 5 To Test API : Observe the output

Step 4: Call Sequencing
When the combinations of possible arguments to each individual call are already unmanageable, the number of possible call sequences is effectively infinite. Parameter selection and combination issues further complicate the call-sequencing problem. Faults caused by improper call sequences tend to give rise to some of the most dangerous problems in software; many security vulnerabilities are caused by the execution of such seemingly improbable sequences.

Step 5: Observe the output
The outcome of executing an API depends on the behavior of that API, the test conditions and the environment. An API can produce its outcome in different ways: some APIs simply return data or a status code, while others may not return at all, may wait for a period of time, trigger another event, modify a resource, and so on.

The tester should know what output to expect from the API under test. The outputs returned for various input values — valid, invalid, boundary values and so on — need to be observed and analyzed to validate that they match the specified functionality. All error codes and exceptions returned for all input combinations should be evaluated.
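
A small hedged sketch of this step in C, using the standard library's fopen as a stand-in for the API under test: both the return value and the error code are observed for an invalid and a valid input. The file paths are illustrative only.

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Invalid input: the API should report failure and set an error code. */
    errno = 0;
    FILE *f = fopen("/no/such/path/file.txt", "r");
    if (f == NULL)
        printf("expected failure, errno=%d (%s)\n", errno, strerror(errno));
    else
        fclose(f);   /* unexpected success would be a test failure */

    /* Valid input: the API should succeed and return a usable handle. */
    FILE *out = fopen("observed_output.tmp", "w");
    if (out != NULL) {
        printf("expected success\n");
        fclose(out);
        remove("observed_output.tmp");   /* clean up the test artifact */
    }
    return 0;
}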


Tuesday, November 9, 2010

What are some of the formal approaches used for exploratory testing? Continued...

Some of the formal approaches used for exploratory testing are:

- Identify the break points
Break points are situations where the system starts behaving abnormally: it does not give the output it is supposed to give. By identifying such situations, testing can be done. Use boundary values or invariants to find the break points of the application. In most cases, the system will work for normal inputs and outputs, so try to give inputs that represent the ideal situation or the worst situation. Trying to identify extreme conditions or break points helps the tester uncover hidden bugs. Such cases might not be covered in normal scripted testing; hence, this helps in finding bugs that normal testing might miss.

- Check the UI against Windows Interface etc standards
Exploratory testing can also be performed against user interface standards. There are set standards laid down for the user interfaces that need to be developed. These standards cover the look and feel of the interfaces the user interacts with; the user should be comfortable with any screen he or she is working on, and these aspects help the end user accept the system faster. By identifying the applicable user interface standards, the tester can define an approach to test, because the application developed should be user friendly.

- Identify expected results
The tester should know what he is testing for and what output is expected for the given input. Unless the aim of the testing is known, the testing is of little use, because the tester may not be able to distinguish a real error from the normal work-flow. The tester needs to analyze what the expected output is for the scenario being tested.

- Identify the interfaces with other interfaces/external applications
In the age of component development and maximum reusability, developers tend to pick up already developed components and integrate them. In such cases, it helps if the tester explores the areas where the components are coupled: the output of one component should be passed correctly to the other component, so such scenarios or work-flows need to be identified and explored more. There may also be external interfaces, for example when the application is integrated with another application for its data; in such cases, the focus should be on the interface between the two applications.


Friday, October 29, 2010

Software Testing - Validation Phase - Alpha Testing

Alpha testing happens at a software prototype stage, when the software is first available to run. The software has its core functionality in place, but complete functionality is not the aim; it should be able to accept inputs and give outputs. Usually, the most used functionalities are the most developed. This test is conducted at the developer's site only. In the software development life cycle, the number of alpha phases required is laid down in the project plan itself, depending on the functionality.

During this phase, the testing is not a thorough one, since only a prototype of the software is available. The basic installation and un-installation tests and the completed core functionalities are tested.

The aim of alpha testing is :
- to identify any serious errors.
- to judge if the intended functionalities are implemented.
- to provide to the customer the feel of the software.

A thorough understanding of the product is developed at this stage. During this phase, the test plan and test cases for the beta phase (the next stage) are created. The errors reported are documented internally for the testers' and developers' reference; issues are usually not reported or recorded in any defect management or bug tracking tool.

The role of the test lead is to understand the system requirements completely and to initiate the preparation of the test plan for the beta phase. The role of the tester is to provide input while there is still time to make significant changes as the design evolves, and to report errors to the developers.


Thursday, October 14, 2010

Validation phase - Unit Testing - how to write Unit test cases

Preparing a unit test case document, commonly referred to as a UTC, is an important task in the unit testing activity. Having a complete UTC with every possible test case leads to complete unit testing and thus gives assurance of a defect-free unit at the end of the unit testing stage.

While preparing unit test cases the following aspects should be kept in mind-

Expected functionality
Write test cases against each functionality that is expected to be provided from the unit being developed. It is important that user requirements should be traceable to functional specifications which should be traceable to program specifications which should be traceable to unit test cases. Maintaining such traceability ensures that the application fulfills user requirements.

Input Values
- Write test cases for each of the inputs accepted by the unit. Every input has certain validation rules associated with it; write test cases to validate each rule.
- There can be cross-field validations, in which one field is enabled depending on the input of another field. Test cases for these should not be missed.
- Write test cases for the minimum and maximum values of each input.
- Variables that hold data have limits on their values. In the case of computed fields, it is very important to write test cases that arrive at the upper limit of the variables.
- Write test cases to check arithmetic expressions with all possible combinations of values.
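
A hedged sketch of the minimum/maximum cases in C, assuming a hypothetical validation rule that an age field accepts values from 18 to 60 inclusive; the rule and the function name are invented for the example.

#include <assert.h>
#include <stdio.h>

/* Unit under test (hypothetical): the validation rule says that an age
   between 18 and 60, inclusive, is accepted. */
static int is_valid_age(int age) {
    return age >= 18 && age <= 60;
}

int main(void) {
    /* Test cases at and around the minimum and maximum values. */
    assert(is_valid_age(17) == 0);   /* just below the minimum */
    assert(is_valid_age(18) == 1);   /* minimum */
    assert(is_valid_age(60) == 1);   /* maximum */
    assert(is_valid_age(61) == 0);   /* just above the maximum */
    printf("boundary test cases passed\n");
    return 0;
}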

Output Values
- Write test cases to generate scenarios which will produce all types of output values that are expected from unit.

Screen Layout
Screen/report layout must be tested against the requirements. It should ensure that pages and screens are consistent.

Path Coverage
A unit may have conditional processing that results in various paths the control can traverse. Test cases must be written for each of these paths.

Assumptions and Transactions
A unit may assume certain things in order to function. Test cases must be written to check that the unit reports an error if such assumptions are not met.
In the case of database applications, test cases should be written to ensure that transactions are properly designed and that inconsistent data can never be saved to the database.

Abnormal terminations and Error messages
Test cases should be written to test the behavior of the unit in case of abnormal termination.
Error messages should be short, precise and self-explanatory. They should be properly phrased and free of grammatical mistakes.


Sunday, October 3, 2010

Verification Strategies - Overview to Walkthroughs.

A walkthrough is a static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a segment of documentation or code, and the participants ask questions and make comments about possible errors, violations of development standards, and other problems.
The objectives of a walkthrough can be summarized as follows:
- Detect errors early.
- Train team members and exchange technical information among the project teams participating in the walkthrough.
- Increase the quality of the project, thereby improving the morale of the team members.
The participants in a walkthrough assume the roles of walkthrough leader, recorder, author and team member.
For a review to be considered a systematic walkthrough, a team of at least two members must be assembled. Roles may be shared among the team members; the walkthrough leader or the author may serve as the recorder, and the walkthrough leader may be the author.
Individuals holding management positions over any member of the walkthrough team shall not participate in the walkthrough.

Input to the walk-through includes:
- A statement of objectives for the walk-through.
- The software product being examined.
- Standards that are in effect for the acquisition, supply, development, operation and/or maintenance of the software product.
- Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected.
- Anomaly categories.

The walk-through shall be considered complete when
- The entire software product has been examined.
- Recommendations and required actions have been recorded.
- The walk-through output has been completed.

