Friday, January 18, 2013
Explain Cucumber Testing Tool?
Monday, September 12, 2011
How to define effective metrics? What is the use of quality metrics?
The goal of software engineering is to develop a product of high quality. A metric is a standard of measurement: a measure that captures performance, allows comparisons, and supports business strategy. Metrics derived from quality measures indicate the effectiveness of the individual process and product.
Quality metrics are used to spot performance trends, compare alternatives, and predict performance. However, the costs and benefits of a particular quality metric should be considered, because collecting data will not by itself result in higher performance levels.
An effective metric defines performance as a quantifiable entity, and a capable system must exist to measure that entity. Effective metrics also allow for actionable responses when performance is unacceptable.
Identifying effective metrics is difficult. Ranges of acceptable performance can be identified for a metric; these are referred to as breakpoints when metrics are defined for services, and as targets, tolerances, or specifications when they are defined for manufacturing.
Breakpoints are the levels at which improved performance is likely to change the behavior of the customer. A target is the desired value of a characteristic, and a tolerance is the allowable deviation from that target value.
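As a minimal illustration of these definitions, the sketch below checks a measured metric value against a target, tolerance, and breakpoint. It is only a sketch: the metric, the numbers, and the function names are illustrative assumptions, not prescribed by any standard.

```python
# Minimal sketch: evaluating a quality metric against a target, tolerance,
# and breakpoint. The metric and all numbers below are illustrative only.

def within_tolerance(measured: float, target: float, tolerance: float) -> bool:
    """True if the measured value deviates from the target by no more than
    the allowable tolerance."""
    return abs(measured - target) <= tolerance


def breached_breakpoint(measured: float, breakpoint_value: float) -> bool:
    """True if the measured value crosses the level (breakpoint) at which
    customer behaviour is expected to change."""
    return measured > breakpoint_value


if __name__ == "__main__":
    # Hypothetical metric: mean response time in seconds.
    target, tolerance, breakpoint_value = 1.0, 0.2, 3.0
    measured = 1.15
    print("within tolerance:", within_tolerance(measured, target, tolerance))
    print("breakpoint breached:", breached_breakpoint(measured, breakpoint_value))
```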
Monday, March 14, 2011
Architectural Design - Representing the System in Context and Defining Archetypes
As architectural design begins, the design should define the external entities that the software interacts with and the nature of those interactions. Once the context is modeled and all external interfaces are described, the designer specifies the structure of the system by defining and refining the software components that implement the architecture.
REPRESENTING THE SYSTEM IN CONTEXT
Architectural context represents how the software interacts with entities external to its boundaries. A system context diagram meets this need by representing the flow of information into and out of the system; at the architectural design level, the software architect uses an architectural context diagram to model these interactions.
How do other systems interoperate with the target system?
Superordinate Systems
These systems use the target system as part of some higher level processing scheme.
Subordinate Systems
These systems are used by the target system and provide data or processing that is necessary to complete the target system's functionality.
Peer-level Systems
These systems interact on a peer-to-peer basis.
Actors
These entities interact with the target system by producing or consuming information necessary for requisite processing.
Each of these external entities communicates with the target system through an interface.
DEFINING ARCHETYPES
Archetypes are the abstract building blocks of an architectural design. An archetype is a class or pattern that represents a core abstraction critical to the design of the architecture for the target system. Archetypes can be derived by examining the analysis classes defined as part of the analysis model. The target system architecture is composed of these archetypes, which represent the stable elements of the architecture. Some kinds of archetypes are listed below; a class-based sketch follows the list.
- Nodes
- Detector
- Indicator
- Controller
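As a rough illustration, the sketch below expresses the archetypes listed above as abstract Python classes that a concrete design could refine. The method names and the `SmokeDetector` refinement are illustrative assumptions only, not part of any prescribed architecture.

```python
# Minimal sketch: the archetypes listed above expressed as abstract classes.
# Method names and the SmokeDetector refinement are illustrative only.
from abc import ABC, abstractmethod


class Node(ABC):
    """Core abstraction for any element attached to the target system."""


class Detector(Node):
    @abstractmethod
    def sense(self) -> bool:
        """Report whether the monitored condition has occurred."""


class Indicator(Node):
    @abstractmethod
    def signal(self, message: str) -> None:
        """Present information to the outside world (alarm, light, display)."""


class Controller(Node):
    @abstractmethod
    def control(self) -> None:
        """Coordinate detectors and indicators to fulfil system behaviour."""


class SmokeDetector(Detector):
    """Concrete refinement of the Detector archetype (placeholder logic)."""

    def sense(self) -> bool:
        return False  # would read real sensor hardware in a real system


if __name__ == "__main__":
    print(SmokeDetector().sense())
```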
Wednesday, February 16, 2011
Interface Design Steps - Applying Interface Design Steps, User Interface Design Patterns and Design Issues
Interface design is an iterative process. Interface design activities commence once interface analysis has been completed. Each user interface design step occurs a number of times, each pass elaborating and refining the information developed so far. The interface designer begins with sketches of each interface state and then works backward to define the objects, actions, and other important design information.
APPLYING INTERFACE DESIGN STEPS
- Interface objects are defined.
- The actions that are applied to them are defined.
- A description of a use case is written.
- Once objects and actions have been defined and elaborated, they are categorized by type.
- Target, source, and application objects are identified.
- A source object is dragged and dropped onto a target object (see the sketch after this list).
- Application objects represent application-specific data that is not directly manipulated as part of the screen interaction.
- Screen layout is performed once all objects and actions have been defined.
- Screen layout is an interactive process in which the graphical design and placement of icons, major and minor menus, and screen text are carried out.
- Although automated tools can be useful in developing layout prototypes, sometimes a pencil and paper are all that are needed.
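The sketch below illustrates, under assumed names, how interface objects might be categorized as source, target, and application objects, and how a drag-and-drop action relates a source object to a target object. It is a sketch of the idea, not a prescribed implementation.

```python
# Minimal sketch: categorising interface objects by type and modelling a
# drag-and-drop action. Names and categories are illustrative only.
from dataclasses import dataclass, field


@dataclass
class InterfaceObject:
    name: str
    category: str                      # "source", "target", or "application"
    contents: list = field(default_factory=list)


def drag_and_drop(source: InterfaceObject, target: InterfaceObject) -> None:
    """A source object is dragged and dropped onto a target object."""
    assert source.category == "source" and target.category == "target"
    target.contents.append(source)


if __name__ == "__main__":
    report = InterfaceObject("monthly_report", "source")
    printer = InterfaceObject("printer_icon", "target")
    drag_and_drop(report, printer)
    print([obj.name for obj in printer.contents])   # ['monthly_report']
```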
USER INTERFACE DESIGN PATTERNS
- A wide variety of user interface design patterns have emerged.
- A design pattern is an abstraction that prescribes a design solution to a specific, well-bounded design problem.
DESIGN ISSUES
Four common design issues are:
- System response time: measured from the point at which the user performs a control action until the software responds. Its two main characteristics are length and variability. If the system response time is too long, user frustration and stress result. Variability refers to the deviation from the average response time (a measurement sketch follows at the end of this section).
- User help facilities: a number of design issues must be addressed when a help facility is considered.
- Error information handling: a good error message should describe the problem in language the user can understand. It should provide constructive advice for recovering from the error and should indicate any negative consequences of the error. The message should be accompanied by an audible or visual cue, and it should be non-judgmental.
- Command labeling: conventions for command usage should be established across all applications. It is confusing and often error-prone for a user when command names and usage differ from one application to the next; the potential for error is obvious.
If these issues are not addressed, unnecessary iteration, project delays, and customer frustration often result. It is better to establish each of them as a design issue to be considered at the beginning of software design, when changes are easy and costs are low.
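As an illustration of the first design issue above, the sketch below measures the two characteristics of system response time, length (mean) and variability (deviation from the mean). The `do_control_action` function is a hypothetical stand-in for the real user-triggered operation.

```python
# Minimal sketch: measuring the two characteristics of system response time
# named above -- length (mean) and variability (deviation from the mean).
# do_control_action() is a hypothetical stand-in for the operation under test.
import statistics
import time


def do_control_action() -> None:
    time.sleep(0.05)          # placeholder for the real user-triggered action


def measure_response_time(samples: int = 10) -> None:
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        do_control_action()
        timings.append(time.perf_counter() - start)
    print(f"length (mean): {statistics.mean(timings):.3f} s")
    print(f"variability (std dev): {statistics.stdev(timings):.3f} s")


if __name__ == "__main__":
    measure_response_time()
```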
Wednesday, December 22, 2010
Performance Tests Precede Load Tests
The best time to execute performance tests is at the earliest opportunity after the contents of a detailed load test plan have been determined. Developing performance test scripts at such an early stage provides an opportunity to identify and remediate serious performance problems, and to manage expectations, before load testing commences. For example, management expectations of response time for a new web system that replaces a block-mode terminal application are often articulated as 'sub-second'. However, a web system that performs, in a single screen, the business logic of several legacy transactions may take two seconds. Rather than waiting until the end of a load test cycle to inform the stakeholders that the test failed to meet their formally stated expectations, a little education up front may be in order. Performance tests provide a means for this education.
Another key benefit of performance testing early in the load testing process is the opportunity to fix serious performance problems before load testing even begins. If performance testing of a 'customer search' screen yields response times of more than ten seconds, there may well be a missing index or a poorly constructed SQL statement. By raising such issues prior to commencing formal load testing, developers and DBAs can check that indexes have been set up properly.
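As a rough illustration of the missing-index scenario, the sketch below times a 'customer search' query against an in-memory SQLite database before and after an index is created. Table names, column names, and row counts are illustrative assumptions, not taken from any real system.

```python
# Minimal sketch: timing a 'customer search' before and after adding an
# index, using an in-memory SQLite database. Names and sizes are illustrative.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, surname TEXT)")
conn.executemany(
    "INSERT INTO customer (surname) VALUES (?)",
    [(f"surname_{i}",) for i in range(100_000)],
)


def timed_search(label: str) -> None:
    start = time.perf_counter()
    conn.execute(
        "SELECT id FROM customer WHERE surname = ?", ("surname_99999",)
    ).fetchall()
    print(f"{label}: {time.perf_counter() - start:.4f} s")


timed_search("without index")                      # full table scan
conn.execute("CREATE INDEX idx_surname ON customer (surname)")
timed_search("with index")                         # index lookup
```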
Performance problems that relate to the size of data transmissions also surface in performance tests when low-bandwidth connections are used. For example, data such as images and "terms and conditions" text is often not optimized for transmission over slow links.
Tuesday, December 21, 2010
Pre-requisites for Performance Testing
A performance test is not valid until the data in the system under test is realistic and the software and configuration are production-like.
- Production Like Environment
Performance tests need to be executed on equipment of the same specification as production if the results are to have integrity. Lightweight transactions that do not require significant processing can be tested on a lesser environment, but only substantial deviations from expected transaction response times should be reported. Similarly, low-bandwidth performance testing of high-bandwidth transactions, where communications processing contributes most of the response time, can be conducted on a lesser environment.
- Production Like Configuration
The configuration of each component needs to be production-like, for example the database configuration and the operating system configuration. While system configuration has less impact on performance testing than on load testing, only substantial deviations from expected transaction response times should be reported when the configuration is not production-like.
- Production Like Version
The version of software to be tested should closely resemble the version that will be used in production. If the version under test differs substantially from the proposed production version, only major performance problems, such as missing indexes and excessive communications, should be reported.
- Production Like Access
If clients will access the system over a WAN, dial-up modems, DSL, ISDN, etc., then testing should be conducted using each communication access method. Only tests that use production-like access are valid.
- Production Like Data
All relevant tables in the database need to be populated with a production-like quantity of data and a realistic mix of data.
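The sketch below illustrates one way a test database might be populated with a production-like quantity and a realistic mix of data. The row count and the active/inactive split are assumptions for illustration only.

```python
# Minimal sketch: populating a test database with a production-like quantity
# and a realistic mix of data. Row count and ratio are assumptions.
import random
import sqlite3

ROWS = 100_000            # assumed production-like volume
ACTIVE_RATIO = 0.2        # assumed realistic mix of active customers

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO customer (status) VALUES (?)",
    [("active" if random.random() < ACTIVE_RATIO else "inactive",)
     for _ in range(ROWS)],
)
for status, count in conn.execute(
    "SELECT status, COUNT(*) FROM customer GROUP BY status"
):
    print(status, count)
```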
Targeted Infrastructure Tests and Performance Testing
TARGETED INFRASTRUCTURE TESTS
Targeted infrastructure tests are isolated tests of each layer and/or component in an end-to-end application configuration.
- They cover the communications infrastructure, load balancers, web servers, application servers, crypto cards, Citrix servers, and so on, allowing identification of any performance issue that would fundamentally limit the overall ability of the system to deliver at a given performance level.
- Each test can be quite simple.
- Targeted infrastructure testing generates load on each component separately and measures the response of each component under load.
- Different infrastructure tests require different protocols (a sketch of timing individual components follows this list).
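As a simple illustration, the sketch below exercises two components in isolation using their own protocols and times each response. The host names, port, and URL are placeholders for an assumed environment, not real endpoints.

```python
# Minimal sketch: exercising each infrastructure component in isolation with
# its own protocol and timing the response. Host names, ports, and URLs are
# placeholders for an assumed environment, not real endpoints.
import socket
import time
import urllib.request


def time_tcp_connect(host: str, port: int) -> float:
    """Targeted test of a network component (e.g. a load balancer port)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.perf_counter() - start


def time_http_get(url: str) -> float:
    """Targeted test of a web or application server over HTTP."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=5) as response:
        response.read()
    return time.perf_counter() - start


if __name__ == "__main__":
    print("load balancer:", time_tcp_connect("loadbalancer.example", 443))
    print("web server:   ", time_http_get("http://webserver.example/health"))
```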
PERFORMANCE TESTS
These are tests that determine the end-to-end timing of various time-critical business processes and transactions while the system is under low load, but with a production-sized database.
- This sets the best possible performance expectation under a given configuration of infrastructure.
- It also highlights, very early in the testing process, whether changes need to be made before load testing is undertaken.
- For example, performance testing would highlight a slow customer search transaction, which could be remediated prior to a full end-to-end load test.
- The best practice is to develop performance tests with an automated tool, such as WinRunner, so that response times can be measured from a user perspective in a repeatable manner and with a high degree of precision. The same test scripts can later be re-used in a load test, and the results can be compared back to the original performance tests.
- A key indicator of the quality of a performance test is repeatability: re-executing the test multiple times should give the same set of results each time. If the results are not the same each time, then the differences from one run to the next cannot be attributed to changes in the application, configuration, or environment (a repeatability-check sketch follows this list).
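The sketch below illustrates a basic repeatability check: the same timed transaction is run several times and the spread of the results is compared against an allowed limit. `run_transaction` and the threshold value are illustrative assumptions.

```python
# Minimal sketch: checking the repeatability of a performance test by running
# the same timed transaction several times and comparing the spread of the
# results. run_transaction() is a hypothetical stand-in for the scripted test.
import statistics
import time


def run_transaction() -> float:
    start = time.perf_counter()
    time.sleep(0.1)                    # placeholder for the real transaction
    return time.perf_counter() - start


def repeatability(runs: int = 5, allowed_spread: float = 0.05) -> bool:
    """Return True if the relative spread across runs is within the allowed
    limit, i.e. differences are unlikely to be caused by the test itself."""
    timings = [run_transaction() for _ in range(runs)]
    spread = (max(timings) - min(timings)) / statistics.mean(timings)
    print(f"timings: {[f'{t:.3f}' for t in timings]}, spread: {spread:.1%}")
    return spread <= allowed_spread


if __name__ == "__main__":
    print("repeatable:", repeatability())
```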
Monday, December 20, 2010
How does a stress test execute?
A stress test starts with a load test, and then additional activity is gradually increased until something breaks. An alternative type of stress test is a load test with sudden bursts of additional activity. The sudden bursts generate substantial activity as sessions and connections are established, whereas a gradual ramp-up in activity pushes various values past fixed system limitations.
Ideally, stress tests should incorporate two runs, one with burst-type activity and the other with a gradual ramp-up, to ensure that the system under test will not fail catastrophically under excessive load. System reliability under severe load should not be negotiable, and stress testing will identify reliability issues that arise under severe levels of load.
An alternative, or supplemental, stress test is commonly referred to as a spike test, where a single short burst of concurrent activity is applied to the system. Such tests typically simulate extreme activity where a count-down situation exists, for example a system that will not take orders for a new product until a particular date and time. If demand is very strong, then many users will be poised to use the system the moment the count-down ends, creating a spike of concurrent requests and load.
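The sketch below illustrates the two shapes of stress activity described above, a gradual ramp-up and a sudden spike, using simple threads. `do_request` and the user counts are placeholders for the real system under test.

```python
# Minimal sketch: driving a system with either a gradual ramp-up or a sudden
# spike of concurrent activity. do_request() is a hypothetical stand-in for
# one user action against the system under test.
import threading
import time


def do_request() -> None:
    time.sleep(0.05)                   # placeholder for a real request


def run_users(count: int) -> None:
    threads = [threading.Thread(target=do_request) for _ in range(count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()


def ramp_up(start: int = 10, step: int = 10, steps: int = 5) -> None:
    """Gradually increase concurrent activity, step by step."""
    for i in range(steps):
        users = start + i * step
        print(f"ramp-up step: {users} concurrent users")
        run_users(users)


def spike(users: int = 200) -> None:
    """Apply a single short burst of concurrent activity."""
    print(f"spike: {users} concurrent users at once")
    run_users(users)


if __name__ == "__main__":
    ramp_up()
    spike()
```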
Saturday, December 18, 2010
Overview of Stress Testing and its Focus..
Stress tests determine the load under which a system fails, and how it fails. This is in contrast to load testing, which attempts to simulate anticipated load. It is important to know in advance whether a stress situation will result in catastrophic system failure, or whether everything just goes really slow. There are various varieties of stress tests, including spike, stepped, and gradual ramp-up tests. Catastrophic failures require restarting various infrastructure components and contribute to downtime, a stressful environment for support staff and managers, and possible financial losses. If a major performance bottleneck is reached, then system performance will usually degrade to a point that is unsatisfactory, but performance should return to normal when the excessive load is removed.
Before conducting a stress test, it is usually advisable to conduct targeted infrastructure tests on each of the key components in the system. A variation on targeted infrastructure tests would be to execute each one as a mini stress test.
What is the focus of stress tests?
In a stress event, it is most likely that many more connections will be requested per minute than under normal levels of expected peak activity. In many stress situations, the actions of each connected user will not be typical of actions observed under normal operating conditions. This is partly due to the slow response and partly due to the root cause of the stress event.
If we take the example of a large holiday resort web site, normal activity is characterized by browsing, room searches, and bookings. If a national online news service posted a sensational article about the resort and included a URL in the article, then the site might be subjected to a huge number of hits, but most of the visits would probably be a quick browse. It is unlikely that many of the additional visitors would search for rooms, and it is even less likely that they would make bookings. However, if instead of a news article a national newspaper advertisement erroneously understated the price of accommodation, then there might well be an influx of visitors who clamour to book a room, only to find that the price did not match their expectations.
In both of the above situations, the normal traffic would be augmented with traffic of a different usage profile. A stress test design would therefore incorporate a load test as well as additional virtual users running a special series of stress navigations and transactions.
For the sake of simplicity, one can simply increase the number of users running the business processes and functions coded in the load test. However, one must then keep in mind that a system failure under that type of activity may differ from the type of failure that would occur if a special series of stress navigations were used for stress testing. A sketch of a mixed-profile workload follows.
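The sketch below illustrates how a stress workload might combine the normal load test profile with additional virtual users running a different usage profile, using the resort-site example above. The profile names, action names, and weights are illustrative assumptions only.

```python
# Minimal sketch: building a stress workload that combines the normal load
# test profile with additional virtual users running a different usage
# profile (here, mostly quick browsing). Names and weights are assumptions
# for the resort-site example above.
import random

NORMAL_PROFILE = {"browse": 0.5, "room_search": 0.3, "booking": 0.2}
STRESS_PROFILE = {"browse": 0.9, "room_search": 0.08, "booking": 0.02}


def pick_action(profile: dict) -> str:
    actions, weights = zip(*profile.items())
    return random.choices(actions, weights=weights)[0]


def build_workload(normal_users: int, stress_users: int) -> list:
    """Return one action per virtual user, drawn from the matching profile."""
    workload = [pick_action(NORMAL_PROFILE) for _ in range(normal_users)]
    workload += [pick_action(STRESS_PROFILE) for _ in range(stress_users)]
    return workload


if __name__ == "__main__":
    actions = build_workload(normal_users=100, stress_users=400)
    print({a: actions.count(a) for a in set(actions)})
```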