- Feedback mechanism
- The control mechanism
Thursday, August 29, 2013
How can traffic shaping help in congestion management?
Posted by Sunflower at 8/29/2013 06:13:00 PM
Labels: Algorithms, Avoidance, Congestion, Control, Feedback, Information, Levels, Load, Management, Network, Operation, Overload, Policies, Signals, States, System, traffic, Traffic shaping, Under-load, User
Saturday, June 16, 2012
Reverse Engineering - an activity involved in the software re-engineering process model.
- Inventory analysis
- Documentation reconstruction
- Reverse engineering
- Code restructuring
- Data restructuring
- Forward Engineering
What is Reverse Engineering?
- Browsers
- Cross reference generators and so on.
Levels in Reverse Engineering
Stages in Reverse Engineering Process
Activities in Reverse Engineering Process
When is Reverse Engineering Preferred?
Posted by Sunflower at 6/16/2012 02:48:00 PM
Labels: activities, Activity, Analyze, Applications, Code, Design, Information, Interactive, Levels, Re-engineering, Requirements, Reverse Engineering, Software Re-engineering, Software Systems, Specifications, Stages
Tuesday, May 1, 2012
How does a penetration testing tool emphasize database security?
About Penetration Testing and Database Security
How does penetration testing emphasize database security?
- They assist in assessing the operational and business impact of attacks on the database system.
- They test the effectiveness of the security defenders in detecting and responding to attacks.
- They provide evidence to support the investments that need to be made in database security.
Posted by Sunflower at 5/01/2012 11:25:00 PM
Labels: Application, Attackers, Components, Database, Efficiency, Emphasis, Information, Issues, Levels, Methodology, Penetration testing, Policies, Quality, Security, Simulation, Software Systems, Sub Systems, Tools, Users
How does a penetration testing tool emphasize security subsystems?
About Penetration Testing and Security Sub Systems
How does a penetration testing tool emphasize Security Sub Systems?
Posted by Sunflower at 5/01/2012 01:23:00 PM
Labels: Application, Attackers, Components, Efficiency, Emphasis, Information, Issues, Levels, Methodology, Network, Penetration testing, Policies, Quality, Security, Simulation, Software Systems, Sub Systems, Tools, Users
Monday, March 26, 2012
What is the difference between quality assurance and testing?
Quality assurance and testing are processes that together keep a check on the quality of a software system or application. When implemented together, they ensure that the quality of the software system or application is kept as close to 100 percent as possible.
No software or application can claim to have 100 percent customer-satisfying quality. This article focuses on these two processes and the differences between them; we discuss the differences here because people often confuse the two.
QUALITY ASSURANCE
- The term "quality assurance" is largely self-explanatory.
- From the term itself we can make out that it refers to the systematic and planned activities implemented in a quality system so that a check on its quality requirements is maintained.
- It involves the following processes:
1. Systematic measurement of the quality of the software system or application.
2. Comparison of the quality of the software system or application with pre-defined quality standards.
3. Monitoring of the processes.
4. An associated feedback loop for error prevention.
- A typical quality assurance process also keeps a check on the quality of the tools, equipment, testing environment, and the production, development, and management processes involved in software testing.
- The quality of a software system or application product is defined by the clients or customers rather than by society at large.
- One thing to always keep in mind is that the quality of a software system or application cannot be described by adjectives like "poor" or "good", since quality could be high in one aspect of the system and low in another.
PRINCIPLES OF QUALITY ASSURANCE
The whole process of quality assurance is guided by the following two principles:
1. Fit for purpose:
The software product is deemed to fulfil the purpose for which it was made, and
2. Right first time:
Mistakes should be eliminated the first time they are encountered.
TESTING PROCESSES EMPLOYED IN SOFTWARE TESTING & QUALITY ASSURANCE
Below are the testing processes that are employed for both software testing and quality assurance:
1. Testing approaches:
(a) White box testing
(b) Black box testing
(c) Grey box testing
(d) Visual testing
2. Testing levels:
(a) By test target:
(i) Unit testing
(ii) Integration testing
(iii) System testing
(b) By objective:
(i) Regression testing
(ii) User acceptance testing
(iii) Alpha and beta testing
3. Non-functional testing:
(a) Performance testing
(b) Usability testing
(c) Security testing
(d) Internationalization and localization
(e) Destructive testing
4. Testing processes:
(a) Waterfall model or CMMI
(b) Extreme or agile development model
(c) Sample testing cycle
5. Automated testing using tools and measurements
In fact, both processes are much the same but with different perspectives: software testing aims at eliminating bugs from the software system, whereas quality assurance considers the overall quality of the software system.
Software testing is the way quality assurance is implemented, i.e., it provides the clients or customers with information regarding the quality of the software system or application. Testing is done to make sure of the following points:
1. The product meets the specified requirements.
2. It works as intended.
3. It is implemented with the same characteristics.
Software testing can be implemented at any point in the development process, unlike quality assurance, which should be implemented right from the beginning to ensure maximum quality.
Posted by Sunflower at 3/26/2012 07:06:00 PM
Labels: Application, Approaches, Bugs, Defects, Differences, Environment, Errors, Levels, Objectives, Principles, Purpose, Quality, Quality assurance, Software Systems, Software testing, Standards, Tools
Tuesday, March 6, 2012
What are different methods and techniques used for security testing at white box level?
It requires a great deal of effort to achieve a good level of security, and to obtain good security one has to follow a proper approach to testing. As with any other kind of software testing, for security testing one needs to decide who will carry out the testing and what approach will be followed. Carrying out security testing at the white box level is not at all easy, as it is very complex and detailed.
APPROACHES FOR SECURITY TESTING AT WHITE BOX LEVEL
So far, two basic approaches have been identified for security testing at the white box level; they are described below:
1. Functional Security Testing
- This approach to testing is usually followed by standard testing organizations.
- It deals with checking the features and functionalities of the software system or application to determine whether or not they are working as stated.
- This is a very classic approach to security testing.
2. Risk Based Security Testing
- This approach to security testing is usually followed by the quality assurance staff.
- This approach is quite difficult compared to the previously mentioned approach.
- The main problem here is the expertise of the testers, since this approach calls for great skill in testing.
- Firstly, security tests that can fully exploit the vulnerabilities are difficult to design, since the tester is required to think like an attacker.
- Secondly, security tests do not exploit the security of the software system or application directly, which makes it hard to observe the outcomes of a security test.
ABOUT SECURITY TESTING AT WHITE BOX TESTING LEVEL
1. A security test carried out without much precaution and logic can cause the whole security testing effort to go wrong, which in turn can force the software tester to carry out even more complicated test processes to counteract the situation.
2. Risk-based testing requires more skill than experience.
3. Most of the security testing methodologies or techniques that we use at the white box level are traditional, and some of them have become outdated.
4. On the other hand, the security exploitation techniques used by attackers become more sophisticated day by day, and the traditional methods used to cope with these issues are becoming extinct.
5. Security testing at both the black box level and the white box level tends to give a better understanding of the software system or application, but different approaches are followed at the two levels.
6. The approach followed is decided on the basis of access to the source code, i.e., whether or not the tester has access to the source code.
7. Security testing at the white box level is concerned with rigorous analysis of the source code of the software program as well as its design.
8. It basically deals with finding errors in the security mechanism of the software system.
9. In rare cases this approach involves pattern matching and automation of the whole testing process by implementing a static analyzer.
10. One peculiar drawback of this kind of testing is that it sometimes reports a bug in some part of the software where no such bug actually exists.
11. Still, security testing at the white box level using static analysis methods and techniques works well for some software systems and applications.
12. Risk-based testing calls for a deep understanding of the whole software system.
13. After all, product security is essential to the reputation of the company.
Posted by Sunflower at 3/06/2012 10:00:00 AM
Labels: Application, Approach, Bugs, Defects, Errors, Functional, Levels, Logical, Risk based testing, Security, Security Testing, Software Systems, Software testing, Techniques, Tests, White box testing
Sunday, February 12, 2012
What is ISTQB (International software testing qualifications board) certification?
ISTQB is an important certification in the field of software engineering and information technology. ISTQB stands for the International Software Testing Qualifications Board.
SOME FACTS ABOUT ISTQB
1. ISTQB and ISEB are two similar organizations.
2. ISTQB is an organization that grants certification for qualification in the field of software testing.
3. ISTQB was formed in the month of November in the year 2002.
4. Though formed in Edinburgh, its office is now headquartered in Belgium.
5. ISTQB certified tester is a program initiated by ISTQB.
6. This qualification scheme is international in scope.
7. A hierarchy is maintained for the guidelines necessary for qualification examination and accreditation.
8. A certain syllabus is also prescribed for achieving the qualification.
9. ISTQB has so far issued over 200,000 certifications, making it the world's top software testing qualifications issuing organization.
10. There are around 47 member boards of the ISTQB from over 71 countries.
LEVELS OF ISTQB
Similar to ISEB, ISTQB too offers 3 levels of qualification:
1. 1st level: ISTQB foundation level
2. 2nd level: ISTQB advanced level
This is further divided into 3 modules, namely test analyst, test manager, and technical test analyst.
3. 3rd level: ISTQB expert level
This level deals with the continuous improvement of the test processes, automation of test processes, management of test processes and security testing.
COURSE TRAINING
1. The course training is followed by an examination covering the whole syllabus.
2. On completing the exam successfully, the candidate is accredited with an "ISTQB certified tester" certification.
3. It is up to the candidate whether or not to follow the stipulated course before the examination.
4. The scheme aims at developing a qualification that is accepted worldwide.
Posted by Sunflower at 2/12/2012 03:31:00 PM
Labels: Analysis, Application, Certification, Design, Information, International Software Testing Qualifications Board, ISTQB, Levels, Qualifications, Requirements, Software testing, Subjects, Technology
Saturday, February 11, 2012
What is ISEB (Information systems examinations board) certification?
ISEB is an important certification in the field of software engineering and information technology. ISEB is the abbreviated form for the Information Systems Examinations Board.
SOME FACTS ABOUT ISEB
1. ISEB is a well-known part of BCS, the chartered institute for information technology.
2. ISEB is known for conducting examinations in the concerned fields.
3. It is an examination-conducting body in the field of information technology.
4. It was formed as a collaboration between 2 organizations, namely BCS and NCC.
5. There was a need for the development of a certificate in systems analysis and design.
6. An examinations board for systems analysis was required.
7. Therefore, NCC and BCS came together to form a new board to meet this requirement, and it was named the Systems Analysis Examinations Board.
8. The year 1989 saw the creation of a new qualification in the field of project management.
9. Simultaneously, the process of expanding the qualifications portfolio began in 1989.
10. Because of this, the Systems Analysis Examinations Board was renamed the Information Systems Examinations Board, and hence ISEB was born.
WHAT DOES THE QUALIFICATION FOR ISEB COVER
The qualifications stated for ISEB cover the following subjects from the field of information technology:
1. Software testing
2. Business analysis
3. Information services management
4. ITIL
5. Sustainable information technology
6. Project support
7. Project management
8. Information technology assets
9. Information technology infrastructure
10. Systems development
11. Green information technology
12. Information technology governance
13. Information technology information
14. Information technology security
LEVELS OF ISEB
There are 3 levels at which ISEB qualification is granted and they are:
1. 1st level: ISEB foundation level
This qualification is based on a certain discipline which is introduced at this level.
2. 2nd level: ISEB practitioner level
This qualification involves application of practical methods within a specified discipline only.
3. 3rd level: ISEB higher level
This level covers a specific discipline in great depth and is meant only for managers and specialists.
The ISEB is recognized all over the world in around 50 countries, including South Africa, Brazil, Japan, Australia, and the United States of America. The ISEB qualification is granted on the basis of training as well as both computer-based and written examinations.
Posted by Sunflower at 2/11/2012 09:11:00 PM
Labels: Analysis, Application, Certification, Design, Examination, Information, Information systems examinations board, ISEB, Levels, Methods, Qualifications, Recognition, Requirements, Subjects, Technology
Wednesday, January 11, 2012
What are different rules of thumb to write good test cases?
Writing good and effective test cases requires great skill, since effective testing is, after all, achieved only through effective test cases!
Writing such test cases is a great skill in itself; it can only be achieved through in-depth knowledge of the software system or application under test, and it also requires some experience.
Here I'm going to share some rules of thumb for writing effective test cases, the basic definition of a test case, and test case procedures.
What is a test case actually?
A typical test case comprises components that describe an action or event, an input, and an expected outcome, in order to determine whether the software system or application is working as intended.
Before writing a test case you should know the 4 levels or categories of the test cases to avoid their duplication.
The levels have been discussed below:
Level 1:
This level involves writing of basic test cases using the available specifications and requirements and documentation provided by the client.
Level 2:
This level is the practical stage and it involves writing of test cases based on the actual system flow and functional routines of the software system or application.
Level 3:
This level involves grouping particular test cases. Some test cases are grouped together and a test procedure is written. A test procedure can contain a maximum of 10 test cases.
Level 4:
This level involves the automation of the project. This minimizes human interaction with the software system or application, and it is done basically to maximize focus on the currently updated functionality to be tested rather than on regression testing.
Following this whole pattern you can go from no testable item to an automated testing suite, i.e., you can observe systematic growth.
- The tester should know the objective of each and every particular test case.
- The basic objective of all the test cases is to validate the testing coverage of the software system or application.
- You need to strictly follow test case standards. Writing test cases reduces the chance of following an ad-hoc approach.
Below is a basic test case format:
- Test case id
- Units to be tested
- Assumptions
- Input /test data
- Execution steps
- Expected outcome
- Actual outcome
- Success or failure
- Observation
- Comments
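One way to make the format above concrete is a simple record type. This is an illustrative sketch only: the `TestCase` class, its field names, and the sample login values are assumptions, not a schema prescribed by the post.

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    """A record mirroring the basic test case format listed above."""
    test_case_id: str
    units_to_be_tested: str
    assumptions: str
    input_data: str
    execution_steps: list
    expected_outcome: str
    actual_outcome: str = ""
    observation: str = ""
    comments: str = ""

    @property
    def passed(self) -> bool:
        # Success or failure: the actual outcome must match the expected one.
        return self.actual_outcome == self.expected_outcome


# Hypothetical example: a login-form test case.
tc = TestCase(
    test_case_id="TC-001",
    units_to_be_tested="Login form",
    assumptions="Test user account exists",
    input_data="username=demo, password=demo123",
    execution_steps=["Open login page", "Enter credentials", "Click Submit"],
    expected_outcome="User is redirected to the dashboard",
)
tc.actual_outcome = "User is redirected to the dashboard"
```

Recording test cases as structured records like this (rather than free text) also makes it easy to export them to the spreadsheets mentioned later in the post.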
You also need to write a test case statement. Here's the basic format:
- Verify:
This is the first word of the test case statement.
- Using tool names, tag names, dialogues, etc.: this basically identifies what is being tested.
- Verification with conditions.
- Verification to result.
For any kind of testing:
- You should cover all types of tests: functional test cases, negative-value test cases, and boundary-value test cases.
- Be careful while writing the test cases.
- Keep it simple and easy to understand.
- Don’t write test case statements with the length of an essay.
- Keep it brief and to the point.
- Follow the test cases and test case statements formats stated above.
- Generally, spreadsheets are used to write test cases, which makes them more presentable and easier to understand.
- You can use tools like Test Director when you want to automate the test cases.
- Writing clear and concise test cases forms an important part of software quality assurance.
- Also make sure that a good number of test cases cover functional testing, which means the primary focus is on how the feature works.
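The boundary-value and negative-value idea above can be sketched in code. The `accept_age` function and its 18-to-60 valid range are invented for illustration; the point is that test data probes each edge of the specified range:

```python
def accept_age(age: int) -> bool:
    """Hypothetical function under test: accepts ages 18 through 60 inclusive."""
    return 18 <= age <= 60


# Boundary-value test cases: one value on, just below, and just above each edge.
boundary_cases = [
    (17, False),  # just below lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above lower boundary
    (59, True),   # just below upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above upper boundary
]

# Negative-value test case: clearly invalid input.
negative_cases = [(-5, False)]

for value, expected in boundary_cases + negative_cases:
    actual = accept_age(value)
    assert actual == expected, f"age={value}: expected {expected}, got {actual}"
```

Each `(input, expected)` pair corresponds to one row of the test case format above, with the assertion playing the role of the success-or-failure column.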
Posted by Sunflower at 1/11/2012 11:20:00 PM
Labels: Application, Category, Components, Effective, Focus areas, Functionality, Good, Levels, Procedure, Quality, Rules, Rules of thumb, Software Systems, Software testing, Statements, Test cases, Tests
Tuesday, October 4, 2011
Concept of Project Scheduling - What is the root cause for late delivery of software?
After all the important elements of a project are defined, it is time to connect them: a network of all engineering tasks is created that will enable you to get the job done on time. Responsibility for each task is assigned to make sure it is done, and the network is adapted as the work proceeds. The software project manager does this at the project level, and software engineers themselves do it at the individual level.
Project scheduling is important because many tasks run in parallel in a complex system, and the result of each task has a very important effect on the work performed by other tasks. These inter-dependencies are very difficult to understand without project scheduling.
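The inter-dependency point can be sketched with a tiny task network. The task names and durations below are invented for illustration; the sketch computes the earliest possible finish of each task given its prerequisites, which is exactly the information a schedule network makes visible:

```python
# Hypothetical tasks, durations (in days), and prerequisites.
durations = {"design": 5, "code": 10, "test": 4, "docs": 3, "release": 1}
depends_on = {
    "design": [],
    "code": ["design"],
    "docs": ["design"],
    "test": ["code"],
    "release": ["test", "docs"],
}


def earliest_finish(task: str, memo: dict = None) -> int:
    """Earliest finish day of a task: its duration plus the latest
    finish among its prerequisites, computed recursively."""
    if memo is None:
        memo = {}
    if task not in memo:
        start = max((earliest_finish(d, memo) for d in depends_on[task]), default=0)
        memo[task] = start + durations[task]
    return memo[task]


# The project cannot finish before its longest dependency chain does:
# design(5) -> code(10) -> test(4) -> release(1) = day 20.
print(earliest_finish("release"))  # 20
```

A deadline earlier than day 20 is unrealistic no matter how the parallel tasks (like `docs`) are arranged, which is the kind of evidence a project manager can show when pushing back on an imposed deadline.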
The basic reasons why software is delivered late are:
- An unrealistic deadline set by someone outside the software group.
- Changing customer requirements that are not reflected in schedule changes.
- Underestimation of the amount of effort and the number of resources required for the job.
- Predictable or unpredictable risks that were not considered.
- Technical difficulties that are left unseen.
- Human difficulties that are left unseen.
- Lack of communication or mis-communication among project staff.
- Project management not being able to judge that the project is falling behind schedule.
Estimation and scheduling techniques, when applied under the constraint of a defined deadline, give the best estimate; if this best estimate indicates that the deadline is unrealistic, the project manager should guard against undue pressure.
If the management demands that the deadline is unrealistic then following steps should be done:
- A detailed estimate is made, and the estimated effort and duration are evaluated.
- Develop a software engineering strategy using incremental process model.
- Explain to the customer the reasons why the deadline is unrealistic.
- An incremental development strategy is explained and offered as an alternative.
Posted by Sunflower at 10/04/2011 07:12:00 PM
Labels: Causes, Deadline, Effort, Estimates, Estimation, Levels, Network, Project Manager, Project scheduling, Requirements, Schedule, Scheduling, software engineering, Steps, Tasks, Techniques
Wednesday, March 9, 2011
How is data designed at architectural and component level?
Data Design at Architectural Level
Data design translates the data objects defined during the analysis model into data structures at the software component level and, when necessary, into a database architecture at the application level.
Small and large businesses alike contain lots of data, with dozens of databases serving many applications. The aim is to extract useful information from this data environment, especially when the information desired is cross-functional.
Techniques like data mining are used to extract useful information from raw data. However, data mining becomes difficult because of some factors:
- Existence of multiple databases.
- Different structures.
- Degree of detail contained within the databases.
An alternative solution is the concept of data warehousing, which adds an additional layer to the data architecture. A data warehouse encompasses all the data used by a business: it is a large, independent database that serves the set of applications required by the business, and it forms a separate data environment.
Data Design at Component Level
It focuses on the representation of data structures that are directly accessed by one or more software components. The set of principles applicable to data design is:
- Systematic analysis principles applied to function and behavior should also be applied to data.
- All data structures and operations to be performed on each should be identified.
- The content of each data object should be defined through an established mechanism.
- Low-level data design decisions should be deferred until late in the design process.
- A library of data structures and operations that are applied to them should be developed.
- The representation of data structure should only be known to those modules that can directly use the data contained within the structure.
- Software design and programming language should support the specification and realization of abstract data types.
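The last two principles, hiding a structure's representation and realizing abstract data types, can be sketched with a small stack class. The example is illustrative, not from the original text: callers depend only on the operations, so the hidden list could later be swapped for another representation without changing any caller.

```python
class Stack:
    """Abstract data type: callers use push/pop/peek and never
    touch the underlying representation directly."""

    def __init__(self):
        self._items = []  # representation hidden behind a private attribute

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        return self._items[-1]

    def __len__(self):
        return len(self._items)


# Client code sees only the operations, never the list inside.
s = Stack()
s.push(1)
s.push(2)
```

This is the "library of data structures and operations" idea in miniature: the module that owns `Stack` is the only place that knows the data lives in a Python list.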
Posted by Sunflower at 3/09/2011 05:45:00 PM
Labels: Analysis Model, Application, Architectural, Architectural design, Component Level Design, Data, Data Design, Data structure, data warehousing, Databases, Design, Levels, Structures
Tuesday, March 8, 2011
Software Architecture Design - why is it important?
The architecture is not the operational software, rather it is a representation that enables a software engineer to analyze the effectiveness of the design in meeting its stated requirements, consider architectural alternatives at a stage when making design changes is still relatively easy and reduce the risk associated with the construction of the software.
- Software architecture enables and shows communication between all parties interested in the development of a computer based system.
- Early design decisions that have a profound impact on software engineering work are highlighted through the architecture.
- Architecture constitutes a relatively small, intellectually graspable model of how the system is structured and how its components work together.
The architectural design model and the architectural patterns contained within it are transferable. Architectural styles and patterns can be applied to the design of other systems and represent a set of abstractions that enable software engineers to describe architecture in predictable ways.
Software architecture considers two levels of the design pyramid: data design and architectural design. The software architecture of a program or computing system is the structure or structures of the system, which comprise the software components, the externally visible properties of those components, and the relationships among them.
Posted by Sunflower at 3/08/2011 06:08:00 PM
Labels: Architecture, Communication, Components, computers, Design, Impact, Levels, Operational, Patterns, program, Representation, Software, Software Architectue, Stages, Structures
Thursday, February 24, 2011
What are different steps while conducting component level designing?
The following steps represent a typical task set for component-level design when it is applied to an object-oriented system. If you are working in a non-object-oriented environment, the first three steps focus on the refinement of data objects and processing functions identified as part of the analysis model.
STEP 1: Identify all design classes that correspond to the problem domain.
STEP 2: Identify all design classes that correspond to the infrastructure domain.
STEP 3: Elaborate all design classes that are not acquired as reusable components.
In addition to all interfaces, attributes, and operations, design heuristics, i.e., cohesion and coupling, should be considered during elaboration.
- Specify message details when classes or components collaborate.
The structure of the messages passed between objects within the system is shown as component-level design proceeds.
- Identify appropriate interfaces for each component.
Interface is an abstract class that provides a controlled connection between design classes. So interfaces should be identified appropriately.
- Elaborate attributes and define data types and data structures required to implement them.
The programming language to be used in the project is typically a factor in the definition of the data structures and types used to describe attributes. When the component-level design process starts, only the names of attributes are used; as the design proceeds, the UML attribute format is increasingly used.
- Describe processing flow within each operation in detail.
There are two ways to do this: through a UML activity diagram or through programming-language-based pseudocode. Each software component is elaborated over a number of iterations, and in each iteration a stepwise refinement concept is used.
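The stepwise-refinement idea can be illustrated with pseudocode-like Python. The `compute_gross_pay` operation and its overtime rule are invented for illustration; each comment marks one refinement pass that adds detail to the processing flow:

```python
def compute_gross_pay(hours_worked: float, hourly_rate: float) -> float:
    # Refinement step 1: separate regular time from overtime (over 40 hours).
    regular_hours = min(hours_worked, 40.0)
    overtime_hours = max(hours_worked - 40.0, 0.0)

    # Refinement step 2: pay overtime at 1.5x the hourly rate.
    regular_pay = regular_hours * hourly_rate
    overtime_pay = overtime_hours * hourly_rate * 1.5

    # Refinement step 3: combine the partial results into the final flow.
    return regular_pay + overtime_pay
```

An earlier iteration might have been just `return hours_worked * hourly_rate`; each pass refines the operation's processing flow until it is detailed enough to implement.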
STEP 4: Describe persistent data sources and identify the classes required to manage them.
As design elaboration proceeds, additional data should be provided about the structure and organization of these data sources, which are initially specified as part of the architectural design.
STEP 5: Develop and elaborate behavioral representations for a class or component.
In order to depict the externally observable behavior of the system, as well as that of individual analysis classes, UML state diagrams are used. As part of component-level design, modeling the behavior of a design class may sometimes be required. The instantiation of a design class as the program executes is also known as the dynamic behavior of the object. This behavior is impacted by the current state of the object as well as by external events.
STEP 6: Elaborate deployment diagrams to provide additional implementation detail.
When component-level design is being done, the locations of individual components are generally not depicted, in order to keep deployment diagrams simple to read and comprehend.
STEP 7: Factor every component level design representation and always consider alternatives.
The first component-level model that is created will not be as consistent, complete, and accurate as the nth iteration applied to the model. It is necessary to refactor as design work is conducted.
Posted by Sunflower at 2/24/2011 01:41:00 PM
Labels: Attributes, Classes, Component Level Design, Components, Consistent, Design, Domain, Elaboration, Identify, Infrastructure, Interfaces, Levels, Message, Operations, Problems, Refactoring, Steps
Friday, February 18, 2011
Component Level Design - Important views that describe what a component is
An Object Oriented View of Component
- From an object oriented viewpoint, a component is a set of collaborating classes.
- Each class within a component consists of the attributes and operations relevant to it.
- Interfaces enabling the classes to communicate with other design classes are defined.
- The designer accomplishes this starting from the analysis model, elaborating analysis classes and infrastructure classes.
- Analysis and design modeling are both iterative actions. Elaborating an original analysis class may require additional analysis steps, which are then followed by design steps to represent the elaborated design class.
- Elaboration activity is applied to every component.
- After this, elaboration is applied to each attribute, operation, and interface.
- Data structures are specified.
- Algorithms for implementing each piece of logic are designed.
The Conventional View
- A component is a functional element of a program, also called a module.
- It incorporates processing logic, internal data structures, and an interface that enables the component to be invoked and data to be passed to it.
- It resides within the software architecture.
- It serves one of three roles: control component, problem domain component, or infrastructure component.
- Conventional components are also derived from analysis classes.
- The data-flow-oriented element of the analysis model is the basis for the derivation.
- Each module is elaborated.
- Module interface is defined.
- Data structures are defined.
- The algorithm is designed using a stepwise refinement approach.
- Design elaboration continues until sufficient detail is provided.
Process Related View
- The above two approaches assume that the component is designed from scratch.
- Here the emphasis is on building systems that make use of existing software.
- As the software architecture is developed, components or design patterns are chosen from a catalog and used to populate the architecture.
Posted by Sunflower at 2/18/2011 11:52:00 AM
Labels: Algorithms, Analysis, Analysis Model, Approaches, Attributes, Classes, Component Level Design, Components, Design, Elaboration, Interfaces, Levels, Modules, Object Oriented
Friday, December 17, 2010
What is Long Session Soak Testing?
When an application is used for long periods of time each day, the above approach should be modified, because the soak test driver is not logins and transactions per day, but transactions per active user per day. This type of situation occurs in internal systems, such as ERP and CRM systems, where users log in and stay logged in for many hours, executing a number of business transactions during that time. A soak test for such a system should emulate multiple days of activity in a compacted time frame rather than just pump multiple days' worth of transactions through the system.
Long session soak tests should run with realistic user concurrency, but the focus should be on the number of transactions processed. VUGen scripts used in long session soak testing may need to be more sophisticated than short session scripts, as they must be capable of running a long series of business transactions over a prolonged period of time.
The duration of most soak tests is often determined by the available time in the test lab. Many applications, however, require extremely long soak tests. Any application that must run uninterrupted for extended periods of time may need a soak test that covers all of the activity for a period agreed to by the stakeholders. Most systems have a regular maintenance window, and the time between such windows is usually a key driver for determining the scope of a soak test.
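The sizing rule described above (transactions per active user per day, compacted into the test window) can be expressed as a small calculation. The function name and the sample figures below are hypothetical, for illustration only:

```python
# Hypothetical sizing sketch: for a long-session soak test the driver is
# transactions per active user per day, not logins per day.
def soak_transactions(active_users, tx_per_user_per_day, days_emulated):
    """Total transactions to pump through the system in a compacted time frame."""
    return active_users * tx_per_user_per_day * days_emulated


# e.g. 200 concurrent ERP users, 150 business transactions each per day,
# emulating 5 working days of activity over a weekend test window:
print(soak_transactions(200, 150, 5))  # 150000
```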
Posted by Sunflower at 12/17/2010 07:29:00 PM
Labels: Databases, Functions, Levels, Load, Long session, Memory, Monitor, Multi-tired system, Problems, Response time, Sessions, Soak test, Soak Testing, Software testing, Test cases, Transactions
Thursday, December 16, 2010
Overview of Soak testing.
Soak testing is running a system at high levels of load for prolonged periods of time. A soak test would normally execute several times more transactions in an entire day than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed. It is also possible that a system may stop working after a certain number of transactions have been processed, due to memory leaks or other defects. Soak tests provide an opportunity to identify such defects, whereas load tests and stress tests may not find such problems because of their relatively short duration. A soak test should run for as long as possible, given the limitations of the testing situation; for example, weekends are often an opportune time for a soak test.
Some typical problems identified during soak tests are:
- Serious memory leaks that would eventually result in a memory crisis.
- Failure to close connections between tiers of a multi-tiered system under some circumstances, which could stall some or all modules of the system.
- Failure to close database cursors under some conditions, which would eventually result in the entire system stalling.
- Gradual degradation in the response time of some functions as internal data structures become less efficient during a long test.
Apart from monitoring response time, it is also important to measure CPU usage and available memory. If a server process needs to be available for the application to operate, it is often worthwhile to record its memory usage at the start and end of the soak test. It is also important to monitor the internal memory usage of facilities such as Java Virtual Machines, where applicable.
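The start-versus-end memory comparison described above can be sketched with Python's standard-library `tracemalloc`; the simulated leak loop is purely illustrative, and a real test lab would also sample OS-level CPU and resident memory (for example with a tool such as psutil):

```python
# Minimal sketch of the monitoring idea: record memory usage at the start and
# end of a soak run and report the growth. tracemalloc (stdlib) tracks Python
# allocations only; OS-level RSS and CPU should be sampled separately.
import tracemalloc

tracemalloc.start()
start, _ = tracemalloc.get_traced_memory()

leak = []                      # simulated slow leak over many "transactions"
for _ in range(10_000):
    leak.append("x" * 100)

end, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

growth = end - start
print(f"memory growth during run: {growth} bytes")
```

A steadily growing start-to-end delta over a long run is the signal that warrants investigation.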
Posted by Sunflower at 12/16/2010 06:52:00 PM
Labels: Application, Databases, Functions, Levels, Load, Memory, Monitor, Multi-tired system, Problems, Response time, Soak test, Soak Testing, Software testing, Test cases, Transactions
Saturday, December 4, 2010
What comprises Test Ware Development : Test Plan - Unit Test Plan
The test strategy identifies multiple test levels, which are going to be performed for the project. Activities at each level must be planned well in advance and it has to be formally documented. Based on the individual plans only, the individual test levels are carried out.
The plans are to be prepared by experienced people only. In all test plans, the Entry-Task-Validation-Exit (ETVX) criteria are to be mentioned. Entry means the entry point to that phase. Task is the activity that is performed. Validation is the way in which progress, correctness, and compliance are verified for that phase. Exit states the completion criteria of that phase, once validation is done.
ETVX is a modeling technique for developing world-level and atomic-level models. It is a task-based model where the details of each task are explicitly defined in a specification table against each element, i.e. Entry, Exit, Task, Feedback In, Feedback Out, and measures.
There are two types of cells: unit cells and implementation cells. Implementation cells are basically unit cells containing further tasks. A purpose is also stated, and the viewer of the model may also be defined, e.g., management or the customer.
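The ETVX specification table described above can be represented as a simple record. This is an illustrative sketch; the phase name and criteria strings are hypothetical examples, not prescribed by ETVX itself:

```python
# Illustrative sketch: an ETVX specification for one test phase, expressed as
# a record whose fields follow the ETVX elements described in the text.
from dataclasses import dataclass


@dataclass
class EtvxPhase:
    name: str
    entry: list        # entry criteria for the phase
    task: list         # activities performed in the phase
    validation: list   # how progress/correctness/compliance are verified
    exit: list         # completion criteria, checked after validation


unit_test_phase = EtvxPhase(
    name="Unit Testing",
    entry=["code compiled", "unit test plan approved"],
    task=["execute unit test cases", "log defects"],
    validation=["peer review of results", "coverage targets met"],
    exit=["all planned cases executed", "open defects triaged"],
)

print(unit_test_phase.name, len(unit_test_phase.task))  # Unit Testing 2
```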
Types of Test Plan
Unit Test Plan (UTP)
The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it, and it is distributed to the individual testers. It contains the following sections:
- What is to be tested?
The unit test plan must clearly specify the scope of unit testing. Normally, the basic input/output of the units, along with their basic functionality, will be tested. In this case, the input units will mostly be tested for format, alignment, accuracy, and totals.
- Sequence of testing
The sequence of test activities to be carried out in this phase is listed in this section. This includes whether to execute positive or negative test cases first, whether to execute test cases based on priority, whether to execute test cases based on test groups, etc.
- Basic functionality of units
The independent functionalities of the units are tested, excluding any communication between the unit and other units. The interface part is out of scope at this test level.
Apart from these, the following sections are also addressed:
- Unit testing tools
- Priority of program units
- Naming convention for test cases
- Status reporting mechanism
- Regression test approach
- ETVX criteria
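A "sequence of testing" rule like the one described above can be encoded as a sort key. This sketch is hypothetical (the case IDs, priorities, and the specific rule "positive first, then by priority" are invented for illustration):

```python
# Hypothetical sketch of a 'sequence of testing' rule from a unit test plan:
# run positive cases before negative ones, and higher priority (lower number)
# first within each group.
cases = [
    {"id": "TC3", "kind": "negative", "priority": 2},
    {"id": "TC1", "kind": "positive", "priority": 1},
    {"id": "TC2", "kind": "positive", "priority": 2},
]

# False sorts before True, so positive cases come first; ties break on priority.
ordered = sorted(cases, key=lambda c: (c["kind"] != "positive", c["priority"]))
print([c["id"] for c in ordered])  # ['TC1', 'TC2', 'TC3']
```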
Posted by Sunflower at 12/04/2010 12:48:00 PM
Labels: ETVX, Functionality, Levels, Plan, Planning, Sections, Software testing, Strategy, Test Planning, Test ware development, Unit test plan, Units, Validation
Saturday, October 16, 2010
Validation phase - Integration Testing - Top Down Integration and Bottom Up Integration
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit tested components and build a program structure that has been dictated by design. There are two methods of integration testing:
- Top-down integration approach
- Bottom-up integration approach
Top-down Integration Approach
It is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy beginning with the main control module. Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
- The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
- Depending on the integration approach, selected subordinate stubs are replaced one at a time with actual components.
- Tests are conducted as each component is integrated.
- On completion of each set of tests, another stub is replaced with the real component.
- Regression testing may be conducted to ensure that new errors have not been introduced.
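The top-down steps above can be sketched in miniature. The module names (`main_control`, `tax_service_stub`, `real_tax_service`) and the 20% tax rule are hypothetical, chosen only to show a stub standing in for a subordinate component and then being replaced:

```python
# Minimal sketch: top-down integration, where the main control module is
# exercised first and a stub substitutes for a subordinate component.
def tax_service_stub(amount):
    """Stub for the real tax component: returns a canned answer."""
    return 0.0


def main_control(amount, tax_service=tax_service_stub):
    # The main control module under test, driving its subordinate.
    return amount + tax_service(amount)


# Tests are conducted with the stub in place...
assert main_control(100.0) == 100.0


# ...then the stub is replaced with the real component and tests are re-run
# (regression testing guards against newly introduced errors).
def real_tax_service(amount):
    return amount * 0.2


assert main_control(100.0, tax_service=real_tax_service) == 120.0
print("top-down integration steps passed")
```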
Bottom-Up Integration Approach
It begins construction and testing with atomic modules. Because components are integrated from bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
- Low-level components are combined into clusters that perform a specific software subfunction.
- A driver is written to coordinate test case input and output.
- The cluster is tested.
- Drivers are removed and clusters are combined moving upward in the program structure.
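The bottom-up steps can be sketched the same way. The atomic modules (`parse_record`, `total`) and the comma-separated input format are hypothetical; the point is that the driver, not a stub, coordinates test input and output for the cluster:

```python
# Minimal sketch: bottom-up integration, where low-level components are
# combined into a cluster and a driver coordinates test case input and output.
def parse_record(line):            # atomic module 1
    return line.strip().split(",")


def total(fields):                 # atomic module 2
    return sum(int(f) for f in fields)


def cluster_driver(test_inputs):
    """Driver: feeds test case input to the cluster and collects the output."""
    return [total(parse_record(line)) for line in test_inputs]


print(cluster_driver(["1,2,3", "10,20"]))  # [6, 30]
```

Once the cluster passes, the driver is removed and the cluster is combined upward into the program structure.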
Posted by Sunflower at 10/16/2010 12:23:00 PM
Labels: Approaches, Bottom Up, Components, Drivers, Function, Integration, Integration Testing, Levels, Phases, Software testing, Structure, Stubs, Techniques, Test cases, Top Down, Validation phase
Thursday, May 20, 2010
Verification (VER) Process Area in Capability Maturity Model (CMMi)
An Engineering Process Area at Maturity Level 3. The purpose of Verification (VER) is to ensure that selected work products meet their specified requirements.
Verification includes verification of the product and intermediate work products against all selected requirements, including customer, product, and product component requirements. Throughout the process areas, where we use the terms product and product component, their intended meanings also encompass services and their components.
Verification is inherently an incremental process because it occurs throughout the development of the product and work products, beginning with verification of the requirements, progressing through the verification of the evolving work products, and culminating in the verification of the completed product.
Specific Practices by Goal
SG 1 Prepare for Verification
Up-front preparation is necessary to ensure that verification provisions are embedded in product and product component requirements, designs, developmental plans, and schedules. Verification includes selection, inspection, testing, analysis, and demonstration of work products. Methods of verification include, but are not limited to, inspections, peer reviews, audits, walkthroughs, analyses, simulations, testing, and demonstrations.
- SP 1.1 Select Work Products for Verification.
The work products to be verified may include those associated with maintenance, training, and support services. The work product requirements for verification are included with the verification methods.
- SP 1.2 Establish the Verification Environment.
An environment must be established to enable verification to take place. The verification environment can be acquired, developed, reused, modified, or a combination of these, depending on the needs of the project. The type of environment required will depend on the work products selected for verification and the verification methods used.
- SP 1.3 Establish Verification Procedures and Criteria.
The verification procedures and criteria should be developed concurrently and iteratively with the product and product component designs. Verification criteria are defined to ensure that the work products meet their requirements.
SG 2 Perform Peer Reviews
Peer reviews involve a methodical examination of work products by the producer's peers to identify defects for removal and to recommend other changes that are needed. The peer review is an important and effective verification method implemented via inspections, structured walkthroughs, or a number of other collegial review methods.
- SP 2.1 Prepare for Peer Reviews.
Preparation activities for peer reviews typically include identifying the staff who will be invited to participate in the peer review of each work product; identifying the key reviewers who must participate; preparing and updating any materials that will be used during the peer reviews, such as checklists and review criteria; and scheduling the peer reviews.
- SP 2.2 Conduct Peer Reviews.
One of the purposes of conducting a peer review is to find and remove defects early. Peer reviews are performed incrementally as work products are being developed. These reviews are structured and are not management reviews. Peer reviews may be performed on key work products of specification, design, test, and implementation activities and specific planning work products.
- SP 2.3 Analyze Peer Review Data.
Analyze data about preparation, conduct, and results of the peer reviews.
SG 3 Verify Selected Work Products
The verification methods, procedures, and criteria are used to verify the selected work products and any associated maintenance, training, and support services using the appropriate verification environment.
- SP 3.1 Perform Verification.
Verifying products and work products incrementally promotes early detection of problems and can result in the early removal of defects. The results of verification save considerable cost of fault isolation and rework associated with troubleshooting problems.
- SP 3.2 Analyze Verification Results.
Analyze the results of all verification activities. Actual results must be compared to established verification criteria to determine acceptability. The results of the analysis are recorded as evidence that verification was conducted.
Posted by Sunflower at 5/20/2010 05:18:00 PM
Labels: Capability Maturity Model, CMMi, Goals, Levels, Organization, Practices, Processes, software engineering, Software Process, Validation, verification Process area
Monday, May 17, 2010
Supplier Agreement Management (SAM) Process Area in CMMi
The purpose of Supplier Agreement Management (SAM) is to manage the acquisition of products from suppliers. It is a Project Management process area at Maturity Level 2.
The Supplier Agreement Management process area involves the following:
- Determining the type of acquisition that will be used for the products to be acquired.
- Selecting suppliers.
- Establishing and maintaining agreements with suppliers.
- Executing the supplier agreement.
- Monitoring selected supplier processes.
- Evaluating selected supplier work products.
- Accepting delivery of acquired products.
- Transitioning acquired products to the project.
Suppliers may take many forms depending on business needs, including in-house vendors (i.e., vendors that are in the same organization but are external to the project), fabrication capabilities and laboratories, and commercial vendors. A formal agreement is established to manage the relationship between the organization and the supplier. A formal agreement is any legal agreement between the organization (representing the project) and the supplier.
Specific Practices by Goal
SG 1 Establish Supplier Agreements
Agreements with the suppliers are established and maintained.
- SP 1.1 Determine Acquisition Type.
Determine the type of acquisition for each product or product component to be acquired. There are many different types of acquisition that can be used to acquire products and product components that will be used by the project.
- SP 1.2 Select Suppliers.
Select suppliers based on an evaluation of their ability to meet the specified requirements and established criteria. Criteria should be established to address factors that are important to the project. Examples of such factors include the geographical location of the supplier, the supplier's performance record on similar work, engineering capabilities, staff and facilities available to perform the work, and prior experience in similar applications.
- SP 1.3 Establish Supplier Agreements.
When integrated teams are formed, team membership should be negotiated with suppliers and incorporated into the agreement. The agreement should identify any integrated decision making, reporting requirements (business and technical), and trade studies requiring supplier involvement.
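One common way to apply the selection criteria named in SP 1.2 is a weighted scoring matrix. The factors, weights, vendor names, and ratings below are entirely hypothetical; CMMI does not prescribe this (or any) scoring scheme:

```python
# Illustrative sketch: scoring candidate suppliers against weighted
# evaluation factors (weights and ratings are made-up examples).
weights = {"location": 0.1, "track_record": 0.4, "capability": 0.3, "experience": 0.2}

suppliers = {
    "VendorA": {"location": 7, "track_record": 9, "capability": 8, "experience": 6},
    "VendorB": {"location": 9, "track_record": 6, "capability": 7, "experience": 8},
}


def score(ratings):
    # Weighted sum of the ratings over all evaluation factors.
    return sum(weights[f] * ratings[f] for f in weights)


best = max(suppliers, key=lambda s: score(suppliers[s]))
print(best)  # VendorA
```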
SG 2 Satisfy Supplier Agreements
Agreements with the suppliers are satisfied by both the project and the supplier.
- SP 2.1 Execute the Supplier Agreement.
Perform activities with the supplier as specified in the supplier agreement. Typical work products are supplier progress reports and performance measures, supplier review materials and reports, action items tracked to closure and documentation of product and document deliveries.
- SP 2.2 Monitor Selected Supplier Processes.
Select, monitor, and analyze processes used by the supplier. The selection must consider the impact of the supplier's processes on the project. On larger projects with significant subcontracts for development of critical components, monitoring of key processes is expected. For most vendor agreements where a product is not being developed or for smaller, less critical components, the selection process may determine that monitoring is not appropriate. Between these extremes, the overall risk should be considered in selecting processes to be monitored.
- SP 2.3 Evaluate Selected Supplier Work Products.
The scope of this specific practice is limited to suppliers providing the project with custom-made products, particularly those that present some risk to the program due to complexity or criticality. The intent of this specific practice is to evaluate selected work products produced by the supplier to help detect issues as early as possible that may affect the supplier's ability to satisfy the requirements of the agreement.
- SP 2.4 Accept the Acquired Product.
Ensure that the supplier agreement is satisfied before accepting the acquired product. Acceptance reviews and tests and configuration audits should be completed before accepting the product as defined in the supplier agreement.
- SP 2.5 Transition Products.
Transition the acquired products from the supplier to the project. Before the acquired product is transferred to the project for integration, appropriate planning and evaluation should occur to ensure a smooth transition.
Posted by Sunflower at 5/17/2010 12:42:00 PM
Labels: Capability Maturity Model, CMMi, Goals, Levels, Maturity levels, Organization, Practices, Process Area, SAM, software engineering, Supplier Agreement Management, Suppliers