
Thursday, August 29, 2013

How can traffic shaping help in congestion management?

- Traffic shaping is an important part of the congestion avoidance mechanism, which in turn comes under congestion management.
- If the traffic can be controlled, we can obviously maintain control over network congestion.
A congestion avoidance scheme can be divided into the following two parts:
  1. The feedback mechanism and
  2. The control mechanism
- The feedback mechanism is also known as the network policy, and the control mechanism is known as the user policy.
- Of course there are other components as well, but these two are the most important.
- While analyzing one component, it is simply assumed that the other components are operating at optimum levels.
- At the end, it has to be verified whether the combined system works as expected under various types of conditions.

The network policy consists of the following three algorithms:

1. Congestion Detection: 
- Before information can be sent as feedback, the network's load level or state must be determined.
- Generally, there are n possible states of the network, and at a given time the network is in one of them.
- The congestion detection algorithm maps these states into the possible load levels.
- In the simplest case there are two load levels, namely under-load (below the knee point) and overload (above the knee point).
- A k-ary version of this function would produce k load levels.
- The congestion detection function works on three criteria: link utilization, queue lengths, and processor utilization.

2. Feedback Filter: 
- After the load level has been determined, it must be verified that the state lasts for a sufficiently long duration before it is signaled to the users.
- Only in that case is feedback about the state actually useful, because the duration is long enough for it to be acted upon.
- A state that changes rapidly, on the other hand, creates confusion: it has already passed by the time the users learn of it.
- Such states give misleading feedback.
- A low-pass filter function serves the purpose of filtering out such transient states and passing on the durable ones.

3. Feedback Selector: 
- After the state has been determined, this information has to be passed to the users so that they can contribute to cutting down the traffic.
- The purpose of the feedback selector function is to identify the users to whom this information should be sent.

The user policy consists of the following three algorithms: 

1. Signal Filter: 
- The users to whom the network sends feedback signals interpret them only after accumulating a number of signals.
- The network is probabilistic in nature, so the signals may not all agree: according to some signals the network might be under-loaded, while according to others it might be overloaded.
- These signals have to be combined to decide the final action.
- Based upon the percentage of each kind of signal, an appropriate weighting function may be applied.

2. Decision Function: 
- Once the load level of the network is known to the user, it has to be decided whether or not to increase the load.
- This function has two parts: the first determines the direction of the change and the second decides the amount.
- The first part is the decision function proper, and the second is the increase/decrease algorithm.

3. Increase/Decrease Algorithm: 
- Control forms the major part of the control scheme.
- The control measure to be taken is based upon the feedback obtained.
- A well-chosen increase/decrease policy helps in achieving both fairness and efficiency; a classic example is sketched below.
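
A classic instance of such an increase/decrease policy is additive-increase/multiplicative-decrease (AIMD), best known from TCP congestion control. The sketch below is minimal and illustrative: the step sizes, the assumed knee capacity, and the binary `overloaded` feedback flag are assumptions, not part of any particular protocol.

```python
# Minimal sketch of an additive-increase/multiplicative-decrease (AIMD)
# policy. The constants and the binary feedback flag are illustrative.
ADDITIVE_STEP = 1.0        # added when the network signals under-load
DECREASE_FACTOR = 0.5      # multiplier applied when it signals overload

def adjust_load(current_load: float, overloaded: bool) -> float:
    """Return the new offered load given binary feedback from the network."""
    if overloaded:
        # Back off multiplicatively to restore efficiency quickly.
        return current_load * DECREASE_FACTOR
    # Probe gently for spare capacity; additive increase promotes fairness.
    return current_load + ADDITIVE_STEP

# Two unequal senders converge toward a fair share of an assumed capacity.
load_a, load_b = 10.0, 2.0
for _ in range(20):
    overloaded = (load_a + load_b) > 12.0   # assumed knee capacity
    load_a = adjust_load(load_a, overloaded)
    load_b = adjust_load(load_b, overloaded)
print(round(load_a, 2), round(load_b, 2))   # the two loads move closer together
```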


Saturday, June 16, 2012

Reverse Engineering - an activity involved in the software re-engineering process model.


The software re-engineering process model is a generic process meant to raise the standard of poor code that is currently unacceptable. This model has 6 major stages, namely:
  1. Inventory analysis
  2. Documentation reconstruction
  3. Reverse engineering
  4. Code restructuring
  5. Data restructuring
  6. Forward engineering
This article is all about the third stage, i.e., “reverse engineering”, which is an important concept in itself. 
Re-engineering is usually required when some of the subsystems of a larger software system or application need frequent maintenance. The re-engineered system is then restructured and re-documented. 

What is Reverse Engineering?


- Reverse engineering is a very important factor in the success of the re-engineering process. 
- Reverse engineering can be considered a process of recovering the design of the software system or application that is to be re-engineered. 
- This step involves analysis of the software program in an effort to obtain a representation of the program at a level of abstraction higher than the source code.  
- The software system or application is analyzed so as to understand its design, specifications, and requirements. 
- Reverse engineering is a process in its own right.
- In some cases reverse engineering may be used for specifying a software system before it is implemented again. 
- The reverse engineering process makes use of program understanding tools like:
  1. Browsers
  2. Cross-reference generators and so on.

Levels in Reverse Engineering


The reverse engineering process takes effect through the following levels:

1. Abstraction level: The design information of the software system or application is derived at the highest level of abstraction possible.

2. Completeness: The level of detail of the system recovered at the chosen abstraction level.

3. Interactivity: The degree of human interaction with the automated reverse engineering tools.

4. Directionality: It can be either:
(a) One way: All the extracted information is given to the software engineer who is doing the maintenance.
(b) Two way: All the extracted information is fed to a re-engineering tool, which then regenerates the old software program.

5. Extract abstraction: The processing specifications are obtained from the old source.


Stages in Reverse Engineering Process


The reverse engineering process, as described by Sommerville, goes through the following stages (a toy sketch of the automated-analysis stage follows this list):
1. The system to be re-engineered is subjected to automated analysis.
2. It is then manually annotated.
3. With the system information obtained, a whole new set of documentation is generated, containing:
 (a) Program structure diagrams
 (b) Data structure diagrams and
 (c) Traceability matrices.
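
As a toy illustration of the automated-analysis stage, the sketch below uses Python's standard `ast` module to recover a simple structural outline (classes and their methods) from source code; real reverse-engineering tools are of course far more elaborate, and the sample source is hypothetical.

```python
# Toy automated analysis: recover a structural outline from source code.
import ast

source = """
class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...

def audit(accounts): ...
"""

tree = ast.parse(source)
for node in tree.body:                        # top-level definitions only
    if isinstance(node, ast.ClassDef):
        methods = [n.name for n in node.body if isinstance(n, ast.FunctionDef)]
        print(f"class {node.name}: methods={methods}")
    elif isinstance(node, ast.FunctionDef):
        print(f"function {node.name}")
```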


Activities in Reverse Engineering Process


There are 3 basic activities involved in the reverse engineering process:

1. Understanding the process: In order to understand the procedural abstractions as well as the functionality, the source code is analyzed at the following levels:
  (a) System
  (b) Program
  (c) Component
  (d) Statement and
  (e) Pattern

2. Understanding the data: The internal data structures and database structures are analyzed.
3. User interfaces: The basic actions processed by the interface, the system's behavioral response to those actions, and the equivalency of the interfaces are analyzed.

When is Reverse Engineering Preferred?


- Reverse engineering is usually preferred when the specifications and design of the system are required to carry out program maintenance activities.
- The re-engineering process is preceded by reverse engineering. 
- To put it simply, reverse engineering can be thought of as a process that goes back through the whole development cycle.
- UML is one notation that supports reverse engineering. 
- Different people have their own perceptions of reverse engineering. 
- It can also be thought of as an inversion of the waterfall model of software development. 


Tuesday, May 1, 2012

How does penetration testing tool emphasize on data base security?


The database is one of the critical elements of a web application and is crucial for its proper functioning. All of the sensitive information regarding the functioning of the application, as well as the user data, is stored in the database. 

This data is of great use to an attacker, who can steal it and use it to their advantage. Therefore, it becomes absolutely necessary that the database of an application be given adequate security coverage.

Penetration testing is one of the ways to ensure database security. Most of us are familiar with what penetration testing actually is. In this piece of writing we discuss how penetration testing tools emphasize database security. 

About Penetration Testing and Database Security


- Penetration testing is a testing methodology adopted for testing the security of a computer network or system against malicious attacks.
- It is quite a decent measure for evaluating the security level of a computer network: the network is bombarded with simulated attacks that mimic malicious attacks from outside as well as inside attackers.
- Penetration testing is concerned with the security of the database both from outside attackers, who do not hold any authorized access to the computer system or network, and from inside attackers, who do have that access but only to a certain level. 
- The whole process of penetration testing involves performing an active analysis using the penetration testing tools.
- This active analysis brings about an assessment of all the potential vulnerabilities of the whole database system that result from the poor security and configuration level of the application. 
- This active analysis is deemed successful only if it has been carried out from the viewpoint of a malicious attacker and is concerned with the active exploitation of the recognized vulnerabilities.
- Database security depends upon the effectiveness of the testing, which in turn is affected by the effectiveness of the tools employed in the testing. 
- The tools indeed affect database security: the more effective the tools, the more the security mechanisms will improve.

How Does Penetration Testing Emphasize Database Security?


- The first step in penetration testing of a database is always the identification and recognition of vulnerabilities and security leaks (a toy probe is sketched after this list). 
- A number of penetration tests are then carried out on that particular application database, while the information is simultaneously coupled with an active assessment of the risks and threats associated with the database, using the penetration testing tools.
- A whole lot of effective tools are designed to reduce the effect of these vulnerabilities.
- Penetration testing tools have been recognized as an important component of database security audits.
- There are several other reasons why penetration testing tools hold good for database security:
  1. They provide assistance in assessing the operational and business impact of attacks on the database system.
  2. They test the effectiveness of the security defenders in detecting and responding to attacks.
  3. They provide evidence in support of the investments that need to be made in database security.
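
As a simple illustration of the identification step, a database penetration test often starts by probing inputs for SQL injection. The sketch below uses Python's third-party `requests` library; the target URL, parameter names, and detection heuristics are all hypothetical, and such probes must only be run against systems you are authorized to test.

```python
# Hedged sketch: probe a login form for SQL injection. The endpoint,
# parameters, and heuristics are hypothetical; test only with authorization.
import requests

TARGET = "http://testsite.example/login"          # hypothetical endpoint
PAYLOADS = ["' OR '1'='1", "'; --", '" OR ""="']

for payload in PAYLOADS:
    resp = requests.post(TARGET, data={"username": payload, "password": "x"})
    # Crude heuristics: database error text or an unexpected success page
    # suggest the input reaches the SQL query unsanitized.
    if "SQL syntax" in resp.text or "Welcome" in resp.text:
        print(f"possible injection with payload {payload!r}")
```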



How do penetration testing tools emphasize security subsystems?


Security is one of the important contributing factors in the success of a software system or application. The security level of the software system or application also influences the security of the users who use it. The higher the security of a system, the more secure it is to use. 

Since security plays a very important role in the computer world, there has to be some strategy or testing methodology that can judge or assess the security levels and mechanisms of software systems and applications.
Do we have any such testing methodology? Yes, of course we have: penetration testing! 

About Penetration Testing and Security Sub Systems


- This software testing methodology has the answers to all our security-related issues.
- The security mechanism of a software system or application is composed of many sub-mechanisms, commonly addressed as security subsystems. 
- These security subsystems are the security components that make up the whole security model of the system.
- These subsystems ensure that applications are not able to access resources without being authorized and authenticated.
- Furthermore, they keep track of the security policies and user accounts of the system. 
- There is a subsystem called the LSA (Local Security Authority) which is responsible for maintaining all the information and details about the local security of the system. 
- The interactive user authentication services are provided by the security subsystems.
- The tokens containing user information regarding security privileges are also generated by these subsystems (a simplified model of such a check is sketched after the list below). 
- The audit settings and policies are also managed by the security subsystems. 
- The following aspects are identified by the subsystems:
1. Domain
2. Who can access the system?
3. Who has what privileges?
4. What security auditing is to be performed
5. Memory quota
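
The sketch below is a highly simplified model of the token check described above: a token issued at logon carries the user's privileges, and each sensitive operation is checked and audited against it. The field names and privilege names are illustrative assumptions, not any specific operating system's API.

```python
# Simplified model of a security-subsystem access check; all names are
# illustrative and do not reflect a real operating system API.
from dataclasses import dataclass, field

@dataclass
class AccessToken:
    user: str
    privileges: set = field(default_factory=set)

def check_access(token: AccessToken, required: str) -> bool:
    """Grant the operation only if the token carries the privilege."""
    allowed = required in token.privileges
    # A real security subsystem would also write an audit record here.
    print(f"AUDIT: {token.user} requested {required}: "
          f"{'granted' if allowed else 'denied'}")
    return allowed

token = AccessToken(user="alice", privileges={"read", "audit"})
check_access(token, "read")       # granted
check_access(token, "shutdown")   # denied
```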

How Do Penetration Testing Tools Emphasize Security Subsystems?


So, for the system to have better security at the surface, it is important that security at the subsystem level is not overlooked. All these considerations make the security subsystems essential. 
Therefore, to improve the overall quality of the security mechanisms, these subsystems should be tested. 

- The penetration testing tools emphasize the security subsystems in the same way as they emphasize network security.
- Penetration testing was first adopted for testing the security of a computer network or system against malicious attacks, providing a way to evaluate the security level of the network by bombarding it with simulated attacks mimicking malicious attacks from outside as well as inside attackers. 
- The whole process of penetration testing is driven by an active analysis, which involves an assessment of all the potential vulnerabilities of the security subsystems that result from their poor security and configuration level. 
- Apart from this, flaws in both the hardware and the software components contribute to these vulnerabilities, rather than only operational weaknesses. 
- Security at the subsystem level depends upon the effectiveness of the testing. 
- And the testing in turn is affected by the effectiveness of the tools employed in it. 
- The tools indeed affect the subsystems' security: if the tools are reliable and efficient in finding vulnerabilities, there will obviously be more improvement in the security mechanisms. 
- A whole lot of effective tools are designed to reduce the effect of these vulnerabilities.




Monday, March 26, 2012

What is the difference between quality assurance and testing?

Quality assurance and testing are processes that together keep a check on the quality of a software system or application. When implemented together, these two processes ensure that the quality of the software system or application is kept as close to 100 percent as possible.

There is no software or application that can boast 100 percent customer-satisfying quality. This article focuses on these two processes and the differences between them. We discuss the differences because people often confuse the two.

QUALITY ASSURANCE

- The term “quality assurance” is self-explanatory.

- From the term alone we can make out that it denotes systematic and planned activities implemented in a quality system so that a check over its quality requirements is maintained.

- It involves the following processes:
1. Systematic measurement of the quality of the software system or application.
2. Comparison of the quality of the software system or application with pre-defined quality standards.
3. Monitoring of the processes.
4. An associated feedback loop for error prevention.

- A typical quality assurance process also keeps a check on the quality of the tools, assemblies, equipment, testing environment, and the production, development, and management processes involved in software testing.

- The quality of a software product is defined by the clients or the customers rather than by society at large.

- One thing to always keep in mind is that the quality of a software system or application cannot be summed up with adjectives like “poor” or “good”, since quality may be high in one aspect of the system and low in another.

PRINCIPLES OF QUALITY ASSURANCE
The whole process of quality assurance is guided by the two following principles:

1. Fit for purpose:
The software product should fulfil the purpose for which it was made, and
2. Right first time:
Mistakes should be prevented so that things are done correctly the first time, rather than corrected afterwards.

TESTING PROCESSES EMPLOYED IN SOFTWARE TESTING & QUALITY ASSURANCE
Below are the testing processes employed for both software testing and quality assurance:

1. Testing approaches:
(a) White box testing
(b) Black box testing
(c) Grey box testing
(d) Visual testing

2. Testing levels:
(a) By test target:
(i) Unit testing
(ii) Integration testing
(iii) System testing
(b) By objective:
(i) Regression testing
(ii) User acceptance testing
(iii) Alpha and beta testing

3. Non functional testing:
(a) Performance testing
(b) Usability testing
(c) Security testing
(d) Internationalization and localization
(e) Destructive testing

4. Testing processes:
(a) Waterfall model or CMMI
(b) Extreme or agile development model
(c) Sample testing cycle

5. Automated testing using tools and measurements

In fact, the two processes serve the same goal from different perspectives: software testing is aimed at eliminating bugs from the software system, while quality assurance takes into consideration the overall quality of the software system.

In contrast to quality assurance as a whole, software testing is the way quality assurance is implemented, i.e., it provides the clients or the customers with information regarding the quality of the software system or application. Testing is done to make sure of the following points (a minimal sketch follows the list):


1. The product meets the specified requirements.
2. It works as intended.
3. It is implemented with the same characteristics.
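
A minimal automated check of the first two points might look like the sketch below, where the `add` function stands in for any real feature under test.

```python
# Minimal sketch: automated tests confirming that a feature meets its
# specified requirement and works as intended. `add` is a stand-in.
import unittest

def add(a, b):
    return a + b

class AddRequirementTests(unittest.TestCase):
    def test_meets_specified_requirement(self):
        self.assertEqual(add(2, 3), 5)        # requirement: correct sum

    def test_works_as_intended_on_negatives(self):
        self.assertEqual(add(-1, -1), -2)     # intended behavior holds

if __name__ == "__main__":
    unittest.main()
```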


Software testing can be implemented at any point in the development process, unlike quality assurance, which should be implemented right from the beginning to ensure maximum quality.


Tuesday, March 6, 2012

What are the different methods and techniques used for security testing at the white box level?

It requires a great deal of effort to achieve a good level of security. To obtain good security one has to follow a proper approach to the testing. As with any other kind of software testing, for security testing one needs to decide who will carry out the testing and what approach is to be followed. Carrying out security testing at the white box level is not at all easy, as it is very complex and detailed.

APPROACHES FOR SECURITY TESTING AT WHITE BOX LEVEL
So far, two basic approaches have been identified for security testing at the white box level; they are described below:

1. Functional Security Testing
- This approach to testing is usually followed by standard testing organizations.
- It deals with checking the features and functionalities of the software system or application to determine whether or not they work as stated.
- This is the classic approach to security testing.

2. Risk Based Security Testing
- This approach is usually followed by the quality assurance staff.
- It is quite difficult compared to the previously mentioned approach.
- The main problem here is the expertise of the testers, since this approach calls for great skill in testing.
- Firstly, security tests that can fully exploit the vulnerabilities are difficult to design, since this requires the tester to think like an attacker.
- Secondly, the security tests do not exploit the security of the software system or application directly, and this makes it hard to observe the outcome of a security test.

ABOUT SECURITY TESTING AT THE WHITE BOX LEVEL

1. A security test carried out without much precaution and logic can make the whole security testing go wrong, and this in turn can force the software tester to carry out even more complicated test processes to counteract the situation.

2. Risk based testing requires more skills than experience.

3. Most of the security testing methodologies and techniques that we use at the white box level are traditional, and some of them have become outdated.

4. On the other hand, the security exploitation techniques used by attackers become more sophisticated day by day, and the traditional methods used to cope with these issues are becoming extinct.

5. Security testing at both the black box level and the white box level aims at a better understanding of the software system or application, but different approaches are followed at the two levels.

6. The approach followed is decided on the basis of access to the source code, i.e., whether or not the tester has access to the source code.

7. Security testing at the white box level is concerned with rigorous analysis of the source code of the software program as well as of its design.

8. It basically deals with finding errors in the security mechanism of the software system.

9. In rare cases this approach involves pattern matching and automating the whole testing process by implementing a static analyzer (a miniature example is sketched after this list).

10. One peculiar drawback has been discovered for this kind of testing: it may sometimes report a bug in some part of the software where no such bug actually exists (a false positive).

11. Still, security testing at the white box level using static analysis methods and techniques proves good for some software systems and applications.

12. Risk based testing calls for a lot of understanding of the whole software system.

13. After all, product security is essential to the reputation of the company.
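
Point 9 above mentions pattern matching with a static analyzer. The sketch below shows the idea in miniature, flagging calls to C functions with a history of security problems; the rule list is an illustrative assumption, and a crude textual match like this can produce exactly the false positives point 10 warns about.

```python
# Miniature static analyzer: flag source lines calling historically
# unsafe C functions. The rule list is illustrative, and the crude text
# match can produce false positives (see point 10 above).
import re

UNSAFE_CALLS = re.compile(r"\b(strcpy|strcat|gets|sprintf)\s*\(")

c_source = """
char buf[16];
gets(buf);               /* unbounded read  */
strcpy(buf, user_input); /* no length check */
"""

for lineno, line in enumerate(c_source.splitlines(), start=1):
    if UNSAFE_CALLS.search(line):
        print(f"line {lineno}: possibly unsafe call: {line.strip()}")
```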


Sunday, February 12, 2012

What is ISTQB (International software testing qualifications board) certification?

ISTQB is an important certification in the field of software engineering and information technology. ISTQB stands for the International Software Testing Qualifications Board.

SOME FACTS ABOUT ISTQB
1. ISTQB and ISEB are two similar organizations.
2. ISTQB is an organization that grants certification for qualification in the field of software testing.
3. ISTQB was formed in November 2002.
4. Though formed in Edinburgh, it is now headquartered in Belgium.
5. "ISTQB Certified Tester" is a program initiated by ISTQB.
6. The qualification scheme is international in scope.
7. A hierarchy of guidelines is maintained for the qualification examinations and accreditation.
8. A syllabus is also prescribed for achieving each qualification.
9. ISTQB has so far issued over 200,000 certifications, making it the world's top issuer of software testing qualifications.
10. ISTQB has around 47 member boards covering over 71 countries.

LEVELS OF ISTQB
Similar to ISEB, ISTQB offers 3 levels of qualification:

1. 1st level: ISTQB foundation level
2. 2nd level: ISTQB advanced level
This is further divided into 3 streams, namely test analyst, test manager and technical test analyst.
3. 3rd level: ISTQB expert level
This level deals with the continuous improvement of test processes, automation of test processes, management of test processes and security testing.

COURSE TRAINING
1. The training for the course is followed by an examination covering the whole syllabus.
2. On completing the exam successfully, the candidate is accredited with the "ISTQB certified tester" certification.
3. It is up to the candidate whether or not to follow the stipulated training course before the examination.
4. ISTQB aims at developing a qualification that is accepted worldwide.


Saturday, February 11, 2012

What is ISEB (Information systems examinations board) certification?

ISEB is an important certification in the field of software engineering and information technology. ISEB is the abbreviated form for the Information Systems Examinations Board.

SOME FACTS ABOUT ISEB
1. ISEB is a well-known part of BCS, The Chartered Institute for IT.
2. ISEB is known for conducting examinations in the concerned fields.
3. It is an examination conducting body in the field of information technology.
4. It was formed as a collaboration between 2 organizations, namely BCS and NCC.
5. There was a need for the development of a certificate in systems analysis and design.
6. It was required for the examination board of systems analysis.
7. Therefore, NCC and BCS came together to form a new board to meet this requirement, and it was named the Systems Analysis Examinations Board.
8. The year 1989 saw the creation of a new qualification for the field of project management.
9. Simultaneously, the process of expanding the qualifications portfolio began in 1989.
10. Because of this, the Systems Analysis Examinations Board was renamed the Information Systems Examinations Board, and hence ISEB was born.

WHAT DO THE ISEB QUALIFICATIONS COVER
The ISEB qualifications cover the following subjects from the field of information technology:
1. Software testing
2. Business analysis
3. Information services management
4. ITIL
5. Sustainable information technology
6. Project support
7. Project management
8. Information technology assets
9. Information technology infrastructure
10.Systems development
11.Green information technology
12.Information technology governance
13.Information technology information
14.Information technology security

LEVELS OF ISEB
There are 3 levels at which ISEB qualification is granted and they are:
1. 1st level: ISEB foundation level
This qualification introduces a particular discipline at an introductory level.
2. 2nd level: ISEB practitioner level
This qualification involves the application of practical methods within a specified discipline only.
3. 3rd level: ISEB higher level
This level covers a specific discipline in great depth and is meant only for managers and specialists.

ISEB is recognized in around 50 countries all over the world, including South Africa, Brazil, Japan, Australia and the United States of America. The ISEB qualification is granted on the basis of training as well as both computer-based and written examinations.


Wednesday, January 11, 2012

What are the different rules of thumb for writing good test cases?

Writing good and effective test cases requires great skill, since effective testing is, after all, achieved only through effective test cases!

Writing such test cases is a skill in itself; it can only be achieved through in-depth knowledge of the software system or application under test, and it also requires some experience.

Here I'm going to share some rules of thumb for writing effective test cases, a basic definition of a test case, and test case procedures.

What is a test case actually?
A typical test case is composed of components that describe an action or event, an input, and an expected outcome, in order to determine whether the software system or application is working as it is meant to.

Before writing a test case you should know the 4 levels or categories of test cases, to avoid duplication.
The levels are discussed below:

Level 1:
This level involves writing basic test cases from the available specifications, requirements, and documentation provided by the client.

Level 2:
This is the practical stage: it involves writing test cases based on the actual system flow and the functional routines of the software system or application.

Level 3:
This level involves grouping particular test cases together and writing a test procedure. A test procedure can contain a maximum of up to 10 test cases.

Level 4:
This level involves the automation of the project. This minimizes human interaction with the software system or application, so that the focus can stay on the currently updated functionality to be tested rather than on regression testing.

Following this whole pattern you can grow systematically from no testable items to an automated testing suite.
- The tester should know the objective of each and every test case.
- The basic objective of all the test cases is to validate the testing coverage of the software system or application.
- You need to strictly follow test case standards. Writing test cases this way reduces the chances of following an ad-hoc approach.

Given below is a basic test case format (a filled-in sketch follows the list):
- Test case id
- Units to be tested
- Assumptions
- Input /test data
- Execution steps
- Expected outcome
- Actual outcome
- Success or failure
- Observation
- Comments
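
As an illustration, one boundary-value test case recorded in that format could look like the following sketch; every field value here is hypothetical and would normally live in a spreadsheet row.

```python
# One hypothetical test case recorded in the format above.
test_case = {
    "test_case_id": "TC-042",
    "units_to_be_tested": "login form: password field",
    "assumptions": "user account 'demo' exists",
    "input_test_data": "password of 128 characters (upper boundary)",
    "execution_steps": [
        "open the login page",
        "enter user 'demo' and the 128-character password",
        "submit the form",
    ],
    "expected_outcome": "login rejected with a clear length-limit message",
    "actual_outcome": None,       # filled in during execution
    "success_or_failure": None,   # filled in during execution
    "observation": "",
    "comments": "",
}
```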

You also need to write a test case statement. Here's the basic format:

- Verify:
This is the first word of the test case statement.
- Using tool names, tag names, dialogs, etc.:
This identifies what is being tested.
- With conditions:
The conditions under which the verification is done.
- To result:
The expected outcome of the verification.

For any kind of testing:

- Cover all types of tests: functional test cases, negative-value test cases, and boundary-value test cases.
- Be careful while writing the test cases.
- Keep them simple and easy to understand.
- Don't write test case statements the length of an essay.
- Keep them brief and to the point.
- Follow the test case and test case statement formats stated above.
- Generally, spreadsheets are used to write test cases, which makes them more presentable and easy to understand.
- You can use tools like Test Director when you want to automate the test cases.
- Writing clear and concise test cases forms an important part of software quality assurance.
- Also make sure that a good number of test cases cover functional testing, which means that the primary focus is on how the feature works.


Tuesday, October 4, 2011

Concept of Project Scheduling - What are the root causes of late delivery of software?

After all the important elements of a project are defined, it is time to connect them: a network of all engineering tasks is created that will enable you to get the job done on time. Responsibility for each task is assigned to make sure that it is done, and the network is adapted as needed. The software project manager does this at the project level, and software engineers do it at an individual level.

Project scheduling is important because many tasks run in parallel in a complex system, and the result of each task can have a very important effect on work performed in other tasks. These inter-dependencies are very difficult to understand without project scheduling.

The basic reasons why software is delivered late are:
- An unrealistic deadline set by someone outside the software group.
- Changing customer requirements that are not reflected in schedule changes.
- Underestimation of the amount of effort and the number of resources required for the job.
- Predictable or unpredictable risks that were not considered.
- Technical difficulties that could not be foreseen.
- Human difficulties that could not be foreseen.
- Lack of communication or miscommunication among the project staff.
- A failure by project management to recognize that the project is falling behind schedule.

Estimation and scheduling techniques, when applied under the constraint of a defined deadline, give the best estimate; if this best estimate indicates that the deadline is unrealistic, the project manager should guard against undue pressure.

If management demands a deadline that is unrealistic, the following steps should be taken:
- Make a detailed estimate and evaluate the estimated effort and duration.
- Develop a software engineering strategy using an incremental process model.
- Explain to the customer the reasons why the deadline is unrealistic.
- Explain the incremental development strategy and offer it as an alternative.


Wednesday, March 9, 2011

How is data designed at architectural and component level?

Data Design at Architectural Level


Data design translates the data objects defined in the analysis model into data structures at the software component level and, when necessary, a database architecture at the application level.
Businesses small and large contain a lot of data: dozens of databases serving many applications. The aim is to extract useful information from this data environment, especially when the information desired is cross-functional.
Techniques like data mining are used to extract useful information from raw data. However, data mining is made difficult by some factors:
- The existence of multiple databases.
- Their different structures.
- The degree of detail contained within the databases.
An alternative solution is the concept of data warehousing, which adds an additional layer to the data architecture. A data warehouse encompasses all the data used by a business: it is a large, independent database that serves the set of applications required by the business, in a separate data environment.

Data Design at Component Level


It focuses on the representation of data structures that are directly accessed by one or more software components. The set of principles applicable to data design is (the last two are illustrated in the sketch below):
- The systematic analysis principles applied to function and behavior should also be applied to data.
- All data structures, and the operations to be performed on each, should be identified.
- A mechanism should be established for defining the content of each data object.
- Low-level data design decisions should be deferred until late in the design process.
- A library of data structures, and of the operations applied to them, should be developed.
- The representation of a data structure should be known only to those modules that directly use the data contained within the structure.
- The software design and programming language should support the specification and realization of abstract data types.
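
The last two principles, hiding the representation and supporting abstract data types, are easy to picture in code. The sketch below defines a small stack whose internal list is hidden behind its operations; the class and method names are illustrative.

```python
# Sketch of an abstract data type: the stack's representation (a Python
# list) is hidden; clients may use only the defined operations.
class Stack:
    def __init__(self):
        self._items = []   # private by convention; not part of the interface

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2 -- callers never touch the underlying list
```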


Tuesday, March 8, 2011

Software Architecture Design - why is it important?

The architecture is not the operational software; rather, it is a representation that enables a software engineer to analyze the effectiveness of the design in meeting its stated requirements, to consider architectural alternatives at a stage when making design changes is still relatively easy, and to reduce the risks associated with the construction of the software.

- Software architecture enables and shows communication between all parties interested in the development of a computer-based system.
- Early design decisions that have a profound impact on the software engineering work are highlighted through the architecture.
- Architecture constitutes a relatively small, intellectually graspable model of how the system is structured and how its components work together.

The architectural design model and the architectural patterns contained within it are transferable. Architectural styles and patterns can be applied to the design of other systems and represent a set of abstractions that enable software engineers to describe architecture in predictable ways.

Software architecture considers two levels of the design pyramid - data design and architectural design. The software architecture of a program or computing system is the structure or structures of the system, comprising the software components, the externally visible properties of those components, and the relationships among them.


Thursday, February 24, 2011

What are the different steps in conducting component level design?

The following steps represent a typical task set for component level design when it is applied to an object oriented system. If you are working in a non object oriented environment, the first three steps focus instead on the refinement of the data objects and processing functions identified as part of the analysis model.

STEP 1: Identify all design classes that correspond to the problem domain.
STEP 2: Identify all design classes that correspond to the infrastructure domain.
STEP 3: Elaborate all design classes that are not acquired as reusable components (a small sketch of such an elaboration follows this list).
In addition to all interfaces, attributes, and operations, design heuristics, i.e., cohesion and coupling, should be considered during elaboration.
- Specify message details when classes or components collaborate.
The structure of the messages passed between objects within the system is shown as component level design proceeds.
- Identify appropriate interfaces for each component.
An interface is an abstract class that provides a controlled connection between design classes, so interfaces should be identified appropriately.
- Elaborate attributes and define the data types and data structures required to implement them.
The programming language to be used in the project is typically a factor in defining the data structures and types used to describe attributes. When the component level design process starts, only the names of attributes are used; as the design proceeds, the UML attribute format is increasingly used.
- Describe the processing flow within each operation in detail.
There are two ways to do this: through a UML activity diagram or through programming-language-based pseudocode. Each software component is elaborated through a number of iterations, and in each iteration a step-wise refinement concept is used.
STEP 4: Describe persistent data sources and identify the classes required to manage them.
As the design elaboration proceeds, additional detail should be provided about the structure and organization of these data sources, which are initially specified as part of the architectural design.
STEP 5: Develop and elaborate behavioral representations for a class or component.
UML state diagrams are used to depict the externally observable behavior of the system as well as that of individual analysis classes. As part of component level design, it may sometimes be necessary to model the behavior of a design class. The behavior of an instantiated design class as the program executes is known as the dynamic behavior of the object; it is affected by the current state of the object as well as by external events.
STEP 6: Elaborate deployment diagrams to provide additional implementation detail.
While component level design is being done, the locations of individual components are generally not depicted, in order to keep deployment diagrams simple to read and comprehend.
STEP 7: Factor every component level design representation and always consider alternatives.
The first component level model that is created will not be as consistent, complete, and accurate as the nth iteration applied to the model. It is necessary to re-factor as the design work is conducted.
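
As a small illustration of steps 1-3, the sketch below elaborates one hypothetical problem-domain class behind an explicit interface (here an abstract base class); all class, method, and attribute names are assumptions made for illustration.

```python
# Sketch of an elaborated design class behind an explicit interface.
# All names here are hypothetical.
from abc import ABC, abstractmethod

class PrintableReport(ABC):
    """Interface: a controlled connection between design classes."""
    @abstractmethod
    def render(self) -> str: ...

class InvoiceReport(PrintableReport):
    """Problem-domain design class elaborated with attributes and operations."""
    def __init__(self, invoice_id: str, total: float):
        self.invoice_id = invoice_id   # attribute with an explicit data type
        self.total = total

    def render(self) -> str:
        # Processing flow of the operation, spelled out in code rather
        # than in an activity diagram.
        return f"Invoice {self.invoice_id}: {self.total:.2f}"

print(InvoiceReport("INV-7", 19.5).render())
```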


Friday, February 18, 2011

Component Level Design - Important views that describe what a component is

An Object Oriented View of Component


- From an object oriented viewpoint, a component is a set of collaborating classes.
- Each class within a component consists of the attributes and operations relevant to it.
- Interfaces enabling the classes to communicate with other design classes are defined.
- The designer accomplishes this starting from the analysis model, elaborating analysis classes and infrastructure classes.
- Analysis and design modeling are both iterative actions. Elaborating an original analysis class may require additional analysis steps, which are then followed by design steps to represent the elaborated design class.
- This elaboration activity is applied to every component.
- After this, elaboration is applied to each attribute, operation, and interface.
- Data structures are specified.
- Algorithms implementing the processing logic are designed.

The Conventional View


- A component is a functional element of a program, also called a module.
- It incorporates processing logic, internal data structures, and an interface that enables the component to be invoked and data to be passed to it.
- It resides within the software architecture.
- It serves one of three roles: control component, problem domain component, or infrastructure component.
- Conventional components are also derived from the analysis model.
- The data-flow-oriented element of the analysis model is the basis for the derivation.
- Each module is elaborated.
- The module interface is defined.
- Data structures are defined.
- The algorithm is designed using a stepwise refinement approach.
- Design elaboration continues until sufficient detail is provided.

Process Related View


- The above two approaches assume that the component is designed from scratch.
- Here the emphasis is on building systems that make use of existing software.
- As the software architecture is developed, components or design patterns are chosen from a catalog and used to populate the architecture.


Friday, December 17, 2010

What is Long Session Soak Testing?

When an application is used for long periods of time each day, the approach described above should be modified, because the soak test driver is not logins and transactions per day, but transactions per active user per day. This type of situation occurs in internal systems, such as ERP and CRM systems, where users log in and stay logged in for many hours, executing a number of business transactions during that time. A soak test for such a system should emulate multiple days of activity in a compacted time frame rather than just pump multiple days' worth of transactions through the system.

Long session soak tests should run with realistic user concurrency, but the focus should be on the number of transactions processed. VUGen scripts used in long session soak testing may need to be more sophisticated than short session scripts, as they must be capable of running a long series of business transactions over a prolonged period of time (a skeleton of such a session is sketched below).

The duration of most soak tests is often determined by the available time in the test lab. Many applications require extremely long soak tests: any application that must run uninterrupted for extended periods of time may need a soak test covering all of the activity for a period of time agreed to by the stakeholders. Most systems have a regular maintenance window, and the time between such windows is usually a key driver for determining the scope of a soak test.
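
Stripped to its essence, a long-session virtual-user script looks like the sketch below: one login, many business transactions with think time in between, then a logout hours later. This is plain Python pseudo-structure rather than VUGen code, and the function names, timings, and transaction mix are assumptions.

```python
# Skeleton of a long-session soak script: log in once, run business
# transactions for hours, then log out. Names, timings, and the
# transaction mix are illustrative assumptions.
import random
import time

def login(): print("login")
def create_order(): print("create order")
def run_report(): print("run report")
def logout(): print("logout")

SESSION_SECONDS = 8 * 60 * 60   # emulate a full working day per user
THINK_TIME = (5, 30)            # seconds of user think time

def long_session():
    login()
    end = time.time() + SESSION_SECONDS
    while time.time() < end:
        random.choice([create_order, run_report])()   # one business transaction
        time.sleep(random.uniform(*THINK_TIME))
    logout()
```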


Thursday, December 16, 2010

Overview of Soak Testing

Soak testing is running a system at high levels of load for prolonged periods of time. A soak test would normally execute several times more transactions in an entire day than would be expected in a busy day, to identify any performance problems that appear only after a large number of transactions have been executed. Also, a system may stop working after a certain number of transactions have been processed, due to memory leaks or other defects. Soak tests provide an opportunity to identify such defects, whereas load tests and stress tests may not find them due to their relatively short duration. A soak test should run for as long as possible, given the limitations of the testing situation; for example, weekends are often an opportune time for a soak test.

Some typical problems identified during soak tests are:
- Serious memory leaks that would eventually result in a memory crisis.
- Failure to close connections between tiers of a multi-tiered system under some circumstances, which could stall some or all modules of the system.
- Failure to close database cursors under some conditions, which would eventually result in the entire system stalling.
- Gradual degradation of the response time of some functions as internal data structures become less efficient during a long test.

Apart from monitoring response time, it is also important to measure CPU usage and available memory. If a server process needs to be available for the application to operate, it is often worthwhile to record its memory usage at the start and end of the soak test (a minimal sketch follows). It is also important to monitor the internal memory usage of facilities such as Java virtual machines, where applicable.
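
Recording those measurements can be as simple as the sketch below, which uses the third-party `psutil` library (an assumption; any monitoring facility would serve) to snapshot CPU and available memory before and after the soak run.

```python
# Sketch: snapshot CPU and memory around a soak run using the psutil
# library (an assumption; any monitoring tool would do).
import psutil

def snapshot(label: str) -> None:
    vm = psutil.virtual_memory()
    print(f"{label}: cpu={psutil.cpu_percent(interval=1)}% "
          f"available_mem={vm.available // 2**20} MiB")

snapshot("soak start")
# ... run the soak test here; in practice this runs for hours or days ...
snapshot("soak end")   # a large drop in available memory hints at a leak
```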


Saturday, December 4, 2010

What comprises Testware Development: Test Plan - Unit Test Plan

The test strategy identifies the multiple test levels that are going to be performed for the project. Activities at each level must be planned well in advance, and the plan has to be formally documented. The individual test levels are carried out based on these individual plans only.
The plans are to be prepared by experienced people only. In all test plans, the ETVX (Entry-Task-Validation-Exit) criteria are to be mentioned. Entry means the entry criteria for that phase. Task is the activity that is performed. Validation is the way in which progress, correctness, and compliance are verified for that phase. Exit states the completion criteria of that phase, once the validation is done.

ETVX is a modeling technique for developing both high-level and atomic-level process models. It is a task-based model where the details of each task are explicitly defined in a specification table against each element, i.e., Entry, Exit, Task, Feedback In, Feedback Out, and measures (a toy ETVX record is sketched below).
There are two types of cells: unit cells and implementation cells. Implementation cells are basically unit cells containing further tasks. A purpose is also stated, and the audience of the model may also be defined, e.g., management or the customer.
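
As a toy illustration, an ETVX specification for the unit-test phase might be captured as structured data like the sketch below; every entry is hypothetical.

```python
# Hypothetical ETVX specification for the unit-test phase.
etvx_unit_test_phase = {
    "entry": ["code compiles cleanly", "unit test plan approved"],
    "task": ["write unit test cases", "execute them", "log defects"],
    "validation": ["peer review of test cases", "coverage report checked"],
    "exit": ["all planned cases executed", "no open severity-1 defects"],
}
```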

Types of Test Plan


Unit Test Plan (UTP)
The unit test plan is the overall plan for carrying out the unit test activities. The lead tester prepares it, and it is distributed to the individual testers. It contains the following sections:

- What is to be tested?
The unit test plan must clearly specify the scope of unit testing. Normally, the basic input/output of the units along with their basic functionality will be tested. Here the input units will mostly be tested for format, alignment, accuracy, and totals.

- Sequence of testing
The sequence of test activities to be carried out in this phase is listed in this section. This includes whether to execute positive or negative test cases first, whether to execute test cases based on priority, whether to execute test cases based on test groups, etc.

- Basic functionality of units
The independent functionality of each unit is tested, excluding any communication between the unit and other units. The interface part is out of scope at this test level.

Apart from these, the following sections are also addressed:
- Unit testing tools
- Priority of program units
- Naming convention for test cases
- Status reporting mechanism
- Regression test approach
- ETVX criteria


Saturday, October 16, 2010

Validation phase - Integration Testing - Top Down Integration and Bottom Up Integration

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by the design. There are two methods of integration testing (a miniature stub and driver are sketched after the lists):
- Top-down integration approach
- Bottom-up integration approach

Top-down Integration Approach
It is an incremental approach to the construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
- The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to it.
- Depending upon the integration approach, selected subordinate stubs are replaced one at a time with actual components.
- Tests are conducted as each component is integrated.
- On completion of each set of tests, another stub is replaced with the real component.
- Regression testing may be conducted to ensure that new errors have not been introduced.

Bottom-Up Integration Approach
It begins construction and testing with atomic modules. Because components are integrated from the bottom up, the processing required for components subordinate to a given level is always available, and the need for stubs is eliminated.
- Low-level components are combined into clusters that perform a specific software sub-function.
- A driver is written to coordinate test case input and output.
- The cluster is tested.
- Drivers are removed and clusters are combined, moving upward in the program structure.
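
The sketch below shows the two supporting artifacts in miniature: a stub standing in for an unfinished subordinate module (top-down), and a driver feeding test input to a low-level component (bottom-up). All module and function names are illustrative.

```python
# Miniature stub and driver; all names are illustrative.

# Top-down: the main control module is tested first, with a stub
# substituted for a subordinate component not yet integrated.
def tax_service_stub(amount):
    return 0.0                        # canned answer instead of real logic

def checkout(amount, tax_service=tax_service_stub):
    return amount + tax_service(amount)

assert checkout(100.0) == 100.0       # main module exercised against the stub

# Bottom-up: a driver coordinates test input/output for a low-level
# component before any higher-level module exists to call it.
def real_tax_service(amount):
    return amount * 0.08

def driver():
    for amount, expected in [(100.0, 8.0), (0.0, 0.0)]:
        assert abs(real_tax_service(amount) - expected) < 1e-9
    print("cluster tests passed")

driver()
```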


Thursday, May 20, 2010

Verification (VER) Process Area in Capability Maturity Model (CMMi)

An Engineering Process Area at Maturity Level 3. The purpose of Verification (VER) is to ensure that selected work products meet their specified requirements.
Verification includes verification of the product and intermediate work products against all selected requirements, including customer, product, and product component requirements. Throughout the process areas, where we use the terms product and product component, their intended meanings also encompass services and their components.

Verification is inherently an incremental process because it occurs throughout the development of the product and work products, beginning with verification of the requirements, progressing through the verification of the evolving work products, and culminating in the verification of the completed product.

Specific Practices by Goal


SG 1 Prepare for Verification
Up-front preparation is necessary to ensure that verification provisions are embedded in product and product component requirements, designs, developmental plans, and schedules. Verification includes selection, inspection, testing, analysis, and demonstration of work products. Methods of verification include, but are not limited to, inspections, peer reviews, audits, walkthroughs, analyses, simulations, testing, and demonstrations.
- SP 1.1 Select Work Products for Verification.
The work products to be verified may include those associated with maintenance, training, and support services. The work product requirements for verification are included with the verification methods.
- SP 1.2 Establish the Verification Environment.
An environment must be established to enable verification to take place. The verification environment can be acquired, developed, reused, modified, or a combination of these, depending on the needs of the project. The type of environment required will depend on the work products selected for verification and the verification methods used.
- SP 1.3 Establish Verification Procedures and Criteria.
The verification procedures and criteria should be developed concurrently and iteratively with the product and product component designs. Verification criteria are defined to ensure that the work products meet their requirements.

SG 2 Perform Peer Reviews
Peer reviews involve a methodical examination of work products by the producer's peers to identify defects for removal and to recommend other changes that are needed. The peer review is an important and effective verification method implemented via inspections, structured walkthroughs, or a number of other collegial review methods.
- SP 2.1 Prepare for Peer Reviews.
Preparation activities for peer reviews typically include identifying the staff who will be invited to participate in the peer review of each work product; identifying the key reviewers who must participate; preparing and updating any materials to be used during the peer reviews, such as checklists and review criteria; and scheduling the peer reviews.
- SP 2.2 Conduct Peer Reviews.
One of the purposes of conducting a peer review is to find and remove defects early. Peer reviews are performed incrementally as work products are being developed. These reviews are structured and are not management reviews. Peer reviews may be performed on key work products of specification, design, test, and implementation activities and specific planning work products.
- SP 2.3 Analyze Peer Review Data.
Analyze data about preparation, conduct, and results of the peer reviews.

SG 3 Verify Selected Work Products
The verification methods, procedures, and criteria are used to verify the selected work products and any associated maintenance, training, and support services using the appropriate verification environment.
- SP 3.1 Perform Verification.
Verifying products and work products incrementally promotes early detection of problems and can result in the early removal of defects. The results of verification save considerable cost of fault isolation and rework associated with troubleshooting problems.
- SP 3.2 Analyze Verification Results.
Analyze the results of all verification activities. Actual results must be compared to established verification criteria to determine acceptability. The results of the analysis are recorded as evidence that verification was conducted.


Monday, May 17, 2010

Supplier Agreement Management (SAM) Process Area in CMMi

The purpose of Supplier Agreement Management (SAM) is to manage the acquisition of products from suppliers. It is a Project Management process area at Maturity Level 2.
The Supplier Agreement Management process area involves the following:
- Determining the type of acquisition that will be used for the products to be acquired.
- Selecting suppliers.
- Establishing and maintaining agreements with suppliers.
- Executing the supplier agreement.
- Monitoring selected supplier processes.
- Evaluating selected supplier work products.
- Accepting delivery of acquired products.
- Transitioning acquired products to the project.

Suppliers may take many forms depending on business needs, including in-house vendors (i.e., vendors that are in the same organization but are external to the project), fabrication capabilities and laboratories, and commercial vendors. A formal agreement is established to manage the relationship between the organization and the supplier. A formal agreement is any legal agreement between the organization (representing the project) and the supplier.

Specific Practices by Goal


SG 1 Establish Supplier Agreements
Agreements with the suppliers are established and maintained.
- SP 1.1 Determine Acquisition Type.
Determine the type of acquisition for each product or product component to be acquired. There are many different types of acquisition that can be used to acquire products and product components that will be used by the project.
- SP 1.2 Select Suppliers.
Select suppliers based on an evaluation of their ability to meet the specified requirements and established criteria. Criteria should be established to address factors that are important to the project. Examples of such factors include the geographical location of the supplier, the supplier's performance record on similar work, engineering capabilities, staff and facilities available to perform the work, and prior experience in similar applications.
- SP 1.3 Establish Supplier Agreements.
When integrated teams are formed, team membership should be negotiated with suppliers and incorporated into the agreement. The agreement should identify any integrated decision making, reporting requirements (business and technical), and trade studies requiring supplier involvement.

SG 2 Satisfy Supplier Agreements
Agreements with the suppliers are satisfied by both the project and the supplier.
- SP 2.1 Execute the Supplier Agreement.
Perform activities with the supplier as specified in the supplier agreement. Typical work products are supplier progress reports and performance measures, supplier review materials and reports, action items tracked to closure and documentation of product and document deliveries.
- SP 2.2 Monitor Selected Supplier Processes.
Select, monitor, and analyze processes used by the supplier. The selection must consider the impact of the supplier's processes on the project. On larger projects with significant subcontracts for development of critical components, monitoring of key processes is expected. For most vendor agreements where a product is not being developed or for smaller, less critical components, the selection process may determine that monitoring is not appropriate. Between these extremes, the overall risk should be considered in selecting processes to be monitored.
- SP 2.3 Evaluate Selected Supplier Work Products.
The scope of this specific practice is limited to suppliers providing the project with custom-made products, particularly those that present some risk to the program due to complexity or criticality. The intent of this specific practice is to evaluate selected work products produced by the supplier to help detect issues as early as possible that may affect the supplier's ability to satisfy the requirements of the agreement.
- SP 2.4 Accept the Acquired Product.
Ensure that the supplier agreement is satisfied before accepting the acquired product. Acceptance reviews and tests and configuration audits should be completed before accepting the product as defined in the supplier agreement.
- SP 2.5 Transition Products.
Transition the acquired products from the supplier to the project. Before the acquired product is transferred to the project for integration, appropriate planning and evaluation should occur to ensure a smooth transition.

