Object-oriented testing begins "in the small" with a series of tests designed to exercise class operations and to check whether errors exist as one class collaborates with other classes. Use-based testing, along with fault-based testing, is applied to integrated classes. In the end, use cases are used to uncover errors at the software validation level.
Encapsulation can create a minor obstacle to testing, because it can make it difficult to report on the concrete and abstract state of an object. Multiple inheritance complicates testing by increasing the number of contexts for which testing is required. Black-box testing methods are as appropriate for object-oriented systems as they are for systems developed using conventional software engineering methods.
The strategy for fault-based testing is to hypothesize a set of plausible faults and then derive tests to prove each hypothesis. The effectiveness of these techniques depends on how well testers perceive a plausible fault.
Inheritance complicates the testing process: even though a base class has been thoroughly tested, every class derived from it still needs to be tested.
Scenario-based testing uncovers errors that occur when an actor interacts with the software. It concentrates on what the user does, not on what the product does, and is therefore good at uncovering interaction errors.
Surface structure is the externally observable structure of an object-oriented program; testing it is similar to black-box testing. Deep structure refers to the internal technical details of an object-oriented program: it exercises dependencies, behaviors, and communication mechanisms, and is similar to white-box testing.
Sunday, July 31, 2011
Saturday, July 30, 2011
Orthogonal array testing enables you to design test cases that provide maximum test coverage with a reasonable number of test cases. It can be applied to problems whose input domain is relatively small but still too large for exhaustive testing. Orthogonal array testing is especially suitable for finding errors associated with faulty logic within a software component.
Orthogonal arrays are two-dimensional arrays of numbers with an interesting property: choose any two columns in the array and you get an even distribution of all the pairwise combinations of their values.
The benefits of orthogonal array testing include:
- lower execution time.
- lower implementation time.
- higher code coverage.
- increased overall productivity.
- faster analysis of results.
Orthogonal array testing uses the following terminology:
- Runs are the number of rows in an array.
- Factors are the number of columns in an array.
- Levels are the maximum number of values that can be taken on by any single factor.
Orthogonal array testing (OAT) helps optimize testing: it produces an optimized test suite, detects many classes of faults, guarantees coverage of all pairwise combinations, is less prone to errors, is simple to generate, and is independent of platform and domain.
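As a sketch of the pairwise property described above (the array is the standard L4(2^3); the function name is illustrative), an orthogonal array with 4 runs, 3 factors, and 2 levels covers every pairwise combination in any two of its columns:

```python
from itertools import combinations, product

# L4(2^3): a standard orthogonal array with 4 runs (rows),
# 3 factors (columns), and 2 levels (values 0 and 1).
L4 = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

def covers_all_pairs(array, levels=2):
    """True if every pair of columns contains every pairwise value combination."""
    factors = len(array[0])
    for c1, c2 in combinations(range(factors), 2):
        seen = {(row[c1], row[c2]) for row in array}
        if seen != set(product(range(levels), repeat=2)):
            return False
    return True

print(covers_all_pairs(L4))  # True: 4 runs instead of the 2**3 = 8 needed exhaustively
```

Exhaustive testing of three two-valued factors would need 8 runs; the orthogonal array achieves pairwise coverage with 4.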
Thursday, July 28, 2011
Basis Path Testing uses the algorithmic flow of the program to design tests. Test cases that are written to test the basis set execute every statement in the program at least once.
- A flow graph should be drawn only when the logical structure of a component is complex. The flow graph allows you to trace program paths more readily. When compound conditions are encountered in a procedural design, the generation of a flow graph becomes slightly more complicated.
- An independent program path is a path that introduces at least one new set of processing statements or a new condition; it must move along at least one edge that has not been traversed before. To know how many paths to look for, cyclomatic complexity is a useful metric; it also predicts which modules are likely to be error-prone, so it can be used for test planning as well as test-case design. Cyclomatic complexity provides an upper bound on the number of test cases required to guarantee that every statement in the program is executed at least once.
- There are four steps to derive the basis set and its test cases:
A flow graph is drawn from the design or code.
The cyclomatic complexity of the flow graph is computed.
A basis set of linearly independent paths is determined.
Test cases are prepared that force the execution of each path in the basis set.
- A graph matrix is a square matrix whose size equals the number of nodes in the flow graph; it is a tabular representation of the graph. By adding a link weight to each matrix entry, the graph matrix becomes a powerful tool for evaluating program control structure during testing.
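The cyclomatic complexity step can be sketched in code. This is a minimal illustration using a hypothetical flow graph stored as an adjacency list (one graph-matrix row per node), with V(G) = E - N + 2 for a connected flow graph:

```python
def cyclomatic_complexity(graph):
    """V(G) = E - N + 2 for a connected flow graph given as an adjacency list."""
    nodes = len(graph)
    edges = sum(len(successors) for successors in graph.values())
    return edges - nodes + 2

# Hypothetical flow graph: nodes 1 and 4 are predicate (decision) nodes.
flow_graph = {
    1: [2, 3],   # if/else decision
    2: [4],
    3: [4],
    4: [5, 2],   # loop back to node 2
    5: [],       # exit node
}

print(cyclomatic_complexity(flow_graph))  # 3: at most 3 basis-set test cases needed
```

The same value falls out of the predicate-node count: two decision nodes give V(G) = P + 1 = 3.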
The goal of testing is to find errors. Software should exhibit a set of characteristics that make it possible to find errors with minimum effort. The characteristics of testable software include:
- How easily can the software be tested? i.e. testability.
- How efficiently can it be tested? i.e. operability.
- What you see is what you test. i.e. observability.
- How much can we control the software so that testing can be automated and optimized? i.e. controllability.
- Problems can be isolated and smarter retesting performed by controlling the scope of testing. i.e. decomposability.
- The simpler the program, the easier it is to test. i.e. simplicity.
- The fewer the changes, the fewer the disruptions to testing. i.e. stability.
- The more information we have, the smarter we will test. i.e. understandability.
A good test has the following characteristics:
- A good test has a high probability of finding an error.
- A good test is not redundant.
- A good test has the highest likelihood of uncovering a whole class of errors.
- A good test should be neither too simple nor too complex.
There are two ways to test an engineered product:
- Knowing the internal workings of the product, tests can be conducted to ensure that those internal workings conform to specifications and that all internal components are exercised properly (white-box testing).
- Knowing the specified function the product was designed to perform, tests can be conducted to demonstrate that each function is fully operational while checking for errors at the same time (black-box testing).
Wednesday, July 27, 2011
Debugging occurs as a consequence of successful testing: when a test case uncovers an error, debugging is the process of removing it. The debugging process starts with the execution of a test case; the results attained are assessed, and actual and expected values are compared. Debugging is the process of matching symptom with cause.
The debugging process has two possible outcomes:
- the cause is found and corrected.
- the cause is not found.
Debugging is difficult, and here are some reasons why:
- The symptom and the cause may be located remotely from one another.
- The symptom may disappear when some other error is corrected.
- The symptom may be caused by human error.
- The symptom may be caused by a timing problem.
- Non-errors can cause symptoms.
- Symptoms can be intermittent.
- The causes may be distributed across different tasks running on different processors.
A debugging strategy finds and corrects the cause of a software error by using one of three approaches:
- Brute force uses the philosophy of "let the computer find the error": memory dumps are taken, run-time traces are invoked, and the program is loaded with output statements.
- Backtracking starts at the site where the symptom is uncovered and traces the source code backward until the cause is found.
- In cause elimination, a cause hypothesis is devised and data are used to prove or disprove it; alternatively, a list of possible causes is developed and tests are conducted to eliminate each one.
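Cause elimination lends itself to a binary-search sketch: if the changes to a program are ordered and the fault was introduced by one of them, each test run eliminates half of the remaining candidates. The change history and the predicate below are hypothetical:

```python
def first_bad(changes, is_bad):
    """Binary search for the first change whose presence reproduces the symptom.
    Assumes every change before the cause tests good and every change after tests bad."""
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(changes[mid]):
            hi = mid          # cause is at mid or earlier
        else:
            lo = mid + 1      # cause is after mid
    return changes[lo]

history = list(range(1, 17))                  # hypothetical change numbers 1..16
print(first_bad(history, lambda c: c >= 11))  # 11, located in about log2(16) = 4 checks
```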
Validation also tries to uncover errors, but the focus is at the requirements level, i.e. on the things that will be immediately apparent to the end user. It begins at the end of integration testing, when all individual modules have been packaged and interface errors have been uncovered and corrected. In the validation testing phase, testing focuses on user-visible actions and user-recognizable output. The criterion for software entering the validation phase is that it functions in a manner that can be reasonably expected by the customer.
The software requirements specification contains a section called validation test criteria. A test plan lists the tests to be conducted, and a test procedure defines the test cases. The plan and procedure are designed to ensure that all functional requirements are satisfied, behavioral characteristics are achieved, performance requirements are attained, usability is met, and documentation is complete.
Configuration review ensures that all elements of software configuration are properly developed, cataloged and every necessary detail is given. It is also known as audit.
Alpha testing is done at the developer's site, not at the end user's usual workplace. Real use is simulated by having representative users carry out the tasks and operations that a typical user might perform.
Beta testing is done at end-user sites, with the developer not present. It is a live application of the software in an environment that the developer does not control. End users record all the problems they face and report them to the developer.
Tuesday, July 26, 2011
In testing object oriented software, the objective of testing remains the same. The nature of object oriented software changes both testing strategy and testing tactics.
UNIT TESTING IN OBJECT ORIENTED CONTEXT
In object-oriented software, an encapsulated class is the focus of unit testing. Encapsulation means that a class (and each class object) packages attributes along with the operations that manipulate them. Because a class contains many operations, the tactics for unit testing those operations must vary as well.
Class testing for object-oriented software is analogous to module testing for conventional software, but it is not advisable to test operations in isolation. Unit testing of conventional software focuses on the algorithmic detail of a module and the data that flow across the module interface, whereas unit testing of object-oriented software focuses on the operations encapsulated by the class.
INTEGRATION TESTING IN OBJECT ORIENTED CONTEXT
An important strategy for integration testing of object-oriented software is thread-based testing. Threads are sets of classes that respond to an input or event. Each thread is integrated and tested individually, and regression testing is applied to ensure that no side effects have been introduced.
Another strategy is use-based testing, which begins with classes that do not collaborate heavily with other classes and then moves outward to the classes that use them.
The role of drivers and stubs also changes during integration testing of object-oriented software. Drivers are used to test operations at the lowest level and to test whole groups of classes. Stubs are used where collaboration between classes is required but one or more of the collaborating classes is not yet fully implemented.
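A minimal sketch of a driver and a stub, assuming a hypothetical Order class whose payment-gateway collaborator is not yet implemented (all names here are illustrative):

```python
class PaymentGatewayStub:
    """Stub: stands in for a collaborator that is not yet fully implemented."""
    def charge(self, amount):
        return "approved"          # canned response, no real processing

class Order:
    def __init__(self, gateway):
        self.gateway = gateway     # collaborator injected, so a stub can replace it

    def checkout(self, amount):
        return self.gateway.charge(amount) == "approved"

def driver():
    """Driver: a throwaway harness that exercises Order's operations directly."""
    order = Order(PaymentGatewayStub())
    assert order.checkout(100) is True
    print("Order/gateway interface OK")

driver()
```

When the real gateway class is implemented, it replaces the stub and the same driver re-runs as a regression check.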
Unit testing is useful for testing modules individually. Problems arise when these modules are put together, which is called interfacing: data can be lost across an interface, one module can adversely affect the functionality of another, and global data structures can present problems.
Integration testing is a technique for constructing the software architecture while, at the same time, conducting tests to uncover the errors that arise from interfacing. There are two approaches to integration testing:
- Taking the big bang approach (all components are combined in advance and then tested) is a lazy strategy that is bound to fail: the cause of a failure is not easily tracked, and complexity increases. One should integrate in an incremental manner and test as one goes.
- The incremental approach can be started early and controlled easily, and it is usually better for large, complex systems. Two related modules are combined, tested, and checked for correct operation; then another module is added and checked, and so on.
Integration testing follows unit testing and precedes system testing. It helps verify functional, performance, and reliability requirements. Reasons for integration testing include:
- Fields are defined differently in different modules.
- Different perceptions and understanding of business requirements.
- Field content can have different assumptions.
- Some errors are left uncovered during unit testing.
Monday, July 25, 2011
Conventional components are designed using a set of constrained logical constructs that emphasize maintenance of the functional domain: sequence, condition, and repetition. Structured programming is the design technique that constrains logic flow to these three constructs. They reduce the complexity of a program and enhance its readability, testability, and maintainability.
GRAPHICAL DESIGN NOTATION
- The activity diagram is a descendant of the flowchart in which all elements of structured programming are represented.
- In a flowchart, each step in the process is represented by a different symbol and contains a short description of the process step.
- Structured programming constructs should make it easier to understand the design. If following them without violation introduces unnecessary complexity, it is better to violate them.
- Dogmatic use of the structured constructs can introduce inefficiency.
TABULAR DESIGN NOTATION
- Decision tables translate actions and conditions into tabular form.
- A decision table is used when a complex set of conditions and actions is encountered within a component.
- A decision table is divided into four quadrants: the upper-left quadrant lists all conditions, the lower-left quadrant lists all actions, and the right-hand quadrants form a matrix indicating condition combinations and the corresponding actions.
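The right-hand quadrants of such a table can be sketched directly as a mapping from condition combinations to action lists. A minimal sketch, with illustrative conditions and actions:

```python
# Right-hand quadrants of a decision table: each condition combination
# (upper quadrant) maps to the actions to perform (lower quadrant).
# Conditions: (registered_user, valid_payment); all names are illustrative.
RULES = {
    (True,  True):  ["ship order", "send invoice"],
    (True,  False): ["request new payment"],
    (False, True):  ["create account", "ship order", "send invoice"],
    (False, False): ["reject order"],
}

def decide(registered_user, valid_payment):
    """Look up the action list for one combination of conditions."""
    return RULES[(registered_user, valid_payment)]

print(decide(True, False))   # ['request new payment']
```

Because the table enumerates every condition combination, it doubles as a checklist of test cases for the component.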
There are many architectural alternatives, and they need to be assessed. There are different ways to assess a design:
Trade-off Analysis Method
It establishes an iterative evaluation process for software architectures and involves six steps:
- All the scenarios are collected.
- Requirements, constraints and environment description are elicited.
- Architectural styles or patterns chosen to address scenarios and requirements are described.
- Quality attributes are evaluated.
- The sensitivity of quality attributes to architectural attributes is identified.
- Critique candidate architectures using sensitivity analysis.
In architectural complexity assessment, the dependencies between the components within the architecture are assessed. Dependencies can be divided into:
- Sharing dependencies represent dependence relationships among consumers who use the same resource or among producers who produce for the same consumers.
- Flow dependencies represent dependence relationships between producers and consumers of resources.
- Constrained dependencies represent constraints on the relative flow of control among a set of activities.
Architectural Description Languages
An architectural description language provides semantics and a syntax for describing a software architecture: a set of tools and notation that allows a design to be represented in an unambiguous and understandable fashion.
Sunday, July 24, 2011
Pattern based design is a technique that reuses the design elements. Each architectural pattern, design pattern, or idiom is cataloged, thoroughly documented and carefully considered as it is assessed for inclusion in a specific application.
A description of design pattern may consider a set of design forces. Design forces are those characteristics of the problem and attributes of the solution that constrain the way in which the design is developed. These design forces also describe the environment and conditions that must exist to make design pattern applicable.
Types of design patterns available are:
- Architectural patterns
The overall structure of the software, the relationships among subsystems and software components, and the rules for relationships among the elements of the architecture are defined.
- Design patterns
It addresses a specific element of design to solve some design problem, relationships among components or mechanisms for good communication among components.
- Idioms
These are language-specific patterns that implement an algorithmic element of a component, a specific interface protocol, or a mechanism for communication among components.
There are two dimensions in which a design model can be depicted: the process dimension and the abstraction dimension. The process dimension indicates how the design model changes as design tasks are executed as part of the software process. The abstraction dimension represents the level of detail as each element of the analysis model is transformed into a design equivalent and refined iteratively.
Design model elements are not developed in a sequential fashion. The design model has four major elements:
Data design focuses on files or databases at the architectural level; at the component level, it considers the data structures required to implement local data objects.
The architectural model is derived from the application domain, the analysis model and available styles and patterns.
The interface design elements for software tells us how the information flows in and out of the system and how it is communicated among the components. There are three parts to interface design elements: the user interface, interfaces to systems external to application and interfaces to components within the application.
User interface elements include aesthetic elements, ergonomic elements and technical elements. External interface design requires information about entity to which information is sent or received. Internal interface design is closely related to component level design.
The component-level design elements describe the internal detail of each software component. Component-level design defines data structures for all local data objects, the algorithmic detail of the processing that occurs within a component, and an interface that allows access to all component behaviors.
Saturday, July 23, 2011
Analysis classes describe elements of the problem domain. As the design model evolves, design classes are defined in two ways:
- refinement of analysis classes
- creating new design classes
There are five different types of design classes:
- Business domain classes.
- User Interface classes.
- Process classes.
- Persistent classes represent data stores.
- System classes.
A set of attributes and operations is developed for each design class. Design classes present significantly more technical detail as a guide for implementation. The characteristics of a well-formed design class are:
- Complete encapsulation of attributes and methods that exist for the class.
- The methods associated with a design class should focus on accomplishing one service for the class; the class should not provide another way to accomplish the same thing.
- Collaboration between design classes is important, but it should be controlled: implementation, testing, and maintenance become difficult when coupling is high.
- Design classes should have high cohesion: a focused set of responsibilities, with attributes and methods single-mindedly applied to implementing those responsibilities.
Friday, July 22, 2011
While the other analysis modeling elements provide a static view of the software, behavioral modeling depicts its dynamic behavior. The behavioral model uses input from the scenario-based, flow-oriented, and class-based elements to represent the states of the analysis classes and of the system as a whole. To accomplish this, states are identified, the events that cause a class to make a transition from one state to another are defined, and the actions that occur as each transition is accomplished are identified. State diagrams and sequence diagrams are the UML notations used for behavioral modeling.
The behavioral model indicates how software responds to external events. The steps to be followed are:
- All use cases are evaluated.
- Events are identified and related to the appropriate classes.
An event occurs whenever the system and a user exchange information. An event is not the information that is exchanged, but the fact that information has been exchanged.
- A sequence is created for each use case.
- A state diagram is built.
There are two different characterizations of states in behavioral modeling: the state of each class as the system performs its function, and the state of the system as observed from the outside. The system has states that represent specific externally observable behavior, whereas a class has states that represent its behavior as the system performs its functions.
- Behavioral model is reviewed for accuracy and consistency.
Class-responsibility-collaborator (CRC) modeling is a means of identifying and organizing the classes relevant to system requirements. A CRC model is a collection of index cards, each with three parts:
Classes : collections of similar objects.
- Entity classes, or business classes, are obtained directly from the statement of the problem. The information contained in these classes is important to users, but the classes do not display themselves.
- Boundary classes create the interface that the user sees and interacts with as the software is used.
- Controller classes are designed to manage the creation or update of entity objects, the instantiation of boundary objects, communication between objects, and the validation of data.
Responsibility : something that a class knows or does. Some guidelines that can be applied for allocating responsibilities to classes are:
- System intelligence should be distributed across classes to best address the needs of the problem.
- Each responsibility should be stated as generally as possible.
- Information and the behavior related to it should reside within the same class.
- Information about one thing should be localized with a single class not distributed across multiple classes.
- Responsibilities should be shared among related classes when appropriate.
Collaborator : another class that a class interacts with to fulfill its responsibilities.
- A collaboration takes one of two forms : a request for information or a request to do something.
- If a class cannot fulfill all of its obligations itself, a collaboration is required.
- Collaboration identifies relationships between classes.
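A CRC card can be sketched as a small data structure; the class name, responsibilities, and collaborators below are illustrative, not from a specific system:

```python
from dataclasses import dataclass, field

@dataclass
class CRCCard:
    """One index card: class name, responsibilities, and collaborators."""
    name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

# Hypothetical entity class for an ordering system.
order = CRCCard(
    name="Order",
    responsibilities=["know line items", "compute total"],
    collaborators=["Customer", "PaymentController"],
)
print(order.collaborators)   # ['Customer', 'PaymentController']
```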
Thursday, July 21, 2011
In today's world, the field of software applications is increasingly competitive, with a number of products vying for the same set of customers. Margins are getting thinner, and even releasing a product with a killer feature only helps for so long before the competition copies the feature and incorporates it into their own product. Organizations need policies that give them a very good understanding of the needs of their customer base, and they must keep releasing and modifying features that have an impact on their customers. Such a policy is what is required to keep a software product ahead of its competitors, but it is not easy to sustain. Here are some steps:
- Have an active set of pre-release customers and engage with them early. The team building the software application needs a proper process to identify sets of users who represent its desired customers and to engage them on a regular basis. Identifying such a customer set can be difficult in itself, since many of these people will not understand the need to sign legal documents such as a Non-Disclosure Agreement (used to ensure that pre-release users do not release details of a feature before the software ships). Another problem is that it is an incredible challenge to find users who match the profile of your customer base. In many cases, you will find users who want to get involved in the program in order to learn about it and to try to influence which features you implement (for example, people who write help books based on your software, people who provide training for it, and people who use it for business). It is important to understand that many volunteers have their own motives, and when you get feedback from them on features, some of that feedback can be based on those motivations rather than being a true portrayal of the desirability of the feature.
- You need to recruit more pre-release users than you need, since only a percentage of them will actually be active; many of those who volunteer do not do the active testing that you desire, or test only once in a while. A good ratio: if you need around 100 active users, you should invite and accept around 250 people into the program. And from time to time, you will need to remind those who are not participating to take part.
Wednesday, July 20, 2011
Flow models focus on the flow of data objects as they are transformed by processing functions. There are also applications that are driven by events rather than data, that produce control information, and that process information with a strong concern for time and performance. In such situations, control flow modeling comes into the picture alongside data flow modeling.
There are some guidelines to select potential events for a control flow diagram:
- all sensors that are read by the software are listed.
- all interrupt conditions are listed.
- all switches actuated by the operator are listed.
- all data conditions are listed.
- all control items are reviewed.
- all states that describe the behavior of a system are listed.
- all transitions between states are defined.
- all possible omissions should be kept in focus.
The control specification contains a state diagram, which is a sequential specification of behavior, and a process activation table, which is a combinatorial specification of behavior. The control specification does not give any information about the inner workings of the processes activated as a result of this behavior.
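The state-diagram half of a control specification can be sketched as a transition table; the states and events below are illustrative, not from a specific system:

```python
# State diagram as a transition table: (state, event) -> next state.
TRANSITIONS = {
    ("idle",    "start"): "reading",
    ("reading", "done"):  "idle",
    ("reading", "error"): "alarm",
    ("alarm",   "reset"): "idle",
}

def step(state, event):
    """Apply one event; events with no defined transition leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "error", "reset"]:
    state = step(state, event)
print(state)   # idle: start -> reading, error -> alarm, reset -> idle
```

Note that the table says nothing about how the processes behind each state work, only which behavior is activated; that matches the limits of the control specification described above.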
Monday, July 18, 2011
A data model is a conceptual representation of data structures. The data structures consist of data objects, the relationships between data objects, and the rules that govern those relationships. Often, analysis modeling begins with data modeling.
The inputs to the data model come from the planning and analysis stages. There are two outputs of the data model: an entity-relationship diagram and a data document. The goal of the data model is to make sure that all data objects required by the database are completely and accurately represented.
The different concepts of data modeling are:
- Data objects are representations of any composite information that is processed by software. A data object can be an external entity, a thing, an occurrence, an event, a role, an organizational unit, a place, or a structure. The description of a data object includes the object and all of its attributes. A data object contains only data.
- Data attributes name a data object, describe its characteristics, and sometimes make reference to another object. One or more attributes must be identified as a key, which acts as an identifier.
- Relationships indicate the manner in which data objects are connected to one another.
- Cardinality of a relationship is the number of related occurrences possible for each of the two entities. It defines the maximum number of objects participating in the relationship; it does not indicate whether participation is optional.
- Modality of a relationship can be 0 or 1: it is 1 if an occurrence of the relationship is mandatory, 0 if it is optional.
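Cardinality and modality can be sketched for a hypothetical "customer places orders" relationship: the customer participates optionally (modality 0) and can be related to many orders (cardinality N). All names here are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    order_id: int                                # key attribute acting as the identifier

@dataclass
class Customer:
    customer_id: int
    orders: list = field(default_factory=list)   # 0..N orders: modality 0, cardinality N

alice = Customer(customer_id=1)                  # valid with no orders (optional participation)
alice.orders.append(Order(order_id=100))         # ...and with one or more
print(len(alice.orders))   # 1
```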
Sunday, July 17, 2011
The analysis model provides a description of the required informational, functional, and behavioral domains of a computer-based system. There is a set of generic elements common to most analysis models:
- Scenario-based elements are the first part of the analysis model to be developed, and they serve as input for the creation of the other modeling elements. Another approach to scenario-based modeling uses the functions or activities defined during requirements elicitation. These functions are part of the processing context, and in the analysis model they form a sequence of activities that describe the processing within a limited context. As stated in a previous post, these activities can be described at multiple levels of abstraction.
- In class-based elements, a usage scenario contains a set of objects that are categorized into classes: collections of things that have similar attributes and common behaviors.
- In behavioral elements, the behavior of the system affects the design and implementation approach that is chosen. A state is an externally observable mode of behavior. State diagrams represent the behavior of a system by depicting its states and the events that cause the system to change state.
- In flow-oriented elements, information flows through a computer-based system: the system accepts input, transforms it using functions, and produces output.
Quality function deployment (QFD) defines requirements in a way that maximizes customer satisfaction. It is used to translate customer requirements into engineering specifications and is a link between customers, design engineers, competitors, and manufacturing. QFD is important because it gives priority to the customer and carries those values into the engineering process. There is a proper time to use quality function deployment: the early phases of design. It can also be used as a planning tool, since it identifies the important areas.
Quality function deployment contains four phases:
- product planning
- product design
- process planning
- production planning
The benefits of QFD include a better understanding of customer needs, fewer design iterations, and enhanced teamwork. There are three types of requirements defined by QFD:
- Normal requirements mirror the objectives stated for a product during meetings with the customer; examples include graphical displays and specific system levels.
- Expected requirements are fundamental requirements that the customer does not explicitly state, but whose absence will cause significant dissatisfaction; these include ease of software installation, basic human/machine interaction, and so on.
- Exciting requirements do not fall within the customer's expectations but are very satisfying when present.
Saturday, July 16, 2011
Eliciting Requirements - what are basic guidelines for conducting collaborative requirements gathering meeting? PART 1
For collaborative requirements gathering, stakeholders and developers work together to identify the problem, propose a solution, and negotiate approaches. The basic guidelines include:
- software engineers and customers attend the meetings.
- rules for preparation and participation are established.
- an agenda is prepared that covers the important points and encourages a free flow of ideas.
- the meeting is controlled by a facilitator, who can be a customer, a developer, or an outsider.
- worksheets, wall stickers, chat rooms, etc. are used.
- the goal is to identify the problem, propose a solution, and negotiate approaches.
If a system or product will serve many users, one should be certain that requirements are elicited from a representative cross-section of users; if only one user defines all the requirements, acceptance risk is high. As the requirements gathering meeting begins, the first point of discussion is the need for and justification of the new product. Once agreement is established, each participant presents his or her lists for discussion in one particular area. After this, a combined list is prepared by the group, and the facilitator coordinates the discussion.
Avoid the impulse to shoot down a customer's idea as "too costly" or impractical. The idea is to negotiate a list that is acceptable to all.
Once the lists are completed, the team is divided into sub-teams, each of which works to develop mini-specifications. Additions, deletions and elaborations are made. After this, each attendee makes a list of validation criteria, and finally one or more participants are assigned the task of writing a complete draft specification.
System engineering helps translate the needs of the customer into a system model that makes use of elements such as software, hardware, people, databases, documentation and procedures.
Business process engineering is a system engineering approach that defines architectures enabling a business to use information more effectively and efficiently. It helps in creating an overall plan for implementing the computing architecture.
There are three different types of architectures defined and developed keeping business goals in mind. These are:
- Data architecture provides a framework for the information needs of the business. The data objects used by the business are its building blocks. After the data objects are defined, the relationships among them are identified. These data objects flow between business functions, are organized within databases, and are transformed to provide information.
- Application architecture is the system of programs that transforms the objects within the data architecture for some business purpose.
- Technology architecture provides the foundation for the data and application architectures. It includes the hardware and software used to support them.
The main aim of business process engineering is to derive a data architecture, an application architecture and a technology infrastructure that meet the needs, goals and objectives of the business.
Friday, July 15, 2011
In software engineering practice, construction includes coding and testing tasks and principles. Testing is the process of executing a program with the intent of finding errors. A good testing technique is one that gives the maximum probability of finding an error. The main objective of testers is to design tests that uncover different classes of errors with a minimum amount of time and effort.
Some testing principles include:
- From the customer's point of view, the most severe defects are those that cause the program to fail; all tests should therefore be traceable to customer requirements.
- Test planning should begin long before testing does. Test cases can be defined as soon as the design model is complete, before any code has been generated.
- Generally, individual components are tested first; the focus then shifts to integrated components and finally to the entire system.
- It is not possible to execute every combination of paths during testing, which means that exhaustive testing is impossible.
In a broader software design context, we begin "in the large" by focusing on software architecture and end "in the small" focusing on components. In testing, the focus is reversed.
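The principle that tests can be planned before any code exists can be sketched in miniature. In this hypothetical example (the `discount` function and its specification are invented for illustration), the test cases are derived directly from a design-level specification, and the implementation is written afterwards to satisfy them:

```python
import unittest

# Hypothetical specification from a design model: discount(total)
# takes 10% off when the total is 100 or more, and rejects negative
# totals. The tests below encode that specification and could be
# written before any product code exists.

class DiscountTests(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(discount(99), 99)

    def test_discount_at_threshold(self):
        self.assertEqual(discount(100), 90)

    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            discount(-1)

# Implementation written later, once the tests pin down the behavior.
def discount(total):
    if total < 0:
        raise ValueError("total cannot be negative")
    return total * 0.9 if total >= 100 else total

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Nothing here depends on the code existing first; the test class compiles and runs against whatever implementation is eventually supplied.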
Thursday, July 14, 2011
In software engineering practice, construction includes coding and testing tasks and principles. Coding involves the direct creation of source code, the automatic generation of source code, and the automatic generation of executable code using fourth generation programming languages.
Coding principles include preparation principles, coding principles, validation principles.
Preparation principles include:
- gain a good understanding of the problem to be solved.
- gain a good understanding of basic design principles and concepts.
- choose a programming language and environment suited to the problem.
- create unit tests that will be applied once the component being coded is completed.
Coding principles include:
- choose data structures that meet the needs of the design.
- constrain algorithms by following structured programming practice.
- understand the software architecture and create interfaces consistent with it.
- keep nested loops simple and easily testable.
- keep conditional logic as simple as possible.
- use meaningful variable names.
- write self-documenting code.
Validation principles include:
- a code walk-through is conducted.
- unit tests are performed.
- code re-factoring is done.
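The coding principles above — meaningful variable names, simple conditional logic, self-documenting code — can be illustrated with a small hypothetical refactoring (both function names and the computation are invented for the example):

```python
# Before: terse names and compressed conditional logic obscure the intent.
def f(a, b, c):
    return a * b - a * b * c / 100 if c else a * b

# After: meaningful names and one step per line make the code
# self-documenting; the docstring records the contract.
def line_item_total(unit_price, quantity, discount_percent=0):
    """Return the cost of a line item after an optional percentage discount."""
    subtotal = unit_price * quantity
    discount = subtotal * discount_percent / 100
    return subtotal - discount

# Both versions compute the same result, but only the second
# explains itself to the next reader.
assert f(10, 3, 0) == line_item_total(10, 3) == 30
assert f(10, 3, 50) == line_item_total(10, 3, 50) == 15
```

This is exactly the kind of change a code walk-through or a refactoring pass should produce: behavior is unchanged, readability improves.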
Design models provide a concrete specification for the construction of the software. They represent the characteristics of the software that help practitioners construct it effectively. Design modeling creates a representation, or model, of the thing that is to be built; in software systems, the design model provides different views of the system.
When a set of design principles is applied, the resulting design exhibits both internal and external quality factors, and a high quality design can be achieved. These principles include:
- The analysis model describes the information domain of the problem, the user-visible functions and the analysis classes with their methods. The design model translates information from the analysis model into an architecture; the design model should therefore be traceable to the analysis model.
- A good data design simplifies program flow and makes the design and implementation of software components easier. Data design is as important as the design of processing functions.
- Interfaces, both internal and external, should be designed with care; this makes integration much easier and increases efficiency.
- Design should always begin with a consideration of the architecture of the system that is to be built.
- The end user plays an important role in a software system, and the user interface is the visible reflection of the software. The user interface should be designed in terms of the end user.
- Each component's functionality should focus on one and only one function or sub-function.
- The coupling among components should be as low as is needed and reasonable.
- Design models should convey information to developers, testers and the people who will maintain the software; in other words, they should be easily understandable.
- Design should be iterative in nature; with each iteration, the design should move towards simplicity.
Wednesday, July 13, 2011
Analysis models represent customer requirements, while design models provide a concrete specification for the construction of the software. In analysis models, the software is depicted in three domains: the information domain, the functional domain and the behavioral domain.
Analysis modeling focuses on three attributes of software: the information to be processed, the function to be delivered and the behavior to be exhibited. A set of principles guides the analysis methods:
- The data that flows into and out of the system, together with the data stores, collectively forms the information domain. This information domain must be well understood and represented.
- The functions of the software effect control over internal and external elements. These functions must be well defined.
- The software is influenced by its external environment and behaves in a certain manner; this behavior must be well defined.
- Partitioning is a key strategy in analysis modeling. Divide the models depicting information, function and behavior in a manner that uncovers detail in a hierarchical way.
- The description of the problem from the end-user's perspective is the starting point of analysis modeling. The task should move from essential information toward implementation detail.
What is software engineering practice? What are the seven principles focusing on software engineering practice?
Practice is an array of concepts, principles, methods and tools that should be considered while software is planned and developed. The software process provides everyone involved in the creation of a computer-based system with a road map to reach the destination; practice provides the detail needed to drive along the road. It encompasses the concepts and principles that must be understood and followed to drive safely and rapidly, and the technical activities that produce all the work products defined by the chosen software process model. The elements of practice considered here are concepts, principles and methods.
According to David Hooker, there are seven core principles that focus on software engineering practice:
- All decisions should be made keeping in mind that one reason for the existence of a software system is to provide value to its users.
- To obtain a system that is more easily understood and maintained, the design should be kept as simple as possible. Software design is not a sloppy process, and simple does not mean 'quick and dirty'.
- For a software project to begin and end successfully, a clear vision is necessary; without one, the software system weakens.
- The software system should be specified, designed and implemented in the knowledge that someone else will have to understand what you are doing.
- Software systems must be ready to adapt to changes in specifications and hardware platforms. One should never design a software system into a corner.
- One of the hardest goals in developing a software system is making it reusable. Reuse can save time and effort, but forethought and planning are required to realize reuse from the outset of the development process.
- Clear, complete thought before action almost always produces better results.
Tuesday, July 12, 2011
Crystal is an agile software development approach applicable to projects with small teams. It is a lightweight, adaptable approach. Crystal is a human-powered methodology, meaning the focus is on enhancing the work of the people; it is ultra-light, meaning it reduces paperwork and overhead; and it is a stretch-to-fit methodology, meaning it grows just enough to reach the right size. Crystal focuses on people, not processes.
Crystal comprises a family of methodologies such as Crystal Clear, Crystal Yellow and Crystal Orange. It holds that every project requires policies, practices and priorities as characteristics. The methodology is based on the observation of various teams and focuses on the things that matter most and make the most difference.
Agile Modeling suggests that modeling is essential for all systems, but that the complexity, type and size of the model must be in accordance with the software that is to be built. Some principles of Agile Modeling are:
- Agile Modeling is a practice based methodology.
- Values, principles and practices combine together for modeling.
- Agile Modeling is not a prescriptive process.
- Agile Modeling is not a complete software process.
- Agile Modeling focuses on effective modeling and documentation.
- Developers using agile modeling should model with a purpose.
- Different models should be used to present different aspects, and only those models that provide value should be kept.
- Traveling light is an appropriate approach for all software engineering work. Build only those models that provide value - no more, no less.
- During modeling, content is more important than representation.
- Be aware of the models and tools that are used to create them.
- The modeling approach should be able to adapt to the needs of the agile team.
Agile modeling becomes difficult to implement in large teams, when modeling skills are lacking, or when team members are not co-located.
Adaptive Software Development (ASD) is a method for the creation and evolution of software systems, with a focus on their rapid development. ASD is a part of the rapid application development family.
The adaptive software development life cycle is mission focused, feature based, iterative, time-boxed, risk driven and change tolerant. In adaptive software development, the developers have a basic idea in mind and go to work, with the focus on the computer code.
In ASD there are no pre-planned steps. The software is built very quickly, and new versions can come out rapidly because the development cycle is very short. The structures of adaptive software development and rapid application development are similar; the difference lies in the fact that:
- adaptive software development has no real end point and does not fix a time when the project is finished, whereas rapid application development defines a definite end of project.
The Adaptive Software Development life cycle comprises three phases: speculation, collaboration and learning.
In speculation, the user requirements are understood and adaptive cycle planning is conducted, drawing on bug and user reports.
In collaboration, effective collaboration with the customer is essential. Communication, teamwork and individual creativity are all part of effective collaboration, and the individual developers combine their portions of the work.
In learning, cycles are based on short iterations of design, build and test. ASD teams learn through focus groups, formal technical reviews and postmortems.
Monday, July 11, 2011
Extreme Programming aims at improving the quality of the software. Its key elements include code reviews, unit testing, code simplicity, accommodating changes in customer requirements, and frequent communication between customers and programmers.
The basic principles of Extreme Programming are quality work, incremental change, embracing change, simplicity and rapid feedback. A few principles are less central: taught learning, small initial investment, open communication and local adaptation. Extreme Programming is an efficient, flexible, lightweight, predictable, low-risk way to develop software.
Extreme Programming (XP) promises to improve responsiveness to changes in business requirements, reduce project risks and improve productivity. Extreme Programming is a set of rules and practices that occur within the context of four framework activities: planning, design, coding and testing.
Planning includes creating user stories and assigning each story a value based on the overall business value of the feature; a cost is assigned after assessing each story. The XP team and the customers work together to decide how to group stories for the next release, and acceptance test criteria are established.
Extreme Programming design follows the "keep it simple" principle. The design provides implementation guidance for a story as it is written. Extreme Programming encourages the use of CRC (Class-Responsibility-Collaborator) cards to identify and organize the object-oriented classes relevant to the current software increment. It also encourages refactoring.
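A CRC card is simply an index card with three fields, which makes it easy to sketch as a data structure. The class and its entries below are invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class CRCCard:
    """One Class-Responsibility-Collaborator card from a design session."""
    class_name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

# A hypothetical card for an order-processing increment.
order_card = CRCCard(
    class_name="Order",
    responsibilities=["track line items", "compute order total"],
    collaborators=["LineItem", "Customer"],
)

print(order_card.class_name, order_card.collaborators)
```

In a design session, each card is reviewed in turn; a responsibility with no obvious collaborator, or a card whose responsibilities keep growing, signals a class that needs rethinking.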
Unit tests are developed before coding begins, so the developer can focus on what must be implemented to pass each test. The Extreme Programming development cycle involves pairs of programmers who program together, with development driven by the tests; the pairs also evolve the design of the system. As pair programmers complete their work, the code they have developed is integrated with the work of others, so that development is followed by integration.
The unit tests are organized into a universal testing suite, so integration and validation testing can occur on a daily basis. Acceptance tests are specified by the customer and focus on the overall features and functionality of the system that are visible to the customer.
When the elements of the waterfall model are applied in an iterative manner, the result is the incremental model. In this model, the product is designed, implemented, integrated and tested as a series of incremental builds. The model is most applicable where software requirements are well defined but basic software functionality is needed early.
In the incremental model, a series of releases called 'increments' is delivered, each providing progressively more functionality for the customer.
The first increment is known as the core product. The core product is used by the customer, and based on that use a plan is developed for the next increment and modifications are made to better meet the needs of the customer. The process is repeated for each increment.
ADVANTAGES OF INCREMENTAL MODEL IN SOFTWARE ENGINEERING
- It generates working software quickly and early during the software life cycle.
- It offers greater flexibility at lower cost.
- Testing and debugging becomes easier during a smaller iteration.
- Risks can be managed more easily because they can be identified during each iteration.
- Early increments can be implemented with fewer people.
DISADVANTAGES OF INCREMENTAL MODEL IN SOFTWARE ENGINEERING
- Each phase of an iteration is rigid, and the phases do not overlap each other.
- Problems may arise pertaining to system architecture because not all requirements are gathered up front for the entire software life cycle.