
Sunday, October 19, 2014

How is Agile Unified Process (AUP) different from Rational Unified Process (RUP)?

Agile Unified Process (AUP) uses Agile Modelling, an approach for describing modelling and documentation practices based on agile disciplines. Here we highlight the differences between AUP and the Rational Unified Process (RUP). Both processes are divided into disciplines (or workflows), which are carried out in iterations. AUP is derived from RUP, and can be seen as a simpler version of it. For example, RUP has three separate modelling-related disciplines, namely Modelling, Requirements, and Analysis and Design; AUP, being the simpler version, combines all of them into a single discipline. It is because of this that it is easy for RUP teams to migrate to AUP if they want to. The points below show how AUP remains flexible and merges with agile modelling practices.

> Active stakeholder participation: In RUP projects, stakeholders, including customers, users, managers and operators, are often involved as a part of the project disciplines. The team needs to assign modelling roles, such as requirements specifier or business process designer, to the participating stakeholders. The more actively the stakeholders participate, the less formal feedback the Agile Unified Process requires.

> Agile Modelling Standards: UML (Unified Modelling Language) diagrams play a significant part in AUP. To maintain agility, agile teams often blend standards and guidelines together. In an RUP project, on the other hand, the guidelines that teams must adopt for creating modelling artifacts are built into the process.

> Gentle application of the patterns: AUP teams have full freedom to choose which modelling patterns to use. In RUP, these patterns are defined by the product, depending on the modelling disciplines being followed. This practice has enhanced the performance of the Agile Unified Process by easing the way patterns are applied, although the concept is not as explicit as it should be.

> Application of the right artifacts: The advice AUP provides for creating various types of models is one of its strengths. Recent versions of RUP also provide plenty of advice on creating non-UML artifacts (UI flow diagrams, data models etc.).

> Collective ownership: This Agile Modelling concept is used for making enhancements in projects developed using AUP, but it assumes that the team culture supports open communication. Alongside this concept, AUP lays strong stress on configuration management issues, so the change management processes can sometimes be a hurdle in the path of development.

> Parallel creation of several models: This is an important concept of UP. The team is required to check the activity diagrams corresponding to each discipline and see whether they are being worked on in parallel. One issue with UP, however, is that its flow diagrams do not explain this well.

> Creation of the simple content: Simplicity is assumed by the developers. The team needs to adopt guidelines that call for simple models, and the customers must be happy with this. Many organizations, however, find it difficult to adopt this culture.

> Temporary models should be discarded: The AUP team is free to decide which models to discard and which models to keep. Travelling light also helps in maintaining simplicity.

> Public display of models: Models should be displayed publicly to facilitate open communication. This way all the artifacts will be available to all the stakeholders.


Saturday, October 18, 2014

What is Agile Unified Process (AUP)?

Rational Unified Process, when simplified, gives rise to AUP, or Agile Unified Process. Its developer, Scott Ambler, describes it as a simple and easy to understand methodology for developing business application software. The Agile Unified Process makes use of agile concepts and techniques but remains true to its origin, the Rational Unified Process. AUP employs various agile techniques for developing software:
> Test driven development (TDD)
> Agile modelling (AM)
> Agile change management
> Database refactoring

All these techniques help AUP deliver its full productivity. In 2011, AUP was reported to account for about 1 percent of agile methodology usage. In 2012 the Disciplined Agile Delivery (DAD) framework superseded AUP, and since then most people have stopped working on the Agile Unified Process. AUP differs from RUP in that it works with only 7 disciplines:
> Modelling: Involves understanding about how the business is organized around the software and the problem domain of the project, and then identifying a feasible solution for addressing the problem.
> Implementation: The model is transformed into executable code and basic testing, i.e., unit testing, is performed on this code.
> Testing: An objective evaluation is carried out to ensure that the artifact is of high quality. Testing is done to root out defects, to validate whether the system works as desired, and to verify that the requirements have been met.
> Deployment: The delivery plan for the system is laid out and executed so that the product reaches the customers.
> Configuration management: Managing access to the various artifacts of the system includes tracking the versions at regular intervals of time, and managing and controlling the changes in them.
> Project management: This includes directing the activities of the project onto the right track. These activities include risk management, assigning tasks to people, tracking their progress, and coordinating with people to ensure that the system is delivered on time and within budget.
> Environment: Involves providing continuous guidance, i.e., standards, guidelines and tools, to ensure the process stays on the right track.

The agile unified process follows certain philosophies as mentioned below:
> The team knows what it is doing: People don't find it convenient to read highly detailed documentation. Timely training and good guidance are accepted by all. AUP provides links to plenty of detail if you want it, but it does not force it on you.
> Simplicity: The documentation is kept as simple as possible without going into too much of detail.
> Agility: For maintaining agility, the AUP should conform to the principles and values mentioned in the agile manifesto and agile alliance.
> Emphasizing high-value activities: Only the activities that really affect the project are considered; the rest are not counted.
> Choice of tools: In Agile Unified Process any toolset can be used. However agile experts often recommend using simple tools appropriate for the project.
> Tailorability: The agile unified process can be tailored to the specific needs of your project.

There are two types of iterations in agile unified process as mentioned below:
> Development release iteration: for deploying the project to a demo area or to quality assurance.
> Production release iteration: for deploying the project to the production unit.

These two iterations are a result of refining RUP. RUP's modelling, requirements and analysis disciplines are encompassed by the disciplines of the agile unified process. Even though modelling constitutes an important part of the agile process, it is not the dominating factor.


Thursday, October 16, 2014

What is agile modeling (AM)? An explanation. Part 2

Read the first part of this post (Agile Modeling: An explanation - Part 1)

The modeling should be carried forward in small increments; that way it is easy to find bugs when an increment fails or a fault occurs. As an agile developer, it is your duty to continuously strive to improve the code. This is just another way of showing that your code works in reality, and is not mere theory. The stakeholders know what they want better than the developers do. Thus, by actively participating in the development process and providing constant feedback, they can help in building better software overall.
The principle of assuming simplicity means keeping the focus on the required aspects instead of drawing out a highly detailed sketch. It also means using simple notations for depicting the various model components, using simple tools, taking information from a single source, rejecting temporary models, and updating models only when required. Communication can be facilitated by displaying the models publicly, be it on a wall or a website, applying agile modeling standards, owning artifacts collectively and modeling in a team. Gentle application of the patterns enhances the results drastically.
Formalized contract models are required when you need to integrate your system with other legacy systems, such as a web application or a database. These models lay out a contract between you and the owners of those legacy systems. The principles, values and practices of AM have been developed by many experienced developers. With AMDD, or agile model driven development, we do sufficient high-level modeling in the initial stages of the project to establish its scope. In the later stages the modeling is carried out in iterations as a part of the development plan. You might then model storm the details and move straight to writing code. Agile practices can be applied to most projects: you do not need to be working on an agile project to benefit from them, nor do you need to put every principle, practice and value to use to harness the agile advantage. Instead, it is better to tailor these practices according to the specifics of your project.
Another way of harnessing agile benefits is to follow AMDD. MDD (model driven development) in its agile form is called agile model driven development. In AMDD before writing the source code we create extensive agile models. These models are strong enough for carrying forward all the development efforts. AMDD is a strategy that seeks to multiply the scale of agile modeling. The main stages in this process are:
> Envisioning: Consists of initial architecture envisioning and initial requirements envisioning. These activities are carried out during the inception period.
> Iteration modeling
> Model storming
> Reviews
> Implementation

It is during the envisioning phase that we define the project's scope and architecture. This is done using high-level requirements and architecture modeling. The purpose of this phase is to explore the requirements of the system as far as possible and build a strategy for the project. Writing detailed specifications up front is a great risk to take. For short term projects you may like to spare only a few hours on this matter. Agile modelers are advised to spend only the required time on this phase to avoid the problem of over-modeling. For exploring the important requirements you might require a usage model, which helps you explore how the product will be used by the users. For identifying fundamental business entity types, an initial domain model is used. Issues with the user interface and its usability can be explored using an initial UI model.


Tuesday, October 14, 2014

What is agile modeling (AM)? An explanation. Part 1

Agile modeling is one of the most trusted development methodologies when it comes to producing effective documentation and software systems. Described at a high level, it comprises the best practices, principles and values required for modeling a high quality software product (this description may seem a bit hyperbolic, but it tends to be true for the most part). These practices are lightweight in implementation, with a lot of flexibility. However, Agile Modeling is a set of principles, and on its own it is of little use. It has to be combined with fuller methodologies such as the Rational Unified Process, extreme programming, adaptive software development, Scrum and so on. This combination enables us to develop software that satisfies all of our requirements. Agile modeling is governed by the following values, some of which are extended from extreme programming:
- Communication: All the stakeholders should maintain an effective communication between them.
- Simplicity: Developers should strive to develop the simplest solution possible meeting all the requirements.
- Humility: As a programmer, you should have a sense of humility that you may not know everything and you should allow others to add value to your ideas.
- Feedback: There should be a mechanism for obtaining feedback early in every stage of development.
- Courage: You should have courage to make decisions and stay firm.

The principles on which the Agile Modeling is based are defined by the agile manifesto. Two of these principles are to assume simplicity and embrace changes. Assuming simplicity makes it easy to design software. You are able to cut out unnecessary secondary requirements and focus on the primary needs, thereby reducing the complexity. When you embrace the fact that there will be changes in the requirements, it adds flexibility to your project. As a result, you can develop more flexible projects that can adapt to the changes in requirements, and other changes, over time. 
The software evolves in increments, and it is this incremental behavior that maintains the agility of the system. The requirements are ever changing, so there should be a rapid feedback mechanism in place through which early feedback can be obtained. With this early feedback it becomes easy for you to ensure that your system is fulfilling all the needs. The modeling should be done with a purpose, i.e., if you don't understand the purpose of your project, its audience or its environment, you should avoid working on it until you are reasonably confident.
It is always wise to have a contingency plan, so it is good to have multiple models on standby: if your primary model fails, the standby models provide a backup. One thing worth noting is that agile models are not mere documentation; they are lightweight realizations of your system's purpose. Once the purpose is fulfilled, the models are discarded.
One belief of agile developers is that representation is less important than content; it is the content that matters, and there are a number of ways in which the same content can be represented. Focus should be maintained on quality work, because sloppy work is not valued anywhere. Adapting the principles of agile modeling to the needs of the environment is also important. Modeling in an agile manner requires practice. Agile modeling can be applied through various practices, and you have to pick the most appropriate ones for your project. However, a few fundamental practices are always important for the success of an agile model:
> Parallel creation of several models.
> Application of the right software artifacts depending upon the situation.
> Moving forward through continuous iteration.
One word of caution though! These models are just an abstract representation of the actual systems and therefore cannot be perfectly accurate.


Friday, May 11, 2012

Explain Agile Model Driven Development (AMDD) lifecycle?


“AMDD” is the abbreviated form of “agile model driven development”, and nowadays it is quite popular among developers and programmers in the field of software engineering and technology.
AMDD grew out of “MDD”, or “model driven development”, as its agile version: it makes use of agile models rather than the extensive models of pure model driven development.

The agile model driven development was formulated out of model driven development once it was realized that iterative development with model driven development is possible. And since it consists of iterations, it is categorized among the agile software development methodologies.

The agile models that drive the whole development procedure are good enough to carry the development efforts. Agile model driven development is one of the most sought-after methodologies for scaling agile software development beyond small teams.

Agile Model Driven Development Lifecycle


To understand agile model driven development, one needs to familiarize himself/herself with the life cycle of this development model. This article is focused upon the life cycle of agile model driven development only.

The life cycle of agile model driven development operates at quite a high level. So let us see what the various stages in the life cycle of agile model driven development are:

1. Envisioning: 
This stage of the life cycle comprises two sub-stages, namely the zeroth and the first iterations. These iterations usually come into play during the first few weeks of the development process. This stage is included in the life cycle with the purpose of identifying the scope of the system and what kind of architecture will be suitable for developing the project. For this, the following two sub-stages come into play:

(a)  Initial requirements envisioning or modeling: 
This stage may take up to several days for the identification of the high level requirements. Apart from identifying the requirements, the scope of the release is also determined at this stage. To carry out this stage, the developer may require some type of usage model in order to see how the software project will be used by the customers or users.

(b) Initial architecture modeling: 
This stage is all about setting a proper technical direction for the development of your software project.

2. Iteration Modeling: 
This stage involves planning what is to be done in the current iteration. The modeling techniques are often ignored by developers while planning objectives for the next iteration. The requirements in every agile model, as we know, are implemented in order of priority.
     
3. Model Storming: 
As suggested by the agile manifesto, a few members of the development team discuss a development issue by sketching it on a whiteboard or on paper. Sessions involving such activities are called model storming sessions. They are short, lasting at most half an hour.
   
4. Test driven development involving executable specifications: 
This stage covers the coding phase, using refactoring and test-first design (TFD). Agile modeling helps you address cross-entity issues, whereas with test driven development you can focus upon each single entity. Above all, through the technique of refactoring the design, it is ensured that the high quality of the software project is not hampered at all.


Sunday, May 6, 2012

Explain Test-Driven Development Cycle?


Test driven development, or TDD, is considered to be one of the most effective and efficient software development processes, and is often cited under the category of agile software development processes.
The term “test driven development” is self-explanatory: one can make out from the term itself that the development is driven by tests. The test driven development process is wholly based upon the repetition of several development cycles that are shorter than the usual development cycles.
This whole article is dedicated to the discussion regarding the test driven development cycle. 

What are the steps in TDD process?


Test driven development process involves the below mentioned steps:

1. The first step involves the creation of an automated test case that fails, and that defines a new function or some desired improvement in the code.
2. The second step involves the production of code which can pass the test.
3. The third and final step involves refactoring the new code in order to meet the prescribed standards.
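
The three steps above can be sketched in a few lines of Python; the slugify function and its behaviour are hypothetical, chosen only to illustrate the cycle:

```python
# Step 1 (red): write a failing test first. slugify() does not exist yet,
# so running test_slugify() at this point would fail with a NameError.
def test_slugify():
    assert slugify("Test Driven Development") == "test-driven-development"

# Step 2 (green): write just enough code to make the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 3 (refactor): clean the code up to meet the team's standards,
# re-running the test afterwards to confirm it still passes.
test_slugify()
print("test passed")
```

The point of the sketch is the ordering: the test exists and fails before any production code is written.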

This development strategy or process was introduced in 2003 by Kent Beck. The test-first programming concepts of extreme programming (XP) are considered to be related to the concepts of test driven development to some extent.
But nowadays test driven development has been observed to be an individual development methodology with its own rules and independent procedures. The test driven development process has proved to be successful in improving and debugging legacy code that was developed using older development techniques.

We all know that every development process has its own requirements; here, the requirements take the form of automated unit tests that define the code requirements before the associated code is written.

Test Driven Development Cycle


Now let us describe the test driven development cycle in detail and in sequence:

1. Adding a test: 
- In test driven development, for every feature to be added, a test is written first; it is expected to fail, since it is written before the actual implementation of the feature. 
- If the test does not fail, there are two possibilities: either the test is wrongly formulated, or the feature for which the test has been created already exists. 
- The test should be written with utmost understanding of the specifications and requirements which can be achieved from the use cases and user stories. 
- An already existing test can also be modified.
- Writing unit tests before implementing the feature helps the software developer focus upon the requirements before the actual code is written. 
- This feature of test driven development differentiates it from other processes, and has a subtle but important effect.

2. Executing all the tests and checking that the new one fails: 
- This step involves the validation of the test harnesses for their proper working.
- This is done to ensure that no test passes mistakenly without requiring new code. 
- The test itself is also tested, but in the negative: if the new test passes without any new code, it is worthless.

3. Production of code: 
- This step involves the production of code by virtue of which the test will pass.
- The code, even if it is not perfect, is accepted, since it can be improved later.

4. Execution of the automated tests: 
After all the tests have passed, the developer can be confident that the code meets all the tested requirements.

5. Refactorization of the code: 
This involves cleaning up the code; the tests can then be re-executed to ensure that the existing functionality has not been damaged.

6. Repetition: 
This involves repeating the whole development cycle in order to add and improve functionality.
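
Steps 3 to 6 of the cycle can be illustrated with a small Python sketch; the total_price function is a hypothetical example, not taken from any real codebase:

```python
# The test written first (steps 1-2): it defines the required behaviour.
def test_total_price():
    assert total_price([10.0, 20.0, 30.0]) == 60.0
    assert total_price([]) == 0.0

# Step 3: a first, imperfect version is accepted as long as the test passes.
def total_price(prices):
    total = 0.0
    for p in prices:
        total = total + p
    return total

test_total_price()  # step 4: all tests pass

# Step 5 (refactoring): the explicit loop is replaced by the idiomatic
# built-in sum(); the externally visible behaviour is unchanged.
def total_price(prices):
    return sum(prices, 0.0)

test_total_price()  # re-running the tests confirms nothing was damaged
print("all tests still pass")
```

Because the same test runs before and after the refactoring, the developer knows the cleanup did not break existing functionality, which is the whole point of step 5.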





Saturday, April 28, 2012

What is meant by production verification testing?


Production verification is an important part of the software testing life cycle, like the other software testing methodologies, but it is much less heard of! Therefore we have dedicated this article entirely to discussing what production verification testing is.

This software testing methodology is carried out after the user acceptance testing phase has completed successfully. Production verification testing is aimed at simulating the cutover of the whole production process as closely to the real thing as possible.

This software testing methodology has been designed for the verification of the below mentioned aspects:
  1. Business process flows
  2. Proper functioning of the data entry functions
  3. Proper running of any batch processes against the actual data values of the production process.

About Production Verification Testing


- Production verification testing can be thought of as an opportunity to conduct a full dress rehearsal of any changes in the business requirements. 
- Production verification is not to be confused with parallel testing, since the goals differ.
- The goal of production verification testing is to verify that the data is being processed properly by the software system or application, rather than to compare the data handling results of the new software system or application with those of the current one, as in parallel testing. 
- For production verification testing to commence, it is important that the documentation of the previous testing phases has been produced and that the issues and faults discovered then have been fixed and closed.
- If there is a final opportunity for determining whether or not the software system or application is ready for release, it is production verification testing. 
- Apart from simulating the actual production cutover, the real business activities are also simulated during the production verification testing phase. 
- Since it is a full rehearsal of the production phase and business activities, it should serve to identify any unexpected changes or anomalies present in the existing processes as a result of introducing the new software system or application currently under test. 
- The importance of this software testing technique cannot be overstated in the case of critical software applications.
- For production verification testing, the testers need to remove or uninstall the software system or application from the testing environment and reinstall it exactly as it will be installed in the production implementation.
- This allows a mock test of the whole production process, and such mock tests help a lot in verifying the interfaces and the existing business flows. 
- The batch processes continue to execute alongside those mock tests. 
- This is entirely different from parallel testing, in which the new and the old systems run beside each other.
- In parallel testing, mock testing is not an option for providing accurate results on data handling issues, since access to the source data or database is limited. 

Entry and Exit Criterion for Production Verification Testing


Here we list some of the entry and exit criteria of the production verification testing:
Entry criteria:
  1. User acceptance testing is complete and has been approved by all the involved parties.
  2. The documentation of the known defects is ready.
  3. The documentation of the migration package has been completed, reviewed and approved by all the parties and without fail by the production systems manager.
Exit Criteria:
  1. The processing of the migration package is complete.
  2. The installation testing has been performed and its documentation is ready and signed off.
  3. The documentation of the mock testing has been approved and reviewed.
  4. A record of the system changes has been prepared and approved.


Thursday, November 17, 2011

What is a review and what is the role it plays in the software development process?

Reviews tell us whether a piece of software is good or bad, making it easier for people to judge whether the program is the right choice for them. As far as a program is concerned, it is improved only with those new extensions which have been reviewed properly; otherwise, these extensions are marked as “unsupported”. If the extensions have been reviewed, they are marked as “stable and supported” and added to the official directory. Reviewing is an important method for making a software application or program more dependable and secure. Reviewing can be defined as a process of self regulation and evaluation by a team of professionals and qualified individuals in the particular field. Reviewing is needed to learn the pros and cons of the program, to improve its performance, and to maintain the standard of the software application. Reviewing also lends the program some credibility.
Reviews can be classified into many types depending upon the field of activity and the profession involved. In the field of computer science and development, peer review is commonly known as software review. Software reviewing is a process in which a software product, such as source code, a program or a document, is first examined and checked by its author and then by his/her colleagues (who are professionals in that field) and other qualified individuals, in order to evaluate the quality of the proposed software product. According to the capability maturity model, its purpose is to spot and correct flaws and errors in software applications or programs, thus preventing them from causing trouble during operation. Reviewing is a part of the software development process, used as a tool to identify flaws and correct them as soon as possible so as to avoid potential errors. Reviewing is necessary because it saves trouble by identifying problems early, during requirements review, that would otherwise be much harder to fix during software architecture work.
Software reviews are different from other kinds of reviews. Software review processes involve the following activities:
1. Buddy checking (an unstructured activity), and formal activities such as:
2. Technical peer reviews
3. Walkthroughs
4. Software inspections
Software reviewing is now considered a part of computer science and engineering. The more reviewers there are, the less difficult it becomes to solve a problem. But even with many reviewers and researchers, it is still difficult to find every single small flaw in a huge piece of work. Reviewing nonetheless always improves the work and identifies mistakes. Reviewers and the review process are in demand for basically three reasons.
Firstly, the workload of review cannot be handled directly by the team of developers; even if each individual contributed all of his/her time, it would not be enough.
Secondly, even though the reviewers work as a team to find mistakes, they contribute their own opinions about the program.
Thirdly, a reviewer cannot be considered an expert in all the fields concerning the program.
So having more reviewers to review a software artifact becomes necessary. The names and identities of the reviewers are kept secret to avoid unnecessary criticism and cronyism. Reviewing leads to great improvement in the quality of the software product and the readability of the program code, and to the identification of missing and incorrect references, statistical errors and scientific errors. Software reviewing is like a filter which delivers the program in its best form, to the benefit of the users.

Some great books explaining software reviews:
1. Best Kept Secrets of Peer Code Review: Modern Approach. Practical Advice
2. Software Engineering Reviews and Audits
3. Peer Reviews in Software: A Practical Guide


Tuesday, November 15, 2011

What is Performance testing for software applications?

Performance testing is required in every field; without some validation through performance testing, quality and success cannot be said to have been achieved. Likewise, in the field of computer science and engineering, performance testing of software applications is of great importance. Performance testing is done to find out the execution speed and time of a program, and to ensure its effectiveness. Software performance testing basically involves quantitative tests that can be performed in a computer lab, for example measuring the number of millions of instructions per second (MIPS) and the response time. It also involves tests for qualitative aspects such as scalability, interoperability and reliability.
Stress testing is carried out simultaneously with performance testing. So finally we can define software performance testing as a testing in software engineering that is done to find out the measure of some qualitative or quantitative aspect under a specific workload. Sometimes, it is also used to relate other quantitative and qualitative aspects such as usage of resources, scalability and reliability. Software performance testing is a concept of performance engineering which is very essential to build good software.
Performance testing consists of many sub-genres. A few are discussed below:
1. Stress testing: This testing is done to determine the limits of the capacity of the software application. Basically it checks the robustness of the application against loads above the expected maximum.
2. Load testing: This is the simplest of these tests. It checks the behavior of the application under different amounts of load, where the load can be the number of users working with the same application concurrently or the difficulty and length of the tasks. A time is set for task completion and the response times are recorded. This test can also be used to exercise databases and network servers.
3. Spike testing: This testing is carried out by suddenly increasing (spiking) the load and observing whether the application absorbs the spike or fails.
4. Endurance testing: As the name suggests, this test determines whether the application can sustain a specific load for an extended time. It also checks for memory leaks, which can eventually damage the application, and watches for performance degradation. Throughput is measured at the beginning, at the end and at several points during the test to see whether the application continues to behave properly under sustained use or crashes.
5. Isolation testing: This test repeats a failing test case in isolation to pinpoint the faulty part of the program or application.
6. Configuration testing: This testing exercises different configurations of the software application and checks how configuration changes affect the application and its performance.
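As an illustration, a load test of the kind described above can be sketched in a few lines of Python. The request handler, user counts and workload below are hypothetical stand-ins, not measurements of any real system:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(task_size: int) -> float:
    """Hypothetical system under test: does some work and returns
    the observed response time in seconds."""
    start = time.perf_counter()
    sum(i * i for i in range(task_size))  # simulated workload
    return time.perf_counter() - start

def load_test(concurrent_users: int, task_size: int) -> dict:
    """Fire one request per simulated user in parallel and
    summarize the recorded response times."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(handle_request, [task_size] * concurrent_users))
    return {
        "users": concurrent_users,
        "mean_s": statistics.mean(times),
        "max_s": max(times),
    }

light = load_test(concurrent_users=5, task_size=10_000)
heavy = load_test(concurrent_users=50, task_size=10_000)
```

Raising `concurrent_users` until the mean response time degrades or the process fails turns the same sketch into a stress test.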

Before carrying out performance testing, performance goals must be set, since performance testing helps in several ways:
1. It tells us whether the application meets its performance criteria.
2. It can compare the performance of two applications.
3. It can locate the faulty parts of a program.
There are some considerations that should be kept in mind while carrying out performance testing. They have been discussed below:
1. Server response time: This is the time taken by one part of the application to respond to a request generated by another part, as in an HTTP request/response.
2. Throughput: This can be defined as the highest number of concurrent users (or requests per unit of time) that the application is expected to handle properly.
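These two metrics can be measured directly. The sketch below times a hypothetical request handler and reports mean response time and throughput; the handler and request counts are illustrative only:

```python
import time

def process_request(n: int) -> int:
    """Hypothetical request handler standing in for a real server."""
    return sum(range(n))

def measure(num_requests: int, work: int) -> tuple[float, float]:
    """Return (mean response time in seconds, throughput in requests/second)."""
    start = time.perf_counter()
    for _ in range(num_requests):
        process_request(work)
    elapsed = time.perf_counter() - start
    return elapsed / num_requests, num_requests / elapsed

mean_response, throughput = measure(num_requests=200, work=5_000)
```

Note that for a serial workload like this one, mean response time and throughput are reciprocals; under concurrency they diverge, which is why both are tracked.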

Good book on performance testing on Amazon (link).


Thursday, November 10, 2011

What is reliability in terms of software engineering ?

Reliability is one of the most important aspects in any discussion of a software application or program. But what exactly does it mean? Reliability can be defined as the robustness of the structure of the software application; it is also a measure of the application's resilience. Measuring reliability shows how much risk a software application carries and how many failures can occur due to its existing internal defects. A software application is tested for reliability to learn its probability of failing or crashing, so that errors and defects can be reduced and corrected as far as possible. Some aspects of reliability that should be considered during testing are listed below:
- Application practices
- Structure and complexity of the algorithms
- Programming practices
- Coding practices
Software reliability can be defined as the probability of failure-free software operation for a specified period of time under defined and controlled conditions. Software reliability affects the reliability of the whole system. Many people confuse software reliability with hardware reliability; they differ in that software reliability reflects the perfection of the software and application design, whereas hardware reliability focuses on manufacturing perfection.
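Under the common assumption of a constant failure rate λ, this probability follows the exponential model R(t) = e^(−λt). The failure rate below is an assumed figure, chosen only to make the arithmetic concrete:

```python
import math

def reliability(failure_rate: float, hours: float) -> float:
    """R(t) = exp(-lambda * t): probability of failure-free
    operation for `hours` under a constant failure rate."""
    return math.exp(-failure_rate * hours)

lam = 0.002                    # assumed failures per hour
r_100 = reliability(lam, 100)  # ~0.819: an ~82% chance of surviving 100 hours
mtbf = 1 / lam                 # mean time between failures: 500 hours
```

The same model run against observed failure data is how the statistical measures mentioned below are usually obtained.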
The highly complex structure of software is the major source of software reliability problems. To date, no qualitative or quantitative methodology has been devised that measures software reliability without problems. Several approaches can be taken to improve the reliability of a software application, even though it is difficult to balance development effort, money and time against reliability improvement.
Software reliability, if measured properly, can be of great help to application developers. Software reliability interacts with other aspects of the application, such as the structure of the program and the number of tests the application has gone through. Through repeated reliability tests, statistical data can be gathered and a true measure of reliability obtained. From that data it is easy to see where improvement is needed to achieve greater reliability.
Different researchers, scientists and developers have their own ways of testing software for reliability, which suggests that reliability testing depends a great deal on who performs the test. This makes it something of an art, one that demands practice and creative, innovative, practical ideas. Since software reliability testing techniques and methodologies remain weak, one can never be fully confident that the software being tested is reliable under all conditions and environments.
Testing software for reliability is no easier than solving a hard real-life problem. It takes a lot of effort, time and money. Even if we use another system to verify a software application's reliability, we cannot be sure that the verifying system is itself completely correct. Faults are present in every software application; there are always bugs, and software cannot be tested against an infinite set of conditions.


Wednesday, November 9, 2011

Backward compatibility for software applications, what does this mean ?

Backward compatibility can be defined as the ability of a device to work with input generated by a device of an older technology. For example, if the latest version of a music player can still play music in old formats, the player is said to be backward (or downward) compatible. Communication protocols provide some of the best examples of backward compatibility. Forward compatibility is the opposite of backward compatibility.
In the context of programming languages, a language is said to be backward compatible if the version "a" compiler can read and execute source code written for the older version "a - 1" of the same compiler. A technology or IT product can be called backward compatible if it properly and fully replaces the older device of the same kind. Even a data format can be called backward compatible if a program or message written in that format remains valid under the improved version of the format. For example, the newest version of Microsoft Word should be able to read documents created by previous (possibly many years older) versions of Word.
Backward compatibility can be looked upon as a relationship between two devices with similar attributes. In layman's terms, a device is backward compatible if it exhibits all the functionality of the older device: the newer device is said to have inherited all the attributes of the older one, and without those attributes it cannot be called backward compatible. There are two types of backward compatibility, discussed below:
Binary compatibility (level-one compatibility): the ability of a program to work directly with the new version of the compiler of the language in which it was written, without recompilation or modification.
Source compatibility (level-two compatibility): the ability of a program to work with the new version of the compiler after recompilation, with the condition that the source code itself does not need to be changed.
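The data-format sense of backward compatibility can be sketched as a reader that accepts both the old and the new layouts of a record. The record format and field names here are entirely hypothetical:

```python
def read_record(record: dict) -> dict:
    """Backward-compatible reader for a hypothetical record format.
    Version 1 stored a single 'name' field; version 2 split it into
    'first' and 'last'. The reader accepts both and normalizes to v2."""
    version = record.get("version", 1)
    if version == 1:
        first, _, last = record["name"].partition(" ")
        return {"version": 2, "first": first, "last": last}
    if version == 2:
        return dict(record)
    raise ValueError(f"unsupported version {version}")

old_style = {"version": 1, "name": "Ada Lovelace"}
new_style = {"version": 2, "first": "Ada", "last": "Lovelace"}
```

Both inputs normalize to the same v2 record, which is what lets newer software keep consuming data produced by its predecessor.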

Many programs and devices use various technologies to achieve backward compatibility. "Emulation" is one such technology: the platform of the older software is simulated on the platform of the newer software, thus providing backward compatibility. Many examples of backward compatibility exist; a few are listed below:
- Blu-ray disc players can play CDs and DVDs.
- Newer video game consoles can often run games created for their predecessors: the Atari 7800 and the Atari 2600, the Game Boy Advance and the Game Boy, the Nintendo DS Lite and the Nintendo DS, the Nintendo 3DS and the Nintendo DS/DSi, the PlayStation 2 and PlayStation 3 with the original PlayStation, the PSP and the PSone, the PS Vita and the PSP, the Xbox 360 and the Xbox, the Wii and the Nintendo GameCube, etc.
- Microsoft Windows achieves backward compatibility with shims, where the newer version of Windows is tweaked specifically to work with already released products. For example, when Microsoft released Windows 7, it tested existing applications that worked with Windows XP and Vista and made changes inside Windows 7 to ensure they still worked.
- Intel-based versions of Mac OS X (10.4 through 10.6) can run applications built for PowerPC Macs through Rosetta, a binary translation application.
- Microsoft Word 2000 is backward compatible with Word 97, and Microsoft Office 2008 is backward compatible with Office 2007; the other Microsoft Office applications follow the same pattern.
- Even some cameras show backward compatibility, such as Nikon F-mount lenses on Nikon DSLRs and Canon EF-mount lenses on Canon bodies such as the APS-H models.


Monday, January 10, 2011

Rapid Application Development (RAD) - Characteristics and Phases

Rapid Application Development is a software development methodology that focuses on decreasing the time needed to build software by gathering requirements through workshops or focus groups; prototyping and early, iterative user testing of designs; reuse of software components; a rigidly paced schedule that defers design improvements to the next product version; and less formality in reviews and other team communication.

Characteristics of Rapid Application Development (RAD)


- It involves techniques like iterative development and software prototyping.
- A focused scope, where the business objectives are well defined and narrow, is well suited for RAD.
- Projects where the data already exists (completely or in part) and the work largely comprises analysis or reporting of that data are suitable for RAD.
- Projects where decisions can be made by a small number of people who are available and preferably co-located are suitable for RAD.
- A small project team (preferably six people or fewer) is suitable for RAD.
- In RAD, the technical architecture is defined and clear, and the key technology components are in place and tested.

Phases of Rapid Application Development


RAD follows a step-by-step process.
- Requirements Planning: Developers meet with the project coordinator or manager to derive specific objectives for the desired program. Development strategies and tools are also laid out for the specific project.
- RAD Design Workshop: Using the agreed tools and interfaces, developers start to create the programs based on the business need.
- Implementation Phase: Even though the software has gone through hundreds or even thousands of rounds of testing and critique, implementing it at a larger scale is a different matter, so new suggestions and bugs should be expected from different users.


Saturday, January 8, 2011

Software Development Methodology - Joint Application Development (JAD)

- Joint Application Development (JAD) is a process that was originally used to develop computer-based systems.
- Joint Application Development is a process that accelerates the design of information technology solutions.
- JAD uses customer involvement and group dynamics to accurately depict the user's view of the business need and to jointly develop a solution.
- JAD is thought to lead to shorter development times and greater client satisfaction because the client is involved throughout the development process.
- JAD centers around a workshop session that is structured and focused. Participants of these sessions would typically include a facilitator, end users, developers, observers, mediators and experts.
- In order to get agreement on the goals and scope of the project, a series of structured interviews are held.
- The sessions are very focused, are conducted in a dedicated environment, and quickly drive out the major requirements.

Concept of Joint Application Development


- Users who do the job have the best understanding of that job.
- The developers have the best understanding of the technology.
- The software development process and the business process work in the same way.
- The best software comes out when all groups work as equals on one team with a single goal.

Principles of JAD Process


- Define session objectives.
- Prepare for the session.
- Conduct the JAD session.
- Produce the documents.
JAD improves the final quality of the product by keeping the focus on the front end of the development cycle, thus reducing the errors that are likely to cause huge expenses later.

Advantages of Joint Application Development


- JAD decreases the time and costs associated with the requirements elicitation process.
- The experts get a chance to share their views, understand views of others, and develop the sense of project ownership.
- The techniques of JAD implementation are well known as it is the first accelerated design technique.
- Easy integration of CASE tools into JAD workshops improves session productivity and provides systems analysts with discussed, ready-to-use models.
- Enhances quality.
- Creates a design from the customer's perspective.

