
Wednesday, April 10, 2013

How not to put all your eggs in one basket - ensure some amount of knowledge sharing

Every software team has some great engineers and some not so great ones. Typically, when you start a project, you take your list of features and break them down by how difficult or easy they are. Once that is done, you estimate these tasks and assign them to different members of the team. A somewhat hidden part of this exercise is that the estimate usually assumes a specific person will do the feature: for a complicated feature, a highly skilled engineer will take far less time than a mediocre one. As a result, you have already done some thinking about whom to assign to the more difficult features - typically you look at the total time available from your skilled people and assign accordingly.
So you start the work, with different parts of the project assigned to different people, and track the progress of these tasks. Here is an example of how some bad planning and a few assumptions landed my team in serious trouble. We had some critical features that were very important for the ongoing version of the product; marketing was depending on them to be the ones highlighted in reviews, and they were the ones pulled up in stakeholder status and review sessions. In fact, part of the discussion with stakeholders was about the assignment of these features, and once we had presented the assignment to the more skilled engineers, we got approval to proceed.
Now, how many of you have heard of Murphy's Law? If something can go wrong, it will. And it did. We were deep into the work when the engineer doing the most complicated feature - skilled, but a bit moody - ran into a sudden health issue with his daughter. He had to take two weeks of leave under immense emotional stress, and because we had neglected to do periodic sharing of his code and design with a buddy engineer, we landed in a tricky situation. Given that he was already stressed and was out taking care of his daughter, any kind of information sharing was very difficult. We had permission to drop a couple of features so that we could divert some engineers to this sudden issue, but the lack of shared knowledge was proving to be a big handicap.
We asked the replacement engineers to study the code intensively, but it still caused a six-day delay in delivery of the important features, and a higher number of bugs turned up because the replacement engineers did not have the same familiarity with the code. You do not want to know the reaction of the stakeholders to this delay, especially since these features were critical. The biggest problem was that we did have a process whereby engineers shared their current status with a buddy, but for scheduling reasons we had let the buddy program slide - and that was what hit us the most.


Monday, February 4, 2013

How are unit and integration testing done in EiffelStudio?


- EiffelStudio provides a complete, well-integrated development environment.
- This environment is well suited to unit testing and integration testing.
- EiffelStudio lets you create software systems and applications that are scalable, robust and fast.
- With EiffelStudio, you can model your application just the way you want.
- EiffelStudio has effective tools for capturing your thought process as well as the requirements.
- Once you are ready to carry the design forward, you can start building on the model you have already created.
- Both the creation and the implementation of the models can be done within EiffelStudio.
- There is no need to set one thing aside and start over.
- Further, you do not need any external tools to go back and modify the architecture.
- EiffelStudio provides all the tools.
- EiffelStudio provides round-trip engineering by default, in addition to productivity and test metrics tools.
- EiffelStudio provides integration testing through its AutoTest component.
- With AutoTest, software developers can build sophisticated unit and integration test suites that are nevertheless quite simple to set up.
- AutoTest lets the developer execute and test Eiffel class code at the feature level.
- At this level, the testing is considered unit testing.
- However, when the code is executed and tested across entire class systems, the testing is considered integration testing.
- Executing this code also exercises the contracts of the attributes and features involved.
- AutoTest thus serves as a means of testing the assumptions made in the design, as expressed by the contracts.
- Therefore, unit and integration tests do not need to re-test, through test oracles or assertions, things that are already specified as contracts in the class texts.

- AutoTest offers three methods for creating test cases for unit and integration testing (a conceptual sketch in C follows this list):
  1. Manually created tests: AutoTest generates a test class containing the test framework, so the user only needs to fill in the code for the test itself.
  2. Extracted tests: these are created from a failure of the application at run time. Whenever an unexpected failure occurs in the system under test, AutoTest works from the information provided by the debugger to produce a new test case that reproduces the calls and the state that caused the failure. After the failure is fixed, the extracted test is added to the suite as a guard against the problem recurring.
  3. Generated tests: the user supplies the classes for which tests are required, plus any additional information AutoTest needs to control test generation. The tool then calls the routines of the target classes with randomized argument values. Whenever a class invariant or postcondition is violated, a single new test is created that reproduces the failing call.
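The idea behind extracted and generated tests can be illustrated outside Eiffel as well. Below is a minimal conceptual sketch in C - not EiffelStudio's AutoTest, and the routine and its postcondition are invented for illustration - that calls a routine with randomized arguments, checks a contract-style postcondition, and reports the failing input so that it could be turned into a regression test.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical routine under test: integer square root. */
static int isqrt(int n)
{
    int r = 0;
    while ((r + 1) * (r + 1) <= n)
        r++;
    return r;
}

int main(void)
{
    srand(42);  /* fixed seed so any failure is reproducible */
    for (int i = 0; i < 1000; i++) {
        int n = rand() % 10000;   /* randomized argument */
        int r = isqrt(n);
        /* Postcondition (the "contract"): r*r <= n < (r+1)*(r+1). */
        if (!(r * r <= n && n < (r + 1) * (r + 1))) {
            /* In AutoTest terms, this failing call would be captured
               as a new test case and added to the suite. */
            printf("postcondition violated for n = %d (got r = %d)\n", n, r);
            return 1;
        }
    }
    printf("1000 randomized calls satisfied the postcondition\n");
    return 0;
}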


Thursday, January 31, 2013

Explain EiffelStudio? What technology is used by EiffelStudio?


EiffelStudio is the development environment for the Eiffel programming language. Both EiffelStudio and the Eiffel language were developed by Eiffel Software. The current release is version 7.1.

- EiffelStudio consists of a number of development tools, namely:
  1. Compiler
  2. Interpreter
  3. Debugger
  4. Browser
  5. Metrics tool
  6. Profiler
  7. Diagram tool
- All these tools are integrated under EiffelStudio's single user interface.
- This user interface is in turn built on several UI paradigms specific to EiffelStudio.
- In particular, effective browsing is supported through the 'pick-and-drop' mechanism.
- Eiffelstudio supports a number of platforms including the following:
  1. Windows
  2. Linux
  3. Mac OS
  4. VMS and
  5. Solaris
- This Eiffel Software product comes with a GPL license.
- However, a number of other licenses are also available.
- EiffelStudio falls under the category of open source development.
- Beta versions of the next release are made available to the public at regular intervals.
- The Eiffel community participates quite actively in the development of the product.
- A list of the open projects has even been made available on the Origo web site.
- This site is hosted at ETH Zurich.
- Along with the list, information about the discussion forums, the base source code for checkout, and so on has also been put up.
- The latest version, 7.1, was released in June 2012, and successive beta releases were made available soon after that.

Technology behind EiffelStudio

The compilation technology used by EiffelStudio, called Melting Ice, is unique to Eiffel Software and is their trademark.
- This technology integrates interpretation of the changed elements with proper compilation of the rest.
- It offers a very fast turnaround time.
- This also means that the time taken for recompilation depends on the size of the change to be made, not on the overall size of the program.
Such "melted" programs can be delivered as they are, but a finalization step is normally performed before the product is released.
- Finalization is a highly optimized compilation process; it takes longer, but the executable it generates is optimized.
- Interpretation in EiffelStudio is carried out through a bytecode-oriented virtual machine.
- The compiler generates either .NET CIL or C.

History of EiffelStudio

- The roots of EiffelStudio date back to the first implementation of Eiffel by Interactive Software Engineering Inc., the company that preceded Eiffel Software.
- That first implementation took place in 1986.
- The current technology used in EiffelStudio evolved from an earlier environment called EiffelBench, which saw its first use in 1990.
- EiffelBench was used with version 3 of the Eiffel programming language.
- In 2001, EiffelBench was renamed to what we know now as EiffelStudio.
- This was also the year when the environment gained compatibility with Windows and a number of other platforms.
- Originally, it was only available for Unix platforms.
- Since 2001, EiffelStudio has seen several major releases with new features:
  1. Version 5.0 (July 2001): the first proper version; integrated the EiffelCase tool with EiffelBench as its diagram tool.
  2. Version 5.1 (December 2001): support for .NET applications; also called Eiffel#.
  3. Version 5.2 (November 2002): extended debugging capabilities, an improved mechanism for interfacing with C and C++, EiffelBuild, round-tripping abilities, and more.



Wednesday, January 30, 2013

Give an overview of the Diagram Tool of EiffelStudio?


EiffelStudio is a rich combination of a number of development environment tools, such as:
  1. Compiler
  2. Interpreter
  3. Debugger
  4. Browser
  5. Metrics tool
  6. Profiler
  7. Diagram tool
In this article we shall discuss the last tool in the list, the diagram tool.

A graphical view of the software structures is provided by EiffelStudio's diagram tool. This tool can be used effectively in both:
  1. Forward engineering: here it serves as a design tool that uses graphical descriptions to produce the software.
  2. Reverse engineering: here it automatically produces graphical representations of program texts that already exist.
The diagram tool guarantees that changes made in either of these two processes are integrated with each other; this is called round-trip engineering.
It uses any of the following two graphical notations:
  1. BON (business object notation) and
  2. UML (unified modeling language)
By default the notation used is BON. The Eiffelstudio has the capability of displaying several views of the classes and their features. 
It provides various types of views such as:
1. Text view: It displays the full text of the program.
2. Contract view: It displays only the interface but with the contracts.
3. Flat view: It displays the inherited features as well.
4. Clients view: It displays all the classes (with their features) that depend on a given class or feature.
5. Inheritance history:  It shows how a feature is affected when it goes up or down the inheritance structure.
There are a number of other views also available. EiffelStudio relies heavily on a user interface paradigm based on "holes", "pebbles" and other development objects.

Software developers using Eiffelstudio have to deal with abstractions that represent the following:
  1. Classes
  2. Features
  3. Breakpoints
  4. Clusters
  5. Other development objects

- Developers deal with these abstractions in much the same way that object-oriented programs deal with objects at run time.
- In EiffelStudio, wherever a development object appears in the interface, it can be picked regardless of how it is visually represented, i.e., whatever name or symbol it is given.
- To pick a development object, you just have to right-click on it.
- The moment you click on it, the cursor changes to a pebble (a special symbol) that corresponds to the type of the object, such as:
  1. Bubble or ellipse for class
  2. Dot for breakpoint
  3. Cross for feature and so on.
- As the position of the cursor changes, a line appears displaying the original position and current position of the object. 
- The object can be dropped at any place where the pebble symbol matches the cursor.
- An object can also be dropped in a window that is compatible with it. 
- Multiple views can be combined to make it easier to browse through the complex structure of the system.
- This also makes it possible to follow transformations such as renaming, undefinition and redefinition that are applied to features when they are inherited.
- The diagram tool of EiffelStudio is a major aid in creating applications that are robust, scalable and fast.
- It helps you model your application just the way you want.
- It helps in capturing your requirements as well as your thought processes.
- The EiffelStudio tools make sure that you do not have to use separate tools to change the architecture of the system while designing.



Wednesday, June 6, 2012

Differentiate between Capability Maturity Model (CMM) and Scrum?


In 2002, when both the CMM (Capability Maturity Model) and agile software development were among the most in-demand software development methodologies, the CMM model and agile processes were widely considered to be pretty much the same.
Later in the same year, however, the software engineers Jain and Turner argued that these two development models differ considerably when examined in detail.
This article discusses the differences between the CMM model and agile methods, taking Scrum as the example. There is no single correct way to develop a software system or application; different situations call for different methodologies.

Scrum is a predefined development life cycle based entirely on agile principles. The CMM, on the other hand, comprises many practices that a team can adopt to improve its overall performance, paying particular attention to the areas that require change and to project management.
The CMM also focuses on the following three aspects:
1. Engineering skills
2. Organizational learning
3. Advanced project management

Differences between Capability Maturity Model and Scrum



Difference #1:
- In CMM, an understanding of the requirements is developed with the requirement providers, based on the definition of the requirements.
- In Scrum, the requirements listed in the product backlog are reviewed with the development team and the product owner.

Difference #2:
- In CMM, commitment to the requirements is demanded from everyone involved in the project.
- In Scrum practice, commitment is obtained during sprint planning and release planning.

Difference #3:
- In CMM practice, any changes to be made are carried out directly on the requirements.
- In Scrum practice, changes are recorded in the product backlog and worked on in later sprints.

Difference #4:
- CMM involves identifying inconsistencies between the work products and the requirements.
- In Scrum practice, inconsistencies are identified during the sprint planning and release planning sessions.

Difference #5:
- CMM develops a top-level work breakdown structure to estimate the project scope.
- In Scrum, the standard tasks combined with project-specific tasks define the scope of the project.

Difference #6:
- In CMM, the attributes of the work products and tasks are maintained to support the estimates.
- In Scrum, this is done with the help of story points.

Difference #7:
- In CMM, a budget and schedule are prepared for the whole project.
- In Scrum, the project is broken down into several sprints, and every sprint has a separate schedule and backlog.

Difference #8:
- The main estimates carried out in CMM are of the resources required to perform the development tasks.
- Scrum, on the other hand, maintains estimates of the sprint backlog, the release plan and the assignments.

Difference #9:
- In CMM, a plan of involvement is prepared separately for each stakeholder during the planning process.
- In Scrum, there are predefined core and ancillary roles, which saves a lot of time.

Difference #10:
- In CMM, the project plan is reconciled at the end of the day against the available resources.
- In Scrum, the sprint planning meetings and daily scrum meetings serve this purpose.

Difference #11:
- In CMM, project progress is tracked by monitoring the actual values of the project parameters against the prepared project plan.
- Scrum uses sprint burndown charts for the same purpose.






Tuesday, September 13, 2011

Some details about the types of pointers and arrays in C

Arrays are linear data structures whose elements form a sequence. The elements are stored in contiguous memory locations and are represented as such. Arrays are among the simplest data structures, and operations such as insertion, deletion, searching, traversal, sorting and merging are easy to perform on them. An array can store a finite number of data elements, all homogeneous in nature, i.e., of the same data type. The number of elements represents the length or size of the array. The first and last indices of the array are called the lower bound and upper bound respectively. In C, the lower bound is always 0 and the upper bound is (size - 1). Arrays come in one-dimensional, two-dimensional and multi-dimensional forms, of which one-dimensional arrays are the simplest. Every array has a unique name, and each element is identified by an index number or subscript. An array can be written in general notation as:
array_name[lower bound L .. upper bound U]

One-dimensional arrays are implemented in C by allocating a sequence of contiguous memory locations. The address of the first element of the array is called the "base address" of the array. Common searching algorithms for arrays in C are linear search and binary search.
Two-dimensional arrays are arrays in which each element is itself an array. For example, a two-dimensional array "A" can be represented as a table of M*N elements, where M is the number of rows and N is the number of columns. Like one-dimensional arrays, these are stored in contiguous memory, so they must be linearized before being stored.
Two-dimensional arrays can be implemented in two ways: row-major and column-major. Row-major order stores the array row by row, whereas column-major order stores it column by column, i.e., the first column of the array occupies the first stretch of memory. Algebraic operations that can be performed on two-dimensional arrays include addition, subtraction, multiplication, division and transposition.
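As a concrete illustration of row-major storage, the following small C sketch (the array size and values are invented for illustration) computes the offset of element a[i][j] by hand as i*N + j and compares it with C's own indexing.

#include <stdio.h>

#define M 3  /* rows */
#define N 4  /* columns */

int main(void)
{
    int a[M][N];

    /* Fill the array so each element records its own (row, column). */
    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = i * 10 + j;

    /* In row-major order, element (i, j) lives at offset i*N + j
       from the base address of the array. */
    int i = 2, j = 1;
    int *base = &a[0][0];
    int via_offset = *(base + i * N + j);

    printf("a[%d][%d] = %d, via row-major offset = %d\n",
           i, j, a[i][j], via_offset);
    return 0;
}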
A pointer is a variable that stores the address of another variable. Pointers are both the strongest and the weakest feature of the C programming language: they are a means to access and modify the address of a variable and its value, they make dynamic memory allocation possible, and they can improve the efficiency of program routines. The downside is that, used incorrectly, they can cause your program to crash or hang. In most expressions an array name itself behaves like a pointer to the first element of the array. There are several kinds of pointers, discussed one by one below. The first is a pointer to an array's first element, which stores the address of that element and can be declared as follows:
int a[10];
int *p = a;   /* p holds the address of the first element, &a[0] */
The second kind is the character (string) pointer. Strings, as we know, are nothing but arrays of characters, and a character pointer can be declared as:
char name[] = "abc";
char *cptr = name;   /* cptr points to the first character of the string */
The third kind is the "const pointer", which can be either a constant pointer or a pointer to a constant. The fourth kind is the "structure pointer", which points to a structure; it too is declared by placing "*" in front of the pointer name, with the structure type before it. The fifth and sixth kinds, the "object pointer" (which points to an object) and the "this" pointer (which is automatically passed as an implicit argument and points to the object invoked in the function call), belong to C++ rather than plain C.
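The following self-contained sketch ties together the kinds of pointers from the list above that are valid in plain C - a pointer to an array's first element, a character pointer, a pointer to const, and a structure pointer; the variable names are only illustrative.

#include <stdio.h>

struct point { int x, y; };

int main(void)
{
    /* Pointer to the first element of an array. */
    int a[5] = { 10, 20, 30, 40, 50 };
    int *p = a;                    /* same as &a[0] */

    /* Character pointer into a string (an array of char). */
    char name[] = "abc";
    char *cptr = name;

    /* Pointer to const: the pointee cannot be modified through it. */
    const int *cp = &a[1];

    /* Structure pointer, with member access via ->. */
    struct point pt = { 3, 4 };
    struct point *pp = &pt;

    printf("*p = %d, cptr = %s, *cp = %d, pp->x = %d\n",
           *p, cptr, *cp, pp->x);
    return 0;
}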


Tuesday, January 26, 2010

Enabling a product to work across different locales - what is the need ?

Nowadays, in order to have a successful software product, there is a dire need to ensure that you release the product in different languages. You might argue that your main market is in the United States, and there are enough English speakers the world over to make your product successful. However, consider the following factors -
- Do a quick survey to determine potential users for your products in different countries
- Look at your competitors and see how many different countries they are sold in
- Consider the problem of negotiating deals with hardware vendors to include your software in their hardware as part of OEM deals (they would want the ability to include the same software in deals across different countries)
- The incremental benefit of selling in multiple languages is more than the cost involved
- You can seriously harm your image if you don't have a global presence. With an increasing percentage of people being bilingual or from another country of origin (such as the growing Hispanic population in the US, and the movement of people within the EU), it gets difficult to present yourself as a serious software product if you don't have language versions.

Now, how do you actually go ahead with ensuring that your product can be localized easily and is properly available in different languages?
Well, you do something called software internationalization: a process that ensures that your software application can work in different languages and regions without changes being needed at the time of use. You enable the software at design time to work with different languages, typically by adding something called language packs, which allow the same software to pick up the relevant language resources and, in many cases, to change languages easily during use. The engineering work needed to enable this is considerably more complicated, and will be detailed over the next few posts.
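As a very small sketch of the "language pack" idea (the keys, locales and strings below are invented for illustration), UI text can be looked up by key in a per-locale table instead of being hard-coded, so switching languages means switching tables:

#include <stdio.h>
#include <string.h>

/* One entry of a hypothetical language pack: key -> translated text. */
struct message { const char *key; const char *text; };

static const struct message en_us[] = {
    { "greeting", "Hello" },
    { "farewell", "Goodbye" },
    { NULL, NULL }
};

static const struct message es_es[] = {
    { "greeting", "Hola" },
    { "farewell", "Adios" },
    { NULL, NULL }
};

/* Look a key up in the active language pack, falling back to the key. */
static const char *translate(const struct message *pack, const char *key)
{
    for (; pack->key != NULL; pack++)
        if (strcmp(pack->key, key) == 0)
            return pack->text;
    return key;
}

int main(void)
{
    const struct message *active = es_es;  /* chosen at run time */
    printf("%s\n", translate(active, "greeting"));
    return 0;
}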


Sunday, January 17, 2010

In a product development cycle, how to get engineering and product management on the same page

As mentioned in the previous post, one of the biggest problems in a product development cycle is ensuring that the engineering team and product management are in agreement about the resource commitment. Product Management talks about getting new features in place, while engineering has to contend with dedicating resources to infrastructural and legacy tasks.
Some of the tasks that an engineering team has to do that are not related to new feature work:
1. Test features from earlier versions and fix issues in these
2. Incorporate new components - once you are building a product, there are many features that can be built using common components such as disc-burning engines or installer technologies (MSI, InstallShield, Visual Studio installers, etc.); over a period of time, such components need to be upgraded, and that requires more testing
3. Spend dedicated time improving quality. Companies with large products, such as Microsoft, Apple and Adobe, defer a number of bugs in their products. Some of these bugs are deferred because they are not critical and the product needs to ship. Over time, however, the overall impact of these deferred bugs can become high, and the team may need to spend some dedicated time fixing them.

Now, what needs to be done in such cases, given that the engineering team and the product managers need to reach an agreement?
First, before starting a new cycle, the team needs to spend around a week (with multiple meetings) working through everything that is planned for the next cycle, including what the focus of the release should be. A good way to sort through the legacy features is to emphasize that legacy features are non-negotiable and need to be tested, since they impact customers directly. In a number of cases the Product Management team simply has not thought through the need to test legacy features, and once this discussion happens the issue gets resolved.
Secondly, when the discussions about the features for the cycle starts, the team will need to work through setting aside dedicated time for infrastructural items, and the engineering team will need to push a bit hard on this issue; and in most reasonable cases, there will be an agreement between the engineering and the product management team as long as the time needed for this infrastructural work does not take too long.
In all such cases, focusing on a mix of the needed technical architecture work and new features should help to resolve such issues.


Getting the priorities right in feature development - what are problems in terms of balance between new features and other work ?

One of the biggest problems that a product development team runs into during product development is how to prioritize features. For very small products, where the team size is low (less than 5-10 people), there may not be a separate Product Manager position, and defining what the product should be and what features it should contain is fairly easy, since effectively the same person runs both the engineering and the product management activities. Leaving such cases aside, the normal case is a product with a separate Product Manager, who works with the engineering team rather than for it, so that there is a clear separation between engineering and the people who drive features.
Now, as I mentioned in the beginning, the problem is in terms of defining the features that need to be added to the product (this is true whether the product is a new product, or whether the product is a newer version of an existing product). A classic Product Manager would want to get new features added that are:
1. Meant to appeal to the press and to technical reviewers who write about the new product and help form public opinion about it. If the product is in a competitive area, a good series of reviews is critical
2. Meant to ensure that there is no major discrepancy between the product and competing products and, more importantly, that the product has all the latest features that competitors offer
3. Meant to ensure that existing users get some great new features that will encourage them to upgrade (for existing products, getting existing users to upgrade is a huge chunk of the business).
So what is the problem with all this? Consider a team that is working with an older version of Visual Studio or some other tool, and needs to upgrade. Upgrading the entire project's code can take some time and effort, and this time and effort comes out of the same budget used for building new features. In addition, time needs to be spent testing existing areas of the product that are not being modified; since they form part of the released product, they must be tested to ensure that their functionality is not broken.
The typical problem that arises from these conflicting needs is that, with a given set of resources, not everything that the Product Manager requires will be built. The amount of difference between what the Product Manager wants and what the Product Manager will get accounts for the tension in the team, and can lead to tense discussions (except when you have a seasoned Product Manager who knows the situation, and even while pushing for new features, accepts that a certain amount of time will be utilized for other work).
In the next article on this subject, I will talk about what can be done to make this process much smoother.


Tuesday, September 15, 2009

Overview of Reverse Engineering

Reverse engineering is the attempt to recapture the top level specification by analyzing the product - I call it an "attempt", because it is not possible in practice, or even in theory, to recover everything in the original specification purely by studying the product.

Reverse engineering is difficult and time consuming, but it is getting easier all the time thanks to IT, for two reasons:
- Firstly, as engineering techniques themselves become more computerised, more of the design is due to the computer. Thus, recognisable blocks of code, or groups of circuit elements on a substrate, often occur in many different designs produced by the same computer program. These are easier to recognise and interpret than a customised product would be.
- Secondly, artificial intelligence techniques for pattern recognition, and for parsing and interpretation, have advanced to the point where these and other structures within a product can be recognized automatically.

Reverse engineering generally consists of the following stages:
1. Analysis of the product
2. Generation of an intermediate level product description
3. Human analysis of the product description to produce a specification
4. Generation of a new product using the specification.
There is thus a chain leading from the underlying design specification (and any intermediate-level design documents behind the product), through the product itself, through the reverse-engineered product description and the reverse-engineered specification, and into the new product.

Reasons for reverse engineering:

- Interoperability.
- Lost documentation: Reverse engineering often is done because the documentation of a particular device has been lost (or was never written), and the person who built it is no longer available. Integrated circuits often seem to have been designed on obsolete, proprietary systems, which means that the only way to incorporate the functionality into new technology is to reverse-engineer the existing chip and then re-design it.
- Product analysis : To examine how a product works, what components it consists of, estimate costs, and identify potential patent infringement.
- Digital update/correction : To update the digital version (e.g. CAD model) of an object to match an "as-built" condition.
- Security auditing.
- Military or commercial espionage : Learning about an enemy's or competitor's latest research by stealing or capturing a prototype and dismantling it.
- Removal of copy protection, circumvention of access restrictions.
- Creation of unlicensed/unapproved duplicates.
- Academic/learning purposes.
- Curiosity.
- Competitive technical intelligence.
- Learning.


Thursday, May 14, 2009

Design Guidelines and Design Principles

Continuing on the earlier post about design processes in software engineering:

Design Guidelines:

The criteria for evaluating the quality of a design are as follows:
- It should have a good architectural structure.
- It should be modular in nature.
- It should lead to interfaces that reduce the complexity of connections between components and with the external environment.
- It should contain distinct representations of data, architecture, interfaces and components.
- It should lead to data structures that are appropriate for the objects to be implemented.
- It should lead to components that have independent functional characteristics.
- A design should be derived using a repeatable method that is driven by information obtained during software requirements analysis.


Design Principles

A set of principles for software design is as follows:

- The design process should not suffer from “tunnel vision”.
- The design should be traceable to the analysis model.
- The design should not reinvent the wheel.
- The design should “minimize the intellectual distance” between the software and the problem in the real world.
- The design should exhibit uniformity and integration.
- The design should be structured to accommodate change.
- The design should be structured to degrade gently.
- Design is not coding.
- The design should be assessed for quality.
- The design should be reviewed to minimize conceptual errors.


Thursday, May 7, 2009

A brief summary of Design Models

This post is just a brief summary of design models, this is a huge topic, and will cover in future posts in parts.

Data Design Model: In order to implement the software we need to convert the analysis class models into the data structures and design classes and this is accomplished using data/class designs. For data design activity we need to have classes and relationships that are defined by CRC index cards and data content depicted by class attributes.

Architectural Design: It defines the relationships between the structural elements, the architectural styles and design patterns, and the constraints that affect the way in which the architecture can be implemented.

Interface Design: It describes how the software elements communicate with each other, with other systems and with human users.

Component Level Design: Structural elements of the software architecture need to be converted into a procedural description of software components and this is accomplished through component level design.

These models collectively form the design model, which is represented diagrammatically as a pyramid structure with data design at the base and component level design at the pinnacle. Each level produces its own documentation, which collectively form the design specifications document, along with the guidelines for testing individual modules and the integration of the entire package.


A brief summary of design engineering

Design Engineering is a process used by software engineers that encompasses the set of principles, concepts and practices leading to the development of a high-quality system or product. Design engineering plays a major role in determining the quality of the software, and a careful review of the design process helps ensure that the quality remains high. Software design serves as the foundation for all the software engineering steps that follow, regardless of which process model is being employed. Without a proper design we risk building an unstable system: one that will fail when small changes are made (and we all know how likely small, or even big, changes are); one that may be difficult to test; one whose quality cannot be assessed until late in the software process, perhaps when critical deadlines are approaching and much capital has already been invested in the product. Making changes later in the cycle to compensate for problems in the design is not guaranteed to succeed, and is expensive.

Steps Involved In Design Engineering:

1. Identifying the need.
2. Defining the problem.
3. Conducting Research.
4. Narrowing Research
5. Analyzing set criteria
6. Finding alternative solutions
7. Analyzing possible solutions
8. Making a decision.
9. Presenting the product.
10. Communicating & selling the product.

During the design process the software specifications are transformed into design models that describe the details of the data structures, system architecture, interface, and components. Each design product is reviewed for quality before moving to the next phase of software development. At the end of the design process a design specification document is produced. This document is composed of the design models that describe the data, architecture, interfaces and components.


Sunday, August 17, 2008

Effort Estimation Technique: Function Point Analysis (Part 1)

The Function Point Analysis technique was developed during the late seventies at IBM, which commissioned one of its employees, Allan Albrecht, to develop it. In the early eighties the technique was refined, and a new organization, the International Function Point Users Group (IFPUG), was founded to take Function Point Analysis forward while keeping the core spirit of what Albrecht had proposed.
Function Point Analysis (FPA) is a sizing measure that uses a system of sizing as per clear business significance. Function Point Analysis is a structured technique of classifying components of a system. One of the primary goals of Function Point Analysis is to evaluate a system's capabilities from a user's point of view. It is a method used to break systems down into smaller components so that they can be better understood and analyzed. The main objectives of FPA are:
1. Measure software by quantifying the functionality requested by and provided to the customer.
2. Measure software development and maintenance independently of technology used for implementation.
3. Measure software development and maintenance consistently across all projects and organizations.
FPA uses functional, logical entities such as outputs, inputs, and inquiries that tend to relate more closely to the functions (that is, business needs) performed by the software as compared to other measures, such as lines of code. FPA has become generally accepted as an effective way to
* estimate a software project's size (and in part, duration)
* establish productivity rates in function points per hour
* normalize the comparison of software modules
* evaluate support requirements
* estimate system change costs
In the world of Function Point Analysis, systems are divided into five large classes of components plus general system characteristics. The first three classes are External Inputs, External Outputs and External Inquiries. Each of these components transacts against files and is therefore called a transaction. The next two classes, Internal Logical Files and External Interface Files, are where data is stored and combined to form logical information. The general system characteristics assess the general functionality of the system.
Details of each of these (a small counting sketch follows the list):
1. Data Functions → Internal Logical Files: This contains logically related data that resides entirely within the application’s boundary and is maintained through external inputs.
2. Data Functions → External Interface Files: The second Data Function a system provides an end user is also related to logical groupings of data. In this case the user is not responsible for maintaining the data. The data resides in another system and is maintained by another user or system.
3. Transaction Functions → External Inputs: This is an elementary process in which data crosses the boundary from outside to inside. This data may come from a data input screen or another application. The data may be used to maintain one or more internal logical files. The data can be either control information or business information.
4. Transaction Functions → External Outputs: An elementary process in which derived data passes across the boundary from inside to outside. The data creates reports or output files that are sent to other applications.
5. Transaction Functions → External Inquiries: The final capability provided to users through a computerized system addresses the requirement to select and display specific data from files. To accomplish this a user inputs selection information that is used to retrieve data that meets the specific criteria. In this situation there is no manipulation of the data. It is a direct retrieval of information contained on the files.
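To make the counting concrete, here is a small sketch that computes an unadjusted function point total from counts of the five component types, using the commonly published IFPUG average-complexity weights (EI=4, EO=5, EQ=4, ILF=10, EIF=7); the component counts themselves are invented for illustration.

#include <stdio.h>

/* Average-complexity IFPUG weights for the five component types. */
#define W_EI   4   /* External Inputs          */
#define W_EO   5   /* External Outputs         */
#define W_EQ   4   /* External Inquiries       */
#define W_ILF 10   /* Internal Logical Files   */
#define W_EIF  7   /* External Interface Files */

int main(void)
{
    /* Invented counts for a small example system. */
    int ei = 12, eo = 8, eq = 5, ilf = 4, eif = 2;

    int ufp = ei * W_EI + eo * W_EO + eq * W_EQ
            + ilf * W_ILF + eif * W_EIF;

    /* 12*4 + 8*5 + 5*4 + 4*10 + 2*7 = 48 + 40 + 20 + 40 + 14 = 162 */
    printf("Unadjusted function points: %d\n", ufp);
    return 0;
}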

