Tuesday, August 30, 2011

What are different object oriented metrics in software measurement?

Object Oriented Metrics


Lines-of-code and function point metrics can be used for object-oriented projects, but they do not provide enough granularity for the schedule and effort adjustments required during an iterative process. Some object-oriented metrics are as follows:

- Number of scenario scripts
A scenario script describes the interaction between the user and the application. It is directly related to the application size and to the number of test cases developed to exercise the system.

- Number of key classes
Key classes are independent components. The number of key classes is an indication of the amount of effort required to develop the software; it also indicates the potential amount of reuse to be applied during system development. Key classes are directly related to the problem domain.

- Number of support classes
Support classes are not directly related to the problem domain. Support classes can be developed for each key class. The number of support classes indicates the amount of effort required to develop the software and the potential amount of reuse to be applied.

- Number of subsystems
A subsystem is an aggregation of classes that supports a function visible to the end user. Once subsystems are identified, a schedule can be laid out in which work is partitioned among the subsystems.

- Average number of support classes per key class
Estimation becomes simpler if the average number of support classes per key class is known.

As the project database grows, the relationships between object-oriented measures and project measures provide metrics that help in project estimation; the sketch below shows one way such counts might feed a rough estimate.
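As a rough illustration of how these counts feed an estimate, the sketch below multiplies the number of key classes by an assumed average number of support classes per key class to get a total class count, and then multiplies by an assumed effort figure per class. The multipliers (2.5 support classes per key class, 18 person-days per class) are illustrative assumptions, not values from this post.

    #include <stdio.h>

    int main(void)
    {
        int key_classes = 20;                /* counted from the analysis model                 */
        double support_per_key = 2.5;        /* assumed average support classes per key class   */
        double persondays_per_class = 18.0;  /* assumed effort per class (illustrative only)    */

        double total_classes = key_classes * (1.0 + support_per_key);
        double effort = total_classes * persondays_per_class;

        printf("Estimated classes: %.0f\n", total_classes);        /* 20 * 3.5 = 70        */
        printf("Estimated effort : %.0f person-days\n", effort);   /* 70 * 18  = 1260      */
        return 0;
    }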


What are different metrics used for software measurement?

Software measurement can be categorized in two ways:
- direct measures of the software process (such as cost and effort applied) and of the product (such as lines of code produced and defects reported).
- indirect measures of the product (such as functionality, quality, complexity, reliability and maintainability).
Because many factors can affect the software work, metrics should not be used to compare individuals or teams.

Size-oriented software metrics are derived by normalizing quality and productivity measures by the size of the software produced. Lines of code is usually chosen as the normalization value, so that metrics such as errors per KLOC or cost per KLOC can be compared with similar metrics from other projects; for example, a 12 KLOC project with 24 reported errors has 2 errors per KLOC. Size-oriented metrics are widely used, but the debate about their validity and applicability continues.

Function-oriented metrics use a measure of the functionality delivered by the application as the normalization value. The function point metric is based on characteristics of the software's information domain and complexity. Function points are language independent and are based on data that is likely to be known early in the evolution of a project.

The relationship between lines of code and function points depends on the programming language used to implement the software and on the quality of the design. Function point and LOC based metrics have been found to be reasonably accurate predictors of software development effort and cost, provided a historical baseline is available.
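A commonly cited form of the function point computation multiplies an unadjusted count (weighted sums of inputs, outputs, inquiries, internal files and external interfaces) by a value adjustment factor built from fourteen complexity ratings: FP = count_total * (0.65 + 0.01 * sum(Fi)). The sketch below applies that standard formula; the individual counts and the rating sum are made-up illustrative numbers.

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative information-domain counts, weighted with the "average"   */
        /* complexity weights usually quoted for function point analysis.        */
        int count_total = 9 * 4    /* external inputs          */
                        + 6 * 5    /* external outputs         */
                        + 4 * 4    /* external inquiries       */
                        + 3 * 10   /* internal logical files   */
                        + 2 * 7;   /* external interface files */

        int f_sum = 38;            /* sum of the 14 value-adjustment ratings (0..5 each) */

        double fp = count_total * (0.65 + 0.01 * f_sum);
        printf("count_total = %d, FP = %.1f\n", count_total, fp);   /* 126, 129.8 */
        return 0;
    }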


Monday, August 29, 2011

What are different process metrics and software process improvement?

Process metrics have a long-term impact: their intent is to improve the process itself. These metrics are collected across all projects. To improve any process, measure its specific attributes, develop meaningful metrics from those attributes, and then use the metrics as indicators that lead to a strategy for improvement.

The process sits at the center, connected to the three factors people, product and technology. The efficacy of a software process is measured indirectly: a set of metrics is derived from the outcomes of the process, such as errors uncovered before the release of the software, defects reported by end users, and other measures. The skill and motivation of the software people doing the work are the most important factors influencing software quality.

Private metrics are known only to an individual and serve as indicators for that individual alone. They include defect rates by individual and by software component, and errors found during development. Private data can serve as an important driver as the individual software engineer works to improve.

Public metrics assimilate information that was originally private to individuals and teams. Calendar times, project-level defect rates, and errors found during formal technical reviews are reviewed to uncover indicators that can improve team performance.

Software process metrics benefit the organization and improve its overall level of process maturity.


Sunday, August 28, 2011

Assignment and logical comparison in the C language

It is necessary to understand how C expresses logical relations. C treats logic as arithmetic: the value 0 (zero) represents false, and all other values represent true. Code written by people uncomfortable with the C language can often be identified by the use of #define to create a "TRUE" value. Because logic is arithmetic in C, relational and logical operators simply produce the int values 0 and 1, and any integer expression can be used as a condition. There are a number of operators that are typically associated with logic:

Relational and Equivalence Expressions:
a < b   1 if a is less than b, 0 otherwise.
a > b   1 if a is greater than b, 0 otherwise.
a <= b  1 if a is less than or equal to b, 0 otherwise.
a >= b  1 if a is greater than or equal to b, 0 otherwise.
a == b  1 if a is equal to b, 0 otherwise.
a != b  1 if a is not equal to b, 0 otherwise.

C (prior to C99's _Bool and <stdbool.h>) does not have a dedicated Boolean type as many other languages do: 0 means false and anything else means true. Often #define TRUE 1 and #define FALSE 0 are used to work around this. It is a better idea to test for what you are actually expecting as a result from a function call, since there are many different ways of indicating error conditions depending on the situation. Another thing to note is that relational expressions do not evaluate as they would in mathematical texts.
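A small illustration of both points: relational operators produce the int values 0 or 1, and a chained comparison such as 0 < a < 5 is evaluated left to right rather than as the mathematical statement "a lies between 0 and 5".

    #include <stdio.h>

    int main(void)
    {
        int a = 7;

        printf("%d\n", a < 10);         /* prints 1: the comparison itself has value 1      */
        printf("%d\n", a == 3);         /* prints 0                                         */

        /* Looks like the mathematical "0 < a < 5", but C evaluates (0 < a) first,          */
        /* giving 1, and then checks 1 < 5, which is true even though a is 7.               */
        printf("%d\n", 0 < a < 5);      /* prints 1, not 0                                  */

        /* The intended range check must be written with a logical AND.                     */
        printf("%d\n", 0 < a && a < 5); /* prints 0                                         */
        return 0;
    }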

Logical Expressions
a || b  1 if EITHER a or b (or both) is true, 0 otherwise.
a && b  1 if BOTH a and b are true, 0 otherwise.
!a      0 if a is true, 1 if a is 0.

C uses short-circuit evaluation of logical expressions: once the truth of a logical expression has been determined, no further operands are evaluated. In an expression such as i >= 0 && a[i] != 0, we therefore need not worry about accessing an out-of-bounds array element, because a[i] is evaluated only once it is already known that i is greater than or equal to zero.
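The array-bounds remark refers to the common idiom sketched below: because && stops evaluating as soon as its left operand is false (and || as soon as its left operand is true), the subscript a[i] is never touched when i is out of range. The array and index values are arbitrary examples.

    #include <stdio.h>

    #define N 5

    int main(void)
    {
        int a[N] = {3, 0, 7, 0, 9};
        int i = -1;

        /* If i is out of range, evaluating a[i] would be undefined behaviour.      */
        /* The bounds checks run first, and && short-circuits before a[i] is read.  */
        if (i >= 0 && i < N && a[i] != 0)
            printf("a[%d] is nonzero\n", i);
        else
            printf("index %d skipped or element is zero\n", i);

        return 0;
    }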

Bitwise Boolean Expressions:
The bitwise operators work bit by bit on their operands. The operands must be of integral type. The six bitwise operators are & (AND), | (OR), ^ (exclusive OR, commonly called XOR), ~ (NOT), << (shift left), and >> (shift right). The negation operator is a unary operator which precedes the operand; the others are binary operators which lie between the two operands. The precedence of &, ^ and | is lower than that of the relational and equivalence operators, so it is often necessary to parenthesize expressions involving these operators.
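The precedence warning matters in tests like the one below: == binds more tightly than &, so flags & MASK == MASK is parsed as flags & (MASK == MASK), which is almost never what was meant. The flag values here are arbitrary examples.

    #include <stdio.h>

    #define READABLE  0x01
    #define WRITABLE  0x02

    int main(void)
    {
        unsigned flags = READABLE | WRITABLE;   /* 0x03: both bits set                     */

        /* Correct: parenthesize the bitwise AND before comparing.                         */
        if ((flags & WRITABLE) == WRITABLE)
            printf("writable bit is set\n");

        /* Without parentheses this would parse as flags & (WRITABLE == WRITABLE),         */
        /* i.e. flags & 1, which actually tests the READABLE bit instead.                  */

        printf("flags without WRITABLE: 0x%02X\n", flags & ~WRITABLE);  /* 0x01 */
        printf("flags toggled READABLE: 0x%02X\n", flags ^ READABLE);   /* 0x02 */
        printf("flags shifted left:     0x%02X\n", flags << 1);         /* 0x06 */
        return 0;
    }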

Assignment of values:
Programmers should take special care of the fact that the "equal to" operator is ==, not =. This is the cause of numerous coding mistakes and is often a difficult-to-find bug: the expression (a = b) assigns a the value of b and subsequently evaluates to b, whereas the expression (a == b), which uses the equality operator, checks whether a is equal to b. Note that if you confuse = with ==, your mistake will often not be brought to your attention by the compiler. A statement such as if (c = 20) {} is considered perfectly valid by the language, but it will always assign 20 to c and evaluate as true. A simple technique to avoid this kind of bug is to put the constant first.
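The difference, and the "constant first" trick, in a few lines. The variable names are arbitrary; the point is that if (20 == c) still compiles, while the mistyped if (20 = c) is rejected because a constant cannot be assigned to.

    #include <stdio.h>

    int main(void)
    {
        int c = 5;

        if (c = 20)        /* BUG: assigns 20 to c, so the condition is always true        */
            printf("this always runs, and c is now %d\n", c);

        c = 5;
        if (c == 20)       /* comparison: false here, so the branch is skipped             */
            printf("never printed\n");

        /* Writing the constant first turns the typo into a compile-time error:            */
        /* "if (20 = c)" does not compile, whereas "if (20 == c)" is fine.                  */
        if (20 == c)
            printf("also never printed\n");

        return 0;
    }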


Friday, August 26, 2011

What are the fundamental differences between C and C++ ?

Developments in software technology continue to be dynamic: new tools and techniques are announced in quick succession. To build today's complex software it is not enough to put together a sequence of programming statements and sets of procedures and modules; we need to incorporate sound construction techniques and program structures that are easy to comprehend, implement and modify. Thus the language C, and subsequently C++, were born. Since C++ is an extended version of C, it is essentially a superset of C.
Conventional programming, using languages like C, is known as procedure-oriented programming (POP). The problem is viewed as a sequence of things to be done, such as reading, calculating and printing, and a number of functions are written to accomplish these tasks; the primary focus is on functions. Object-oriented programming (OOP), using languages like C++, is an approach to program organization and development that attempts to eliminate some of the pitfalls of conventional programming methods by incorporating the best of structured programming features with several powerful new concepts.
Therefore, here we derive some basic differences between C and C++:

1. As noted above, C follows the POP paradigm while C++ follows the OOP paradigm. In C the emphasis is on doing things (procedures), whereas C++ views the problem in terms of the objects involved rather than the procedures for doing it; the emphasis is on data.
2. Data is given second-class status in C, but in C++ data is given top priority.
3. In C, global data are more vulnerable to inadvertent change by a function, whereas in C++ data can be encapsulated or hidden and is therefore more secure.
4. C lacks data hiding features, but C++ provides data abstraction and encapsulation, which make data hiding possible.
5. C follows a top-down approach while C++ uses a bottom-up approach.
6. In C, a number of functions are written to accomplish a task, whereas in C++ programs are divided into what are known as objects and tied together using member functions.
7. It is not easy to add functions and data to an existing C program, but they can be added easily wherever required in a C++ program.
8. C does not support the class concept, but C++ does.
9. Structures are present in both C and C++, but they behave differently: C structures cannot contain member functions.
10. In C, input/output processing is carried out by library functions such as scanf() and printf(); C++ additionally provides the stream objects cin and cout.
11. Function overloading and operator overloading are not supported by C, so polymorphism is absent in C; C++ supports polymorphism well.
12. C lacks the namespace feature, while C++ supports namespaces, which help avoid name collisions.
13. References can be used in C++ but not in C.
14. C uses the malloc() and free() library functions for allocation and de-allocation of memory, while C++ uses the new and delete operators (see the sketch after this list).
15. In C the commonly included header file is <stdio.h>, whereas in C++ it is <iostream>.
16. C++ programs typically take longer to compile than comparable C programs; for this reason C is still commonly used in some settings.
17. C is a low-level language whereas C++ is an intermediate language.
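To make points 10 and 14 concrete, the fragment below shows the C side of the comparison; the comments note the usual C++ counterparts (cin/cout and new/delete) rather than mixing the two languages in one example. The prompt text and values are arbitrary.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int n;

        /* C: formatted input/output through library functions.                  */
        /* C++ would typically use:  std::cin >> n;  std::cout << n;             */
        printf("How many values? ");
        if (scanf("%d", &n) != 1 || n <= 0)
            return 1;

        /* C: dynamic memory via malloc()/free().                                */
        /* C++ would typically use:  int *v = new int[n];  ...  delete[] v;      */
        int *v = malloc(n * sizeof *v);
        if (v == NULL)
            return 1;

        for (int i = 0; i < n; i++)
            v[i] = i * i;

        printf("last value: %d\n", v[n - 1]);
        free(v);
        return 0;
    }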


Thursday, August 25, 2011

What is a process framework? What is the approach of a successful project?

The process framework establishes a skeleton for project planning. It is adapted by allocating a task set that is appropriate to the project. The software team should be given some flexibility to select the process model and engineering tasks best suited to the project.
- For a small project, a linear sequential approach can be used.
- If tight time constraints apply to a project, the RAD model can be used.
- If the deadline is so tight that full functionality cannot reasonably be delivered, an incremental approach can be used.

After the appropriate process model is selected, the process framework is adapted to it. The process framework is invariant and acts as the basis for all software work. Process decomposition starts when the project manager asks how a given framework activity can be accomplished.

The approach to managing a successful software project includes:
- One should start on the right foot.
- Momentum should be maintained.
- Progress should be tracked.
- Decisions should be made smartly.
- Postmortem analysis should be conducted.


Wednesday, August 24, 2011

User Interface Analysis and Design - Testing Interface Mechanisms

There are interface mechanisms through which the interaction between the user and the web application occurs. The testing of some of these mechanisms is described below:
- Links are tested to ensure that the proper content object or function is reached. External link testing should occur throughout the life of the web application, and links within content objects are also tested. Regularly scheduled link tests should be part of the support strategy.

- Client-side scripting tests should be repeated whenever a new version of a popular browser is released. Compatibility testing should be done to ensure that the chosen scripting language works properly in the environmental configurations that support the web application.

- Forms testing is done at two levels:
At the macroscopic level, tests ensure that labels correctly identify fields within the form; that the server receives the information contained within the form; that defaults are used when the user does not select from a pull-down menu or set of buttons; that browser functions do not corrupt data; and that error-checking scripts work properly.
At the targeted level, tests ensure that form fields have the proper width and data types; that appropriate pull-down menu options are specified; that the tab key behaves properly; and that browser auto-fill features do not lead to data input errors.

- Dynamic HTML pages in web applications are tested to ensure that the dynamic display works correctly.

- Pop up windows are tested to ensure that a pop up window is properly positioned and sized; the design of pop up window is consistent with the aesthetic design of interface; scroll bars are working properly.

- Streaming content is tested to ensure that it is up to date, properly displayed, and can be restarted without difficulty.

- Cookies are tested on both the server and the client side. On the server side, tests are conducted to ensure that the cookie is properly constructed and transmitted to the client, and that its persistence and expiration date are correct. On the client side, tests determine whether the web application properly attaches existing cookies to a specific request.


Tuesday, August 23, 2011

What constitutes the testing process of web applications?

The web engineering testing process starts with tests that check content and interface functionality. As testing moves further, navigation testing comes into the picture, and finally tests are done that check the technological capabilities not visible to end users.

Content testing uncovers errors in content. It examines both the static and the dynamic content of the web application.

Interface testing validates the aesthetic aspects of user interface. It uncovers errors that have occurred due to interaction, omissions, ambiguities.

Navigation testing designs test cases that exercise each user scenario against the navigation design. Navigation mechanisms are tested against the use cases to ensure that navigation errors are identified and corrected.

Component testing exercises content and functional units within a web application. In the web application architecture, a unit is a functional component that directly provides service to an end user.

Navigation and component testing are used as integration tests. Strategy behind integration testing depends upon the web application architecture that has been chosen during design.

Thread-based testing integrates and tests each thread of the application individually.
Cluster testing uncovers errors that result from collaborating pages.

Configuration testing uncovers errors specific to a particular client or server environment. Tests are conducted to uncover errors associated with each possible configuration.

Security testing consists of tests designed to exploit vulnerabilities in the web application and its environment.

Performance testing is a series of tests that assess how increased load affects the web application response time and reliability.


WebApp Interface Design - Interface Control Mechanisms and Interface Design Workflow

INTERFACE CONTROL MECHANISM
The objectives of a web application interface are:
- establishing a consistent window into the content and functionality provided by the interface.
- guiding the user through interactions with the web application.
- organizing the content and navigation options.

A metaphor is drawn that guides the user's interaction and enables the user to gain an understanding of the interface. Some interaction mechanisms available to web application designers are:
- navigation menus that list key content and/or functionality.
- graphic icons that enable the user to select some property or specify a design.
- graphic images that implement a link to a content object or to web application functionality.

INTERFACE DESIGN WORKFLOW
It includes the following tasks:
- The information contained in analysis model is reviewed and refined.
- A rough sketch of web application interface layout is developed.
- The user objectives are mapped to specific interface actions.
- The set of user tasks associated with each action is defined.
- For each interface action, storyboard screen images are developed.
- Input from aesthetic design can be used to refine interface layout.
- User interface objects required to implement interface are identified.
- A procedural representation of user's interaction is developed.
- A behavioral representation is developed.
- Interface layout is described.
- Interface design model is refined and reviewed.


Monday, August 22, 2011

What are different design issues and attributes for web applications?

The design model contains enough information to reflect how the requirements are translated into content and executable code. Design should be specific; it is an engineering activity that leads to a high-quality product. The major quality attributes for web applications are:

- Security of web applications is the ability of WebApp and its server environment to stop unauthorized access or threat.
- Availability is an important attribute: it is the measure of the percentage of time that a web application is available for use. End users expect a web application to be available at every moment. Note that using features available on only one browser or platform makes the web application unavailable to those who work on a different platform or browser.
- Scalability asks whether the web application and its interfacing systems can handle significant variation in volume, or whether responsiveness will drop. The web application should be designed so that it can accommodate the increased burden.
- Time to market is a measure of quality from a business point of view.

Assessing content quality includes asking:
- Can the scope and depth of the content be determined, to ensure that it meets the user's needs?
- Can the background and authority of the content's authors be easily identified?
- Is it possible to determine the currency of the content, the last update, and what was updated?
- Are the content and its location stable?
- Is the content credible?
- Is the content unique?
- Is the content valuable to the targeted user?
- Is the content well organized and easily accessible?


Sunday, August 21, 2011

What is meant by Relationship-Navigation Analysis (RNA)?

Relationship-navigation analysis (RNA) is a series of analysis steps that identify relationships among the elements uncovered during the creation of the analysis model. Five steps constitute the RNA approach:
- Stakeholder analysis establishes the stakeholder hierarchy and identifies the various user categories.
- Element analysis identifies the content objects and functional elements that are of interest to end users.
- Relationship analysis identifies the relationships among web application elements.
- Navigation analysis identifies how these elements are accessed by users.
- Evaluation analysis assesses the costs and benefits involved.

RELATIONSHIP ANALYSIS
To assess the analysis model elements and understand the relationships among them, the web engineer can ask:
- What attributes have been identified for the element?
- Does a description of the element exist, and where?
- Is the element composed of other, smaller elements?
- Is the element a member of a larger collection of elements?
- Does an analysis class describe the element?
- What are the pre- and post-conditions for using the element?
- Is the element used in a specific ordering with other elements?
- Does the element always appear in the same place?

The answers to the above questions help the web engineer position the element in question within the web application and establish relationships among elements.

NAVIGATION ANALYSIS
After the relationships among elements are identified, the web engineer defines how each user category navigates from one element to another. The questions that clarify the navigation requirements are:
- how are navigation errors handled?
- should certain elements be easier to reach?
- should group element navigation be given priority over specific element navigation?
- should links be used for navigation?
- should there be a navigation log for users?
- should a navigation map or menu be established?
- for which user category should optimal navigation be designed?


Friday, August 19, 2011

Overview of Functional and Configuration Model in analysis for WebApps

THE FUNCTIONAL MODEL
The functional model addresses two processing elements of a web application:
- user observable functionality delivered by web applications to end-users.
- operations within analysis classes that implement behavior within class.

User observable functionality encompasses processing functions initiated directly by user. These functions are implemented using operations within analysis classes but from end-user point of view, the function is the visible outcome.

The operations within an analysis class manipulate the attributes of the class, as classes collaborate with one another to accomplish the required behavior.

THE CONFIGURATION MODEL
The web application must be thoroughly tested within every browser configuration that is specified as part of configuration model.
In some cases, the configuration model is nothing more than a list of server-side and client-side attributes. For complex web applications, configuration complexities can have an impact on analysis and design.
Client-side software provides the infrastructure that enables access to the web application from the user's location.
On the server side, the appropriate interfaces, communication protocols and related information should be specified if the web application has to access a large database or interoperate with other applications.


Thursday, August 18, 2011

What is Requirement Analysis for Web Applications?

Requirement analysis for web applications consists of formulation, requirement gathering and analysis modeling.
- In formulation, goals and objectives and categories of users for web application are identified.
- In requirement gathering, communication between web engineering team and stakeholders deepens.
- In analysis modeling, content and functional requirements are listed and interaction scenarios are developed.

USER HIERARCHY
It is a good idea to build a user hierarchy. It provides a snapshot of the user population and a cross-check to help ensure that the needs of every user have been addressed. The end-user categories that interact with the web application are identified; as the number of user categories grows, developing a user hierarchy is advised. Each user category provides an indication of the functionality to be delivered by the WebApp and indicates the use cases to be developed for each end user in the hierarchy.

DEVELOPING USE CASES
Use cases are developed for each user category described in the user hierarchy. A use case is relatively informal: a narrative paragraph that describes a specific interaction between a user and the web application. As the size of the web application grows and analysis modeling becomes more rigorous, the preliminary use cases would have to be expanded into a more formal form.

REFINING USE CASE MODEL
Use cases are organized into functional packages and each package is assessed to ensure that it is comprehensible, cohesive, loosely coupled and hierarchically shallow. The new use cases will be added to packages that have been defined, existing use cases will be refined and specific use cases might be reallocated to different packages.


Wednesday, August 17, 2011

What is meant by analysis for web applications?

Web sites are complex and dynamic in nature. Web application analysis concentrates on three important criteria:
- information or content that is presented.
- functions that are to be performed for end user.
- behaviors of web applications.

Analysis of web applications is mainly done by web engineers, non-technical content developers and stakeholders. Analysis modeling is important because it enables a web engineering team to develop a concrete model of the web application requirements and helps to define the fundamental aspects of the problem. Analysis modeling focuses on four important aspects:
- Content analysis identifies content classes and collaborations.
- Interaction analysis describes user interaction, navigation and system behaviors occurring as a consequence.
- Function analysis defines web application functions performed for user and sequence of processing.
- Configuration analysis identifies the operational environment in which a web application resides.

Analysis modeling should be performed for a web application when the following conditions are met:
- the web application is large or complex.
- the number of stakeholders is large.
- the number of web engineers is large.
- the goals and objectives for the web application will affect the business.
- the success of the web application will have a strong effect on the success of the business.


Tuesday, August 16, 2011

What are the guidelines to be remembered if In House Web Engineering strategy is chosen for web application development?

As complexity increases, a web application project becomes much like any software engineering project and requires similar project management. Even so, the guidelines recommended for small and moderately sized WebE projects can be performed quickly; in no case should WebE planning for projects of this size take more than 5 percent of the overall project effort:

- One should understand the scope, the dimensions of change, and project constraints. For an effective WebApp planning, requirements gathering and customer communication are essential precursors.
- An incremental project strategy should be developed so that evolution is not uncontrolled and chaotic.
- Risk analysis should be performed. All risk management tasks are performed for web engineering projects, but the approach is abbreviated. Schedule and technology risks are the most important concerns for most web engineers.
- The overall project estimate should be developed which focuses on macroscopic rather than microscopic issues.
- A set of web engineering tasks is selected which is appropriate for characteristics of problem, product, project, people on web engineering team.
- A schedule is established in which web engineering tasks are distributed along project timeline for increment to be developed.
- Regardless of project size, it is important to establish project milestones so that progress can be assessed.
- Change management is facilitated by the incremental development strategy recommended for web applications. It is often possible to defer a change until the next increment, which reduces the delaying effects associated with changes.


Monday, August 15, 2011

What are the guidelines to be remembered if outsourcing strategy is chosen for web application development?

Web applications are outsourced to vendors who specialize in web development. There are some guidelines to be followed while considering outsourcing strategy for the web development:

PROJECT INITIATION :
Before searching for an outsourcing vendor, some tasks need to be done:
- Analysis tasks should be performed internally.
- The web application audience is identified.
- Overall goals and objectives are defined.
- A rough design of the web application should be developed internally.
- A rough project schedule, including delivery dates and milestone dates, should be developed.
- Responsibilities for the internal organization and the outsourcing vendor are defined.
- The degree of interaction with and oversight by the contracting organization is identified.

SELECTING OUTSOURCING VENDORS :
- Interview the vendor's clients to determine its professionalism and ability to meet schedule commitments.
- Determine the name of the vendor's chief web engineer.
- Examine samples of the vendor's work.

ASSESS THE VALIDITY OF PRICE QUOTES AND RELIABILITY OF ESTIMATES:
- Does the quoted cost provide a direct or indirect return on investment?
- Does the vendor providing the quote have the required professionalism and experience?

UNDERSTANDING THE DEGREE OF PROJECT MANAGEMENT YOU CAN EXPECT:
- It depends on size, cost and complexity of the web application.
- Plans are developed for mitigating, monitoring and managing risks.
- Quality assurance and change control mechanisms are defined.
- Effective communication between contractor and vendor should be established.

THE DEVELOPMENT SCHEDULE SHOULD BE ASSESSED:
- The development schedule should have a fine granularity.
- Tasks and minor milestones should be scheduled on a daily timeline.

MANAGE SCOPE:
- Scope changes as web application project moves forward.
- To manage scope, the work to be performed within an increment is frozen.
- Changes are deferred until the next web application increment.


Sunday, August 14, 2011

What are User Interface Design and Operation oriented Metrics?

User interface design metrics are fine but above all else, be absolutely sure that your end users like the interface and are comfortable with the interactions required.
- Layout appropriateness is a design metric for human-computer interfaces. Layout entities such as graphic icons, text, menus and windows are used to assist the user, and layout appropriateness measures how well their arrangement supports the user's work.
- The cohesion metric for user interface measures the connection of on screen content to other on screen content. UI cohesion is high if data on screen belongs to single major data object. UI cohesion is low if different data are present and related to different data objects.
- Direct measures of user interface interaction include the time required to complete a scenario or operation, the time required to recover from an error, text density, and the number of data or content objects.

Operation oriented metrics are:
- Operation complexity can be computed using conventional complexity metrics; because operations should be limited to a specific responsibility, their complexity should remain low.
- Operation size depends on lines of code. As the number of messages sent by a single operation increases, it is likely that responsibilities have not been well allocated within the class.
- Average number of parameters per operation: the larger the number of operation parameters, the more complex the collaboration between objects.


Saturday, August 13, 2011

What are the metrics for object oriented design?

A more objective view of the characteristics of design can benefit both an experienced designer and the novice. The characteristics that can be measured when we assess an object oriented design are:

- Size which has four views: population, volume, length and functionality.
- Complexity is measured in terms of structural characteristics by checking how classes of an object oriented design are interrelated.
- Sufficiency is defined as the degree to which an abstraction possesses the features required from the point of view of current application.
- Coupling is defined as different connections between the elements of the object oriented design.
- Completeness is defined as the feature set against which we compare the abstraction or design component. It considers multiple points of view. It indirectly implies the degree to which abstraction or design component can be reused.
- Similarity is defined as the degree to which two or more classes are similar in structure, function, behavior etc.
- Volatility for object oriented design is defined as the likelihood that a change will occur.
- Cohesion is defined as the degree to which the set of properties a component possesses is part of the problem or design domain.
- Primitiveness is the degree to which the operation is not constructed out of a sequence of other operations within the class.


Friday, August 12, 2011

What is software quality? What are McCall and ISO 9126 Quality factors?

Achieving high software quality is the ultimate goal. Software quality is assessed by internal and external quality criteria: external quality is critical to the user, while internal quality is meaningful mainly to the developer. Software quality is a mix of factors that vary across different applications and the customers who request them.

McCall's Quality Factors
Software quality is affected by factors that can be measured directly or indirectly. These include:
- During product operation: correctness, reliability, usability, integrity and efficiency.
- During product transition: portability, reusability and interoperability.
- During product revision: maintainability, flexibility and testability.

ISO 9126 Quality Factors
This ISO standard was developed to identify quality attributes, which include:
- Functionality is the degree to which the software satisfies stated needs, indicated by sub-attributes such as suitability, accuracy, interoperability, compliance and security.
- Reliability is the amount of time the software is available for use, indicated by sub-attributes such as maturity, fault tolerance and recoverability.
- Usability is the degree to which the software is easy to use, indicated by sub-attributes such as understandability, learnability and operability.
- Efficiency is the extent to which the software makes optimal use of resources, indicated by sub-attributes such as time behavior and resource behavior.
- Maintainability is the ease with which a repair can be made to the software, indicated by sub-attributes such as changeability, stability and testability.
- Portability is the ease with which the software can be transported from one environment to another, indicated by sub-attributes such as adaptability and replaceability.


Thursday, August 11, 2011

How to test real-time systems?

Real-time applications are time dependent and asynchronous in nature. Time is a new factor that complicates testing: test cases must consider event handling, the timing of data, and the parallelism of the tasks that handle the data. The relationship between the real-time software and its hardware can also cause testing problems.

An effective strategy for testing a real-time system consists of the following steps:
- Test each task independently: tests are designed and executed for each task. Task testing uncovers errors in logic and function, but not timing or behavioral errors.
- The behavior of the real-time system can be simulated using system models; this behavior can be examined and used as a basis for test case design.
- Once errors in individual tasks and in system behavior have been isolated, testing shifts to time-related errors. Asynchronous tasks are tested with different data rates and loads to determine whether inter-task synchronization errors will occur.
- System tests are done to uncover errors at the interface. Since real-time systems generally process interrupts, testing the handling of these Boolean events is essential.


Wednesday, August 10, 2011

What comprises the Web Engineering Team?

A successful web application project needs a successful web engineering team. Web engineering teams can be organized in much the same manner as traditional software teams; however, the players and their roles are quite different.

The roles that people play in a web engineering team are:

- Content is the most important part of a web application, so the role of the content developer or provider focuses on the generation of content.
- The developed content must be organized; the web publisher acts as a mediator between the non-technical content developers and the technical web engineers.
- The web engineer is responsible for activities such as requirements elicitation, analysis modeling, architectural, navigational and interface design, and web application implementation and testing, and should be well versed in the relevant technologies.
- Business domain experts answer questions related to the business goals, objectives and requirements associated with the web application.
- The support specialist is responsible for continuing web application support: all corrections, adaptations and enhancements are taken care of by the support specialist.
- The administrator is responsible for the day-to-day operation of the web application, which includes the development and implementation of policies, support procedures, security rights, the handling of web traffic, and so on.

In order to build a successful web application team:
- team guidelines are established, which include expectations of each team member, how problems are dealt with, and what methods are used to improve effectiveness.
- strong leadership.
- team motivation and respect for individual talents.
- commitment from every team member is necessary.
- momentum should be maintained.


Tuesday, August 9, 2011

What are the requirements gathering steps that are used for web applications?

The objectives of requirements gathering for web applications are to identify the content and functional requirements and to define the interaction scenarios for different classes of users. To achieve these objectives, the following steps are conducted:
- User categories and descriptions are developed for each category by stakeholders.
- Web application requirements are defined and communicated to stakeholders.
- All the information that is gathered is analyzed and then the information is used to follow up with stakeholders.
- The use cases describing the interaction scenarios for each user class are defined.

DEFINING USER CATEGORIES
Understanding the user's background, motivation, and objectives is critical in all software engineering tasks. In order to define a user category:
- one should know the user's overall objective when he or she is using the web application.
- one should know the background of the user and the knowledge of content and functionality of the web application.
- one should know how the user should approach the web application.
- one should know the generic web application characteristics that the user will like or dislike.

COMMUNICATING WITH STAKEHOLDERS AND END USERS
The communication mechanisms that can be used in web engineering work are:
- traditional focus groups.
- electronic focus groups.
- iterative surveys.
- exploratory surveys.
- scenario building.

ANALYZING INFORMATION GATHERED
An evaluation of content objects and operations can be delayed until analysis modeling begins; it is more important to collect information than to evaluate it at this point. As information is gathered, it is categorized by user class and transaction type and then assessed for relevance.

DEVELOPING USE CASES
Use cases tell how a user category will interact with the web application to accomplish a specific action. Use cases help the developer understand the user's perception of the interaction, provide the detail needed to create the analysis model, help to partition the WebE work, and provide guidance to those who test the WebApp.


Monday, August 8, 2011

How are web based systems formulated?

The basis for initiating a web application project is, first, to understand the problem before you begin to solve it and to be sure that the solution you conceive is one that people really want, and second, to plan the work before you begin performing it.
Formulation involves a sequence of web engineering actions that begins with the identification of business needs, moves on to defining features and functions, and ends with requirements gathering. Formulation focuses on the big picture: on the business needs and objectives and related information.

At the beginning of formulation, it should be possible to answer what the main motivation for the web application is, what objectives the web application needs to fulfill, and who will be using it; ideally, each answer can be stated in a single sentence.
Two types of goals are identified: informational goals, which indicate an intention to provide specific content or information to the end user, and applicative goals, which indicate the ability to perform some task within the web application.

Once both types of goals are identified, a user profile is developed that captures the important features related to the users. Once the goals and the user profile have been created, formulation focuses on a statement of scope for the web application.


Sunday, August 7, 2011

What are Web engineering Systems? What are the attributes for web based systems?

Web systems deliver a complex array of content and functionality to end users; web engineering aims to create high-quality web applications. As web-based systems grow more complex, a failure in one part can propagate into broad-based problems, so there is a need for disciplined approaches and new methods and tools for the development, deployment and evaluation of web-based systems and applications.

Attributes for web based systems are:
- A web application should serve the needs of a diverse community of clients.
- A large number of users may access the web application at one time.
- The web application should be able to handle unpredictable load.
- Users of popular web applications demand access and availability around the clock.
- Web applications use hypermedia and access information that exists in databases that were not originally part of the web-based environment.
- The content should be of good quality.
- Web applications evolve continuously; continuous care and feeding allow a web site to grow.
- Web applications have a compressed time to market; web engineers must use methods for planning, analysis, design, implementation and testing that fit tight schedules.
- To protect sensitive content and provide secure modes of data transmission, strong security measures should be implemented.
- The look and feel of a web application should be appealing.

The categories of web applications that are encountered in web engineering are informational, download, customization, interaction, user input, transaction oriented, service oriented, portal, database access, data warehousing.


Saturday, August 6, 2011

What are different metrics for testing?

Software testers must rely on analysis, design, and code metrics to guide them in design and execution of test cases. Metrics for testing fall into two broad categories:
- metrics that attempt to predict the likely number of tests required at various testing levels.
- metrics that focus on test coverage for a given component.

Function-based metrics can be used as a predictor of overall testing effort. Architectural design metrics provide information on the ease or difficulty associated with integration testing.

The metrics defined for object-oriented design provide a general indication of the amount of testing effort required to exercise an object-oriented system. Object-oriented testing can be quite complex; metrics can assist in targeting testing resources at the threads, scenarios, and packages of classes that are suspect based on measured characteristics. Design metrics that have a direct influence on the testability of an object-oriented system include:

- Lack of cohesion in methods (LCOM): the higher the value of LCOM, the more states must be tested.
- Percent public and protected (PAP): a high value of PAP increases the possibility of side effects among classes, because public and protected attributes lead to high coupling.
- Public access to data members (PAD): a high value of PAD increases the possibility of side effects among classes.
- Number of root classes (NOR): the count of distinct class hierarchies described in the design model. As NOR increases, testing effort also increases.
- Fan-in (FIN): It is an indication of multiple inheritance. If it is greater than 1, a class inherits its attributes and operations from more than one root class.
- Number of children (NOC) and depth of inheritance tree (DIT): The super class methods will have to be retested for each sub class.


Friday, August 5, 2011

What are different component level design metrics?

Component-level design focuses on the internal workings of the software and includes measures of cohesion, coupling, and complexity. These measures help in judging the quality of a component-level design. Once a procedural design is developed, component-level design metrics can be applied: it is possible to compute measures of the functional independence, coupling and cohesion of a component and to use them to assess the quality of the design.

The cohesiveness of a module can be described by a set of metrics:
- A data slice is a backward walk through the module that searches for data values affecting the state of the module.
- Data tokens are the variables defined for the module.
- Glue tokens are the data tokens that lie on one or more data slices.
- Superglue tokens are the data tokens common to every data slice in the module.
- The stickiness of a glue token is directly proportional to the number of data slices it binds.

Coupling metrics is an indication of connectedness of a module to other modules. The metric for module coupling encompasses data and control flow coupling, global coupling and environmental coupling.

Complexity metrics are used to predict critical information about the reliability and maintainability of software systems from automatic analysis of source code. They also provide feedback during a software project to help control the design activity. Cyclomatic complexity is the most widely used complexity metric.
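As a quick illustration of cyclomatic complexity: for a single structured function it can be computed as the number of decision points (if, while, for, case, and each && or ||) plus one, or equivalently as V(G) = E - N + 2 for the function's flow graph. The small example below is an assumed, hypothetical function with three decision points, so V(G) = 4, meaning four independent paths must be exercised to cover it.

    #include <stdio.h>

    /* Three decision points (the three if conditions), so V(G) = 3 + 1 = 4. */
    int classify(int x)
    {
        if (x < 0)          /* decision 1 */
            return -1;
        else if (x == 0)    /* decision 2 */
            return 0;
        else if (x < 100)   /* decision 3 */
            return 1;
        return 2;
    }

    int main(void)
    {
        /* One test input per independent path: four paths for V(G) = 4. */
        printf("%d %d %d %d\n", classify(-5), classify(0), classify(7), classify(500));
        return 0;
    }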


Thursday, August 4, 2011

What is the framework for product metrics? What are the measurement principles?

A fundamental framework and a set of basic principles for the measurement of product metrics for software should be established. Talking in terms of software engineering:

- Measure provides a quantitative indication of extent, amount, dimension, size of an attribute of a product or process. Measure is established when a single data point has been collected.
- Measurement is an act of determining a measure. Measurement occurs when one or more data points are collected.
- A metric is a quantitative measure of the degree to which a system, component or process possesses a given attribute. It relates individual measures in some way.
- Indicator is a metric or combination of metrics providing insight into software process, project or product itself.

There is a need to measure and control software complexity. It should be possible to develop measures of different attributes. These measures and metric can be used as independent indicators of the quality of analysis and design models.

Product metrics assist in evaluation of analysis and design models, gives an indication of the complexity and facilitate design of more effective testing. Steps for an effective measurement process are:
- Formulation which means the derivation of software measures and metrics.
- Collection is the mechanism by which the data required to derive the metrics is accumulated.
- Analysis is the computation of metrics.
- Interpretation is the evaluation of metrics.
- Feedback is the recommendation derived after interpretation.

Metrics characterization and validation includes:
- Metric should have desirable mathematical properties.
- The value of the metric should increase when a positive software trait occurs and decrease when an undesirable trait is encountered.
- Metric should be validated empirically.


Wednesday, August 3, 2011

How to test Graphical User Interface and Client Server Architecture?

The complexity of graphical user interfaces has grown, leading to more difficulty in the design and execution of test cases. Since modern graphical user interfaces share a standardized look and feel, standard test cases can be written. A strategy similar to random or partition testing can also be used, and finite state modeling graphs can be used to derive test cases. Testing should be approached using automated tools.

Testing of client/server architectures is a challenging job. The distributed nature, the performance issues associated with transaction processing, the presence of different hardware platforms, network communication, the servicing of multiple clients, and coordination requirements all make the testing more difficult. Testing of client/server software occurs at three levels:
- individual client applications are tested in disconnected mode.
- client and associated server applications are tested in concert.
- complete client server including operations is tested.

Different tests that are conducted for client/server systems are:
- Application function tests.
- Server tests.
- Database tests.
- Transaction tests.
- Network communication tests.


Tuesday, August 2, 2011

How to design a test case for an inter-class?

As the integration of object oriented system begins, the designing of test cases becomes difficult. Testing of collaborations among classes should start at this point. It can be accomplished by applying random and partitioning methods and also scenario based and behavioral testing.

Multiple class random test cases can be generated in following steps:
- For each client class, generate random test sequences from the list of class operations; these operations send messages to the server classes.
- Collaborator class and the operation is determined for each message that is generated.
- The transmitted messages are determined for each operation in the server object.
- The next level of operations are determined for each of the messages.

A state diagram for a class is used to help derive a sequence of tests that exercise the dynamic behavior of the class and of the classes that collaborate with it. The tests that are designed should achieve full state coverage: every behavior of the class should be adequately exercised. For inter-class test case design, multiple state diagrams are used to track the behavioral flow of the system.

A state diagram can be traversed in a breadth-first manner. This implies that each test case exercises a single new transition, and when a new transition is to be tested, only previously tested transitions are used to reach it.


Monday, August 1, 2011

What are different testing methods that are applicable at the class level?

Object-oriented testing begins by evaluating the object-oriented analysis and design models using structured walk-throughs, prototypes, and formal reviews of correctness, completeness and consistency.
Test each operation as part of its class hierarchy, because the class hierarchy defines its context of use. The approach that can be used is to:
- Test each method (and constructor) within a class.
- Test the state behavior (attributes) of the class between methods.

Each test case should contain:
- a list of messages and operations that will be exercised as a consequence of the test.
- a list of exceptions that may occur as the object is tested.
- a list of external conditions for setup.
- supplementary information that will aid in understanding or implementing the test.

Two methods are applicable for testing a class: random testing and partition testing.

In random testing of object-oriented classes, the number of possible operation permutations can grow quite large; a strategy similar to orthogonal array testing can be used to improve testing efficiency. In random testing, the methods applicable to the class are identified, constraints are defined, a minimum test sequence is defined, and then a variety of random (but valid) test sequences is generated.

Partition testing reduces the number of test cases. State-based partitioning categorizes class operations based on their ability to change the state of the class. Attribute-based partitioning categorizes class operations based on the attributes they use. Category-based partitioning categorizes class operations based on the generic function each performs. A small sketch of state-based partitioning follows.
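As a small illustration of state-based partitioning, using C functions standing in for class operations (an assumed account example, not one from this post): the operations that change the account's state (deposit, withdraw) form one partition, and the operations that do not (balance) form another. Test sequences are then designed so that one set exercises only the state-changing operations and another mixes in the non-state-changing ones.

    #include <stdio.h>

    typedef struct {
        double balance;
    } Account;

    /* State-changing operations: one partition of test sequences exercises these. */
    void deposit(Account *a, double amount)  { a->balance += amount; }
    void withdraw(Account *a, double amount) { a->balance -= amount; }

    /* Non-state-changing operation: placed in a separate partition. */
    double balance(const Account *a)         { return a->balance; }

    int main(void)
    {
        Account acc = {0.0};

        /* Test sequence over the state-changing partition. */
        deposit(&acc, 100.0);
        withdraw(&acc, 40.0);
        deposit(&acc, 10.0);

        /* Test sequence that adds the non-state-changing operation. */
        printf("balance = %.2f\n", balance(&acc));   /* expect 70.00 */
        return 0;
    }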

