

Friday, April 30, 2010

Organizational Innovation and Deployment (OID) Process Area

The purpose of Organizational Innovation and Deployment (OID) is to select and deploy incremental and innovative improvements that measurably improve the organization’s processes and technologies. The improvements support the organization’s quality and process-performance objectives as derived from the organization’s business objectives.
It is a Process Management process area at Maturity Level 5.

Quality and process-performance objectives that this process area might address include the following:
- Improved product quality (e.g., functionality, performance).
- Increased productivity.
- Decreased cycle time.
- Greater customer and end-user satisfaction.
- Shorter development or production time to change functionality or add new features.
- Reduced delivery time.
- Reduced time to adapt to new technologies and business needs.

Specific Practices by Goal


SG 1 Select Improvements
- SP 1.1 Collect and Analyze Improvement Proposals.
- SP 1.2 Identify and Analyze Innovations.
- SP 1.3 Pilot Improvements.
- SP 1.4 Select Improvements for Deployment.
SG 2 Deploy Improvements
- SP 2.1 Plan the Deployment.
- SP 2.2 Manage the Deployment.
- SP 2.3 Measure Improvement Effects.

The expected benefits added by the process and technology improvements are weighed against the cost and impact to the organization. Change and stability must be balanced carefully. Change that is too great or too rapid can overwhelm the organization, destroying its investment in organizational learning represented by organizational process assets. Rigid stability can result in stagnation, allowing the changing business environment to erode the organization’s business position.


Thursday, April 29, 2010

Measurement and Analysis (MA) Process Area

The purpose of Measurement and Analysis (MA) is to develop and sustain a measurement capability that is used to support management information needs. Measurement and Analysis is a Support process area at Maturity Level 2 of the Capability Maturity Model Integration (CMMI).
The Measurement and Analysis process area involves the following activities:
- Specifying objectives of measurement and analysis so they are aligned with identified information needs and objectives.
- Specifying measures, analysis techniques, and mechanisms for data collection, data storage, reporting, and feedback.
- Implementing the collection, storage, analysis, and reporting of data.
- Providing objective results that can be used in making informed decisions, and taking appropriate corrective action.

The integration of measurement and analysis activities into the processes of the project supports the following:
- Objective planning and estimating.
- Tracking actual performance against established plans and objectives.
- Identifying and resolving process-related issues.
- Providing a basis for incorporating measurement into additional processes in the future.

Specific Practices by Goal


SG 1 Align Measurement and Analysis Activities
- SP 1.1 Establish Measurement Objectives.
- SP 1.2 Specify Measures.
- SP 1.3 Specify Data Collection and Storage Procedures.
- SP 1.4 Specify Analysis Procedures.
SG 2 Provide Measurement Results
- SP 2.1 Collect Measurement Data.
- SP 2.2 Analyze Measurement Data.
- SP 2.3 Store Data and Results.
- SP 2.4 Communicate Results.

Measurement objectives are used to define measures as well as collection, analysis, storage, and usage procedures for measures. These measures are specified in the project plan. Measures for the supplier, data collection processes and timing, expected analysis, and required storage should be specified in the supplier agreement.
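To make the link between base measures, derived measures, and objectives concrete, here is a minimal sketch in Python; the measure names, values, and the objective threshold are hypothetical examples, not taken from any CMMI artifact.

# Minimal sketch: deriving a measure from collected base measures and
# comparing it against a stated objective (all names and values hypothetical).
base_measures = {
    "defects_found": 42,   # base measure collected from the defect tracker
    "size_kloc": 12.5,     # base measure: delivered size in KLOC
}

# Derived measure: defect density (defects per KLOC).
defect_density = base_measures["defects_found"] / base_measures["size_kloc"]

# Reporting step: compare against a hypothetical quality objective.
objective = 4.0  # defects per KLOC
status = "meets objective" if defect_density <= objective else "corrective action needed"
print(f"Defect density: {defect_density:.2f} defects/KLOC ({status})")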


Wednesday, April 28, 2010

Integrated Product and Process Development (IPPD) environment in CMMI

Integrated Project Management (with IPPD) is a Project Management process area at Maturity Level 3. The purpose of Integrated Project Management is to establish and manage the project and the involvement of the relevant stakeholders according to an integrated and defined process that is tailored from the organization's set of standard processes. Integrated Project Management also covers the establishment of a shared vision for the project and a team structure for integrated teams that will carry out the objectives of the project.
The project's defined process includes all life cycle processes including the IPPD processes that are applied by the project. Processes to select the team structure, allocate limited personnel resources, implement cross-integrated team communication, and conduct issue-resolution processes are part of the project's defined process.

SPECIFIC GOALS AND PRACTICES


SG1: The project is conducted using a defined process that is tailored from the organization's set of standard processes.
- SP1.1: Establish and maintain the project's defined process.
- SP1.2: Use the organizational process assets and measurement repository for estimating and planning the project's activities.
- SP1.3: Integrate the project plan and the other plans that affect the project to describe the project's defined process.
- SP1.4: Manage the project using the project plan, the other plans that affect the project, and the project's defined process.
- SP1.5: Contribute work products, measures, and documented experiences to the organizational process assets.

SG2: Coordination and collaboration of the project with relevant stakeholders are conducted.
- SP2.1: Manage the involvement of the relevant stakeholders in the project.
- SP2.2: Participate with relevant stakeholders to identify, negotiate, and track critical dependencies.
- SP2.3: Resolve issues with relevant stakeholders.

SG3: The project is conducted using the project's shared vision.
- SP3.1: Identify expectations, constraints, interfaces, and operational conditions applicable to the project's shared vision.
- SP3.2: Establish and maintain a shared vision for the project.

SG4: The integrated teams needed to execute the project are identified, defined, structured, and tasked.

- SP4.1: Determine the integrated team structure that will best meet the project objectives and constraints.
- SP4.2: Develop a primary distribution of requirements, responsibilities, authorities, tasks, and interfaces to teams in the selected integrated team structure.
- SP4.3: Establish and maintain teams in the integrated team structure.


Tuesday, April 27, 2010

Configuration Management (CM) and Decision Analysis and Resolution (DAR) process areas in CMMI

The purpose of Configuration Management (CM) is to establish and maintain the integrity of work products using configuration identification, configuration control, configuration status accounting, and configuration audits. It is a support process area at Maturity Level 2.
Configuration management is normally not used or managed within the development process for a Level 1 organization.

Specific Practices by Goal


SG 1 Establish Baselines
- SP 1.1 Identify Configuration Items.
- SP 1.2 Establish a Configuration Management System.
- SP 1.3 Create or Release Baselines.
SG 2 Track and Control Changes
- SP 2.1 Track Change Requests.
- SP 2.2 Control Configuration Items.
SG 3 Establish Integrity
- SP 3.1 Establish Configuration Management Records.
- SP 3.2 Perform Configuration Audits.

Decision Analysis and Resolution (DAR)
It is a support process area at Maturity Level 3. The purpose of Decision Analysis and Resolution (DAR) is to analyze possible decisions using a formal evaluation process that evaluates identified alternatives against established criteria.

Specific Practices by Goal


SG 1 Evaluate Alternatives
- SP 1.1 Establish Guidelines for Decision Analysis.
- SP 1.2 Establish Evaluation Criteria.
- SP 1.3 Identify Alternative Solutions.
- SP 1.4 Select Evaluation Methods.
- SP 1.5 Evaluate Alternatives.
- SP 1.6 Select Solutions.

Note : SP stands for Specific Practice and SG stands for Specific Goal.
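To make the formal evaluation process concrete, here is a minimal sketch of one common evaluation method, a weighted-criteria scoring matrix of the kind SP 1.4 might select and SP 1.5 apply; the alternatives, criteria, weights, and scores are purely illustrative.

# Minimal sketch of a weighted-criteria evaluation (illustrative data only).
criteria_weights = {"cost": 0.4, "risk": 0.3, "time_to_deploy": 0.3}

# Scores (1-5, higher is better) for each identified alternative.
alternatives = {
    "Buy COTS tool":  {"cost": 3, "risk": 4, "time_to_deploy": 5},
    "Build in-house": {"cost": 4, "risk": 2, "time_to_deploy": 2},
    "Outsource":      {"cost": 2, "risk": 3, "time_to_deploy": 4},
}

def weighted_score(scores):
    # Sum of score * weight over all established criteria.
    return sum(scores[c] * w for c, w in criteria_weights.items())

ranked = sorted(alternatives.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores):.2f}")
print("Selected solution:", ranked[0][0])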


Monday, April 26, 2010

Benefits of Capability Maturity Model (CMM)

Quality is the very essence of any organization. An individual is known by the level of quality he or she delivers; similarly, an organization, be it a product or services company, is identified in the market by the level of quality it maintains.
The CMM provides software organizations with guidance on how to gain control of their processes for developing and maintaining software and to gradually evolve towards a culture of software engineering and management excellence.

Fundamentally speaking, CMM helps an organization in two ways:
- First, CMM instills well-defined practices, which results in increased profitability.
- Second, and most importantly, it brings about an immediate change in an organization's culture and mentality, helping it climb up the CMM ladder.

The advantages of moving up the CMM ladder are evident in a large number of organizations:
- A shift from reactive to proactive management.
- Helps build a skilled and motivated workforce.
- Cuts costs in development and support.
- Shortens delivery schedules and improves delivery of requirements.
- Results in customer satisfaction.
- Improves quality of software products.
- Induces robustness.
- Improves management decision-making.
- Introduces newer technology thus creating competitive advantages.

At Level 1 - Initial level: No real benefits; inconsistency, schedule and budget overruns, and defective applications.
At Level 2 - Repeatable level: By achieving CMM Level 2, projects can set realistic expectations, commit to attainable deadlines, and avoid the Level 1 "death marches" on nights and weekends that produce excessive defects.
At Level 3 - Defined level: IS organizations use historical measures describing the performance of a common application development process as the basis for their estimates.
At Level 4 - Managed level: Predictable results, with knowledge of the factors causing variance, and reuse.
At Level 5 - Optimizing level: Improvements are targeted continuously, yielding ongoing gains in results.

Many of the initial benefits from CMM-based improvement programs result from eliminating rework.


Saturday, April 24, 2010

Introduction to Capability Maturity Model (CMM)

The Capability Maturity Model (CMM) is a methodology used to develop and refine an organization's software development process. The model describes a five-level evolutionary path of increasingly organized and systematically more mature processes. CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD).

Levels of Capability Maturity Model


Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels.
- At the initial level, processes are disorganized, even chaotic. Success is likely to depend on individual efforts, and is not considered to be repeatable, because processes would not be sufficiently defined and documented to allow them to be replicated.
- At the repeatable level, policies for managing a development project and procedures to implement those policies are established. Effective management processes for development projects are institutionalized, which allow organizations to repeat successful practices developed on earlier projects, although the specific processes implemented by the projects may differ.
- At the defined level, an organization has developed its own standard software process through greater attention to documentation, standardization, and integration. Processes are used to help the managers, team leaders, and development team members perform more effectively. An organization-wide training program is implemented to ensure that the staff and managers have the knowledge and skills required to fulfill their assigned roles.
- At the managed level, an organization monitors and controls its own processes through data collection and analysis.
- At the optimizing level, processes are constantly being improved through monitoring feedback from current processes and introducing innovative processes to better serve the organization's particular needs.


Friday, April 23, 2010

Nanotechnology can improve efficiency of solar cells

Nanotechnology has shown the possibility of fulfilling everyone’s dream of getting cheap and clean energy through its strategic applications. Its intersection with energy is going to change the way energy was hitherto being generated, stored, transmitted, distributed and managed. Nanotechnology is particularly going to revolutionize the solar energy sector.

Using nano-particles in the manufacture of solar cells has the following benefits:

- Reduced manufacturing costs as a result of using a low temperature process.
- Reduced installation costs achieved by producing flexible rolls instead of rigid crystalline panels.
- Improved power performance by 60 percent in the ultraviolet range of the spectrum.
Inexpensive solar cells would also help provide electricity for rural areas or third world countries. Since the electricity demand in these areas is not high, and the areas are so distantly spaced out, it is not practical to connect them to an electrical grid. However, this is an ideal situation for solar energy.
Finally, inexpensive solar cells could also revolutionize the electronics industry. Solar cells could be embedded into clothing and be ‘programmed’ to work for both indoor light and sunlight.
Consequently, even though conventional solar cells are expensive and cannot yet achieve high efficiency, it may be possible to lower the manufacturing costs using nanotechnology.


Thursday, April 22, 2010

Nanotechnology - a key for enhancing fuel cell performance

Nanotechnology is being used to reduce the cost of catalysts used in fuel cells to produce hydrogen ions from fuel such as methanol and to improve the efficiency of membranes used in fuel cells to separate hydrogen ions from other gases such as oxygen.

Fuel cells that are currently designed for transportation need rapid start-up periods for the practicality of consumer use. This process puts a lot of strain on the traditional polymer electrolyte membranes, which decreases the life of the membrane requiring frequent replacement. Using nanotechnology, engineers have the ability to create a much more durable polymer membrane, which addresses this problem. Nanoscale polymer membranes are also much more efficient in ionic conductivity. This improves the efficiency of the system and decreases the time between replacements, which lowers costs.

Modern fuel cells have the potential to revolutionize transportation. Like battery-electric vehicles, fuel cell vehicles are propelled by electric motors. But while battery-electric vehicles use electricity from an external source and store it in a battery, fuel cells onboard a vehicle are electrochemical devices that convert a fuel's chemical energy directly to electrical energy with high efficiency and without combustion. These fuel cells run at relatively low temperature (<100°C) and therefore need catalysts to generate useful currents at high potential, especially at the electrode where oxygen is reduced (the cathode of the fuel cell).

Carbon Nanohorns provide a unique combination of strength, electrical conductivity, high surface area and open gas paths making them an ideal next generation electrode for various fuel cell applications. Nanotechnology is playing an increasing role in solving the world energy crisis. Platinum nano-particles produced and marketed under the trade name P-Mite are ideal candidates as a novel technology for low platinum automotive catalysts and for single-nanotechnology research. Lanthanum Nanoparticles, Cerium nanoparticles, Strontium Carbonate Nano-particles, Manganese Nanoparticles, Manganese Oxide Nanopowder, Nickel Oxide Nanopowder and several other nanoparticles are finding application in the development of small cost-effective Solid Oxide Fuel Cells (SOFC). And Platinum Nanoparticles are being used to develop small Proton Exchange Membrane Fuel Cells (PEM).


Wednesday, April 21, 2010

Overview of Nanotechnology and its applications

Nanotechnology, often shortened to "nanotech", is the study of controlling matter on an atomic and molecular scale. Generally, nanotechnology deals with structures that are 100 nanometers or smaller in at least one dimension, and involves developing materials or devices within that size range.
With nanotechnology, a large set of materials and improved products rely on a change in the physical properties when the feature sizes are shrunk.

Nanotechnology Applications in Medicine
The biological and medical research communities have exploited the unique properties of nanomaterials for various applications. Terms such as biomedical nanotechnology, nanobiotechnology, and nanomedicine are used to describe this hybrid field.
- Nanotechnology-on-a-chip is one more dimension of lab-on-a-chip technology.
- Nanotechnology has been a boon to the medical field, enabling delivery of drugs to specific cells using nanoparticles.
- Nanotechnology can help to reproduce or to repair damaged tissue. “Tissue engineering” makes use of artificially stimulated cell proliferation by using suitable nanomaterial-based scaffolds and growth factors.

Nanotechnology Applications in Electronics
Nanotechnology holds some answers for how we might increase the capabilities of electronics devices while we reduce their weight and power consumption.

Nanotechnology Applications in Space
Advancements in nanomaterials make lightweight solar sails and a cable for the space elevator possible. By significantly reducing the amount of rocket fuel required, these advances could lower the cost of reaching orbit and traveling in space. In addition, new materials combined with nanosensors and nanorobots could improve the performance of spaceships, spacesuits, and the equipment used to explore planets and moons, making nanotechnology an important part of the ‘final frontier.’

Nanotechnology Applications in Food
Nanotechnology is having an impact on several aspects of food science, from how food is grown to how it is packaged. Companies are developing nanomaterials that will make a difference not only in the taste of food, but also in food safety, and the health benefits that food delivers.


Tuesday, April 20, 2010

Introduction to Grid Computing

Grid Computing can be defined as applying resources from many computers in a network to a single problem, usually one that requires a large number of processing cycles or access to large amounts of data.
- Grid computing is the act of sharing tasks over multiple computers.
- These computers join together to create a virtual supercomputer: networked computers can work on the same problems traditionally reserved for supercomputers, and collectively such a network can be very powerful.
- The idea of grid computing originated with Ian Foster, Carl Kesselman and Steve Tuecke.
- Grid computing techniques can be used to create very different types of grids, adding flexibility as well as power by using the resources of multiple machines.
- Grid computing is similar to cluster computing, but there are a number of distinct differences. In a grid, there is no centralized management; computers in the grid are independently controlled, and can perform tasks unrelated to the grid at the operator's discretion.
- The computers in a grid are not required to have the same operating system or hardware.
- At its core, grid computing enables devices, regardless of their operating characteristics, to be virtually shared, managed, and accessed across an enterprise, industry, or workgroup.

Benefits of Grid Computing


When you deploy a grid, it will be to meet a set of business requirements. To better match grid computing capabilities to those requirements, it is useful to keep in mind some common motivations for using grid computing.
- Exploiting underutilized resources: One of the basic uses of grid computing is to run an existing application on a different machine. The machine on which the application is normally run might be unusually busy due to a peak in activity; the job in question could be run on an idle machine elsewhere on the grid.
- Parallel CPU capacity: The potential for massive parallel CPU capacity is one of the most common visions and attractive features of a grid. A CPU-intensive grid application can be thought of as many smaller sub-jobs, each executing on a different machine in the grid.
- Virtual resources and virtual organizations for collaboration: Another capability enabled by grid computing is to provide an environment for collaboration among a wider audience. Grid computing can take these capabilities to an even wider audience, while offering important standards that enable very heterogeneous systems to work together to form the image of a large virtual computing system offering a variety of resources.
- Access to additional resources: In addition to CPU and storage resources, a grid can provide access to other resources as well. The additional resources can be provided in additional numbers and/or capacity.
- Resource balancing: A grid federates a large number of resources contributed by individual machines into a large single-system image. For applications that are grid-enabled, the grid can offer a resource balancing effect by scheduling grid jobs on machines with low utilization.
- Reliability: High-end conventional computing systems use expensive hardware to increase reliability. They are built using chips with redundant circuits that vote on results, and contain logic to achieve graceful recovery from an assortment of hardware failures.
- Management: The goal to virtualize the resources on the grid and more uniformly handle heterogeneous systems will create new opportunities to better manage a larger, more distributed IT infrastructure. It will be easier to visualize capacity and utilization, making it easier for IT departments to control expenditures for computing resources over a larger organization.


Monday, April 19, 2010

Test Automation Framework: What is Data Driven testing (definition and more) ..

In the previous posts, we have been talking about different testing automation frameworks. In the current post, we talk about another test automation framework, called 'Data Driven Testing', including definition and some details.
In this framework, variables are used both for input values and for output verification values. These values are read from data files (different kinds of data sources such as datapools, ODBC sources, CSV files, Excel files, DAO objects, ADO objects, and so on) and are then loaded into variables (these variables can be used in scripts that are either manually written or recorded). The test script, in turn, takes care of moving through the application, opening and reading the data files, and logging the test results.
This may sound similar to the table driven testing (and it is similar in the sense that the test case is contained not in the test script, but in a data file), with the script just being used for moving through the workflow (navigating through the application). The difference is that in this case the data is stored in data objects, not in tables (and the navigation is not stored in the data).
What are some of the advantages of using this automation framework? There is a reduction in the number of scripts you need for your overall test cases, the framework copes well when the workflow has to accommodate changes, and it is also very handy in terms of the effort required for maintenance.
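As a minimal, tool-independent sketch of the idea in Python: input values and expected outputs live in a CSV file, and one script drives the same check over every row. The file name 'testdata.csv', its columns, and the apply_discount() function under test are all hypothetical.

# Minimal data-driven testing sketch: the test data lives in a CSV file
# (columns: price, percent, expected); the script reads the data, runs the
# check for each row, and logs the result. All names are hypothetical.
import csv

def apply_discount(price, percent):
    # Stand-in for the application logic that would really be under test.
    return round(price * (1 - percent / 100), 2)

def run_data_driven_tests(path="testdata.csv"):
    results = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            actual = apply_discount(float(row["price"]), float(row["percent"]))
            passed = abs(actual - float(row["expected"])) < 0.01
            results.append((row, passed))
            print(f"{row} -> {'PASS' if passed else 'FAIL'} (got {actual})")
    return results

if __name__ == "__main__":
    run_data_driven_tests()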


Sunday, April 18, 2010

Test Automation Framework: What is The Test Library Architecture Framework (including definition)

In the previous 2 posts on the subject of Test Automation Frameworks (Keyword Driven / Table Driven, Test Script Modularity), we have covered 2 of the models used for test automation frameworks. In this post, we will cover another test automation framework called 'Test Library Architecture'.
The Test Library Architecture framework is similar to the Test Script Modularity framework and uses the same level of abstraction; the difference is that the application under test is broken down into functions and procedures (or into objects and methods) rather than scripts. The tester creates libraries (in the form of SQABasic libraries, APIs, DLLs, and so on) that represent the modules and functions of the application; once these are created, they are called directly from the test case script.
One of the advantages of this framework is that it provides a high degree of modularization and makes the automated test cases easier to maintain: if a control in the application changes, you modify only the library file rather than every test script.
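A minimal sketch of the pattern in plain Python: the application's actions live in a separate library (shown here as functions in one module for brevity), and the test case script calls only the library. The module and function names (login, create_order) are hypothetical stand-ins for whatever the application under test exposes.

# app_library.py -- hypothetical library representing application modules.
def login(user, password):
    # In a real framework this would drive the application UI or an API.
    print(f"logging in as {user}")
    return password == "secret"          # illustrative check only

def create_order(item, quantity):
    print(f"creating order: {quantity} x {item}")
    return {"item": item, "quantity": quantity, "status": "created"}

# test_orders.py -- the test case script calls only library functions, so a
# change to a UI control is absorbed inside the library, not in every test.
def test_create_order():
    assert login("alice", "secret")
    order = create_order("widget", 3)
    assert order["status"] == "created"

if __name__ == "__main__":
    test_create_order()
    print("test_create_order passed")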


Friday, April 16, 2010

Test Automation Frameworks: What is Keyword-driven/table-driven testing (including definition)

For a few of the last posts, we have been looking at more details about test automation, including the benefits of automation, some scenarios in which we should not use automation, and the scenarios in which we should use automation. In addition, we started discussions around the use of test automation frameworks and how they prove to be more beneficial than just creating test cases by recording your test scenarios. In this post, let us consider one of the test automation frameworks, based on "Keyword-driven/table-driven testing".
'Keyword driven testing' and 'table driven testing' seem like two different terms, but they actually refer to the same method. They denote an application-independent framework, which requires the development of data tables and keywords. These data tables and keywords are independent of the test automation tool being used, and also independent of the test scripts used to drive the application that is being tested. Keyword driven tests look very similar to manual test cases. When keyword driven tests are used, a table documents the functionality being exercised, and this functionality is also mapped through step-by-step instructions for each test. The entire testing process is driven by data.
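A minimal sketch of the mechanism just described, in Python: the test is a table of rows (keyword plus arguments), and a small driver maps each keyword to an action. The keywords, arguments, and the login scenario are hypothetical and tool-independent.

# Minimal keyword-driven / table-driven sketch (all names hypothetical).
def open_page(url):
    print(f"opening {url}")

def enter_text(field, value):
    print(f"typing '{value}' into {field}")

def click(button):
    print(f"clicking {button}")

def verify_title(title):
    print(f"verifying page title is '{title}'")

KEYWORDS = {
    "open_page": open_page,
    "enter_text": enter_text,
    "click": click,
    "verify_title": verify_title,
}

# The data table: in practice this would come from a spreadsheet or CSV file.
test_table = [
    ("open_page",    ["http://example.com/login"]),
    ("enter_text",   ["username", "alice"]),
    ("enter_text",   ["password", "secret"]),
    ("click",        ["login_button"]),
    ("verify_title", ["Dashboard"]),
]

# The driver interprets each row of the table.
for keyword, args in test_table:
    KEYWORDS[keyword](*args)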

Benefits of keyword driven testing:
- The tester can be productive quickly: extensive training on the tool can be done later, because at the time of testing the tester only needs to know the keywords and the format of the test plan.
- The scripts can be written by somebody who has expertise in the scripting language, and this activity can happen ahead of the detailed test plan; the tester does not have to be bothered with the scripting process.
- A spreadsheet format can be used for writing the detailed test plan.

Some problems with this technique:
- If there are a large number of keywords, the tester needs to learn all of them, and this initial effort can take time. Once that is done, it is no longer a constraint.
- You need people skilled in using the Scripting language of the tool being used.


Introduction to Peer-to-Peer Networking

Peer to peer is an approach to computer networking where all computers share equivalent responsibility for processing data. Peer-to-peer networking (also known simply as peer networking) differs from client-server networking, where certain devices have responsibility for providing or "serving" data and other devices consume or otherwise act as "clients" of those servers.

Characteristics of Peer-to-Peer Network


- A P2P network can be an ad hoc connection—a couple of computers connected via a Universal Serial Bus to transfer files.
- A P2P network also can be a permanent infrastructure that links a half-dozen computers in a small office over copper wires.
- A P2P network can be a network on a much grander scale in which special protocols and applications set up direct relationships among users over the Internet.
- P2P software systems like Kazaa and Napster rank amongst the most popular software applications ever.
- P2P technologies promise to radically change the future of networking. P2P file sharing software has also created much controversy over legality and "fair use."
- A P2P network implements search and data transfer protocols above the Internet Protocol (IP).
- To access a P2P network, users simply download and install a suitable P2P client application.

A P2P setup can facilitate the following:
- Sharing of files by all the users of the network.
- Telephony.
- Media streaming - both audio and video.
- Community/discussion forums.

Types of P2P Networks


The P2P networks are normally classified as either ‘pure’ or ‘hybrid’ types.
Pure Networks
- All peers are equals; no single node can supersede or dictate terms to another.
- There is no requirement for a central server and, as such, no possibility of a client-server relationship.
- There is also no need for a central router.

Hybrid P2P Networks
- This type of network needs to have a central server that can store data on all the peers and deliver it whenever asked to do so.
- The route terminals are treated as addresses, each of which can be referenced by a specific set of indices.
- Since the central server itself does not host any resources, the peers are required to host all the resources. As and when required, a peer informs the central server about the type of resource to be shared and the details of the peer(s) who should be allowed to share it.

Benefits of P2P Networks


1. Efficient use of resources.
- Unused bandwidth, storage, processing power at the edge of the network.
2. Scalability
- Consumers of resources also donate resources.
- Aggregate resources grow naturally with utilization.
3. Reliability
- Replicas.
- Geographic distribution.
- No single point of failure.
4. Ease of administration
- Nodes self organize.
- No need to deploy servers to satisfy demand (c.f. scalability).
- Built-in fault tolerance, replication, and load balancing.


Thursday, April 15, 2010

Test automation framework: What is Test script modularity (definition and some details)

In the previous post (Test Automation Frameworks), I started out with a short definition of test automation frameworks and some of their benefits, and also listed the five different types of test automation frameworks currently in use (and given the number of high-end and complex test automation tools available, many more combinations are possible). In this post, I will talk more about one of these, which is called "Test script modularity".
A simple definition of the Test Script Modularity framework: this is the most basic of the various test automation frameworks; the idea is to create a number of small, independent test scripts that represent the modules, sections, and functions of the application under test. Once these scripts are created, they are combined in a hierarchical fashion to build larger tests, with the aim of creating complete test cases.
What is the basic principle behind this ? It is a basic principle of design to create a layer (or to be more technical, an abstraction layer) for a component. This layer in turn ensures that the component is available to the rest of the application in a way that even when the component is modified, the rest of the application is not affected. This concept is one of the key concepts of the test script modularity framework.
One of the key advantages of this framework is that it results in a high degree of modularization and keeps the test suite easily maintainable: because each component is encapsulated, when a component changes you do not have to change the other components or the test cases that call it.
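A minimal sketch of the hierarchy in Python, with hypothetical module names: each small script exercises one section of the application behind its own abstraction layer, and larger tests are composed by calling the smaller ones.

# Minimal test-script-modularity sketch: small, independent scripts are
# combined hierarchically into larger tests. All names are hypothetical.

# Small, independent scripts, one per application section.
def test_login_screen():
    print("exercising the login screen")
    return True

def test_search_module():
    print("exercising the search module")
    return True

def test_checkout_module():
    print("exercising the checkout module")
    return True

# Mid-level test built from the small scripts.
def test_purchase_flow():
    return test_login_screen() and test_search_module() and test_checkout_module()

# Top-level test case (the suite) built from the mid-level tests.
def smoke_suite():
    return all([test_purchase_flow()])

if __name__ == "__main__":
    print("smoke suite passed" if smoke_suite() else "smoke suite failed")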


Tuesday, April 13, 2010

What is a test automation framework and why you should go in for one ..

In the previous post (problems with manual automation testing), I had talked about why it makes sense to adopt a strategy for test automation and to use test automation frameworks. Working without a strategy leads to maintenance issues, loss of efficiency, and an ever-expanding list of script files.
So what is a test automation framework? At its most basic, it is a set of concepts, practices, and tools that provide support for automated software testing, with one of the main benefits being lower maintenance costs. Using a test automation framework means that updating a test case takes minimal effort: just the actual test script needs to be updated, and everything else remains the same. It also helps document the overall set of automated test scripts, ensuring that even with team turnover the process keeps working with minimal disturbance.
Let us take some of the test automation frameworks that are used, and then define each of these.
- Test script modularity
- Keyword-driven/table-driven testing
- Data-driven testing
- Test library architecture
- Hybrid test automation

We will move with these definitions in the next post, since each of them needs to be covered in some detail ..


Saturday, April 10, 2010

Process for creating an automation test framework and how to go about automation testing

Every automation tool will give you the ability to record a series of actions so that they can be played back, and you can create a script out of these. However, this is a very basic level of test automation, and it does not provide much flexibility. This process has its uses when you are creating a simple testing strategy: you take a few scenarios where you do not expect variations or changes, record those sequences, and then reuse them again and again. However, when you need to make changes or modify values, the cost of doing so starts increasing tremendously. This is the point at which you should evaluate creating a testing framework for your automated testing. Just the process of starting to design a test automation framework will ensure that you work through your requirements methodically, and it will prevent your team from ending up in a mess (a typical example of a mess is a huge number of unconnected automation scripts, each with its own maintenance needs and poor documentation, leading to disaster when people change).
How do you go about creating test automation frameworks ? Before we even go down this route, we should consider some of the benefits that you would get if you were to have an automation framework / or in the examples below, more of an automation strategy:
- You start to separate your data from tests, something that makes it easier to manipulate different types of data (very useful when you want to test the same test with a wide range of data)
- You can look towards reusing functions (building reusable functions can eventually help in saving a lot of time and make the building of these functions more efficient)
- You prevent a situation where you end up with a huge amount of test cases with high maintenance requirements; maintaining these scripts become easier (very useful when you have teams with high personnel changes or attrition rates)
- You will also start evaluating at which stage it is practical to start building an automation test plan; in some cases, you may delay until the major UI changes are done.
- Refinements to the manual test cases are avoided later; instead, the necessary adaptation of test cases to automation needs is done when they are being generated, which saves a lot of effort later.


Friday, April 9, 2010

Overview of LocalTalk Protocol

LocalTalk refers to the physical networking: the built-in controller in many Apple computers, the cables, and the expansion cards required on some systems. The "official" Apple cabling system typically uses a bus topology, where each device in the network is directly connected to the next device in a daisy chain.
Ethernet is the most-used method of Macintosh networking and all new Macs sport an Ethernet port, but the longevity of Macs means there are still plenty out there with serial ports (i.e., LocalTalk support) but no Ethernet. New iMacs and G4s lack serial ports, so they cannot network directly with older LocalTalk Macs and printers.

The LocalTalk implementation utilized the Mac's RS-422 printer port with twisted-pair cabling and 3-pin DIN connectors. Systems were daisy-chained together and required adapters to work with the Mac's onboard DB-9 or 8-pin DIN connectors. LocalTalk provided a fairly speedy 230.4 kbps networking speed, very usable for the file sizes and traffic levels of the day; compare this to the still-used 56 kbps modem.

A variation of LocalTalk, called PhoneNet used standard unshielded twisted pair telephone wire with 6 position modular connectors (same as used in the popular RJ11 telephone connectors) connected to a PhoneNet transceiver, instead of the expensive shielded twisted-pair cable. In addition to being lower cost, PhoneNet-wired networks were more reliable due to the connections being more difficult to accidentally disconnect.
A LocalTalk-to-Ethernet bridge is a network bridge that joins the physical layer of the AppleTalk networking used by previous generations of Apple Computer products to an Ethernet network. Some LocalTalk-to-Ethernet bridges performed only AppleTalk bridging; others were also able to bridge other protocols, for example TCP/IP in the form of MacIP.


Wednesday, April 7, 2010

Introduction to Frame Relay

Frame Relay is a synchronous, HDLC-based network protocol. It is a standardized wide area networking technology that specifies the physical and logical link layers of digital telecommunications channels using a packet-switching methodology. Data is sent in HDLC packets, referred to as "frames".
Frame Relay technology was developed specifically to address the following needs:
- A higher performance packet technology.
- Simpler network management.
- More reliable networks.
- Lower network costs.
- Integration of traffic from both legacy and LAN applications over the same physical network.

Advantages of Frame Relay


Frame Relay offers an attractive alternative to both dedicated lines and X.25 networks for connecting LANs to bridges and routers. The success of the Frame Relay protocol is based on the following two underlying factors:
- Because virtual circuits consume bandwidth only when they transport data, many virtual circuits can exist simultaneously across a given transmission line. In addition, each device can use more of the bandwidth as necessary, and thus operate at higher speeds.
- The improved reliability of communication lines and increased error-handling sophistication at end stations allows the Frame Relay protocol to discard erroneous frames and thus eliminate time-consuming error-handling processing.

These two factors make Frame Relay a desirable choice for data transmission; however, they also necessitate testing to determine that the system works properly and that data is not lost.


Tuesday, April 6, 2010

Automation software - Winrunner - automated functional GUI testing tool

For many years, WinRunner, by Mercury Interactive, was a leading tool for automation testing. In our experience it was fairly expensive to use: buying a license was costly, and on top of that it took a fairly large fraction of the initial cost each year to retain the AMC (Annual Maintenance Contract), which entitled you to regular updates as well as support from the company's engineers. However, this software is no longer available for sale; it was sold while it was with Mercury Interactive, but after Mercury was taken over by Hewlett-Packard (HP) in 2006, the software was retired two years later, with users advised to move over to another product, HP Functional Testing software.
For the time when WinRunner was available, our teams had a fairly good experience with it. The level of training required was not very extensive, and testers could learn while using the software (although some amount of scripting knowledge was very useful). So, how did the software work? It let users record their UI-based testing, capture those interactions as test scripts, and play them back.
The software worked by emulating user actions, and testers could then customize the recorded scripts to meet their actual requirements (which is why, as mentioned earlier, some scripting experience is useful). You could then take further steps of the kind you normally find in code debuggers, such as adding checkpoints, so that testers can compare the actual results against the expected results. As a bonus, you could also perform additional checks such as verifying database integrity and transaction accuracy.
You also had add-ins that allowed interfaces to various platforms such as C++, C, Visual Basic, Forte, Delphi, Smalltalk, Baan, Browsers such as IE and AOL, etc.
Link to Winrunner User Guide.


Monday, April 5, 2010

Overview of Network Time Protocol (NTP)

Networked computers share resources such as files. These shared resources often have time-stamps associated with them so it is important that computers communicating over networks, including the Internet, are synchronized. The Network Time Protocol (NTP) is an Internet Standard Recommended Protocol for communicating the Coordinated Universal Time (UTC) from special servers called time servers and synchronising computer clocks on an IP network.

The NTP daemon does not only adjust its own computer's system time; in addition, each daemon can act as a client, server, or peer for other NTP daemons:
- As a client, it queries the reference time from one or more servers (a minimal client-side sketch follows after this list).
- As a server, it makes its own time available as reference time for other clients.
- As a peer, it compares its system time with other peers until all the peers finally agree on the "true" time to synchronize to.
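To make the client role concrete, here is a minimal SNTP-style sketch in Python: it sends a 48-byte request to an NTP server over UDP port 123 and converts the returned transmit timestamp (seconds since 1900) to Unix time. The server name is only an example, and a real NTP daemon does far more (filtering, multiple servers, gradual clock adjustment).

# Minimal SNTP-style client sketch; a real NTP daemon is far more careful.
# "pool.ntp.org" is just an example server.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800   # seconds between 1900-01-01 and 1970-01-01

def query_ntp(server="pool.ntp.org", port=123, timeout=5):
    packet = b"\x1b" + 47 * b"\0"          # LI=0, version=3, mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, port))
        data, _ = s.recvfrom(512)
    # Transmit timestamp: the seconds field starts at byte 40 of the reply.
    seconds_since_1900 = struct.unpack("!I", data[40:44])[0]
    return seconds_since_1900 - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    server_time = query_ntp()
    print("server time:", time.ctime(server_time))
    print("local clock:", time.ctime(time.time()))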

Clock Strategy


NTP uses a hierarchical, semi-layered system of levels of clock sources, each level of this hierarchy is termed a stratum and assigned a layer number starting with 0 (zero) at the top. The stratum level defines its distance from the reference clock and exists to prevent cyclical dependencies in the hierarchy.

Importance of NTP


In a commercial environment, accurate time stamps are essential to everything from maintaining and troubleshooting equipment and forensic analysis of distributed attacks, to resolving disputes among parties contesting a commercially valuable time-sensitive transaction.
In a programming environment, time stamps are usually used to determine what bits of code need to be rebuilt as part of a dependency checking process as they relate to other bits of code and the time stamps on them, and without good time stamps your entire development process can be brought to a complete standstill.
So, time is inherently important to the function of routers and networks. It provides the only frame of reference between all devices on the network. This makes synchronized time extremely important and this is where Network Time Protocol comes into picture.

Supported Platforms


NTP's native operating system is UNIX. Today, however, NTP runs under many UNIX-like systems. NTP v4 has also been ported to Windows and can be used under Windows NT, Windows 2000, and newer Windows versions up to Windows Vista and Windows 7.
The standard NTP distribution cannot be run under Windows 9x/ME because some kernel features required for precision timekeeping are missing.


Sunday, April 4, 2010

Simple Network Management Protocol Cont...

Follow the Basic Encoding Rules (BER) when laying out the bytes of an SNMP message:
- The most fundamental rule states that each field is encoded in three parts: Type, Length, and Data.
- Type specifies the data type of the field using a single byte identifier.
- Length specifies the length in bytes of the following Data section.
- Data is the actual value communicated (the number, string, OID, etc).
- A special rule applies when encoding the first two numbers of the OID. According to BER, the first two numbers of any OID (x.y) are encoded as one value using the formula (40*x)+y. The first two numbers in an SNMP OID are always 1.3, so they are encoded as 43 (0x2B), because (40*1)+3 = 43. After the first two numbers, each subsequent number in the OID is encoded in turn, one or more bytes per number (see the sketch after this list).
- The rule for large numbers states that only the lower 7 bits of each byte hold the value (0-127); the highest-order bit is a flag telling the recipient that the number spans more than one byte. Therefore, any number over 127 must be encoded using more than one byte. According to this rule, the number 2680 is encoded as 0x94 0x78.
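Here is a minimal sketch in Python of the two OID rules described above (illustrative only; a real SNMP stack handles full BER encoding of types, lengths, and values): the first two numbers collapse into 40*x + y, and each subsequent number is encoded base-128 with the high bit marking continuation bytes.

# Minimal sketch of the two OID encoding rules described above.
def encode_subidentifier(n):
    # Base-128: 7 value bits per byte, high bit set on all but the last byte.
    chunks = [n & 0x7F]
    n >>= 7
    while n:
        chunks.append((n & 0x7F) | 0x80)
        n >>= 7
    return bytes(reversed(chunks))

def encode_oid(oid):
    # The first two numbers x.y collapse into one sub-identifier: 40*x + y.
    first, second, *rest = oid
    body = encode_subidentifier(40 * first + second)
    for n in rest:
        body += encode_subidentifier(n)
    return body

print(encode_oid([1, 3]).hex())              # 2b    (matches 0x2B above)
print(encode_subidentifier(2680).hex())      # 9478  (matches 0x94 0x78 above)
print(encode_oid([1, 3, 6, 1, 2, 1]).hex())  # 2b06010201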

SNMP Primitives


SNMP has three control primitives that initiate data flow from the requester which is usually the Manager. These would be get, get-next and set. The manager uses the get primitive to get a single piece of information from an agent. You would use get-next if you had more than one item. You can use set when you want to set a particular value.

SNMP Operation


The SNMP design is pretty simple. There are two main players in SNMP: the manager and the agent. The manager is generally the 'main' station, such as HP OpenView. The agent is the SNMP software running on a client system you are trying to monitor.


Saturday, April 3, 2010

Simple Network Management Protocol Architecture - SNMP

SNMP architectural model is a collection of network management stations and network elements. Network management stations execute management applications which monitor and control network elements. The Simple Network Management Protocol (SNMP) is used to communicate management information between the network management stations and the agents in the network elements.

Goals of SNMP architecture


SNMP explicitly minimizes the number and complexity of the management functions realized by the management agent itself. This goal has several implications:
- The development cost for management agent software necessary to support the protocol is accordingly reduced.
- The degree of management function that is remotely supported is accordingly increased, thereby admitting fullest use of internet resources in the management task.
- The degree of management function that is remotely supported is accordingly increased, thereby imposing the fewest possible restrictions on the form and sophistication of management tools.
- Simplified sets of management functions are easily understood and used by developers of network management tools.

A second goal of the protocol is that the functional paradigm for monitoring and control be sufficiently extensible to accommodate additional, possibly unanticipated aspects of network operation and management.

A third goal is that the architecture be, as much as possible, independent of the architecture and mechanisms of particular hosts or particular gateways.

Elements of the Architecture


- Scope of the management information communicated by the protocol.
- Representation of the management information communicated by the protocol.
- Operations on management information supported by the protocol.
- The form and meaning of exchanges among management entities.
- The definition of administrative relationships among management entities.
- The form and meaning of references to management information.


Friday, April 2, 2010

Overview of Simple Network Management Protocol - SNMP

- The Simple Network Management Protocol (SNMP) is an application layer protocol that facilitates the exchange of management information between network devices.
- It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite.
- SNMP enables network administrators to manage network performance, find and solve network problems, and plan for network growth.
- SNMP is a popular protocol for network management.
- SNMP can collect information such as a server's CPU load, chassis temperature, and so on.
- SNMP is the protocol that allows an SNMP manager (the controller) to control an SNMP agent (the controlled device) by exchanging SNMP messages.
- The SNMP protocol was designed to provide a "simple" method of centralizing the management of TCP/IP-based networks.

SNMP Basic Components


SNMP consists of three key components: managed devices, agents, and network-management systems (NMSs).
- A managed device is a network node that contains an SNMP agent and that resides on a managed network.
- An agent is a network-management software module that resides in a managed device. An agent has local knowledge of management information and translates that information into a form compatible with SNMP.
- An NMS executes applications that monitor and control managed devices.

SNMP Commands


- The read command is used by an NMS to monitor managed devices. The NMS examines different variables that are maintained by managed devices.
- The write command is used by an NMS to control managed devices. The NMS changes the values of variables stored within managed devices.
- The trap command is used by managed devices to asynchronously report events to the NMS. When certain types of events occur, a managed device sends a trap to the NMS.

SNMP itself does not define which information (which variables) a managed system should offer. Rather, SNMP uses an extensible design, where the available information is defined by management information bases (MIBs). MIBs describe the structure of the management data of a device subsystem; they use a hierarchical namespace containing object identifiers (OID). Each OID identifies a variable that can be read or set via SNMP.


Thursday, April 1, 2010

Transport Multiplexing Protocol (TMux)

One of the problems with the use of terminal servers is the large number of small packets they can generate. Frequently, most of these packets are destined for only one or two hosts. TMux is a protocol which allows multiple short transport segments, independent of application type, to be combined between a server and host pair.

- TMux protocol is intended to optimize the transmission of large numbers of small data packets that are generated in situations where many interactive Telnet and Rlogin sessions are connected to a few hosts on the network.

- TMux protocol may be applicable to other situations where small packets are generated, but this was not considered in the design.

- TMux is designed to improve network utilization and reduce the interrupt load on hosts which conduct multiple sessions involving many short packets.

- TMux is highly constrained in its method of accomplishing this task, seeking simplicity rather than sophistication.

Protocol Design


TMux operates by placing a set of transport segments into the same IP datagram. Each segment is preceded by a TMux mini-header which specifies the segment length and the actual segment transport protocol. The receiving host demultiplexes the individual transport segments and presents them to the transport layer as if they had been received in the usual IP/transport packaging.
Hence, a TMux message appears as:
| IP hdr | TM hdr | Tport segment | TM hdr | Tport segment| ...|

where:
TM hdr : It is a TMux mini-header and specifies the following Tport segment.
Tport segment : It refers to the entire transport segment, including transport headers.


Header Format


Each 4-octet TMux mini-header has the following general format:
+-------------------------------+
|          Length high          |
+-------------------------------+
|          Length low           |
+-------------------------------+
|          Protocol ID          |
+-------------------------------+
|           Checksum            |
+-------------------------------+
|       Transport segment       |
|             ...               |
Length : It specifies the octet count for this mini header and the following transport segment, from 0-65535 octets.
Protocol ID : It contains the value that would normally have been placed in the IP header Protocol field.
Checksum : This field is the XOR of the first 3 octets.
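Based only on the field descriptions above, here is a minimal sketch in Python of how one TMux mini-header could be built and prepended to a transport segment; it assumes the 16-bit length is sent most significant octet first and covers the 4-octet mini-header plus the segment, with the checksum computed as the XOR of the first three octets.

# Minimal sketch of building TMux mini-header + segment blocks
# (assumptions noted above; not a complete TMux implementation).
import struct

def tmux_block(segment: bytes, protocol_id: int) -> bytes:
    length = 4 + len(segment)                  # mini-header (4 octets) + segment
    length_high, length_low = (length >> 8) & 0xFF, length & 0xFF
    checksum = length_high ^ length_low ^ protocol_id   # XOR of first 3 octets
    header = struct.pack("!BBBB", length_high, length_low, protocol_id, checksum)
    return header + segment

# Example: two small segments multiplexed into one IP payload (protocol 6 = TCP).
payload = tmux_block(b"hello", 6) + tmux_block(b"world!", 6)
print(payload.hex())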

