


Friday, September 13, 2013

What is Portability Testing?

- Portability Testing is the testing of software (a component or an application) to determine the ease with which it can be moved from one machine platform to another.
- In other words, it is a process to verify the extent to which a software implementation behaves the same way on platforms other than the one it was developed on.
- It can also be understood as the amount of work or effort needed to move software from one environment to another without making any changes or modifications to the source code; in the real world, this is seldom possible.
For example, moving a computer application from a Windows XP environment to a Windows 7 environment, measuring the effort and time required to make the move, and thereby determining whether it is reusable with ease or not.

- Portability testing is also considered a sub-part of system testing, as it covers the complete testing of the software as well as its reusability across different computer environments, including different operating systems and web browsers.

What needs to be done before portability testing is performed (prerequisites/preconditions)?
1.   Portability requirements must be kept in mind while designing and coding the software.
2.   Unit and integration testing must have been performed.
3.   The test environment has been set up.

Objectives of Portability Testing
  1. To validate the system partially, i.e., to determine whether the system under consideration fulfills the portability requirements and can be ported to environments with different:
a). RAM and disk space
b). Processor and Processor speed
c). Screen resolution
d). Operating system and its version in use.
e). Browser and its version in use.
To ensure that the look and feel of the web pages remains consistent and functional across the various browser types and their versions.

2.   To identify the causes of failures regarding the portability requirements, which in turn helps in identifying flaws that were not found during unit and integration testing.
3.   To report the failures to the development teams so that the associated flaws can be fixed.
4.   To determine the extent to which the software is ready for launch.
5.   To help in providing project status metrics (e.g., percentage of use case paths that were successfully tested).
6.   To provide input to the defect trend analysis effort.
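
As a minimal illustration (not part of the original checklist), the following Java sketch shows how a test harness might record the environment attributes listed above before a portability run; the property keys are standard java.lang.System keys:

// Minimal sketch: capture the environment attributes a portability
// test report might record for each target platform.
public class EnvironmentProbe {
    public static void main(String[] args) {
        System.out.println("OS name:      " + System.getProperty("os.name"));
        System.out.println("OS version:   " + System.getProperty("os.version"));
        System.out.println("Architecture: " + System.getProperty("os.arch"));
        System.out.println("Java version: " + System.getProperty("java.version"));
        // Approximate memory available to the JVM: one of the resources
        // (RAM/disk space) a portability matrix would vary.
        long maxMemMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max JVM memory (MB): " + maxMemMb);
    }
}

Running the same probe on each target environment gives a baseline against which behavioral differences can be compared.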



Wednesday, June 5, 2013

Explain the various techniques for Deadlock Prevention

Deadlocks are a nightmare for programmers who design and write programs for multitasking or multiprocessing systems. For them it is very important to know how to design programs in such a way as to prevent deadlocks.

Deadlocks are a more common problem in distributed systems, which involve the use of concurrency control and distributed transactions. Deadlocks that occur in these systems are termed distributed deadlocks.

It is possible to detect them using either of the following means:
1. Building a global wait-for graph from local ones through a deadlock detector.
2. Using distributed algorithms such as edge chasing.
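
As a hedged illustration of point 1, here is a minimal Java sketch of deadlock detection as cycle detection in a wait-for graph; the transaction names are hypothetical, and the distributed edge-chasing variant is not shown:

import java.util.*;

// Minimal sketch: an edge T1 -> T2 means "transaction T1 waits for
// a resource held by T2". A cycle in this graph is a deadlock.
public class WaitForGraph {
    private final Map<String, Set<String>> waitsFor = new HashMap<>();

    public void addEdge(String waiter, String holder) {
        waitsFor.computeIfAbsent(waiter, k -> new HashSet<>()).add(holder);
    }

    // Depth-first search with an "on current path" set: reaching a
    // node already on the path means a back edge, i.e. a cycle.
    public boolean hasDeadlock() {
        Set<String> visited = new HashSet<>();
        Set<String> onPath = new HashSet<>();
        for (String node : waitsFor.keySet()) {
            if (dfs(node, visited, onPath)) return true;
        }
        return false;
    }

    private boolean dfs(String node, Set<String> visited, Set<String> onPath) {
        if (onPath.contains(node)) return true;  // back edge: deadlock cycle
        if (!visited.add(node)) return false;    // already fully explored
        onPath.add(node);
        for (String next : waitsFor.getOrDefault(node, Set.of())) {
            if (dfs(next, visited, onPath)) return true;
        }
        onPath.remove(node);
        return false;
    }

    public static void main(String[] args) {
        WaitForGraph g = new WaitForGraph();
        g.addEdge("T1", "T2"); // T1 waits for a lock held by T2
        g.addEdge("T2", "T1"); // T2 waits for a lock held by T1
        System.out.println("Deadlock? " + g.hasDeadlock()); // prints true
    }
}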

- An atomic commitment protocol similar to two-phase commit is used for automatically resolving distributed deadlocks.
- Therefore, there is no need for any other resolution mechanism or a global wait-for graph.
- But this is possible only in commitment-ordering-based distributed environments.
- For environments that use two-phase locking, a similar automatic global deadlock resolution takes place.
- There is another class of deadlocks called phantom deadlocks.
- These are deadlocks detected in the system because of internal delays, but which do not actually exist at detection time.
- Today, there exist a number of ways to increase parallelism where recursive locks would otherwise cause severe deadlocks.
- But like everything else, this comes at a price.
- You have to accept one of these, or both: data corruption or performance overhead.
- Preemption with lock-reference counting and the wait-for graph (WFG) are some examples of this.
- These can be applied either by allowing data corruption during preemption or by using versioning.
Apart from these, heuristic algorithms and algorithms that track all the cycles causing deadlocks can be used for preventing deadlocks.
Even though these algorithms do not offer 100 percent parallelism, they prevent deadlocks with an acceptable trade-off between performance overhead and parallelism.
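
One widely used prevention technique, sketched below in Java as an illustration rather than as a method the article names, is to impose a global order on lock acquisition: if every thread takes locks in ascending order of a fixed rank, no cycle can form in the wait-for graph, so no deadlock can occur.

import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch: deadlock prevention via a global lock ordering.
public class OrderedLocking {
    static final ReentrantLock LOCK_A = new ReentrantLock(); // rank 1
    static final ReentrantLock LOCK_B = new ReentrantLock(); // rank 2

    // Every thread acquires LOCK_A before LOCK_B, regardless of
    // which resource it logically needs first.
    static void doWork(String name) {
        LOCK_A.lock();
        try {
            LOCK_B.lock();
            try {
                System.out.println(name + " holds both locks");
            } finally {
                LOCK_B.unlock();
            }
        } finally {
            LOCK_A.unlock();
        }
    }

    public static void main(String[] args) {
        new Thread(() -> doWork("thread-1")).start();
        new Thread(() -> doWork("thread-2")).start();
    }
}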

A real-world analogy makes just-in-time prevention clearer:
- Consider two trains approaching each other at a crossing junction.
- Their collision can be prevented by some just-in-time prevention means.
- For example, a person at the crossing can operate a switch that allows only one train onto the shared track while the others wait.
- Locks come in the following two types:
  1. Recursive locks: In such locks, only one thread can pass through the lock. Any other thread or process entering the lock must wait until the initial one has finished its task and passed through.
  2. Non-recursive locks: Here a thread can enter the lock only once. If the same thread tries to enter the lock again without unlocking it first, a deadlock can occur.

- There are issues with both of these.
- The first one does not provide distributed deadlock prevention, and the latter offers no deadlock prevention at all.
- In the first one, if the number of threads trying to enter the lock equals the number of locked threads, one of the threads has to be designated the super thread, and only it can execute until completion.
- After the execution of the super thread is complete, the condition reverts from the recursive lock, the super thread gives up its super status, and it notifies the locker that the condition has to be re-checked.
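
A minimal Java sketch of the two lock types, using standard java.util.concurrent classes as stand-ins (ReentrantLock for a recursive lock, a binary Semaphore for a non-recursive one):

import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class LockKinds {
    static final ReentrantLock recursive = new ReentrantLock();
    static final Semaphore nonRecursive = new Semaphore(1); // binary semaphore

    public static void main(String[] args) throws InterruptedException {
        // Recursive (reentrant) lock: the owning thread may re-enter it.
        recursive.lock();
        recursive.lock(); // fine: the hold count simply becomes 2
        System.out.println("hold count = " + recursive.getHoldCount());
        recursive.unlock();
        recursive.unlock();

        // Non-recursive lock: a second acquire by the same thread
        // without a release in between would block forever.
        nonRecursive.acquire();
        // nonRecursive.acquire(); // uncommenting this self-deadlocks
        nonRecursive.release();
    }
}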


Friday, April 5, 2013

What are different types of operating system?


Developing an operating system is one of the most complicated activities and a favorite of many computing hobbyists. For a hobby OS, the code is not directly derived from already existing OSs, and some entirely new concepts might be included in its development. It may also start from modeling an existing one. Whatever the case may be, the hobbyist is his own active developer. Application software might be developed specifically for an OS or hardware. Therefore, when the application has to be ported to some OS that implements its required functionality differently, the application might have to be changed, adapted, or maintained.


Types of Operating System

There are many types of operating systems, which we shall discuss in this article.

1. Real-time operating system
- It is a multi-tasking OS aimed at executing real-time applications.
- These operating systems work on scheduling algorithms written exclusively for them.
- This is done so as to make them achieve deterministic behavior.
- Their main objective is to respond to events quickly and predictably.
- The design is either event-driven or based on time sharing, and sometimes both.
- An event-driven system switches among the different tasks according to the priorities assigned to them.
- On the other hand, systems following the time-sharing methodology switch between tasks based upon clock interrupts.

2. Multi-user and single-user operating systems:
- In multi-user operating systems, the same computer system can be accessed by multiple users at the same time.
- Internet servers and time-sharing systems can be classified as multi-user systems, since by the time-sharing principle they allow multiple users to access the system.
- Operating systems that allow only one user to execute a number of programs simultaneously are called single-user operating systems.

3. Multi-tasking and single-tasking operating systems:
- Operating systems that allow multiple programs to be executed simultaneously (as per human time scales) are termed multi-tasking OSs.
- In a single-tasking OS, only one program can be run at a time.
- Multi-tasking can be done in the following two ways (see the sketch after this list):
Ø  Pre-emptive multi-tasking: The CPU time is sliced, and a time slot is given to each of the programs to be executed. This kind of multi-tasking is supported by operating systems such as Linux, AmigaOS, and Solaris.
Ø  Co-operative multi-tasking: Systems following this rely on each process to give time to the other processes in a pre-defined manner. This multi-tasking type was used by 16-bit versions of Microsoft Windows and by Mac OS versions preceding OS X.
Some OSs, namely Windows 9x and Windows NT, supported both.
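
A minimal Java sketch of what pre-emptive multi-tasking looks like from user code: neither thread ever yields explicitly, yet both make progress because the scheduler slices CPU time between them (the exact interleaving of the output varies by platform and run):

public class PreemptiveDemo {
    public static void main(String[] args) {
        Runnable worker = () -> {
            for (int i = 0; i < 5; i++) {
                System.out.println(Thread.currentThread().getName() + " step " + i);
            }
        };
        // Two uncooperative tasks; the OS/JVM scheduler pre-empts
        // each one so that both run "simultaneously".
        new Thread(worker, "task-A").start();
        new Thread(worker, "task-B").start();
    }
}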

4. Distributed operating system:
- This kind of OS is used to manage a group of computers that are independent of each other and make them appear to be one single system.
- This OS led to the development of networked computers that could link to and communicate with one another.
- These computers in turn paved the way for distributed computing, carrying out computations on more than one computer.
- Computers working in cooperation with each other together make up a distributed system.

5. Embedded Operating System: 
It is used in embedded computer systems, such as PDAs.


Thursday, April 4, 2013

What is an Operating System?


- A collection of small and large programs that helps in the management of the computer's hardware resources is called an operating system.
- As the term suggests, it operates or drives the system.
- The basic common services required by computer programs are offered by the OS.
- Without an OS, application programs would fail to function.
- Operating systems are of many types.
- One such type is the time-sharing OS, which schedules tasks so that resources such as processor time, printing, and mass storage can be utilized efficiently.
- It is an intermediary between the hardware and the user.
- It is through the OS that you are able to actually communicate with the computer hardware.
- Functions such as memory allocation and basic input/output operations depend entirely on the OS.
- Even though the hardware directly executes the application code, that code frequently calls into the OS, or the OS itself interrupts it in between.
- Any device containing a computer has an OS: video game consoles, mobile phones, web servers, supercomputers, and so on.
- Some popular OSs are:
Ø  Android
Ø  BSD
Ø  Linux
Ø  iOS
Ø  Microsoft Windows
Ø  Windows Phone
Ø  Mac OS X
Ø  IBM z/OS
All of these OSs except Windows and z/OS have a relation with UNIX.

- Types of operating systems are:
  1. Real-time OS
  2. Multi-user OS
  3. Multi-tasking OS
  4. Single-tasking OS
  5. Distributed OS
  6. Embedded OS
- It was in the 1950s that basic operating system features such as run-time libraries, interrupts, and parallel processing came into existence.
- The UNIX OS was originally written in assembly language (and later rewritten in C).
- There are many sub-categories in the Unix-like family of operating systems:
  1. System V
  2. BSD
  3. Linux and so on.
- A number of computer architectures are supported by these Unix-like systems.
- They are heavily used in the following fields:
  1. Servers in business
  2. Workstations in academia
  3. Engineering environments
- A few UNIX variants, such as BSD and Linux, are available for free and are quite popular.
- The Open Group is the holder of the UNIX trademark, and it has certified four OSs as UNIX so far.
- Two of the original System V UNIX descendants are IBM's AIX and HP's HP-UX, and they run only on hardware provided by their manufacturers.
- Opposite to these is Sun Microsystems' Solaris OS, which can be used on different hardware types (including SPARC and x86 servers and PCs).
- The POSIX standard was established in pursuit of interoperability among Unix variants.
- This standard is applicable to any OS now, even though it was originally developed especially for the variants of Unix.
- Berkeley Software Distribution or BSD family is a Unix sub–group. 
- It includes the following:
  1. FreeBSD
  2. NetBSD
  3. OpenBSD
- The major use of all of these is in web servers.
- Furthermore, they are also capable of functioning as a PC OS.
- BSD has made a great contribution to the existence of the Internet.
- Most Internet protocols were refined and implemented in BSD.


Thursday, March 21, 2013

What are principles of autonomic networking?


The complexity, dynamism, and heterogeneity of networks are ever on the rise. All these factors are making our network infrastructure insecure, brittle, and unmanageable. Today's world is so dependent on networking that its security and management cannot be put at risk. The answer, in networking terms, is called 'autonomic networking'.
The goal of building such systems is to realize network systems that are capable of managing themselves as per the high-level guidance provided by humans. But meeting this goal calls for a number of scientific advances and newer technologies.

Principles of Autonomic Networking

A number of principles, paradigms and application designs need to be considered.

Compartmentalization: This is a structure with extensive flexibility. The makers of autonomic systems prefer it to a layering approach. This is the first target of autonomic networking.

Function re-composition: An architectural design has been envisioned that would provide highly dynamic, autonomic, and flexible formation of large-scale networks. In such an architecture, functionality would be composed in an autonomic fashion.

Atomization: Functionality is broken down into smaller atomic units. Maximum re-composition freedom is made possible by these atomic units.

Closed control loop: This is one of the fundamental concepts of control theory and is now also counted among the fundamental principles of autonomic networking. The loop controls and maintains the properties of the controlled system within desired bounds by constantly monitoring the target parameters.

The human autonomic nervous system is what inspires the autonomic computing paradigm. An autonomic computing system must therefore have a mechanism by virtue of which it can change its behavior according to changes in the essential variables of its environment and bring itself back into a state of equilibrium.
In the case of autonomic networking, survivability can be viewed in terms of the following:
  1. Ability to protect itself
  2. Ability to recover from the faults
  3. Ability to reconfigure itself as per the environment changes.
  4. Ability to carry out its operation at an optimal level.
The following two factors affect the equilibrium state of an autonomic network:
  1. The internal environment: This includes factors such as CPU utilization, memory usage, and so on.
  2. The external environment: This includes factors such as safety against external attacks etc.
There are 2 major requirements of an autonomic system:
  1. Sensor channels: These sensors are required for sensing the changes.
  2. Motor channels: These channels would help the system in reacting and overcoming the effects of the changes.
The changes sensed by the sensors are analyzed to determine whether the variables are within their viability limits. If a variable is detected out of its limit, the system plans what changes it should introduce to bring the variable back within its limit, thus bringing the system back to its equilibrium state.
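
A minimal Java sketch of such a closed control loop, following the monitor-analyze-plan-execute pattern; the sensor (here a random CPU-utilization reading) and the corrective action (here just a log line) are hypothetical placeholders:

import java.util.function.DoubleSupplier;

public class ControlLoop {
    public static void main(String[] args) throws InterruptedException {
        final double lower = 0.2, upper = 0.8;        // desired viability bounds
        DoubleSupplier cpuUtilization = Math::random; // stand-in sensor channel

        for (int tick = 0; tick < 5; tick++) {
            double value = cpuUtilization.getAsDouble();          // monitor
            boolean inBounds = value >= lower && value <= upper;  // analyze
            if (!inBounds) {
                // Plan a corrective change and execute it; the log line
                // stands in for an actuation through a motor channel.
                System.out.printf("tick %d: %.2f out of bounds, adjusting%n", tick, value);
            } else {
                System.out.printf("tick %d: %.2f within bounds%n", tick, value);
            }
            Thread.sleep(100); // loop period
        }
    }
}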


Wednesday, March 13, 2013

What are characteristics of autonomic system?


Autonomic systems bring both challenges and opportunities for future networking. The increasing number of users has had a negative impact on the complexity of networks, which has grown many times over. Autonomic systems provide a solution to this problem.

Characteristics of Autonomic System

  1. High intelligence: These systems have more intelligence incorporated into them, which lets them tackle this increasing complexity easily.
  2. Business goal: They are driven by the business goal that the quality of experience of the user must be high. Even with a changing environment, their goals remain the same; only the low-level configurations change. For example, when a user switches over to a low-bandwidth network, the bit rate of the video has to be reduced in order to satisfy the goals of the business.
  3. Complex operations: The operations carried out in an autonomic system are complex in nature even for the simplest of services; for example: authentication, video encoding, billing, routing, shaping, QoS prioritization, and admission control.
  4. High-level objectives: The human operator just has to specify the high-level objectives, and it is left to the system whether it chooses to optimize one or more of the goals. In order to achieve this, the system has to translate these objectives into low-level configurations.
  5. Adaptability: The system has the ability to adapt itself to the current environment.
  6. Policy continuum: There are a number of perspectives to this as mentioned below:
Ø  Business view: Includes guidelines, processes and goals.
Ø  System view: The service should be independent of the technology as well as the device that is being used.
Ø  Network view: It should be specific to technology but independent of the device.
Ø  Device view: Both technology and device specific.
Ø  Instance view: Operation should be specific to an instance.

  7. Elements: The elements of the network are assumed to be heterogeneous by autonomic communication systems, whereas in plain autonomic computing the elements are taken to be homogeneous.
  8. Distributed: These systems work in a distributed environment.
  9. Complexity: The complexity in autonomic systems is greater because of the complex autonomic loop, which includes the following operations:
Ø  Interaction between the context and the business goals
Ø  The MAPE (monitor, analyze, plan and execute) loop.

10. Reliability: In autonomic systems, the network has the authority to decide for itself, focusing on high-level objectives. Autonomic systems rely heavily upon artificial intelligence. However, there are issues associated with artificial intelligence: it becomes difficult to intervene when things go wrong, and it is quite difficult to know whether the system is doing what it is supposed to do.
11. Scalability: This is another major characteristic of autonomic systems, which are required to keep track of large amounts of knowledge and information. Autonomic systems have the following tools to take care of this:
Ø Distributed ontologies
Ø Distributed large-scale reasoning
Ø Exchanging only the useful information
Ø Distributing information among the different components of the autonomic network.

But in these cases, detection of conflicts is a difficult task. Efficient protocols are required for handling the various interactions taking place among the autonomic components.
Currently two approaches have been suggested for developing autonomic networking systems, namely:
1. Evolutionary approach: Incorporating autonomic behavior into the pre-existing infrastructure. This approach consists of incremental updates until a fully autonomic system is developed. It is more likely to be adopted, even though it requires a lot of patchwork.
2. Clean-slate approach: This approach is focused on redesigning the Internet.


Sunday, February 17, 2013

Explain Rapise - Web Functional/Regression Test Tool


In this world of web-based applications, every application is unique, and therefore its testing needs also differ from those of other applications. To satisfy all those different testing requirements, environments, specifications, scenarios, and so on, we require many different testing tools. Hence a number of web functional/regression testing tools have been developed, and all of them possess unique features that suit the needs of different testers.

In this article we discuss one such web functional/regression tool called Rapise, developed by Inflectra Inc.

About Rapise

- This tool provides a platform for functional test automation.
- The architecture of this tool is quite open and can be extended if required.
These two qualities make it very flexible for testing applications.
- Rapise has built-in cross-browser testing capabilities that support several versions of the following web browsers:
  1. Microsoft Internet Explorer
  2. Mozilla Firefox and
  3. Google Chrome
- It supports the following applications:
  1. Ajax
  2. GWT
  3. YUI
  4. Flash/Flex
  5. AIR
  6. Silverlight and so on.
- Microsoft Excel spreadsheets can be used for approaches such as keyword-driven testing and data-driven testing.
- Rapise identifies objects based upon CSS and XPath.
- Rapise comes with built-in OCR, i.e., optical character recognition.
- It relies on JavaScript for scripting purposes.
- An open format is used for storing the scripts as well as the identified objects, unlike other tools that store them in a database or a proprietary binary file.
The JavaScript editor included with Rapise is a full-function edition and has a feature called automatic code completion.
- An active JavaScript debugger that is pluggable and has watches and breakpoints is also included in the Rapise tool.
- Rapise has made cross-browser testing much easier than before.
- This is all because of its best-in-class cross-browser testing capabilities and support for multiple browser versions, as mentioned earlier in the article.
You have the choice of recording and creating one test script and executing it in all the major browsers without making any modifications to it.
Rapise claims to have the most flexible as well as the most powerful test automation features available in the market.
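
Rapise's own JavaScript API is not reproduced here; as an illustration of the data-driven pattern the tool supports, the following Java sketch runs the same test logic once per data row (in Rapise the rows would come from an Excel spreadsheet; here they are inlined, and the login function is a hypothetical system under test):

public class DataDrivenLoginTest {
    record Row(String user, String password, boolean expectSuccess) {}

    // Hypothetical system under test.
    static boolean login(String user, String password) {
        return "admin".equals(user) && "secret".equals(password);
    }

    public static void main(String[] args) {
        Row[] rows = {
            new Row("admin", "secret", true),
            new Row("admin", "wrong",  false),
            new Row("guest", "secret", false),
        };
        // One pass of the test logic per data row.
        for (Row row : rows) {
            boolean actual = login(row.user(), row.password());
            String verdict = (actual == row.expectSuccess()) ? "PASS" : "FAIL";
            System.out.println(verdict + ": " + row.user());
        }
    }
}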

Features of Rapise Testing Tool

Below we mention some features of the Rapise testing tool:
  1. Learn-and-go functionality: This lets you create test scripts rapidly, and it works more efficiently than traditional recording and playback methods. Object editing takes place during the learning process.
  2. Keyword driven testing and data driven testing
  3. Windows and web application testing
  4. Can be extended using JavaScript
  5. Adobe AIR and Flash support
  6. Supports DevExpress, Infragistics, and Telerik controls.
  7. Support for Qt framework
  8. Integrated reporting capabilities that are very powerful.
  9. Integration with SpiraTest for effective test management.
- This tool has leveraged the power of an extensible architecture to a great degree.
- The JavaScript source code is made available to the user for library recognition as well as execution.
- All these features let you automate tests where other testing tools are bound to fail.
- With such advanced features, Rapise can be called the most customizable, extensible, and flexible test automation tool.
- Key functions can be modified by users as per their requirements, and the recording aspects can be customized with the help of custom plug-in libraries.
- Since Rapise uses JavaScript, a broad range of users can access it.




Friday, February 15, 2013

What are different Web Functional/Regression Test Tools?


Just as functional and regression testing is important for many software systems, it is important for web applications to undergo functional and regression testing. At present we have a number of tools available for web functional/regression testing.

In this article we shall discuss several such tools:

  1. ManageEngine QEngine: This tool is for functional testing and load testing of web applications. It enables you to carry out GUI testing in minutes, and below are some of its features:
Ø Portability: With this feature you can record scripts in Windows and play them back in Linux without any need for recreation.
Ø  Scripting capabilities: These include simplified script creation, keyword-driven testing, data-driven testing, an object repository, and Unicode support.
Ø  Playback options: These include playback synchronization, chained scripts, multiple playback options, etc.
Ø  Validation and verification: The tool comes with a rich library of built-in functions for constructing function calls for requirements such as dynamic property handling, database handling, screen handling, and so on.
Ø   AJAX testing
Ø  Reporting capabilities: The tool provides you with clear and powerful reports indicating the status of test execution.

  2. SeleniumHQ: This tool comprises a number of smaller projects that combine to create a testing environment to suit your needs (a minimal WebDriver-style example follows this list):
Ø  Selenium IDE: This one is an add-on for Firefox and can be used for recording and replaying tests in it.
Ø  Selenium Remote Control: Web browsers can be controlled with this client/server system, located on either a local host or a remote one.
Ø  Selenium Grid: This is the same as the previous one but can handle multiple servers at a time.
Ø  Selenium Core: A testing system based on JavaScript.
Ø Further specific Selenium projects have been developed for Ruby, Eclipse, and Rails.
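
As a minimal illustration of browser automation in this family of tools, here is a hedged Selenium WebDriver sketch in Java; the URL and element locators are hypothetical, and a locally installed Firefox driver is assumed:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SeleniumSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://example.com/login"); // open the page under test
            driver.findElement(By.id("username")).sendKeys("tester");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.cssSelector("button[type=submit]")).click();
            // A real test would assert on the resulting page state.
            System.out.println("Title after login: " + driver.getTitle());
        } finally {
            driver.quit(); // always close the browser session
        }
    }
}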

  3. Rapise: It was developed by Inflectra Inc. This tool has an extensible architecture and cross-browser testing capabilities. It supports various versions of Mozilla Firefox, Google Chrome, MS Internet Explorer, and so on. The tool comes with built-in support for AJAX, YUI, GWT, AIR, Silverlight, Flash/Flex, etc. One can use this tool for carrying out keyword-driven as well as data-driven testing through Excel spreadsheets. The tool identifies objects based upon CSS and XPath. For bitmaps, the tool comes with built-in OCR (optical character recognition). It uses JavaScript for scripting purposes and therefore also has a JavaScript editor.
  4. FuncUnit: This is an open source web application testing framework. The API is based upon jQuery. Almost all modern browsers are supported on Linux and Mac. Selenium can also be used for executing the tests. It can simulate various user input events: clicking, typing, dragging the mouse, and so on.
  5. QUnit: Any generic JavaScript code can be tested by this tool. It is somewhat similar to JUnit but operates upon JavaScript features.
  6. EnvJS: This is a simulated browser environment and an open source tool whose code has been written in JavaScript.
  7. QF-Test: It has been developed by Quality First Software as a tool for cross-platform testing and cross-browser automation of web applications. It can be used to test web applications based upon HTML, AJAX, GWT, ExtJS, RichFaces, Qooxdoo, Java, and so on. The tool has some small-scale capabilities for test management, an intuitive user interface, extensive documentation, a capture/playback mechanism, component recognition, and so on. It can be used for handling both custom and complex GUI objects. It has customizable reporting and an integrated test debugger system.
  8. Cloud testing service: This enables cloud capabilities to be utilized in web testing. It has been developed by Cloud Testing Limited. Here, web functionality can be recorded via Selenium IDE and a web browser. The scripts can be uploaded to the cloud testing website.



