
Monday, October 7, 2013

What is Wifi technology? How does it work?

- Wi-Fi has emerged as a very popular technology. 
- It enables electronic devices to exchange information with each other and to share an internet connection without using any cables or wires. 
- It is a wireless technology that works with the help of radio waves. 
- The Wi-Fi Alliance defines Wi-Fi as any WLAN (wireless local area network) product based on the IEEE 802.11 standards. 
- Because most WLANs are built on these standards, the term Wi-Fi has become synonymous with WLAN. 
- Only products that have passed the Wi-Fi Alliance's interoperability certification may carry the Wi-Fi Certified trademark. 
- A wide range of devices now use Wi-Fi: PCs, smartphones, video game consoles, digital cameras, digital audio players, tablet computers and so on. 
- All these devices connect to the network and access the internet by means of a wireless network access point. 
- Such an access point is more commonly known as a 'hotspot'. 
- Indoors, the range of an access point is up to about 20 m; outdoors the range is much greater. 
- Coverage can span anything from a single room to an area of many square miles, achieved by using a number of overlapping access points. 
- However, Wi-Fi is less secure than wired connections such as Ethernet, because an intruder does not need a physical connection. 
- Web pages that use SSL are protected, but intruders can easily read non-encrypted traffic. 
- Because of this, various encryption technologies have been adopted for Wi-Fi. 
- The early WEP encryption was weak and easy to break. 
- Higher quality protocols, WPA and then WPA2, came later. 
- WPS (Wi-Fi Protected Setup), an optional feature added in 2007, had a very serious flaw: it allowed an attacker to recover the router's password. 
- The Wi-Fi Alliance has since updated its certification and test plan to ensure that all newly certified devices resist this attack. 
- To connect to a Wi-Fi LAN, a wireless network interface controller has to be incorporated into the computer system. 
- The combination of the interface controller and the computer is often called a station. 
- All stations share the same radio frequency communication channel, and every station receives any transmission made on this channel. 
- The sender is not told whether the data actually reached the recipient, which is why this is termed a 'best-effort delivery mechanism' (see the sketch after this list). 
- Data packets, commonly known as 'Ethernet frames', are transmitted on a carrier wave. 
- Each station regularly tunes in to the radio frequency channel to pick up whatever transmissions are available. 
- A Wi-Fi enabled device can connect to a network if it lies within range of the wireless network, provided the network has been configured to permit such a connection. 
- Providing coverage over a large area requires multiple hotspots, as with the wireless mesh networks in London. 
- Through Wi-Fi, service can be provided in independent businesses, private homes, public spaces, high street chains and so on. 
- Hotspots are set up either commercially or free of charge; free hotspots are commonly provided at hotels, restaurants and airports. 
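A minimal Python sketch of the shared-channel, best-effort behaviour described above (the classes and names are hypothetical, not a real 802.11 implementation): every station hears every frame, and the sender gets no confirmation of delivery.

    import random

    class Channel:
        """A shared radio frequency channel: every attached station
        hears every transmission, and frames may simply be lost."""
        def __init__(self, loss_rate=0.1):
            self.stations = []
            self.loss_rate = loss_rate

        def transmit(self, frame):
            # Best effort: no return value, so the sender is never
            # told whether the frame actually arrived.
            for station in self.stations:
                if random.random() > self.loss_rate:
                    station.receive(frame)

    class Station:
        """A computer plus its wireless network interface controller."""
        def __init__(self, name, channel):
            self.name = name
            channel.stations.append(self)

        def receive(self, frame):
            print(f"{self.name} heard: {frame}")

    channel = Channel()
    stations = [Station(name, channel) for name in "ABC"]
    channel.transmit("frame 1")   # every station in range may hear it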


Monday, October 22, 2012

What is the built-in recovery system in Silk Test?


With the built-in recovery system of the Silk Test automation tool, it is possible to restore the application under test to the stable state it was in before it crashed, failed or hung. The stable state to which the recovery system restores the application under test (AUT) is called the base state of the application. 

Silk Test's test automation counterpart, WinRunner, does not support this feature. The recovery system comes into action whenever the application under test fails ungracefully. 

For client-server based applications, there are four major tasks carried out by the built-in recovery system of Silk Test. These four tasks are listed below:
  1. Ensuring the continuous running of the web browser.
  2. Restoring the application back to its default size if it has been minimized by the user.
  3. Setting the working status of the application as active.
  4. Closing the other dialog boxes or child windows that have popped up.
For browser-based application software, on the other hand, the built-in recovery system of Silk Test carries out the following tasks:
  1. Waiting for the browser to restore itself to the active mode if it has been inactive for a long time.
  2. Ensuring that the following elements are displayed:
a)   Status bar
b)   Text field
c)   Location
d)   Browser tool bars and so on.

- The configuration of this built-in recovery system is stored in a file named defaults.inc, which can be found in the directory where Silk Test is installed. 
- Most of the actions carried out on the application under test happen while the application is in the default base state. 
- In this state, all actions are based upon the default properties. 
- So whenever a test case or script is executed, the recovery system is invoked automatically. 
- However, the flow of control is different when the tests are run from a main function or a test plan (sketched after this list). 
- When a test case starts executing via the Silk Organizer, the test case itself gets control first. 
- But before any test case executes, a function named DefaultTestCaseEnter is called. 
- Its purpose is to call SetAppState, which in turn invokes the default function DefaultBaseState. 
- Following this, execution of the test case begins. 
- In either of the following cases, control is then passed to the DefaultTestCaseExit function: 
  1. the test case finishes its execution, or 
  2. the test case encounters an error during execution. 
- DefaultTestCaseExit logs any exceptions raised during test case execution and then calls SetBaseState, which in turn calls DefaultBaseState. 
- Whenever the tests are run via a main function instead, the two recovery functions are invoked in the same way. 
- The difference is that before the scripts start running, the function called is DefaultScriptEnter rather than DefaultTestCaseEnter. 
- By default this function is NULL. 
- When the last test case has finished executing, the DefaultScriptExit function is called; its purpose is to log the errors or faults that occurred outside the test cases. 
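A Python sketch of the control flow just described; the function names mirror Silk Test's 4Test hooks, while the bodies are hypothetical stand-ins.

    def default_base_state():
        """Restore the AUT to its base state: restore the window size,
        close stray dialogs, make the application active, and so on."""
        print("AUT restored to base state")

    def set_app_state():
        # Stands in for both SetAppState and SetBaseState here.
        default_base_state()

    def default_test_case_enter():
        # Called before every test case begins.
        set_app_state()

    def default_test_case_exit(error):
        # Called when the test case finishes or raises an error:
        # log the exception, then return the AUT to the base state.
        if error is not None:
            print(f"logged exception: {error}")
        set_app_state()

    def run_test_case(test_case):
        default_test_case_enter()
        error = None
        try:
            test_case()
        except Exception as exc:
            error = exc
        finally:
            default_test_case_exit(error)

    run_test_case(lambda: print("test steps run against the AUT"))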


Tuesday, September 18, 2012

How can you handle exceptions in QTP?


Exception handling is one of the major features of any test automation and software testing suite, and HP's QuickTest Professional (QTP) is no exception. 

How are exceptions handled in QTP?

- Exception handling in QuickTest Professional is managed by means of recovery scenarios. 
- The basic goal of exception handling in QuickTest Professional is to keep the tests running even if some unexpected failure is encountered. 
- Every application under test has some memory space associated with it. 
- QuickTest Professional hooks into this memory space, which can give rise to exceptions that cause QTP to falter, terminate and become unrecoverable. 
- The recovery scenarios used by QuickTest Professional come built into the software package. 
- But it is not always wise to rely on the built-in scenarios; it is better to handle exceptions and errors yourself. 
- A Recovery Scenario Manager is also available, with a built-in wizard through which you can define your very own recovery scenarios. 
- This wizard can be accessed by going to the Tools menu and selecting the option "Recovery Scenario Manager". 
- A recovery scenario works in three steps, as stated below (see the sketch after this list): 
  1. Triggered events, 
  2. Recovery steps, and 
  3. Post-recovery test run. 
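A hypothetical Python sketch of that three-step split; QTP itself configures these pieces through the Recovery Scenario Manager rather than through code like this.

    def run_with_recovery(step, trigger, recovery_steps, post_recovery):
        """Run a test step; if the trigger exception fires, apply the
        recovery steps and then the post-recovery action."""
        try:
            step()
        except trigger as exc:                 # 1. triggered event
            for recover in recovery_steps:     # 2. recovery steps
                recover(exc)
            post_recovery()                    # 3. post-recovery test run

    def flaky_step():
        raise RuntimeError("object not visible")

    run_with_recovery(
        flaky_step,
        trigger=RuntimeError,
        recovery_steps=[lambda exc: print(f"closing pop-up after: {exc}")],
        post_recovery=lambda: print("restarting the current test"),
    )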
Some may think that recovery scenarios are the only option for handling exceptions in QuickTest Professional, but that is not so.  

Another option involves the use of descriptive programming.
- This approach is better than the former one, since with it your tests gain more visibility and robustness. 
- The Recovery Scenario Manager can be used for many scenarios, but some real-time scenarios cannot be handled with it. 
- In such cases descriptive programming is the alternative. 
- A third option is to make use of the exception handling capabilities of VBScript, such as:
  1. The Err object, 
  2. On Error Resume Next, and 
  3. On Error GoTo 0 statements, and so on. 
- The last two can be used at the script level. 
- The built-in recovery scenarios of QuickTest Professional support only four kinds of exceptions:
  1. Object state exceptions
  2. Pop-up exceptions
  3. Application crash exceptions and
  4. Test run error exceptions
- A simple example: while you play back a recorded script, the AUT screen is minimized and a run-time error "object not visible" is generated. 
- A test run handler can be used for this. 
- Four trigger events have been defined for which the recovery manager is meant to be used:
  1. When a pop-up window appears while an application is open during the test run,
  2. When the value or the state of a property of an object changes,
  3. When a step in the test run becomes unsuccessful or fails outright, and
  4. When the open application itself fails while the test is in progress.
- All four of these triggers are simply exceptions. 
QuickTest Professional can be instructed regarding how to recover from an unexpected event or failure that occurs during the test run in the application development environment. 
- Separate files can be created for different scenarios as per the requirements. 
- If you search the web, you can find advanced QTP scripts that can attach as well as enable a recovery scenario file while the test run is in progress. 


Wednesday, December 28, 2011

What are different characteristics of resilience testing?

What does resilience mean? It is important to know the meaning of resilience first, because many people confuse recovery, reliability and resilience and think they are all the same. They are not.

- Resilience means the ability to recover from a change.
- It is slightly different from recovery and reliability.
- Every software application or system has to have some degree of resilience in order to be more secure, recoverable and reliable.
- Resilience is a non-functional requirement of a software system or application.
- Resilience testing therefore falls under the category of non-functional testing.

Many non-functional tests are referred to interchangeably, because the scopes of many non-functional aspects or requirements overlap.
One thing to note is that software performance is a broad term that includes many specific requirements such as scalability, reliability, compatibility, security and resilience.

Non functional testing contains the following testing techniques:
- Compliance testing
- Baseline testing
- Documentation testing
- Compatibility testing
- Load testing
- Localization testing
- Endurance testing
- Internationalization testing
- Recovery testing
- Performance testing
- Security testing
- Volume testing
- Usability testing
- Stress testing
- Scalability testing
- Resilience testing

Software teams with disaster recovery plans or techniques are said to be actively and effectively engaged in reducing the risk of system crashes, failures and data loss. The irony is that these disaster recovery plans breed complacency.

This happens because many software developers and testers gain a false sense of security from the mere existence of their disaster recovery plans. To ensure the safety of the software system or application, developers need to test their data recovery strategies. Some developers and testers feel this does not apply to all programs, because resilience testing was conducted when the software systems or applications were first put in place.

But one should always keep in mind that the testing environment, the testing strategies, and the range of cost-effective solutions and tools available are always changing, and it is necessary to keep pace with all these changes.

- Resilience testing strategies need to be tested and reviewed frequently in order to account for these changes.
- Some software developers and testers, worried about the time and cost of the test cases that would give a better grade of testing, never put their good intentions into practice, and so the software system or application remains short on resilience.
- This does not necessarily mean that each and every available test case should be implemented.

- There should be a test plan for carrying out resilience testing.
- A structured methodology ensures that the amount of time consumed is minimal and the effectiveness of the testing is maximal.
- Resilience testing is somewhat similar to stability testing, failover testing and recovery testing.
- Resilience testing is aimed at determining the behavior of the software system or application in the face of unreliable events, catastrophic problems, system failures, crashes and data loss (a minimal sketch of such a test follows this list).
- Resiliency is one of the core attributes of a good, reliable software system or application.
- Any software or hardware malfunction or failure is likely to have a considerable impact on the software system or application.
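A minimal resilience-test sketch in Python, under the assumption of a hypothetical system that should survive the loss of its primary data source by falling back to a replica (all file names are invented for illustration):

    import json
    import os
    import tempfile

    def read_config(primary, replica):
        """Prefer the primary source; fall back to the replica on failure."""
        for path in (primary, replica):
            try:
                with open(path) as f:
                    return json.load(f)
            except (FileNotFoundError, json.JSONDecodeError):
                continue
        raise RuntimeError("no usable data source")

    tmp = tempfile.mkdtemp()
    primary = os.path.join(tmp, "primary.json")
    replica = os.path.join(tmp, "replica.json")
    with open(replica, "w") as f:
        json.dump({"mode": "fallback"}, f)

    # The test injects the failure (the primary file never exists) and
    # asserts that the system keeps functioning by using the replica.
    assert read_config(primary, replica) == {"mode": "fallback"}
    print("system tolerated the loss of its primary data source")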

A software system needs to be resilient against the following:
- Changes in requirements and specifications of the system.
- Hardware and software faults.
- Changes in data sources.

Resilience needs to be incorporated in the following stages of software development:
- Software design
- Hardware specification
- Configuration
- Documentation
- Testing


Wednesday, December 14, 2011

What are different characteristics of recovery testing?

The name of recovery testing itself makes clear what it is. We all know what recovery means: to recover is to return to the normal state after some failure or illness. This same qualitative aspect is present in today's software systems and applications.

- The recovery of a software system or application is defined as its ability to recover from hardware failures, crashes and similar problems that are quite frequent with computers.
- Before the release of any software, it needs to be tested for its recovery factor. This is done by recovery testing.
- Recovery testing can be defined as the testing of a software system or application to determine its ability to recover from fatal system crashes and hardware problems.

One should keep in mind that recovery testing is not to be confused with reliability testing, since reliability testing aims at discovering the points at which the software system or application tends to fail.

- In a typical recovery test, the system is forced to fail, crash or hang in order to check how the recovery machinery of the software system or application responds and how strong it is.
- The software system or application is forced to fail in a variety of ways.
- Every attempt is made to discover the failure points of the software system or application.

Objectives of Recovery Testing
- Apart from whether recovery happens at all, recovery testing also aims at determining the speed of recovery of the software system.
- It aims to check how fast the software system or application is able to recover from a failure or crash.
- It also aims to check how well the system recovers, i.e., the quality of the recovered software system or application.
- There is a type and an extent to which the software is expected to recover; these are spelled out in the requirements and specifications section of the documentation.
- In short, recovery testing is all about testing the recovering ability of the software system or application: how well it recovers from catastrophic problems, hardware failures, system crashes and so on.

The following examples will further clarify the concept of recovery testing:

1. Keep the browser running with multiple sessions assigned to it, then just restart your system. After the system has booted, check whether the browser is able to recover all of the sessions that were running before the restart. If the browser can recover them, it is said to have good recovering ability.

2. Suddenly restart your computer while an application is running. After booting, check whether the data the application was working on is still intact and valid. If the data is still valid, intact and safe, the application has a great deal of recovery factor.

3. Set an application such as a file downloader to data receiving or downloading mode, then just unplug the connecting cable. After a few minutes, plug the cable back in, let the application resume its operation, and check whether it is still able to receive the data from the point where it left off. If it is not able to resume receiving, it is said to have a bad recovery factor (a sketch of this check follows).
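A sketch of the resume check in example 3, using HTTP range requests (the URL and file name are hypothetical; the third-party requests library is assumed):

    import os
    import requests

    URL = "https://example.com/big-file.bin"   # hypothetical download
    PART = "big-file.part"

    def resume_download(url, path):
        """Continue a download from however many bytes are already on disk."""
        offset = os.path.getsize(path) if os.path.exists(path) else 0
        headers = {"Range": f"bytes={offset}-"} if offset else {}
        with requests.get(url, headers=headers, stream=True, timeout=30) as r:
            r.raise_for_status()
            # 206 means the server honoured the range, so we can append;
            # otherwise the transfer restarts from the beginning.
            mode = "ab" if r.status_code == 206 else "wb"
            with open(path, mode) as f:
                for chunk in r.iter_content(chunk_size=65536):
                    f.write(chunk)

    # After the cable is plugged back in, a downloader with a good
    # recovery factor would pick up where it left off, not start over.
    resume_download(URL, PART)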

Recovery testing tests the ability of application software to restart the operations that were running just before the application lost its integrity. The main objective of recovery testing is to ensure that applications continue to run even after a failure of the system.

Recovery testing ensures the following:
- Data is stored in a preserved location.
- Previous recovery records are maintained.
- A recovery tool has been developed and is available at all times.


Wednesday, September 1, 2010

What is Recovery Testing and what are its features?

Recovery testing tells how well an application is able to recover from a crash or hardware failure. Recovery testing should not be confused with reliability testing, which tries to discover the specific point at which failure occurs.
- Recovery is the ability to restart operations after the integrity of the application is lost.
- The time taken to recover depends upon the number of restart points, the volume of the application, the training and skill of the people conducting the recovery activities, and the tools available for recovery.
- Recovery testing ensures that the operations can be continued after a disaster.
- Recovery testing verifies the recovery process and its effectiveness.
- In recovery testing, adequate backup data is preserved and kept in a secure location.
- Recovery procedures are documented.
- Recovery personnel have been assigned and trained.
- Recovery tools have been developed and are available.

To use recovery testing, procedures, methods, tools and techniques are assessed to evaluate their adequacy. Recovery testing can be done by introducing a failure into the system and checking whether the system is able to recover (a minimal sketch follows below). A simulated disaster is usually performed on one aspect of the application system at a time. When there are many failures, recovery testing should be carried out for one segment and then for the next.

Recovery testing is used when continuity of the system is needed in order for the system to perform or function properly. The user estimates the losses and the time span needed to carry out recovery testing. Recovery testing is done by system analysts, testing professionals and management personnel.
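A minimal sketch of "introduce a failure and check that the system recovers": a toy counter service that checkpoints its state to disk after every update (names and file layout invented for illustration):

    import json
    import os
    import tempfile

    CKPT = os.path.join(tempfile.mkdtemp(), "counter.ckpt")

    class Counter:
        def __init__(self):
            # On startup, recover from the last checkpoint if one exists.
            self.value = 0
            if os.path.exists(CKPT):
                with open(CKPT) as f:
                    self.value = json.load(f)["value"]

        def increment(self):
            self.value += 1
            with open(CKPT, "w") as f:   # checkpoint after each update
                json.dump({"value": self.value}, f)

    c = Counter()
    for _ in range(3):
        c.increment()
    del c                  # simulated disaster: in-memory state is lost

    recovered = Counter()  # restart: state is restored from the checkpoint
    assert recovered.value == 3
    print("recovered value:", recovered.value)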


Friday, August 6, 2010

What are different types of black box testing?

The black box testing strategy is based on selecting appropriate data according to the functionality and testing it against the functional specifications, in order to check for normal and abnormal behavior of the system. These testing types are divided into two groups:

Testing in which the user plays the role of the tester.


- Functional Testing : The testing of the software is done against the functional requirements.
- Load testing : It is the process of subjecting a computer, peripheral, server, network or application to a work level approaching the limits of its specifications.
- Stress Testing : The process of determining the ability of a computer, network, program or device to maintain a certain level of effectiveness under unfavorable conditions.
- Ad-hoc testing : Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and randomness guides the test execution activity.
- Smoke Testing : It is done in order to check whether the application is ready for further major testing and is working properly, without failing below the least expected level.
- Recovery Testing : Testing aimed at verifying the system's ability to recover from varying degrees of failure.
- Volume Testing : Huge amount of data is processed through the application in order to check the extreme limitations of the system.
- Usability Testing : This testing is done when the user interface of the application is an important consideration and needs to be specific to the particular type of user.

Testing in which the user is not required.


- Alpha Testing : Testing of a software product or system conducted at the developer's site by the end user.
- Beta Testing : The pre-release testing of hardware or software products with selected typical customers, to discover inadequate features or possible product enhancements before the product is released to the general public; also, the testing of a re-release of a software product conducted by customers.
- User Acceptance Testing : The end users who will be using the applications test the application before ‘accepting’ the application. This type of testing gives the end users the confidence that the application being delivered to them meets their requirements.


Friday, January 29, 2010

Bad Block Recovery - Disk Management

Because disks have moving parts and small tolerances, they are prone to failure. Most disks even come from the factory with bad blocks. Depending on the disk and controller in use, these blocks are handled in a variety of ways.

- On simple disks, such as disks with IDE controllers, bad blocks are handled manually. The MS-DOS format command does a logical format and, as part of it, scans the disk to find bad blocks. If format finds a bad block, it writes a special value into the corresponding FAT entry to tell the allocation routines not to use that block.

- More sophisticated disks, such as the SCSI disks used in high-end PCs and most workstations, are smarter about bad-block recovery. The controller maintains a list of bad blocks on the disk; this list is initialized during the low-level format at the factory and is updated over the life of the disk. Low-level formatting also sets aside spare sectors not visible to the operating system. The controller can be told to replace each bad sector logically with one of the spare sectors. This scheme is known as sector sparing.

A typical bad-sector transaction proceeds as follows (sketched after this list):
- The operating system tries to read logical block 87.
- The controller calculates the ECC and finds that the sector is bad. It reports this finding to the operating system.
- The next time the system is rebooted, a special command is run to tell the SCSI controller to replace the bad sector with a spare.
- After this, whenever the system requests logical block 87, the request is translated into the replacement sector's address by the controller.
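A toy Python sketch of the controller-side remap table behind sector sparing (the class and sector numbers are invented for illustration):

    class Controller:
        def __init__(self, spares):
            self.spares = list(spares)  # spare sectors set aside at format time
            self.remap = {}             # bad logical block -> spare sector

        def spare_out(self, block):
            """Replace a bad block with the next free spare sector."""
            self.remap[block] = self.spares.pop(0)

        def translate(self, block):
            # Every request passes through the remap table first.
            return self.remap.get(block, block)

    ctrl = Controller(spares=[1000, 1001, 1002])
    ctrl.spare_out(87)          # the ECC check found sector 87 bad
    print(ctrl.translate(87))   # -> 1000: requests now go to the spare sector
    print(ctrl.translate(88))   # -> 88: healthy blocks are unaffected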

As an alternative to sector sparing, some controllers can be instructed to replace a bad block by sector slipping.
Replacing a bad block is generally not a totally automatic process, because the data in the bad block is usually lost. Thus, whatever file was using that block must be repaired, and that requires manual intervention.


Sunday, September 6, 2009

What is the ARIES Recovery Algorithm?

'Algorithms for Recovery and Isolation Exploiting Semantics', or ARIES is a recovery algorithm designed to work with a no-force, steal database approach; it is used by IBM DB2, Microsoft SQL Server and many other database systems.

Three main principles lie behind ARIES:
- Write-ahead logging: Any change to an object is first recorded in the log, and the log must be written to stable storage before changes to the object are written to disk.
- Repeating history during Redo: On restart after a crash, ARIES retraces the actions of the database before the crash and brings the system back to the exact state it was in at the moment of the crash. It then undoes the transactions that were still active at crash time.
- Logging changes during Undo: Changes made to the database while undoing transactions are themselves logged, to ensure such an action is not repeated in the event of repeated restarts.

The ARIES recovery procedure consists of three main steps, sketched after this list:
- Analysis : It identifies the dirty (updated) pages in the buffer and the set of transactions active at the time of the crash. The appropriate point in the log where the REDO pass should start is also determined.
- REDO phase : It actually reapplies updates from the log to the database. Generally, REDO is applied only to committed transactions; in ARIES, however, this is not the case. Information in the ARIES log provides the start point for REDO, from which redo operations are applied until the end of the log is reached. Thus only the necessary REDO operations are applied during recovery.
- UNDO phase : The log is scanned backwards and the operations of transactions that were active at the time of the crash are undone in reverse order. The information ARIES needs to accomplish its recovery procedure includes the log, the transaction table, and the dirty page table. In addition, checkpointing is used.
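A highly simplified Python sketch of the three passes over a toy log (real ARIES also tracks LSNs, CLRs and checkpoints, all omitted here):

    log = [
        ("T1", "update", "pageA", {"undo": 0, "redo": 1}),
        ("T2", "update", "pageB", {"undo": 5, "redo": 6}),
        ("T1", "commit", None, None),
        # crash happens here: T2 is still active
    ]
    pages = {"pageA": 0, "pageB": 5}   # state on disk after the crash

    # Analysis: find the transactions with no commit record (active at crash).
    updated = {t for t, kind, _, _ in log if kind == "update"}
    committed = {t for t, kind, _, _ in log if kind == "commit"}
    active = updated - committed

    # Redo: repeat history, reapplying every update, committed or not.
    for t, kind, page, vals in log:
        if kind == "update":
            pages[page] = vals["redo"]

    # Undo: scan backwards, rolling back updates of still-active transactions.
    for t, kind, page, vals in reversed(log):
        if kind == "update" and t in active:
            pages[page] = vals["undo"]

    print(pages)   # {'pageA': 1, 'pageB': 5}: T1 survives, T2 is rolled back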

DATA STRUCTURES USED IN THE ARIES RECOVERY ALGORITHM:
Log records contain the following fields (a dataclass version follows this list):
- LSN
- Type (CLR, update, special)
- TransID
- PrevLSN (LSN of the previous record of this txn)
- PageID (for updates/CLRs)
- UndoNxtLSN (for CLRs)
* indicates which log record is being compensated
* on later undos, log records up to UndoNxtLSN can be skipped
- Data (redo/undo data); can be physical or logical.
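The same log record fields expressed as a Python dataclass, as a sketch rather than any engine's actual record layout:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LogRecord:
        lsn: int                            # log sequence number
        type: str                           # "update", "CLR" or "special"
        trans_id: int
        prev_lsn: Optional[int]             # previous record of this txn
        page_id: Optional[str] = None       # for updates and CLRs
        undo_nxt_lsn: Optional[int] = None  # CLRs: skip up to here on later undos
        data: Optional[bytes] = None        # redo/undo data, physical or logical

    rec = LogRecord(lsn=42, type="update", trans_id=7, prev_lsn=40,
                    page_id="P3", data=b"new value")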

Transaction Table :
- Stores for each transaction:
* TransID, State.
* LastLSN (LSN of last record written by txn).
* UndoNxtLSN (next record to be processed in rollback).
- During recovery:
* Initialized during analysis pass from most recent checkpoint.
* Modified during analysis as log records are encountered, and during undo.

Dirty Pages Table
- During normal processing :
* When a page is fixed with the intention to update:
  - Let L = the current end-of-log LSN (the LSN of the next log record to be generated).
  - If the page is not dirty, store L as the RecLSN of the page in the dirty pages table.
* When a page is flushed to disk, delete it from the dirty pages table.
* The dirty page table is written out during checkpoints.
* (Thus RecLSN is the LSN of the earliest log record whose effect is not reflected in the page on disk.)
- During recovery :
* Load the dirty page table from the checkpoint.
* Update it during the analysis pass as update log records are encountered.

Checkpoints :
- A begin_chkpt record is written first.
- The transaction table, the dirty_pages table and some other file management information are then written out.
- An end_chkpt record is written last.
* For simplicity, all of the above are treated as part of the end_chkpt record.
- The LSN of begin_chkpt is then written to the master record, in a well-known place on stable storage.
- The checkpoint is incomplete if the system crashes before the end_chkpt record is written.
- Pages need not be flushed during the checkpoint:
* they are flushed on a continuous basis instead.
- Transactions may write log records while a checkpoint is in progress.
- The dirty_pages table can be copied fuzzily (hold the latch, copy some entries out, release the latch, repeat).


Overview of Shadow Paging

A computer system, like any other mechanical or electrical system, is subject to failure. There are a variety of causes, including disk crashes, power failures, software errors, a fire in the machine room, or even sabotage. Whatever the cause, information may be lost. The database must take actions in advance to ensure that the atomicity and durability properties of transactions are preserved. An integral part of a database system is a recovery scheme that is responsible for restoring the database to the consistent state that existed prior to the failure.

Shadow paging is a technique used to achieve atomic and durable transactions; it provides the ability to manipulate pages in a database. During a transaction, the pages affected by the transaction are copied from the database file into a workspace, such as volatile memory, and modified in that workspace. When the transaction is committed, all of the pages that it modified are written from the workspace to unused pages in the database file. During execution of the transaction, the state of the database exposed to the user is the one in which the database existed prior to the transaction, since the database file still contains the original versions of the modified pages as they existed before being copied into the workspace. If a user accesses the database before the transaction is complete, or upon recovery from a failure, it will appear as though the transaction never occurred.

- Shadow paging is an alternative to log-based recovery; this scheme is useful if transactions execute serially.
- Basic idea: maintain two page tables during the lifetime of a transaction – the current page table and the shadow page table.
- Store the shadow page table in nonvolatile storage, so that the state of the database prior to transaction execution can be recovered.
* The shadow page table is never modified during execution.
- Initially, both page tables are identical. Only the current page table is used for data item accesses during execution of the transaction.
- Whenever any page is about to be written for the first time (sketched below):
* A copy of the page is made onto an unused page.
* The current page table is then made to point to the copy.
* The update is performed on the copy.
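A toy Python sketch of the two page tables and the copy-on-write step just listed (page contents and numbering are invented for illustration):

    pages = {0: "A", 1: "B", 2: "C"}      # physical pages in the database file
    next_free = 3                         # first unused physical page

    shadow_table = {0: 0, 1: 1, 2: 2}     # never modified during the txn
    current_table = dict(shadow_table)    # used for all accesses

    def write(logical_page, value):
        """The first write to a page copies it to an unused physical page."""
        global next_free
        if current_table[logical_page] == shadow_table[logical_page]:
            pages[next_free] = pages[current_table[logical_page]]  # copy
            current_table[logical_page] = next_free                # repoint
            next_free += 1
        pages[current_table[logical_page]] = value                 # update the copy

    write(1, "B'")
    # Before commit, readers going through shadow_table still see the old value:
    print(pages[shadow_table[1]], pages[current_table[1]])   # B B'

    # Commit: install the current table as the new shadow table atomically.
    shadow_table = dict(current_table)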

