Monday, October 7, 2013
What is Wifi technology? How does it work?
Posted by Sunflower at 10/07/2013 06:55:00 PM | 0 comments
Labels: Access, attacks, Connection, Data, Devices, Encryption, Features, files, Information, Interface, Internet, LAN, Network, Packets, Recovery, Standards, Stations, Technology, Wifi, Wireless
Monday, October 22, 2012
What is the built-in recovery system in Silk Test?
The built-in recovery system restores the application under test to a stable base state by performing tasks such as:
- ensuring the continuous running of the web browser,
- restoring the application back to its default size if it has been minimized by the user,
- setting the working status of the application as active,
- closing the other dialog boxes or child windows that have popped up, and
- waiting for the browser to restore itself to the active mode if it has been inactive for a long time.
The recovery system is invoked in either of two cases:
- when the test case finishes its execution, or
- when the test case encounters an error during execution.
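The flow above can be sketched in Python. This is only an illustrative model of what a built-in recovery system does, not SilkTest code; every class and method name here is hypothetical.

```python
# Hypothetical sketch of a built-in recovery system that restores the
# application under test to a base state after every test case.
# None of these names are real SilkTest APIs.

class RecoverySystem:
    def __init__(self, app):
        self.app = app  # the application under test

    def restore_base_state(self):
        """Return the application under test to a known base state."""
        if not self.app.browser_running():
            self.app.restart_browser()           # keep the browser running
        if self.app.is_minimized():
            self.app.restore_default_size()      # undo user minimization
        self.app.set_active()                    # make the application active
        for window in self.app.extra_windows():  # close stray dialogs/children
            window.close()

def run_test_case(test, recovery):
    """Recovery runs whether the test case finishes normally or errors out."""
    try:
        test()
    finally:
        recovery.restore_base_state()
```

The `try`/`finally` pairing mirrors the two invocation cases: normal completion and an error during execution.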
Posted by Sunflower at 10/22/2012 02:17:00 PM | 0 comments
Labels: Application, AUT, Automated, Automated Software Testing, Browser, Built-In, Data, files, Functions, Location, Recovery, Scripts, SilkTest, System, Tasks, Testers, Tests, Tools, Users, Web browser
Tuesday, September 18, 2012
How can you handle exceptions in QTP?
In QTP (QuickTest Professional), exceptions are handled through recovery scenarios, which consist of three parts:
- the triggered events,
- the recovery steps, and
- the post-recovery test run options.
Exceptions can also be handled programmatically in VBScript using:
- the Err object,
- the On Error Resume Next statement,
- the On Error GoTo 0 statement, and so on.
Recovery scenarios can handle four types of exceptions:
- object state exceptions,
- pop-up exceptions,
- application crash exceptions, and
- test run error exceptions.
These correspond to the following trigger situations:
- when a pop-up window appears while an application is open during the test run,
- when the value or the state of a property of an object changes,
- when a step in the test run becomes unsuccessful or fails, and
- when the open application itself fails while the test is in progress.
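The trigger/recovery/post-recovery structure above can be sketched as a small Python model. This is a language-neutral illustration of how a recovery scenario is organized, not QTP code; in QTP this is configured through the Recovery Scenario Manager, and all names below are hypothetical.

```python
# Toy model of a recovery scenario: a trigger event, a list of recovery
# steps, and a post-recovery test run option. Illustrative only.

TRIGGER_TYPES = ("popup", "object_state", "application_crash", "test_run_error")

class RecoveryScenario:
    def __init__(self, trigger, recovery_steps, post_recovery):
        assert trigger in TRIGGER_TYPES
        self.trigger = trigger
        self.recovery_steps = recovery_steps  # callables run when triggered
        self.post_recovery = post_recovery    # e.g. "retry_step", "next_step", "stop"

    def handle(self, event):
        """Run the recovery steps if the event matches this scenario's trigger."""
        if event != self.trigger:
            return None
        for step in self.recovery_steps:
            step()                  # e.g. close the pop-up, restart the app
        return self.post_recovery   # tells the runner how to continue
```

For example, a scenario triggered by a pop-up might close the window and then retry the failed step:

```python
closed = []
scenario = RecoveryScenario("popup", [lambda: closed.append("popup closed")], "retry_step")
scenario.handle("popup")  # runs the steps, returns "retry_step"
```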
Posted by Sunflower at 9/18/2012 11:30:00 PM | 0 comments
Labels: Application, Automation, Descriptive programming, Errors, Exception Handling, Exceptions, Failure, Memory, Objects, Pop-up, QTP, Quick Test Professional, Recovery, Run, Scenarios, Scripts, Testing tools, Tests, Users
Wednesday, December 28, 2011
What are different characteristics of resilience testing?
What does resilience mean? It is important to know the meaning of resilience first, because many people confuse recovery, reliability, and resilience, thinking they are all the same. But it is not so.
- Resilience means to recover from a change.
- It’s slightly different from recovery and reliability.
- Every software application or system has to have some degree of resilience in it in order to be more secure and recoverable and reliable.
- Resilience is a non functional requirement of a software system or application.
- Resilience testing falls under the category of non functional testing.
The names of many non functional tests are commonly used interchangeably because of the overlap in scope between many non functional aspects or requirements.
One thing to be noted is that software performance is a broad and vast term and includes many specific requirements like scalability, reliability, compatibility, security and resilience.
Non functional testing contains the following testing techniques:
- Compliance testing
- Baseline testing
- Documentation testing
- Compatibility testing
- Load testing
- Localization testing
- Endurance testing
- Internationalization testing
- Recovery testing
- Performance testing
- Security testing
- Volume testing
- Usability testing
- Stress testing
- Scalability testing
- Resilience testing
Software developers with disaster recovery plans or techniques in place are said to be actively and effectively engaged in reducing the risk of software system crashes, failures, or data loss. The irony, however, is that these disaster recovery plans can breed complacency.
This happens because many software developers or testers have a false sense of security based on the mere existence of their disaster recovery plans. To ensure the safety of the software system or application, developers need to test their data recovery strategies. Some developers or testers feel this does not apply to their programs because they conducted resilience testing when the system or application was first put in place.
But one should always keep in mind that the testing environment, the testing strategies, and the range of cost effective solutions and tools available are always changing. It is necessary to keep pace with all these changes.
- The resilience testing strategies need to be tested and reviewed frequently in order to account for these changes.
- Some software developers and testers, fearing the time and cost of test cases that would give a better grade of testing, fail to put their good intentions into practice, and hence the software system or application remains lacking in resiliency.
- This does not necessarily mean that each and every available test case should be implemented for testing the software system or application.
- There should be a test plan for carrying out the resilience testing.
- A structured methodology always ensures that the amount of time consumed is minimum and the effectiveness of the testing is maximum.
- Resilience testing is somewhat similar to stability testing, failover testing, or recovery testing.
- Resilience testing is aimed at determining the behavior of the software system or application in the case of unreliable events, catastrophic problems and system failures, crashes and data losses.
- Resiliency is one of the core attributes of a good and reliable software system or application.
- Any software or hardware malfunctioning or failures are likely to have a considerable impact on the software system or application.
A software system needs to be resilient against the following:
- Changes in requirements and specifications of the system.
- Hardware and software faults.
- Changes in data sources.
Resilience needs to be incorporated in the following stages of software development:
- Software design
- Hardware specification
- Configuration
- Documentation
- Testing
Posted by Sunflower at 12/28/2011 08:31:00 PM | 0 comments
Labels: Application, Category, Change, Characteristics, Faults, Non-functional testing, Performance, Recover, Recovery, Reliability, Requirements, Resilience testing, Secure, Software Systems, Test Plan
Wednesday, December 14, 2011
What are different characteristics of recovery testing?
The name of recovery testing itself makes clear what it is. We all know what recovery means: to recover is to return to the normal state after some failure, illness, etc. This qualitative aspect is also present in today's software systems and applications.
- The recovery of a software system or application is defined as its ability to recover from hardware failures, crashes, and similar problems that are quite frequent with computers.
- Before the release of any software, it needs to be tested for its recovery factor. This is done by recovery testing.
- Recovery testing can be defined as the testing of a software system or application to determine its ability to recover from fatal system crashes and hardware problems.
One should always keep in mind that recovery testing is not to be confused with reliability testing, since reliability testing aims at discovering the points at which the software system or application tends to fail.
- In a typical recovery test, the system is forced to fail, crash, or hang in order to check how the recovery mechanism of the software system or application responds and how robust it is.
- The software system or application is forced to fail in a variety of ways.
- Every attempt is made to discover the failure factors of the software system or application.
Objectives of Recovery Testing
- Apart from the recovery factor, the recovery testing also aims at determining the speed of recovery of the software system.
- It aims to check how fast the software system or application is able to recover from a failure or crash.
- It also aims to check how well the system recovers.
- It checks the quality of the recovered software system or application. There is a type and an extent to which the software is to be recovered.
- The required type and extent are mentioned in the documentation, in the requirements and specifications section.
- Recovery testing is all about testing the recovering ability of the software system or application i.e., how well it recovers from the catastrophic problems, hardware failures and system crashes etc.
The following examples will further clarify the concept of recovery testing:
1. Keep the browser running with multiple sessions assigned to it, then restart your system. After the system has booted up, check whether the browser is able to recover all of the sessions that were running before the restart. If the browser is able to recover them, it is said to have good recovering ability.
2. Suddenly restart your computer while an application is running. After the boot-up, check whether the data the application was working on is still intact and valid. If the data is still valid, intact, and safe, the application has a great deal of recovery factor.
3. Set an application such as a file downloader to data receiving or downloading mode, then unplug the connecting cable. After a few minutes, plug the cable back in, let the application resume its operation, and check whether it is able to continue receiving the data from the point where it left off. If it is not able to resume receiving the data, it is said to have a bad recovery factor.
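The "resume from where it left off" behavior in example 3 can be sketched in a few lines of Python, using a local byte source in place of a real network peer. This is an illustrative model of the recovery behavior being tested, not any particular downloader's implementation.

```python
# Toy model of a resumable download: after an interruption, a recoverable
# application continues from the offset it already has, not from byte 0.

def resume_download(source: bytes, received: bytearray) -> None:
    """Continue receiving from the offset already stored locally."""
    offset = len(received)            # how much survived the disconnection
    received.extend(source[offset:])  # resume mid-stream

data = b"0123456789"
partial = bytearray(data[:4])  # transfer interrupted after 4 bytes
resume_download(data, partial)
assert bytes(partial) == data  # good recovery factor: the file is complete
```

An application with a bad recovery factor would instead discard `partial` and restart the transfer from the beginning.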
Recovery testing tests the ability of application software to restart the operations that were running just before the loss of the integrity of the applications. The main objective of recovery testing is to ensure that the applications continue to run even after the failure of the system.
Recovery testing ensures the following:
- Data is stored in a preserved location.
- Previous recovery records are maintained.
- Development of a recovery tool which is available all the time.
Posted by Sunflower at 12/14/2011 06:59:00 PM | 0 comments
Labels: Ability, Application, Crash, Data, Errors, Factors, Failure, Hardware, Objectives, Operations, Problems, Quality, Recover, Recovery, Recovery Testing, Software Systems, Techniques
Wednesday, September 1, 2010
What is Recovery Testing and what are its features?
Recovery testing tells how well an application is able to recover from a crash or hardware failure. Recovery testing should not be confused with reliability testing, which tries to discover the specific point at which failure occurs.
- Recovery is the ability to restart operations after the integrity of the application is lost.
- The time taken to recover depends upon the number of restart points, the volume of the application, the training and skill of the people conducting recovery activities, and the tools available for recovery.
- Recovery testing ensures that the operations can be continued after a disaster.
- Recovery testing verifies recovery process and effectiveness of recovery process.
- In recovery testing, adequate back up data is preserved and kept in secure location.
- Recovery procedures are documented.
- Recovery personnel have been assigned and trained.
- Recovery tools have been developed and are available.
To use recovery testing, procedures, methods, tools, and techniques are assessed to evaluate their adequacy. Recovery testing can be done by introducing a failure into the system and checking whether the system is able to recover. A simulated disaster is usually performed on one aspect of the application system; when there are many failures, recovery testing should be carried out for one segment and then for the other.
Recovery testing is used when the continuity of the system is needed in order for the system to perform or function properly. The user estimates the losses and the time span available to carry out recovery testing. Recovery testing is done by system analysts, testing professionals, and management personnel.
Posted by Sunflower at 9/01/2010 07:23:00 PM | 0 comments
Labels: Applications, Black box testing, Crash, Features, Objectives, Recover, Recovery, Recovery Testing, System, Usage
Friday, August 6, 2010
What are different types of black box testing ?
The basis of the black box testing strategy lies in the selection of appropriate data as per functionality and testing it against the functional specifications in order to check for normal and abnormal behavior of the system. These testing types are divided into two groups:
Testing in which the user plays the role of a tester:
- Functional Testing : The testing of the software is done against the functional requirements.
- Load testing : It is the process of subjecting a computer, peripheral, server, network or application to a work level approaching the limits of its specifications.
- Stress Testing : The process of determining the ability of a computer, network, program or device to maintain a certain level of effectiveness under unfavorable conditions.
- Ad-hoc testing : Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and randomness guides the test execution activity.
- Smoke Testing : It is done in order to check if the application is ready for further major testing and is working properly without failing up to least expected level.
- Recovery Testing : Testing aimed at verifying the system's ability to recover from varying degrees of failure.
- Volume Testing : Huge amount of data is processed through the application in order to check the extreme limitations of the system.
- Usability Testing : This testing is done when the user interface of the application is an important consideration and needs to be specific to the particular type of user.
Testing in which the user is not required:
- Alpha Testing : Testing of a software product or system conducted at the developer's site by the end user.
- Beta Testing : The pre-testing of hardware or software products with selected typical customers to discover inadequate features or possible product enhancements before release to the general public; also, the testing of a re-release of a software product conducted by customers.
- User Acceptance Testing : The end users who will be using the applications test the application before ‘accepting’ the application. This type of testing gives the end users the confidence that the application being delivered to them meets their requirements.
Posted by Sunflower at 8/06/2010 12:43:00 PM | 2 comments
Labels: Ad-hoc, Alpha, Beta, Black box testing, Functional, Load, Recovery, Smoke, Strategy, Stress, Tester, Testing, Usability, User Acceptance, Volume
Friday, January 29, 2010
Bad Block Recovery - Disk Management
Because disks have moving parts and small tolerances, they are prone to failure. Most disks even come from the factory with bad blocks. Depending on the disk and controller in use, these blocks are handled in a variety of ways.
- On simple disks, such as those with IDE controllers, bad blocks are handled manually. The MS-DOS format command performs a logical format and scans the disk to find bad blocks. If format finds a bad block, it writes a special value into the corresponding FAT entry to tell the allocation routines not to use that block.
- More sophisticated disks, such as the SCSI disks used in high-end PCs and most workstations, are smarter about bad block recovery. The controller maintains a list of bad blocks on the disk; this list is initialized during low-level formatting at the factory and is updated over the life of the disk. Low-level formatting also sets aside spare sectors not visible to the operating system. The controller can be told to replace each bad sector logically with one of the spare sectors. This scheme is called sector sparing.
A typical bad sector transaction :
- The operating system tries to read logical block 87.
- The controller calculates the ECC and finds that the sector is bad. It reports this finding to the operating system.
- The next time the system is rebooted, a special command is run to tell the SCSI controller to replace the bad sector with a spare.
- After this, whenever the system requests logical block 87, the request is translated into the replacement sector's address by the controller.
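The remapping in the transaction above can be modeled with a simple lookup table. This is a toy Python sketch of the idea behind sector sparing, not real controller firmware; block counts and names are illustrative.

```python
# Toy model of sector sparing: the controller keeps a remap table from bad
# logical blocks to spare sectors, invisible to the operating system.

class Controller:
    def __init__(self, total_blocks, spare_blocks):
        self.remap = {}  # bad logical block -> spare sector
        # spare sectors live past the end of the OS-visible address space
        self.spares = list(range(total_blocks, total_blocks + spare_blocks))

    def mark_bad(self, block):
        """Logically replace a bad sector with one of the spares."""
        if block not in self.remap:
            self.remap[block] = self.spares.pop(0)

    def translate(self, block):
        """Every later request for the block goes to its replacement."""
        return self.remap.get(block, block)

ctl = Controller(total_blocks=100, spare_blocks=8)
ctl.mark_bad(87)                  # the ECC check found logical block 87 bad
assert ctl.translate(87) == 100   # requests for 87 now hit the first spare
assert ctl.translate(5) == 5      # healthy blocks are untouched
```

As the text notes, the remap only redirects future requests; the data that was in block 87 is still lost and must be restored by hand.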
As an alternative to sector sparing, some controllers can be instructed to replace a bad block by sector slipping.
The replacement of a bad block is generally not a totally automatic process, because the data in the bad block is usually lost. Thus, whatever file was using that block must be repaired, and that requires manual intervention.
Posted by Sunflower at 1/29/2010 03:32:00 PM | 0 comments
Labels: Bad blocks, Disk format, Disk Management, disk structure, disks, Format, Memory, Recovery, Recovery Technique, Sector sparing
Sunday, September 6, 2009
What is the ARIES Recovery Algorithm ?
'Algorithms for Recovery and Isolation Exploiting Semantics', or ARIES is a recovery algorithm designed to work with a no-force, steal database approach; it is used by IBM DB2, Microsoft SQL Server and many other database systems.
Three main principles lie behind ARIES:
- Write ahead logging: Any change to an object is first recorded in the log, and the log must be written to stable storage before changes to the object are written to disk.
- Repeating history during Redo: On restart after a crash, ARIES retraces the actions of the database before the crash and brings the system back to the exact state it was in before the crash. It then undoes the transactions that were still active at crash time.
- Logging changes during Undo: Changes made to the database while undoing transactions are logged to ensure such an action isn't repeated in the event of repeated restarts.
The ARIES recovery procedure consists of three main steps :
- Analysis : It identifies the dirty (updated) pages in the buffer and the set of transactions active at the time of crash. The appropriate point in the log where REDO operation should start is also determined.
- REDO phase : It actually reapplies updates from the log to the database. Generally the REDO operation is applied to only committed transactions. However, in ARIES, this is not the case. Certain information in the ARIES log will provide the start point for REDO, from which REDO operations are applied until the end of the log is reached. Thus only the necessary REDO operations are applied during recovery.
- UNDO phase : The log is scanned backwards and the operations of transactions that were active at the time of the crash are undone in reverse order. The information needed for ARIES to accomplish its recovery procedure includes the log, the transaction table, and the dirty page table. In addition, checkpointing is used.
DATA STRUCTURES USED IN ARIES RECOVERY ALGORITHM :
Log records contain following fields :
- LSN
- Type (CLR, update, special)
- TransID
- PrevLSN (LSN of prev record of this txn)
- PageID (for update/CLRs)
- UndoNxtLSN (for CLRs)
* indicates which log record is being compensated
* on later undos, log records up to UndoNxtLSN can be skipped
- Data (redo/undo data); can be physical or logical.
Transaction Table :
- Stores for each transaction:
* TransID, State.
* LastLSN (LSN of last record written by txn).
* UndoNxtLSN (next record to be processed in rollback).
- During recovery:
* Initialized during analysis pass from most recent checkpoint.
* Modified during analysis as log records are encountered, and during undo.
Dirty Pages Table
- During normal processing :
* When a page is fixed with the intention to update:
- Let L = current end-of-log LSN (the LSN of the next log record to be generated).
- If the page is not dirty, store L as the RecLSN of the page in the dirty pages table.
* When a page is flushed to disk, delete it from the dirty page table.
* The dirty page table is written out during checkpoint.
* (Thus RecLSN is the LSN of the earliest log record whose effect is not reflected in the page on disk.)
- During recovery :
* Load dirty page table from checkpoint.
* Updated during analysis pass as update log records are encountered.
Checkpoints :
- Begin_chkpt record is written first.
- Transaction table, dirty_pages table and some other file mgmt information are written out.
- End_chkpt record is then written out.
* For simplicity all above are treated as part of end_chkpt record.
- LSN of begin_chkpt is then written to master record in well known place on stable storage.
- Incomplete checkpoint:
* occurs if the system crashes before the end_chkpt record is written.
- Pages need not be flushed during checkpoint
* They are flushed on a continuous basis.
- Transactions may write log records during checkpoint.
- Can copy dirty_page table fuzzily (hold latch, copy some entries out, release latch, repeat).
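The data structures above can be sketched in Python. This is a heavily simplified illustration of the log record fields and the analysis pass, under stated assumptions (the log is a plain in-memory list, there is no checkpoint I/O, and a "commit" record ends a transaction); it is not the real ARIES implementation.

```python
# Simplified sketch of ARIES structures: log records, the transaction
# table, and the dirty page table rebuilt by the analysis pass.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LogRecord:
    lsn: int
    type: str                # "update", "CLR", "commit", ...
    trans_id: int
    prev_lsn: Optional[int]  # previous record of this transaction
    page_id: Optional[int]   # for updates/CLRs

def analysis_pass(log):
    """Scan the log forward, rebuilding the transaction and dirty page tables."""
    transaction_table = {}   # trans_id -> LastLSN
    dirty_page_table = {}    # page_id -> RecLSN (earliest unflushed update)
    for rec in log:
        if rec.type == "commit":
            transaction_table.pop(rec.trans_id, None)  # no longer active
        else:
            transaction_table[rec.trans_id] = rec.lsn
            if rec.page_id is not None:
                # RecLSN is the FIRST lsn that dirtied the page, so keep it
                dirty_page_table.setdefault(rec.page_id, rec.lsn)
    # REDO starts at the smallest RecLSN in the dirty page table
    redo_start = min(dirty_page_table.values(), default=None)
    return transaction_table, dirty_page_table, redo_start
```

After this sketch's analysis pass, the surviving entries in the transaction table are exactly the transactions the UNDO phase would roll back, and `redo_start` is the point from which the REDO phase reapplies updates.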
Posted by Sunflower at 9/06/2009 11:02:00 PM | 0 comments
Labels: Algorithm, ARIES, ARIES recovery algorithm, Recovery
Overview of Shadow Paging
A computer system, like any other mechanical or electrical system, is subject to failure. There are a variety of causes, including disk crash, power failure, software errors, a fire in the machine room, or even sabotage. Whatever the cause, information may be lost. The database must take actions in advance to ensure that the atomicity and durability properties of transactions are preserved. An integral part of a database system is a recovery scheme that is responsible for the restoration of the database to a consistent state that existed prior to the occurrence of the failure.
Shadow paging is a technique used to achieve atomic and durable transactions; it provides the ability to manipulate pages in a database. During a transaction, the pages affected by the transaction are copied from the database file into a workspace such as volatile memory, and modified in that workspace. When a transaction is committed, all of the pages that were modified by the transaction are written from the workspace to unused pages in the database file. During execution of the transaction, the state of the database exposed to the user is the state in which the database existed prior to the transaction, since the database file still contains the original versions of the modified pages as they existed before being copied into the workspace. If a user accesses the database before the transaction is complete, or upon recovery from a failure, it will appear as though the transaction has not occurred.
- Shadow paging is an alternative to log-based recovery; this scheme is useful if transactions execute serially.
- Basic Idea: Maintain two page tables during the lifetime of a transaction – the current page table, and the shadow page table.
- Store the shadow page table in nonvolatile storage, such that state of the database prior to transaction execution may be recovered.
* Shadow page table is never modified during execution.
- Initially, both the page tables are identical. Only current page table is used for data item accesses during execution of the transaction.
- Whenever any page is about to be written for the first time
* A copy of this page is made onto an unused page.
* The current page table is then made to point to the copy.
* The update is performed on the copy.
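The copy-on-first-write steps above can be sketched as a toy Python model. This is an illustration of the two-page-table idea under simplifying assumptions (pages are single values, the "nonvolatile store" is a list), not a real storage engine.

```python
# Toy model of shadow paging: the shadow page table is never modified
# during the transaction; the current page table is redirected to copies
# the first time each page is written.

class ShadowPagedDB:
    def __init__(self, pages):
        self.storage = list(pages)             # the page store
        self.shadow = list(range(len(pages)))  # frozen for the transaction
        self.current = list(self.shadow)       # used for all data accesses

    def write(self, page_no, value):
        if self.current[page_no] == self.shadow[page_no]:  # first write?
            # copy the page onto an unused page and redirect the current table
            self.storage.append(self.storage[self.current[page_no]])
            self.current[page_no] = len(self.storage) - 1
        self.storage[self.current[page_no]] = value  # update the copy

    def read(self, page_no, committed=False):
        table = self.shadow if committed else self.current
        return self.storage[table[page_no]]

db = ShadowPagedDB(["a", "b"])
db.write(0, "A")
assert db.read(0) == "A"                  # the transaction sees its update
assert db.read(0, committed=True) == "a"  # the shadow state is untouched
```

Committing would simply be replacing the shadow table with the current one; abandoning the current table (after a crash, say) recovers the pre-transaction state for free.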
Posted by Sunflower at 9/06/2009 02:58:00 PM | 0 comments
Labels: Databases, Page, Recovery, Recovery Technique, Shadow Paging, Transaction