
Tuesday, June 25, 2013

Explain demand paging and page replacement

Demand paging and page replacement are two very important memory management strategies in computer operating systems.

About Demand Paging
- Demand paging is the opposite of anticipatory paging. It is a memory management strategy developed for managing virtual memory.
- In an operating system that uses demand paging, a copy of a disk page is brought into physical memory only when a request is made for it, i.e., when a page fault occurs.
- A process therefore starts executing with none of its pages loaded into main memory, and a series of page faults follows until all of its required pages have been loaded.
- Demand paging is a lazy loading technique: a page is brought into main memory only when the executing process demands it.
- Hence the name demand paging; the strategy is sometimes also called lazy evaluation.
- A page table implementation is required for using the demand paging technique.
- The purpose of this table is to map logical memory to physical memory.
- Each table entry carries a valid/invalid bit that marks whether the page is currently in memory.

The following steps are carried out whenever a process demands a page (a minimal sketch follows the list):
  1. An attempt is made to access the page.
  2. If the page is present in memory, execution proceeds as usual.
  3. If the page is not there, i.e., marked invalid, a page fault is generated.
  4. The memory reference to the virtual-memory location is checked for validity. If it is an illegal access, the process is terminated; otherwise the requested page has to be paged in.
  5. A disk operation is scheduled to read the requested page into physical memory.
  6. The instruction that raised the page fault trap is restarted.
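To make this fault-handling sequence concrete, here is a minimal Python sketch of demand paging. The structures are hypothetical stand-ins (a dict as the backing store, a valid-bit table, a fixed number of frames), not real kernel machinery:

```python
disk = {i: f"page-{i} data" for i in range(8)}   # backing store (secondary memory)
memory = {}                                      # physical frames: page -> data
valid = {i: False for i in disk}                 # page table valid/invalid bits
MAX_FRAMES = 3

def access(page: int) -> str:
    if not valid[page]:                          # invalid bit -> page fault
        if len(memory) >= MAX_FRAMES:            # no free frame: evict a victim
            victim = next(iter(memory))          # naive FIFO victim choice
            valid[victim] = False
            del memory[victim]
        memory[page] = disk[page]                # the scheduled "disk read"
        valid[page] = True
    return memory[page]                          # restart the faulting access

for p in (0, 1, 2, 0, 3):                        # page 3 forces an eviction
    print(p, access(p))
```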
- The lazy nature of this strategy is itself a great advantage.
- Because more space stays available in physical memory, more processes can execute, which reduces context-switching time.
- At program start-up, loading latency is low.
- This is because very little data flows between main memory and secondary memory.


About Page Replacement
- When few free real-memory frames are available, a page stealer is invoked.
- The stealer searches through the PFT (page frame table) for pages to steal. For each page, this table records whether it has been referenced and whether it has been modified.
- If the page stealer finds a page whose reference flag is set, it does not steal it; instead it resets the reference flag for that page.
- If, on the next pass, the page stealer comes across the same page still unreferenced, it steals it.
- Note that in that pass the page was flagged as unreferenced.
- Any change made to a page is indicated by its modify flag.
- If the modify flag of the page to be stolen is set, a page-out call has to be made before the page stealer takes the page.
- Thus, pages belonging to currently executing segments are written to the so-called paging space, while persistent segments are written to disk.
- Page replacement is carried out by algorithms called page replacement algorithms.
- Besides replacing pages, these algorithms also keep track of page faults.
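The reset-then-steal behaviour described above is essentially the classic second-chance (clock) replacement scheme. Below is a minimal, hypothetical Python sketch of such a page stealer; the Page class and the print standing in for a page-out call are illustrative, not real kernel code:

```python
from collections import deque

class Page:
    def __init__(self, number: int):
        self.number = number
        self.referenced = True    # set whenever the page is used
        self.modified = False     # set whenever the page is written

def steal_page(frames: deque) -> Page:
    """Scan the frames: give referenced pages a second chance, and steal the
    first unreferenced page, paging it out first if it was modified."""
    while True:
        page = frames.popleft()
        if page.referenced:
            page.referenced = False    # reset the flag; spare it this pass
            frames.append(page)        # re-queue for the next pass
        else:
            if page.modified:          # dirty page: page-out call first
                print(f"page-out: writing page {page.number} to paging space")
            return page                # this page is stolen

frames = deque(Page(n) for n in range(4))
frames[2].referenced = False
frames[2].modified = True
victim = steal_page(frames)            # steals page 2 after a page-out
```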


Tuesday, January 8, 2013

What is Cleanroom Software Engineering?


Cleanroom software engineering is one of the fastest-emerging software development processes and has been designed for producing software systems and applications with a certifiable level of reliability. The credit for developing this process goes to Harlan Mills and a few of his colleagues, among them Alan Hevner, at the IBM Corporation.

What is Cleanroom Software Engineering?

- Cleanroom software engineering focuses on preventing defects rather than removing them after the fact.
- The process was so named because the word cleanroom evokes the cleanrooms used by the electronics industry to prevent defects from entering semiconductors during fabrication.
- The cleanroom process was first used in the late 1980s.
- It began to be used for military demonstration projects in the early 1990s.

Principles of Cleanroom Approach

The cleanroom process has its own principles, which we discuss below:
  1. Development of software systems and applications based on formal methods: The box structure method is what cleanroom development uses to specify and design a software product. Team review is then used to verify the design, i.e., whether it has been implemented correctly or not.
  2. Statistical quality control through incremental implementation: An iterative approach is followed in the cleanroom software engineering process, i.e., the software system evolves through increments in which the implemented functionality gradually increases. Pre-established standards are used to measure the quality of each increment and to verify that the process is making acceptable progress. If an increment fails to meet the quality standards, its testing is stopped and the process returns to the design phase.
  3. Statistically sound testing: Software testing in the cleanroom development process is carried out as a statistical experiment. A representative subset of the software's input/output trajectories is selected and subjected to testing. The resulting sample is then analyzed statistically to obtain an estimate of the software's reliability and a level of confidence (a minimal sketch of such an estimate follows this list).
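As an illustration of statistically sound testing, the Python sketch below samples test cases from an operational profile and derives a point estimate of reliability from the pass rate. The usage profile, the run_test stand-in, and the 1% latent failure rate are all hypothetical, invented for the example:

```python
import random

# Hypothetical operational profile: input classes weighted by expected usage.
PROFILE = {"login": 0.6, "search": 0.3, "export": 0.1}

def run_test(input_class: str) -> bool:
    """Stand-in for executing one sampled test case against the increment."""
    return random.random() > 0.01            # pretend a 1% latent failure rate

def estimate_reliability(n_tests: int) -> float:
    classes, weights = zip(*PROFILE.items())
    passed = sum(run_test(random.choices(classes, weights)[0])
                 for _ in range(n_tests))
    return passed / n_tests                  # point estimate of per-run reliability

print(f"Estimated reliability: {estimate_reliability(1000):.3f}")
```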

Features of Cleanroom Software Engineering

Software products developed using the cleanroom software engineering process are intended to have zero defects at delivery time. Listed below are some of the characteristic features of cleanroom software engineering:
  1. Statistical modeling
  2. Usage scenarios
  3. Incremental development and release
  4. Separate acceptance testing
  5. No requirement of unit testing and debugging
  6. Formal reviews with verification conditions
The recorded defect rates were as follows:
  • Fewer than 3.5 defects per KLOC delivered.
  • 2.7 defects per KLOC between first execution and first delivery.
The basic technologies used can be listed as:
  1. Incremental development: Each increment is carried out end to end, and in some cases development of the increments overlaps. The whole process takes around 12 - 18 weeks, and partitioning, though critical, proves difficult.
  2. Function-theoretic verification: A parser may check the constructed program for syntax errors, but the program developer does not execute it. Verification conditions drive the team review for verification. Verification is reported to be 3 - 5 times more effective than debugging. Formal inspections also fall under this category.
  3. Formal specifications: This further includes:
    a) Box structured specification, which includes three types of boxes (see the sketch after this list):
       - Black box
       - State box
       - Clear box
    b) Verification properties
    c) Program functions
  4. Statistical usage testing: It helps in implementing cost-effective orientation and process control, and it provides a stratification mechanism for dealing with critical situations.
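To give a feel for the box refinements named above, here is a small hypothetical sketch built around a toggle switch; the class names and example are invented, not taken from any cleanroom toolset. The black box specifies behaviour purely as stimulus history to response, and the state box refines it by encapsulating that history as explicit state, which the clear box would in turn open into verified procedural control flow:

```python
from typing import List

class BlackBox:
    """Specification: stimulus history -> response, with no internal detail."""
    def response(self, history: List[str]) -> str:
        # An odd number of presses leaves the switch on.
        return "on" if history.count("press") % 2 == 1 else "off"

class StateBox:
    """Refinement: the stimulus history is encapsulated as explicit state."""
    def __init__(self) -> None:
        self.on = False                  # state summarizing the history
    def response(self, stimulus: str) -> str:
        if stimulus == "press":          # the clear box would refine this
            self.on = not self.on        # transition into verified control flow
        return "on" if self.on else "off"
```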


Tuesday, October 23, 2012

What is Segue Testing Methodology?


The Segue testing methodology, as its name suggests, has its origin with the Segue Software corporation, and it is what we are going to discuss here.

This testing methodology comprises six major phases, which are discussed in detail below:

1. Planning phase: In this phase, the testing strategy is determined and the specific test requirements are defined.
2. Capturing phase: In this phase the GUI objects present in the application under test are classified according to chosen criteria, and a test framework is created for executing the tests.
3. Creation phase: This phase involves creating automated tests that are also reusable. Several recording and programming techniques can be used to develop the test scripts in Segue Software's 4Test language.
4. Run phase: In this phase the user selects specific tests and executes them against the application under test (AUT).
5. Reporting phase: This phase involves analyzing the test results and generating defect reports.
6. Tracking phase: This phase involves tracking defects in the AUT and performing regression testing.

About Segue Testing Methodology

- Quest-Con Technologies is a member of Segue Software's Silk Elite partner program.
- This partnership has integrated Segue's automation software with Quest Assured, the proprietary methodology of Quest-Con.
- The combination has proved effective in mitigating the risk involved in automating a manual process.
- Quest Assured provides a flexible software development environment.
- The technologies it uses are all based on quality assurance best practices.
- These are integrated with Segue's automation software solutions through Quest-Con's automation testing principles.
- To ensure that the Segue tools and technology are used to their full potential, Quest-Con draws on its strong knowledge of the manual testing process and its ability to automate the appropriate portions of that process.
- Segue testing emphasizes the idea of practical quality assurance.
- Thus, implementing automation solutions becomes relatively easy and far less disruptive to the existing corporate culture.
- It is because of its testing technology that the software delivered by Segue ensures both the accuracy and the performance of enterprise applications.
- With Segue's testing technology, comprehensive performance, scalability, verification, and monitoring solutions are all possible.
- Further, these qualities make the methodology reliable enough for fundamental business processes and give the user highly predictable outcomes.
- Segue's testing technology has effectively helped companies reduce the risks involved with software.
- It also helps increase the return on investment associated with deploying business applications.
- All this makes Segue Software a leading innovator in technology.
- One success example is SilkTest, now a Segue trademark.
- Segue has long been known to dedicate itself entirely to quality optimization solutions.
- Thus, Segue Software has overcome many of the optimization challenges faced by the software testing world.
- The Segue testing methodology is based on a results-oriented approach that properly optimizes each and every step of the testing process.


Friday, July 6, 2012

What skills are needed to be a good software test automator?


If you study the history of test automation, you will notice many failures that have since become lessons for the entire testing community. Because of these lessons, it was decided that the scripts to be developed must be reusable.
To make the most of any testing methodology, it must be made manageable as well as reusable.
There are certain things a tester should keep in mind to be a good automator:
1. Test automation is not a sideline but a full-time effort.
2. The test framework and the test design should be treated as separate entities, not as the same one.
3. The test framework should be independent of the software system or application under test.
4. The test framework should be easy to extend and maintain.
5. The test design as well as the test strategy should be independent of the framework.
6. The test strategy should be effective enough to hide the complexities of the test framework from the testers.

Skills needed to be a good software test automator


This article focuses on the skills needed to be a good software test automator:
1. Good programming logic for coding the scripts.
2. Good analytical skills.
3. Adequate knowledge of the testing tools to be used in the automation process.
4. The habit of thinking outside the box.
5. A critical mindset.
6. The ability to think well from the user's point of view.
7. An eye for the details of the test automation process.
8. Domain knowledge, which is a must.
9. Good judgment skills.
10. Good code-writing skills.

Qualities of a good Test Automator


- A good test automator implements a test strategy that supports the development of intuitive tests which can be executed both as automated tests and manually.
- The test strategy a good automator uses will allow the tests to highlight the steps to be performed.
- A good test automator develops a framework that harnesses the benefits of both keyword-driven test scripts and traditional test scripts (a minimal keyword-driven sketch follows this list).
- A good automator implements the test automation framework in such a way that it is completely independent of the software application in question.
- A good automator fully documents and publishes the test automation framework.
- One thing that should always be kept in mind is that testers are testers, not programmers.
- Good automators are those who have both testing and programming skills.
- Most testers are domain experts with little or none of the technical skill that software test automation immediately requires.
- Many testers also split their time between the testing and development phases, so they should not be required to learn a complex scripting language.
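As a sketch of the keyword-driven idea mentioned above (all names here are invented for illustration), the framework below maps keywords to actions and keeps test cases as plain data tables, so the framework itself stays independent of the application under test:

```python
# Hypothetical action library; in a real framework these would drive the AUT.
def open_app(name): print(f"opening {name}")
def type_text(field, text): print(f"typing '{text}' into {field}")
def verify(field, expected): print(f"checking that {field} shows {expected}")

KEYWORDS = {"open": open_app, "type": type_text, "verify": verify}

# A test case is just a table of (keyword, arguments) rows: readable by
# non-programmers and independent of how the actions are implemented.
test_case = [
    ("open", ("calculator",)),
    ("type", ("input", "2+2")),
    ("verify", ("result", "4")),
]

for keyword, args in test_case:
    KEYWORDS[keyword](*args)        # the framework dispatches each step
```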


Friday, June 29, 2012

Give a short description of random software testing techniques


Random testing is often passed over because of its reputation as a worst case of program testing. Still, it finds use in some software testing projects in the field, and despite that record it should not always be viewed as a poor approach to software testing.

What are in demand are software testing methodologies that take into consideration the structure of the software system or application to be tested. Path testing and partition testing are two of the techniques that grew out of such demands.

Is Random Testing Effective?


- After rigorous research, simulation results were presented showing that random testing is not always bad for testing; sometimes it proves quite cost-effective.
- Apart from the simulation results, actual results of random testing experiments were presented, confirming that random testing is indeed a quite useful validation tool.
- Random testing is categorized among the black-box testing techniques: it tests the software system or application with an arbitrary subset of all possible input values.

In this article we are going to discuss the different techniques that can be used to carry out effective random testing. Testing strategies that take the structure of the software system or application into account are usually preferred.
Two such testing strategies are:
1. Path testing and
2. Partition testing

Random Software Testing Techniques


- Path testing is considered an instance of partition testing and is hence usually deployed for structural testing.
- Partition testing, on the other hand, works with any testing scheme that forces execution of a few test cases drawn from the subsets of a partition of the input domain.
- Simulation results as well as results of actual random testing were presented, and they show that random testing can be quite effective and need not always be a bad case.
- The actual random testing experiments declared random testing a viable tool for validation testing.
- Another technique for random testing is simple black-box random input testing.
- It is considered a crude technique, yet it effectively locates many of the bugs in real software systems and applications.
- In this technique the software system or application is subjected to two kinds of input, and with these simple parameters any application can be subjected to random testing (a minimal sketch follows the list):
1. Streams of valid mouse and keyboard events
2. Streams of valid Win32 messages
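Here is a minimal sketch of black-box random input testing. It assumes a toy function standing in for the application under test, with random lists standing in for the event or message streams (which would need a GUI harness to drive for real):

```python
import random

def average(values):
    """Toy stand-in for the application under test."""
    return sum(values) / len(values)    # hidden bug: crashes on empty input

failures = []
for _ in range(1000):
    # Arbitrary inputs: random-length lists of random numbers.
    data = [random.uniform(-1e6, 1e6) for _ in range(random.randint(0, 5))]
    try:
        average(data)
    except Exception as exc:            # any crash is a finding
        failures.append((data, exc))

print(f"{len(failures)} crashing inputs out of 1000 random runs")
```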

Goals of Random Testing Techniques


- Using random testing techniques, even command-line applications can be crashed or hung.
- The basic goal of any random testing technique is to stress the software system or application program.
- In random testing you are required to simulate user input in the testing environment.
- First, the random user input is delivered to the software system or application by injecting it into the main communication stream between the server and the application.
- In the case of the first type of input data, completely random data is sent to the software system, since this gives insight into the robustness of the system under test.
- Any failure encountered this way could also occur during normal use of the software system.


Saturday, May 19, 2012

What is meant by risk-driven iterative planning?


Many approaches to iterative development and planning have been developed in the context of the agile software development process. Risk-driven iterative planning is one such approach, and it is what this article discusses.

About Iterative Development


- Iterative development is based on the idea of carrying development forward step by step, incrementally accomplishing one goal with each step.
- This is different from approaches that take a single leap from the problem to the solution.
- Such small development steps keep the development process on track; moreover, it is much easier to test these small steps than to test the whole development process in one go.
- These steps are commonly known as loops and are repeated throughout the development process.
- An iterative process may contain various loops depending on the complexity of the problem.
- Also, depending on the degree of uncertainty in the problem and the situation, the looping can take various forms.

What is Risk-Driven Iterative Planning?


Listed below are the steps included in a typical risk-driven iterative plan:

1. Problem formulation: This step involves defining the following aspects:
(a) Time scope: yearly programs are linked to their yearly budgets, and multiple other time scopes are also defined.
(b) Situation: the situation is defined and broken down into smaller partial activities that are not interconnected with each other. A framework corresponding to the budget of the problem is created.
(c) Stakeholders: stakeholder participation is identified, and the decision maker is expected to have full knowledge of the whole development process.
(d) Risk: all the risks associated with the development process and the project are identified, and the iterative plan is developed based on them.

2. Information gathering: The information to be gathered regarding the project includes:
(a) Strengths
(b) Weaknesses
(c) Opportunities
(d) Threats
(e) Level of mobilization
(f) Stakeholder identification
(g) User response, and so on.

3. Vision building: A vision is created for the development process, stating how the goal can be achieved.
4. Strategy formation: A strategy or approach is defined to accomplish the purpose; the people involved are assigned different responsibilities and granted powers as per the contractual agreement.
5. Implementation: To implement the plan, power is granted to the participants, and sometimes partnerships are also formed.
6. Evaluation of the development process: The development plan is evaluated against factors such as risk, reliability, and stability.

         

Always Remember about Risk-Driven Iterative Planning


- Each iteration by itself is of no use!
- It is only when all the iterations are put together that they become useful.
- One point you should always keep in mind is to be as specific as possible while designing a risk-driven iterative plan.
- Try to keep the iterations as short as possible so that you can complete the iterative development in the allotted period of time.
- The most critical and difficult part of iterative planning is the generation of tasks.
- The techniques you use should be streamlined to fit your time and cost budget.
- You need to maintain a flow between all the iterations.
- Remember that you may organize your development process into iterations, but that by itself is not what is called agile development.
- Iterative planning is also called sprint planning.
- Iterative planning based on the associated risks proves quite useful, since it helps avoid many of the grave problems that could later hamper all the development efforts.


Tuesday, May 15, 2012

How does a DU path segment play a role in data flow testing?


Whenever you come across the topic of data flow testing, you will surely hear the term "du path segment", perhaps without being familiar with it! This article focuses on du path segments and the role they play in data flow testing.
We will discuss du path segments in the context of data flow testing, and not as a separate topic, so that the concept becomes easier to understand.
The whole process of data flow testing is guided by a control flow graph which, apart from guiding the testing process, also helps in rooting out the anomalies present in the data flow. With the anomalies discovered, one can design better path selection strategies that take these data flow anomalies into consideration.

There are nine possible anomaly combinations, as listed below:
1. dd: harmless but suspicious
2. dk: might be a bug
3. du: a normal case
4. kd: a normal situation
5. kk: harmless but might conceal a bug
6. ku: a bug or error
7. ud: not a bug, because of re-assignment
8. uk: a normal situation
9. uu: a normal situation
For data flow testing, the following data object states and uses have been defined (a small annotated sketch follows):
1. Defined, initialized, created → d
2. Killed, undefined, released → k
3. Used for:
(a) Calculations → c
(b) Predicates → p
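To ground these states, here is a small hypothetical Python fragment with its definitions and uses annotated; the possible use-before-definition of discount is exactly the kind of anomaly data flow testing hunts for:

```python
def pay(rate, hours):            # rate, hours: d (defined at entry)
    total = rate * hours         # total: d;  rate, hours: c-use -> "du" pairs
    if total > 1000:             # total: p-use (used in a predicate)
        discount = 0.1           # discount: d (defined only on this branch)
    final = total * discount     # discount: u -- possible use before definition
    return final                 # (raises a NameError if the branch was skipped)
```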

Terminology associated with Data Flow Testing


Almost all the strategies implemented for data flow testing are structural in nature. Certain terms are associated with data flow testing, as stated below:
1. Definition-clear path segment
2. Loop-free path segment
3. Simple path segment and, lastly,
4. Du path

What is a DU path Segment?


- A du path segment can be defined as a path segment that is simple and definition-clear, where the last link or node of the path has a use of the variable x.

Let us take an example to make the concept of a du path segment clearer.
- Suppose a du path for the variable X exists between two nodes A and B, such that the last link between the two nodes contains a computational use of X.
- This path is definition-clear and simple.
- Alternatively, if there exists a node C at the last-but-one position, i.e., the path ends in a predicate use of X, then the path from node A to node C must be definition-clear and loop-free.
- Several strategies have been defined for carrying out data flow testing:
1. ADUP, the all du paths strategy
2. AU, the all uses strategy
3. APU+C, the all p-uses/some c-uses strategy
4. ACU+P, the all c-uses/some p-uses strategy
5. AD, the all definitions strategy
6. APU, the all predicate uses strategy
7. ACU, the all computational uses strategy

The All DU Paths (ADUP) Strategy

We shall describe in detail here only ADUP, the all du paths strategy.
- This strategy is considered one of the strongest and most reliable data flow testing strategies.
- It requires exercising every du path from every definition of every variable to every use of that definition.
- All du paths sounds like a strong criterion for testing, but it does not involve as many tests as it seems.
- A single test often satisfies the criterion for several definitions and uses of the variables simultaneously.




How does a definition use association play a role in data flow testing?


Definition use association is one of the terms that appears on the scene of data flow testing, and quite a few of us are unaware of it. This article is all about the concept of definition use associations and the role they play in data flow testing.
Definition use associations form quite an important part of data flow testing. Let us see how!
First we are going to discuss some concepts of data flow testing with regard to definition use associations, and then we will discuss their role in data flow testing.

About Data Flow Testing


- A control flow graph is an important tool used in data flow testing to explore the anomalies related to the data.
- A proper path selection strategy is required for detecting such anomalies.
- The path strategy to be used can be decided on the basis of the data flow anomalies discovered earlier.
- Data flow testing is nothing but a family of path testing strategies that trace the flow of data through a software program or application.
- Path testing is required so that the sequence of possible events associated with an object's status can be explored.
- It is necessary to keep the number of paths sufficient and sensible, so that no object is left uninitialized, or unused at least once in its lifetime, while avoiding unnecessary testing.
Data flow testing comprises two types of anomaly detection:
1. Static analysis: carried out on the program code without actually executing it; it involves finding syntax errors.
2. Dynamic analysis: carried out on a program while it is executing; it involves finding logical errors.
- The data objects have been categorized into several categories so as to make data flow testing easier:
1. Defined, created, initialized (d)
2. Killed, undefined, released (k)
3. Used (u): in calculations (c) or in predicates (p)

Anomalies discovered by static analysis are meant to be handled by the compiler itself. But static analysis and dynamic analysis do not suffice altogether; rigorous path testing is also required.

About Definition Use Associations


- Definition use associations, or "du segments", are path segments that are simple and definition-clear and whose last links have a use of the variable x.
- Typically, a definition use association is a triple (x, d, u), where:
1. x is the variable
2. d is the node containing a definition of the variable x
3. u is either a predicate node or a statement, depending on the case, and contains a use of x
- A sub-path from d to u is also included in the flow graph, with no definition of the variable x occurring between d and u.
- Listed below are some examples of def-use associations (a small sketch follows the list):
1. (x, 3, 4)
2. (x, 1, 4)
3. (y, 2, (4, t))
4. (z, 2, (3, t)) etc.
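The triples above assume a numbered flow graph that is not shown here; the hypothetical fragment below, with its statements labelled as nodes, shows how such associations arise:

```python
def f():
    x = int(input())    # node 1: definition of x
    y = x * 2           # node 2: c-use of x -> association (x, 1, 2); d of y
    if y > 0:           # node 3: p-use of y -> association (y, 2, (3, t))
        x = x - 1       # node 4: c-use and redefinition of x -> (x, 1, 4)
    return x, y         # node 5: further uses of x and y
```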
- Some of the most common data flow testing strategies are:
1. All uses (AU)
2. All du paths (ADUP), and many more.
- A first piece of advice for effective data flow testing is to resolve all the data flow anomalies discovered above.
- Keeping data flow operations on the same variable within the same routine can also reap good results.
- It is advisable to use defined types and strong typing wherever possible in the program.

