Sunday, July 14, 2013
What is Polling?
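Polling is the technique by which a program or the CPU repeatedly checks the status of an external device to determine whether it is ready for an input or output operation, instead of being notified asynchronously by interrupts. The device's status register is read in a loop (a "busy wait"), and the data register is read or written only once the status indicates readiness.
As a minimal sketch of a polling loop (the is_ready and read_data callables are hypothetical stand-ins for reads of a device's status and data registers):
```python
import time

def poll_device(is_ready, read_data, interval=0.05, timeout=5.0):
    """Busy-wait until the device reports ready, then read its data."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():          # check the status register
            return read_data()  # ready: read the data register
        time.sleep(interval)    # not ready: wait briefly, then poll again
    raise TimeoutError("device did not become ready")
```
The trade-off the sketch illustrates is the one interrupts avoid: while polling, the processor spends its time checking status instead of doing useful work.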
Posted by Sunflower at 7/14/2013 05:08:00 PM | 0 comments
Labels: Busy, Character, Client, Commands, CPU, Devices, External, Input, Instruction, Internal, Interrupts, Operation, Output, Polling, program, Registers, Software, state, Status, System
Sunday, April 28, 2013
What is fragmentation? What are the different types of fragmentation?
What is Fragmentation?
Fragmentation is the phenomenon in which storage space, whether main memory or disk, is used inefficiently, so that chunks of space are wasted and the capacity or performance of the system is reduced.
Types of Fragmentation
- External fragmentation: the free space is broken into many small, non-contiguous chunks, so that a large allocation request cannot be satisfied even though enough total space is free.
- Internal fragmentation: the chunk allocated to a program is larger than the amount of data actually requested, and the difference is wasted.
- Data fragmentation: a single piece of data is stored in non-contiguous pieces scattered across the storage medium.
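As a minimal sketch of internal fragmentation (the block size and request sizes are hypothetical), an allocator that hands out whole fixed-size blocks wastes the difference between what it allocates and what was actually requested:
```python
BLOCK_SIZE = 64  # hypothetical allocator that hands out fixed 64-byte blocks

def internal_fragmentation(request_sizes):
    """Return bytes wasted when each request is rounded up to whole blocks."""
    wasted = 0
    for size in request_sizes:
        blocks = -(-size // BLOCK_SIZE)       # ceiling division
        wasted += blocks * BLOCK_SIZE - size  # allocated minus requested
    return wasted

print(internal_fragmentation([10, 70, 130]))  # 54 + 58 + 62 = 174 bytes wasted
```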
Posted by Sunflower at 4/28/2013 10:13:00 PM | 0 comments
Labels: Algorithms, Allocation, Chunk, CPU, Data, External, Fragmentation, Inefficient, Internal, Memory, Performance, Principle, program, Space, Storage, System, Types, Wastage
Wednesday, March 21, 2012
Data flow testing is a white box testing technique - Explain?
A program is said to be in an active state whenever there is some data flow in it. Without data flowing around the whole program, it would not be possible for a software system or application to do anything.
Hence, we conclude that data flow is an extremely important aspect of any program, since it is what keeps a program going. This data flow needs to be tested like any other aspect of a software system or application, and therefore this whole article is dedicated to the cause of data flow testing.
What is Data Flow Testing?
- Data flow testing is categorized under the white box testing techniques, since the tester needs an in-depth knowledge of the whole software system or application.
- Data flow testing cannot be carried out without a control flow graph, since without that graph it cannot explore the unreasonable or unexpected things, i.e., anomalies, that can influence the data of the software system or application.
- Taking these anomalies into consideration helps in defining strategies for the selection of test paths, which play a great role in filling the gap between branch or statement testing and complete path testing.
- Data flow testing implements a whole range of testing strategies, chosen in the above-mentioned way, to explore the sequence of events regarding the use of the data.
- It is a way of determining whether or not every data object has been initialized before it is used, and whether or not all data objects are used at least once during the execution of the program.
Classification of Data Objects
Data objects are classified into various types based upon their use (see the sketch below):
- Defined, created or initialized data objects, denoted by d.
- Killed, undefined or released data objects, denoted by k.
- Used data objects, in predicates, calculations, etc., denoted by u.
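As a minimal sketch of this classification (the functions and variable names are hypothetical), the d, u and k occurrences can be annotated directly on the code, and a use that can be reached before any definition is a classic data flow anomaly:
```python
def average(values):
    total = 0             # d: total is defined (initialized)
    for v in values:      # d: v is defined on each iteration; u: values is used
        total += v        # u then d: total is used, then redefined
    count = len(values)   # d: count is defined; u: values is used
    return total / count  # u: total and count are used
    # k: on return, all locals are released (killed)

def anomalous():
    print(result)         # u before any d: raises UnboundLocalError if called
    result = 42           # d: the definition arrives too late
```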
Critical Elements for Data Flow Testing
- The critical elements for data flow testing are arrays and pointers.
- These elements should not be underestimated, since underestimation may cause some DU pairs to be missed; nor should they be overestimated, since then infeasible test obligations might be introduced.
- Underestimation is nevertheless preferable to overestimation, since overestimation causes more expense to the organization.
- Data flow testing also aims at distinguishing between the important and the not-so-important paths.
- During data flow testing, pragmatic compromises often need to be made, since there are many unpredictable properties and an exponential blow-up in the number of paths.
Anomaly Detection under Data Flow Testing
There are various types of anomaly detection that are carried out under data flow testing:
1. Static anomaly detection
This analysis is carried out on the source code of the program without actually executing it.
2. Dynamic anomaly detection
This is the opposite of static detection, i.e., it is carried out on a running program.
3. Anomaly detection via compilers
Such detection is possible due to static analysis. Certain compilers, such as optimizing compilers, can even detect dead variables. Static analysis by itself cannot detect all dead variables, since deciding whether a variable is reachable is unsolvable in the general case.
Other factors:
There are several other factors that play a great role in data flow testing; they are:
1. Data flow modelling based on control flow graph
2. Simple path segments
3. Loop free path segments
4. DU path segments
5. Def – use associations
6. Definition clear paths
7. Data flow testing strategies
Posted by Sunflower at 3/21/2012 11:51:00 PM | 0 comments
Labels: Anomaly, Application, Create, Critical, Data, Data Flow Testing, Data objects, In-depth, Internal, Paths, program, Software Systems, Strategy, Techniques, Test cases, Tester, White box testing
Thursday, July 28, 2011
What are the characteristics of testing and a good test?
The goal of testing is to find errors. Good testing exhibits a set of characteristics that achieve this goal with minimum effort. The characteristics of software testing include:
- How easily can the software be tested? i.e., testability.
- How efficiently can it be operated and tested? i.e., operability.
- What you see is what you test, i.e., observability.
- How much can we control the software so that testing can be automated and optimized? i.e., controllability.
- Isolating problems and performing smarter retesting by controlling the scope of testing, i.e., decomposability.
- The program should be simple so that it becomes easy to test, i.e., simplicity.
- The fewer the changes, the fewer the disruptions to testing, i.e., stability.
- The more information we have, the smarter we will test, i.e., understandability.
A good test has the following characteristics:
- A good test has a high probability of finding an error.
- A good test is not redundant.
- A good test has the highest likelihood of uncovering a whole class of errors.
- A good test should be neither too simple nor too complex.
There are two ways to test an engineered product:
- Knowing the internal workings of the product, tests can be conducted to ensure that the internal operations are performed according to specifications and that all internal components are exercised properly.
- Knowing the output or the function for which the product is designed, tests can be conducted to demonstrate that each function is fully operational while checking for errors at the same time.
Posted by Sunflower at 7/28/2011 01:12:00 PM | 1 comment
Labels: Characteristics, Components, Controllability, Defects, Design, Errors, Function, Internal, Operability, Output, Simplicity, Software testing, Stability, Testability, Tests
Thursday, July 14, 2011
What are the principles of Design Modeling?
Design models provide a concrete specification for the construction of the software. They represent the characteristics of the software that help practitioners construct it effectively. Design modeling builds a preliminary model of the thing that is to be built; in software systems, the design model provides different views of the system.
A set of design principles, when applied, creates a design that exhibits both internal and external quality factors, so that a high-quality design can be achieved.
- The work of the analysis model is to describe the information domain of the problem, the user functions, and the analysis classes with their methods. The work of the design model is to translate information from the analysis model into an architecture. The design model should therefore be traceable to the analysis model.
- A good data design simplifies program flow and makes the design and implementation of software components easier. It is as important as the design of the processing functions.
- In design modeling, the interfaces should be designed properly. This makes integration much easier and increases efficiency.
- Design should always start by considering the architecture of the system that is to be built.
- The end user plays an important role in developing a software system. The user interface is the visible reflection of the software and should be designed in terms of the end user.
- Component functionality should focus on one and only one function or sub-function (see the sketch after this list).
- In design modeling, the coupling among components should be as low as is needed and reasonable.
- Design models should give information to developers, testers, and the people who will maintain the software; in other words, they should be easily understandable.
- Design modeling should be iterative in nature. With each iteration, the design should move towards simplicity.
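As a minimal sketch of the single-function and low-coupling principles (all class and method names are hypothetical), each component below does exactly one job, and the two interact only through one narrow method:
```python
class TaxCalculator:
    """Single responsibility: compute tax. Knows nothing about formatting."""
    def __init__(self, rate):
        self.rate = rate

    def tax_for(self, amount):
        return amount * self.rate

class InvoicePrinter:
    """Single responsibility: format output. Coupled to the calculator
    only through tax_for, never through its internal state."""
    def __init__(self, calculator):
        self.calculator = calculator

    def render(self, amount):
        tax = self.calculator.tax_for(amount)
        return f"subtotal={amount:.2f} tax={tax:.2f} total={amount + tax:.2f}"

printer = InvoicePrinter(TaxCalculator(rate=0.08))
print(printer.render(100.0))  # subtotal=100.00 tax=8.00 total=108.00
```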
Posted by Sunflower at 7/14/2011 11:52:00 AM | 0 comments
Labels: Architectural design, Attributes, Customer, Data, Design Modeling, Domain, Functions, Internal, Models, Principles, Representation, Requirements, Software
Wednesday, July 13, 2011
What are the principles of Analysis Modeling?
Analysis models represent customer requirements, while design models provide a concrete specification for the construction of the software. In analysis models, software is depicted in three domains: the information domain, the functional domain, and the behavioral domain.
Analysis modeling focuses on three attributes of software: the information to be processed, the function to be delivered, and the behavior to be exhibited. There is a set of principles that relate to analysis methods:
- The data that flow in and out of the system, together with the data stores, are collectively called the information domain. This information domain should be well understood and represented.
- The functions of the software effect control over internal and external elements. These functions need to be well defined.
- The software is influenced by its external environment and behaves in a certain manner. This behavior should be well defined.
- Partitioning is a key strategy in analysis modeling. Divide the models depicting information, function, and behavior in a manner that uncovers detail in a hierarchical way.
- A description of the problem from the end user's perspective is the starting point of analysis modeling. The task should move from essential information toward implementation detail.
Posted by Sunflower at 7/13/2011 12:52:00 PM | 1 comment
Labels: Analysis, Analysis Modeling, Attributes, Customer, Data, Domain, Functions, Internal, Models, Principles, Representation, Requirements, Software
Friday, April 29, 2011
Explain Black box and White box testing? What are their advantages and disadvantages?
For complete testing of a software product, both black-box and white-box testing are necessary.
Black-box testing
This testing looks at the available inputs for an application and the expected outputs that should result from each input. It has no relation to the inner workings of the application, the process undertaken, or any other internal aspect of the application. A search engine is a very good example of a black-box system: we enter the text that we want to search for and, by pressing "search", we get the results. We are not aware of the actual process implemented to get the results; we simply provide the input and receive the output.
White-box testing
This testing looks into the complex inner workings of the application; it tests the processes undertaken and other internal aspects of the application. While black-box testing is mainly concerned with the inputs and outputs of the application, white-box testing helps us to see beyond them, i.e., inside the application. White-box testing requires a degree of sophistication that black-box testing does not, as the tester has to interact with the objects used to develop the application rather than just having access to the user interface. In-circuit testing is a good example of white-box system testing, where the tester looks at the interconnections between the different components of the application and verifies the proper functioning of each internal connection. We can also consider the example of an auto mechanic, who attends to the inner workings of a vehicle to ensure that all of its components work together correctly.
The basic difference between black-box and white-box testing is the area of focus each one chooses. We can simply say that black-box testing is focused on results: if an action is performed and the desired result is obtained, the process that was actually used is irrelevant. White-box testing, on the other hand, focuses on the internal workings of an application and is considered complete only when all the components have been tested for proper functioning.
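As a minimal sketch of that difference in focus (the shipping_cost function and its values are hypothetical), the black-box tests below check only inputs against expected outputs, while the white-box tests are chosen by reading the internal branches:
```python
def shipping_cost(weight_kg):
    # the unit under test: flat rate up to 5 kg, per-kg charge above that
    if weight_kg <= 5:
        return 10.0
    return 10.0 + (weight_kg - 5) * 2.0

# Black-box view: only inputs and expected outputs, internals unknown.
assert shipping_cost(3) == 10.0
assert shipping_cost(10) == 20.0

# White-box view: one test per branch, chosen by reading the code,
# including the boundary that separates the two internal paths.
assert shipping_cost(5) == 10.0   # exercises the True branch at its boundary
assert shipping_cost(6) == 12.0   # exercises the False branch just past it
```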
Advantages of Black-box testing
- Since the tester does not have to focus on the inner workings of the application, creating test cases is easier.
- Test case development is faster, as the tester need not spend time identifying the inner processes; the only focus is on the various paths a user may take through the GUI.
- It is simple to use, as it focuses only on valid and invalid inputs and ensures that correct outputs are obtained.
Drawbacks of Black-box testing
A constantly changing GUI makes script maintenance difficult, as the inputs may also be changing. Interacting with the GUI may make a test script fragile, so that it does not execute consistently.
Advantages of White-box testing
- Since the focus is on the inner workings, the tester can identify objects programmatically. This can be useful when the GUI is changing frequently.
- It can improve the stability and reusability of test cases, provided the objects of the application remain the same.
- By testing each path completely, it is possible for a tester to achieve thoroughness.
Drawbacks of White-box testing
Developing test cases for white-box testing involves a high degree of complexity and therefore requires highly skilled people. And although fragility is overcome to a great extent in white-box testing, a change in an object's name may still break the test script.
Posted by Sunflower at 4/29/2011 11:53:00 AM | 0 comments
Labels: Advantages, Application, Black box testing, Components, Disadvantages, Focus areas, Inputs, Internal, Outputs, Product, Quality, Results, Software, Software testing, Test cases, White box testing
Wednesday, April 20, 2011
What are the different programming guidelines?
Programming is a skill, and the programmer should be given the flexibility to implement the code. Guidelines that should be kept in mind during programming are:
- Pseudocode can be used to adapt the design to the chosen programming language. It is structured English that describes the flow of a program's code. The design is an outline of what is to be done in a component, to which the programmer adds his creativity and expertise. Code can then be rearranged or reconstructed with a minimum of rewriting.
- Control structure is based on messages being sent among objects of classes, system states, and changes in variables. It is important for the program structure to reflect the design's control structure. Modularity, coupling, and cohesion are good design characteristics that must be translated into program characteristics.
- Documentation guidelines are a set of written descriptions that explain to the reader what the programs do and how they do it. Two kinds of program documentation are created:
Internal documentation is descriptive text written directly within the source code (see the sketch below). Summary information is provided to describe the data structures, algorithms, and control flow.
External documentation consists of other documents that are not part of the source code but are related to it. In an object-oriented system, the preconditions and postconditions of the source code are identified in the external documentation.
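As a minimal sketch of internal documentation (the binary_search function is hypothetical), a summary comment sits directly within the source code and describes the data structure, the algorithm, and the pre- and postconditions:
```python
def binary_search(items, target):
    """Return the index of target in items, or -1 if it is absent.

    Data structure: items is a list sorted in ascending order.
    Algorithm: iterative binary search, halving the range each step.
    Precondition: items is sorted. Postcondition: the result is either -1
    or an index i such that items[i] == target.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2      # midpoint of the remaining range
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1            # discard the lower half
        else:
            high = mid - 1           # discard the upper half
    return -1
```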
Posted by Sunflower at 4/20/2011 12:32:00 PM | 0 comments
Labels: Code, Components, Control, Design, Documentation, External Documentation, Guidelines, Implementation, Internal, Languages, Messages, program, Programming, Pseudo-codes, Structure
Sunday, March 27, 2011
What is Quality and what are different perspectives used in understanding quality?
Quality is the totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs. Three perspectives are used in understanding quality:
- quality of the product
- quality of the process
- quality in the context of the business environment.
Quality of the Product
- The quality of the product has a different perspective for different people.
- End users assume that the software has quality if it gives them what they want, when they want it, all the time. Ease of use is also an important criterion for end users.
- Software engineers, by contrast, look at the internal characteristics rather than the external ones.
Quality of the Process
As software engineers, we value the quality of the software development process. Process guidelines suggest that by improving the software development process, we also improve the quality of the resulting product. Common process guidelines include the Capability Maturity Model Integration (CMMI), ISO 9000:2000, and Software Process Improvement and Capability Determination (SPICE).
Quality in the Context of Business Process
Quality is viewed in terms of the products and services being provided by the business in which the software is used. Improving the technical quality of the business process adds value to the business, i.e., the technical value of the software translates to business value.
To address quality issues:
- use quality standards.
- understand people involved in development process.
- understand the systematic biases in human nature.
- commit to quality.
- manage user requirements.
Posted by Sunflower at 3/27/2011 10:02:00 PM | 0 comments
Labels: Business process reengineering, Characteristics, Context, Development, End-users, External schema, Guidelines, Improvement, Internal, Perspectives, Product, Quality, Software
Monday, October 11, 2010
What is Black box testing and what are its advantages and disadvantages?
Black box testing is a test design method. It treats the system as a "black box", so it does not explicitly use knowledge of the internal structure. In other words, the test engineer does not need to know the internal workings of the black box. Black box testing focuses on the functionality of the module. It is also known as opaque box and closed box testing. While the term black box testing is the more common one, many people prefer the terms "behavioral" and "structural" for black box and white box testing respectively.
There are bugs that cannot be found using only black box testing or only white box testing. But if the test cases are extensive and the test inputs are drawn from a large sample space, it is possible to find the majority of the bugs through black box testing.
Basic functional or regression testing tools capture the results of black box tests in a script format. Once captured, these scripts can be executed against future builds of an application to verify that new functionality has not disabled previous functionality.
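As a minimal sketch of that workflow (the captured cases and the function under test are hypothetical), the recorded input/expected-output pairs form a script that is simply replayed against each new build:
```python
import math

# Captured black box cases: (input, expected output) pairs recorded
# from a known-good build of a hypothetical rounding routine.
captured_cases = [
    (0.0, 0),
    (2.4, 2),
    (2.5, 3),
    (-2.5, -3),
]

def round_half_away_from_zero(x):
    """The unit under test in the new build."""
    return int(math.floor(x + 0.5)) if x >= 0 else int(math.ceil(x - 0.5))

# Replay the captured script; any mismatch means new work broke old behavior.
for given, expected in captured_cases:
    actual = round_half_away_from_zero(given)
    assert actual == expected, f"regression: f({given}) = {actual}, expected {expected}"
print("all captured cases still pass")
```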
Advantages of Black Box Testing
- It is not important for the tester to be technical; he can be a non-technical person.
- This testing is most likely to find the bugs that the user would have found.
- Testing helps to identify vagueness and contradictions in the functional specifications.
- Test cases can be designed as soon as the functional specifications are complete.
Disadvantages of Black Box Testing
- There are chances of repetition of tests that are already done by the programmer.
- The test inputs need to come from a large sample space.
- It is difficult to identify all possible inputs in a limited testing time, so writing test cases is slow and difficult.
- There are chances of having unidentified paths during testing.
Posted by Sunflower at 10/11/2010 12:07:00 PM | 0 comments
Labels: Advantages, Behavioral, Black box testing, Design, Disadvantages, Internal, Internal Structure, Software, Software testing, Structural testing, Testing tools, Tests, Tools
Thursday, October 7, 2010
What is White Box Testing and why do we do it?
White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specifications and that all the internal components have been adequately exercised. In other words, white box testing tends to involve covering the specification in the code.
During white box testing, the control structure of the procedural design is used to derive test cases. Using the methods of WBT, a tester can derive test cases that guarantee that all independent paths within a module have been exercised at least once, exercise all logical decisions on both their true and false sides, execute all loops at their boundaries and within their operational bounds, and exercise internal data structures to ensure their validity.
White box testing is done because black box testing cannot uncover certain sorts of defects in the program. These defects are:
- Logic errors and incorrect assumptions, which are inversely proportional to the probability that a program path will be executed. Errors tend to creep into our work when we design and implement functions, conditions, or controls that are out of the mainstream of the program.
- The logical flow of the program is sometimes counterintuitive, meaning that our unconscious assumptions about the flow of control and data may lead to design errors that are uncovered only when path testing starts.
- Typographical errors are random; some of them will be uncovered by syntax-checking mechanisms, but others will go undetected until testing begins.
All we need to do in white box testing is define all the logical paths, develop test cases to exercise them, and evaluate the results, i.e., generate test cases to exercise the program logic exhaustively. We need to know the program well; the specifications, the code to be tested, and the related documents should be available to us.
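As a minimal sketch of deriving such cases (the classify function and its values are hypothetical), four tests exercise both logical decisions on their true and false sides, which also covers every independent path through the function:
```python
def classify(age, member):
    # the unit under test: two decisions, four independent paths
    if age < 18:
        rate = 0.5 if member else 0.8
    else:
        rate = 0.9 if member else 1.0
    return rate

# One test per path: each decision is exercised on its true and false side.
assert classify(10, True) == 0.5   # age<18 true,  member true
assert classify(10, False) == 0.8  # age<18 true,  member false
assert classify(30, True) == 0.9   # age<18 false, member true
assert classify(30, False) == 1.0  # age<18 false, member false
```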
Posted by Sunflower at 10/07/2010 12:53:00 PM | 0 comments
Labels: Bugs, Code, Code Coverage, Defects, Design, Errors, Specification, Glass Box testing, Internal, Paths, Procedural, Product, Software, Software testing, Structure, Tests, White box testing
Sunday, August 1, 2010
Statement Coverage Testing in White Box Testing
The purpose of white box testing is to make sure that the functionality is proper and to gather information on the code coverage. It tests the internal structure of the software. It is also known as structural testing, glass box testing, and clear box testing.
Statement coverage is the most basic form of code coverage. A statement is covered if it is executed. Note that a statement does not necessarily correspond to a line of code: multiple statements on a single line can confuse the issue, the reporting if nothing else.
- In this type of testing, the code is executed in such a manner that every statement of the application is executed at least once.
- It helps in assuring that all the statements execute without any side effect.
- The statement coverage criterion calls for having an adequate number of test cases for the program to ensure the execution of every statement at least once.
- In spite of achieving 100% statement coverage, there is every likelihood of having many undetected bugs, as the sketch below shows.
- A coverage report indicating 100% statement coverage can mislead the manager into feeling satisfied and terminating further testing, which can lead to releasing defective code into mass production.
- We cannot view 100% statement coverage as sufficient to build a reasonable amount of confidence in the correct behavior of the application.
- Since 100% statement coverage is a weak guarantee and tends to become expensive, developers choose a better testing technique called branch coverage.
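As a minimal sketch of that weakness (the scale function and its values are hypothetical), the single test below achieves 100% statement coverage yet misses a division-by-zero bug that appears only when the decision takes its false side; branch coverage would demand that second test:
```python
def scale(x, divisor=0):
    if x > 0:
        divisor = x          # the only statement inside the branch
    return 100 / divisor     # crashes when the branch is skipped

# This one test executes every statement, so statement coverage is 100%...
assert scale(4) == 25.0

# ...but branch coverage also requires the False side of the decision,
# which exposes the ZeroDivisionError:
# scale(-1)  # uncommenting this reveals the bug statement coverage missed
```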
Posted by Sunflower at 8/01/2010 04:36:00 PM | 0 comments
Labels: Code, Coverage, Features, Functionality, Internal, Purpose, Software, Statement Coverage, Testing approach, White box testing
Saturday, July 17, 2010
What can white box testing be used for, and what tools are used for white box testing?
White box testing (WBT) is also called structural or glass box testing. It deals with the internal logic and structure of the code. A software engineer can design test cases that exercise independent paths within a module or unit, exercise logical decisions on both their true and false sides, execute loops at their boundaries and within their operational bounds, and exercise internal data structures to ensure their validity. White box testing can be used for:
- looking into the internal structures of a program.
- testing the detailed design specifications prior to writing the actual code, using static analysis techniques.
- organizing unit and integration test processes.
- testing the program source code using static analysis and dynamic analysis techniques.
Tools used for White Box testing:
- provide run-time error and memory leak detection.
- record the exact amount of time the application spends in any given block of code, for the purpose of finding inefficient code bottlenecks.
- pinpoint the areas of the application that have and have not been executed (see the sketch below).
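As a minimal sketch of that last capability (standard-library Python only; the sign function is a hypothetical unit under measurement), sys.settrace can record exactly which lines of a function executed:
```python
import sys

executed = set()

def tracer(frame, event, arg):
    if event == "line":
        executed.add(frame.f_lineno)   # record each executed line number
    return tracer

def sign(x):                           # the function being measured
    if x > 0:
        return "positive"
    return "non-positive"

sys.settrace(tracer)
sign(5)                                # exercises only the True branch
sys.settrace(None)

# The line number of 'return "non-positive"' is absent: never executed.
print(sorted(executed))
```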
The first step in white box testing is to comprehend and analyze the available design documentation, source code, and other relevant development artifacts, so knowing what makes the software secure is a fundamental requirement. Second, to create tests that exploit the software, a tester must think like an attacker. Third, the tester must be able to perform the testing effectively.
Posted by Sunflower at 7/17/2010 01:05:00 PM | 0 comments
Labels: Control Structure Testing, Glass Box testing, Internal, Logical, Module, Software testing, Structural testing, Tools, White box testing, Black box testing
Tuesday, August 4, 2009
DBMS Three-Schema Architecture and Data Independence
WHAT IS A DBMS?
- To be able to carry out operations like insertion, deletion, and retrieval, the database needs to be managed by a substantial piece of software; this software is usually called a Database Management System (DBMS).
- A DBMS is usually a very large software package that enables many different tasks, including the provision of facilities that enable the user to access and modify information in the database.
- Languages are needed for defining the data and for manipulating and retrieving the data stored in the DBMS; these are called the Data Description Language (DDL) and the Data Manipulation Language (DML) respectively.
An architecture for database systems, called the three-schema architecture, was proposed to help achieve and visualize the important characteristics of the database approach.
THE THREE-SCHEMA ARCHITECTURE:
The goal of the three-schema architecture is to separate the user applications from the physical database. In this architecture, schemas can be defined at three levels:
1. Internal level or Internal schema : Describes the physical storage structure of the database. The internal schema uses a physical data model and describes the complete details of data storage and access paths for the database.
2. Conceptual level or Conceptual schema : Describes the structure of the whole database for a community of users. It hides the details of the physical storage structures and concentrates on describing entities, data types, relationships, user operations, and constraints. An implementation data model can be used at this level.
3. External level or External schema : It includes a number of external schemas or user views. Each external schema describes the part of the database that a particular user is interested in and hides the rest of the database from that user. An implementation data model can be used at this level.
IMPORTANT TO REMEMBER:
Data and meta-data
- The three schemas are only meta-data (descriptions of data).
- Data actually exists only at the physical level.
Mapping
- The DBMS must transform a request specified on an external schema into a request against the conceptual schema, and then into a request on the internal schema.
- This requires information in the meta-data on how to accomplish the mapping among the various levels.
- The mapping overhead is time-consuming, leading to inefficiencies.
- Few DBMSs have implemented the full three-schema architecture.
DATA INDEPENDENCE
The disjointing of data descriptions from the application programs (or user interfaces) that use the data is called data independence. Data independence is one of the main advantages of a DBMS. The three-schema architecture provides the concept of data independence, which means that upper levels are unaffected by changes to lower levels; this architecture makes it easier to achieve true data independence. There are two kinds of data independence:
- Physical data independence
* The ability to modify the physical schema without causing application programs to be rewritten.
* Modifications at this level are usually to improve performance.
- Logical data independence
* The ability to modify the conceptual schema without causing application programs to be rewritten.
* Usually done when the logical structure of the database is altered.
Logical data independence is harder to achieve, as application programs are usually heavily dependent on the logical structure of the data. An analogy can be made to abstract data types in programming languages.
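As a minimal sketch of logical data independence (standard-library Python with an in-memory SQLite database; all table and view names are hypothetical), the application program queries a view, so restructuring the conceptual schema underneath does not force the application query to be rewritten:
```python
import sqlite3

db = sqlite3.connect(":memory:")

# Original conceptual schema: one table, exposed to applications via a view.
db.execute("CREATE TABLE staff (id INTEGER, name TEXT, salary REAL)")
db.execute("CREATE VIEW staff_view AS SELECT id, name, salary FROM staff")
db.execute("INSERT INTO staff VALUES (1, 'Ada', 100.0)")

app_query = "SELECT name, salary FROM staff_view"   # what applications run
print(db.execute(app_query).fetchall())             # [('Ada', 100.0)]

# Conceptual schema changes: salary moves into its own table.
db.execute("DROP VIEW staff_view")
db.execute("CREATE TABLE payroll (staff_id INTEGER, salary REAL)")
db.execute("INSERT INTO payroll SELECT id, salary FROM staff")
db.execute("""CREATE VIEW staff_view AS
              SELECT s.id, s.name, p.salary
              FROM staff s JOIN payroll p ON p.staff_id = s.id""")

# The application query is unchanged: logical data independence.
print(db.execute(app_query).fetchall())             # [('Ada', 100.0)]
```
In a real migration the now-redundant salary column would also be dropped from the staff table; the point is only that the external schema (the view) shields the application from the change to the conceptual schema.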
Posted by Sunflower at 8/04/2009 08:37:00 AM | 0 comments
Labels: Architecture, Conceptual schema, Data independence, Database Management system, Databases, DBMS, External schema, Internal, Logical, Physical, three-schema architecture