


Sunday, July 14, 2013

What is Polling?

- Polling is often referred to as a polled operation.
- More precisely, polling is the active sampling of the status of an external device by a client program, carried out as a synchronous activity.
- The most common use of polling is in input and output operations.
- Occasionally, polling is also called software-driven I/O or simply polled I/O.
- Polling is often carried out together with busy waiting, with which it is sometimes used synonymously.
- Polling is then referred to as busy-wait polling.
- In this case, whenever an input/output operation needs to be carried out, the system simply keeps checking the status of the device required for the operation until that device is idle.
- When the device becomes idle, it is accessed by the I/O operation.
- Polling may also refer to a scheme in which the status of the device is checked repeatedly so that it can be accessed once it is idle.
- If the device is occupied, the system returns to some other pending task.
- In this case, less CPU time is wasted than in busy waiting.
- Even so, polling is not a better alternative than interrupt-driven I/O.
- In very simple, single-purpose systems, busy-wait polling is perfectly fine if the system cannot take any useful action until the I/O device has been accessed.
- Traditionally, however, polling was seen as a consequence of simple hardware and of operating systems that do not support multitasking.
- Polling usually works intimately with low-level hardware.
- For example, a parallel printer port can be polled to check whether or not it is ready to accept another character.
- This involves examining just a single bit.
- The bit being examined represents the high or low voltage state of a single wire in the printer cable at the time of reading.
- The I/O instruction that reads the status byte transfers the voltage states of the wires directly into eight flip-flop circuits.
- These eight flip-flops together constitute one byte of a CPU register (a sketch of this kind of bit polling is given below).
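
As a rough illustration of this kind of bit polling, here is a minimal Python sketch (Python is used purely for readability); read_status_register and the bit position are hypothetical stand-ins, not a real hardware API:

# Minimal busy-wait polling sketch (illustrative only).
PRINTER_READY_BIT = 0x80  # assume bit 7 means "ready for the next character"

def wait_until_ready(read_status_register):
    """Busy-wait until the printer's ready bit is set."""
    while True:
        status = read_status_register()   # sample the status byte
        if status & PRINTER_READY_BIT:    # examine a single bit
            return                        # device is ready
        # otherwise keep looping: this is the busy wait

# Example with a fake register that becomes ready on the third read.
samples = iter([0x00, 0x00, 0x80])
wait_until_ready(lambda: next(samples))
print("printer ready: the next character can be written")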

Polling also has a number of disadvantages. 
- One is that only a limited amount of time is available for servicing the I/O devices, and polling has to be completed within that time.
- In some cases, however, there are so many devices to check that the polling time exceeds the available limit.
- The host keeps reading the busy bit until it is clear, i.e., until the device is idle.
- When the device is idle, the host writes the command into the command register and the data into the data-out register.
- The host then sets the command-ready bit to 1.
- The controller sets the busy bit once it notices that the command-ready bit has been set.
- After reading the command register, the controller carries out the required I/O operation on the device.
- If, on the other hand, the read bit has been set to one, the controller loads the device data into the data-in register.
- This data is then read by the host.
- Once the whole operation has been completed, the command-ready bit is cleared by the controller.
- The error bit is also cleared to show that the operation has completed successfully.
- Finally, the busy bit is cleared as well.
- Polling can also be seen in terms of a master-slave scenario in which the master keeps inquiring about the working status of the slave devices, i.e., whether they are free or engaged. A host-side sketch of this handshake is given below.
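
A minimal, host-side sketch of this polling handshake, written in Python with a made-up Controller object whose attribute names (busy, command_ready, error, command, data_out) are illustrative assumptions only:

class Controller:
    def __init__(self):
        self.busy = False           # busy bit
        self.command_ready = False  # command-ready bit
        self.error = False          # error bit
        self.command = None         # command register
        self.data_out = None        # data-out register

def host_write_byte(ctrl, byte):
    # 1. Poll the busy bit until the device is idle.
    while ctrl.busy:
        pass  # busy-wait

    # 2. Write the command and the data into the device registers.
    ctrl.command = "write"
    ctrl.data_out = byte

    # 3. Set the command-ready bit. The controller will notice it, set the
    #    busy bit, perform the I/O, then clear command-ready, clear the
    #    error bit on success, and finally clear the busy bit.
    ctrl.command_ready = True

host_write_byte(Controller(), 0x41)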


Sunday, April 28, 2013

What is fragmentation? What are different types of fragmentation?


In the field of computer science, fragmentation is an important factor concerning the performance of a system. It has a great role to play in bringing down the performance of computers. 

What is Fragmentation?

- It can be defined as a phenomenon involving the inefficient use of storage space, which in turn reduces the capacity of the system and also brings down its performance.
- This phenomenon leads to wastage of memory, and the term itself essentially means 'wasted space'.
- Fragmentation takes three different forms, as mentioned below:
  1. External fragmentation
  2. Internal fragmentation and
  3. Data fragmentation
- These forms of fragmentation may be present in conjunction with each other or in isolation.
- In some cases, fragmentation is accepted in exchange for simplicity and speed of the system.

Basic principle behind the fragmentation concept:
- The system allocates memory in the form of blocks or chunks whenever some computer program requests it.
- When the program has finished executing, the allocated chunks are returned to the pool of free memory.
- The size of the memory chunk required varies from program to program.
- During its lifetime, a program may request any number of memory chunks and free them after use.
- When a program begins execution, the memory areas that are free to be allocated are long and contiguous.
- After prolonged usage, these contiguous memory regions get fragmented into smaller and smaller pieces.
- Eventually a stage is reached where it becomes almost impossible to serve the large memory demands of a program.

Types of Fragmentation


1. External Fragmentation:
- This type of fragmentation occurs when the available free memory is divided into small blocks that are interspersed with allocated memory.
- Certain memory allocation algorithms have the drawback that they are at times unable to arrange the memory used by programs in a way that minimizes wastage.
- This leads to an undesirable situation where, even though free memory is available, it cannot be used effectively, because it is divided into pieces so small that none of them alone can satisfy a program's memory demand (see the sketch after this list).
- Since the unusable storage here lies outside the allocated memory regions, this type of fragmentation is called external fragmentation.
- This type of fragmentation is also very common in file systems, since many files of different sizes are created and deleted over time.
- The effect is worse if the deleted file was stored in many small pieces.
- This is because deleting it leaves behind similarly small free chunks that may be of no use.
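
A small sketch of the situation described above, with made-up block sizes: the free memory adds up to more than the request, yet no single free block can satisfy it:

# Illustrative numbers only: free memory totals 36 units, but the largest
# contiguous free block is 12, so a request for 20 units cannot be served.
free_blocks = [8, 12, 6, 10]   # sizes of scattered free blocks
request = 20

total_free = sum(free_blocks)      # 36
largest_block = max(free_blocks)   # 12

if largest_block < request <= total_free:
    print("external fragmentation: enough free memory in total,")
    print("but no single block is large enough for the request")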

2. Internal Fragmentation: 
- There are certain rules that govern the process of memory allocation.
- These rules can lead to the allocation of more memory than is actually required.
- For example, a common rule is that the memory allocated to a program must be a multiple of 4, 8 or 16 bytes. So if a program actually requires 19 bytes, it gets 20 bytes (see the sketch after this list).
- This leads to the wastage of the extra 1 byte of memory.
- In this case the wasted memory is unusable and is contained within the allocated region itself, which is why this type of fragmentation is called internal fragmentation.
- In computer forensic investigations, this slack space is a useful source of evidence.
- However, it is often difficult to reclaim internal fragmentation.
- Making a change in the design is the most effective way of preventing it.
- Memory pools in dynamic memory allocation are among the most effective methods of cutting down internal fragmentation.
- In a memory pool, the space overhead is spread over a large number of objects.
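
The rounding rule in the example above can be sketched in a few lines of Python (the alignment value of 4 matches the 19-byte example in the text):

def allocated_size(requested: int, alignment: int = 4) -> int:
    """Round a request up to the next multiple of `alignment`."""
    return ((requested + alignment - 1) // alignment) * alignment

requested = 19
allocated = allocated_size(requested)            # 20
internal_fragmentation = allocated - requested   # 1 wasted byte
print(requested, allocated, internal_fragmentation)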

3. Data Fragmentation: 
- This occurs when a piece of data is broken up into many pieces that lie far apart from each other.


Wednesday, March 21, 2012

Data flow testing is a white box testing technique - Explain?

A program is said to be in an active state whenever there is some data flowing through it. Without data flowing around the whole program, it would not be possible for a software system or application to do anything.

Hence, we conclude that data flow is an extremely important aspect of any program, since it is what keeps a program going. This data flow needs to be tested like any other aspect of the software system or application, and therefore this whole article is dedicated to data flow testing.

What is Data Flow Testing?

- Data flow testing is categorized under the white box testing techniques, since the tester needs to have an in-depth knowledge of the whole software system or application.

- Data flow testing cannot be carried out without a control flow graph, since without that graph the testing cannot explore the unreasonable or unexpected things, i.e., the anomalies, that can influence the data of the software system or application.

- Taking these anomalies into consideration helps in defining strategies for the selection of test paths, which play a great role in filling the gap between branch or statement testing and complete path testing.

- Data flow testing implements a whole set of testing strategies, chosen in the way mentioned above, for exploring the sequence of events that concern the use of data.

- It is a way of determining whether or not every data object has been initialized before it is used, and whether or not all data objects are used at least once during the execution of the program.

Classification of Data types
The data objects are classified into various types based upon their use (see the annotated sketch after this list):

- Defined, created or initialized data objects, denoted by d.
- Killed, undefined or released data objects, denoted by k.
- Used data objects (in predicates, calculations, etc.), denoted by u.
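
As a hypothetical illustration of these categories, the comments below mark where a variable is defined (d), used (u) and killed (k); the function itself is invented for the example:

def scaled_total(values, factor):
    total = 0                 # d: total is defined and initialized
    for v in values:          # d: v is defined; u: values is used
        total = total + v     # u: total and v are used; d: total is redefined
    result = total * factor   # u: total and factor used; d: result defined
    del total                 # k: total is killed/released
    return result             # u: result is used

print(scaled_total([1, 2, 3], 2))  # 12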

Critical Elements for Data Flow Testing

- The critical elements for data flow testing are arrays and pointers.

- These elements should not be underestimated, since underestimation may fail to include some DU pairs; nor should they be overestimated, since overestimation may introduce infeasible test obligations.

- Underestimation is preferable to overestimation, since overestimation causes more expense to the organization.

- Data flow testing also aims at distinguishing between the important and the not-so-important paths.

- During data flow testing, pragmatic compromises often need to be made, since there are so many unpredictable properties and an exponential blow-up in the number of paths.

Anomaly Detection under Data Flow Testing

There are various types of anomaly detection that are carried out under data flow testing:

1. Static anomaly detection
This analysis is carried out on the source code of the software program without the actual execution.

2. Dynamic anomaly detection
This is just the opposite of the static testing i.e., it is carried out on a running program.

3. Anomaly detection via compilers
Such detection is possible through static analysis. Certain compilers, such as optimizing compilers, can even detect dead variables. However, static analysis by itself cannot detect all dead variables, because determining whether code is reachable is unsolvable in the general case.

Other factors:
There are several other factors that play a great role in data flow testing:
1. Data flow modelling based on control flow graph
2. Simple path segments
3. Loop free path segments
4. DU path segments
5. Def – use associations
6. Definition clear paths
7. Data flow testing strategies


Thursday, July 28, 2011

What are the characteristics of testing and a good test?

The goal of testing is to find errors. Testing should exhibit a set of characteristics that achieve this goal with minimum effort. The characteristics of software testing include:
- How easily can the software be tested? i.e. testability.
- How efficiently can it be tested? i.e. operability.
- What you see is what you test, i.e. observability.
- How much can we control the software so that testing can be automated and optimized? i.e. controllability.
- Isolating problems and performing smarter retesting by controlling the scope of testing, i.e. decomposability.
- The simpler the program, the easier it is to test, i.e. simplicity.
- The fewer the changes, the fewer the disruptions to testing, i.e. stability.
- The more information we have, the smarter we will test, i.e. understandability.

A good test has the following characteristics:
- A good test has a high probability of finding an error.
- A good test is not redundant.
- A good test has the highest likelihood of uncovering a whole class of errors (for example, a boundary-value test, as sketched below).
- A good test should be neither too simple nor too complex.
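
A small, made-up illustration of a test aimed at a whole class of errors: one compact boundary-value check that would catch any off-by-one mistake in a range rule (the function and its limits are hypothetical):

def is_valid_age(age: int) -> bool:
    """Hypothetical rule: accept ages from 0 to 120 inclusive."""
    return 0 <= age <= 120

# One compact boundary test catches the whole class of off-by-one
# mistakes (for example, accidentally writing < instead of <=).
assert is_valid_age(0)
assert is_valid_age(120)
assert not is_valid_age(-1)
assert not is_valid_age(121)
print("boundary cases pass")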

There are two ways to test an engineered product:
- Knowing the internal workings of the product, tests can be conducted to ensure that the internal operations are performed according to specifications and that all internal components are exercised properly.
- Knowing the output or the function for which the product is designed, tests can be conducted to demonstrate that each function is fully operational while checking for errors at the same time.


Thursday, July 14, 2011

What are the principles of Design Modeling?

Design models provide a concrete specification for the construction of the software. They represent the characteristics of the software that help the practitioners to construct it effectively. The design model is, in effect, a dummy model of the thing that is to be built. In software systems, the design model provides different views of the system.

When a set of design principles is applied, it creates a design that exhibits both internal and external quality factors, and a high-quality design can be achieved.

- The work of the analysis model is to describe the information domain of the problem, the user functions, and the analysis classes with their methods. The work of the design model is to translate information from the analysis model into an architecture. The design model should be traceable to the analysis model.
- A good data design simplifies program flow and makes the design and implementation of software components easier. It is as important as the design of the processing functions.
- In design modeling, the interfaces should be designed properly. This makes integration much easier and increases efficiency.
- Design should always start by considering the architecture of the system that is to be built.
- The end user plays an important role in developing a software system. The user interface is the visible reflection of the software, and it should be designed in terms of the end user.
- Component functionality should focus on one and only one function or sub-function.
- In design modeling, the coupling among components should be as low as is needed and reasonable.
- Design models should be able to give information to developers, testers, and the people who will maintain the software. In other words, they should be easily understandable.
- Design should be iterative in nature. With each iteration, the design should move towards simplicity.


Wednesday, July 13, 2011

What are the principles of Analysis Modeling?

Analysis models represent customer requirements, while design models provide a concrete specification for the construction of the software. In analysis models, software is depicted in three domains: the information domain, the functional domain and the behavioral domain.
Analysis modeling focuses on three attributes of software: the information to be processed, the function to be delivered and the behavior to be exhibited. There is a set of principles that relate to analysis methods:

- The data that flows into and out of the system, together with the data stores, collectively form the information domain. This information domain should be well understood and represented.
- The functions of the software exert control over internal and external elements. These functions need to be well defined.
- The software is influenced by its external environment and behaves in a certain manner. This behavior should be well defined.
- Partitioning is a key strategy in analysis modeling. Divide the models depicting information, function and behavior in a manner that uncovers detail in a hierarchical way.
- The description of the problem from the end user's perspective is the starting point of analysis modeling. The task should move from essential information toward implementation detail.


Friday, April 29, 2011

Explain Black box and White box testing? What are their advantages and disadvantages?

For complete testing of a software product, both black box and white box testing are necessary.

Black-box testing
This testing looks at the available inputs for an application and the expected outputs that should result from each input. It has no relation with the inner workings of the application, the process undertaken, or any other internal aspect of the application. A search engine is a very good example of a black box system. We enter the text that we want to search for and, by pressing "search", we get the results. Here we are not aware of the actual process that has been implemented to get the results; we simply provide the input and get the results.

White-box testing
This testing looks into the complex inner workings of the application; it tests the processes undertaken and other internal aspects of the application. While black box testing is mainly concerned with the inputs and outputs of the application, white box testing helps us to see beyond that, i.e., inside the application. White-box testing requires a degree of sophistication that is not needed for black-box testing, as the tester is required to interact with the objects used to develop the application rather than simply having access to the user interface. In-circuit testing is a good example of white-box system testing, where the tester looks at the interconnections between different components of the application and verifies the proper functioning of each internal connection. We can also consider the example of an auto mechanic who examines the inner workings of a vehicle to ensure that all the components are working correctly and that the vehicle functions properly.

The basic difference between black-box and white-box testing lies in the areas of focus they choose. We can simply say that black-box testing is focused on results: if an action is performed and the desired result is obtained, then the process that has actually been used is irrelevant. White-box testing, on the other hand, focuses on the internal working of an application and is considered complete only when all the components have been tested for proper functioning.

Advantages of Black-box testing
- Since the tester does not have to focus on the inner workings of the application, creating test cases is easier.
- Test case development is faster, as the tester need not spend time identifying the inner processes; the only focus is on the various paths that a user may take through the GUI.
- It is simple to use, as it focuses only on valid and invalid inputs and ensures that correct outputs are obtained (see the example below).
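
A minimal sketch of black-box style test cases, pairing valid and invalid inputs with expected outputs for a hypothetical parse_percentage function; nothing here is tied to any real tool or library:

def parse_percentage(text: str) -> int:
    """Hypothetical function under test: '42%' -> 42, invalid input -> -1."""
    if text.endswith("%") and text[:-1].isdigit():
        return int(text[:-1])
    return -1

# Black-box test cases: only inputs and expected outputs, with no
# knowledge of how parse_percentage works internally.
cases = [
    ("0%", 0),        # valid input
    ("100%", 100),    # valid input
    ("abc", -1),      # invalid input
    ("50", -1),       # invalid: missing the % sign
]

for given, expected in cases:
    assert parse_percentage(given) == expected
print("all black-box cases passed")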

Drawbacks of Black-box testing
A constantly changing GUI makes script maintenance difficult, as the inputs may also be changing. Interacting with the GUI may make the test scripts fragile, so they may not execute consistently.

Advantages of White-box testing
- Since the focus is on the inner workings, the tester can identify objects programmatically. This can be useful when the GUI is frequently changing.
- It can improve the stability and reusability of test cases, provided the objects of the application remain the same.
- By testing each path completely, it is possible for a tester to achieve thoroughness (see the sketch below).
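
A rough white-box style illustration in Python: the tests are chosen by looking at the code so that both branches of a hypothetical function are exercised:

def classify(n: int) -> str:
    """Hypothetical function under test."""
    if n % 2 == 0:
        return "even"
    return "odd"

# White-box tests: chosen by inspecting the code so that each branch
# (each path through the if statement) is executed at least once.
assert classify(4) == "even"   # takes the 'true' branch
assert classify(7) == "odd"    # takes the 'false' branch
print("both paths exercised")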

Drawbacks of White-box testing
Developing test cases for white-box testing involves a high degree of complexity, and therefore it requires highly skilled people to develop the test cases. Although fragility is overcome to a great extent in white-box testing, a change in an object's name may still break the test script.


Wednesday, April 20, 2011

What are different programming guidelines ?

Programming is a skill and the programmer should be given the flexibility to implement the code. Guidelines that should be kept in mind during programming are:

- Pseudocode can be used to adapt the design to the chosen programming language. It is structured English that describes the flow of the program code. The design is an outline of what is to be done in a component, to which the programmer adds his creativity and expertise. Code can then be rearranged or reconstructed with a minimum of rewriting.

- Control structure is based on messages being sent among objects of classes, system states and changes in variables. It is important for the program structure to reflect the design's control structure. Modularity, coupling and cohesion are good design characteristics that must be translated into program characteristics.

- Documentation guidelines cover the set of written descriptions that explain to the reader what the programs do and how they do it. Two kinds of program documentation are created (an example of internal documentation is sketched after this list):

Internal documentation is descriptive text written directly within the source code. Summary information is provided to describe its data structures, algorithms, and control flow.

External documentation consists of other documents that are not part of the source code but are related to it. In an object-oriented system, the preconditions and postconditions of the source code are identified here.
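
A small, made-up example of internal documentation in the sense described above: a docstring summarizing the data structures, algorithm and control flow of a routine (all names are hypothetical):

def merge_sorted(a: list, b: list) -> list:
    """Merge two already-sorted lists into one sorted list.

    Data structures: two input lists 'a' and 'b', one output list.
    Algorithm: two-pointer merge, as in merge sort, O(len(a) + len(b)).
    Control flow: a single loop advances whichever pointer holds the
    smaller current element, then the remainder of either list is appended.
    """
    i, j, merged = 0, 0, []
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    merged.extend(a[i:])   # append any leftover elements
    merged.extend(b[j:])
    return merged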


Sunday, March 27, 2011

What is Quality and what are different perspectives used in understanding quality?

Quality is the totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs. Three perspectives are used in understanding quality:
- quality of the product
- quality of the process
- quality in the context of the business environment.

Quality of the Product
- The quality of the product has a different perspective for different people.
- End users assume that the software has quality if it gives them what they want, when they want it, all the time. Ease of use is also an important criterion for end users.
- Software engineers, on the other hand, look at the internal characteristics rather than the external ones.

Quality of the Process
As software engineers, we value the quality of the software development process. Process guidelines suggest that by improving the software development process, we also improve the quality of the resulting product. Common process guidelines include Capability Maturity Model Integration (CMMI), ISO 9000:2000, and Software Process Improvement and Capability Determination (SPICE).

Quality in the Context of the Business Environment
Quality is viewed in terms of the products and services being provided by the business in which the software is used. Improving the technical quality of the business process adds value to the business, i.e., the technical value of the software translates to business value.

To address quality issues:
- use quality standards.
- understand people involved in development process.
- understand the systematic biases in human nature.
- commit to quality.
- manage user requirements.


Monday, October 11, 2010

What is Black box testing and what are its advantages and disadvantages ?

Black box testing is a test design method. It treats the system as a "black box", so it does not explicitly use knowledge of the internal structure. In other words, the test engineer does not need to know the internal workings of the black box. Black box testing focuses on the functionality of the module. It is also known as opaque box and closed box testing. While the term black box testing is the more commonly used one, many people prefer the terms "behavioral" and "structural" for black-box and white-box testing respectively.
There are bugs that cannot be found using only black box testing or only white box testing. However, if the test cases are extensive and the test inputs are drawn from a large sample space, it is usually possible to find the majority of the bugs through black box testing.

The basic functional or regression testing tools capture the results of black box tests in a script format. Once captured, these scripts can be executed against future builds of an application to verify that new functionality has not disabled previous functionality (a small sketch of this idea follows).
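A very rough sketch of that capture-and-replay idea, using a plain dictionary as the recorded "script" of expected outputs; everything here is hypothetical and independent of any real testing tool:

# Hypothetical system under test, in two builds.
def app_build_1(x):
    return x * 2

def app_build_2(x):      # a future build that should preserve old behavior
    return x + x

# "Capture" phase: record inputs and the outputs of the known-good build.
recorded = {x: app_build_1(x) for x in [0, 1, 5, 10]}

# "Replay" phase: run the same inputs against the new build and compare.
for x, expected in recorded.items():
    assert app_build_2(x) == expected, f"regression for input {x}"
print("no regressions detected")
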

Advantages of Black Box Testing
- It is not important for the tester to be technical; the tester can be a non-technical person.
- This testing is most likely to find the same bugs that the user would find.
- Testing helps to identify vagueness and contradictions in the functional specifications.
- Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing
- There are chances of repeating tests that have already been done by the programmer.
- The test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time. So, writing test cases is slow and difficult.
- There are chances of leaving some paths unidentified during testing.


Thursday, October 7, 2010

What is White Box Testing and why do we do it?

White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specifications and that all the internal components have been adequately exercised. In other words, white box testing tends to involve covering the specification in the code.

The control structure of the procedural design is used to derive test cases during white box testing. Using the methods of WBT, a tester can derive test cases that guarantee that all independent paths within a module have been exercised at least once, exercise all logical decisions on both their true and false sides, execute all loops at their boundaries and within their operational bounds, and exercise internal data structures to ensure their validity. A small sketch of the loop-boundary idea is given below.
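As an illustrative sketch of executing loops at their boundaries and within their operational bounds, the made-up tests below run a simple loop zero times, exactly once, and many times:

def total(values):
    """Hypothetical function under test: sum a list with an explicit loop."""
    s = 0
    for v in values:
        s += v
    return s

# Loop executed zero times (boundary: empty input).
assert total([]) == 0
# Loop executed exactly once (boundary: single element).
assert total([7]) == 7
# Loop executed many times (within operational bounds).
assert total(list(range(1, 101))) == 5050
print("loop boundaries exercised")
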

White box testing is done because black box testing alone cannot uncover certain sorts of defects in the program. These defects are:
- Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. Errors tend to creep into our work when we design and implement functions, conditions or controls that are outside the mainstream of the program.
- The logical flow of the program is sometimes counter-intuitive, meaning that our unconscious assumptions about the flow of control and data may lead to design errors that are uncovered only when path testing starts.
- Typographical errors are random; some of them will be uncovered by syntax-checking mechanisms, but others will go undetected until testing begins.

All we need to do in white box testing is to define all logical paths, develop test cases to exercise them, and evaluate the results, i.e., generate test cases to exercise the program logic exhaustively. For this we need to know the program well; the specifications, the code to be tested, and the related documents should be available to us.


Sunday, August 1, 2010

Statement Coverage Testing in White Box Testing

The purpose of white box testing is to make sure that the functionality is proper and to provide information on the code coverage. It tests the internal structure of the software. It is also known as structural testing, glass box testing and clear box testing.

Statement coverage is the most basic form of code coverage. A statement is covered if it is executed. Note that a statement does not necessarily correspond to a line of code; multiple statements on a single line can confuse the issue, in the reporting if nothing else.

- In this type of testing, the code is executed in such a manner that every statement of the application is executed at least once.
- It helps in assuring that all the statements execute without any unexpected side effect.
- The statement coverage criterion calls for having an adequate number of test cases for the program to ensure the execution of every statement at least once.
- In spite of achieving 100% statement coverage, there is every likelihood of having many undetected bugs (see the sketch below).
- A coverage report indicating 100% statement coverage can mislead the manager into feeling satisfied, with the false temptation of terminating further testing, which can lead to releasing defective code into production.
- We cannot view 100% statement coverage as sufficient to build a reasonable amount of confidence in the correct behavior of the application.
- Since achieving 100% statement coverage is expensive and still insufficient, developers often choose a stronger testing technique called branch coverage.
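
A small illustration of why 100% statement coverage can still miss problems: the single test below executes every statement of this made-up function, yet the untaken branch hides questionable behavior that branch coverage would force us to exercise:

def safe_divide(a, b):
    """Hypothetical function: returns a / b, or silently 0 when b == 0."""
    result = 0
    if b != 0:
        result = a / b
    return result

# This single test executes every statement of safe_divide, so a
# coverage report shows 100% statement coverage.
assert safe_divide(10, 2) == 5

# Yet the b == 0 branch was never taken. If the requirement were to
# signal an error for b == 0, this silent 'return 0' defect would go
# unnoticed. Branch coverage forces a second test with b == 0:
assert safe_divide(10, 0) == 0   # exposes the questionable behavior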


Saturday, July 17, 2010

What can white box testing be used for, and which tools are used for it?

White box testing (WBT) is also called structural or glass box testing. It deals with the internal logic and structure of the code. A software engineer can design test cases that exercise independent paths within a module or unit, exercise logical decisions on both their true and false sides, execute loops at their boundaries and within their operational bounds, and exercise internal data structures to ensure their validity. White box testing can be used for:
- looking into the internal structure of a program.
- testing the detailed design specifications prior to writing the actual code, using static analysis techniques.
- organizing unit and integration test processes.
- testing the program source code using static analysis and dynamic analysis techniques.

Tools used for White Box testing:
- Provide run-time error and memory leak detection.
- Record the exact amount of time the application spends in any given block of code for the purpose of finding inefficient code bottlenecks.
- Pinpoint areas of the application that have and have not been executed.

The first step in white box testing is to comprehend and analyze the available design documentation, source code, and other relevant development artifacts, so knowing what makes software secure is a fundamental requirement. Second, to create tests that exploit the software, a tester must think like an attacker. Third, the testing itself must be performed effectively.


Tuesday, August 4, 2009

DBMS Three-Schema Architecture and Data Independence

WHAT IS DBMS ?
- To be able to carry out operations like insertion, deletion and retrieval, the database needs to be managed by a substantial piece of software; this software is usually called a Database Management System (DBMS).
- A DBMS is usually a very large software package that enables many different tasks including the provision of facilities to enable the user to access and modify information in the database.
- Two kinds of languages are needed: one for describing the data stored in the DBMS and one for manipulating and retrieving it. These languages are called the Data Description Language (DDL) and the Data Manipulation Language (DML) respectively.

An architecture for database systems, called the three-schema architecture, was proposed to help achieve and visualize the important characteristics of the database approach.

THE THREE-SCHEMA ARCHITECTURE:
The goal of the three-schema architecture is to separate the user applications from the physical database. In this architecture, schemas can be defined at three levels:
1. Internal level or Internal schema : Describes the physical storage structure of the database. The internal schema uses a physical data model and describes the complete details of data storage and access paths for the database.
2. Conceptual level or Conceptual schema : Describes the structure of the whole database for a community of users. It hides the details of physical storage structures and concentrates on describing entities, data types, relationships, user operations, and constraints. Implementation data model can be used at this level.
3. External level or External schema : It includes a number of external schemas or user views. Each external schema describes the part of the database that a particular user is interested in and hides the rest of the database from that user. An implementation data model can be used at this level. (A sketch of an external view as a simple projection over the conceptual level is given below.)
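
As a toy sketch of the external level, the Python snippet below treats a list of dictionaries as the conceptual schema and a projection function as one user's external view; the table, field names and values are made up for illustration:

# Conceptual level: the logical structure of the whole database (toy data).
employees = [
    {"id": 1, "name": "Asha", "dept": "HR", "salary": 52000},
    {"id": 2, "name": "Ravi", "dept": "IT", "salary": 61000},
]

# External level: one user's view, hiding id and salary from that user.
def phone_directory_view(rows):
    return [{"name": r["name"], "dept": r["dept"]} for r in rows]

print(phone_directory_view(employees))
# Changes to how the records are stored (internal level), or extra columns
# added at the conceptual level, leave this view unchanged; that is the
# data independence discussed below.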

(Figure: DBMS three-schema architecture)

IMPORTANT TO REMEMBER :
Data and meta-data
- The three schemas are only meta-data (descriptions of data).
- The data actually exists only at the physical level.
Mapping
- The DBMS must transform a request specified on an external schema into a request against the conceptual schema, and then into a request on the internal schema.
- This requires information in the meta-data on how to accomplish the mapping among the various levels.
- The overhead of these (time-consuming) transformations leads to inefficiencies.
- Few DBMSs have implemented the full three-schema architecture.

DATA INDEPENDENCE

The separation of the data descriptions from the application programs (or user interfaces) that use the data is called data independence. Data independence is one of the main advantages of a DBMS. The three-schema architecture provides the concept of data independence, which means that upper levels are unaffected by changes to lower levels. The three-schema architecture makes it easier to achieve true data independence. There are two kinds of data independence.

- Physical data independence
* The ability to modify the physical schema without causing application programs to be rewritten.
* Modifications at this level are usually to improve performance.

- Logical data independence
* The ability to modify the conceptual schema without causing application programs to be rewritten.
* Usually done when logical structure of database is altered.

Logical data independence is harder to achieve, as application programs are usually heavily dependent on the logical structure of the data. An analogy can be made to abstract data types in programming languages.

