


Sunday, June 30, 2013

Explain the single and two level directory structures

About Directory Structure
- Directory structure refers to the way an operating system organizes and displays files and the file system to the user. 
- A hierarchical tree structure is typically used for displaying the files. 
- A file name is a special kind of string used to uniquely identify a particular file stored in the computer's file system. 
- Before 32-bit operating systems came on the scene, short file names of about 6 to 14 characters were used. 
However, modern operating systems permit longer file names, of up to about 250 characters per path-name element. 
- In operating systems such as OS/2, Windows, and DOS, the root directory of each drive is drive:\, for example "C:\". 
- The backslash "\" is the directory separator, but the forward slash "/" is also recognized internally by the operating system.
- A drive letter is used to name each drive, whether physical or virtual. 
- This also implies that there is no single formal root directory. 
- Rather, each drive has its own root directory, independent of the others. 
- However, several drives can be combined into one virtual drive letter. 
- This is done, for example, by configuring the hard drives in a RAID 0 setting. 
- The Filesystem Hierarchy Standard is used by UNIX and UNIX-like operating systems. 
- This is the most common form of directory structure used by UNIX operating systems. 
- All files and directories are stored under the root directory "/", even if they are actually present on a number of different physical devices.

About Single-level Directory
- This is the simplest of the directory structures. 
- All files are stored in the same directory, which makes the structure easy to understand and support. 
- The CDC 6600, often called the world's first supercomputer, also operated with just one directory, which could be used by a number of users at the same time. 
- There are significant limitations to the single-level directory. 
- These limitations come into play when more than one user is using the system or when the system has to deal with a large number of files. 
- All files must be assigned unique names, since they are all stored under the same directory. 
- No two files can have the same file name. 
- It may become difficult to remember the names of the files if they are large in number.


About Two-level Directory
- The limitations of the single-level directory structure can be overcome by creating an individual directory for every user. 
- This is the standard solution to the problems of single-level directories. 
- In the two-level directory structure, a UFD, or user file directory, is created for every user. 
- The structure of all the user file directories is almost the same, but each one stores only the files of its individual user.
- When a user logs in or starts a job, the system searches the MFD, or master file directory. 
- The MFD is indexed by user name or account number. 
- Each of its entries points to the UFD belonging to that user. 
- When a reference is made to a file, for example when a file has to be created or deleted, the system searches only that user's file directory. 


Thursday, June 6, 2013

Explain the structure of the operating systems.

We are all addicted to using computers, but we never really bother to know what is actually inside them, i.e., what is operating the whole system. Then something inevitable occurs: your computer crashes and the machine is not able to boot. You call a software engineer, and he tells you that the operating system of the computer has to be reloaded. You are of course familiar with the term operating system, but do you know what it is exactly? 

About Operating System

- The operating system is the software that actually gives life to the machine. 
- Every computer system requires some basic intelligence to start with. 
Unlike us humans, computers do not have any inborn intelligence. 
- This basic intelligence is required because it is what the system uses to provide essential services for running programs, such as providing access to various peripherals, using the processor, allocating memory, and so on. 
The operating system also provides services for the users. 
- As a user, you may need to create, copy, or delete files. 
- It is the operating system that manages the hardware of the computer system. 
- It also sets up a proper environment in which programs can be executed. 
- It is actually an interface between the software and the hardware of the system.
- On booting the computer, the operating system is loaded into main memory. 
- The OS remains active as long as the system is running. 

Structure of Operating Systems

- There are several components of the operating system, which we shall discuss in this article.
- These components make up the structure of the operating system.

1. Communications: 
- Information and data may be exchanged by processes within the same computer or between different computers via a network. 
- This information may be shared via shared memory if the processes are on the same computer, or via message passing if they communicate over a network. 
- In message passing, the messages are moved between processes by the operating system.

2. Error detection: 
- The operating system has to be alert to all the possible errors that might occur. 
- These errors may occur anywhere, ranging from the CPU and memory hardware to the peripheral devices and the user's application programs. 
- For each type of error, proper action must be taken by the operating system to ensure correct and consistent computing. 
- Debugging facilities greatly enhance the abilities of users and programmers.

3. Resource allocation: 
- Resources have to be allocated to all of the running processes. 
- Some resources, such as main memory, file storage, and CPU cycles, have special allocation code, while other resources, such as I/O devices, may have general request and release code.

4. Accounting: 
- This component is responsible for keeping track of the computer resources being used and released.

5. Protection and Security: 
- The owners of data and information may want it protected and secured against theft and accidental modification.
- Above all, processes should not interfere with each other's work. 
- The protection aspect involves controlling access to all the resources of the system. 
- Security involves user authentication, in order to protect the system from invalid access attempts.

6. Command line interface or CLI: 
- This is the command interpreter, which allows for the direct entry of commands. 
- It is implemented either by a systems program or by the kernel.
- There may also be a number of shells, i.e., multiple implementations.

7. Graphical User Interface: 
- This is the interface via which the user interacts with the system graphically, typically using windows, icons, and a pointing device. 


Wednesday, May 15, 2013

What is the Process Control Block? What are its fields?


The task control block, switch frame, and task struct are names for one and the same thing that we commonly call the PCB, or process control block. 
This data structure belongs to the kernel of the operating system and consists of the information that is required for managing a specific process. 
- The process control block is the manifestation of a process in the operating system. 
- The operating system needs to be regularly informed about the statuses of resources and processes, since managing the computer system's resources on behalf of the processes is part of its purpose. 
- The common approach to this issue is the creation and updating of status tables for every process, resource, and other relevant object, such as files, I/O devices, and so on:
1. Memory tables are one such example. They consist of information on how main memory and virtual or secondary memory have been allocated to each of the processes. They may also contain the authorization attributes given to each process for accessing shared memory areas.
2. I/O tables are another example. The entries in these tables state whether a device required by a process is available or has already been assigned to a process. The status of the I/O operations taking place is also recorded here, along with the addresses of the memory buffers they are using.
3. Then we have the file tables, which contain information on the status of the files and their locations in memory.
4. Lastly, we have the process tables, which store the data the operating system requires for the management of processes. At least part of the process control block is kept in main memory, even though its configuration and location vary with the operating system and the techniques it uses for memory management.
- The physical manifestation of a process consists of program data areas, both dynamic and static, instructions, task management information, and so on, and this is what actually forms the process control block. 
- The PCB has a central role to play in process management. 
- Operating system utilities, such as memory utilities, performance monitoring utilities, resource access utilities, and scheduling utilities, access and modify it. 
- The current state of the operating system is defined by the set of process control blocks. 
- It is in terms of PCBs that this data structuring is carried out. 
- In today's sophisticated multi-tasking operating systems, many different types of data items are stored in the process control block. 
- These are the data items that are necessary for efficient and proper process management. 
- Even though the details of PCBs depend upon the system, the common parts can still be identified and classified into the following three classes:
1. Process identification data: This includes the unique identifier of the process, which is usually a number. In multi-tasking systems it may also include the user group identifier, parent process identifier, user identifier, and so on. These IDs are very important, since they let the OS cross-reference its tables.
2. Process state data: This information defines the status of the process when it is not executing. It makes it easy for the operating system to resume the process from the appropriate point later. Therefore, this data consists of the CPU status word, the CPU general-purpose registers, the stack pointer, frame pointers, and so on.
3. Process control data: This includes the process scheduling state, the priority value, and the amount of time elapsed since its suspension. 


Thursday, September 20, 2012

What is Object Spy? How to Use it?


Object Spy is one of the most important tools in WinRunner as well as in QuickTest Professional. In this article we are going to discuss what role Object Spy serves in QuickTest Professional and how it is used. 

What is an Object Spy?

- With the help of Object Spy, the basic structure of any of the test objects can be known. 
- The best thing about Object Spy in QuickTest Professional is that the structure of a particular object can be viewed in tree format. 
- All this makes understanding the test objects much easier. 
- In addition, Object Spy can also aid us in viewing the test-object and run-time properties and methods of all the objects present in the software system or application. 

How can Object Spy be used in QuickTest Professional?

- For launching Object Spy, just go to the Tools menu and click on the Object Spy option. 
- Clicking on this option will open up a dialog box named "Object Spy".
- In that dialog box you need to select the application whose object methods and properties you wish to see. 
- After this, in the Object Spy window, you need to select the pointer image. 
- All the objects present in that particular application will be listed, and from these you can select the one whose methods and properties you wish to view. 
- There are two particular things that can be viewed using the object spy as mentioned below:
  1. Properties of a specific object and
  2. Methods applicable for that object.
- Many different properties of the selected objects can be viewed. 
- This viewing of the object methods and properties is facilitated by the pointing hand mechanism of the object spy. 
- As this pointer is hovered above the objects, their corresponding methods and properties are displayed in the window of the object spy dialog box. 
- The details of the object may also include the hierarchy tree of the test objects. 
- The object spy also displays the syntax for the object methods at your command. 
- Different properties can be viewed in different environments. 
- Objects such as following can be viewed using the object spy in quick test professional:
  1. Dialog
  2. Static
  3. Active X
  4. Edit and so on.
- Object Spy can be considered the counterpart of the GUI Spy that comes with WinRunner. 
- Object Spy contributes a lot when writing descriptive programming. 
- In Object Spy, the properties of the objects are displayed along with their corresponding values. 
- Object Spy is a feature of QuickTest Professional with which the total information regarding the objects can be obtained.
- There are 3 tabs in the object spy dialog box namely:
  1. Properties tab
  2. Methods tab and
  3. Navigation tab
- Clicking on the Properties tab displays all the properties of the objects along with their values and the hierarchy tree. 
- Clicking on the Methods tab displays all the methods of the test objects and run-time objects. 
- Clicking on the Navigation tab displays the tool bar. 
- Object Spy, just like the GUI Spy of WinRunner, is a built-in feature. 
- Apart from clicking on the Object Spy option on the tool bar, there are two more ways of opening Object Spy:
  1. Go to the object repository window and double-click the Object Spy icon.
  2. After opening the object repository, navigate to Tools and select the Object Spy option. 


Friday, May 18, 2012

Explain unstructured loops in detail?


Loops are one of the most important features of languages like C and C++. They have greatly reduced the drudgery of programmers and developers, which would otherwise have made the programming of a software system or application much more hectic. The necessity of loops cannot be ignored when it comes to the repetition of a particular executable statement or a block of statements in a program. 

In other words,
Say we have the statement below in the C++ programming language that we need to print only one time:
"hello! Welcome to C++ programming"
To print it one time we shall write the code like this:
cout << "hello! Welcome to C++ programming\n";
Say now you need to print this statement 100 times! What are you going to do, write the above C++ statement 100 times? No! Certainly not! That is unfeasible and a complete waste of time! So what is the alternative that we have? Yes, of course, we have "loops". 
Using a loop we will have to write only a small piece of code instead of writing that C++ statement again and again 100 times. Using a loop we shall write it like this:

for ( int i = 1; i <= 100; i++ )
{
    cout << "hello! Welcome to C++ programming\n";
}

The above loop is a for loop, and it will print the statement that we wish to be printed 100 times. See to what extent our task has been reduced! This would not have been possible without loops. 

Loops are generally classified into three types, namely:
  1. The while loop
  2. The for loop
  3. The do-while loop
But based upon their structure, they are classified into two types:
  1. Structured loops and
  2. Unstructured loops
This article is all about the unstructured loops. We shall discuss them in detail. 

The Unstructured Loops


- Unstructured loops can be defined as loops that lack a single header, i.e., a single node in the control flow graph that dominates all the nodes of the loop body. 
- The main problem that arises with these unstructured loops is managing them.
- Managing them is quite a hectic task.
- Programmers, developers, and researchers have come up with many ways of analyzing unstructured loops, but none of them seems to be very efficient.
- One of the ways is to use a control flow graph along with the scope graph corresponding to the function containing the unstructured loop to be analyzed. 
- This method involves attaching an iteration counter to each loop header.
- Attaching the iteration counters in this way can cause over-estimation of the control flow. 
- To overcome this problem, the iteration counters are instead attached to each basic block, which helps a great deal in obtaining more flow information. Even this approach has some disadvantages! 
Therefore, other methods of managing unstructured loops have been developed. 
- One such method transforms unstructured loops into structured ones. 
- If one looks at an unstructured loop from the viewpoint of the control flow graph, it seems to have entry edges into more than one node across the loop. 
- In other words, we can say that an unstructured loop consists of parts of several different loops. 
- Several structured loops can also be merged into an unstructured loop by a code-size-optimizing compiler.
- A straightforward way of eliminating unstructured loops has been developed: creating a scope with a single header for each entry of the loop. 


Sunday, April 15, 2012

What is meant by refactoring of code?

Re-factoring of code, or code refactoring, is the topic of discussion of this article.

We can define the code refactoring technique as a disciplined technique used for the following purposes:

- Restructuring an existing body of source code,
- Altering the internal structure of the code such that its external behavior remains unaltered.

About Code Re-factoring



- The above two purposes are achieved by the application of a series of refactorings. 
- Each of the refactorings in the series is usually a tiny change in the source code of the computer program.
- The code refactoring technique is deployed for making improvements in the non-functional attributes of the software system or application.
- A refactoring is not intended to modify the functional requirements of the software system or application.

Advantages of Code Re-factoring


There are many advantages of the code re-factoring technique, a few of which are mentioned below:

- It improves the readability of the source code of the software system or application.
- It reduces the complexity of the code so as to improve its maintainability.
- It provides a more expressive object model, or we can say internal architecture, for improving extensibility, and so on.

How is Re-factoring of code done?


It is obvious that with the continuous improvement in the design of the source code, it becomes better in its working and performing tasks.

- Developers, programmers, and testers usually keep adding new features to the software system or application without performing adequate code refactoring, which is quite an unhygienic practice.

- Carrying out the code re-factoring process continuously makes it easier to maintain and extend the source code of the program.

- The motivation for code re-factoring is often provided by code smells.

- Upon identification of the problems, they can be effectively fixed by a series of code re-factorings.

- Code re-factoring solves the problem by simply transforming the code into a new form that behaves the same as the previous form but no longer "smells".

- For re-factoring a long routine, it is often subdivided into smaller subroutines, which are then individually re-factored.

- And for duplicate routines, the duplicate is discarded and one shared function is used in its place.

- Technical debt is a serious consequence that can be faced if the code refactoring is not done effectively.

Categories in which benefits of code re-factoring should fall


The benefits of the code re-factoring usually fall in to two basic categories:

1. Extensibility
- After re-factoring, it becomes extremely easy for extending the capabilities of the software system or application if it makes use of design patterns that are recognizable and also provides flexibility in those parts of the program where there was no flexibility previously.

2. Maintainability

- Upon code refactoring, it becomes a lot easier to fix errors and bugs, because the readability of the code increases and it becomes easy to grasp what the author intended to achieve.

- Maintainability is also increased by dividing a monolithic routine into individual single-purpose methods and placing them in a more appropriate class.

- A solid set of automated unit tests is required before you start the process of code re-factoring.

- These tests will demonstrate the correctness of the module and then the whole process enters in to an iterative cycle making small program transformations each time iteration takes place.

- Whenever a test fails, you can undo the last transformation and keep on trying until you get it right. Such small transformations make your program what you want it to be!


Saturday, April 14, 2012

What is meant by incremental and evolutionary delivery?

After the software system or application product is complete, it has to be delivered to the client or the owner of the product. There are several types of delivery methods that are followed by organizations, such as incremental delivery and evolutionary delivery.

Here in this article we are going to discuss the above-mentioned delivery types, i.e., evolutionary delivery and incremental delivery. These days the software design is considered a major factor in deciding how successfully the software product can be delivered to the client, delivery being the end goal of the software project.

As we all know the below mentioned factors are known to impede a software project:
1. Over design
2. Under design
3. Wrong design etc.

For the efforts of a development team to be successful, the design of the software system or application has to be good and efficient. It has quite often been observed that the best designs are a result of emergent, evolutionary, or continuous design rather than merely a result of efforts to make the design follow an up-front approach.

About Incremental Delivery Method


- Continuous design involves starting with only a modicum of up-front design and deferring technical decisions for as long as possible.

- Following this approach, one can strive to apply the lessons one has learned from the software project to make continuous improvements in the design, instead of getting trapped in a design that is erroneous because it was developed too early.

- Apart from all this, incremental delivery helps in creating as well as improving business value, rather than just focusing upon the structure of the build of the software system or application.

- With incremental delivery, a project team can deliver the best business value.

- Incremental delivery does not require continuous design, though continuous design helps in making the delivery more efficient and in creating software systems and applications with better designs.

Advantages of Incremental Delivery Method



1. It helps in keeping the business priority i.e., you get a better chance in developing the features most valuable to the business.

2. It helps with risk mitigation, since most of the time either the business value is not delivered or the wrong system is delivered. Demonstrating the working features to the client as early as possible can help you out of such harsh traps.

3. It helps in regulating early delivery. The production can start with the completed features.

4. It facilitates flexible delivery. Quite often the business priorities are changed by the clients. In such cases, working in a way that assumes changes in priorities is the best option. The likelihood of effort being wasted is reduced.

Disadvantages of Incremental Delivery Method


1. It is very hard to do.
2. Requires high quality code structure.
3. Much time is wasted in the architectural concerns.

About Evolutionary Delivery Method


Now coming to evolutionary delivery, it has been observed that many a time the focus is on the evolution and the delivery is neglected.

- Evolutionary delivery is all about what is to be delivered, when and how the deliveries are to be achieved, and for whom the delivery is to be made.

- All these questions imply a framework of activities which are to be pursued during the development course of a project.

- The basic principle of the evolutionary delivery method is that product delivery is started early, since the user is interested in the system.


Wednesday, April 4, 2012

What is meant by adaptive and predictive planning?

Learning is an important aspect of any development process, be it in any field. Learning can be classified into many types, but in this article only two types are discussed, namely:
- Adaptive learning and
- Predictive learning

Adaptive Planning or Learning

- Adaptive learning is considered to be a computer driven educational method in which the computers are the interactive teaching devices rather than having human teachers do the teaching.

- The presentation of the educational material is adapted by the computers according to the weaknesses of the students which are determined by the responses of the students to the questions asked by the computer.

- Here the whole learning process is motivated by the idea of using electronic education for incorporating the interactive values to a student that would have been provided by an actual human tutor or teacher.

- This technology encompasses various aspects taken from the fields like education, psychology, computer science and so on.

- Adaptive learning evolved because it is not possible to achieve tailored learning with traditional, non-adaptive approaches.

- The learner is transformed from a passive receptor of information into a collaborator in the educational process.

- The primary application of the adaptive learning is in basically the following two fields which have been designed as both web applications and the desk top applications:
1. Education and
2. Business training.

Adaptive learning is also known by several other names like:
1. Computer based learning
2. Adaptive educational hypermedia
3. Intelligent tutoring systems
4. Adaptive instructions
5. Computer based pedagogical agents

Models or Components of Adaptive learning

The whole process of adaptive learning has been divided in to some separate models or components as mentioned below:

1. Student Model
- This model keeps track of the student and learns about him.
- This model makes use of algorithms that have been researched for over 20 years.
- CAT (computer adaptive testing) uses the simplest means for determining a student's skill.
- Nowadays, student models make use of richer algorithms for providing a more extensive diagnosis of a student's weaknesses.
- They do this by linking the questions to concepts and using ability levels to define strengths and weaknesses.

2. Instructional model
- This model is actually responsible for conveying the information.
- It makes use of the best technological methods for educational purposes like multimedia presentations along with the expert teacher advice.
- When the students make mistakes, the model provides them with useful hints.
- These hints can be question specific.

3. Instructional environment
This provides an interface for the system and human interaction.

4. Expert model
- This model is responsible for teaching the students using the stored information which is to be taught.
- This may include solutions for question sets, lessons and tutorials.
- Some very sophisticated expert models may use expert methodologies to illustrate the solutions to the questions.
- In some of the adaptive learning systems, the qualities of an expert model may be acquired by an instructional model.

Predictive Planning or Learning

- Predictive learning involves machine learning; that is to say, an agent has to build a model of its environment by carrying out various actions in several circumstances.

- The knowledge of the effects of the actions tried out is used for turning the models in to planning operators.

- This is done so that the agent is able to act purposefully in that environment.

- We can say that predictive learning is all about learning with a minimum of pre-existing mental structure.

- Some say that this kind of learning was inspired by Piaget's account of how knowledge of the world is constructed by interacting with it.


Tuesday, March 6, 2012

What are the different attributes of the use cases?

Use cases sound similar to test cases, but the two are not the same. Whereas test cases are used in software testing, use cases are used in general software systems engineering. We have dedicated this whole article to the discussion of use cases.

What are Use Cases?

- A use case is a set of steps that define the interactions occurring between the software system and a role.

- This is basically done to achieve a predefined goal. The role is commonly called an actor and can be an external software system or simply a human.

- The use cases are often used at the higher levels of the software systems engineering.

- The goals to be achieved, for which the use cases are developed, are often defined by the stakeholders.

- The requirements are stated by the stakeholders in separate documentation.

- This documentation serves as a contract between the two, i.e., between the software developers or programmers and the stakeholders or clients.

- The structure of a use case has been quite a debated topic. Many opinions have been given regarding how the structure of a typical use case should look.

- But usually the structure defined by Alistair Cockburn is followed. It is popularly known as the "fully dressed use case structure".
It consists of the following aspects:

1. Title of the use case
2. Primary role
3. Goal of the use case
4. Scope
5. Level
6. Interests
7. Stake holders
8. Preconditions or requirements
9. Guarantees (both minimal and success)
10. Success scenario
11. Technology
12. Data variations
13. Extensions and
14. Related information

He also gave many other useful aspects related to use cases, such as:

1. Casual use case structure
2. Design scope icon
3. Goal level icon

ROLE OF AN ACTOR

- The role of an actor in a use case is to make effective decisions, and so it calls for good decision-making ability.

- It is not necessary that an actor always be a human. It can also be a computer system.

- Though the decisions of an actor influence the whole use case implementation, there does not exist any direct link between the system and the actor.

ATTRIBUTES OF USE CASES

- Certain necessary attributes have been identified that a use case needs in order to accomplish the goal for which it was developed.

- The attributes need not only be functions; they can also be in the form of data variables.

- Not all the attributes are extracted from the preceding steps; some are taken as general assumptions based on common knowledge of use cases.

1. System requirements: these requirements form the basis for the development of a use case.
2. The requirements need to be complete and correct and of quality.
3. Software requirements specification testing
4. User-centred approach for the specification of requirements.
5. Unified Modeling Language (UML): this language, based on the object-oriented paradigm, is employed for the construction, visualization and documentation of the system.
6. Use case diagrams

The following are the three attributes of use cases:

1. Level of validity
This attribute is concerned with validating the use case. It asks about the completeness of the use case, the achievement of its goals, the changes to be made, whether additional goals are addressed, whether additional actors are represented, and so on.

2. Metrics
This attribute deals with factors such as the ambiguity, completeness, traceability and volatility of the use case.

3. Risk indicators
Three typical risk indicators have been identified, namely incompleteness, options and weak phrases.


Saturday, February 18, 2012

What are different tips to estimate testing time?

Time is a very important factor when it comes to the success of any undertaking, and timing plays a great role in the successful completion of a software project. It would not be wrong to say that time estimation, like other aspects of software engineering, forms an equally important part of the whole software development cycle.

BENEFITS OF KEEPING TIME ESTIMATION

- Keeping a time estimate before the start of the software project keeps the whole development cycle on track.
- It prevents time from being wasted.
- Since there is a time limit, you have to complete the project within that period.
- Furthermore, if you complete your projects on time, your clients will be impressed and your reputation will grow, which in turn will fetch you more projects.
- An experienced software developer will be able to make better time estimates than one who is fresh in the industry.
- One who has worked on various software projects will certainly have an idea of the time the testing process will take.

TIPS FOR ESTIMATING TESTING TIME:
Testing time cannot be estimated blindly; the estimate should be accurate and realistic.

1. BUFFER TIME
- Your time estimate should include some buffer time.
- But keep in mind that it should be realistic.
- The role of the buffer is to help in case there is an unexpected delay in the software testing process.
- This buffer time accounts for the lost time.
- Apart from providing time to cope with delays, a buffer also helps in providing maximum coverage for the testing process.

2. TIME TAKEN BY BUG CYCLE
- Never forget that the time estimate must also include the time that will be taken up by the bug cycle.
- You may estimate some time for a cycle, but remember that the actual cycle can very well require much more time.
- This problem should be avoided.
- As we all know, the testing process depends on the structure and design of the program.
- The better the structure and design, the less time testing will take.
- If the structure of the program itself is not good, then more and more time will be required to fix the subsequent problems, and this leads to an overrun of the time estimate.

3. INCLUDE UNEXPECTED LEAVES
- The estimated testing time should also allow for unexpected leaves.
- Some members of the development team may require leave before the completion of the project.
- Allowing for this keeps your testing time estimate realistic.

4. AVAILABILITY OF RESOURCES
- Keep in mind the availability of resources for the time period within which you have to complete your project.
- If you run short of any resource, you can update your testing time estimate accordingly.
- This is another measure to keep your time estimate realistic.

5. COMPARISON BETWEEN OLDER & NEWER VERSION OF SOFTWARE
- You can sometimes compare the test outputs of this software version with those of its older version.
- This will save your precious time.
- This is termed parallel testing.
- Based on the testing time estimate of the older version, you can decide the time estimate for the upcoming version.

6. COUNT YOUR MISTAKES & REVIEW
- It is a universal fact that everybody makes mistakes.
- So there is a possibility that you may make some mistake while estimating the testing time.
- So don't forget to review the estimate once and make changes if required.
- Always keep in mind that changing testing time estimates can have a bad effect on your reputation.
- So don't make changes unless absolutely required.

7. COUNT YOUR EXPERIENCE
- You can very well employ your past experience to make wise time estimations.

8. EVALUATE YOUR TEAM EFFICIENCY
Know the work efficiency of your team members.


Tuesday, February 7, 2012

What are different kinds of risks involved in software projects?

When we create a development cycle for a project, we develop everything like the test plan, documentation, etc., but we often forget about the risk assessment involved with the project.

It is necessary to know what kinds of risks are involved with the project. We all know that testing requires a lot of time and is performed in the last stage of the software development cycle. Here the testing should be categorized on the basis of priorities. And how do you decide which aspect requires higher priority? Here comes the role of risk assessment.

Risks are uncertain and undesired events that can cause huge losses. The first step towards risk assessment is the identification of the risks involved. Many kinds of risks can be involved with a project.

DIFFERENT KINDS OF RISKS INVOLVED

1. Operational Risk
- This is the risk involved with the operation of the software system or application.
- It occurs mainly due to faulty implementation of the system or application.
- It may also occur because of some undesired external factors or events.
- There are several other causes; the main ones are listed below:

(a) Lack of communication among the team members.
(b) Lack of proper training regarding the concerned subject.
(c) Lack of sufficient resources required for the development of the project.
(d) Lack of proper planning for acquiring resources.
(e) Failure of the program developers to resolve conflicts between issues having different priorities.
(f) Failure of the team members to divide responsibilities among themselves.

2. Schedule Risk
- Whenever the project schedule falters, schedule risks are introduced into the software system or application.
- Such risks may even lead to complete failure, damaging the economy of the company.
- A project failure can badly affect the reputation of a company.
- Some causes of schedule risks have been stated below:

(a) Lack of proper tracking of the resources required for the project.
(b) Sometimes the scope of the project may be extended due to certain reasons which might be unexpected. Such unexpected changes can alter the schedule.
(c) The time estimation for each stage of the project development cycle might be wrong.
(d) The program developers may fail to identify the functionalities that are complex in nature, and they may also falter in deciding the time period for the development of these functionalities.

3. Technical Risks
- These types of risks affect the features and functionalities of a software system or application which in turn affect the performance of the software system.
- Some likely causes are:

(a) Difficulty in integrating the modules of the software.
(b) No better technology is available than the existing ones, and the existing technologies are still in their primitive stages.
(c) A continuous change in the requirements of the system can also cause technical risks.
(d) The structure or the design of the software system or application is very complex and therefore is difficult to be implemented.

4. Programmatic Risk
- The risks that fall outside the category of operational risks are termed programmatic risks.
- These too are uncertain, like operational risks, and cannot be controlled by the program.
- A few causes are:

(a) The project may run out of funds.
(b) The programmers or the product owner may decide to change the priority of the product and also the development strategy.
(c) A change in government rules.
(d) Market developments.

5. Budget Risk
- These kinds of risks arise due to budget related problems.
- Some causes are:

(a) The budget estimation might be wrong.
(b) The actual project budget might overrun the estimated budget.
(c) Expansion of the scope might also prove to be a problem.


Thursday, January 19, 2012

What are merits and demerits of sequential test approach?

Almost every development process follows a sequential approach, and software development is similarly carried out with a sequential approach.
A software development plan is called a system development methodology or software development methodology.

- It can be defined as a framework that is used to plan, structure and control the whole process of development of an information system or software.

- The sequential approach is followed in every aspect of software engineering, whether development or testing, to keep the development systematic and on track.

The sequential approach to testing has both merits and demerits.
In the sequential approach, testing is seen as flowing steadily downwards through the phases or levels of various kinds of testing, such as performance testing, unit testing, integration testing, alpha testing, beta testing, etc. The sequential approach to testing is based on certain principles, which are stated below:

- The testing plan is divided into sequential phases. Some splash-back and overlapping between any two phases of testing is accepted to a certain extent.

- More emphasis is placed on testing, deadlines or target dates, time schedules, implementation and the budget of the entire software or system at one time.

- Very tight control is kept over the testing of a software system or application via extensive formal reviews and documentation, and approval by the client or customer and the users.

- Control is also maintained over information technology management, mostly at the end of each phase, before the next phase of testing begins.

Though the sequential approach is a traditional approach to development in software engineering, it has been blamed for several large-scale software projects running over time and over budget, and sometimes for failures in timely delivery.

- This basically happens due to the big design up front approach.

- At other times this approach has been superseded by more versatile and flexible methodologies developed especially for the development of software systems or applications.

- This sequential testing approach is frequently used in software processes of software development.

The whole software development progress is seen flowing steadily downwards through the following phases:
- Phase of requirements specifications
- Phase of conception
- Phase of initiation
- Phase of analysis
- Phase of designing
- Phase of construction
- Phase of coding
- Phase of integration
- Phase of testing
- Phase of debugging
- Phase of validation
- Phase of production
- Phase of implementation
- Phase of installation and
- Phase of maintenance

This sequential approach basically originated in construction and manufacturing industries. This hardware oriented model or sequential approach to development was simply adopted for the development of software systems or applications also.

- While following a sequential approach, it should be ensured that the preceding phase is completely finished before moving on to the next phase.

- However some cases may include some slight variations.

- It is an observed fact that time spent in the early phases of the software development process has great benefits.

- A bug or an error found in early levels of testing costs less than one found in later stages of development.

- It also requires less effort and time to repair or fix.

- If a program design or structure turns out after development to be impossible to implement, then all the effort and time will have been wasted.

- It is easier to fix the errors and bugs in the early stages than to realize later that all the work done is of no use.

Thus, following a sequential approach ensures that each step is fully complete before the testing process is carried further.


Sunday, December 18, 2011

What are different characteristics of system testing?

System testing is a term heard often, but what does it actually mean? As the name suggests, one can make out that it has something to do with the testing of systems. Formally, it can be defined as the testing of both components of the system, i.e., software and hardware.

- System testing is carried out on a finished, complete and integrated system to check the system's conformance with the specified conditions and requirements.
- System testing is categorized as black box testing, and therefore doesn't require any knowledge of the internal structure and design of the source code.
- According to the rules of system testing, only the integrated components that have successfully passed integration testing can be given as input for system testing.
- The software system that has been incorporated successfully with the appropriate hardware system can also be taken as input to the system testing.
- The system testing aims at detecting all the discrepancies, defects and constraints.
- The software system itself integrated with any other software or hardware system and has successfully passed the system integration testing can also be considered as an input for the system testing.
- System testing deals with the inconsistencies and flaws that are present in the system software, which is made up of integrated software and hardware components.
- System testing, like other testing methodologies, is a limited kind of testing.
- System testing is concerned with the detecting of defects within the assemblages i.e., inter- assemblages as well as within the software system as a whole entity.
- Unlike integration testing and unit testing, the system testing is carried out on the whole software system as one unit.
- System testing mainly deals with basic and important contexts namely functional requirement specification (FRS) and system requirement specification (SRS).
- System testing tests not only the design of the software system but also its behavior and the features expected by the customers.
- System testing also tests the software system up to and beyond the limits and conditions specified for the software and hardware components.
- System testing is performed to explore the functionality of a software system.
- System testing is carried out after the system has been assembled and completed, and before it is delivered.

There are various testing techniques that together make up a complete system testing methodology. Few have been listed below:
- Stress testing
- Load testing
- Error handling testing
- Compatibility testing
- Performance testing
- Usability testing
- Graphical user interface testing
- Security testing
- Volume testing
- Scalability testing
- Sanity testing
- Exploratory testing
- Smoke testing
- Regression testing
- Ad hoc testing
- Installation testing
- Recovery testing
- Reliability testing
- Fail over testing
- Maintenance testing
- Accessibility testing

While carrying out the system testing it is very important to follow the systematic procedures.
- Only specifically designed test cases should be used for testing.
- Testers test the system by trying to break it, i.e., by giving incorrect data.
- Unit testing and integration testing form the base of the system testing.
- System testing forms a crucial step of the process of quality management.
- System is tested to determine if it meets all the functional requirements and also helps in verification and validation of application architecture and business requirements.

Conditions to be met before system testing is carried out:
- All the units must have successfully passed the unit testing.
- All the modules or units must have been integrated and successfully passed the integration test.
- The surrounding environment should resemble the production environment.

Steps that should be followed during the system testing:
- A system test plan should be created.
- Test cases should be created.
- Scripts should be created to build the test environment.


Wednesday, December 7, 2011

What are different characteristics of software performance testing?

Software performance testing is indeed an important part of software engineering and of the software development plan. It can be defined as the testing carried out to determine the responsiveness and stability of a software system under a certain workload. Sometimes it can also be used to examine other qualitative aspects of the software system, such as scalability, reliability, security, stress handling and resource usage.

In fact, software performance testing is essentially a part of performance engineering. It is a very crucial testing methodology and is gaining popularity day by day. It seeks to raise the standards of the performance factors of the design of the software system, and is also concerned with the architecture of the internal structure of the software system or application.

Performance testing tries to build excellent performance into the architecture and design of the software system before the actual coding of the software application or system. Performance testing consists of many sub testing genres.

Few have been discussed below:

- Stress testing
This testing is done to determine the limits of the capacity of the software application. Basically, it is done to check the robustness of the application. Robustness is checked against heavy loads, i.e., loads above the maximum limit.

- Load testing
This is the simplest of all the testing methods. This testing is usually done to check the behavior of the application or software or program under different amounts of load. Load can either be several users using the same application or the difficulty level or length of the task. Time is set for task completion. The response timing is recorded simultaneously. This test can also be used to test the databases and network servers.

- Spike testing
This testing is carried out by suddenly and sharply increasing (spiking) the load on the application and observing, in each case, whether the application is able to take the load or fails.

- Endurance testing
As the name suggests the test determines if the application software can sustain a specific load for a certain time. This test also checks out for memory leaks which can lead to application damage. Care is taken for performance degradation. Throughput is checked in the beginning, at the end and at several points of time between the tests. This is done to see if the application continues to behave properly under sustained use or crashes down.

- Isolation testing
This test is basically done to check for the faulty part of the program or the application software.

- Configuration testing
This testing tests the configuration of the software application. It also checks the effects of configuration changes on the application and its performance.

Before carrying out performance testing, some performance goals must be set, since performance testing helps in many ways:
- Tells us whether the application software meets the performance criteria or not.
- It can compare the performance of two software applications.
- It can find faulty parts of the program.

There are some considerations that should be kept in mind while carrying out performance testing. They have been discussed below:

- Server response time:
It is the time taken by one part of the application to respond to a request generated by another part of the application. The best example of this is HTTP.

- Throughput
It can be defined as the highest number of concurrent users that the application is expected to handle properly.
A high level plan should be developed for performing software performance testing.


Saturday, October 8, 2011

Some details about Strings and Arrays of Strings in C

Multiple character constants can be dealt with in two ways in C. If enclosed in single quotes, they are treated as character constants, and if enclosed in double quotes, they are treated as string literals. A string literal is a sequence of characters surrounded by double quotes. Each string literal automatically gets a terminating character ‘\0’ added. Thus, the string “abc” will actually be represented as follows:

“abc\0” in the memory, and its size is not 3 but 4 characters (inclusive of the terminator character).

Arrays refer to a named list of a finite number n of similar data elements. Each of the data elements can be referenced by a set of consecutive numbers, usually 0, 1, 2, 3, ..., n-1. If the name of an array of 10 elements is ARR, then its elements will be referred to as shown below:

ARR[0], ARR[1], ARR[2], ARR[3], ..., ARR[9]

Arrays can be one-dimensional, two-dimensional or multi-dimensional. The functions gets() and puts() are string functions. The gets() function accepts a string of characters entered at the keyboard and places them in the string variable mentioned with it. For example:

char name[21];

The above code declares a string named name which can store 20 valid characters (the width of 21 provides room for one extra character, the ‘\0’ with which a string is always terminated). The function gets() reads a string of at most 20 characters and stores it at the memory address pointed to by name. As soon as the carriage return is pressed, a null terminator ‘\0’ is automatically placed at the end of the string. The function puts() writes a string on the screen and advances the cursor to a new line, so any subsequent output will appear on the line after the output of puts().

Arrays are a way to group a number of items into a larger unit. Arrays can have data items of simple types like int or float, or even of user-defined types like structures and objects. An array can also be of strings. An array of strings is a two-dimensional array of characters, each of whose rows is itself an array holding one string.

A string is nothing but an array of characters. In fact, C does not have a string data type; rather, it implements strings as one-dimensional character arrays. Character arrays are terminated by a null character ‘\0’. For this reason, character arrays or strings are declared one character larger than the largest string they can hold.

Individual strings of the string array can be accessed easily using the index. The end of a string is determined by checking for null character. The size of the first index ( rows ) determines the number of strings and the size of the second index ( columns ) determines maximum length of each string. By just specifying the first index, an individual string can be accessed. You can declare and handle an array of strings just like a two dimensional array. See an example below:

char name[10][20];

Here the first dimension declares how many strings will be in the array, and the second dimension declares the maximum length of a string. Unlike C++, C provides separate functions for concatenating strings, checking string length and comparing two strings, namely strlen, strcmp, strcat and the non-standard strrev, declared in the header file string.h. Strings are used for holding long inputs.

