
Saturday, September 14, 2013

Explain Border Gateway Protocol (BGP)?

- BGP, or Border Gateway Protocol, is the set of rules implemented for making the routing decisions at the core of the internet. 
- It involves the use of a table of IP networks, or prefixes, which designates the reachability of networks among autonomous systems. 
- This protocol falls under the category of path vector protocols, and it is sometimes classified as a variant of the distance vector routing protocols. 
- The metrics of an IGP, or interior gateway protocol, are not used by the border gateway protocol; rather, paths, rule sets, and policies are used for making routing decisions (see the sketch at the end of this post). 
- This is why the border gateway protocol is often called a reachability protocol rather than a routing protocol. 
- BGP has ultimately replaced EGP, the exterior gateway protocol. 
- This is so because it allows the full decentralization of the routing process, making possible the transition from the ARPANET model's core to a decentralized system consisting of the NSFNET backbone and the regional networks associated with it. 
- The version of BGP currently in use is version 4. 
- The earlier versions were discarded as obsolete. 
- Its major advantages are classless inter-domain routing and the availability of a technique called route aggregation for reducing the size of routing tables. 
- The use of BGP has made the whole routing system a decentralized one.
- BGP is used by most internet service providers for establishing routes between one another. 
- This is done especially when the ISPs are multi-homed. 
- That is why, even though it is not used directly by end users, it is still one of the most important protocols in networking. 
- BGP is used internally by a number of large private IP networks. 
- For example, it is used to combine many large open shortest path first (OSPF) networks where these networks cannot scale to the required size by themselves. 
- BGP is also used for multi-homing a network so as to provide better redundancy. 
- This can be either to many ISPs or to a single ISP's multiple access points. 
- Neighbors of the border gateway protocol are known as peers. 
- They are created by manually configuring the two routers so as to establish a TCP session on port 179. 
- 19-byte keep-alive messages are sent to the port periodically by the BGP speaker for maintaining the connection. 
- Among the various routing protocols, BGP is unique in that it relies upon TCP for transport. 
- When the protocol is implemented between two peers within one autonomous system, it is called IBGP, the internal border gateway protocol. 
- The protocol is termed EBGP, the external border gateway protocol, when it runs between different autonomous systems.
- Border edge routers are the routers implemented on the boundary for exchanging information between autonomous systems.
- BGP speakers have the capability of negotiating session capabilities such as the multi-protocol extensions and a number of recovery modes. 
- The NLRI (network layer reachability information) can be prefixed by the BGP speaker if the multi-protocol extensions are negotiated at the time of session creation. 
- The NLRI is advertised along with an address family prefix. 
The family consists of the following:
- IPv4
- IPv6
- Multicast BGP
- IPv4/IPv6 virtual private networks

- These days the border gateway protocol is commonly employed as a generalized signaling protocol whose purpose is to carry information via routes that may not form part of the global internet. 
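
To make the path-vector idea concrete, below is a minimal, hypothetical sketch (not a real BGP implementation; the AS numbers and local-preference values are made up) of how a speaker might choose the best route for a prefix: reject any path containing its own AS number (loop prevention), then prefer policy (local preference) over the shortest AS path.

```python
# Hypothetical sketch of BGP-style best-path selection. Real BGP applies
# many more tie-breakers; this shows only the path-vector idea: policy
# (local preference) is consulted before path length.

MY_AS = 64500  # made-up local autonomous system number

routes = [
    # (prefix, AS path, local preference)
    ("203.0.113.0/24", [64501, 64510], 100),
    ("203.0.113.0/24", [64502, 64520, 64510], 200),
    ("203.0.113.0/24", [64503, 64500, 64510], 300),  # contains our own AS: loop
]

def best_path(candidates):
    # Loop prevention: drop any path that already contains our AS number.
    valid = [r for r in candidates if MY_AS not in r[1]]
    # Prefer the highest local preference, then the shortest AS path.
    return max(valid, key=lambda r: (r[2], -len(r[1])))

print(best_path(routes))
# -> ('203.0.113.0/24', [64502, 64520, 64510], 200): policy wins over
#    the shorter path offered via AS 64501.
```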


Monday, December 3, 2012

What is a traceability alert? How do you trigger a traceability alert in Test Director?


A traceability alert is the process of sending e-mails to notify the people responsible whenever some change is made to the project. This can be done by instructing Test Director to create an alert whenever a change occurs and to send e-mails appropriately. One's own follow-up alerts can also be added. 
There are certain rules, called the traceability notification rules (based upon the associations made in Test Director among the tests, requirements, and defects), which are activated by the Test Director administrator for generating the automatic traceability alerts.
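
As a rough conceptual sketch only (Test Director's internal mechanism is not shown here; the class and function names below are hypothetical), a traceability notification rule can be pictured as an observer that e-mails the designer of every test associated with a requirement that changes:

```python
# Hypothetical sketch of a traceability notification rule: when a
# requirement changes, notify the designer of each associated test.

class Test:
    def __init__(self, name, designer_email):
        self.name = name
        self.designer_email = designer_email

class Requirement:
    def __init__(self, name):
        self.name = name
        self.associated_tests = []

    def change_priority(self, new_priority):
        self.priority = new_priority
        for test in self.associated_tests:
            notify(test.designer_email,
                   f"Requirement '{self.name}' changed; review test '{test.name}'.")

def notify(address, message):
    # A real tool would send an e-mail; this sketch just prints it.
    print(f"To {address}: {message}")

req = Requirement("Lock account after 3 failed logins")
req.associated_tests.append(Test("login_lockout", "designer@example.com"))
req.change_priority("High")  # triggers the notification
```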

On what occasions is a traceability alert issued?

Test Director can generate traceability alerts only for the following events:
  1. Whenever a requirement changes (except a change of status), the designer of the associated tests is notified by Test Director.
  2. Whenever a requirement having an associated test changes, all the project users are notified by Test Director.
  3. Whenever a defect status changes to 'fixed', the responsible tester of the associated test is notified by Test Director.
  4. Whenever a test run is successful, the user assigned to the associated test is notified by Test Director.

Steps to trigger a traceability alert

  1. Log on to the project as a different user.
  2. Click on the test plan tab to turn on the test plan module, which will display the test plan tree. Expand the concerned subject folders and select the required test. A designer box displaying the user name is seen in the details tab in the right pane. Note that whenever an associated requirement changes, the traceability notification is viewed only by the designer.
  3. Click on the requirements tab to turn on the requirements tree, and make sure that it is in the document view.
  4. Among the requirements, choose the one that you want to change.
  5. To change the priority of the requirement, click on the priority down arrow and select the required priority. This will cause Test Director to generate an alert for the test associated with the requirement selected above. Also, an e-mail will be sent to the designer who designed this test.
  6. When you are done, log out of the project by clicking on the log out button present on the right side of the window.

How do you view a traceability alert?

A traceability change can be viewed for a single entity or for all the entities in the project. Here, by entity we mean a test, a defect, or a test instance. To view a traceability alert, follow the steps mentioned below:
  1. Log on to the project as the designer of the test.
  2. Click on the test plan tab to view the test plan tree. Expand the subject folders to display the test. You will see that the test has a trace changes flag, which is an indication that a change was made to the requirement associated with it.
  3. Clicking on the trace changes flag for the test will enable you to view the traceability alert, and the trace changes dialog box will open up. Clicking on the requirement link will make Test Director highlight that particular requirement in the requirements module.
  4. To view all the traceability alerts, click on the trace all changes button in the common Test Director tool bar. A dialog box listing all the traceability changes will open up.
  5. Once done, close the dialog box. 


Thursday, July 26, 2012

How can data caching have a negative effect on load testing results?


From a performance point of view, it is quite a heavy task to retrieve data from a repository. It becomes much more difficult when the data repository lies far from the application server. Retrieving data is also expensive when a specific piece of data is accessed over and over again. 
Caching is a technique that has been developed as a measure for reducing the workload and the time consumed in retrieving data. 
In this article, we discuss the negative effects that simple data caching can have upon load testing. 

Rules for Caching Concepts


Some rules have been laid down for caching, as mentioned below (a minimal sketch follows the list):

1. Data caching is useful only when data is cached for a short period of time; it does not work when data is cached through the whole life cycle of the software system or application.
2. Only data that is not likely to change often should be cached.
3. Certain data repositories can raise notification events when the data is modified outside the application.

If the rules stated above are not followed properly, data caching is sure to have a negative impact upon load testing. 
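
As a minimal sketch of rules 1 and 2 (the class name and the 30-second lifetime below are illustrative assumptions, not a prescribed design), each cache entry can carry an expiry time so that data is only ever served from the cache for a short period:

```python
import time

# Minimal time-to-live (TTL) cache sketch: entries expire after a short
# period (rule 1), so data that may change is never served stale for long.

class TTLCache:
    def __init__(self, ttl_seconds=30):  # illustrative lifetime
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key, fetch):
        entry = self.store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]                  # still fresh: serve from cache
        value = fetch(key)                   # expired or missing: re-fetch
        self.store[key] = (value, time.time() + self.ttl)
        return value

cache = TTLCache(ttl_seconds=30)
print(cache.get("user:42", lambda k: f"row for {k} fetched from the database"))
```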

How does data caching produce a negative impact on load testing?


- This is so because data caching has some pitfalls that come to light only in situations where data can expire and the software system or application ends up using inconsistent data. 
- Using a caching technique is quite simple, but any fault in it can distort the load testing.
- Load testing involves putting demand on the software system or application in order to measure its response. 
- The outcomes of load testing help in measuring the difference between the responses of the software system or application under normal and peak load conditions. 
- Load testing is usually used as a means to measure the maximum capacity at which the software system or application can operate comfortably. 
- Data caching produces quick responses from the software system or application for items such as cookies. 
- Though data caching responds faster than the usual round trip to the data store, it has a negative impact on the result of the load testing, i.e., you will not get the real results; the results you get will be altered ones. 

What you will see is a false picture of the performance of the software system or application. 

What is the purpose of caching?


- Caching is done with the purpose of storing certain data so that the same data can be served faster in subsequent requests. 
- Data caching affects load testing results because, unless the cache is cleared by the testing tool after every iteration of the virtual user, the caching mechanism starts producing artificially fast page load times. 
- Such artificial timings will alter your load testing results and invalidate them.
- In caching, all the recently visited web pages are stored. 
- When we carry out load testing, our aim is always to check the software system or application under load. 
- So if the caching option is left enabled, the software system or application will try to serve requests from locally saved data, giving a false measure of the performance. 
- So the caching option should always be disabled, or the cache cleared per iteration, while you carry out load testing, as the sketch below demonstrates. 
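
The following self-contained sketch (the 50 ms "backend" delay is a made-up stand-in for a real request) shows how per-iteration timings collapse to near zero once a cache warms up, and how clearing the cache between virtual-user iterations restores realistic numbers:

```python
import time

def backend_fetch(url):
    time.sleep(0.05)          # stand-in for a real ~50 ms backend call
    return f"<html>page at {url}</html>"

cache = {}

def fetch(url):
    if url not in cache:      # cache miss: pay the real cost
        cache[url] = backend_fetch(url)
    return cache[url]

def run_iteration(clear_cache):
    if clear_cache:
        cache.clear()         # what a load testing tool should do per iteration
    start = time.perf_counter()
    fetch("http://example.com/home")
    return (time.perf_counter() - start) * 1000

print("cache kept:   ", [f"{run_iteration(False):.1f} ms" for _ in range(3)])
print("cache cleared:", [f"{run_iteration(True):.1f} ms" for _ in range(3)])
# With the cache kept, every iteration after the first reports ~0 ms --
# an artificial page load time that would invalidate the load test.
```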


Saturday, June 30, 2012

What are the advantages of optimizing test automation process?


Like all the other processes in the field, the test automation process has also been optimized, and optimization has reaped huge benefits. 
- Test automation is carried out for the manual testing processes by employing formalized testing processes, and the automated tests are then further optimized to make them much more effective. 
Test automation, like any other process, requires writing a computer program that will do a particular thing; in the case of test automation, that thing is testing. 
- Test automation takes a lot of time, but once done it can save a whole lot of time afterwards.

In this article we are going to discuss the effect that optimization has upon the test automation process, and we will also chalk up the advantages as well as the disadvantages of optimizing it. 

- Throughput represents a very critical issue in almost all testing processes.
- It is quite important to take care of the throughput factor, especially at the maintenance level. 
There are some general rules and approaches that have been defined to be used during the analysis of the total requirements of a software testing system, and these rules and approaches have worked wonders in reducing test time. 
- Apart from these rules and approaches, there are other things, like new programming environments and new interface standards (like Ethernet), that help in making significant improvements in the testing system. 

Till now, a variety of approaches have been validated that can be used for the optimization of the test automation process in order to increase throughput.
Regarding optimization, we can say that with it one can make significant gains with only a modest investment of time and effort. Software quality can be optimized by adopting the best of the available software testing methodologies, tools, processes, and of course people.

For further optimizing your test automation process, you must take a look at the aspects mentioned below:
  1. Code and document inspections.
  2. Unit testing.
  3. Prototyping.
  4. Allocation of time and other resources.
  5. Designing and coding with testing in mind.
  6. Automation of regression testing.
  7. Designing of tests for the verification of user expectations and specifications.
  8. Document analysis.
  9. Performing positive root cause analysis.
- Some software systems and applications are quite small and simple, so the time taken to process requests is negligible. 
- However, when it comes to larger software systems and applications, optimizing the test automation process can save a lot of time on even the most basic test case.
- For testing any big and complex software system or application, optimization becomes crucial. 
- The goal of every software testing methodology is to ensure that the software application works just as the customer pictured it. 
- One myth about optimization is that it is quite expensive and hence should be used only when extremely necessary.
- Some people believe that 100 percent optimization of the test automation process is the ultimate objective. 
- However, there is no particular formula; the relative merits of optimizing the test automation process depend upon many factors. 
- When you optimize the test automation process, the efficiency of the whole automation process increases gradually. 

Every day we witness a dramatic increase in the complexity of computing environments, which puts great pressure on automated testing and thus calls for optimization. 


Thursday, June 28, 2012

What is meant by decision table testing and when is it used?


Heard of decision table testing before? This concept is rarely heard of, since it is not used by testers very often. This article focuses upon decision table testing and when it is used. 

"Decision table testing proves to be a very handy software testing methodology which comes to the tester’s rescue whenever a combination of inputs is to be dealt with and different results are produced". 

To understand this concept you can take example of two binary inputs A and B. You will get 4 different combinations of these two inputs which will produce 4 different results based up on whatever operation is performed on them. If you observe some of these outputs to be the same, then you can select any of them and the output which is different for testing. 

With a small number of inputs you won’t realise the importance of this testing technique since you will feel like using a normal testing technique. But with a large number of inputs, the significance of the decision table testing becomes quite clear. The below mentioned expression gives the possible number of combinations of the inputs:
2^n,  where n stands for the number of inputs.
Let us take n=10. The number of possible input combinations comes as 1024! 
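
As a small sketch in Python (the rule being tested here, "output is true only when every input is true", is just an illustrative assumption), the itertools module can enumerate every row of a decision table so that no combination is missed:

```python
from itertools import product

# Enumerate all 2^n combinations of n binary inputs -- the rows of a
# decision table -- and record the expected output for each row.
n = 3
for inputs in product([False, True], repeat=n):
    expected = all(inputs)   # illustrative rule: output = AND of all inputs
    print(inputs, "->", expected)
# 2^3 = 8 rows are printed; for n = 10 the same loop yields 1024 rows.
```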

What is Decision Table Testing?


- A decision table is a table that showcases all the possible combinations of the supplied inputs along with their corresponding outputs. 
- Decision table testing is one of the black box testing techniques. 
- This testing technique is widely used in web applications; however, it has limited scope when it comes to equivalence partitioning and boundary value analysis.
- In boundary value analysis and equivalence partitioning, decision table testing can be applied only under specific conditions.
- Mostly, decision table testing is used for testing rules and logic. 
- Sometimes, it is also used to evaluate complex business rules. 
- These complex rules are broken down into simple decision tables. 

Advantages of Decision Table Testing


Below mentioned are some of the advantages of decision table testing:
  1. With decision table testing you get a framework that facilitates complete and accurate processing of rules and logic.
  2. Decision table testing helps identify test scenarios faster because of its simple and accurate tabular representation.
  3. Decision tables are quite easy to understand.
  4. Decision tables require little maintenance, and updating their contents is also very easy.
  5. With a decision table you can verify whether or not you have checked all the possible test combinations.

What portions are defined for decision table testing?


- Of all the black box testing methods, decision table testing is quite rigorous. 
- Nonetheless, decision tables provide a compact and precise way of modelling complex logic. 
- The following 4 portions have been defined for a typical decision table:
  1. Stub portion,
  2. Entry portion,
  3. Condition portion, and lastly
  4. Action portion.
- A "rule" is a column in the entry portion; it indicates which actions are to be taken for the conditions indicated in the condition portion of the table.
- In some decision tables all the conditions are binary; such decision tables are called "limited entry decision tables" (a small example follows). 
- On the contrary, if the conditions can have several values, the table is known as an "extended entry decision table". 
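
Since every condition in a limited entry decision table is binary, the table can be sketched directly as a lookup from condition combinations to actions. The login rule below is a hypothetical illustration, not drawn from any real system:

```python
# Hypothetical limited entry decision table for a login check.
# Conditions: (password is valid, account is locked) -- both binary.
decision_table = {
    (True,  False): "grant access",
    (True,  True):  "show 'account locked' message",
    (False, False): "show 'wrong password' message",
    (False, True):  "show 'account locked' message",
}

def login_action(valid_password, account_locked):
    return decision_table[(valid_password, account_locked)]

# Each of the 2^2 = 4 rules becomes one test case:
assert login_action(True, False) == "grant access"
assert login_action(False, True) == "show 'account locked' message"
```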

There is one disadvantage of decision table testing: 
It is very difficult to scale up the decision tables. 


Wednesday, June 13, 2012

What is a structure point and what are its characteristics in domain engineering?


Structure points are one of those terms with which most of us are rarely familiar. But structure points indeed play an important role in domain engineering. Those who know something about domain engineering might be familiar with the term. This article is all about structure points, their characteristics, and the role they play in domain engineering. 
But before moving on to structure points, you need to know at least a little about domain engineering.  

About Domain Engineering


Domain engineering consists of 3 primary phases namely:
  1. Domain analysis
  2. Domain design
  3. Domain implementation
- First, the application domain to be investigated is defined, and all the common and varying points obtained from the domain are categorized. 
- These common and varying points are represented by the domain model. 
- The representative applications that result from the domain analysis are analyzed, and based on this the domain model is prepared, which supports the development of the architecture of the software system or application. 
- This process results in the formation of another model called the structural model. 
- This structural model is said to consist of a small number of structural elements in which the interaction patterns can be clearly observed. 
- The architecture created in the previous steps can be reused wherever required across the whole domain.
- The structural model is where the structure points come into play! They act as distinct constructs within the structural model, covering aspects like:
  1. Interface
  2. Response mechanism
  3. Control mechanism etc.
Domain engineering promotes the reuse of components of existing software systems or applications. A repository of reusable components or artifacts is created.
Now moving on to the characteristics of structure points; they have three basic characteristics:

Characteristics of Structure Points


1. Structure points ought to implement the concept of information hiding by isolating all of their internal complexity. This helps greatly in reducing the overall perceived complexity of the software system or application.

2. With structure points, abstractions that have a limited number of instances within an application recur across all the applications in a domain. The size of the class hierarchy should be small for this characteristic to take effect. Also, if the abstraction does not occur in all parts of the software application, you will not be able to justify the cost of verifying, documenting, and disseminating the structure points.

3. The rules that govern the use of a structure point are very easy to understand, and the interface of the structure point is relatively simple.

What is Structural Modeling and what is the role of a structure point?


- Structural modelling is an essential approach to domain engineering and is facilitated by structure points. 
- Structural modelling is a pattern based approach that works upon the assumption that every application domain contains repeating patterns which can be effectively reused. 
- A structural model is composed of a number of structure points.
- These elements alone characterize the architecture of the software systems or applications. 
- Simple patterns of interaction among these structure points can result in the formation of many architectural units. 
- Thus, a structure point can be identified as a distinct construct within a structural model. 

Therefore, structure points can be characterized as follows:
  1. The number of instances of a structure point should be limited.
  2. The interface should be relatively simple.
  3. Information hiding must be implemented by the structure point by isolating all the complexity contained within it. 


Sunday, June 3, 2012

What is release planning and what is the need of release planning?


Release planning forms a very important part of the whole software development life cycle, and from the term itself you can make out that it is related to the release of the software product.

What is a Release Plan?


- A release plan is drawn up during the release planning meeting. 
- The purpose of the release planning meeting is to lay out the overall project plan. 
- The release plan is further used to plan the iterations and schedules of the other processes. 
- For every individual iteration, a specific iteration plan is designed keeping in mind the specifications of the release plan. 
- It is important that a balance be maintained between the technical aspect and the business aspect of a software project; otherwise development conflicts will arise and the developers will never be able to finish the software project on time. 
- So, to get a better release plan, it is important that all technical decisions are handled by the technicians and all business decisions are taken by the business people. 

How do you draw up a proper release plan?


- To draw up a proper release plan, it is important that these two classes of stakeholders coordinate properly. 
- In order to facilitate coordination between the two, a set of rules has been defined for release planning.
- These rules make it possible for each and every individual involved with the project to state his or her own decisions.
- This way, it becomes easy to plan a release schedule to which everyone can commit. 
- Otherwise, the developers will find it difficult to negotiate with the business people. 
- The essence of the release planning meeting lies in the proper estimation of all the user stories in terms of ideal programming weeks. 

What is an ideal programming week?


Now you must be wondering what an ideal programming week is. 
- An ideal programming week is defined as how long you imagine the implementation of a particular user story would take if there were nothing else to be done. 
- By 'nothing else' we do not mean a total absence of other activities! 
- It only means the absence of dependencies and extra work, but the presence of tests.

Factors on which a release plan depends are:


- The importance level of a user story is decided by the customer.
- He or she also decides how much priority is to be given to each user story regarding its completion. 
- There are two factors based upon which the release plan can be drawn:
  1. Scope or
  2. Time

Role of Project Velocity in Release Planning


- A measure called the "project velocity" helps with release planning. 
- This measure proves to be a great aid in determining the number of user stories that can be implemented before the completion deadline of the software project.
- Or, in terms of scope, project velocity helps determine the number of user stories that can be completed. 
- When the release plan is created according to scope, the total estimated weeks of the user stories is divided by the project velocity to obtain the total number of iterations that can be carried out before the due date of the project (a small worked example follows). 
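
As a small worked example (the figures below are made up for illustration), suppose the estimated user stories total 30 ideal programming weeks and the measured project velocity is 4 ideal weeks per iteration:

```python
import math

# Made-up figures: 30 ideal programming weeks of estimated user stories,
# and a measured project velocity of 4 ideal weeks per iteration.
total_estimated_weeks = 30
project_velocity = 4

# Dividing the total estimate by the velocity gives the number of
# iterations needed before the due date.
iterations_needed = math.ceil(total_estimated_weeks / project_velocity)
print(iterations_needed)  # -> 8 iterations
```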

Philosophy Underlying Release Planning


The philosophy that underlies release planning is that a software product can be quantified by the 4 variables mentioned below:
  1. Scope: it defines how much work is to be done.
  2. Resources: it states the number of people available.
  3. Time: it is the time of the release of the software product, and
  4. Quality: it defines how good the software is. 


Saturday, May 5, 2012

What is the development style used in test driven development?


Several aspects, approaches, and styles have been designed for carrying out the test driven development process. Adequate focus is kept on writing the code so that only the code necessary for passing the tests is produced. Such an approach makes sure that the program designs are simple, clear, and clean. 

Popular aspects used in TDD


Below mentioned are a few of the aspects that are quite popular among the programmers and developers using the test driven development methodology:

KISS:  
- It stands for 'Keep It Simple, Stupid' and states that keeping systems simple yields better outcomes than keeping them complex. 
- It defines simplicity as a key goal of program design and avoids unnecessary complexity.
- Some examples of failure to follow KISS are function creep, scope creep, and so on. 
- This principle of software programming was coined by Kelly Johnson.


YAGNI:  
- It stands for 'You Ain't Gonna Need It'. 
- Though this is one of the primary principles of extreme programming, it is followed in test driven development as well.
- According to this principle, functionality should not be added until it is really required.
- In other words, it says that functionality should be implemented only when it is actually needed, not merely when one foresees a need for it. 
- Adding functionality prematurely has a few drawbacks:
- The time required for it is taken away from other necessary work, such as implementing and testing the required features. 
- It calls for the debugging and documentation of the new features. 
- New features might impose constraints that can conflict with the working of a necessary feature in the future. 
- The program may experience code bloat, i.e., it may get bigger and more complex, and thus complicated. 
- A stricter revision control is needed. 
- Adding more and more features may cause a snowball effect, leading to creeping featurism.


Fake it till you make it:
- This aspect boosts the real confidence of developers, thus preventing them from getting stuck in self-fulfilling prophecies. 
- This technique can effectively combat the depression that many developers and programmers experience.

Tests are written to achieve the desired design of the software system or application, whether it is advanced or primitive. The code may pass all the tests while being simpler than the target pattern. This may sound odd at first, but it eventually helps the developer keep a sharp focus upon the important elements.

Whichever style is followed, there are two basic steps that must always be followed:

1. First write the tests: 
It is required that the tests are written first, before the functionality is actually implemented. This step is known for having two benefits:
       (a) It ensures that the application is worth testing, i.e., it provides testability to the application. The developer considers testing from the outset and does not need to worry about testing later.
       (b) It ensures that every feature and functionality has a unique test developed for it, so that the functionalities are tested as executable specifications.


2. First fail the test cases: 
This step is carried out in order to ensure the correctness of the test and its error detection ability. Once this is done, it becomes easy to implement the functionality. This step is the essence of test driven development. The following steps are constantly repeated (a minimal sketch follows):
(a) Adding test cases that fail.
(b) Writing code to pass them.
(c) Refactoring.
Productivity is also enhanced by following the above two steps. 
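
Here is a minimal red-green sketch of this cycle using Python's built-in unittest module; the toy `add` function is an illustrative assumption, not part of any particular project:

```python
import unittest

# Step (a): write a failing test first ("red"). Before the add() function
# below existed, running this test failed with a NameError -- exactly the
# failure step (a) requires.
class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step (b): write just enough code to make the test pass ("green").
def add(a, b):
    return a + b

if __name__ == "__main__":
    unittest.main()  # Step (c): refactor while the test stays green; repeat.
```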


Tuesday, April 17, 2012

Explain the concepts of syntax test technique?

Very few people are familiar with the term syntax test technique. We shall discuss this testing technique, but only after a brief discussion of the syntax of programming languages.

Every real world language has certain rules by which meaningful statements and sentences can be drafted from raw words. These rules are collectively known as the syntax of the language. Similarly, syntax also exists for computer programming languages.

About Syntax and Syntax Test Technique



- Each programming language has got its own unique syntax.

- The syntax is known to define the surface of a language.

- The type of syntax of a language depends on the type of the programming language, i.e., whether it is a text based language or a visual based language.

- Syntax forms a part of the semantics.

- The syntax test technique involves the process of parsing, i.e., the linear sequence of tokens is transformed into a hierarchical syntax tree.

- Parsing is an effort- and time-consuming process, but nowadays several automated tools are available for this purpose as well, and they are quite effective at generating parsers.

- These parsers are generated using the language grammar specification as stated in Backus-Naur form.

- Backus-Naur forms, together with the regular expressions of the lexicon, comprise the syntax of textual programming languages.

- There are other rules, called productions, which are used to classify the syntax into different categories.

- The syntax just describes whether or not the program is valid.

- It is the semantics which describe the meaning of the program.

- A syntactically correct program is not necessarily semantically correct as well.

Steps of Syntax Test Technique


A typical syntax testing technique consists of the following steps:

1. Identify the format of the target language.

2. Define the syntax of the target language in a formal notation such as Backus-Naur form.

3. Test the syntax under normal conditions by keeping the Backus-Naur form of the language under adequate coverage. This is the minimal requirement for carrying out a syntax test.

4. Test the garbage conditions, i.e., test the software system against invalid input data (see the sketch after this list). This condition has a high payoff, and automation is highly recommended.

5. Debug the whole software program.

6. Automate the test execution process. This is necessary since a lot of test cases are required for effective syntax testing.

7. For carrying out the whole process, the 4 most frequent wrong behaviours have been identified, as shown below:

(a) The recognizer fails to identify a good string.
(b) The recognizer accepts a bad string.
(c) The recognizer crashes or hangs during the recognition of good and bad strings.
(d) Any incorrectness in the Backus-Naur specification can spoil a good string and turn it into a bad one.

8. There should be a proper testing strategy, since all the strings cannot be tested.

9. Only one error should be generated and tested at a time.

10. First, all the single errors should be tested using specifically created test cases, then the double errors, and lastly the triple errors.

11. Your focus should be on one level at a time.
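
As a small sketch of steps 2, 3, and 4 (the toy grammar below, describing signed integers, is an illustrative assumption), valid strings can be generated from a Backus-Naur style grammar and then mutated to produce the single-error garbage inputs called for above:

```python
import random

# Toy Backus-Naur style grammar for signed integers,
# e.g. <number> ::= <sign> <digits>
grammar = {
    "<number>": [["<sign>", "<digits>"]],
    "<sign>":   [["+"], ["-"], [""]],
    "<digits>": [["<digit>"], ["<digit>", "<digits>"]],
    "<digit>":  [[c] for c in "0123456789"],
}

def generate(symbol="<number>"):
    # Recursively expand non-terminals; terminals are returned as-is.
    if symbol not in grammar:
        return symbol
    production = random.choice(grammar[symbol])
    return "".join(generate(s) for s in production)

def single_error(valid):
    # Inject exactly one error (step 9): replace one character with garbage.
    i = random.randrange(len(valid))
    return valid[:i] + "@" + valid[i + 1:]

good = generate()          # a valid string for normal-condition testing
bad = single_error(good)   # an invalid string for garbage testing (step 4)
print("good:", good, " bad:", bad)
```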

Dangers related to Syntax Testing


Certain dangers have also been identified related to syntax testing:

(a) It is quite common for testers to forget the normal cases.

(b) While testing, testers often go overboard with the testing combinations.

(c) Syntax testing is often taken very lightly, since it is pretty easy compared to structural testing.

(d) A lack of knowledge about the program can make you execute many unnecessary test cases. So it is better to make a thorough study of the program before you test it.


Tuesday, February 7, 2012

What are different kinds of risks involved in software projects?

When we create a development cycle for a project, we develop everything, like the test plan, documentation, etc., but we often forget about the risk assessment involved with the project.

It is necessary to know what kinds of risks are involved with the project. We all know that testing requires a great deal of time and is performed in the last stage of the software development cycle. Testing should therefore be categorized on the basis of priorities. And how do you decide which aspect requires higher priority? This is where risk assessment comes in.

Risks are uncertain and undesired events that can cause a huge loss. The first step towards risk assessment is the identification of the risks involved. There can be many kinds of risks involved with a project.

DIFFERENT KINDS OF RISKS INVOLVED

1. Operational Risk
- This is the risk involved with the operation of the software system or application.
- It occurs mainly due to faulty implementation of the system or application.
- It may also occur because of undesired external factors or events.
- There are several other causes; the main ones are listed below:

(a) Lack of communication among the team members.
(b) Lack of proper training regarding the concerned subject.
(c) Lack of sufficient resources required for the development of the project.
(d) Lack of proper planning for acquiring resources.
(e) Failure of the program developers to address conflicts between issues having different priorities.
(f) Failure of the team members to divide responsibilities among themselves.

2. Schedule Risk
- Whenever the project schedule falters, schedule risks are introduced into the software system or application.
- Such risks may even lead to a complete failure, harming the finances of the company.
- A project failure can badly affect the reputation of a company.
- Some causes of schedule risks are stated below:

(a) Lack of proper tracking of the resources required for the project.
(b) Sometimes the scope of the project may be extended for reasons that might be unexpected. Such unexpected changes can alter the schedule.
(c) The time estimation for each stage of the project development cycle might be wrong.
(d) The program developers may fail to identify the functionalities that are complex in nature, and they may also falter in estimating the time needed to develop these functionalities.

3. Technical Risks
- These types of risks affect the features and functionalities of a software system or application, which in turn affect the performance of the software system.
- Some likely causes are:

(a) Difficulty in integrating the modules of the software.
(b) No better technology is available than the existing ones, and the existing technologies are in their primitive stages.
(c) A continuous change in the requirements of the system can also cause technical risks.
(d) The structure or the design of the software system or application is very complex and therefore difficult to implement.

4. Programmatic Risk
- The risks that fall outside the category of operational risks are termed programmatic risks.
- These too are uncertain, like operational risks, and cannot be controlled by the program.
- Few causes are:

(a) The project may run out of funds.
(b) The programmers or the product owner may decide to change the priority of the product and also the development strategy.
(c) A change in government rules.
(d) Developments in the market.

5. Budget Risk
- These kinds of risks arise due to budget related problems.
- Some causes are:

(a) The budget estimation might be wrong.
(b) The actual project budget might overrun the estimated budget.
(c) Expansion of the scope might also prove to be a problem.

