
Sunday, December 16, 2018

Able to report defects in an agreed format

During the course of a software development project, one of the most critical workflows is the defect workflow. The coding team releases features and code to the testing team, which tests these features against its test cases. If there are defects, these are typically logged in a defect tracking system where their progress can be monitored and they can be tracked to closure (either with the defect being fixed and an updated feature released, or with the defect being closed as not to be fixed, or as not being a defect at all).
However, this is an area that leads to a lot of dispute. There can be significant discussions between the coding team and the testing team over what the severity and priority of a defect mean. From my experience, even if one were to define a standard for these terms across the organization, individual teams still need to work out their own precise definitions. Even more critical is that individual coders and testers also understand these terms; even though the criteria are subjective, they develop a level of understanding with their counterparts in the other team, so that when there is a dispute over how these terms apply to a specific defect, the individuals can work it out.
Even though I suggested some easy solutions in the paragraphs above, there are many complications that come about during the course of a software development project. For example, there can be senior coders who have a lot of heft and hence speak with a lot of authority to members of the testing team. I remember a case where a senior developer called a new tester and asked him to explain the defect he had raised - it was marked as very high severity, and the developer felt it was an edge case and should not have been marked that way. That discussion ended with a conclusion, but there have been other cases where the tester felt that they were right and resented the fact that the developer used his or her seniority to try and talk them down. These issues can become serious if they happen many times, and it may become necessary for a defect review committee or the respective team leads / managers to resolve these kinds of issues. Human nature being what it is, there will be teams with some individuals who get into these sorts of disputes, and they need to be resolved quickly.
For the above case, I remember one team which took a more drastic approach. They had set up a defect review committee that met once every few hours, and every new defect that was created had to be reviewed by the committee before it could be taken up for any action. Without trying to criticize, it did seem odd, because it meant the senior members who were part of the committee had to spend their time even on trivial defects that could in most cases have been discussed and resolved between the developer and the tester.
Another problem that kept happening at regular intervals was when a new member joined the team, whether through new hiring or through a transfer from another team. People from another team could sometimes cause more challenges, since they would have their own conception of the defect workflow and would find it hard to understand why this team had a different version of it. In these cases, some amount of hand-holding by a more senior member of the team really helps.
These cases can go on and on, but the basic idea is that there needs to be a spirit of discussion and cooperation between team members that helps everyone understand these workflows and follow them in a manner that reduces disputes.


Friday, September 27, 2013

What are the parameters of QoS - Quality of Service?

With the arrival of new technologies, applications and services in the field of networking, competition is rising rapidly. Each of these technologies, services and applications is developed with the aim of delivering QoS (quality of service) that is either at par with legacy equipment or better than it. Network operators and service providers operate under trusted brands, and maintaining these brands is of critical importance to their business. The biggest challenge is to put the technology to work in such a way that all customer expectations for availability, reliability and quality are met, while at the same time giving network operators the flexibility to adapt new techniques quickly.

What is Quality of Service?

- Quality of service is defined by a set of parameters which play a key role in the acceptance of new technologies. 
- The organization working on several specifications of QoS is ETSI.
- The organization has been actively participating in organizing inter-operability events regarding speech quality.
- The importance of QoS parameters has been increasing with the growing inter-connectivity of networks and the interaction between many service providers and network operators in delivering communication services.
- Quality of service gives you the ability to specify parameters on multiple queues in order to increase the performance and throughput of wireless traffic such as VoIP (voice over IP) and streaming media, including audio and video of different types. 
- This is also done for ordinary IP traffic over the access points.
- Configuring quality of service on these access points involves setting many parameters on the queues that already exist for various types of wireless traffic. 
- The minimum as well as the maximum wait times for transmission are also specified. 
- This is done through the contention windows. 
- The flow of traffic from the access point to the client station is affected by the AP EDCA (enhanced distributed channel access) parameters. 
- The flow of traffic from the client station to the access point is controlled by the station EDCA parameters.

Below we mention some parameters (a small configuration sketch follows the list):
Ø  QoS preset: The options listed are WFA defaults, optimized for voice, and custom.
Ø  Queue: For the different types of data transmitted between the AP and the client station, different queues are defined:
- Voice (data 0): Queue with minimum delay and high priority. Time-sensitive data such as streaming media and VoIP is automatically put in this queue.
- Video (data 1): Queue with minimum delay and high priority. Time-sensitive video data is automatically put into this queue.
- Best effort (data 2): Queue with medium delay and throughput and medium priority. This queue holds all traditional IP data. 
- Background (data 3): Queue with high throughput and the lowest priority. Bulky data that requires high throughput but is not time sensitive, such as FTP data, is queued up here.

Ø AIFS (arbitration inter-frame space): This puts a limit on the waiting time of the data frames. The wait time is measured in slots. Valid values lie in the range of 1 to 255.
Ø Minimum contention window (cwMin): This QoS parameter is supplied as input to the algorithm that determines the random back-off wait time for re-transmission.
Ø Maximum contention window (cwMax)
Ø Maximum burst
Ø Wi-Fi multimedia (WMM)
Ø TXOP limit
Ø Bandwidth
Ø Variation in delay
Ø Synchronization
Ø Cell error ratio
Ø Cell loss ratio
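
To make the queue parameters above concrete, here is a minimal sketch in Python of how per-queue AP EDCA settings might be represented; the structure and all numeric values are illustrative assumptions, not settings taken from any particular access point.

# A minimal sketch of per-queue AP EDCA parameters; the values are
# illustrative assumptions, not defaults from any particular access point.
ap_edca_parameters = {
    "voice (data 0)":       {"aifs": 1, "cw_min": 3,  "cw_max": 7,    "max_burst_ms": 1.5},
    "video (data 1)":       {"aifs": 1, "cw_min": 7,  "cw_max": 15,   "max_burst_ms": 3.0},
    "best effort (data 2)": {"aifs": 3, "cw_min": 15, "cw_max": 63,   "max_burst_ms": 0.0},
    "background (data 3)":  {"aifs": 7, "cw_min": 15, "cw_max": 1023, "max_burst_ms": 0.0},
}

for queue, params in ap_edca_parameters.items():
    # AIFS is measured in slots; cwMin/cwMax bound the random back-off window.
    print(queue, params)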



Saturday, August 24, 2013

How can the problem of congestion be controlled?

Networks often get trapped in a situation of what we call network congestion. To avoid such collapse, congestion avoidance and congestion control techniques are often used in networks nowadays. 

In this article, we discuss how the problem of network congestion can be controlled using these techniques. A few very common techniques are:
  1. Exponential back-off (used in CSMA/CA protocols and Ethernet; a short sketch follows this list.)
  2. Window reduction (used in TCP)
  3. Fair queuing (used in devices such as routers)
  4. The implementation of priority schemes is another way of avoiding the negative effects of this very common problem. Priority schemes let the network transmit packets with higher priority ahead of the others. In this way, the effects of network congestion are alleviated only for some important transmissions; priority schemes alone cannot solve the problem.
  5. Another method is the explicit allocation of network resources to certain flows. This is commonly used in CFTXOPs (contention-free transmission opportunities), which provide very high speed for LANs (local area networks) over existing coaxial cables and phone lines.
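
As a rough illustration of technique 1, here is a minimal sketch of binary exponential back-off; the slot time and the cap on the exponent are assumed values used purely for illustration.

import random

SLOT_TIME = 0.001    # duration of one back-off slot in seconds (assumed value)
MAX_EXPONENT = 10    # cap the window at 2**10 - 1 slots (assumed value)

def backoff_delay(attempt):
    """Random wait before the given retransmission attempt (binary exponential back-off)."""
    exponent = min(attempt, MAX_EXPONENT)
    slots = random.randint(0, 2 ** exponent - 1)   # the window doubles with each attempt
    return slots * SLOT_TIME

for attempt in range(1, 6):
    print(attempt, backoff_delay(attempt))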
- The main cause of the problem of network congestion is the limited capacity of the network. 
- This is to say that the network has limited resources. 
- These resources include the link throughput and the router processing time. 
- Congestion control is concerned with curbing the entry of traffic into the telecommunications network so that congestive collapse can be avoided. 
- Over-subscription of the link capacity is avoided and steps are taken to reduce the load on the network. 
- One such step is reducing the packet transmission rate. 
- Even though it sounds similar to flow control, it is not the same thing. 
- Frank Kelly is known as the pioneer of the theory of congestion control. 
- To describe the way in which network-wide rate allocation can be optimized by individuals controlling their own rates, he used two theories, namely convex optimization and microeconomics.

Some optimal rate allocation methods are (a small sketch of the first follows):
Ø  Max-min fair allocation
Ø  Kelly's proportional fair allocation
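
Here is a minimal sketch of max-min fair allocation: capacity is shared equally, no flow receives more than it demands, and any unused share is redistributed among the remaining flows. The capacity and per-flow demands in the example are made-up numbers.

# Max-min fair allocation sketch; demands and capacity are illustrative.
def max_min_fair(capacity, demands):
    allocation = [0.0] * len(demands)
    remaining = list(range(len(demands)))
    while remaining and capacity > 0:
        share = capacity / len(remaining)
        satisfied = [i for i in remaining if demands[i] <= share]
        if not satisfied:
            # No one can be fully satisfied: everyone left gets an equal share.
            for i in remaining:
                allocation[i] = share
            return allocation
        for i in satisfied:
            allocation[i] = demands[i]
            capacity -= demands[i]
            remaining.remove(i)
    return allocation

print(max_min_fair(10, [2, 8, 10]))   # -> [2, 4.0, 4.0]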

Ways to Classify Congestion Control Algorithm

There are 4 major ways for classifying the congestion control algorithms:
  1. Amount as well as type of feedback: This classification judges the algorithm on the basis of multi-bit or single-bit explicit signals, delay, loss and so on.
  2. The performance aspect taken for improvement: Includes variable-rate links, short-flow advantage, fairness, lossy links and so on.
  3. Incremental deployability: Whether modification is needed only by the sender, by both the sender and the receiver, only by the router, or by all three, i.e., the sender, the receiver and the router.
  4. Fairness criterion being used: It includes minimum potential delay, max-min, proportional fairness and so on.
Two major components are required for preventing network congestive collapse:
  1. An end-to-end flow control mechanism: This mechanism is designed so that end points respond to congestion and reduce their sending rate accordingly.
  2. A mechanism in routers: This mechanism is used for dropping or reordering packets under conditions of overload.

- Correct behavior of the end points is required to retransmit the dropped information. 
- This indeed slows down the information transmission rate. 
- If all the end points exhibit this kind of behavior, the congestion is lifted from the network. 
- Also, all the end points are then able to share the available bandwidth fairly. 
- Slow start is another strategy, used to ensure that the router is not overwhelmed by new connections before congestion can be detected. 


Wednesday, June 19, 2013

Explain the Priority CPU scheduling algorithm

A number of scheduling algorithms are available today, and each is appropriate for a different kind of scheduling environment. In this article we give a brief explanation of the 'priority CPU scheduling algorithm'. 

For those who are not familiar with this scheduling algorithm, a special case of the priority algorithm is the shortest job first scheduling algorithm (SJF). 

- This algorithm involves associating a priority with each and every thread or process. 
- Out of all the processes, the one with the highest priority is chosen and given to the processor for execution. 
- Thus, the priority decides which process has to be executed first. 
- There are cases when two or more processes have the same priority. 
- In such cases the FCFS (first come, first served) scheduling algorithm is applied. 
- The process that is first in the queue is then executed first. 
- The SJF is essentially a special case of the priority algorithm. 
- Here, the priority of a process (denoted by p) is simply taken as the inverse of its predicted next CPU burst. 
- This implies that if a process has a large CPU burst, its priority will be correspondingly low, and if the CPU burst is small, the priority will be high. 
- Numbers in some fixed range are used for indicating the priorities, such as 0 to 4095 or 0 to 7. 
- One thing to be noted is that there is no general agreement on whether the number 0 indicates the lowest priority or the highest priority.
- In some systems low numbers indicate low priorities, while in other systems low numbers mean higher priorities. 
- The latter case, i.e., using low numbers to represent high priorities, is more common.
For example, consider the 5 processes P1, P2, P3, P4 and P5 having CPU bursts of 10, 1, 2, 1 and 5 respectively, and priorities of 3, 1, 4, 5 and 2 respectively. Using the priority scheduling algorithm (with low numbers meaning high priority), the processes will be executed in the following order (a small sketch reproducing this order follows):
P2, P5, P1, P3, P4
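
A minimal sketch that reproduces this order, assuming (as above) that a lower number means a higher priority, that all processes arrive at the same time, and that FCFS breaks any ties:

# Non-preemptive priority scheduling for the example above.
processes = [
    # (name, cpu_burst, priority); lower priority number = higher priority
    ("P1", 10, 3),
    ("P2", 1, 1),
    ("P3", 2, 4),
    ("P4", 1, 5),
    ("P5", 5, 2),
]

# Sort by priority; the original position (FCFS) breaks ties between equal priorities.
order = sorted(enumerate(processes), key=lambda item: (item[1][2], item[0]))
print([p[0] for _, p in order])   # -> ['P2', 'P5', 'P1', 'P3', 'P4']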

There are two ways of defining the priorities i.e., either externally or internally. This gives two types of priorities:

  1. Internally defined priorities: These priorities make use of quantities that can be measured to compute a process's priority. These quantities include memory requirements, time limits, the ratio of I/O burst to CPU burst, the number of files and so on.
  2. Externally defined priorities: These priorities are defined by criteria that are external to the operating system. Such factors include the importance of the process, the amount of money paid, the department sponsoring the work, political factors and so on.
Priority scheduling can itself be divided into two types, namely preemptive and non-preemptive. The priority of the process arriving in the ready queue is compared with that of the currently executing process.

Ø  Preemptive priority scheduling: Here the CPU is preempted if the waiting process has a priority higher than that of the currently executing process.
Ø  Non-preemptive priority scheduling: Here the new process is left waiting in the ready queue until the execution of the current process is complete.


Starvation, or indefinite blocking, presents a major problem in priority scheduling. A process is considered blocked if it is ready for execution but has to wait for the CPU. It is very likely that low-priority processes will be left waiting indefinitely for the CPU. In a system that is heavily loaded most of the time, if the number of high-priority processes is large, the low-priority processes may be prevented from ever getting the processor. 


Saturday, April 20, 2013

Explain the concepts of threads and processes in an operating system?


Threads and processes are an important part of operating systems that have multi-tasking and parallel programming features. Both come under the umbrella concept of 'scheduling'. Let us try to understand these concepts with the help of an analogy.

- Consider a process to be a house and its threads to be its occupants. 
- The process is then like a container having many attributes. 
- These attributes can be compared to those of a house, such as the number of rooms, floor space and so on. 
- Despite having so many attributes, the house is a passive thing, which means it cannot do anything on its own. 
- The active elements in this situation are the occupants of the house, i.e., the threads. 
- The various attributes of the house are actually used by them. 
- Since you too live in a house, you have an idea of how it actually works and behaves. 
- You can do whatever you like in the house if you are the only one there. 
- What if another person starts living with you? You just can't do anything you want to do. 
- You cannot use the washroom without making sure that the other person is not in there. 
- This can be related to multi-threading. 
- Just as a plot of land is occupied by the house, an amount of memory is occupied by the process. 
- Just as the occupants are allowed to freely access anything in the house, the occupied memory is utilized by the threads that are a part of that process, i.e., access to the memory is common. 
- If memory is allocated within the process, it can be accessed by all of its threads. 
- If such a thing is happening, it has to be made sure that access to that memory from all the threads is synchronized. 
- If it cannot be synchronized, then effectively the memory has been allocated to a specific thread. 
- In reality, things are a lot more complicated, because at some point everything has to be shared. 
- If one thread wants to use a resource that is already in use by some other thread, then that thread has to follow the concept of mutual exclusion. 
- An object known as a mutex is used by the thread to achieve exclusive access to that resource. 
- A mutex can be compared to a door lock. 
- Once a thread locks it, no other thread can use that resource until the mutex is unlocked again by that thread. 
- A mutex is one resource that a thread uses. 
- Now, suppose there are many threads waiting to use the resource when the mutex is unlocked; the question that arises is which one will be the next to use the resource. 
- This problem can be solved by deciding either on the basis of length of wait or on the basis of priority. 
- Suppose there is a memory location that can be accessed by more than one thread simultaneously.
- You want to have only a limited number of threads using that memory location at any given point of time. 
- This problem cannot be solved by a mutex but by another primitive called a semaphore. 
- A semaphore with a count of 1 allows only one thread at a time to use the resource. 
- With a semaphore of greater count, more threads can access it simultaneously.  
- It just depends upon how you characterize or set the lock (a small sketch follows).
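
As a small illustration of the mutex and semaphore ideas in the analogy above, here is a minimal sketch using Python's threading module; the names, counts and sleep times are arbitrary choices for the example.

import threading
import time

washroom = threading.Lock()           # the "door lock": one occupant at a time
guest_rooms = threading.Semaphore(2)  # a resource that two threads may use at once

def occupant(name):
    with washroom:                    # mutual exclusion: only one thread inside
        time.sleep(0.01)
        print(name, "used the washroom")
    with guest_rooms:                 # up to two threads may hold this together
        time.sleep(0.01)
        print(name, "is in a guest room")

threads = [threading.Thread(target=occupant, args=(f"thread-{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()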


Wednesday, April 17, 2013

What are Real-time operating systems?


- An RTOS, or real-time operating system, is developed with the intention of serving application requests that occur in real time. 
- This type of operating system is capable of processing data as and when it comes into the system. 
- It does this without any buffering delays. 
- Processing time requirements are measured in tenths of seconds or on an even smaller scale. 
- A key characteristic of a real-time operating system is that the amount of time it takes to accept and process a given task remains consistent. 
- The variability is so small that it can be ignored.

Real-time operating systems are of two types, as stated below:
  1. The soft real-time operating system: It produces more jitter.
  2. The hard real-time operating system: It produces less jitter when compared to the previous one.
- Real-time operating systems are driven by the goal of giving guaranteed hard or soft performance rather than just producing a high throughput. 
- Another distinction between these two operating systems is that the soft real-time operating system can generally meet a deadline, whereas the hard real-time operating system meets a deadline deterministically.
- For scheduling purposes, some advanced algorithms are used by these operating systems. 
- Flexibility in scheduling has many advantages to offer, such as a wider scope for computer-system orchestration of process priorities.
- But a typical real time OS dedicates itself to a small number of applications at a time. 
- There are 2 key factors in any real-time OS, namely:
  1. Minimal interrupt latency and
  2. Minimal thread switching latency.
- Two types of design philosophies are followed in designing real-time OSs:
  1. Time-sharing design: As per this design, the tasks are switched based upon a clocked interrupt and on events, at regular intervals. This is also termed round-robin scheduling.
  2. Event-driven design: As per this design, switching occurs only when an event of higher priority demands service. This is why it is also termed priority scheduling or preemptive priority.
- In the former design, the tasks are switched more frequently than what is strictly required, but this proves to be good at providing a smooth multi-tasking experience. 
- This gives each user the illusion that he/she is solely using the machine. 
- Earlier CPU designs needed several cycles to switch a task, and while switching, the CPU could not perform any other task. 
- This is why early operating systems avoided unnecessary switching in order to save CPU time. 
- Typically, in any design there are 3 states of a task:
  1. Running or executing on CPU
  2. Ready to be executed
  3. Waiting or blocked for some event
- Most of the tasks are kept in the second and third states because the CPU can run only one task at a time. 
- The number of tasks waiting in the ready queue may vary depending on the running applications and the type of scheduler being used by the CPU. 
- On multi-tasking systems that are non-preemptive, one task might have to give up its CPU time to let the other tasks execute. 
- This can lead to a situation called resource starvation, i.e., there are more tasks to be executed than the available resources can serve (a small sketch of the task states and preemptive dispatch follows).
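
As a minimal, illustrative sketch (the task names and priorities are made up), the three task states and the event-driven, preemptive-priority idea could be modeled like this:

from enum import Enum

class State(Enum):
    RUNNING = 1   # executing on the CPU
    READY = 2     # ready to be executed
    BLOCKED = 3   # waiting for some event

tasks = {  # name -> [priority, state]; a lower number means a higher priority here
    "control_loop": [1, State.BLOCKED],
    "logger":       [5, State.READY],
    "ui_refresh":   [3, State.RUNNING],
}

def on_event(task_name):
    """An event readies a task; the highest-priority ready task preempts the running one."""
    tasks[task_name][1] = State.READY
    candidates = [n for n, (_, s) in tasks.items() if s in (State.READY, State.RUNNING)]
    next_task = min(candidates, key=lambda n: tasks[n][0])
    for n in candidates:
        tasks[n][1] = State.RUNNING if n == next_task else State.READY
    print("now running:", next_task)

on_event("control_loop")   # -> now running: control_loop (it preempts ui_refresh)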


Monday, December 3, 2012

What is trace-ability alert? How to trigger a trace-ability alert in Test Director?


A trace-ability alert is the process of sending e-mails in order to notify the people responsible whenever a change is made to the project. This can be done by instructing Test Director to create an alert whenever a change occurs and to send e-mails appropriately. One's own follow-up alerts can also be added. 
There are certain rules, called trace-ability notification rules (based upon the associations made in Test Director among tests, requirements and defects), which are activated by the Test Director administrator to generate automatic trace-ability alerts.

On what occasions is a trace-ability alert issued?

Test Director can generate trace-ability alerts only for the following cases:
  1. Whenever a requirement changes (except for a change of status), the designer of the associated tests is notified by Test Director.
  2. Whenever a requirement having an associated test changes, all the project users are notified by Test Director.
  3. Whenever a defect status changes to 'fixed', the responsible tester of the associated test is notified by Test Director.
  4. Whenever a test run is successful, the user assigned to the associated test is notified by Test Director.

Steps to trigger trace-ability alert

  1. Log on to the project as a different user.
  2. Click on the test plan tab to turn on the test plan module, which will display the test plan tree. Expand the concerned subject folders and select the required test. A designer box displaying the user name is seen in the details tab in the right pane. One thing to be noted is that whenever an associated requirement changes, the trace-ability notification is viewed only by the designer.
  3. Click on the requirements tab to turn on the requirements tree, and make sure that it is in the document view.
  4. Among the requirements, choose the one that you want to change.
  5. To change the priority of the requirement, click on the priority down arrow and select the required priority. This will cause Test Director to generate an alert for the test associated with the requirement selected above. Also, an e-mail will be sent to the designer who designed this test.
  6. When you are done, log out of the project by clicking on the log out button present on the right side of the window.

How to view a trace-ability alert?

This trace-ability change can be viewed for a single entity or for all the entities in the project. Here, by entity we mean a test, a defect or a test instance. To view the trace-ability alert, follow the steps mentioned below:
  1. Log on to the project as the designer of the test.
  2. Click on the test plan tab to view the test plan tree. Expand the subject folders to display that test. You will see that the test has a trace changes flag, which is an indication that a change was made to the requirement associated with it.
  3. Clicking on the trace changes flag for the test will enable you to view the trace-ability alert. The trace changes dialog box will open up. Clicking on the requirement link will make Test Director highlight that particular requirement in the requirements module.
  4. To view all of the trace-ability alerts, click on the trace all changes button in the common Test Director tool bar. A dialog box listing all the trace-ability changes will open up.
  5. Once done, close the dialog box. 


Tuesday, August 7, 2012

Financial Prioritization - Ways to evaluate a cash flow stream?

Financial analysis of themes helps in prioritization because for most organizations the bottom line is the amount of money earned or saved. It is usually sufficient to forecast revenue and operational efficiencies for the next two years. One can always look ahead, however, if necessary.

A good way of modeling the return from a theme is to consider the revenue it will generate from new customers, from current customers buying more copies or additional services, from customers who might have otherwise gone to a competitive product, and from any operational efficiencies it will provide.

Money earned or spent today is worth more than the money earned or spent in future. To compare a current amount with a future amount, the future amount is discounted back into a current amount. The current amount is the amount that could be deposited in a bank or into some other relatively safe investment and that would grow to the future amount by the future time.

What are four ways to evaluate a cash flow stream?


The four good ways to evaluate a cash flow stream are:
- Net present value (NPV): 
Using this method to prioritize themes has the advantages of being easy to calculate and easy to understand. The primary disadvantage of NPV is that comparing the values of two different cash flow streams can be misleading.

- Internal rate of return (IRR) or return on investment:
It provides a way of expressing the return on a project in percentage terms. IRR is a measure of how quickly the money invested in a project will increase in value. Usually, IRR is not used in isolation.
There are a couple of disadvantages of IRR. First, the calculation is hard to do by hand, so the result may be met with more distrust by some. Second, IRR cannot be calculated in all situations.

- Payback period: 
NPV looks at a cash flow stream as a single, present-value amount. IRR looks at a cash flow stream as an interest rate. The payback period looks at a cash flow stream as the amount of time required to earn back the initial investment.
The payback period has two primary advantages when comparing and prioritizing themes. First, its calculation and interpretation are straightforward. Second, it measures the amount and duration of the financial risk taken on by the organization.
The first disadvantage of the payback period is that it fails to take into account the time value of money. The second disadvantage is that it is not a measure of the profitability of a project or theme.

- Discounted payback period:
To remedy the first drawback of the payback period, simply apply the appropriate discount factor to each item in the cash flow stream.

By calculating these values for each theme, the product owner and team can make intelligent decisions about the relative priorities of the themes.
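
As a rough sketch of these calculations (the cash flows and the 2% per-period discount rate are hypothetical numbers, not figures from the post):

def npv(cash_flows, rate):
    """Net present value of a stream, where cash_flows[0] is today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Number of periods until the cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None  # the investment is never earned back

def discounted_payback_period(cash_flows, rate):
    """Payback period computed on discounted cash flows."""
    discounted = [cf / (1 + rate) ** t for t, cf in enumerate(cash_flows)]
    return payback_period(discounted)

# Hypothetical theme: invest 100 now, earn 30 per quarter for 8 quarters.
flows = [-100] + [30] * 8
print(npv(flows, rate=0.02))                        # NPV at 2% per quarter
print(payback_period(flows))                        # 4 quarters
print(discounted_payback_period(flows, rate=0.02))  # also 4 quarters in this example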


Saturday, August 4, 2012

What factors are needed to prioritize themes?

The needs of users are considered before planning a project. To achieve the best combination of product features, schedule, and cost requires deliberate consideration of the cost and value of the user stories and themes.
We need to prioritize and this responsibility is shared among the whole team. Individual user stories or features are aggregated into themes. Stories and themes are then prioritized relative to one another for the purpose of creating a release plan.

There are four primary factors to be considered when prioritizing:
1. The financial value of having the features.
2. The cost of developing new features.
3. The amount and significance of learning and new knowledge created by developing the features.
4. The amount of risk removed by developing features.

1. Determine the Value of a Theme
- Estimate the financial impact of the theme over a period of time.
- It can be difficult to estimate the financial return on a theme.
- It usually involves estimating the number of new sales, the average value of a sale and so on.

2. Determine the Cost of Developing New Features
- The estimated cost of a feature is a huge determinant of its overall priority.
- The best way to reduce the cost of change is to implement a feature as late as possible.
- The best time to add a feature is when there is no more time left to change it.
- Many themes seem worthwhile when viewed only in terms of the time they will take.
- It is important to keep in mind that time costs money.
- The best way to account for this while prioritizing is to do a rough conversion of story points or ideal days into money (as sketched below).
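
A minimal sketch of that rough conversion; the team cost, velocity and theme size are hypothetical figures used only for illustration:

team_cost_per_iteration = 40_000   # assumed fully loaded cost of the team per iteration
velocity = 20                      # assumed story points completed per iteration

cost_per_story_point = team_cost_per_iteration / velocity   # 2,000 per point

theme_size_in_points = 35          # assumed estimate for the theme
estimated_theme_cost = theme_size_in_points * cost_per_story_point
print(estimated_theme_cost)        # 70000.0; weigh this against the theme's value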

3. Learning New Knowledge
The knowledge that a team develops can be classified in two areas:
Product Knowledge
- It is the knowledge about what will be developed.
- It includes knowledge about the features that are included and the features that are not.
- Better knowledge of the product will help the team make better decisions.

Project Knowledge
- It is the knowledge about how product will be created.
- It includes knowledge about technologies, skills of developers, functioning of team together etc.
- The other side of acquiring knowledge is reducing uncertainty.

4. Risk
- A risk is anything that has not happened yet but might happen and that would threaten or limit the success of the project.
- The types of risks involved in a project are: schedule risk, cost risk and functionality risk.
- There is a tension between the high-risk and the high-value features of a project.
- Each approach has its drawbacks, and the only solution is to give neither risk nor value total supremacy when prioritizing.

All these factors are combined by thinking first of the value and cost of the theme. Doing so will sort the themes into an initial order. Themes can then be moved forward or back in this order based on the other factors.


Wednesday, June 6, 2012

List out the differences between extreme programming and scrum?


Scrum development and extreme programming, as we all know, are two very popular agile software development processes. 
The common thing between the two is that both are agile processes, but on the other side there are a lot of differences too! 
In this article we are going to discuss exactly that, i.e., what are the differences between extreme programming and scrum? There is no doubt that the two processes are very well aligned with each other! 
It often happens while following one of these processes that, if you have been following extreme programming, you will feel as if you had been following scrum throughout the development process, and vice versa.

Differences between Scrum & Extreme Programming


The differences between extreme programming and scrum are quite subtle, but they are important and they distinguish the two processes from each other. Mentioned below are some of the differences between scrum and extreme programming:

Difference #1:
- The first subtle difference between the two is the duration of the iterations.
- In extreme programming the iterations are usually short, i.e., one or two weeks long. 
- On the other hand, scrum iterations are longer and may range from several weeks to months. The sprint is the name given to an iteration carried out in the scrum methodology.

Difference #2:
- In scrum, changes are not supported during the development process, and no changes are allowed within an iteration or sprint. 
- Once the sprint planning meeting is completed and it is decided to deliver a set of product backlog items, the plan is not altered till the very end of the sprint. 
- In contrast to scrum, extreme programming iterations are known to be quite flexible, since they allow changes to be incorporated into the iterations. 
- But changes can be made only if the team has not started working on the particular feature, and the feature being brought in must be of equivalent size to the feature it replaces, which has not yet been built.

Difference #3:
- Extreme programming is known to work strictly in the order of the priority assigned to the various features and aspects. 
- These features and functionality have already been prioritized by the customers. 
- The customer is not considered to be the sole owner of the extreme programming product, whereas in scrum it is so considered. 
- In scrum development, the features are not prioritized by the customer; instead, the customer prioritizes the product backlog items.
- It is then the responsibility of the development team to determine the sequence in which the backlog items will be developed. 
- In extreme programming, development always starts with the most highly prioritized features and functionality.
- In scrum, the developers may decide that starting with a highly prioritized feature is not always the best option and that working on lower-priority items first sometimes makes more sense.

Difference #4:
- Under the scrum development process, no particular engineering practices are prescribed. 
- But extreme programming does prescribe some engineering practices, like TDD or test-driven development, automated testing, simple design, pair programming, refactoring and so on. 
- But following the same practices is not always desirable! Teams should be able to incorporate their own values.

The above-mentioned differences are quite small, but they do make a considerable difference and can have a profound impact on the development team. 
Extreme programming works well when its practices are not mandated and the teams are left to discover their own practices. The scrum methodology, when worked out with time-boxed iterations and additional focus, can work wonders. 


Monday, April 16, 2012

What is the difference between priority and severity?

Severity and priority have always been the most effective measures to characterize a bug. Severity and priority levels have been in use for a long time by most software development organizations and standards as measures of the degree of harm that can be done by a bug.

"The severity of a bug is an indication of its harmfulness or badness whereas the priority level of a bug is an indication of the urgency for fixing it."

Role of Testing Team



- The testing team is responsible for setting up the severity as well as the priority levels for the bugs.
- The way they set severity and priority levels should be meaningful, easy to understand, consistent and, of course, reasonable!
- A bug, as we all know, is a defect in the coding of a software program which hinders its development and prevents it from meeting the expectations of the users.

About Priority



- Priority is a way of rating the urgency of fixing or correcting a bug, which in turn also implies its importance.
- Priorities are set keeping in mind the goals and the specifications of the software project.
- Priority has nothing to do with the quality of the software product as affected by the bug.
- Priority is essentially a function of the class and severity of the bug.

About Severity



- Severity is a means of measuring the harm and disruption caused by the bug to the functioning of the software system or application.
- It is the severity that gives information regarding the impact as well as the visibility of the bug on the quality of the software product and its functioning.
- Severity is an effective means of rating the impact of a bug on the quality of a software product as it is perceived.
- Therefore, the severity of bugs should be assigned very carefully.
- It is a commonly observed issue that the simple concept of severity is made over-complicated due to a lack of data.

Sub-Components of Severity



Severity has many sub-components, but on the whole only two are considered:

1. Visibility
Visibility gives the probability of the bug occurring again in the future in a particular feature or functionality of the program.

2. Impact
Impact gives the measure of the disruption caused to the user of the system when the bug is encountered.

How are Severity and Priority Calculated?



- The total severity is calculated as an average of the values of the visibility and impact factors (a small sketch follows).
- This characteristic of severity allows us to view it as a measure of the quality of the software product as it is perceived.
- Using priority as a quality indicator is a great mistake.
- Unlike severity, priority is measured as an attribute of the bug and of the defined goals of the project.
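
A minimal sketch of that calculation; the 1-to-5 scales and the sample values are assumptions made for illustration, not figures from any particular tool:

def severity(impact, visibility):
    """Total severity as the average of impact and visibility (both on an assumed 1-5 scale)."""
    return (impact + visibility) / 2

print(severity(impact=4, visibility=2))   # -> 3.0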

Priority on the basis of Urgency



- Some testers define priority on the basis of urgency, which is absolutely the wrong way around.
- The urgency should be defined on the basis of the priority level.
- Urgency is directly proportional to priority.
- The priority levels range from 1 to 7, where 1 is the highest priority level and 7 is the lowest.
- By default the priority of each discovered bug is 3.
- It would be better if one priority scheme were followed across all projects, i.e., a global priority scheme.
- Priority tells us how important or urgent it is to get the bug fixed.

In some cases it happens that the going gets tough without rectifying the bugs at the lower levels. Such situations demand the highest priority.


Thursday, February 23, 2012

What is meant by severity of a bug? What are different types of severity?

We all know what a software bug is! It is a flaw, error or mistake in a software system or application that can cause it to crash or fail. Pretty simple!
But very few of us are actually aware of the severity of a bug, i.e., how much destruction it can cause to a software system or application.

- Bugs are, of course, the result of mistakes made by the software programmers and developers while coding the software program.
- Sometimes incorrect compilation of the source code can also cause bugs.
- A buggy program is very hard to clean up.
- Bugs can have a chain reaction too, i.e., one bug giving rise to another and that in turn giving rise to one more bug, and so on.
- Each bug has its own level of severity in terms of the harm it causes to the software system or application.
- While some bugs can cause the total destruction of the program, there are some bugs that are never even detected.
- Some bugs can cause the program to go out of service.
- In contrast to these harmful bugs, there are other bugs which are useful, such as security bugs.

WHAT IS SEVERITY OF A BUG & ITS TYPES

-"Severity can be thought of as a measure of the harm that can be caused by a bug."
- Severity is an indication of how bad or harmful a bug is.
- The higher the severity of a bug, the more priority it seeks.
- Severity of the bugs of software can sometimes be used as a measure of its overall quality.
- Severity plays a major role in deciding the priority of fixing the bug.
- It is important that the severity of the bugs is assigned in a way that is logical and easy to understand.

There are several criteria depending on which the severity of a bug is measured. Below mentioned is one of the most commonly used ranking scheme for measuring severity of bugs:

1. Severity 1 Bugs
Bugs coming under this category bring to a halt the meaningful operations being performed by a software program or application.

2. Severity 2 Bugs
Bugs coming under this category cause the failure of software features and functionalities, but the application still continues to run.

3. Severity 3 Bugs
Bugs coming under this category can cause the software system or application to generate unexpected results and behave abnormally. These bugs are responsible for inconsistency in the software system.

4. Severity 4 Bugs
Bugs coming under this category basically affect the design of a software system or application.

COMPONENTS OF SEVERITY
Severity has two main components namely the following:

1. Impact
- It is a measure of the disruption that is caused to users when they encounter the bug while working.
- It reflects the level to which the bug interferes with the user performing a task.
- The impact itself is classified into various levels.

2. Visibility
- It is the measure of the probability of encountering the bug in the future; we can say it is a measure of how close the bug is to the main execution path.
- It reflects the frequency of occurrence of the bug.

The severity is calculated as the product of the impact and the visibility. Severity gives a measure of the perceived quality and usefulness of the software product. Therefore, it would not be wrong to say that severity provides an overall measure of the quality of the software system or application.


Tuesday, February 7, 2012

What are different kinds of risks involved in software projects?

When we create a development cycle for a project, we develop everything like the test plan, documentation, etc., but we often forget about the risk assessment involved with the project.

It is necessary to know what kinds of risks are involved with the project. We all know that testing requires a lot of time and is performed in the last stage of the software development cycle. Here the testing should be categorized on the basis of priorities. And how do you decide which aspect requires higher priority? This is where risk assessment comes in.

Risks are uncertain, undesired events that can cause a huge loss. The first step towards risk assessment is the identification of the risks involved. There can be many kinds of risks involved with a project.

DIFFERENT KINDS OF RISKS INVOLVED

1. Operational Risk
- This is the risk involved with the operation of the software system or application.
- It occurs mainly due to faulty implementation of the system or application.
- It may also occur because of some undesired external factors or events.
- There are several other causes and main causes are listed below:

(a) Lack of communication among the team members.
(b) Lack of proper training regarding the concerned subject.
(c) Lack of sufficient resources required for the development of the project.
(d) Lack of proper planning for acquiring resources.
(e) Failure of the program developers to address conflicts between issues having different priorities.
(f) Failure of the team members in dividing responsibilities among themselves.

2. Schedule Risk
- Whenever the project schedule falters, schedule risks are introduced into the software system or application.
- Such risks may even lead to a complete failure of the project, hurting the finances of the company.
- A project failure can badly affect the reputation of a company.
- Some causes of schedule risks are stated below:

(a) Lack of proper tracking of the resources required for the project.
(b) Sometimes the scope of the project may be extended due to certain unexpected reasons. Such unexpected changes can alter the schedule.
(c) The time estimation for each stage of the project development cycle might be wrong.
(d) The program developers may fail to identify the functionalities that are complex in nature, and they may also falter in deciding the time period needed to develop these functionalities.

3. Technical Risks
- These types of risks affect the features and functionalities of a software system or application which in turn affect the performance of the software system.
- Some likely causes are:

(a) Difficulty in integrating the modules of the software.
(b) No better technology is available than the existing ones, and the existing technologies are in their primitive stages.
(c) A continuous change in the requirements of the system can also cause technical risks.
(d) The structure or design of the software system or application is very complex and therefore difficult to implement.

4. Programmatic Risk
- The risks that fall outside the category of operational risks are termed programmatic risks.
- These too are uncertain, like operational risks, and cannot be controlled by the program.
- Few causes are:

(a) The project may run out of funds.
(b) The programmers or the product owner may decide to change the priority of the product and also the development strategy.
(c) A change in government rules.
(d) Developments in the market.

5. Budget Risk
- These kinds of risks arise due to budget related problems.
- Some causes are:

(a) The budget estimation might be wrong.
(b) The actual project budget might overrun the estimated budget.
(c) Expansion of the scope might also prove to be a problem.

