Wednesday, July 31, 2013

Project schedule: A team member departs and a feature is at risk - what do you do?

This is the kind of situation that no project manager wants to land in. You are running a tight project, and like any other project, there is some amount of tension in it (it has always been my understanding, and that of my managers as well, that if everything is going smoothly in a project, there is something wrong with the planning; some amount of tension is necessary for the team to work at full capacity). You are confident that with effective project management, which includes solid risk and issue management, you will be able to ensure that incoming issues that could imperil the project schedule are handled well, and that issues beyond your control are escalated to the right set of stakeholders for the next action.
However, some circumstances can cause a lot of tension in a project, such as the team falling behind in the implementation of the initially agreed set of features. At the start of a project, the team and the Product Manager typically agree on a set of features to be implemented during the project schedule. This set also includes a minimal list of important features that need to be implemented without fail for the product to be deemed worthy of release.
Now, the team is implementing these features and is in the second half of the schedule. Some of the features have been implemented, but critical work remains. At this stage, one of the team members working on an important feature has to leave - whether due to attrition, or because of a personal emergency. You are now in a situation where one of the features deemed necessary for the release is at risk, and you need to figure out what to do. Here are some possible options, some of which will work while others will not:
- If the person is leaving because of attrition, the departure date can be negotiated so that the person finishes the work before leaving.
- Whether the person is leaving because of an emergency or due to attrition, a transition from the departing team member to the replacement needs to happen; if the amount of pending work is small, this can happen very quickly.
- If the team is a bit ahead of schedule, it may still be possible to get the important feature done without stopping any other work. If this does not seem possible, it becomes important to have the relevant discussion about dropping one of the less important features so that the more important feature gets completed.
- If little time is left and the completion of the important feature is at risk, it is important to have a conversation with the stakeholders about bringing in experienced team members from another team to get the work done.
In all such cases, it is important to review the current situation, determine the resources required to complete the important feature, and have a discussion with all the stakeholders. In the extreme situation, where the feature absolutely must be done and the schedule is at risk, the schedule may need to be extended to ensure that it gets done.


Monday, July 29, 2013

Working with vendors: Asking for a weekly status report

When the team works with vendors, there is always an element of doubt regarding the capabilities of the vendor's team. In many cases, this is not because the vendor team's capabilities are any less, but because every product or project is different from the others, and it takes time for the vendor team to reach the same level as the core team. This may not always happen; there may not be enough time during the course of the project for the vendor team to come anywhere close to that level. That is one case, but there are others: the coordinator on the vendor side may not be competent, or there may be other reasons that leave the client team feeling there is a problem with the way the vendor team is executing the project.
A number of these problems arise because of coordination and communication issues, and it is important that such matters be resolved; there should be a communication protocol set up to ensure that such matters don't cause conflict between the teams. There are several methods for maintaining a regular communication process between these teams:
- Senior leads from both teams should set up a regular meeting for discussing issues (in my experience, this was a once-a-week meeting that could be cancelled if there were no issues - it was a big help in quickly reaching conclusions on open items)
- A regular status meeting between the managers of both teams (such a meeting ensured that escalated issues were discussed and action items decided for resolving them; in my experience, this was also a weekly meeting that could be cancelled if not needed)
- The simplest way that we devised to highlight current status, ongoing items and open issues was a weekly status report. We discussed this with the managers and leads of the vendor teams, and then worked out a format which covered all these status items. For example, if there was an issue that needed to be highlighted from the vendor team, they would put it in the report along with the other items, and the report was circulated to the entire team. This ensured that everyone knew what the vendor team was doing, what the next items on their schedule were, and what major issues they were facing. It also prompted team members to flag items where they had a different understanding from what the vendor had communicated, which quickly led to a resolution of issues.
We asked the vendor team to ensure that this report was available every Monday afternoon, covering all the items from the previous week; on the odd occasion when team members were working over the weekend, those items were also incorporated in the report. A side benefit was that these reports conveyed an impression of the amount of work being done by the vendor teams, which served as a subjective cross-check during the billing process.
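To make the format concrete, here is a minimal sketch of what such a weekly status report could look like in code; the fields and sample items are hypothetical, not the exact format we used:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical report structure: completed items, upcoming items, and
# issues to highlight, circulated every Monday afternoon.
@dataclass
class WeeklyStatusReport:
    week_ending: date
    completed_items: list = field(default_factory=list)
    upcoming_items: list = field(default_factory=list)
    issues: list = field(default_factory=list)  # items needing attention

    def render(self) -> str:
        lines = [f"Vendor status report - week ending {self.week_ending}"]
        for title, items in [("Completed last week", self.completed_items),
                             ("Upcoming this week", self.upcoming_items),
                             ("Issues to highlight", self.issues)]:
            lines.append(f"{title}:")
            lines += [f"  - {item}" for item in items] or ["  (none)"]
        return "\n".join(lines)

report = WeeklyStatusReport(date(2013, 7, 29),
                            completed_items=["Module X test pass complete"],
                            issues=["Build delivery slipped by a day"])
print(report.render())
```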


Project Management - Writing down the issues as they come to you ..

One of the most difficult items that came up during my experience as a project manager was collating issues for later use. For example, there could be an issue with a team member's productivity, or issues with a vendor about their quality standards, or even about their responsiveness to an email sent for a query, and so on. Numerous items like this pop up during the course of a project, and the project manager has to resolve them on an ongoing basis. This is the typical life of a project manager.
However, I found that even though these items were resolved on a regular basis, the lack of capturing them in a detailed way made things difficult later. A prime example of this was our interaction with an external team. We went through a post-mortem with the team after a cycle in which the external team had caused us some amount of grief. There were deliveries that were not on schedule, and the quality level of one of the deliveries was bad enough that it was rejected (although getting to the point of rejection took a couple of testers around 2 days, time we did not really want to spend on a delivery of an external component).
Similarly, at the end of the cycle (and sometimes during the cycle) there would be the need to give feedback on team members, typically to other managers, based on the cycle and the quality of their work. However, in all these cases, there were problems. When you are going through a busy cycle, how often do you really remember what happened in specific instances? And there is plenty of advice that you should not look too much to the past but rather towards the future.
So what would you do? I would typically keep a specific file listing issues where I thought some later feedback would be required, or where I felt that the person I was corresponding with could have dealt with things better. I kept this file in a special folder, along with copies of the relevant emails on the subject.
Not only did this help in driving specific issues through later post-mortems, but even for my own study, when I was discussing these sorts of issues to drive process changes or improvements, it really helped that I could recall specific issues, the conclusions, and my feedback on whether specific improvements could be made. I also had the emails on hand to correct people if they claimed something different from what had actually happened (and you would not believe how many times that actually happened).
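As a concrete illustration, here is a minimal sketch of such an issue log kept as a simple append-only file; the file location and fields are hypothetical:

```python
from datetime import datetime
from pathlib import Path

# Hypothetical location of the issues file kept in a special folder.
LOG_FILE = Path("project-notes/issues-log.txt")

def log_issue(who: str, summary: str, email_ref: str = "") -> None:
    """Append a timestamped issue entry as it happens, for later use."""
    LOG_FILE.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    suffix = f" (see email: {email_ref})" if email_ref else ""
    with LOG_FILE.open("a") as f:
        f.write(f"[{stamp}] {who}: {summary}{suffix}\n")

log_issue("external team", "Delivery rejected after 2 days of testing",
          email_ref="rejection mail thread")
```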


Friday, July 26, 2013

Feeling that the team has peaked too early - done with defects earlier than usual

It is an odd situation to be in. You have a product schedule, and in this schedule, you finish your feature development work at a certain stage, and then reach a stage where the entire activity involves testing of the product. Defined in more detail, this is the classical waterfall development model. First there is the requirements phase, followed by the design phase where the requirements are converted into design and architecture, followed by the actual coding where the design is converted into code by the development team. This code is then tested by the quality team: defects are detected by the testing team, fixed by the development team, and re-tested, and so on, until defects have been detected to a high degree, at which point the product is deemed fit to release.
You can have different development methodologies, whether Waterfall, iterative, Scrum, or any other. In my experience, as more and more functionality gets coded and made available to the testing team, there comes a point in the schedule where all the functionality has been built and the team has the whole product available for testing. It is hard to visualize a schedule where new functionality keeps being added right till the last stages of the release.
As you have more and more parts of the product being available, you need time to do testing of the product with all the pieces connected, all the workflows being available and so on. You can have features being tested part by part as they are being built, but there will always be workflows which are not possible to test until all the pieces have been connected together.
So, in a typical development cycle, you estimate the time needed for the development phase vs. the time needed for testing post development, with a lot of this estimation based on historical records and information, and bake this into the schedule. Accordingly, you plan for the stage where fresh development activity has to come to an end, and build this into your schedule.
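A minimal sketch of one way to do this estimation, using entirely hypothetical numbers: take the historical split between development and testing time, and apply that ratio to the new cycle.

```python
# Hypothetical historical data from previous cycles' records.
historical_dev_days = 120
historical_test_days = 60
total_cycle_days = 150  # the schedule being planned

# Apply the historical dev:test ratio to the new cycle.
test_share = historical_test_days / (historical_dev_days + historical_test_days)
test_days = round(total_cycle_days * test_share)
dev_days = total_cycle_days - test_days
print(f"plan: {dev_days} days development, {test_days} days testing")
```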
Once you reach that stage, it means that your team has completed all the development activity for building the features needed in the product, and post this stage, you will only have testing and defect fixing. Seems fine, right? Unfortunately, it could be that you have done a much better job at code reviews, unit testing and other such activities, resulting in code of better quality than normal. This leads to an embarrassing situation where your defect timeline is actually projected to come to an end before the due date; and this is where the manager asks whether the team allocated too much time for testing and too little for feature development, since with these statistics, it is pretty clear that you could have done more feature work.
You can resolve this, although it is hard to squeeze in a new feature after you have completed testing; instead, you need to spend time reassuring the team and the manager that they have done a great job, and that it is no shame to complete the work before the actual timeline (although you do need to feed this entire situation into the post-mortem and into future planning). Further, there are always important features that you were not able to plan for the current cycle, so it would make sense to get a head start on such features after branching the code.


Thursday, July 25, 2013

Defect Management: Dividing overall defect reports into separate functional areas - Part 3

This is a series of posts where I look at the creation of a defect status report, in a way that provides enhanced information to the team and its managers and helps them make decisions. In the previous post (Count of new incoming defects), I talked about adding parameters to the defect report that let the team know whether the rate at which defects are being added daily will still allow them to reach their defect milestones by the required date. This kind of data, coming in on a regular daily cycle, helps the team decide whether their current defect trend is on the desired path, above it, or below it, and make the required decisions accordingly.
This post adds more detail to the defect report that provides useful information to the team. The last post talked about the flow of incoming defects that move to the ToFix state. However, there is another type of information that is relevant. Defects present in the system are not all owned by the development team. In addition to the defects in core code, there may be defects in the components used in the application. These defects are not attributable to the development team, but to the vendors or other groups that provide the components. A number of teams that I know track these defects in the defect database, but tagged as distinct from the defects owned by the core team.
The nature of defects filed against external components is different from those in the core code. Even though it does not matter to the customer whether a defect is in the core code or in an external component, the amount of coordination and communication effort required is entirely different from the defects that are with the core development team. If a defect is in a component not owned by the team, the timeline for fixing it may be longer and need persuasion; there may be a lot of back and forth between the tester and the external component team to reproduce the situation in which the defect occurs (which can include sending the environment in which the defect occurred to the external vendor - and this has its own costs and restrictions, since if the team is working on a new version of the software, there would be NDA and IP issues related to sending details of the environment to the external component team); and so on. Another concern is that even if such a defect is resolved, it might require a new version of the component, which carries its own extra cost of testing the component on its own to check whether it is fine or has introduced other issues.
As a result, the report needs to separate incoming defects by whether they belong to the core team or are attributable to people outside the team; if the proportion of defects outside the core team is increasing, it is a matter of concern, since resolving such defects typically takes much more effort and time.
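A minimal sketch of this split, using hypothetical defect records and a hypothetical alert threshold:

```python
# Hypothetical incoming defect records, tagged by owning side.
defects = [
    {"id": 101, "owner": "core"},
    {"id": 102, "owner": "external"},  # e.g. a vendor-supplied component
    {"id": 103, "owner": "core"},
    {"id": 104, "owner": "external"},
]

external = sum(1 for d in defects if d["owner"] == "external")
proportion = external / len(defects)
print(f"external-component defects: {external}/{len(defects)} ({proportion:.0%})")

# Hypothetical threshold at which the trend becomes a concern.
if proportion > 0.25:
    print("warning: rising share of defects outside the core team")
```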


Wednesday, July 24, 2013

Defect Management: Dividing overall defect reports into separate functional areas - Part 2

Part 1 (Dividing defect reports into ToFix, ToTest and ToDefer counts) of this series talked about the importance of defect management in a software project, and then got into some detail about regularly sending out a report on total defects, broken down into ToFix, ToTest and ToDefer stats and maintained on a graph over a period of time with daily updates, so that the team and the managers can figure out whether the team is on track to resolve these defects.
This post continues along this line, talking about additional items that can be added to this defect chart and its metrics to provide more information to the team and determine whether it is on the right track or not. Are all these metrics important? There are plenty of comments about not over-burdening people with too many statistics, and more comments about letting people do their work rather than sending so many statistics that they stop looking at them. However, it is also true that the role of the team managers is to look at the broader project status, and defect management is an important part of this. The team members should not be burdened with these figures, but for the team managers, it is critical to look at such data.
So, the team looks at the ongoing ToFix figures over a period of days and tries to determine whether it is on the right track. What else should you be capturing? Another metric that can be added to such a report is the number of defects that are still incoming. There are primarily 2 ways in which defects can be added to the developers' count:
- New defects that are logged against the development team, adding to their count and to the overall ToFix count
- Defects that have been rejected by the testing team after being marked fixed by the developer, because there is a problem in the fix (this can vary a lot among teams and even within a team - one developer could be fixing defects with hardly any returns while another is under pressure and has many defects returned because of problems). Tracking this statistic separately lets the team see whether it is suffering from a high rate of such returns.
Once you have these kinds of defect counts, it becomes easier to determine the current defect status and see whether the team is on the right track. You have a total count of open ToFix defects, and that count needs to decline at a certain rate to hit the deadlines. However, to hit such a deadline, the number of incoming defects also needs to fit into this strategy. If there is a large inflow of incoming defects, the ToFix count will not decrease at the rate the team needs to hit its targets, and the strategy then needs to change to determine whether the team will get there or not.
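A minimal sketch of this check with hypothetical numbers: the ToFix count only declines if the daily fix rate outpaces incoming defects (new defects plus fixes rejected by testing).

```python
# Hypothetical daily defect-flow numbers.
to_fix = 120          # current open ToFix count
fixes_per_day = 10
new_per_day = 4       # new incoming defects
rejected_per_day = 2  # fixes returned by the testing team

net_burn = fixes_per_day - new_per_day - rejected_per_day
if net_burn <= 0:
    print("ToFix count is not declining - the strategy needs to change")
else:
    print(f"projected days to clear ToFix: {to_fix / net_burn:.1f}")
```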


Tuesday, July 23, 2013

Defect Management: Dividing overall defect reports into separate functional areas - Part 1

Handling defects is one of the major efforts that plays an integral role in keeping a project schedule on track and making it successful. I have known multiple teams that did not have a good running estimate of their defect count and of the defect trend over the time remaining in the schedule; as a result, when the team got closer to the final stages, they found they had too many defects, which made the remaining part of the schedule very tight. An accurate reckoning of their status would have meant either deferring more defects, ending up with a product of lower quality, or extending the timeline / schedule, which has huge implications for the team and the many other teams involved in the release schedule of the product.
How do you avoid this? The first paragraph of this post points out a huge problem, but the answer cannot be covered in a single post; it can be summed up in a single cheesy phrase that provides no actual solutions - "You need to do Defect Management". Now, let us get down to the meat of this post, which takes up one specific aspect of defect management - sending out a split of the defect counts by different areas. This provides a better view of the defect picture to the team and helps the process of overall defect management.
We wanted a system whereby we could track the counts for each of the separate functional areas and yet give the management team access to this data on an ongoing basis. This also helped the separate functional teams target the defect counts of their respective areas and work towards reducing them. So, we took the overall open defect data for the product and split it into the following areas:
Open defects:
ToFix (these are primarily defects owned by the development team, although there can be defects carried by other teams - such as defects in components supplied by external teams)
ToTest (these are primarily defects owned by the testing team, although since anybody can file a defect within the team, there may be people other than the testing team who own a defect)
ToDefer (the exact terminology of these defects can be different across organizations; but these are typically defects that are with a defect review committee for evaluation. These can be significant defects that need evaluation by the committee before they are to be fixed, or these can be defects that are not worthy of fixing but the team wants the committee to take a final call, and so on).
What is the purpose of sending out separate stats on a regular basis? This data, plotted on a graph over a period of time, provides a large amount of information. The team and the managers, although they are in the thick of things, sometimes need to see this kind of aggregate information to make a good decision. For example, if the team is in the second part of the cycle and close to the timeline, and yet the ToFix graph does not show a declining trend, that is something to worry about. When this happens, I have seen the development team manager hold a serious discussion with the team to figure out what is happening and how to reduce these counts. In extreme cases, I have seen the team take a hard look at these defect counts and then make a recommendation for extending the schedule (which is not a simple step to take).
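A minimal sketch of this bucketing, with hypothetical defect records and states:

```python
from collections import Counter

# Hypothetical open defects, each carrying its current state.
open_defects = [
    {"id": 1, "state": "ToFix"},
    {"id": 2, "state": "ToTest"},
    {"id": 3, "state": "ToFix"},
    {"id": 4, "state": "ToDefer"},  # waiting on the review committee
]

counts = Counter(d["state"] for d in open_defects)
for state in ("ToFix", "ToTest", "ToDefer"):
    print(f"{state}: {counts.get(state, 0)}")
# Plotted day over day, these counts give the trend lines discussed above.
```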


Sending a weekly list of tasks for the team at the beginning of the week

As a part of project management, it is very important to ensure that the team has adequate knowledge of the tasks coming up in the near future. If you maintain an updated task schedule, it will contain an updated list of tasks for each team member, and this should be easily accessible to the team members. It is important that the team members be given access to the schedule and task tracking tool with the required level of access rights. If they do not have access to the tool, a lot of pressure falls on the project manager, who must control the tool and also be available whenever a team member needs something from it, whether viewing a certain detail or modifying one.
However, it is also a reality that in many cases, especially when you have a team that is not so mature, there is resistance to accessing the tool (and this may be justified to some degree; I have seen many tools that pack a high amount of functionality into the UI on the assumption that it will primarily be accessed by the project manager, and that can be fairly cumbersome for a developer or tester to update). As a result, you end up with a situation where the team members have not really looked at the details of their upcoming tasks and there are delays (and this would be almost comical if it were not serious for the schedule - once, when I asked a couple of team members why they did not check the schedule in the tool to review their upcoming tasks, they put the blame on me for having a horrible tool and for not informing them about the tasks in their name).
Based on all this, and after discussion with the team managers, we felt the need to actually send out a communication of the tasks coming up for team members in the following week. So, we worked on setting up a report (or rather, 2 reports) which provided the following details:
- A full list of the upcoming tasks for the week across all the team members. This would list the tasks per day, along with the team member and the estimated days. This list was also printed and put up on a large board next to the team's physical location where they could see it at all times. We also encouraged team members to run a line through a task that was completed, which was a good way of letting the team see that tasks were getting done.
- In addition to this, the other report worked at the individual level. It listed the ongoing tasks for the person, the upcoming tasks for the week, and the tasks expected to be completed in the upcoming week. This report was more detailed, and it contained links that would take team members (via the login process) to the section of the tool where they could update their tasks. This particular report was also tweaked so that team members could log into the report site and customize it - one team member typically broke his tasks into 2-day chunks and set the report to come to him once every 2 days, which made the report very useful for him. A sketch of both reports follows below.
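Here is a minimal sketch of the two reports, generated from hypothetical task records:

```python
from collections import defaultdict

# Hypothetical task records pulled from the task tracking tool.
tasks = [
    {"owner": "asha", "day": "Mon", "task": "Implement export dialog", "est_days": 2},
    {"owner": "ravi", "day": "Mon", "task": "Test import workflows", "est_days": 1},
    {"owner": "asha", "day": "Wed", "task": "Fix defect backlog", "est_days": 2},
]

# Report 1: the team-wide weekly list (the one printed for the board).
print("Team tasks for the week:")
for t in tasks:
    print(f"  {t['day']}: {t['owner']} - {t['task']} ({t['est_days']}d)")

# Report 2: the individual view, one section per team member.
by_owner = defaultdict(list)
for t in tasks:
    by_owner[t["owner"]].append(t)
for owner, items in by_owner.items():
    print(f"\nTasks for {owner}:")
    for t in items:
        print(f"  {t['day']}: {t['task']} ({t['est_days']}d)")
```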


Sunday, July 21, 2013

Comparison between Virtual Circuit and Datagram subnets

Difference #1:
- In virtual circuits, packets carry a short circuit number rather than the full destination address.
- This reduces the memory and bandwidth required.
- This also makes virtual circuits cheaper.
- Datagrams, on the other hand, have to contain the full destination address rather than a short circuit number.
- This causes significant per-packet overhead in datagram subnets.
- It also wastes bandwidth.
- In this respect, datagram subnets are more costly than virtual circuits.

Difference #2:
- A setup phase is required for virtual circuits.
- Establishing a connection takes time and resources.
- A datagram subnet, in contrast, does not require a setup phase.
- Hence, no resources are spent on connection establishment.

Difference #3:
- In virtual circuits, routers use the circuit numbers carried in packets for indexing.
- These numbers are looked up in a table to find the packet's outgoing line.
- This procedure is quite simple compared with the one used in datagram subnets.
- In datagram subnets, determining where to send a packet requires a more complex lookup on the full destination address (a small sketch contrasting the two lookups follows below).
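To make the contrast concrete, here is a minimal sketch of the two lookups; the tables, line names and addresses are hypothetical, and the datagram side uses a simple longest-prefix match.

```python
import ipaddress

# Virtual circuit: the packet carries only a small circuit number, and the
# router indexes a table keyed on (incoming line, circuit number).
vc_table = {
    ("line0", 7): ("line2", 12),  # -> (outgoing line, outgoing circuit no.)
}

def forward_vc(in_line: str, circuit_no: int):
    return vc_table[(in_line, circuit_no)]

# Datagram: the packet carries the full destination address, and the
# router must search for the most specific matching prefix.
routing_table = {
    "10.0.0.0/8": "line1",
    "10.1.0.0/16": "line2",
}

def forward_datagram(dest_ip: str):
    addr = ipaddress.ip_address(dest_ip)
    best = None
    for prefix, line in routing_table.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, line)  # keep the longest (most specific) prefix
    return best[1] if best else None

print(forward_vc("line0", 7))        # ('line2', 12)
print(forward_datagram("10.1.2.3"))  # line2 - the /16 wins over the /8
```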

Difference #4:
- Virtual circuits allow resources to be reserved in advance, when the connection is established.
- This has the great advantage that congestion is avoided in the subnet.
- In datagram subnets, however, it is quite difficult to avoid congestion.

Difference #5:
- If a router crashes, it loses its memory.
- Even if it comes back up after some time, all the virtual circuits passing through it must be aborted.
- This is not a major problem in datagram subnets.
- Here, if a router crashes, the only packets that suffer are those queued in that router at that instant.

Difference #6:
- Virtual circuits can vanish as a result of a fault or loss on the communication line.
- In datagram subnets, it is comparatively easy to compensate for a fault or loss on the communication line.

Difference #7:
- In virtual circuits there is one more cause of traffic congestion.
- This cause is the use of fixed routes for transmitting packets throughout the network.
- This also leads to the problem of unbalanced traffic.
- In datagram subnets, the routers are responsible for balancing traffic over the entire network.
- This is possible because routes are allowed to change halfway through a connection.

Difference #8:
- Virtual circuits are one way of implementing connection-oriented services.
- For various types of datagram subnets, a number of protocols are defined around the Internet Protocol.
- The Internet Protocol (IP) provides the datagram service at the internet layer.
- In contrast with virtual circuits, datagram subnets provide a connectionless service.
- It is a best-effort message delivery service, and at the same time unreliable.
- A number of higher-level protocols, such as TCP, depend on the datagram service of the Internet Protocol.
- These add the required functionality on top.
- The datagram service of IP is also used by UDP.
- The fragments of a datagram may be referred to as data packets.
- IP and UDP both provide unreliable services, which is why both are termed datagram protocols.
- The units of TCP are referred to as TCP segments to distinguish them from datagrams.


Processes of the last few days and weeks of a product release schedule ..

As part of a normal product cycle, the last few days and weeks are the ending points of all the hard work, frustration and rewards of the schedule. It is a critical time period in the schedule with the following properties:
- The team is doing their final set of testing / verification. Towards this end, the entire set of test plans and cases would not be run, since that could take more time than is available. For shorter time periods and quick verification that everything is running fine, a subset of the test cases is kept ready to execute. If automation is available, the automated tests should be running on a regular basis.
- Any changes to code or defect fixes are monitored very closely and thoroughly to ensure that there is no risk from these changes to the code. In some cases, when time in the schedule is short, only defects deemed very high severity will be fixed, and only when the impact of the changes can be safely evaluated.
- All the release documents, including the Help file and release notes, are ready and reviewed.
- External testers / pre-release testers have been using the software for some time, all of them have verified that they have been using the product, and a majority of them have also deemed the software ready for release. This can be critical: when the software is near the release period, people not belonging to the team, who are an approximation of the customer base of the product, should be comfortable with the quality level of the product and its set of features.
- In many cases, during the testing process, features that require connection to an online server are connected to a test or staging server rather than the live server. During the last few days, these connections are switched to the live server and the product is tested against it rather than the staging server. There is a possibility that this kind of change may bring some instability into the system, and hence the changes need to happen before the release date (at least a few days before).
- When the product also needs to be released on DVD and on the product web store, a final verification is needed on those systems to ensure that everything is working fine (which includes generating actual DVDs and installing from them to verify that everything works as it should).
- Defect stats are monitored very closely to ensure that there are no surprises at that end.


Saturday, July 20, 2013

What are datagram subnets?

- A datagram is defined as the basic transfer unit used in networks that operate using packet switching.
- In such networks, the time of arrival and delivery is not guaranteed.
- Also, the network services do not guarantee whether delivery will be in order or not.
- The first project to use datagrams was CYCLADES, which was also a packet switching network.
- The hosts in this network were responsible for reliable delivery, rather than relying on the network to provide it.
- They did this using the datagrams, which are themselves unreliable, combined with end-to-end protocol mechanisms.
- According to Louis Pouzin, the inspiration for datagrams came from two sources, namely Donald Davies's studies and the simplicity of the idea.
- The concept of the datagram subnet was eventually adopted in the formulation of protocols such as AppleTalk, Xerox Network Systems, and of course the Internet Protocol.
- Datagram-like units are used at the first 4 layers of the OSI model.
- Each layer has its own name for these units, as mentioned below:
  1. Layer 1: chip (CDMA)
  2. Layer 2: frames (IEEE 802.3 and IEEE 802.11), cell (ATM)
  3. Layer 3: data packet
  4. Layer 4: data segment
- A datagram is a data packet that is self-reliant.
- This means it does not rely on any earlier exchanges, since there is no fixed connection between the two points of communication, unlike in a majority of telephone conversations.
- Virtual circuits and datagram subnets are opposite approaches.

RFC 1594 defines a datagram as an independent, self-contained data entity that carries sufficient information to be routed from source to destination without relying on the transporting network or on earlier exchanges between the two hosts.

- The services offered by datagram subnets can be compared to mail delivery services.
- This is because the user needs to mention only the destination address.
- However, this service gives no guarantee of whether the datagram will be delivered, and also provides no confirmation upon successful delivery of the packet.
- These are, of course, two major disadvantages of datagram subnets.

- In datagram subnets, the datagrams or data packets are routed along a route that is worked out on the fly.
- In datagram subnets the routes are not predetermined.
- This, again, has its disadvantages.
- Also, the order in which the datagrams are sent or received is not preserved.
- In some cases, a number of datagrams with the same destination may travel along different routes.

- There are two components of every datagram, namely the header and the data payload.
- The former consists of all the information sufficient for routing from the source to the destination without depending on prior exchanges between the network and the equipment.
- The source as well as the destination address may be included in the header as fields.
- The data to be transmitted is stored in the latter part of the datagram.
- In some cases the data payload may be nested inside a tagged header.
- This process is commonly known as encapsulation.
- There are various types of datagrams for which standards are defined by the Internet Protocol (IP).
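A minimal sketch of this header-plus-payload structure and the encapsulation step, using a toy header format (not a real IP header) with hypothetical 4-byte addresses and a length field:

```python
import struct

# Toy header: source address, destination address, payload length,
# packed in network byte order.
HEADER_FMT = "!4s4sH"

def encapsulate(src: bytes, dst: bytes, payload: bytes) -> bytes:
    """Prepend the header to the payload - the encapsulation step."""
    return struct.pack(HEADER_FMT, src, dst, len(payload)) + payload

def decapsulate(datagram: bytes):
    """Split a datagram back into header fields and payload."""
    hdr_size = struct.calcsize(HEADER_FMT)
    src, dst, length = struct.unpack(HEADER_FMT, datagram[:hdr_size])
    return src, dst, datagram[hdr_size:hdr_size + length]

dg = encapsulate(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", b"hello")
print(decapsulate(dg))  # the header fields plus the original payload
```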


Friday, July 19, 2013

What are the goals and properties of a routing algorithm?

Routing requires the use of routing algorithms for the construction of the routing tables.
A number of routing algorithms are available today, such as:
1. Distance vector algorithm (Bellman-Ford algorithm)
2. Link state algorithm
3. Optimized link state routing algorithm (OLSR)
- In a number of networked applications, there are many nodes which need to communicate with each other via communication channels.
- A few examples of such applications are telecommunication networks (such as POTS/PSTN, the internet, mobile phone networks, and local area networks), distributed applications, multiprocessor computers, etc.
- All nodes cannot be connected to each other, since doing so would require many high powered transceivers, wires and cables.
- Therefore, the implementation is such that the transmissions of nodes are forwarded by other nodes till the data reaches its correct destination.
- Thus, routing is the process of determining where packets have to be forwarded, and then doing so.

Properties of Routing Algorithm
- The packets must reach their destination if there are no factors preventing this such as congestion.
- The transmission of data should be quick.
- There should be high efficiency in the data transfer.
- All the computations involved must not be long. They should be as easy and quick as possible.
- The routing algorithm must be capable of adapting to two factors, i.e., changing load and changes in topology (this includes new channels as well as deleted ones).
- All the different users must be treated fairly by the routing algorithm.
The second and the third properties can be achieved using fastest or the shortest route algorithms. 
- Graphical representation of the network is a crucial part of the routing process.
- Each network node is represented by a vertex in the graph whereas an edge represents a connection or a link between the two nodes. 
- The cost of each link is represented as the weight of the edge in the graph. 
- There are 3 typical weight functions, as mentioned below:
1. Minimum hops: the weight of every edge in the graph is the same.
2. Shortest path: each edge has a fixed non-negative weight (such as its length).
3. Minimum delay: the weight of every edge depends upon the traffic on its link and is a non-negative value.
However, in real networks the weights are always positive. A small sketch of this representation follows below.
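A minimal sketch of the graph representation and the first weight function, with a hypothetical topology:

```python
# Each vertex is a network node; each edge weight is the cost of the link.
graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 2.0, "D": 5.0},
    "C": {"A": 4.0, "B": 2.0, "D": 1.0},
    "D": {"B": 5.0, "C": 1.0},
}

# Minimum hops: every edge gets the same weight, so the cheapest route
# is simply the one with the fewest links.
min_hops = {u: {v: 1.0 for v in nbrs} for u, nbrs in graph.items()}

# Minimum delay would instead use measured per-link delays as weights,
# updated as the traffic on each link changes.
print(min_hops["A"])  # {'B': 1.0, 'C': 1.0}
```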

Goals of Routing Algorithms
- The goal of these routing algorithms is to find the shortest path, based upon some specified cost relationship, that will result in the maximum routing efficiency.
- Another goal is to use as little information as possible.
- A goal of the routing algorithm is also to keep the routing tables updated with alternative paths, so that if one fails, another can be used.
- The channel or path that fails is removed from the table.
- Routing algorithms need to be stable in order to provide meaningful results, but at the same time it is quite difficult to detect the stable state of an algorithm.
- Choosing a routing algorithm is like choosing different horses for different courses.
- The frequency of changes in the network is one thing to be considered.
- Other things to be considered include the cost function that needs to be minimized and whether the routing tables are calculated in a centralized fashion.
- For static networks the routing tables are fixed, and therefore only simple routing algorithms are required for their calculation.
- On the other hand, networks that are dynamic in nature require distributed routing algorithms, which are of course more complex.



Impact of any change in the final schedule date

A product development cycle can be fairly chaotic. When you hold a finished product in your hands or have it installed on your machine, the product looks great, and you would not even think about the intense passion and drama that went into bringing out such a release. Most teams start out believing that this cycle of the product release will be simple and carry a lower amount of tension than the previous one. However, it has been my experience that a large number of such releases are incredibly full of tension; you enjoy bringing out a product that is liked and purchased by a large number of users all over the world, but the time spent working on the product cycle can be both enjoyable and frustrating.
One of the most tension-filled times is the last week and last days of the schedule; you are this close to making the release date, you wonder whether anything was forgotten, and you pray to whoever you hold dear and holy that no major defects come up in the last few days of the release cycle that could impact the quality or the schedule of the release. In most cases, changing the release date of the product is next to impossible given the number of items that get shaken up. What happens when you have to change the release date:
- All media communication is thrown off, and it can end up as a PR disaster, worrying people about the quality of the product
- There is a huge impact on the confidence of senior management in the team's management, especially when the schedule impact comes up suddenly
- There is a revenue impact, since the finance sheets would already include a calculation of the revenue that will come in; any change in this date will cause a shortfall in the revenue sheet. If this is a major product, it could actually spiral all the way to a change in the earnings of the organization.
- There are many processes already set in motion, such as DVD production, retail channels, and delivery to other partners, and all of these are impacted. For example, if your product is planned to be loaded as part of the installed software on OEM machines such as the desktops or laptops of HP, Sony, Dell, etc., there will be hell to pay. These partners have finely tuned schedules, and it will take some major negotiation for those schedules to be reset.
- The team would already be high-strung because it is near the end of the cycle. They are expecting a period of rest and low tension for some time after the release, and if the release is delayed, this can cause morale issues within the team and continued tension.
- If you are using partners and vendors along with the core team, any delay of the schedule will also require more time from these teams. For example, vendors who provide language translation and review services can be pretty expensive, and any schedule delay will need to factor in these costs.

For these reasons, and more like them, the team always needs to be on the ball. It would be really bad management and estimation if something happened in the last few weeks to force the schedule to change. There can always be legitimate reasons for a schedule change, but these should be known well in advance so that preparations for all the impacts listed above can take place.


Thursday, July 18, 2013

What is a routing algorithm in network layer?

About Routing
- The process of selecting the path in the network along which data and network traffic will be sent is termed routing.
- Routing is carried out in many kinds of networks, such as transportation networks, telephone networks (circuit switching), and electronic data networks (for example, the internet).
- The main purpose of routing is to direct packet forwarding from the source to its destination via intermediate nodes.
- These nodes are hardware devices, namely gateways, bridges, switches, firewalls, routers and so on.
- A general purpose system which does not have any of these specialized routing components can also participate in routing, but only to a limited extent.

But how does a router know where packets have to be routed?
- This information about destination addresses is found in a table called the routing table, which is stored in the memory of the routers.
- These tables store routes to a number of destinations over the network.
- Therefore, construction of the routing tables is an important part of an efficient routing process.
- Routing algorithms are used to construct this table and to select the optimal path or route to a particular destination.

- A majority of the routing algorithms are based on single path routing techniques while few others use multi-path routing techniques. 
- This allows for the use of other alternative paths if one is not available. 
- In some, the algorithm may discover equal or overlapping routes. 
- In such cases the following 3 basis are considered for deciding up on which route is to be used:
  1. Administrative distance: This basis is valid when different routing protocols are being used. It prefers a lower distance.
  2. Metric: This basis is valid when only one routing protocol is being used throughout the networks. It prefers a low cost route.
  3. Prefix-length: This basis does not depend on whether the same protocol is being used or many different protocols are involved. It prefers longer subnet masks.
Types of Routing Algorithms

Distance Vector Algorithms: 
- In these algorithms, the basic algorithm used is the Bellman-Ford algorithm.
- In this approach, a cost number is assigned to each of the links that exist between the nodes of a network.
- Information is sent from point A to point B along the route that results in the lowest total cost.
- The total cost is the sum of the costs of all the individual links in the route.
- The manner of operation of this algorithm is quite simple.
- Each node checks which of its immediate neighboring nodes can be reached with the minimum cost and proceeds from there; a small sketch follows below.
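Here is a minimal Bellman-Ford sketch over a hypothetical weighted graph; each round relaxes every edge, so after enough rounds each node holds the lowest-cost distance from the source:

```python
def bellman_ford(graph, source):
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    for _ in range(len(graph) - 1):        # at most |V| - 1 relaxation rounds
        for u, nbrs in graph.items():
            for v, w in nbrs.items():
                if dist[u] + w < dist[v]:  # a cheaper route via u was found
                    dist[v] = dist[u] + w
    return dist

graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 2.0, "D": 5.0},
    "C": {"A": 4.0, "B": 2.0, "D": 1.0},
    "D": {"B": 5.0, "C": 1.0},
}
print(bellman_ford(graph, "A"))  # {'A': 0.0, 'B': 1.0, 'C': 3.0, 'D': 4.0}
```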

Link-state Algorithms: 
- This algorithm works on a graph map of the network, which is supplied as input to it.
- To produce this map, each node assembles information about which other nodes it can connect to in the network.
- The router can then itself determine which path has the lowest cost and proceed accordingly.
- The path is selected using standard path selection algorithms such as Dijkstra's algorithm (sketched below).
- This algorithm results in a tree whose root is the current node.
- This tree is then used for the construction of the routing tables.
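A minimal Dijkstra sketch, reusing the same kind of hypothetical graph; the parent map it returns encodes the shortest-path tree rooted at the current node:

```python
import heapq

def dijkstra(graph, source):
    dist = {source: 0.0}
    parent = {}                     # child -> parent edges form the tree
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                # stale entry; u already settled cheaper
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (d + w, v))
    return dist, parent

graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 2.0, "D": 5.0},
    "C": {"A": 4.0, "B": 2.0, "D": 1.0},
    "D": {"B": 5.0, "C": 1.0},
}
dist, tree = dijkstra(graph, "A")
print(dist)  # lowest cost to each node
print(tree)  # shortest-path tree used to build the routing table
```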

Optimized link state Routing Algorithm: 
- This is the algorithm that has been optimized for use in mobile ad-hoc networks.
- The algorithm is usually abbreviated to OLSR (optimized link state routing).
- This algorithm is proactive and makes use of topology control messages for discovering and disseminating link-state information through the mobile ad-hoc network.


Infrastructure: Need to do a frequent clean up of old builds from servers ..

This is a constant problem that organizations with large products face. If your product has a size of around 500 MB or more, and you are in the development phase, the chances are quite high that you will be generating a build every day. The build contains new fixes and code changes, so getting a new build every day ensures a quick turnaround in terms of defect closure, and ensures that new features get into the hands of testers almost as soon as the developers are done with them.
However, there is an infrastructure issue involved in generating so many builds. I am talking about cases where the typical release cycle of such a product is more than a few months. During this period, the team will generate a number of builds that need to be hosted on servers so that they are accessible to the team members (and they may need to be transferred to additional servers if the team has members in different geographical locations, or if there are vendors in a different location who can only be given access to a server outside of the main one that is accessible only to employees). In addition, there can be extra builds that are required (for example, the same application may have a different structure for the DVD release vs. the release on the company online store, and if there is distribution to vendors, there might be yet another version). In some cases, the different language versions of the product might need to be separate builds, which further increases storage requirements.
All of this places a lot of constraints on infrastructure. Central servers are typically set up in a RAID kind of configuration, which means that the space needed is actually much more. Server hard disk capacity is cheap, but you can be pretty sure that at some point, unless you do some optimization, the additional capacity required on a regular basis starts becoming costly, not only in terms of equipment but also in terms of the staff needed to maintain this capacity, as well as making it more difficult to find what you want. It always makes sense to optimize the storage needs of the product in terms of builds, since if a build is not being used, storing it is unnecessary.
An initial thought might be that only recent builds need to be stored, but that is an oversimplification. There might be defects that were filed some time back, and for the purpose of evaluating those defects, the builds on which they were found need to be accessible. Further, during the process of coding, errors can be introduced into the code but not detected for some time (weeks or even months). Even though the change can be found by doing a differential in the source code repository, it may be necessary to test the build in which the code change first appeared to see what the change caused. There can be numerous such reasons why a specific build is needed at some point in the future, and hence there needs to be a defined process that lets the team control which builds get deleted from the server, leading to an optimization of the server space. Here are some points that could help in this:
- If there are builds from an earlier cycle, it is probable that those builds are no longer necessary. It might be enough to retain only the builds that were of significant interest in terms of milestones.
- If a build had a problem, in terms of either not launching or being rejected for the purpose of testing, it need not be retained and can be deleted
- When builds are older than a few months, the team can decide on a policy to check whether such builds can be deleted or not, and so on (a sketch of such a policy check follows below).
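Here is a minimal sketch of such a cleanup check; the paths, age cutoff, and milestone markers are all hypothetical policy choices:

```python
import time
from pathlib import Path

BUILD_ROOT = Path("/servers/builds/productX")    # hypothetical location
MAX_AGE_DAYS = 90                                # hypothetical cutoff
KEEP_MARKERS = {"milestone.txt", "release.txt"}  # builds to always keep

def deletable_builds(root: Path, max_age_days: int):
    """Yield build directories old enough to delete, skipping milestones."""
    cutoff = time.time() - max_age_days * 86400
    for build_dir in root.iterdir():
        if not build_dir.is_dir():
            continue
        is_milestone = any((build_dir / m).exists() for m in KEEP_MARKERS)
        if build_dir.stat().st_mtime < cutoff and not is_milestone:
            yield build_dir  # a candidate, still subject to team review

for d in deletable_builds(BUILD_ROOT, MAX_AGE_DAYS):
    print("candidate for deletion:", d)
```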


Wednesday, July 17, 2013

What are network layer design issues?

- The network layer, i.e., the third layer of the OSI model, is responsible for facilitating the exchange of individual pieces of information or data between hosts over the network.
- This exchange only takes place between end devices that are identified.
- For accomplishing this task, 4 processes are used by the network layer:
  1. Addressing
  2. Encapsulation
  3. Routing
  4. Decapsulation
In this article we focus upon the design issues of the network layer.

- For accomplishing this task, the network layer also needs to have knowledge of the communication subnet's topology and to select appropriate routes through it.
- Another thing that the network layer needs to take care of is to select routes that do not overload some routers and communication lines while leaving other lines and routers in an idle state.

Below are some of the major issues in network layer design:
  1. Services provided to layer 4, i.e., the transport layer.
  2. Implementation of services that are connection-oriented.
  3. Store-and-forward packet switching.
  4. Implementation of services that are connectionless.
  5. Comparison of datagram subnets and virtual circuits.
- The sender host sends the packet to the router nearest to it, either over a point-to-point carrier link or a LAN.
- The packet is stored until it has completely arrived, so that its checksum can be verified.
- Once verified, the packet is transmitted to the next intermediate router.
- This process continues till the packet reaches its destination.
- This mechanism is termed store-and-forward packet switching (a small sketch follows below).
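A minimal sketch of store-and-forward with checksum verification, using CRC32 as a stand-in checksum and a hypothetical next-hop callback:

```python
import zlib

def make_packet(payload: bytes) -> bytes:
    """Append a 4-byte CRC32 checksum to the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def store_and_forward(packet: bytes, next_hop) -> None:
    payload, checksum = packet[:-4], packet[-4:]  # whole packet buffered first
    if zlib.crc32(payload) != int.from_bytes(checksum, "big"):
        return                                    # corrupted: drop the packet
    next_hop(packet)                              # verified: forward it on

store_and_forward(make_packet(b"hello"),
                  next_hop=lambda p: print("forwarded:", p))
```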

The services provided to the transport layer are designed based upon the following goals:
  1. They should be independent of the router technology.
  2. The transport layer must be shielded from the type, number and topology of the routers.
  3. The network addresses made available to the transport layer must use a uniform numbering plan, irrespective of whether it is a LAN or a WAN.
Based upon the type of service offered, two different organizations are possible.

Offered service is connectionless:
- The packets are injected individually into the subnet, and the routing of the packets is done independently of each other.
- It does not require any advance setup.
- The subnet is referred to as a datagram subnet and the packets are called datagrams.

Offered service is connection-oriented: 
- In this case the route between the source and the destination must be established prior to the beginning of the transmission of packets.
- Here, the connection is termed a virtual circuit, and the subnet a "virtual circuit subnet" or simply VC subnet.

- Having to choose a new route for every packet is what we want to avoid, and this is the basic idea behind the use of virtual circuits.
- Whenever we establish a connection, a route has to be selected from source to destination.
- This is counted as part of the connection setup.
- This route is saved in tables managed by the routers and is then used by all the traffic flowing over the connection.
- On the release of the connection, the VC is automatically terminated.
- In the case of connection-oriented service, each packet contains an identifier telling which virtual circuit it belongs to.

- In a datagram subnet, circuit setup is not required, whereas it is required in a VC subnet.
- Routers in a datagram subnet hold no per-connection state, whereas in a VC subnet, router table space is required for each connection.

