Tuesday, April 30, 2013

What is a hard disk and what is its purpose?

- HDD, or Hard Disk Drive, is a device for data storage. 
- It is used for storing and retrieving the digital information that is stored on it. 
- The data is stored and retrieved by means of rapidly rotating discs. 

Hard Disk and its Purpose

- These discs are known as platters and are coated with magnetic material. 
- The major characteristic, and benefit, of hard disk drives is that they retain data even when the power supply is switched off. 
- Data on a hard disk can be read in a random-access manner. 
- This means that individual blocks of data can be stored and retrieved either sequentially or in any order the user likes. 
- A hard disk may consist of one or more rigid platters. 
- Magnetic heads, mounted on a continuously moving actuator arm, read and write data on the surfaces of these rotating discs. 
- IBM introduced the first hard disk in 1956. 
- Hard disk drives have been the dominant secondary storage device for computers since the 1960s. 
- Since then, they have been continuously improved. 
- HDD units have been produced by more than 200 companies; among the most prominent developers are Toshiba, Seagate and Western Digital. 

HDD’s primary characteristics are:
Ø  Capacity and
Ø  Performance
- The former is specified in terms of unit prefixes (gigabytes, terabytes and so on). 
- In some systems, part of the capacity of the hard disk drive may be unavailable to the user because it is used by the operating system and the file system, or reserved for redundancy.
- The latter is specified in terms of the time taken to move the heads to a file (the average access time), plus the time taken for the file to rotate under the head (the average latency), plus the data rate. 
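As a rough illustration of how these figures combine, the average rotational latency is half a revolution, so it can be derived from the spindle speed. The 9 ms seek time below is an assumed, typical desktop-class figure, not a quoted spec for any real drive:

```python
# Rough estimate of average access time for a hypothetical 7200 RPM drive.
rpm = 7200
avg_seek_ms = 9.0                                # assumed typical seek time
avg_rotational_latency_ms = 0.5 * 60_000 / rpm   # half a revolution, in ms
avg_access_ms = avg_seek_ms + avg_rotational_latency_ms

print(f"average rotational latency: {avg_rotational_latency_ms:.2f} ms")  # ~4.17 ms
print(f"average access time: {avg_access_ms:.2f} ms")                     # ~13.17 ms
```

Faster spindles (10,000 or 15,000 RPM) shrink only the latency term, which is why seek time usually dominates.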

HDDs are available in two common form factors, namely:
Ø  3.5 inch for desktop computers
Ø  2.5 inch for laptops

HDDs might be connected to the system by any of the following standard interface cables:
Ø  Serial ATA or SATA cable
Ø  USB cable
Ø  Serial attached SCSI or SAS cable

- In 2012, flash memory emerged as a serious competitor to hard disk drives. 
- These flash memory devices are sold as solid state drives, or SSDs. 
- However, HDDs continue to dominate secondary storage because of their advantages in price per unit of storage and recording capacity. 

- But the scenario is different in the case of portable electronics.
- Here, flash drives are considered more useful than rotating HDDs because the durability and physical size of the drive matter more than price and capacity.
- HDDs use magnetic recording technology, where data is recorded by magnetizing a thin film of material, typically ferromagnetic, on a disk. 
- The binary data bits are represented by sequential changes in the direction of the magnetization.
- An encoding scheme is used for encoding the user data. 
- An example of such an encoding scheme is run-length limited (RLL) encoding. 
- It is these schemes that determine how the magnetic transitions represent the data.
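To make the transition idea concrete, here is a minimal, purely illustrative sketch of NRZI-style encoding, a simpler relative of RLL: a 1 bit is represented by a change in magnetization direction, a 0 bit by no change.

```python
def nrzi_encode(bits):
    """NRZI: a 1 bit flips the magnetization direction, a 0 bit keeps it."""
    level = 0
    out = []
    for b in bits:
        if b == 1:
            level ^= 1          # a transition represents a 1
        out.append(level)
    return out

def nrzi_decode(levels, initial=0):
    """A change between successive levels decodes to 1, no change to 0."""
    prev = initial
    bits = []
    for lv in levels:
        bits.append(1 if lv != prev else 0)
        prev = lv
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
assert nrzi_decode(nrzi_encode(data)) == data   # round-trips cleanly
```

Real RLL codes add constraints on how many 0s may appear between transitions, which is what lets the drive keep its read clock synchronized.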

The latest HDD technologies are:
Ø  Shingled write
Ø  CPP/GMR heads
Ø  Heat-assisted magnetic recording
Ø  Bit-patterned recording

Defect handling - Planning how to find big bugs earlier in the cycle

In every cycle of software development, whether it be product development or working on a project, one of the key items is finding defects. There can be small defects or big defects, but the plan is always to try to fix these defects. Smaller defects can sometimes be easier to handle, since they have a lesser impact on the customers, and in several cases it would even be easy to defer some of these bugs if they are low severity and there is a squeeze on time and resources. However, larger bugs, those that have an impact on functionality or workflows, are harder, as are the ones that are complex or need more time to fix. These are the sort of bugs / defects that a team would find hard to defer or leave alone, and they can be critical to fix.
One of the biggest problems with such defects is the time period in which many of these defects are found. Defects which have a high impact are typically found in the latter part of the cycle, primarily because a lot of the functionality comes to be ready in the latter half of the cycle. If there are a number of features in development, the earlier parts of the cycle see the development of these features. As time progresses, these features start getting into a good shape and the integration points of these features start getting worked on. This is the time when the workflows of the product start coming into shape, and that is when the testing team will be able to check the integration of these features into one another.
The testing effort at this stage is able to detect workflow problems, as well as design flaws where the type and amount of information flowing from one feature to another has issues and is not happening as per design. These defects take more time to analyse, and may also need teams from the different feature areas to collaborate to figure them out. As a result, the time involved in fixing these defects is greater, and this gets problematic when the number of such defects found is more than expected, which means the time the team has to fix issues may be less. Further, the later such defects are found, the greater the risk that the fix can cause other problems; because of such risks, defects found later are at a greater risk of not being fixed.
What do you do ? Some of the issues may be more problematic to solve. Feature development work focuses on the specific features in the earlier part of the cycle, with the focus shifting to integration only later in the cycle, so changing the timelines for this may be difficult. But it is possible to do studies to estimate the number of bugs that will be found (far easier if this is just a new version of the product being developed and there is historic data), and then plan for more time for the same. At the same time, a number of problems are typically found when there is inadequate time spent during the design and architecture phase, and teams should ensure that they are spending the right amount of time on these activities. Further, as the feature is being developed, workflow or integration related flows should be tested, even if integration has not been completed. One example is to prepare a software harness which will allow the input and output of data from the various features even if the integration has not been done. Doing this ensures that a number of the defects that would otherwise be found post integration can be found earlier in the cycle, and this saves a lot of time and effort.

Monday, April 29, 2013

What is cache memory?

Cache memory is a small, fast memory for computers that speeds them up considerably. 
- In cache memory, data is stored transparently so that future requests can be served faster. 
- A cache might store values that have already been computed, or duplicates of values stored elsewhere in memory. 
- Whenever some data is requested, it is first looked up in the cache memory. 
- If the data is found there, it is returned to the processor; this is called a ‘cache hit’. 
- In this case the time taken for accessing the data is reduced. 
- This access is thus faster than access to the main memory. 
- The other case is a ‘cache miss’, when the required data is not found in the cache.
- Then the data has to be fetched or computed from its original source or storage location, which is obviously slower. 
- The overall performance of the system improves with the proportion of requests that can be served from the cache memory.
- In order to maintain cost efficiency as well as efficiency in data usage, the size of the cache is kept relatively small compared to the main memory. 
- However, caches have proven themselves time and again because access patterns in typical applications exhibit locality of reference. 
- References exhibit temporal locality if data that was previously requested is requested again.
- References exhibit spatial locality if the requested data is stored close to data that was previously requested.
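The relationship between hit rate and performance can be sketched with the standard average-memory-access-time (AMAT) formula. The timings below are illustrative assumptions, not measurements of any real system:

```python
# AMAT = hit time + miss rate * miss penalty
hit_time_ns = 2        # time to access the cache (assumed)
miss_penalty_ns = 100  # extra time to go to main memory on a miss (assumed)

for hit_rate in (0.50, 0.90, 0.99):
    amat = hit_time_ns + (1 - hit_rate) * miss_penalty_ns
    print(f"hit rate {hit_rate:.0%}: AMAT = {amat:.1f} ns")
```

Going from a 50% to a 99% hit rate cuts the average access time from 52 ns to about 3 ns here, which is why even a small cache with good locality pays off so handsomely.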

How is cache implemented?

- The cache is implemented by the hardware as a memory block serving as a place of temporary storage. 
- Here, only data that is likely to be accessed again and again is stored. 
- Caches are used not only by hard drives and CPUs but also by web servers and browsers. 
- A cache is made up of a pool of entries. 
- Each entry has a datum associated with it, which is a copy of a datum in the backing store. 
- Each entry is also tagged to specify the identity of the datum in the backing store.
- When a cache client (an operating system, CPU or web browser, for example) needs to access a datum that it thinks might be available in the backing store, the cache is checked first. 
- If the desired entry is found, it is returned for use; this is a cache hit.
- For example, a web browser might look in its local cache on disk to see if it has the contents of a web page. 
- In this case the URL serves as the search tag and the contents of the page are the datum. 
- The rate of successful cache accesses is known as the hit rate of the cache.
- In case of a cache miss, the datum that was not cached is copied into the cache so as to prevent future cache misses. 
- To make space for this datum, some existing datum in the cache is removed. 
- Which datum is to be removed is determined by a replacement algorithm. 
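As a minimal sketch of these ideas, here is a tiny least-recently-used (LRU) cache in Python. The class name and the plain dict standing in for the backing store are illustrative, not from any particular library:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny cache: evicts the least-recently-used entry when full.
    `backing_store` stands in for the slow source (disk, server, ...)."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing_store = backing_store
        self.entries = OrderedDict()   # tag -> datum, ordered by recency
        self.hits = self.misses = 0

    def get(self, tag):
        if tag in self.entries:            # cache hit
            self.hits += 1
            self.entries.move_to_end(tag)  # mark as most recently used
            return self.entries[tag]
        self.misses += 1                   # cache miss: fetch and keep a copy
        datum = self.backing_store[tag]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
        self.entries[tag] = datum
        return datum

store = {url: f"page for {url}" for url in ("a", "b", "c")}
cache = LRUCache(capacity=2, backing_store=store)
for url in ("a", "b", "a", "c", "a"):
    cache.get(url)
print(cache.hits, cache.misses)   # "a" stayed hot; "b" was evicted
```

LRU is only one replacement policy; FIFO, random and frequency-based schemes are other common choices, each trading implementation cost against hit rate.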

Sunday, April 28, 2013

What is fragmentation? What are different types of fragmentation?

In the field of computer science, fragmentation is an important factor concerning the performance of the system. It plays a large role in bringing down the performance of computers. 

What is Fragmentation?

- It can be defined as a phenomenon involving the inefficient use of storage space that in turn reduces the capacity of the system and brings down its performance.  
- This phenomenon leads to wastage of memory, and the term itself means ‘wasted space’.
- Fragmentation takes three different forms, as mentioned below:
  1. External fragmentation
  2. Internal fragmentation and
  3. Data fragmentation
- All these forms of fragmentation might be present in conjunction with each other or in isolation. 
- In some cases, fragmentation might be accepted in exchange for simplicity and speed of the system. 

The basic principle behind the fragmentation concept: 
- The system allocates memory in the form of blocks or chunks whenever requested by a computer program. 
- When the program has finished executing, the allocated chunk can be returned to the system memory. 
- The size of the memory chunk required by each program varies.
- In its lifetime, a program may request any number of memory chunks and free them after use. 
- When a program begins its execution, the memory areas that are free to be allocated are contiguous and long. 
- After prolonged usage, these contiguous memory areas get fragmented into smaller parts. 
- Eventually, a stage comes when it becomes almost impossible to serve the large memory demands of a program. 

Types of Fragmentation

1. External Fragmentation: 
- This type of fragmentation occurs when the available memory is divided into smaller blocks interspersed with allocated memory. 
- Certain memory allocation algorithms have the drawback that they are at times unable to order the memory used by programs in a way that minimizes wastage. 
- This leads to an undesirable situation where, even though free memory is available, it cannot be used effectively because it is divided into parts so small that they alone cannot satisfy the memory demands of the programs.  
- Since the unusable storage lies outside the allocated memory regions, this type of fragmentation is called external fragmentation. 
- This type of fragmentation is also very common in file systems, since many files of different sizes are created and deleted. 
- The effect is worse if the deleted file was stored in many small pieces. 
- This is because deleting it leaves similarly small chunks of free space which might be of no use.

2. Internal Fragmentation: 
- There are certain rules that govern the process of memory allocation. 
- These can lead to the allocation of more memory than is required. 
- For example, a rule may require that memory allocated to programs be divisible by 4, 8 or 16. So if some program actually requires 19 bytes, it gets 20 bytes. 
- This leads to the wastage of the extra 1 byte of memory. 
- In this case, the unusable memory is contained within the allocated region itself, and therefore this type of fragmentation is called internal fragmentation.
- In computer forensic investigation, this slack space is a most useful source of evidence. 
- However, it is often difficult to reclaim internal fragmentation. 
- Making a change in the design is the most effective way of preventing it. 
- Memory pools in dynamic memory allocation are among the most effective methods for cutting down internal fragmentation. 
- In a memory pool, the space overhead is spread over a large number of objects.
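The rounding rule above can be sketched in a few lines. The block size and the request sizes are made-up numbers for illustration:

```python
def allocate(requested, block=4):
    """Round a request up to the allocator's block size (illustrative rule).
    Returns (bytes actually allocated, bytes wasted to internal fragmentation)."""
    granted = -(-requested // block) * block   # ceiling division
    return granted, granted - requested

print(allocate(19))       # (20, 1): the 19-byte request wastes 1 byte

requests = [19, 7, 33, 4]
wasted = sum(allocate(r)[1] for r in requests)
print(f"total internal fragmentation: {wasted} bytes")
```

With larger block sizes the per-allocation waste grows, which is the trade-off memory pools address by grouping many same-sized objects into one chunk.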

3. Data Fragmentation: 
- This occurs when data is broken up into many pieces that lie far apart from each other.

Capturing information from within the product - Need to balance the privacy perspective

This is a post that is again based on real life experience. Let me lay out an example for you. Lisa is the program manager of a team which is developing a new version of software that prepares greeting cards. The software allows users to add their own images or use images that are available within the software, and you can do the same for greetings and videos; the output is a rich electronic greeting card that can be sent out by email or posted on social networking platforms, and it even provides a plainer output that can be printed.
Now, this card needs to work on many different operating systems, needs to work on machines with different performance parameters, and also needs to work on different browsers. How do you get this sort of information ? Well, you can get this information from users in terms of mailers, surveys, questions from within the software, and other such ways, or you can get it from the users' own machines. So, Lisa had some discussions with her team, and came up with a technical solution - there would be some tracking devices built within the software that would, on every launch, report back to the company about the platform and browser on the machine of the user, and this data would be stored in a database where it could be mined.
In the next version of the software, this process worked out really well. The team was very enthusiastic about the capabilities of retrieving all sorts of information from the user's machine and started building more of this. Lisa also got swept along in this tide and was an enthusiastic collaborator in trying to determine what more information could be mined. And all of this was done with the best of intentions. So, there was a need to know which printers the users had, since this would allow the team to determine which printers were most often used (the belief was that more customers would be using low end printers given the profile of customers), and this would allow some more optimization of the printing capabilities.
Piracy was a big problem, so there was a thought about tracking the serial number on the user's machine, and also getting the machine name and any other such information that would allow identification of the user. Somebody had a bright idea about trying to determine whether the customers were also using a rival software, which was not easy to track, but they managed to do it by searching the registry for the other rival software. By this time Lisa was getting a bit nervous, all her instincts were reporting problems.
And then the whole thing blew up. A user who had a personal firewall and a bit of paranoia found that the software was causing a number of hits on the firewall, and decided to investigate further. As he investigated and reported, other users also started getting curious and doing investigations, and also reporting more such issues, and also raising questions on the user forums. The legal department of the company stepped in and wanted to know what all information was being tracked, and why (WHY in capital terms) they were not consulted. It turned out that there are privacy implications, and every legal team has a list of items that they find fine to track, and other info on the user's machine which absolutely cannot be tracked. The tracking of a rival installation was way beyond what was permissible, and caused a lot of problems when it was discovered.
The company did a mea culpa (not fully though), and also promised that they will ensure that at the time of installation, there will be a note that will let the users know about what all information is being tracked, and also provide the users with an option about tracking such information (this language was all couched in very positive words, but it was far more than what the team was doing). So, when you start trying to get info from the user through your software, make sure that what you are doing is above board. 

Saturday, April 27, 2013

Learning - Setup groups within the company and share experiences - Part 1

The first question is about why such a topic would belong in a blog about software processes ? It seems a bit strange to have a post about experience sharing, since it seems to be something that is not specific to the software area. However, I have found that the busy nature of the software professional leads to a scarcity of time, leaving little time for reviewing one's own experience, let alone trying to share experiences with people from other teams. This busyness has only increased due to the economic problems of the last few years, with the same number of people expected to do more.
So what is this post about ? In any decent sized organization, whether it be a product development organization or an organization that works on projects, a number of teams do similar kinds of work and hence are in a position to learn a lot from each other. Even though their products or projects may be somewhat different from each other, the differences are usually not significant enough to prevent such learning. As a result, if there was coordination (or rather, let us not use the term coordination; let us talk about how they can share their experiences), then the teams and their managers can learn a lot about how each team handled situations which are common.
This is best served through an example. We had a situation where we were running into problems with an external vendor (not even a vendor, more like an open source component maker who would post new versions of their component with updates and notes on a regular basis, every 6 months or so). We did not have any clarity as to the contact details of the key people at the component maker, we did not know details about the kind of testing processes that they had, and so on. This was causing us problems, since the component was important to us (it saved us a lot of money that would otherwise be needed to buy professional software that did the same function).
At that time, we recollected that another team in the company was doing something similar in terms of functionality. We did a quick engineering discussion with them, and realized that they were far ahead of us in terms of figuring out these kind of details that we were looking for. They had knowledge of the key people in that open source project, and they had discussions which provided them far more comfort in terms of the testing processes and the level of quality in the released version of that component.
Because of this particular discovery, we had a senior engineering person within our group interact with a similar person from their team on a regular basis for these kinds of discussions. We also created an email group with the specific name of that component, and made it open for other people within the company to join. The result was that over a period of time, we discovered 2 smaller project teams within the company that were also looking to evaluate such a component, and they got a head start based on the discussions that they had with us and with the other teams.
Now, if you do such discussions, not only for such an example, but for other cases where you are running into problems, such as problems with the latest seeds of MS Windows or OS from Apple, or where you are running into other problems, such groups can provide a lot of help, since it allows people to share issues and share solutions. However, it takes time and effort to do this kind of sharing and coordination, even though the rewards that you get from such sharing can be well worth it.

I will add more about this in the next post in the series (Sharing experiences within the company - Part 2 - TBD)

Friday, April 26, 2013

What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?

- Thrashing takes place when the virtual memory sub-system of the computer is in a constant state of paging.
- It rapidly exchanges data in memory with data on the disk, to the exclusion of most application-level processing. 
- Thrashing degrades the performance of the computer and may even cause it to collapse. 
- The problem may worsen until the issue is identified and addressed. 
- If there are not enough pages available for a job, it becomes very likely that the system will suffer from thrashing, since this is an activity involving heavy paging. 
- This also leads to a high page fault rate. 
- This in turn cuts down the utilization of the CPU. 
- Modern systems use paging to execute many programs at once.
- However, this is what makes them prone to thrashing. 
- But this occurs only if the system does not have sufficient memory for the applications, or if the disk access time is too long. 

- Thrashing is also quite common in communication systems, where conflicts over internal bus access are common. 
- The degree by which the latency and throughput of a system degrade depends upon the algorithms and the configuration being used. 
- In systems making use of virtual memory, workloads and programs exhibiting insufficient locality of reference may lead to thrashing. 
- Thrashing occurs when the physical memory of the system cannot contain the workload or the program. 
- Thrashing can also be called constant data swapping.
- Older systems were low end computers, i.e., the RAM they had was insufficient for modern usage patterns. 
- Thus, when their memory was increased they became noticeably faster. 
- This happened because the availability of more memory reduced the amount of swapping and thus increased the processing speed. 
- The IBM System/370 (a mainframe computer) faced this kind of situation. 
- In it, a certain instruction sequence consisted of an execute instruction pointing to a move instruction. 
- Both of these instructions crossed a page boundary, and the source from which the data had to be moved and the destination where it was to be placed also both crossed page boundaries. 
- Thus, this particular instruction altogether required 8 pages in memory at the same time. 
- Now, if the operating system allocated fewer than 8 pages, a page fault was sure to occur. 
- This page fault would lead to thrashing on all the attempts at restarting the failing instruction. 
- This may even reduce the CPU utilization to almost zero!

How can a system handle thrashing?

To resolve the problem of thrashing, the following things can be done:
1. Increasing the amount of main memory, i.e., the RAM, in the system. This is the best solution and is helpful in the long term also.
2. Decreasing the number of programs to be executed by the system.
3. Replacing the programs that utilize heavy memory with less memory-intensive equivalents.
4. Making improvements in spatial locality.
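Remedy 1 can be illustrated with a toy page-replacement simulation (FIFO replacement and the reference string are illustrative assumptions): when a program's working set of 5 pages does not fit in the available frames, every single reference faults, and one extra frame almost eliminates the faults.

```python
from collections import deque

def page_faults(refs, frames):
    """Count page faults under simple FIFO page replacement."""
    resident = deque()
    faults = 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.popleft()          # evict the oldest resident page
            resident.append(page)
    return faults

# A program cycling through a working set of 5 pages, 20 times over.
refs = [1, 2, 3, 4, 5] * 20
for frames in (3, 4, 5):
    print(frames, "frames ->", page_faults(refs, frames), "faults")
# 3 and 4 frames thrash (all 100 references fault); 5 frames fault only 5 times
```

This is exactly the System/370 story in miniature: once the memory on hand is one page short of what the workload needs at the same time, the fault rate jumps from near zero to near 100%.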

- Thrashing can also occur in cache memory, i.e., the faster storage used for speeding up data access. 
- It is then called cache thrashing. 
- It occurs when the cache is accessed in a way that negates its benefit. 
- When this happens, many main memory locations compete with each other for the same cache lines, which in turn leads to a large number of cache misses.

Thursday, April 25, 2013

What is the difference between Hard and Soft real-time systems?

- Real-time operating systems are systems that have been developed for serving the real-time requests of applications. 
- They are capable of processing data as it comes in. 
- They do not introduce delays through buffering. 
- Thus, the time taken for processing is quite small.
- The scheduling algorithms used by real-time operating systems are quite advanced and dedicate themselves to a small set of applications. 
- Minimal thread-switching latency and interrupt latency are two key characteristics of these kinds of operating systems. 
- For these systems, the amount of time they take to respond matters more than the amount of work they do.
- These systems are expected to maintain consistency in producing output. 

The real-time operating systems can be divided into two categories, namely:
  1. The hard real-time operating system and
  2. The soft real-time operating system
In this article we discuss these two systems and the common differences between them.
  1. Hard real-time operating systems produce less jitter while producing the desired outputs. On the other hand, the jitter produced by a soft real-time operating system is considerably greater than that of its hard real-time counterpart.
  2. What distinguishes them is not the main goal but rather the type of performance guarantee they give, i.e., whether hard or soft.
  3. Soft real-time operating systems are designed such that they can usually meet deadlines, whereas hard real-time operating systems are designed so as to meet deadlines deterministically.
  4. Hard real-time systems are also called immediate real-time systems. They are bound to work within confined, strict deadlines. If the application is unable to complete its task in the allotted time, it is said to have failed. Some examples of hard real-time operating systems are: anti-lock brakes, aircraft control systems and pacemakers.
  5. Hard real-time operating systems are bound to adhere to the deadlines assigned to them. Missing a deadline can incur a great loss. For soft real-time operating systems, it is acceptable if a deadline is occasionally missed, as in the case of online databases.
There is also a third, less well known category of real-time operating systems, called the ‘firm RTOS’. These also need to keep to their deadlines; missing one won't cause any catastrophic effect, but may give results that are undesirable.
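The deadline idea can be sketched in code. This is only an illustration of deadline checking on a general-purpose OS, which can never actually give hard real-time guarantees; the 50 ms deadline and the job bodies are assumptions:

```python
import time

DEADLINE_S = 0.050   # each job must finish within 50 ms (illustrative figure)

def meets_deadline(job):
    """Run one job and report whether it finished within the deadline.
    A hard real-time system treats a miss as outright failure; a soft
    real-time system tolerates occasional misses with degraded results."""
    start = time.monotonic()
    job()
    return time.monotonic() - start <= DEADLINE_S

def fast_job():
    sum(range(1000))            # finishes in well under a millisecond

def slow_job():
    time.sleep(0.2)             # guaranteed to overrun the deadline

print("fast job met deadline:", meets_deadline(fast_job))
print("slow job met deadline:", meets_deadline(slow_job))
```

A real RTOS enforces this at the scheduler level, preempting lower-priority work so that the deadline check never comes out false for a hard-deadline task.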

More about Real time Operating System

- Embedded systems have evolved rapidly and are now present all around us: in digital homes, cell phones, air conditioners, cars and so on. 
- We very rarely recognize the extent to which they have eased our day-to-day life. 
- Safety is another aspect of our lives for which we depend on these embedded systems. 
- The thing that controls these systems is the operating system. 
- A real-time operating system is what most of these gadgets use. 
- The tasks that are assigned to a real-time OS always have deadlines. 
- The OS adheres to these deadlines while completing the tasks. 
- If these systems miss a deadline the results can be very dangerous and even catastrophic. 
- With each passing day the complexity of these systems is increasing, and so is our dependence on them.
- Some examples of real-time operating systems are:
Ø  RTLinux
Ø  Windows CE
Ø  LynxOS
Ø  VxWorks
- An RTOS knows well not to compromise on deadlines. 

Wednesday, April 24, 2013

What is multi-tasking, multi-programming and multi-threading?

When it comes to computing, there are three important inter-related concepts, namely multi-programming, multitasking and multi-threading. 

What is Multitasking?

- Multitasking actually emerged out of need, since while the system performed one task, a lot of time was wasted. 
- As their needs grew, people wanted the computer to perform many tasks at the same time. Multi-tasking is what we call this. 
- Here, multiple tasks or processes are carried out simultaneously.
- The common processing resources, i.e., the main memory and the CPU, are shared by these processes. 
- If the system has only one CPU to work with, then it can only run one task at a time. 
- Such systems achieve multi-tasking by scheduling all the processes required to be carried out. 
- One task runs while the others wait in the pipeline.
- The CPU is reassigned to each of the tasks in turn, and each reassignment is termed a context switch. 
- When this happens very frequently, it gives the illusion that the processes are being executed in parallel. 
- There are other systems, called multi-processor machines, which have more than one CPU and can perform a number of tasks greater than the number of CPUs. 
- There are a number of scheduling strategies that might be adopted by operating systems:
Ø  Multi-programming
Ø  Time-sharing
Ø  Real-time systems
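A time-sharing scheduler's behaviour can be sketched with a toy round-robin simulation. The task names, burst times and quantum are made-up values; every requeue of an unfinished task corresponds to a context switch:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin time-sharing on a single CPU.
    Returns the completion time of each task."""
    queue = deque(burst_times.items())
    clock = 0
    finish = {}
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        clock += slice_                      # the task runs for one time slice
        remaining -= slice_
        if remaining:
            queue.append((name, remaining))  # context switch: back of the queue
        else:
            finish[name] = clock
    return finish

print(round_robin({"A": 5, "B": 3, "C": 1}, quantum=2))
# {'C': 5, 'B': 8, 'A': 9}: short tasks finish early even behind long ones
```

Notice that the short task C finishes at time 5 rather than waiting for A and B to run to completion, which is exactly the responsiveness time-sharing buys at the cost of extra context switches.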

What is Multi-Programming?

- Earlier, we had very slow peripheral devices, and therefore CPU time was a luxury and very expensive. 
- Whenever a program being executed needed to access a peripheral, the CPU had to keep waiting for the peripheral to finish processing the data. 
- This was very inefficient. 
- Then came the concept of multi-programming, which was a very good solution. 
- When a program reached the waiting state, its context was stored in memory and the CPU was given some other program to execute. 
- This processing continued until all the processes at hand were completed. 
- Later, developments such as VMT (virtual machine technology) and virtual memory greatly increased the efficiency of multi-programming systems. 
- With these two technologies, programs were able to make use of the OS and memory resources just as the currently executing programs did. 
- However, there is one drawback with multi-programming: it does not guarantee that all programs will be executed in a timely manner. 
- Even so, it was a great help in processing multiple batches of programs.

What is Multi-threading?

- With multi-tasking, a great improvement was seen in the throughput of computer systems. 
- Programmers therefore found themselves implementing programs as sets of cooperating processes.
- Here, the processes were assigned different tasks: one would take the input, another would process it, and a third would write the output to the display. 
- This, however, required tools that allowed an efficient exchange of data between the processes.
- Threads were an outcome of the idea that processes can cooperate efficiently if their memory space is shared.
- Threads can therefore be defined as processes running in one and the same memory context. 
- These threads are said to be light-weight since no change of memory context is needed when switching between them. 
- The scheduling followed here is preemptive. 
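A minimal Python sketch of this shared memory context (the counter value and thread count are arbitrary): because all the threads run in the same address space, they can update one variable directly, but they need a lock to do so safely.

```python
import threading

# A single counter lives in one shared memory context; every thread sees it.
counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:          # shared state needs synchronisation
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 40000 -- all four threads updated the same variable
```

No data was copied between the workers; the shared memory context is exactly what makes the exchange cheap, and the lock is the price paid for sharing.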

Learning about external dependencies - getting information from external teams by getting on their lists

We had a huge problem around the start of the year. Our product is a greeting card application, which allows you to take your own images or some standard images, add text or standard greetings, add a voice message which can be customized or taken from some standard greetings, and then convert all this into a rich content greeting card that can be sent via email or via social networking. For integrating with social networking, we used the APIs that were publicly made available by these platforms. We were not big enough to establish a direct contact with the social networking platforms, so we would just use these APIs.
Near the beginning of the year, we started getting customer complaints that the product would just not work with a particular social networking site. Users would try to send a card through it, and they would get back an error saying something like: "Resource not found". This was reported on the user forums that we had for our product, and once one person reported the problem, others were able to confirm it. This was useful since it confirmed that the problem was not limited to one user, but it also meant that even people who were not using that particular feature started feeling unhappy.
However, what really caused us a lot of problems was when one of our more technically oriented users pointed out that this could be because one of the APIs that we were using was no longer in existence; it had been replaced by another API. Even worse, the user pasted the relevant notice from the platform's technical forum where this was mentioned, pointed to the tweet where it had been announced, and so on. All this was repeated by more users, who started making fun of how we had not bothered to keep up, how our problem was affecting a product that they were using, and asking whether we could finally get our thumb out and do something. Boy, was it embarrassing, and the problem was, nobody had much sympathy for us. After all, we should have been on the ball and should have known that the API was going to be replaced. The social networking platform had been talking about it for around 6 months, so people had had enough time to do something.
The good news was that we could replace the API without needing to change the application that users had, since the API call would in turn look at a piece of code on our website, and we were able to redirect them. This slowed things down by around 5 seconds, but that was more acceptable than trying to roll out a patch to all the affected users.
Another policy was set in place as a result of this. From that point onwards, any external technology that we used was tracked in a database where we also recorded the site of the technology and all of its accounts (Twitter, Facebook, forums, email), and once every month we had a person assigned to review these, so that if there was any notice, we could do something about it rather than finding out later. It was good that we did so, since we ran into a possibly bigger problem later, but we were prepared and handled it so well that nobody noticed any problem.

Tuesday, April 23, 2013

What is Throughput, Turnaround time, waiting time and Response time?

In this article we discuss four important terms that we often come across while dealing with processes. These four factors are:
1.  Throughput
2.  Turnaround Time
3.  Waiting Time
4.  Response Time

What is Throughput?

- In communications networks like packet radio, Ethernet etc., throughput refers to the rate of successful delivery of data over the channel. 
- The data might be delivered via either a logical link or a physical link, depending on the type of communication being used. 
- Throughput is measured in bits per second (bps) or in data packets per slot. 
- Another term common in network performance is the aggregate throughput or system throughput. 
- This equals the sum of the data rates at which data is delivered to each and every terminal in the network. 
- In computer systems, throughput means the rate of successful completion of tasks by the CPU in a specific period of time. 
- Queuing theory is used for the mathematical analysis of throughput. 
- Throughput is often used synonymously with digital bandwidth consumption. 
- Another related term is the maximum throughput, which is synonymous with the digital bandwidth capacity.
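Both senses of throughput reduce to a simple rate calculation. The figures below are invented purely for illustration:

```python
# Hypothetical figures: tasks completed by a CPU over an interval.
tasks_completed = 120
interval_seconds = 60.0
throughput = tasks_completed / interval_seconds   # tasks per second

# Aggregate (system) throughput of a network: the sum of the data rates
# delivered to each terminal -- the per-terminal rates here are invented.
terminal_rates_bps = [1_000_000, 2_500_000, 500_000]
aggregate_throughput_bps = sum(terminal_rates_bps)

print(throughput, aggregate_throughput_bps)   # 2.0 tasks/s, 4000000 bps
```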

What is Turnaround Time?

- In computer systems, the total time taken from the submission of a task or thread for execution to its completion is referred to as the turnaround time. 
- The turnaround time varies depending on the programming language used and on how the software was developed.
- It covers the whole amount of time taken for delivering the desired output to the end user after the task is started. 
- It is counted among the metrics used for evaluating the scheduling algorithms of operating systems. 
- When it comes to batch systems, the turnaround time is higher because of the time taken in forming the batches, executing them and returning the output.
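The definition above boils down to completion time minus submission time. A minimal sketch, with invented task names and times:

```python
def turnaround_times(arrival, completion):
    """Turnaround time = completion time - submission (arrival) time."""
    return {task: completion[task] - arrival[task] for task in arrival}

# Hypothetical batch of tasks (times in seconds).
arrival = {"t1": 0, "t2": 2, "t3": 4}
completion = {"t1": 7, "t2": 12, "t3": 13}
print(turnaround_times(arrival, completion))   # {'t1': 7, 't2': 10, 't3': 9}
```

Note that t2 ran for less wall-clock time than t1 would suggest; turnaround includes all the time spent queued, not just the execution itself.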

What is Waiting Time?

- This is the time duration between the request for an action and the moment it occurs. 
- Waiting time depends upon the speed and make of the CPU and the architecture that it uses. 
- If the processor supports a pipeline architecture, then the process is said to be waiting in the pipe. 
- When the current task in the processor is completed, the waiting task is passed on to the CPU for execution. 
- When the CPU starts executing this task, the waiting period is said to be over. 
- The status of a task that is waiting is set to 'waiting'; from the waiting status it changes to active, and then halts on completion.
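Under a simple first-come-first-served discipline, the waiting time is just the gap between a task's arrival and the moment the CPU picks it up. A sketch with invented arrival times and bursts:

```python
def fcfs_waiting_times(arrivals, bursts):
    """Waiting time under first-come-first-served: the time between a task's
    request (arrival) and the moment the CPU starts executing it."""
    clock, waits = 0, []
    for arrive, burst in zip(arrivals, bursts):
        clock = max(clock, arrive)     # CPU may be idle until the task arrives
        waits.append(clock - arrive)   # time spent in the 'waiting' status
        clock += burst                 # task is active until it halts
    return waits

# Hypothetical tasks: arrival times and CPU bursts in milliseconds.
print(fcfs_waiting_times([0, 1, 2], [5, 3, 4]))   # [0, 4, 6]
```

The first task never waits; each later task waits for everything ahead of it to halt, which is why its status stays 'waiting' until the CPU is free.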

What is Response Time?

- The time taken by a computer system or functional unit to react or respond to the input supplied is called the response time. 
- In data processing, the user would perceive the response time as the interval between:
Ø  The moment the operator enters a request at a terminal, and
Ø  The instant at which the first character of the response appears.
- In data systems, the response time can be defined as the time from the receipt of the EOT (end of transmission) of a message inquiry to the start of the transmission of the response to that inquiry. 
- Response time is an important concept in real-time systems, where it is the time that elapses from the dispatch of a request until its completion. 
- However, one should not confuse response time with the WCET (worst-case execution time). 
- WCET is the maximum time taken by the execution of the task without any interference. 
- Response time also differs from the deadline. 
- The deadline is the time for which the output is valid. 
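As a final sketch, response time in the first sense above is simply the gap between submitting a request and the system's first reaction. The figures below are hypothetical (here the "first reaction" is taken to be the moment the request is first dispatched):

```python
def response_times(arrivals, first_dispatch):
    """Response time: from request submission to the system's first reaction
    (here, the moment the request is first acted upon)."""
    return [d - a for a, d in zip(arrivals, first_dispatch)]

# Hypothetical figures (milliseconds): when each request arrived and when
# the system first started acting on it.
arrivals = [0, 10, 20]
first_dispatch = [2, 15, 21]
print(response_times(arrivals, first_dispatch))   # [2, 5, 1]
```

None of these numbers says anything about how long the requests then took to complete, which is exactly why response time must not be confused with WCET or with the deadline.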
