

Showing posts with label Threads.

Saturday, May 11, 2013

What is meant by deadlock? List the necessary conditions for a deadlock to arise.


Consider two competing processes or actions in a situation where each waits for the other to finish, so that neither ever does. Such a situation is called a deadlock. 
- When the number of competing processes is exactly two, the situation is also called a 'deadly embrace'. 
- The two competing actions block each other permanently; neither can make any further progress. 

"In operating systems, a deadlock arises when two or more threads or processes enter the waiting state at the same time because the resource each of them wants is held by another process that is itself waiting for a resource held by yet another waiting process, and so on." 

- Each process is then unable to change its state, since the resources it requires are held by other processes, and so it remains in the waiting state indefinitely. 
- The system is now in a deadlock. 
- Systems such as distributed systems, parallel computing systems and multi-processing systems face deadlocks quite often. 
- This is because they use hardware and software locks to manage shared resources and to implement process synchronization. 
- Deadlocks may also occur in telecommunication systems, caused by lost or corrupted signals rather than by resource contention. 
- A deadlock can be compared to problems such as catch-22 or the chicken-and-egg problem. 
- A deadlock can also occur in a circular chain pattern. 
For example, consider a computer with 3 processes and 3 CD drives, one drive held by each process. 
- All three processes would be in a deadlock if each of them requested another drive.
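The three-drive example can be sketched with Python's `threading` module. The timed `acquire` (instead of a blocking one) is an assumption added so the sketch can report the circular wait rather than hang forever; the names are illustrative:

```python
import threading

# Each "process" (thread) holds one CD drive and then requests the next
# drive in the cycle. Barriers make the outcome deterministic: every
# thread holds its own drive before any thread requests the next one.
drives = [threading.Lock() for _ in range(3)]
barrier = threading.Barrier(3)
blocked = []                       # processes whose second request failed
blocked_lock = threading.Lock()

def process(i):
    drives[i].acquire()            # hold drive i
    barrier.wait()                 # now all three drives are held
    # request the next drive in the circular chain
    got = drives[(i + 1) % 3].acquire(timeout=0.2)
    if not got:
        with blocked_lock:
            blocked.append(i)      # circular wait: request cannot be granted
    barrier.wait()                 # nobody releases until all have timed out
    if got:
        drives[(i + 1) % 3].release()
    drives[i].release()

threads = [threading.Thread(target=process, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(blocked)} of 3 processes were blocked in a circular wait")
```

All three requests time out, because every drive is already held by another member of the cycle.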

Conditions for a Deadlock to arise

There are certain conditions that must all hold for a deadlock to arise:
  1. Mutual exclusion: There has to be at least one resource that cannot be shared, so that only one process can use it at any given time.
  2. Resource holding (or hold and wait): There must be at least one process that holds a resource while requesting additional resources held by other processes.
  3. No preemption: Once resources have been allocated, they cannot be forcibly de-allocated by the operating system; the process holding a resource must release it voluntarily.
  4. Circular wait: A circular chain of processes must be formed, as in the earlier example.
"These 4 conditions are collectively called the 'Coffman conditions'. If any one of them is not met, a deadlock cannot occur."
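Since breaking any one Coffman condition prevents deadlock, a common trick is to break circular wait with a fixed global lock order. A minimal sketch (thread names and lock names are made up for illustration):

```python
import threading

# Both workers need lock_a and lock_b, but every thread acquires them in
# the same fixed order (lock_a before lock_b), so a cycle in the wait-for
# relation is impossible and the program always terminates.
lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name):
    with lock_a:          # always first in the global order
        with lock_b:      # always second
            results.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("completed:", results)
```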

- Handling deadlocks is an important capability of an operating system. 
However, many modern operating systems still cannot avoid deadlocks. 
- When a deadlock occurs, different operating systems handle it with various non-standard approaches. 
- Many of these approaches try to defeat at least one of the Coffman conditions (generally the 4th one). 
- Below we list some of the approaches:
  1. Ignoring deadlock
  2. Detection
  3. Prevention
  4. Avoidance
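The detection approach is often described in terms of a wait-for graph: an edge p -> q means process p is waiting for a resource held by process q, and a cycle in that graph means deadlock. A minimal sketch (the graphs below are made up for illustration, echoing the three-drive example):

```python
def has_cycle(graph):
    """Return True if the directed wait-for graph contains a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GREY
        for nxt in graph.get(node, ()):
            if color.get(nxt, WHITE) == GREY:   # back edge -> cycle
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

deadlocked = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}   # circular wait
healthy = {"P1": ["P2"], "P2": ["P3"], "P3": []}          # chain, no cycle
print(has_cycle(deadlocked), has_cycle(healthy))          # True False
```

A detector like this is typically run periodically; when it finds a cycle, the OS can recover by killing or rolling back one of the processes in it.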
- There is a second kind of deadlock, called 'distributed deadlock', which is common in distributed systems that use concurrency control. 


Thursday, May 9, 2013

What is a thread? What is meant by multi-threading?


About Thread

- A thread is the smallest sequence of programmed instructions within a process that an operating system scheduler can manage independently. 
- A thread is sometimes also called a lightweight process. 
- The way threads and processes are implemented differs between operating systems. 
- But in the majority of cases, threads are contained within a process. 
- The same process can have more than one thread. 
- Threads of the same process share its resources, including memory, whereas separate processes generally do not. 
- In simple words, the instructions (code) and the context (values) of a process are shared by its constituent threads. 

In this article we focus on threads and multi-threading.

- Multi-threading is naturally suited to multiprocessor systems.
- But even single-processor systems can support it by time-division multiplexing, just like multitasking. 
- In time-division multiplexing, context switches occur between the many threads. 
- This happens so often that it seems to the user that the threads are executing concurrently.
- However, in multiprocessor systems concurrency can be truly achieved, since each processor runs one thread and many threads execute simultaneously. 
- Both multiprocessor threading and time-sliced threading are supported by most modern operating systems with help from the process scheduler. 
- In many implementations, threads can only be manipulated through system calls, facilitated by the kernel of the operating system. 
- This is why such implementations are called kernel threads. 
- An example of a kernel thread is the LWP, or lightweight process, which shares the same state and information. 
- Some programs instead use user-space threads, implemented with the help of signals, timers, etc. 
- These programs perform a kind of ad hoc time slicing. 
- Some may take threads and processes to be the same, but there is a considerable difference between the two:
  1. Processes are independent, whereas threads exist as subsets of a process.
  2. Processes carry more state information, whereas all the threads within a process share its state, resources and memory.
  3. Different processes have different address spaces, whereas threads share the same address space.
  4. IPC, or inter-process communication, is the only medium through which processes can communicate with each other.
  5. Context switching between threads of the same process is faster than context switching between processes.
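Points 2 and 3 can be sketched in a few lines of Python: threads of one process write to the same list object with no copying or IPC, which separate processes could not do directly. The names here are illustrative:

```python
import threading

shared = []                  # lives in the process's address space
lock = threading.Lock()      # list.append is thread-safe in CPython,
                             # but the lock makes the intent explicit

def worker(i):
    with lock:
        shared.append(i)     # every thread writes to the same object

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))        # all four writes are visible: [0, 1, 2, 3]
```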

Features of Multi-threading

- Multi-threading is now among the most widespread programming models. 
- The major characteristic of this model is that multiple threads can execute within the same process context. 
- Even though the threads share the resources of the process, they execute independently. 
- The most widespread application of this model is in parallel computing.
- Full advantage of this technology can be taken only when it is applied to a multiprocessor system or a distributed system. 
- This is because threads naturally lend themselves to truly concurrent execution. 
- But in these cases, necessary precautions must be taken to avoid race conditions and other undesirable behavior. 
- Thread synchronization is also important for the correct manipulation of data. 
- Mutually exclusive operations are another requirement, to prevent simultaneous modification of common data. 
- If these primitives are used carelessly, they can lead the system into a deadlock. 
- Another feature of multi-threading is that the application remains responsive to input. 
- This can be contrasted with single-threaded applications, where if one operation blocks, the whole program freezes.


Wednesday, April 24, 2013

What is multi-tasking, multi-programming and multi-threading?


When it comes to computing, there are three closely inter-related concepts: multi-programming, multi-tasking and multi-threading. 

What is Multitasking?

- Multi-tasking emerged out of necessity: while the system performed one task, a lot of time was wasted. 
- As their needs grew, people wanted the computer to perform many tasks at the same time. Multi-tasking is what we call it. 
- Here, multiple tasks or processes are carried out simultaneously.
- The common processing resources, i.e., the main memory and the CPU, are shared by these processes. 
- If the system has only one CPU to work with, then it can only run one task at a time. 
- Such systems achieve multi-tasking by scheduling all the processes to be carried out. 
- The system runs one task while the others wait in the pipeline.
- The CPU is reassigned to the tasks turn by turn; each reassignment is termed a context switch. 
- When this happens very frequently, it gives the illusion that the processes are executing in parallel. 
- Other systems, called multi-processor machines, have more than one CPU and can perform a number of tasks greater than the number of CPUs. 
- There are a number of scheduling strategies that may be adopted by operating systems:
  - Multi-programming
  - Time-sharing
  - Real-time systems

What is Multi-Programming?

- Earlier, peripheral devices were very slow, and CPU time was therefore a luxury and very expensive. 
- Whenever an executing program accessed a peripheral, the CPU had to keep waiting for the peripheral to finish processing the data. 
- This was very inefficient. 
- Then came the concept of multi-programming, which was a very good solution. 
- When a program reached the waiting state, its context was stored in memory and the CPU was given some other program to execute. 
- This processing continued until all the processes at hand were completed. 
- Later, developments such as VMT (virtual machine technology) and virtual memory greatly increased the efficiency of multi-programming systems. 
- With these two technologies, programs were able to make use of the OS and memory resources just as the currently executing programs did. 
- However, there is one drawback with multi-programming: it does not guarantee that every program will be executed in a timely manner. 
- Even so, it was of great help in processing multiple batches of programs.

What is Multi-threading?

 
- With multi-tasking, a great improvement was seen in the throughput of computer systems. 
- So programmers found themselves implementing programs as sets of cooperating processes.
- Here, the processes were assigned different tasks: one would take input, another would process it and a third would write the output to the display. 
- But for this, tools were required that allowed an efficient exchange of data.
- Threads were an outcome of the idea that processes can be made to cooperate efficiently if they share their memory space.
- Therefore, threads can be defined as tasks running in the same shared memory context. 
- These threads are said to be lightweight, since no change of memory context is needed to switch between them. 
- The scheduling used here is preemptive. 


Saturday, April 20, 2013

Explain the concepts of threads and processes in operating system?


Threads and processes are an important part of operating systems that feature multi-tasking and parallel programming. Both come under the single concept of 'scheduling'. Let us try to understand these concepts with the help of an analogy.

- Consider a process to be a house and its threads to be the occupants. 
- The process is then like a container having many attributes. 
- These attributes can be compared to those of a house, such as the number of rooms, the floor space and so on. 
- Despite having so many attributes, the house is a passive thing: it cannot perform anything on its own. 
- The active elements in this situation are the occupants of the house, i.e., the threads. 
- It is they who actually use the various attributes of the house. 
- Since you too live in a house, you have an idea of how it actually works and behaves. 
- You can do whatever you like in the house when you are the only one there. 
- What if another person starts living with you? You can no longer do anything you want. 
- You cannot use the washroom without making sure that the other person is not in there. 
- This can be related to multi-threading. 
- Just as the house occupies a plot of land, a process occupies an amount of memory. 
- Just as the occupants may freely access anything in the house, the occupied memory is shared by the threads that are part of that process, i.e., access to the memory is common. 
- If the process allocates some memory, it can be accessed by all of its threads. 
- If that is happening, it has to be made sure that access to the memory from all the threads is synchronized. 
- If no synchronization is needed, that is because the memory has been allocated for one specific thread's exclusive use. 
- But in practice, things are a lot more complicated, because at some point everything has to be shared. 
- If one thread wants to use a resource that is already in use by another thread, then it has to follow the concept of mutual exclusion. 
- An object known as a mutex is used by the thread to achieve exclusive access to that resource. 
- Mutex can be compared to a door lock. 
- Once a thread locks the mutex, no other thread can use that resource until the mutex is unlocked again by that thread. 
- The mutex itself is thus one more resource that the threads use. 
- Now suppose there are many threads waiting to use the resource; when the mutex is unlocked, the question arises of who will be the next one to use it. 
- This problem can be solved by deciding either on the basis of the length of the wait or on the basis of priority. 
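The door-lock analogy can be sketched with a Python mutex; the washroom names are just illustrative:

```python
import threading

door = threading.Lock()        # the lock on the washroom door
log = []

def use_washroom(name):
    with door:                 # lock the door behind you
        log.append(f"{name} in")
        log.append(f"{name} out")   # no other thread ran in between
    # leaving the with-block unlocks the door for the next occupant

threads = [threading.Thread(target=use_washroom, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Entries always come in matched in/out pairs because of the mutex.
print(log)
```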
- Now suppose there is a location that can be accessed by more than one thread simultaneously.
- You want only a limited number of threads using that memory location at any given point in time. 
- This problem cannot be solved with a mutex, but it can with another primitive called a semaphore. 
- A semaphore with a count of 1 protects a resource that can only be used by one thread at a time. 
- With a semaphore of greater count, more threads can access the resource simultaneously.  
- It just depends upon how you characterize or set the lock.
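A counting semaphore can be sketched like this (a mutex is simply the count == 1 case); the worker logic is invented for illustration:

```python
import threading

sem = threading.Semaphore(2)     # at most 2 threads inside at once
active = 0                       # threads currently inside the region
peak = 0                         # highest concurrency observed
state_lock = threading.Lock()

def worker():
    global active, peak
    with sem:                    # blocks once 2 threads are inside
        with state_lock:
            active += 1
            peak = max(peak, active)
        # ... the thread would use the shared location here ...
        with state_lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("peak concurrent users:", peak)   # never exceeds the count of 2
```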


Tuesday, February 7, 2012

What are common programming bugs every tester should know?

A programming bug, as we all know, is a catch-all term for a flaw, error or mistake in a software system or program. A bug produces unexpected results or causes abnormal behavior in the software system or program.

CAUSES OF BUGS
- The root causes of bugs are the faults or mistakes introduced into the program's source code, its design and structure, or its implementation.
- A program, or a piece of a program, that is badly affected by bugs is commonly termed a "buggy" program or code.
- Bugs can be introduced unknowingly into the software system or program during coding, specification, data entry, design and documentation.
- Bugs can also arise from complex interactions between the components of a complex computer program or system.
- This happens because software programmers or developers have to combine large amounts of code and therefore may not be able to track minor bugs.
- The discovered bugs are also documented and such documents or reports are called bug reports or trouble reports.

HOW DO BUGS ACTUALLY AFFECT A PROGRAM?
- A single bug can trigger a number of faults or errors within the program, which can affect the program in many ways.
- The degree of the effect depends on the nature of the bug.
- It can either affect the program very badly, causing it to crash or hang, or it may have only a subtle effect on the system.
- Some bugs are not detected during the entire software testing process.
- A bug may cause a chain effect: one bug causes an error, that error causes other errors, and so on.
- Some bugs may even shut down the whole software system or application.
- Bugs can have serious impacts.
- Bugs can render a whole machine unusable.
- Bugs are, after all, the mistakes of human programmers.

TYPES OF BUGS
Bugs are of many types. There are certain common types of bugs that every programmer should be familiar with.

First we are listing some security vulnerabilities:
- Improper encoding
- SQL injection
- Improper validation
- Race conditions
- Memory leaks
- Cross site scripting
- Errors in transmission of sensitive data
- Information leak
- Controlling of critical data
- Improper authorization
- Security checks on the client side and
- Improper initialization

SOME COMMON BUGS ARE:

1. Memory leaks
- This bug is catastrophic in nature.
- It is most common in languages like C and C++, i.e., languages that do not have automatic garbage collection.
- Here memory is consumed at a higher rate than it is de-allocated (the de-allocation rate may even be zero).
- In such a situation the executing program eventually comes to a halt because no free memory is available.

2. Freeing the resource which has already been freed
- This bug is quite frequent in occurrence.
- Normally a resource is freed exactly once after allocation; freeing a resource that has already been freed causes an error.

3. De-referencing a NULL pointer
- This bug is caused by an improper or missing initialization.
- It can also be caused by incorrect use of reference variables.

4. References
- Sometimes unexpected or unclear references are created during execution, which may lead to de-allocation problems.

5. Deadlocks
- These bugs, though rare, are catastrophic. They are caused when two or more threads lock each other out mutually, each waiting for a resource the other holds.

6. Race conditions
- These are frequent, and occur when two threads try to access or modify the same resource at the same time.
- The two threads are said to be racing.
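A classic instance is two threads incrementing a shared counter: `counter += 1` is a read-modify-write, so without a lock the two threads can interleave and lose updates. A minimal sketch of the synchronized version (names are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()
N = 100_000

def safe_increment():
    global counter
    for _ in range(N):
        with lock:            # the read-modify-write is now atomic
            counter += 1

threads = [threading.Thread(target=safe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 200000 with the lock; often less without it
```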


Wednesday, August 26, 2009

Overview of Threads

A thread is an encapsulation of the flow of control in a program. Most people are used to writing single-threaded programs - that is, programs that only execute one path through their code "at a time". Multi-threaded programs may have several threads running through different code paths "simultaneously".
A thread is similar to a sequential program: a single thread also has a beginning, an end and a sequence, and at any given time during its run there is a single point of execution. However, a thread itself is not a program - it cannot run on its own - but runs within a program.

Why use threads?
Threads should not alter the semantics of a program. They simply change the timing of operations. As a result, they are almost always used as an elegant solution to performance-related problems. Here are some examples of situations where you might use threads:
* Doing lengthy processing: While a Windows application is performing a long calculation, it cannot process any more messages. As a result, the display cannot be updated.
* Doing background processing: Some tasks may not be time-critical, but need to execute continuously.
* Doing I/O work: I/O to disk or to the network can have unpredictable delays. Threads allow you to ensure that I/O latency does not delay unrelated parts of your application.
* Making use of multiprocessor systems: You can't expect an application with only one thread to make use of two or more processors!
* Efficient time sharing: Using thread and process priorities, you can ensure that everyone gets a fair allocation of CPU time.

In general, some operations in a program incur a potentially large delay or hog the CPU, but that delay or CPU usage is unacceptable for other operations, which need to be serviced now.
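The I/O bullet can be sketched with a worker thread that absorbs a simulated slow read while the main thread keeps doing unrelated work; the names and delays are illustrative:

```python
import threading
import queue
import time

results = queue.Queue()          # thread-safe hand-off back to the caller

def slow_read():
    time.sleep(0.1)              # stands in for disk or network latency
    results.put("data from disk")

worker = threading.Thread(target=slow_read)
worker.start()

ticks = 0
while worker.is_alive():         # the main thread stays responsive
    ticks += 1                   # ...doing unrelated foreground work
    time.sleep(0.01)

worker.join()
data = results.get()
print(data, "after", ticks, "ticks of foreground work")
```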

Similarities between processes and threads:
* Like processes, threads share the CPU, and only one thread is active (running) at a time on a given processor.
* Like processes, threads within a process execute sequentially.
* Like processes, threads can create children.
* And like processes, if one thread is blocked, another thread can run.

Differences between processes and threads:
* Unlike processes, threads are not independent of one another.
* Unlike processes, all threads can access every address in the task.
* Unlike processes, threads are designed to assist one another. Processes may or may not assist one another, because they may originate from different users.


Tuesday, August 25, 2009

Overview of Race Conditions in Operating Systems

A race condition happens when a system depends on something being done outside of its control before the system reaches a point where it needs to use the results of that something, and there's no way to guarantee that that something will actually be finished when the system needs it.
For example, suppose there's a person who runs a program every morning that prints letters that have been queued throughout the previous day. There's another person in another department who runs a program that queues a letter, and then offers to let the person modify it while it's sitting in the printing queue. If the person runs this program too early in the day (before the printing program gets run), they're essentially in a "race" to finish their work before the printing program runs.

Symptoms of a Race Condition:

The most common symptom of a race condition is unpredictable values of variables that are shared between multiple threads. This results from the unpredictability of the order in which the threads execute. Sometimes one thread wins, and sometimes the other thread wins; at other times, execution works correctly. Also, if each thread is executed separately, the variable's value behaves correctly.
While many different devices are configured to allow multitasking, there is still an internal process that creates a hierarchy of functions. In order for certain functions to take place, other functions must occur beforehand. While the end user perceives that all the functions may appear to be taking place at the same time, this is not necessarily the case.
One common example of a race condition has to do with the processing of data. If a system receives commands to read existing data while writing new data, this can lead to a conflict that causes the system to shut down in some manner. The system may display some type of error message if the amount of data being processed places an undue strain on available resources, or it may simply shut down. When this happens, it is usually a good idea to reboot the system and begin the sequence again. If the amount of data being processed is considerable, it may be better to allow the assimilation of the new data to complete before attempting to read any of the currently stored data.

