
Wednesday, June 5, 2013

Explain the various techniques for Deadlock Prevention

Deadlocks are a nightmare for the programmers who design and write programs for multitasking or multiprocessing systems. For them, it is very important to know how to design programs in such a way as to prevent deadlocks.

Deadlocks are an even more common problem in distributed systems, which involve concurrency control and distributed transactions. The deadlocks that occur in these systems are termed distributed deadlocks.

It is possible to detect them using either of the following means:
1. Building a global wait-for graph from the local ones through a deadlock detector (a sketch of this follows below).
2. Using distributed algorithms such as edge chasing.
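As an illustration of the first approach, here is a minimal sketch in Python of detecting a cycle in a wait-for graph once the local graphs have been merged into a global one. The transaction names and graph contents are hypothetical; any cycle found corresponds to a deadlock.

# Minimal sketch: cycle detection in a merged (global) wait-for graph.
# Nodes are transactions; an edge T1 -> T2 means "T1 waits for T2".
def find_cycle(wait_for):
    """Return a list of transactions forming a cycle, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wait_for}
    parent = {}

    def dfs(node):
        color[node] = GRAY
        for nxt in wait_for.get(node, ()):
            if color.get(nxt, WHITE) == GRAY:  # back edge: cycle found
                cycle, cur = [nxt], node
                while cur != nxt:
                    cycle.append(cur)
                    cur = parent[cur]
                return cycle[::-1]
            if color.get(nxt, WHITE) == WHITE:
                parent[nxt] = node
                found = dfs(nxt)
                if found:
                    return found
        color[node] = BLACK
        return None

    for t in list(wait_for):
        if color[t] == WHITE:
            found = dfs(t)
            if found:
                return found
    return None

# Hypothetical global graph merged from two sites:
# T1 waits for T2, T2 waits for T3 (site A); T3 waits for T1 (site B).
graph = {"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}
print(find_cycle(graph))  # prints a cycle such as ['T2', 'T3', 'T1']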

- An atomic commitment protocol similar to two-phase commit can be used for automatically resolving distributed deadlocks.
- In that case, there is no need for any other resolution mechanism or for a global wait-for graph.
- However, this is possible only in distributed environments based on commitment ordering.
- In environments that use two-phase locking, a similar automatic global deadlock resolution takes place.
- There is another class of deadlocks, called phantom deadlocks.
- These are deadlocks that are detected in the system because of internal delays but that do not actually exist at detection time.
- Today, there exist a number of ways to increase parallelism where recursive locks might otherwise have caused severe deadlocks.
- But, as with everything else, this comes at a price.
- You have to accept one or both of the following: data corruption, or a performance overhead.
- Preemption with lock-reference counting, and the wait-for graph (WFG), are some examples of this.
- These can be applied either by allowing data corruption during preemption or by using versioning.
Apart from these, heuristic algorithms and algorithms that can track all the cycles causing deadlocks can be used to prevent deadlocks.
Even though these algorithms do not offer 100 percent parallelism, they prevent deadlocks while providing an acceptable trade-off between performance overhead and parallelism.

This example will make it clearer:
- Consider two trains approaching each other at a crossing junction.
- Their collision can be prevented by some just-in-time prevention mechanism.
- Such a mechanism could be a person at the crossing with a switch, who allows only one train at a time onto the succeeding track while the other waiting trains are held back.
- There are the following two types of locks:
  1. Recursive locks: Such a lock can be re-entered by the thread that already holds it. Any other threads or processes trying to enter the lock need to wait for the holding thread to pass through after its task is finished.
  2. Non-recursive locks: Here a thread can enter the lock only once. If the same thread tries to enter the lock again without unlocking it first, a deadlock occurs.

- There are issues with both of these.
- The first one does not provide distributed deadlock prevention, and the latter, by itself, makes no provision for deadlock prevention at all (the sketch below shows both behaviors).
- In the first one, if the number of threads trying to enter the lock equals the number of locked threads, then one of the threads has to be designated the super thread, and only it is allowed to execute until completion.
- After the execution of the super thread is complete, the condition reverts back from the recursive lock: the super thread gives up its super-thread status and notifies the locker that the condition has to be re-checked.
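The difference between the two lock types can be seen in a few lines of Python, whose standard threading module provides both variants (Lock is non-recursive, RLock is recursive). This is a minimal sketch of the two lock behaviors only, not a prevention scheme:

import threading

# Non-recursive lock: a second acquire() by the same thread blocks forever.
plain = threading.Lock()
plain.acquire()
# plain.acquire()  # uncommenting this line would self-deadlock

# Recursive lock: the holding thread may re-acquire it, as long as
# every acquire() is eventually matched by a release().
recursive = threading.RLock()
recursive.acquire()
recursive.acquire()   # fine: same thread, internal count goes to 2
recursive.release()
recursive.release()

# A non-blocking acquire attempt is one simple way to avoid self-deadlock:
if plain.acquire(blocking=False):
    plain.release()
else:
    print("lock already held; backing off instead of deadlocking")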


Wednesday, March 13, 2013

What are the characteristics of autonomic systems?


Autonomic systems bring both challenges and opportunities for future networking. The increasing number of users has had a negative impact on networks: their complexity has increased many times over. Autonomic systems provide a solution to this problem.

Characteristics of Autonomic System

  1. High intelligence: These systems have more intelligence incorporated into them, which lets them tackle this increasing complexity easily.
  2. Business goal: They are driven by the business goal that the quality of experience of the user must be high. Even with a changing environment, their goals remain the same; what changes are the low-level configurations. For example, when a user switches over to a low-bandwidth network, the bit rate of the video has to be reduced in order to still satisfy the goals of the business.
  3. Complex operations: The operations carried out in an autonomic system are complex in nature even for the simplest of services, for example authentication, video encoding, billing, routing, shaping, QoS prioritization and admission control.
  4. High-level objectives: The human operator just has to specify the high-level objectives, and it is left to the system whether it chooses to optimize one or more of the goals. In order to achieve this, the system has to translate these objectives into low-level configurations.
  5. Adaptability: The system has the ability to adapt itself to the current environment.
  6. Policy continuum: There are a number of perspectives on this, as mentioned below:
- Business view: Includes guidelines, processes and goals.
- System view: The service should be independent of both the technology and the device being used.
- Network view: Specific to the technology but independent of the device.
- Device view: Both technology and device specific.
- Instance view: The operation should be specific to an instance.

  7. Elements: The elements of the network are assumed to be heterogeneous by autonomic communication systems, whereas in plain autonomic computing the elements are taken to be homogeneous.
  8. Distributed: These systems work in a distributed environment.
  9. Complexity: The complexity of autonomic systems is higher because of the complex autonomic loop, which includes the following operations:
- Interaction between the context and the business goals
- The MAPE (monitor, analyze, plan and execute) loop (see the sketch after this list).

10. Reliability: In autonomic systems, the network has the authority to decide for itself, focusing on high-level objectives. Autonomic systems rely heavily upon artificial intelligence. However, there are issues associated with artificial intelligence: it becomes difficult to intervene when things go wrong, and it is quite difficult to know whether the system is doing what it is supposed to do or not.
11. Scalability: This is another major characteristic of autonomic systems. They are required to keep track of large amounts of knowledge and information. Autonomic systems have the following tools to take care of this:
- Distributed ontologies
- Distributed large-scale reasoning
- Exchanging only the useful information
- Distributing information among the different components of the autonomic network.
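To make the autonomic loop of point 9 concrete, below is a minimal sketch in Python of a few passes through a MAPE loop, reusing the video bit-rate example from point 2. The get_bandwidth_kbps and set_video_bitrate_kbps functions are hypothetical stand-ins for real sensor and effector calls.

# Minimal MAPE (monitor, analyze, plan, execute) loop sketch.
import random
import time

def get_bandwidth_kbps():           # Monitor: hypothetical sensor
    return random.choice([400, 2000, 8000])

def set_video_bitrate_kbps(rate):   # Execute: hypothetical effector
    print(f"reconfiguring encoder to {rate} kbps")

TARGET_FRACTION = 0.8  # business goal: use at most 80% of the link

def mape_step(current_bitrate):
    bandwidth = get_bandwidth_kbps()                            # Monitor
    congested = current_bitrate > bandwidth * TARGET_FRACTION   # Analyze
    if congested:                                               # Plan
        new_bitrate = int(bandwidth * TARGET_FRACTION)
    else:
        new_bitrate = current_bitrate
    if new_bitrate != current_bitrate:                          # Execute
        set_video_bitrate_kbps(new_bitrate)
    return new_bitrate

bitrate = 5000
for _ in range(3):   # a few iterations of the control loop
    bitrate = mape_step(bitrate)
    time.sleep(0.1)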

But in these cases, detection of conflicts is a difficult task. Efficient protocols are required for handling the various interactions taking place between the various autonomic components.
Currently, two approaches have been suggested for developing autonomic networking systems, namely:
1. Evolutionary approach: Incorporating autonomic behavior into the pre-existing infrastructure. This approach consists of incremental updates until a fully autonomic system is developed. It is the more likely to be adopted, even though it requires a lot of patchwork.
2. Clean-slate approach: This approach is focused upon a complete re-design of the internet.


Tuesday, March 12, 2013

What are autonomic systems? What is the basic concept behind autonomic systems?


In this article we shall discuss autonomic systems, but before moving on to that, here is a brief discussion of autonomic computing.

About Autonomic Computing

- Distributed computing resources can have the ability of self-management.
- This kind of computing is called autonomic computing, and such systems are called autonomic systems.
- Because of their unique capabilities, these systems are able to adapt to changes that are both predictable and unpredictable.
- At the same time, these systems keep the intrinsic complexities hidden from users as well as operators.
- The concept of autonomic computing was initiated by IBM in 2001.
- This was started in order to keep a curb on the growing complexity of managing computer systems, and also to remove any complexity barriers that prove to be a hindrance to development.

About Autonomic Systems

- Autonomic systems have the power to make decisions of their own.
- They do this guided by high-level policies.
- These systems automatically check and optimize their status and adapt to conditions that have changed.
- The framework of these computing systems is constituted of various autonomic components that continuously interact with each other.
The following are used to model an autonomic component (see the sketch below):
  1. Two main control loops, namely the global and the local one.
  2. Sensors (required for self-monitoring)
  3. Effectors (required for self-adjustment)
  4. Knowledge
  5. Adapter or planner
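As a rough illustration, the component model above could be rendered in Python as follows. All class, method and sensor names here are hypothetical, chosen only to mirror the five parts of the list:

# Hypothetical skeleton of an autonomic component: sensors, effectors,
# knowledge, a planner, and local/global control loops.
class AutonomicComponent:
    def __init__(self, sensors, effectors):
        self.sensors = sensors        # callables that read the environment
        self.effectors = effectors    # callables that change the environment
        self.knowledge = {}           # accumulated facts about the system

    def local_loop(self):
        """Local control loop: react to this component's own sensors."""
        for name, read in self.sensors.items():
            self.knowledge[name] = read()
        for action, argument in self.plan():
            self.effectors[action](argument)

    def global_loop(self, peers):
        """Global control loop: exchange knowledge with peer components."""
        for peer in peers:
            peer.knowledge.update(self.knowledge)

    def plan(self):
        """Adapter/planner: turn knowledge into effector actions."""
        if self.knowledge.get("load", 0) > 0.9:
            return [("throttle", 0.5)]
        return []

# Hypothetical usage with one sensor and one effector:
c = AutonomicComponent(
    sensors={"load": lambda: 0.95},
    effectors={"throttle": lambda factor: print(f"throttling to {factor}")},
)
c.local_loop()   # prints "throttling to 0.5"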
- The number of computing devices is increasing by a great margin every year.
- Not only this, each device's complexity is also increasing.
- At present, highly skilled humans are responsible for managing this huge volume of complexity.
- The problem here is that the number of such skilled personnel is limited, and this has led to a rise in labor costs.
- It is true that the speed and automation of computing systems have revolutionized the way the world runs, but now there is a need for systems that are capable of maintaining these systems without any human intervention.
- Complexity is a major problem of today's distributed computing systems, particularly concerning their management.
- Large-scale computer networks are employed by organizations and institutions for their computation and communication purposes.
- These systems run diverse distributed applications that are capable of dealing with a number of tasks.
- These networks are increasingly being pervaded by mobile computing.
- This means that employees have to stay in contact with their organizations outside the office, through devices such as PDAs, mobile phones and laptops that connect through wireless technologies.
- All these things add to the complexity of the overall network, which cannot be managed by human operators alone.
- There are 3 main disadvantages of manual operation:
  1. It consumes more time
  2. It is expensive
  3. It is prone to errors
- Autonomic systems are a solution to such problems, since they are self-adjustable and do not require human intervention.
- The inspiration, or concept, behind autonomic systems is the autonomic nervous system found in humans.
- This self-manageable system controls all the bodily functions unconsciously.
- In autonomic systems, the human operator just has to specify the high-level goals, rules and policies that guide the management.

- There are 4 functional areas of an autonomic system:
  1. Self–configuration: Responsible for the automatic configuration of the network components.
  2. Self–healing: Responsible for the automatic detection and correction of the errors.
  3. Self–optimization: Monitors and controls the resources automatically.
  4. Self–protection: Identifies the attacks and provides protection against them.
- Mentioned below are some characteristics of autonomic systems:
  1. Automatic
  2. Adaptive
  3. Aware


Wednesday, July 18, 2012

What are the differences between testing a web application and testing a client-server application?


Web applications, as we know, are the kind of applications that can be accessed over the internet or an intranet network. Some web applications are also coded for the web browser in browser-supported languages like HTML and JavaScript.

There are many reasons that make web applications quite popular with their users:
  1. The ubiquity of web browsers.
  2. Web applications provide a means to use the web browser as the client, usually termed a thin client.
  3. They can be updated and maintained without having to distribute and install the software on thousands of client systems.
  4. They support cross-platform compatibility.
Now that we have discussed web applications, let us see what client-server applications are!

- A client-server application is actually not an application; rather, it is a computing model that behaves like a distributed application, whose purpose is to partition the workloads or tasks between resource or service providers, namely servers, and clients (the service requesters).
- The communication between server and client is established over a computer network, although the server and the client may also reside on the same system.

In this article we are going to discuss the differences between testing these two types of applications. It is very necessary to test these applications, since our personal as well as commercial needs depend heavily on them.
First we will talk about web application testing, and then about client-server application testing, so that the differences between the two become clear to you!

About Web Application Testing


- Web application testing is a combination of the following types of testing:
  1. Usability testing
  2. Compatibility testing
  3. Security testing
  4. Performance testing
  5. Interface testing and
  6. Functionality testing
- All the above-mentioned tests together make up a complete testing path for web applications.
- The two types of testing, i.e., web application testing and client-server application testing, differ on the basis of the environment in which they are carried out.
- Testing a web application proves to be more difficult, and quite a bit more complex, than testing a client-server application. This is because the testers do not have much control over the web application in question.
- In web application testing, the application to be tested is loaded on a server whose location may or may not be known to the testers.
- No .exe file is installed on the client side, and the application therefore has to be tested on different web browsers.
- Web applications are mostly tested for compatibility with different OS platforms, error handling, back-end behavior, load, and static pages.

About Client Server Testing


- Client-server application testing is quite simple compared to web application testing and basically involves testing two components.
- Here, as in web application testing, the application is loaded on the server machine, but unlike in web application testing, an .exe file is installed on all the client machines.
- The testing here is broadly carried out in the categories mentioned below:
  1. GUI on both sides
  2. Functionality
  3. Client-server interaction
  4. Back-end testing, and so on.
- The kind of environment used in client-server application testing is pretty much like the one found in intranet networks.
- The testing team knows all about the locations of the servers in the test scenario.


Thursday, January 5, 2012

What are the different aspects of distributed testing?

We have heard a lot about different kinds of testing, such as regression testing, scalability testing, web testing, unit testing, visual testing, performance testing and so on. But do you know what distributed testing is? Ever heard about it? No? Then this piece of writing is certainly for you!

This type of testing usually receives very little coverage, and that's why most people are not familiar with it. Here I have attempted to explain what is meant by distributed testing and how it compares with its non-distributed counterpart.

Non-distributed testing can be defined as tests that run or execute only on a single computer and usually do not involve any kind of interaction with other computer systems. I used the word "usually" here because there exist some tests that are executed from a host machine in order to test a target device which runs an embedded or real-time operating system. Non-distributed test cases can be configured very easily.

Non-distributed testing is further divided into two sub-categories, namely local testing and remote testing. They are discussed in detail below:

- Local Testing
This kind of testing involves running the test cases on a local computer system. The tests used are called local tests. For performing a local test you do not need a network connection.

- Remote Testing
This kind of testing requires a network connection so that you can run a test on a remote computer system to which you don't have local access. This is very comfortable, since you can work right from your desk and you also get the results right at your desk. Remote tests can be performed on several computer systems at a time. The best thing about remote testing is that no matter how many software systems are under test, there is no interference between the tests running on the different CPUs.

Now that you have got an idea of what non-distributed testing is like, it will be easy for you to understand what distributed testing is like.
- A distributed test case consists of many parts that interact with each other.
- Each part of the test case is executed on a different computer system.
- The interaction of the different parts of the distributed testing sets it apart from non distributed testing.
- If you notice, the testing here is all about testing the interaction between different computer systems.
- All of the test cases being processed on different processors have a common aim irrespective of the system on which they are performed.
- Distributed testing is not to be confused with simultaneous testing since in simultaneous testing there is no interaction between the different tests.
- Platform support proves to be one of the great challenges in distributed testing.
- The testing environment should be capable of working on all the platforms involved effectively.
- After setting up your testing environment, you need to make a test plan; that is, you need to describe how you want to carry out the distributed testing.
- This can be done via a test scenario.
- A test scenario lists all the test cases and also describes how they are to be carried out on the computer systems.
- The description of the test cases is provided in the form of a formal directive.
- A test scenario is an effective way to describe test cases.
- For distributed testing we use distributed directives, and for non-distributed testing we use remote directives.
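As a toy illustration of a distributed test case with two interacting parts, here is a minimal Python sketch. It runs both parts on one machine over localhost for brevity, whereas a real distributed test would place each part on a different computer; all names and the port number are hypothetical.

# Toy distributed test case: two parts that interact over a socket.
# Part A plays the server, part B the client; each part asserts on the
# interaction itself, which is what distinguishes distributed testing.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007  # localhost stand-in for two machines
ready = threading.Event()

def part_a_server():
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()               # signal that the server is accepting
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            assert request == b"ping", "part A: unexpected request"
            conn.sendall(b"pong")

def part_b_client():
    ready.wait()                  # don't connect before the server is up
    with socket.socket() as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"ping")
        reply = cli.recv(1024)
        assert reply == b"pong", "part B: unexpected reply"
        print("distributed test case passed")

server = threading.Thread(target=part_a_server)
server.start()
part_b_client()
server.join()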


Wednesday, February 3, 2010

Overview of Distributed File Systems (DFS)

A distributed file system or network file system is any file system that allows access to files from multiple hosts sharing via a computer network. This makes it possible for multiple users on multiple machines to share files and storage resources.
In order to understand the structure of a distributed file system, the terms service, server and client should be defined. A service is a software entity running on one or more machines and providing a particular type of function. A server is the service software running on a single machine. A client is a process that can invoke a service using a set of operations that forms its client interface.
A distributed file system (DFS) is a file system whose clients, servers, and storage devices are dispersed among the machines of a distributed system. Service activity has to be carried out across the network, and instead of a single centralized data repository there are multiple, independent storage devices. The distinctive features of a DFS are the multiplicity and autonomy of clients and servers in the system.
A DFS should look to its clients like a conventional, centralized file system. The client interface of a DFS should not distinguish between local and remote files. The most important performance measurement of a DFS is the amount of time needed to satisfy various service requests. In a DFS, a remote access has the additional overhead attributed to the distributed structure. This overhead includes the time needed to deliver the request to the server, as well as the time for getting the response across the network back to the client. DFS manages a set of dispersed storage devices which is the DFS's key distinguishing feature.


Monday, August 17, 2009

Introduction to Distributed Database Systems

A distributed database appears to a user as a single database but is, in fact, a set of databases stored on multiple computers. The data on several computers can be simultaneously accessed and modified using a network. Each database server in the distributed database is controlled by its local DBMS, and each cooperates to maintain the consistency of the global database.

Clients, Servers, and Nodes :
A database server is the software managing a database, and a client is an application that requests information from a server. Each computer in a system is a node. A node in a distributed database system can be a client, a server, or both.
A client can connect directly or indirectly to a database server.
[Figure: Distributed Database Architecture]

Site Autonomy :
Site autonomy means that each server participating in a distributed database is administered independently (for security and backup operations) from the other databases, as though each database was a non-distributed database. Although all the databases can work together, they are distinct, separate repositories of data and are administered individually. Some of the benefits of site autonomy are as follows:
- Nodes of the system can mirror the logical organization of companies or cooperating organizations that need to maintain an "arms length" relationship.
- Local data is controlled by the local database administrator. Therefore, each database administrator's domain of responsibility is smaller and more manageable.
- Independent failures are less likely to disrupt other nodes of the distributed database. The global database is partially available as long as one database and the network are available; no single database failure need halt all global operations or be a performance bottleneck.
- Failure recovery is usually performed on an individual node basis.
- A data dictionary exists for each local database.
- Nodes can upgrade software independently.

Homogenous Distributed Database Systems :
A homogenous distributed database system is a network of two or more Oracle databases that reside on one or more machines. An application can simultaneously access or modify the data in several databases in a single distributed environment. For example, a single query on local database MFG can retrieve joined data from the PRODUCTS table on the local database and the DEPT table on the remote HQ database.
[Figure: Homogeneous Distributed Database Systems]
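For illustration, such a cross-database join is typically written against a database link. The sketch below, with hypothetical credentials, connection string and column names, shows how the MFG example above might be queried from Python using the cx_Oracle driver; dept@HQ names the database link to the remote HQ database.

# Hypothetical sketch: querying a distributed Oracle database from Python.
# Credentials, connection string and column names are assumptions.
import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "localhost/MFG")
cur = conn.cursor()

# A single query joining the local PRODUCTS table with the DEPT table
# on the remote HQ database, reached through the HQ database link.
cur.execute("""
    SELECT p.product_name, d.dname
      FROM products p, dept@HQ d
     WHERE p.dept_no = d.deptno
""")
for product_name, dname in cur:
    print(product_name, dname)

cur.close()
conn.close()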

Heterogeneous Distributed Database Systems :
In a heterogeneous distributed database system, at least one of the databases is a non-Oracle system. To the application, the heterogeneous distributed database system appears as a single, local, Oracle database; the local Oracle server hides the distribution and heterogeneity of the data.
The Oracle server accesses the non-Oracle system using Oracle8i Heterogeneous Services and a system-specific transparent gateway. For example, if you include a DB2 database in an Oracle distributed system, you need to obtain a DB2-specific transparent gateway so that the Oracle databases in the system can communicate with it.

