- Recursive locks: The same thread can acquire such a lock multiple times without blocking itself, releasing it once for each acquisition. Any other thread or process trying to enter the lock has to wait until the owning thread has finished its task and released the lock.
- Non-recursive locks: Here a thread can enter the lock only once. If the same thread tries to enter the lock again without unlocking it first, a deadlock can occur (a short sketch below illustrates the difference).
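For readers who want to see this in action, here is a minimal Python sketch using the standard threading module, where Lock behaves as a non-recursive lock and RLock as a recursive one; the timeout is only there so the would-be deadlock returns instead of hanging.

```python
import threading

# A non-recursive lock: a second acquire() by the same thread would block
# forever, so a timeout is used here just to demonstrate the behaviour safely.
plain = threading.Lock()
plain.acquire()
print("re-acquire non-recursive lock:", plain.acquire(timeout=1))  # False: would deadlock
plain.release()

# A recursive (reentrant) lock: the owning thread may acquire it repeatedly,
# as long as it releases it the same number of times.
reentrant = threading.RLock()
reentrant.acquire()
print("re-acquire recursive lock:", reentrant.acquire(timeout=1))  # True
reentrant.release()
reentrant.release()
```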
Wednesday, June 5, 2013
Explain the various techniques for Deadlock Prevention
Posted by Sunflower at 6/05/2013 01:46:00 PM | 0 comments
Labels: Algorithms, Conditions, Data, Deadlock Prevention, Deadlocks, Distributed, Environment, Multiprocessing, Multitasking, Operating System, Performance, Prevention, Processes, Resources, System, Wait
Wednesday, March 13, 2013
What are the characteristics of an autonomic system?
Characteristics of an Autonomic System
- High intelligence: These systems have a great deal of intelligence built into them, which lets them tackle the increasing complexity of the environments they manage.
- Business goal: They are driven by the business goal that the user's quality of experience must remain high. Even as the environment changes, this goal stays the same; what changes are the low-level configurations. For example, when a user switches over to a low-bandwidth network, the bit rate of the video has to be reduced in order to keep satisfying the business goal.
- Complex operations: The operations carried out in an autonomic system are complex in nature, even for the simplest of services; for example authentication, video encoding, billing, routing, traffic shaping, QoS prioritization, and admission control.
- High-level objectives: The human operator only has to specify the high-level objectives, and it is left to the system how it chooses to optimize one or more of these goals. To achieve this, the system has to translate the objectives into low-level configurations (a small sketch of such a translation appears after this list).
- Adaptability: The
system has the ability to adapt itself to the current environment.
- Policy continuum: Policies are viewed from a number of perspectives, ranging from business-level goals down to device-level configuration.
- Elements: Autonomic communication systems assume the elements of the network to be heterogeneous, whereas plain autonomic computing takes the elements to be homogeneous.
- Distributed: These systems operate in a distributed environment.
- Complexity: The complexity of autonomic systems is higher because of the complex autonomic control loop, which includes operations such as monitoring, analysis, planning, and execution.
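To make the objective-to-configuration idea concrete, here is a hypothetical Python sketch; the goal, thresholds, and bit rates are all invented for illustration. A single high-level goal, keeping the user's quality of experience acceptable, is translated into a low-level configuration, the video bit rate, whenever the observed bandwidth changes.

```python
# Hypothetical sketch: translating a high-level goal into a low-level setting.
# The thresholds and bit rates below are invented for illustration only.

# High-level business goal: keep playback smooth, so the stream should never
# consume more than ~80% of the currently available bandwidth.
TARGET_UTILISATION = 0.8
AVAILABLE_BITRATES_KBPS = [250, 500, 1000, 2500, 5000]   # low-level configurations

def select_bitrate(measured_bandwidth_kbps: float) -> int:
    """Pick the highest bit rate that still satisfies the high-level goal."""
    budget = measured_bandwidth_kbps * TARGET_UTILISATION
    suitable = [r for r in AVAILABLE_BITRATES_KBPS if r <= budget]
    return max(suitable) if suitable else AVAILABLE_BITRATES_KBPS[0]

# The goal never changes; only the low-level configuration does.
print(select_bitrate(8000))   # fast network -> 5000 kbps
print(select_bitrate(600))    # after switching to a low-bandwidth network -> 250 kbps
```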
Posted by Sunflower at 3/13/2013 01:41:00 PM | 0 comments
Labels: Autonomic Systems, Business, Characteristics, Complexity, device, Distributed, Elements, Environment, Goals, Networking, Networks, Objectives, Operation, Reliable, Scalability, System, Technology, Users, Views
Tuesday, March 12, 2013
What are autonomic systems? What is the basic concept behind an autonomic system?
About Autonomic Computing
About Autonomic Systems
- Two main control loops, namely the global loop and the local loop.
- Sensors (required for self-monitoring)
- Effectors (required for self-adjustment)
- Knowledge
- Adapter or planner
- Consumes more time
- Expensive
- Prone to errors
- Self-configuration: Responsible for the automatic configuration of the network components.
- Self-healing: Responsible for the automatic detection and correction of errors.
- Self-optimization: Monitors and controls the resources automatically.
- Self-protection: Identifies attacks and provides protection against them. (A minimal sketch of such a control loop follows the lists below.)
- Automatic
- Adaptive
- Aware
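As a rough illustration of how the sensors, effectors, knowledge, and planner fit together, here is a hypothetical Python sketch of one pass of an autonomic control loop; all class names, attributes, and thresholds are invented for illustration.

```python
# Hypothetical sketch of one pass through an autonomic control loop
# (monitor -> analyze -> plan -> execute). All names are invented.

class ManagedElement:
    """A resource the autonomic manager looks after, e.g. a server."""
    def __init__(self):
        self.cpu_load = 0.95      # sensor reading (self-monitoring)
        self.worker_count = 4     # knob the effector can adjust

    def read_sensors(self) -> dict:
        return {"cpu_load": self.cpu_load, "workers": self.worker_count}

    def apply(self, new_worker_count: int) -> None:   # effector (self-adjustment)
        self.worker_count = new_worker_count

def autonomic_loop(element: ManagedElement, knowledge: dict) -> None:
    state = element.read_sensors()                               # monitor
    overloaded = state["cpu_load"] > knowledge["cpu_threshold"]  # analyze
    if overloaded:
        plan = state["workers"] + 1                              # plan
        element.apply(plan)                                      # execute

server = ManagedElement()
autonomic_loop(server, knowledge={"cpu_threshold": 0.8})
print(server.worker_count)   # 5: the system adjusted itself without operator input
```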
Posted by Sunflower at 3/12/2013 02:00:00 PM | 0 comments
Labels: Autonomic Computing, Autonomic Systems, Changes, Characteristics, Complexity, Components, Computing, Development, Distributed, Features, Framework, Network, Operators, System, Users
Wednesday, July 18, 2012
What are the differences between testing a web application and testing a client-server application?
- Because of the ubiquity of web browsers.
- Because web applications use the web browser as the client, usually termed a thin client.
- Because they can be updated and maintained without having to distribute and install the software on thousands of client systems.
- And because they support cross-platform compatibility.
About Web Application Testing
- Usability testing
- Compatibility testing
- Security testing
- Performance testing
- Interface testing
- Functionality testing
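As one hedged example of what functionality testing of a web application can look like in practice, the sketch below uses Python's standard urllib to check that a page responds successfully and contains expected content; the URL and expected text are placeholders for the application under test.

```python
# Hypothetical functional check for a web application. The URL and the
# expected text are placeholders; substitute the application under test.
import urllib.request

def check_page(url: str, expected_text: str) -> bool:
    """Return True if the page loads with HTTP 200 and contains the text."""
    with urllib.request.urlopen(url, timeout=10) as response:
        if response.status != 200:
            return False
        body = response.read().decode("utf-8", errors="replace")
        return expected_text in body

if __name__ == "__main__":
    assert check_page("https://example.com/", "Example Domain")
    print("functionality check passed")
```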
About Client Server Testing
- GUI on both the client and the server side.
- Functionality
- Client-server interaction
- Back-end testing, and so on.
Posted by Sunflower at 7/18/2012 11:40:00 AM | 0 comments
Labels: Applications, Client, Client Server, Compatibility, Components, Cross platform, Differences, Distributed, Errors, GUI, Network, Platform, Servers, Tasks, Testers, Testing, Users, Web Applications, Web browsers
Thursday, January 5, 2012
What are different aspects of distributed testing?
We have heard a lot about different kinds of testing, such as regression testing, scalability testing, web testing, unit testing, visual testing, performance testing and so on. But do you know what distributed testing is? Ever heard about it? No? Then this piece of writing is certainly for you!
This type of testing usually receives very little coverage, and that is why most people are not familiar with it. Here I have attempted to explain what is meant by distributed testing and how it compares with its non-distributed counterpart.
Non-distributed testing can be defined as tests that run or execute only on a single computer and usually do not involve any kind of interaction with other computer systems. I used the word "usually" here because some tests are executed from a host machine to test a target device that holds an embedded or real-time operating system. Non-distributed test cases can be configured very easily.
Non-distributed testing is further divided into two sub-categories, namely local testing and remote testing. They are discussed in detail below:
- Local Testing
This kind of testing involves running the test cases on a local computer system. The tests used are called local tests. For performing local tests you do not need a network connection.
- Remote Testing
This kind of testing requires a network connection so that you can run a test on a remote computer system to which you do not have local access. This is very convenient, since you can work right from your desk and also get the results right at your desk. Remote tests can be performed on several computer systems at a time. The best thing about remote testing is that, no matter how many software systems are under test, the tests running on the different machines do not interfere with one another.
Now that you have an idea of what non-distributed testing is like, it will be easy for you to understand what distributed testing is like.
- A distributed test case consists of many parts that interact with each other.
- Each part of the test case is executed on a different computer system.
- The interaction of the different parts is what sets distributed testing apart from non-distributed testing (a minimal sketch of a two-part distributed test follows this list).
- If you notice, the testing is essentially about testing the interaction between different computer systems.
- All of the test case parts being processed on different processors share a common aim, irrespective of the system on which they run.
- Distributed testing is not to be confused with simultaneous testing, since in simultaneous testing there is no interaction between the different tests.
- Platform differences prove to be one of the great challenges of distributed testing.
- The testing environment should be capable of working effectively on all the platforms involved.
- After setting up your testing environment, you need to make a test plan, that is, describe how you want to carry out the distributed testing.
- This can be done via a test scenario.
- A test scenario lists all the test cases and also describes how they are to be carried out on the computer systems.
- The description of the test cases is provided in the form of a formal directive.
- A test scenario is an effective way to describe test cases.
- For distributed testing we use distributed directives, and for non-distributed testing we use remote directives.
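As a hedged illustration of a distributed test case with interacting parts, the sketch below runs a tiny server part and a client part; in a real distributed test each part would run on a different machine, but here both are started in one process so the example is self-contained, and the host, port, and message are invented for illustration.

```python
# Hypothetical two-part distributed test: a server part and a client part
# that interact. In real distributed testing each part runs on its own
# machine; here both run locally so the sketch is self-contained.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # placeholders for the machines under test

def server_part(ready: threading.Event) -> None:
    """Part 1 of the test case: echo one message back to the client."""
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

def client_part() -> bytes:
    """Part 2 of the test case: send a message and return the reply."""
    with socket.socket() as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"ping")
        return cli.recv(1024)

ready = threading.Event()
threading.Thread(target=server_part, args=(ready,), daemon=True).start()
ready.wait()
assert client_part() == b"ping"   # the interaction itself is what is tested
print("distributed interaction test passed")
```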
Posted by Sunflower at 1/05/2012 01:38:00 PM | 0 comments
Labels: Computer system, computers, Devices, Distributed, Distributed Testing, Environment, Interaction, Local Testing, Network, Platforms, Processors, Remote Testing, Scenarios, Test cases, Test Plan, Tests
Wednesday, February 3, 2010
Overview of Distributed File Systems (DFS)
A distributed file system or network file system is any file system that allows access to files from multiple hosts sharing via a computer network. This makes it possible for multiple users on multiple machines to share files and storage resources.
In order to understand the structure of a distributed file system, the terms service, server and client should be defined. A service is a software entity running on one or more machines and providing a particular type of function. A server is the service software running on a single machine. A client is a process that can invoke a service using a set of operations that forms its client interface.
A distributed file system (DFS) is a file system whose clients, servers, and storage devices are dispersed among the machines of a distributed system. Service activity has to be carried out across the network, and instead of a single centralized data repository there are multiple, independent storage devices. The distinctive features of a DFS are the multiplicity and autonomy of clients and servers in the system.
A DFS should look to its clients like a conventional, centralized file system. The client interface of a DFS should not distinguish between local and remote files. The most important performance measurement of a DFS is the amount of time needed to satisfy various service requests. In a DFS, a remote access has the additional overhead attributed to the distributed structure. This overhead includes the time needed to deliver the request to the server, as well as the time for getting the response across the network back to the client. DFS manages a set of dispersed storage devices which is the DFS's key distinguishing feature.
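To illustrate the idea of a client interface that does not distinguish local from remote files, here is a hypothetical Python sketch; the RemoteStore class and its fetch call are invented stand-ins for whatever network protocol a real DFS would use.

```python
# Hypothetical sketch of DFS-style access transparency: the client calls
# read_file() the same way whether the file is local or remote. The
# RemoteStore class is an invented stand-in for a real DFS protocol.
from pathlib import Path

class RemoteStore:
    """Pretend remote server; a real DFS would go over the network here."""
    def __init__(self, files: dict):
        self._files = files

    def fetch(self, name: str) -> bytes:
        # In a real DFS this round trip is the extra overhead of remote access.
        return self._files[name]

class DistributedFileSystem:
    def __init__(self, local_root: Path, remote: RemoteStore):
        self.local_root = local_root
        self.remote = remote

    def read_file(self, name: str) -> bytes:
        """Same interface for local and remote files (access transparency)."""
        local_path = self.local_root / name
        if local_path.exists():
            return local_path.read_bytes()
        return self.remote.fetch(name)

dfs = DistributedFileSystem(Path("."), RemoteStore({"shared.txt": b"hello"}))
print(dfs.read_file("shared.txt"))   # served remotely, same call either way
```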
Posted by Sunflower at 2/03/2010 02:12:00 PM | 0 comments
Labels: Client, Code Division Multiple Access, DFS, Distributed, Distributed file systems, File systems, files, Machines, Network systems, Service
Monday, August 17, 2009
Introduction to Distributed Database Systems
A distributed database appears to a user as a single database but is, in fact, a set of databases stored on multiple computers. The data on several computers can be simultaneously accessed and modified using a network. Each database server in the distributed database is controlled by its local DBMS, and each cooperates to maintain the consistency of the global database.
Clients, Servers, and Nodes :
A database server is the software managing a database, and a client is an application that requests information from a server. Each computer in a system is a node. A node in a distributed database system can be a client, a server, or both.
A client can connect directly or indirectly to a database server.
Site Autonomy :
Site autonomy means that each server participating in a distributed database is administered independently (for security and backup operations) from the other databases, as though each database was a non-distributed database. Although all the databases can work together, they are distinct, separate repositories of data and are administered individually. Some of the benefits of site autonomy are as follows:
- Nodes of the system can mirror the logical organization of companies or cooperating organizations that need to maintain an "arms length" relationship.
- Local data is controlled by the local database administrator. Therefore, each database administrator's domain of responsibility is smaller and more manageable.
- Independent failures are less likely to disrupt other nodes of the distributed database. The global database is partially available as long as one database and the network are available; no single database failure need halt all global operations or be a performance bottleneck.
- Failure recovery is usually performed on an individual node basis.
- A data dictionary exists for each local database.
- Nodes can upgrade software independently.
Homogenous Distributed Database Systems :
A homogenous distributed database system is a network of two or more Oracle databases that reside on one or more machines. An application can simultaneously access or modify the data in several databases in a single distributed environment. For example, a single query on local database MFG can retrieve joined data from the PRODUCTS table on the local database and the DEPT table on the remote HQ database.
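As a rough sketch of what such a distributed query might look like from an application, the snippet below uses the python-oracledb driver and Oracle's database-link syntax (DEPT@HQ); the connection details, the hq link name, and the column names are placeholders, and the driver call details should be verified against your own environment.

```python
# Hedged sketch: a single query on the local MFG database that joins local
# PRODUCTS data with the DEPT table on the remote HQ database through a
# database link named "hq". All connection details are placeholders.
import oracledb  # python-oracledb driver; verify availability in your setup

connection = oracledb.connect(user="scott", password="example",
                              dsn="localhost/MFG")
with connection.cursor() as cursor:
    cursor.execute("""
        SELECT p.product_name, d.dname
          FROM products p, dept@hq d     -- dept@hq resolves over the DB link
         WHERE p.dept_no = d.deptno
    """)
    for product_name, dname in cursor.fetchall():
        print(product_name, dname)
connection.close()
```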
Heterogeneous Distributed Database Systems :
In a heterogeneous distributed database system, at least one of the databases is a non-Oracle system. To the application, the heterogeneous distributed database system appears as a single, local, Oracle database; the local Oracle server hides the distribution and heterogeneity of the data.
The Oracle server accesses the non-Oracle system using Oracle8i Heterogeneous Services and a system-specific transparent gateway. For example, if you include a DB2 database in an Oracle distributed system, you need to obtain a DB2-specific transparent gateway so that the Oracle databases in the system can communicate with it.
Posted by Sunflower at 8/17/2009 04:06:00 PM | 0 comments
Labels: Architecture, Data, Database Systems, Distributed