Wednesday, August 28, 2013
What are different policies to prevent congestion at different layers?
Posted by Sunflower at 8/28/2013 10:09:00 PM | 0 comments
Labels: Avoidance, Capacity, Congestion, Congestion control, Control, Efficiency, Layers, Network, Networking, Operation, Parameters, Path, Policies, Prevent, Prevention, Resources, Throughput, traffic, User
Tuesday, August 20, 2013
When is a situation called congestion?
Posted by Sunflower at 8/20/2013 08:13:00 PM | 0 comments
Labels: Communication, Condition, Congestion, Connection, Data, Increments, Input, Links, Load, Network, Network Congestion, Networking, Output, Packets, Protocols, Quality, Queue, Routers, States, Throughput
Sunday, July 7, 2013
Differentiate between persistent and non-persistent CSMA?
Both variants are typically compared under the following assumptions:
- The length of the packets is constant.
- Errors are caused only by collisions; there are no other sources of error.
- The capture effect is absent.
- Each host can sense the transmissions made by all the other hosts.
- The transmission time is always greater than the propagation delay.
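The behavioral difference between the two variants can be sketched in code. This is a minimal, hypothetical illustration (the `channel_busy` callable and the backoff distribution are assumptions made here, not part of any real driver API): 1-persistent CSMA keeps sensing continuously and transmits the instant the channel goes idle, while non-persistent CSMA backs off for a random time before sensing again.

```python
import random

def make_channel(busy_slots):
    """Fake channel for illustration: yields the given busy/idle states
    in order, then reports idle forever."""
    state = iter(busy_slots)
    return lambda: next(state, False)

def persistent_csma_wait(channel_busy):
    """1-persistent: sense continuously; transmit the moment the
    channel goes idle (i.e., transmit with probability 1)."""
    while channel_busy():
        pass  # keep sensing greedily, no backoff
    return "transmit"

def nonpersistent_csma_wait(channel_busy, backoff=lambda: random.uniform(0, 1)):
    """Non-persistent: if the channel is busy, stop sensing and re-sense
    only after a random backoff, reducing the chance of a collision
    when the channel becomes idle."""
    delay = 0.0
    while channel_busy():
        delay += backoff()  # accumulated random wait before re-sensing
    return delay
```

With a channel that is busy for two sensing attempts, the 1-persistent station transmits immediately on the third sense, whereas the non-persistent station has accumulated two backoff delays first.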
Posted by Sunflower at 7/07/2013 12:58:00 PM | 0 comments
Labels: Behavior, Carrier, Carrier Sense Multiple Access, Channel, Collisions, CSMA, Data, Differences, Errors, Frames, Hosts, Non-persistent, Packets, Persistent, Protocols, Station, Technology, Throughput, transmission
Saturday, June 15, 2013
What is CPU Scheduling Criteria?
What criteria are used by algorithms for Scheduling?
Listed below are some of the criteria these algorithms use for scheduling:
1. CPU utilization:
- It is a property of a good system to keep the CPU as busy as possible.
- Utilization therefore ranges from 0 percent to 100 percent.
- In practice, lightly loaded systems run at around 40 percent utilization, while heavily loaded systems run at around 90 percent.
2. Throughput:
- Work is being done whenever the CPU is busy executing processes.
- Throughput is one measure of CPU performance and can be defined as the number of processes completed in a given unit of time.
- For example, for short transactions, throughput might be around 10 processes per second.
- For long transactions, it may be as low as one process per hour.
3. Turnaround time:
- This is an important criterion from the point of view of a process.
- It tells how long the system took to execute a process.
- The turnaround time can be defined as the time duration elapsed from the submission of the process till its completion.
4. Waiting time:
- CPU scheduling algorithms do not affect the amount of time a process needs for its actual execution.
- Rather, these algorithms affect only the time the process spends in the waiting state.
- The time for which the process waits in the ready queue is called the waiting time.
5. Response time:
- Turnaround time is not a good criterion in all situations.
- Response time is more appropriate for interactive systems.
- It often happens that a process produces some output fairly early, well before its total expected time has elapsed.
- The process can then continue with its next instructions while that output is delivered.
- The time taken from the submission of a process until it produces its first response is the response time, another criterion for CPU scheduling algorithms.
All of these are primary performance criteria, one or more of which a typical CPU scheduler may select. The scheduler might rank these criteria depending on their importance. One common problem in selecting performance criteria is the possibility of conflict between them.
For example, increasing the number of active processes increases CPU utilization, but at the same time it increases (worsens) response time. It is often desirable to reduce waiting time and turnaround time as well. In many cases the average measure is optimized, but there are also cases where it is more beneficial to optimize the maximum or the minimum values.
A scheduling algorithm that maximizes throughput does not necessarily minimize turnaround time. Given a mix of short and long jobs, a scheduler that runs only the short jobs produces the best throughput, but the turnaround time for the long jobs becomes very high, which is not desirable.
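As a rough sketch of how these criteria are computed, the following assumes non-preemptive first-come, first-served (FCFS) scheduling, an assumption made here purely for simplicity; under FCFS the first response coincides with the start of execution, so response time equals waiting time.

```python
def fcfs_metrics(processes):
    """Compute turnaround, waiting, and response time under non-preemptive FCFS.

    processes: list of (arrival_time, burst_time) tuples, sorted by arrival.
    Returns one dict of metrics per process, in the same order.
    """
    clock = 0
    results = []
    for arrival, burst in processes:
        start = max(clock, arrival)        # CPU may sit idle until the process arrives
        completion = start + burst
        results.append({
            "turnaround": completion - arrival,  # submission -> completion
            "waiting": start - arrival,          # time spent in the ready queue
            "response": start - arrival,         # first output at first CPU slice (FCFS)
        })
        clock = completion
    return results
```

For three processes arriving at times 0, 1, and 2 with bursts 5, 3, and 2, the turnaround times come out as 5, 7, and 8, and the waiting times as 0, 4, and 6, illustrating how later arrivals pay for the jobs ahead of them.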
Posted by Sunflower at 6/15/2013 09:22:00 AM | 0 comments
Labels: Algorithms, CPU, Criteria, Input, Load, Multiprocessor, Multitasking, Output, Performance, Processes, Response, Schedule, Scheduling, System, Throughput, Time, Transaction, Turnaround, Utilization, Waiting
Thursday, May 2, 2013
What is a CPU Scheduler?
- Throughput: the total number of processes executed in a given amount of time.
- Latency: this factor can be subdivided into two parts, namely response time and turnaround time. Response time is the time taken from the submission of a process until its first output is produced by the processor. The latter, turnaround time, is the period elapsed between the process's submission and its completion.
- Waiting/fairness time: the CPU time given to each process, allocated equally or according to the priority of the processes. The time for which processes wait in the ready queue is also counted in this.
Types of CPU Schedulers
Posted by Sunflower at 5/02/2013 05:28:00 PM | 0 comments
Labels: Algorithm, Communication, CPU, CPU Scheduling, Data, Latency, Memory, Methods, Multi-tasking, Operating System, Resources, Scheduling, System, Tasks, Throughput, Transmit, Types, Waiting time
Tuesday, April 23, 2013
What is Throughput, Turnaround time, waiting time and Response time?
What is Throughput?
What is Turnaround Time?
What is Waiting Time?
What is Response Time?
Posted by Sunflower at 4/23/2013 06:57:00 PM | 0 comments
Labels: Communication, Complete, CPU, Data, digital, Factors, Network, Operating System, Packets, Performance, Processes, Response time, Submit, Tasks, Thread, Throughput, Turnaround time, Waiting time
Wednesday, April 17, 2013
What are Real-time operating systems?
- The soft real-time operating system: it produces more jitter.
- The hard real-time operating system: it produces less jitter than the soft one.
- Minimal interrupt latency and
- Minimal thread-switching latency.
- Time-sharing design: as per this design, tasks are switched on a clocked interrupt and on events at regular intervals. This is also termed round-robin scheduling.
- Event-driven design: as per this design, switching occurs only when another event demands higher priority. This is why it is also termed priority scheduling or preemptive priority scheduling.
- Running or executing on the CPU
- Ready to be executed
- Waiting or blocked for some event
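The time-sharing (round-robin) design can be sketched as a queue of ready tasks, each of which runs for at most one quantum per clocked interrupt before returning to the back of the queue. This is a minimal illustration only; the task names and the quantum value are assumptions, not taken from any particular RTOS.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Time-sharing dispatch: each task runs for at most one quantum,
    then is preempted and moved back to the ready queue.

    bursts: dict mapping task name -> remaining CPU time needed.
    Returns the order in which tasks were dispatched to the CPU.
    """
    ready = deque(bursts.items())   # every task starts in the ready state
    order = []
    while ready:
        name, remaining = ready.popleft()    # ready -> running
        order.append(name)
        remaining -= min(quantum, remaining) # run for one quantum (or less)
        if remaining > 0:
            ready.append((name, remaining))  # preempted: running -> ready
        # otherwise the task has finished and leaves the system
    return order
```

With tasks A, B, and C needing 3, 1, and 2 units of CPU time and a quantum of 1, the dispatch order is A, B, C, A, C, A: each task gets a regular slice until it completes.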
Posted by Sunflower at 4/17/2013 07:05:00 PM | 0 comments
Labels: Algorithms, Applications, Data, Design, Events, Factors, Features, Hard, Multi-tasking, Operating System, OS, Priority, Process, Real time Operating system, Scheduling, soft, System, Tasks, Throughput, Types
Monday, July 16, 2012
What are the metrics that can be used during performance testing?
- Scalability
- Reliability
- Resource usage, and so on.
- Load testing
- Stress testing
- Soak testing or endurance testing
- Spike testing
- Isolation testing, and
- Configuration testing
Metrics used during Performance Testing
Posted by Sunflower at 7/16/2012 10:47:00 AM | 0 comments
Labels: Application, Attributes, Data, Errors, Graphs, Load, Load Testing, Metrics, Peak, Performance testing, Quality, Reliable, Request, Response time, Server, Software System, Software testing, Throughput, Time, Users
Tuesday, December 27, 2011
What are different characteristics of Scalability Testing?
Scalability can be defined as the ability of a software application, network, process, or program to handle an increasing workload gracefully and to carry out its assigned tasks effectively. Throughput is the best example of this ability in a software application.
- Scalability as such is very difficult to define without practical examples.
- Therefore, scalability is defined based on some dimensions.
- Scalability is very much needed in communication areas like in a network, in software applications, in handling huge databases and it is also a very important aspect in routers and networking.
- Software applications and systems having the property of scalability are called scalable software systems or applications.
- They improve throughput to a surprising extent after the addition of new hardware devices. Such systems are commonly known as scalable systems.
- Similarly, if a design, network, system protocol, program, or algorithm works well and remains efficient when applied to larger problems, where the input data is large or the problem involves many nodes, it is said to be efficiently scalable.
If the program fails as the quantity of input data increases, the program is said not to scale. Scalability is much needed in the field of information technology. Scalability can be measured in several dimensions, and scalability testing deals with testing these dimensions.
The kinds of scalability testing have been discussed in detail below:
- Functional scalability testing:
In this testing new functionalities which are added to the software application or the program to enhance and improve its overall working are tested.
- Geographic scalability testing:
This testing tests the ability of the software system or application to maintain its performance, throughput, and usefulness irrespective of how the working nodes are distributed geographically.
- Administrative scalability testing:
This testing deals with increasing the number of working nodes in the software, so that a single difficult task is divided among smaller units, making it much easier to accomplish.
- Load scalability testing:
This testing can be defined as testing the ability of a distributed program to divide further and recombine so as to take on light and heavy workloads accordingly.
There are several examples available for scalability today. Few have been listed below:
- The routing table of a routing protocol, which grows as the network grows.
- A DBMS (database management system) is scalable in the sense that more and more data can be stored in it by adding new devices as required.
- An online transaction processing system can also be called scalable, as it can be upgraded so that more transactions can be processed at one time.
- The domain name system is a distributed system and works effectively even at the scale of the World Wide Web. It is scalable.
Scaling is done basically in two ways. These two ways have been discussed below:
- Scaling out or scaling horizontally: This method involves adding several nodes or workstations to an already divided or distributed software application. It has led to the development of technologies, namely batch processing management and remote maintenance, which were not available before.
- Scaling up or scaling vertically: This can be defined as the addition of hardware or software resources to a single node of the system. These resources can be CPUs or memory devices. This method of scaling has led to tremendous improvement in virtualization technology.
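As a minimal sketch of the scale-out idea, work can be hash-partitioned across the nodes of a distributed application: adding a node changes the partitioning so the same total load is spread over more workers. The `route_to_node` helper and the node names here are hypothetical, not taken from any particular system (CRC32 is used instead of Python's built-in `hash()` so the routing is stable across runs).

```python
import zlib

def route_to_node(key, nodes):
    """Scaling out: deterministically assign a work item (key) to one of
    N worker nodes by hashing. More nodes means each node handles a
    smaller share of the keys."""
    return nodes[zlib.crc32(key.encode()) % len(nodes)]
```

For example, a request keyed by user ID always lands on the same node for a given cluster size, and enlarging the `nodes` list redistributes the keys over the bigger cluster.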
Posted by Sunflower at 12/27/2011 07:15:00 PM | 0 comments
Labels: Algorithm, Application, Communication, Databases, Design, Effective, Fault, Hardware, Network, Quality, Requirements, Resources, Scalability, Scalability testing, Software Systems, Tasks, Throughput