
Thursday, August 29, 2013

How can traffic shaping help in congestion management?

- Traffic shaping is an important part of the congestion avoidance mechanism, which in turn comes under congestion management. 
- If the traffic entering the network can be controlled, we can obviously maintain control over network congestion. 
A congestion avoidance scheme can be divided into the following two parts:
  1. The feedback mechanism and
  2. The control mechanism
- The feedback mechanism corresponds to the network policies, and the control mechanism to the user policies.
- Of course there are other components as well, but these two are the most important. 
- While analyzing one component, it is simply assumed that the other components are operating at optimal levels. 
- At the end, it has to be verified whether the combined system works as expected under various kinds of conditions.

The network policy consists of the following three algorithms:

1. Congestion Detection: 
- Before feedback can be sent to the users, the load level or state of the network must be determined. 
- In general, the network can be in any of n possible states at a given time. 
- The congestion detection algorithm maps these states into the possible load levels. 
- In the simplest case there are two load levels, namely under-load and over-load. 
- Under-load means operation below the knee point of the throughput curve; over-load means operation above it. 
- A k-ary version of this function would produce k load levels. 
- The congestion detection function can work on three criteria: link utilization, queue lengths, and processor utilization.
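As a rough sketch of what such a detection function might look like, the following Python fragment maps the three raw measurements to the two load levels. The thresholds, the parameter names, and the single-packet queue knee are illustrative assumptions, not values taken from any particular router.

```python
# Hypothetical congestion-detection function: maps raw router
# measurements to one of two load levels (under-load / over-load).
# All thresholds below are illustrative assumptions.

def detect_load_level(queue_length, link_utilization, cpu_utilization,
                      queue_knee=1.0, util_knee=0.8):
    """Return 'over-load' if any measure is past its knee point."""
    if (queue_length > queue_knee or
            link_utilization > util_knee or
            cpu_utilization > util_knee):
        return "over-load"
    return "under-load"

print(detect_load_level(queue_length=3, link_utilization=0.5,
                        cpu_utilization=0.4))   # -> over-load
```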

2. Feedback Filter: 
- After the load level has been determined, it has to be verified whether the state lasts for a sufficiently long duration before it is signaled to the users. 
- Only then is feedback about the state actually useful, because the state persists long enough to be acted upon. 
- A state that changes rapidly, on the other hand, creates confusion: by the time the users learn of it, the state has already passed. 
- Such states give misleading feedback. 
- A low-pass filter function serves the purpose of passing only the persistent states.
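A minimal sketch of such a low-pass filter, assuming an exponentially weighted moving average over instantaneous 0/1 congestion samples; the weight and threshold are made-up illustrative values.

```python
# Low-pass feedback filter sketch: an exponentially weighted moving
# average that reports a state only once it has persisted.
# Weight and threshold are assumed illustrative values.

def low_pass_filter(samples, weight=0.25, threshold=0.5):
    """Smooth instantaneous 0/1 congestion samples and report the
    state only if the smoothed value crosses the threshold."""
    smoothed = 0.0
    for s in samples:
        smoothed = (1 - weight) * smoothed + weight * s
    return "over-load" if smoothed > threshold else "under-load"

# A single transient spike is filtered out ...
print(low_pass_filter([0, 0, 1, 0, 0]))   # -> under-load
# ... while a sustained condition passes through.
print(low_pass_filter([1, 1, 1, 1, 1]))   # -> over-load
```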

3. Feedback Selector: 
- After the state has been determined, this information has to be passed to the users so that they can contribute to cutting down the traffic. 
- The purpose of the feedback selector function is to identify the users to whom this information should be sent.

The user policy consists of the following three algorithms: 

1. Signal Filter: 
- The users to whom the network sends feedback signals interpret them only after accumulating a number of signals. 
- The network is probabilistic in nature, so the signals will not all agree: according to some the network is under-loaded, while according to others it is over-loaded. 
- These signals have to be combined to decide the final action. 
- For instance, an appropriate weighting function might be applied based on the percentage of signals of each kind. 

2. Decision Function: 
- Once the load level of the network is known to the user, it has to be decided whether or not to increase the load.
- This function has two parts: the first determines the direction of the change, and the second decides its amount. 
- The first part is the decision function proper, and the second is the increase/decrease algorithm. 

3. Increase/Decrease Algorithm: 
- This algorithm forms the control part of the scheme.
- The control action to be taken is based upon the feedback obtained. 
- A well-chosen increase/decrease rule helps in achieving both fairness and efficiency. 
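One well-known rule that achieves both fairness and efficiency is additive increase / multiplicative decrease (AIMD). A minimal sketch, using the usual textbook constants rather than values prescribed by this article:

```python
# AIMD sketch: grow the load additively while under-loaded,
# cut it multiplicatively when over-load is signaled.
# The constants 1.0 and 0.5 are conventional illustrative choices.

def aimd_update(window, congested, increase=1.0, decrease=0.5):
    """Return the next load (window) given the congestion feedback."""
    if congested:
        return max(1.0, window * decrease)   # multiplicative decrease
    return window + increase                 # additive increase

window = 10.0
for congested in [False, False, True, False]:
    window = aimd_update(window, congested)
    print(window)   # 11.0, 12.0, 6.0, 7.0
```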


Tuesday, August 27, 2013

What are general principles of congestion control?

- Problems such as the loss of data packets occur when the buffers of the routers overflow.
- Buffer overflow is a symptom of network congestion, and in the extreme it leads to congestive collapse. 
- If packets have to be re-transmitted more than once, it is an indication that the network is congested. 
- Re-transmission treats only this symptom, not the underlying problem of network congestion. 
- In congestive collapse, a number of sources attempt to send data simultaneously, and at a very high rate. 
- Preventing network congestion therefore requires mechanisms capable of throttling the sending nodes when congestion occurs. 
- Network congestion is a serious problem because it degrades the performance that the upper-layer applications receive from the network. 
- Various approaches are available for preventing and avoiding network congestion and thus implementing proper congestion control. 
- Congestion is said to occur when the demand for resources exceeds the capacity of the network, and excessive queuing inside the network causes packets to be lost. 
- During congestion, the throughput of the network may drop towards zero while the path delay rises sharply. 
- A network can recover from the state of congestive collapse using a congestion control scheme. 
- A congestion avoidance scheme, on the other hand, lets a network operate in the region of high throughput and low delay.
- Such schemes keep the network from falling into congestive collapse in the first place. 
- There is considerable confusion between congestion control and congestion avoidance; the two are often assumed to be the same thing, but they are not. 
- Congestion control provides a recovery mechanism, whereas congestion avoidance provides a prevention mechanism. 
- Technological advances in the field of networking have steadily increased the bandwidth of network links. 
- In 1970, the ARPAnet was built using leased telephone lines with a bandwidth of 50 kbit/s. 
- Around 1980, the first LANs (local area networks) were developed using token rings and Ethernet, offering a bandwidth of 10 Mbit/s. 
- During the same period, efforts were made to standardize LANs over optical fiber with bandwidths of 100 Mbit/s and higher. 
- Attention to congestion control has increased because of the growing mismatch between the various links composing a network. 
- Routers, IMPs, gateways, intermediate nodes, links, etc. are the hot-spots for congestion problems. 
- It is at these spots that the outgoing bandwidth falls short of accommodating all the incoming traffic. 
- In networks using connection-less protocols, coping with network congestion is even more difficult. 
- It is comparatively easy in networks using connection-oriented protocols.
- This is because in such networks, the network resources are reserved in advance while the connection is being set up.
- One way of controlling congestion in such networks is to refuse new connections when congestion is detected anywhere along the path, but this has the disadvantage of preventing the use of already reserved resources. 


Tuesday, August 20, 2013

When is a situation called congestion?

- Network congestion is quite a common problem in queuing theory and data networking. 
- Sometimes the data carried by a node or a link is so much that its QoS (quality of service) starts deteriorating. 
- This situation is known as network congestion, or simply congestion. 
This problem has the following typical effects:
Ø  Queuing delay
Ø  Packet loss and
Ø  Blocking of new connections


- The last two effects lead to two further problems. 
- As the offered load increases incrementally, the throughput of the network either actually decreases or increases only by very small amounts. 
- Network protocols use aggressive re-transmissions to compensate for packet loss. 
- These protocols thus tend to keep the system in a state of congestion even after the initial load has dropped to a level that would not by itself cause congestion. 
- Networks using such protocols therefore exhibit two stable states under the same level of load. 
- The stable state in which the throughput is low is called congestive collapse. 
- Congestive collapse is also called congestion collapse.
- In this condition, a packet-switched network settles into a state where, because of congestion, little or no useful communication is taking place.
- Even the little communication that does happen is of no practical use. 
- Congestion usually occurs at certain points in the network called choke points.
- At these points, the outgoing bandwidth is less than the incoming traffic. 
- Choke points are typically the points connecting a local area network to a wide area network. 
- When a network falls into this condition, it settles into a stable state. 
- In this state, the traffic demand is high but the useful throughput is quite low.
- The levels of packet delay are also quite high. 
- The quality of service becomes extremely bad, and the routers cause packet loss since their output queues are full and they discard packets. 
- The problem of network congestion was identified as early as 1984. 
- It first came to prominence when the throughput of the NSFnet phase-I backbone dropped three orders of magnitude below its capacity. 
- The problem continued to occur until Van Jacobson's congestion control method was implemented at the end nodes.

Let us now see what causes this problem. 
- When the number of packets being sent to a router exceeds its packet-handling capacity, the intermediate routers discard many packets. 
- These routers expect the discarded information to be re-transmitted. 
- Early TCP implementations had very poor re-transmission behavior. 
- Whenever a packet was lost, the end points sent in extra packets repeating the lost information. 
- But this doubled the data rate. 
- This is exactly the opposite of what should be done during congestion. 
- The entire network is thus pushed into a state of congestive collapse, resulting in a huge loss of packets and reduced throughput. 
Modern networks use both congestion control and congestion avoidance techniques to avoid congestive collapse. 
Various congestion control algorithms are available that can be implemented for avoiding network congestion. 
- These algorithms are classified on various criteria, such as the amount and type of feedback, deployability, and so on. 


Wednesday, August 14, 2013

What is the idea behind link state routing?

There are two main classes of routing protocols; one of them is the link state routing protocol, on which we shall focus in this article. The other class is the distance vector routing protocols.
In computer communications, link state routing protocols are used in packet switching networks. 
The two main examples of link state routing protocols are:
  1. IS-IS, i.e., intermediate system to intermediate system, and
  2. OSPF, i.e., open shortest path first
Link state routing is performed by every switching node in the network. Switching nodes are the nodes that can forward packets; in the internet, these nodes are called routers.

Idea behind the Link State Routing

 
- Every node constructs a map of its connectivity with the network. 
- This map takes the form of a graph showing all the connections that exist between the various nodes. 
- Each node then independently calculates the best path from itself to every possible destination in the network. 
- This collection of best paths forms the routing table of the node.
This is in total contrast with the second class of the routing protocols.
- In the distance vector routing protocols, a node shares its routing table with its neighbors, whereas in link state routing only connectivity-related information is passed between the nodes. 
- The simplest configuration of a link state routing protocol is the one with no areas. 
- This implies that each node possesses a map of the whole network. 
- The first main stage involves giving each node a map of the network. 

To do this, the following subsidiary steps are performed:
  1. Determination of the neighboring nodes: Each node determines which neighboring nodes it is connected to, and whether the links to them are fully working. A reachability protocol is used to accomplish this; it is run regularly and separately with each neighboring node.
  2. Distribution of the map information: Periodically, and whenever its connectivity changes, a node creates a short message called a link state advertisement and distributes it through the network.
- The set of link state advertisements collected this way is used to create the map of the entire network. 
- The second stage involves producing the routing tables by inspecting the map. 
This again involves a number of steps:
  1. Calculation of the shortest paths: The shortest path from each node to every other node is determined by running a shortest-path algorithm over the entire map. The commonly used algorithm is Dijkstra's algorithm (see the sketch after this list).
  2. Filling the routing table: The table is filled with the best next hop for each destination, taken from the shortest paths obtained in the previous step.  
  3. Optimizations: What we sketched above is the simple form of the algorithm; in practical applications it is used along with a number of optimizations. Whenever a change is detected in the network connectivity, the shortest path tree has to be recomputed immediately and the routing table recreated. A method was devised by BBN Technologies for recomputing only the affected part of the tree.
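To make the shortest-path step concrete, here is a minimal sketch of Dijkstra's algorithm in Python. The four-router graph is a made-up example; a real link state map would be built from the collected link state advertisements.

```python
import heapq

def dijkstra(graph, source):
    """Return the cost of the shortest path from source to every node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry, skip it
        for neighbor, cost in graph[node]:
            new_d = d + cost
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d
                heapq.heappush(heap, (new_d, neighbor))
    return dist

# Hypothetical four-router map: {node: [(neighbor, link cost), ...]}
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 5)],
    "C": [("A", 4), ("B", 2), ("D", 1)],
    "D": [("B", 5), ("C", 1)],
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```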

- Routing loops can form if the nodes are not all working from exactly the same map. 
- For ad hoc networks such as mobile networks, an optimized form of the protocol, the optimized link state routing protocol, is used. 


Monday, August 5, 2013

What is optimality principle?

A network consists of nodes that need to communicate with one another on various grounds. This communication is established via the communication channels that exist between them and involves data transfers. In a network, a node may or may not have a direct link with every other node. 
Applications that require communicating over a network include:
1. Telecommunication network applications such as POTS/PSTN, local area networks (LANs), the internet, mobile phone networks, and so on.
2. Distributed system applications
3. Parallel system applications

- As mentioned above, each node might not be linked with every other node, since doing so would require a lot of wires and cables and make the whole network more complicated. 
- Therefore, we bring in the concept of intermediate nodes. 
- The data transmitted by the source node is forwarded to the destination by these intermediate nodes. 
Now the problem that arises is which path or route is the best to use, i.e., the path with the least cost. 
- This is determined using the routing process. 
- The best path thus obtained is called the optimal route. 
- Today, a number of algorithms are available for determining the optimal path. 

These algorithms have been classified into two major types:
  1. Non – adaptive or static algorithms
  2. Adaptive or dynamic algorithms

Concept of Optimality Principle

- This is the principle followed while determining the optimal route between two nodes. 
The general statement of the principle of optimality is as follows:
“An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.”

- This means that if an optimal policy passes through a state P and later through a state Q, then the portion of the original policy from P to Q must itself be optimal. 
- In other words, optimality is preserved on every part of an optimal policy. 
- The initial state and the final state are the most important parts of the optimization problem. 
- Consider an example: suppose we have a problem with 3 inputs and 26 states. 
- Here, a cost is associated with each state, and the total cost is associated with the policy.
- If the brute force method is used for 3 inputs and 100 stages, the total number of computations is 3^100.
- Solving the problem that way would require more than a supercomputer.
- Therefore, a dynamic programming approach is used instead (see the sketch below). 
- Here, for each state the least-cost step is computed and stored as the computation proceeds. 
- This reduces the number of possibilities and hence the amount of computation.
- The problem becomes more complex if the initial and the final states are undefined. 
- It is necessary for the problem to follow the principle of optimality in order to use dynamic programming. 
- This implies that whatever the state may be, the decisions that follow must be optimal with regard to the state obtained from the previous decision. 
- This property is found in combinatorial problems, but since those use a lot of time and memory, plain dynamic programming is inefficient for them. 
- Such problems can be solved efficiently if some sort of best-first search and pruning technique is applied.
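To make the saving concrete, here is a minimal sketch of stage-wise dynamic programming: at each stage only the best cost per state is kept, so the work grows linearly with the number of stages instead of as 3^n. The states, inputs, and the cost and transition functions are made up purely for illustration, and for simplicity every state is allowed as an initial state.

```python
# Stage-wise dynamic programming under the optimality principle:
# instead of enumerating all 3**n input sequences, keep only the
# best (least-cost) way to reach each state at every stage.

def dp_min_cost(n_stages, states, inputs, step_cost, next_state):
    """Return the minimum total cost over n_stages, per final state."""
    best = {s: 0.0 for s in states}        # best cost to reach each state
    for _ in range(n_stages):
        new_best = {s: float("inf") for s in states}
        for s, cost in best.items():
            for u in inputs:
                t = next_state(s, u)
                c = cost + step_cost(s, u)
                if c < new_best[t]:        # keep only the optimal prefix
                    new_best[t] = c
        best = new_best
    return best

# Toy example: 3 inputs, 5 states, 100 stages -> 100*5*3 steps, not 3**100.
costs = dp_min_cost(
    100, range(5), range(3),
    step_cost=lambda s, u: (s + u) % 4 + 1,
    next_state=lambda s, u: (s + u) % 5,
)
print(costs)
```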

- In regard to routing in networks, it follows from the optimality principle that if router B lies on the optimal path between routers A and C, then the portion of that path from B to C is itself optimal. 
- The set of optimal routes from all sources to a given destination forms a tree rooted at the destination, called the sink tree; constructing it is the ultimate goal of all routing algorithms.


Tuesday, July 9, 2013

Explain CSMA with collision detection?

- CSMA with collision detection is abbreviated CSMA/CD. 
- CSMA by itself uses the LBT principle, i.e., listen (or sense) before talk. 
- But when incorporated with the ability to detect collisions, it gets much better. 
- If the channel is sensed to be idle, the data packets or frames are transmitted immediately; if not, the transmitter must wait for some time before it can transmit. 
- Sensing the channel prior to transmission is absolutely necessary if collisions are to be avoided, and it is the most effective way of avoiding them. 
- There are two types of CSMA protocols, namely persistent and non-persistent CSMA.
- In the CSMA/CD protocol, all hosts are free to transmit and receive data frames on one and the same channel. 
- Also, the size of the packets is variable.

CSMA/CD comprises two processes:
Carrier Sense: In this process, the transmitting host checks that the channel or line is not occupied before starting the transmission.
Collision Detection: CSMA/CD tries to detect collisions in the shortest possible time. If it detects a collision, it stops the transmission then and there and waits for a random amount of time determined by binary exponential back-off. It then senses the channel again.

- To ensure no collision goes unnoticed during the transmission of a packet, a host must be able to detect a collision before its transmission completes. 
- What happens is that host A, sensing the line to be idle, starts transmitting a frame. 
- Just before the first bit of this frame reaches host B, host B also senses the line to be idle and starts its own transmission. 
- Host B then receives data while its transmission is still in progress, and so it detects that a collision is occurring. 
- The collision occurs close to host B; host A also receives data in the midst of its transmission and therefore detects the collision as well. 
- To make sure the hosts detect a collision before their transmission ends, a minimum length has to be set for the packets transmitted on CSMA/CD networks, as the calculation below illustrates. 
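A small worked calculation makes this concrete. With the classic 10 Mbit/s Ethernet slot time of 51.2 µs (roughly twice the worst-case round-trip propagation delay), the minimum frame length comes out to the familiar 512 bits, i.e., 64 bytes:

```python
# Minimum frame length so a sender is still transmitting when a
# collision from the far end of the cable comes back to it.
# The figures are the classic 10 Mbit/s Ethernet parameters.

bit_rate = 10e6          # bits per second
slot_time = 51.2e-6      # seconds, ~2x the worst-case propagation delay

min_frame_bits = bit_rate * slot_time
print(min_frame_bits)        # 512.0 bits
print(min_frame_bits / 8)    # 64.0 bytes, Ethernet's minimum frame size
```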

There are 3 states for a CSMA/ CD channel namely:
  1. Contention
  2. Transmission
  3. Idle
- Ethernet is the most popular example of a CSMA/CD network. 
- A minimum slot time is required for collision detection between the stations.
- This slot time must equal twice the maximum propagation delay. 
- The host acquires the channel on the basis of 1-persistence. 
- Also, a jam signal is transmitted when a collision is detected. 
- CSMA/CD makes use of the binary exponential back-off algorithm, sketched below. 
- It is obvious that the idle time of the channel will be small when the load is heavy. 
- In analyses of CSMA/CD, time is usually normalized to the packet transmission time.
- CSMA/CD represents a very effective method of media access control. 
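A hedged sketch of truncated binary exponential back-off: after the n-th collision a station waits a random number of slot times chosen from [0, 2^min(n, 10) − 1]. The truncation at 10 follows the classic Ethernet convention; the function itself is illustrative.

```python
import random

def backoff_slots(n_collisions, max_exp=10):
    """Pick how many slot times to wait after the n-th collision."""
    k = min(n_collisions, max_exp)       # truncate the exponent
    return random.randint(0, 2**k - 1)   # uniform over the window

random.seed(1)                           # reproducible demo only
for n in range(1, 5):
    print(n, backoff_slots(n))   # window doubles: 0-1, 0-3, 0-7, 0-15
```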
There are different methods available for detecting collisions. 
- Which method is used depends largely on the transmission medium between the two stations. 
- For example, if the two stations are connected via an electrical bus, a collision can be detected by comparing the transmitted and the received data. 
- Another way involves recognizing a signal of higher amplitude than the normal one. 
- The jam signal used in CSMA/CD networks consists of a 32-bit binary pattern.



Saturday, June 15, 2013

What is Process State Diagram?

In systems where multiprocessing or multitasking is involved, a process goes through a number of states. In this article we shall discuss these states. 
The kernel of the operating system may not recognize all of these states distinctly, but they are still useful abstractions for understanding how processes are executed. 
These states can be looked up in a process state diagram for a systematic view. The diagram shows the transitions of a process between states as arrows. Depending on the situation, a process can reside in main memory or be swapped out to secondary (virtual) memory.

Process States

- The primary process states occur in all types of the systems. 
- Processes in these states are usually stored in the main memory. 
Basically there are 5 major states of any process as discussed below:

Ø  Created: 
- It is also known as the ‘new’ state. 
- A process occupies this state upon its creation. 
- The process stays in this state while waiting to be admitted to the ready state. 
- The admission scheduler decides whether to admit the process to the next state or to delay it on a short- or long-term basis. 
- In most desktop computers, this admission is approved automatically. 
- In systems with real-time operating systems, however, this is not true.
- Here, the admission might be delayed by a certain amount of time.
- If too many processes are admitted to the ready state in a real-time operating system, over-contention and over-saturation might occur, leaving the system unable to meet its deadlines.

Ø  Ready or Waiting: 
- This state is taken up by a process when it has been loaded into the physical memory of the system and is waiting to be executed by a processor, or more precisely, waiting to be context-switched in by the dispatcher. 
- At any instant there might be a number of processes waiting for execution. 
- These processes wait in a queue called the run queue, out of which only one process at a time is taken up by a processor. 
- Processes that are waiting for input from some event are not put into this ready queue.

Ø  Running: 
- When a process is selected by the CPU for execution, its state is changed to running. 
- One of the processors executes the instructions of the process one by one. 
Only one process can be running on each processor at a time.

Ø Blocked: 
- A process that is waiting on some event, such as the completion of an I/O operation, is put into the blocked state. 
- A process that merely exhausts the CPU time allocated to it is not blocked; it is preempted and returned to the ready state.

Ø  Terminated: 
- A process may terminate either when its execution is complete or when it is explicitly killed.
- This state is called terminated or halted. 
- A process that has terminated but still retains an entry in the process table is called a zombie process. 

There are two additional states for supporting the virtual memory. In these states the process is usually stored in the secondary memory of the system:

Ø Swapped out or Waiting: 
- A process is said to be swapped out when it is removed from primary memory and placed in secondary memory. 
- This is done by the mid-term scheduler. 
- After this, the state of the process changes to swapped out and waiting.  

Ø  Swapped out and Blocked:
- In some cases, processes that were in the blocked state might be swapped out. 
- Such a process may later be swapped back in; it remains blocked until the event it was waiting for occurs.
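The whole diagram can be summarized as a transition table. In the sketch below the state names follow this article, while the set of allowed arrows is the usual textbook one and is assumed here, since no diagram is reproduced in the post.

```python
# Process state diagram as a transition table; an arrow exists from a
# state to each state in its set. The arrows are assumed textbook ones.

ALLOWED = {
    "created":                 {"ready"},
    "ready":                   {"running", "swapped out and waiting"},
    "running":                 {"ready", "blocked", "terminated"},
    "blocked":                 {"ready", "swapped out and blocked"},
    "swapped out and waiting": {"ready"},
    "swapped out and blocked": {"swapped out and waiting", "blocked"},
    "terminated":              set(),
}

def transition(state, new_state):
    """Move a process to new_state, or raise if no such arrow exists."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

s = "created"
for nxt in ["ready", "running", "blocked", "ready", "running", "terminated"]:
    s = transition(s, nxt)
    print(s)
```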



Tuesday, June 4, 2013

Explain briefly Deadlock Avoidance and Detection?

Deadlocks are a serious issue that needs to be avoided, since they can cause the whole system to hang or crash.

What is Deadlock Avoidance?


- Avoiding a deadlock is possible only if certain information regarding the processes is available to the operating system.
- This information has to be made available to the OS before the resources are allocated to the processes.
- It describes the resources that each process will request and release in its lifetime.
- For every resource request made by a process, the system checks for potential threats, i.e., whether granting the request would move the system into an unsafe state.
- If so, there is a possibility that the system could enter a deadlock.
- Therefore, the system grants only those requests that keep it in a safe state.
- It is important for the system to determine whether its next state will be safe or unsafe.
- There are 3 things that the operating system must know at any point during the execution of the processes:
1. The currently available resources.
2. The resources currently allocated to the processes.
3. The resources to be requested and released in the future by these processes.

- It is possible for a system to be in an unsafe state and yet not be in a deadlock.
- The notion of safe and unsafe states refers to the system's ability to enter a deadlock.
An example will make it clearer:
- Consider a resource A requested by a process, where granting the request would make the system state unsafe.
- At the same time, the process releases another resource, say B, preventing a circular wait on the resources.
- In such a situation, the system is in an unsafe state, though not necessarily in a deadlock.
- Various algorithms have been designed for deadlock avoidance, one of which is the banker's algorithm.
- To use this algorithm, knowledge of each process's maximum resource usage is required in advance.
- For most systems, it is impossible to know in advance what a process will request.
- This implies that deadlock avoidance of this form is not possible there.
- There are two other algorithms for achieving this task, namely the wound/wait and wait/die algorithms.
- Each of them makes use of a symmetry-breaking technique, as the sketch below illustrates for wait/die.
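A minimal sketch of the wait/die rule: an older requester (smaller timestamp) is allowed to wait for a younger holder, while a younger requester is aborted ("dies") and later restarted with its original timestamp. The function and its names are illustrative only.

```python
# Wait/die symmetry breaking: compare the timestamps of the
# requesting process and the process holding the resource.

def wait_die(requester_ts, holder_ts):
    """Return the action for the requesting process."""
    if requester_ts < holder_ts:     # requester is older
        return "wait"
    return "die"                     # requester is younger: abort/restart

print(wait_die(requester_ts=3, holder_ts=7))   # wait (older requester)
print(wait_die(requester_ts=9, holder_ts=7))   # die  (younger requester)
```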

What is Deadlock Detection?


- Under this approach, deadlocks are allowed to occur.
- The state of the system is then examined to confirm that a deadlock has occurred, and it is subsequently corrected.
- Here, certain algorithms track the resource allocation activity along with the process states.
- When a deadlock is found, an algorithm is then used to remove it.
- Deadlock detection is comparatively easy, since the OS scheduler knows which resources have been locked by which processes.
- Model checking is one of the techniques used for deadlock detection.
- In this technique, a finite state model is created, a progress analysis is carried out on it, and all the terminal sets of the model are found.
- Each of these stands for a deadlock.
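As an illustration of what a detection algorithm does, here is a minimal sketch that finds a deadlock as a cycle in a wait-for graph, a standard detection technique shown here instead of the model-checking approach described above. The graph and process names are made up.

```python
# Deadlock detection as cycle finding in a wait-for graph.
# An edge P -> Q means process P waits for a resource Q holds.

def has_deadlock(wait_for):
    """Return True if the wait-for graph contains a cycle."""
    visiting, done = set(), set()

    def dfs(p):
        if p in visiting:
            return True              # back edge: cycle found
        if p in done:
            return False
        visiting.add(p)
        if any(dfs(q) for q in wait_for.get(p, ())):
            return True
        visiting.discard(p)
        done.add(p)
        return False

    return any(dfs(p) for p in wait_for)

# P1 waits for P2, P2 waits for P3, P3 waits for P1: a deadlock.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_deadlock({"P1": ["P2"], "P2": ["P3"]}))                # False
```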
- After a deadlock has been detected, it can be corrected by one of the following methods:
1. Process termination: aborting one or more of the processes involved in the deadlock, ensuring a certain and speedy removal of the deadlock. This method may prove expensive because partial computations are lost.
2. Resource preemption: successively preempting allocated resources from processes until the deadlock is broken.


Monday, May 6, 2013

What is a Safe State and what is its use in deadlock avoidance?


Safe states play a great role in avoiding deadlocks. In this article we discuss the concept of a safe state in detail.
When do we call a state safe?
A state is safe if the system can allocate resources to all the processes, up to their maximum limits and in some order, without a deadlock occurring. To put it more formally, a system is in a safe state only if a safe sequence exists. This will become clearer from the following example:

Consider the following sequence of processes <P1, P2, …, Pn>:

- This sequence is considered safe for the current allocation state if the resource requests made by each process Pi can be satisfied by the currently available resources together with the resources held by the processes that precede Pi in the sequence.
- If the resources required by Pi are not presently available, then Pi can wait until the preceding processes complete their execution and release their resources.
- Once they finish, Pi can use the released resources to complete the task assigned to it, after which it releases its own resources for use by the succeeding processes.
- Pi then finally terminates.
- If no such sequence exists, the system is said to be in an unsafe state.
- A deadlock cannot occur in a safe state, so a safe state is never a deadlocked one. 
- On the other side, a deadlocked state is always unsafe.
- However, not every unsafe state is unsafe because of a deadlock.
- An unsafe state can, however, lead to a deadlock. 
- It is in safe states that the operating system is capable of avoiding deadlocks.
- Once the system falls into an unsafe state, the operating system is no longer in a position to prevent resource requests from driving it into a deadlock.
- In an unsafe state, whether a deadlock actually occurs is determined by the behavior of the processes.
- Another major difference between safe and unsafe states is that in a safe state the operating system can guarantee that all processes will complete their execution in the expected time, while in an unsafe state it can give no such guarantee.
- If the concept of the safe state is applied, algorithms can be designed that make sure no deadlock ever occurs.
- The idea behind these algorithms is to ensure the following things (a sketch of such a safety check follows the list):
1. The system starts out in a safe state.
2. The system never leaves the safe state.
3. The system must be able to determine whether a resource requested by a process can be allocated to it immediately or the process must wait.
4. The system grants a request if and only if, after granting it, the system would still be in a safe state.
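Here is a minimal sketch of such a safety check, essentially the safety part of the banker's algorithm: repeatedly find a process whose remaining need fits within the currently available resources, let it finish, and reclaim its allocation. The matrices are made-up example values.

```python
# Safe-sequence test (core of the banker's algorithm).
# available: free units per resource type
# allocated: units currently held, per process
# need:      remaining maximum demand, per process

def find_safe_sequence(available, allocated, need):
    """Return a safe sequence of process indices, or None if unsafe."""
    work = list(available)
    finished = [False] * len(allocated)
    sequence = []
    while len(sequence) < len(allocated):
        progressed = False
        for i, (alloc, nd) in enumerate(zip(allocated, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # Process i can run to completion and release its resources.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None                # no process can finish: unsafe
    return sequence

# Three processes, one resource type, 3 units free: P1 can finish first,
# releasing enough for P2 and then P0.
print(find_safe_sequence([3], [[2], [2], [4]], [[4], [1], [5]]))  # [1, 2, 0]
```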

- One disadvantage of such algorithms is low resource utilization, because a process may have to wait for a resource even when the resource is available.
- A deadlock occurs when two or more competing processes wait for each other to finish, and neither of them ever does.
- A deadlock involving only two processes is called a deadly embrace.
- A deadlock may also occur if one process is waiting for another to finish, which in turn is waiting for some other process to finish, and so on in a cycle.


Sunday, May 5, 2013

What is DRAM? In which form does it store data?


Random access memory is of two types: dynamic random access memory (DRAM) and static random access memory (SRAM). 
Here we shall focus on the first type, i.e., dynamic RAM. 

What is Dynamic Random Access Memory (DRAM)?

- In dynamic RAM, each bit of the data is stored in a separate capacitor. 
- All these capacitors are housed within an IC (integrated circuit).
- These capacitors can be in either of the two states:
  1. Charged and
  2. Discharged
- These two states represent the two values of a bit, 0 and 1. 
- However, dynamic RAM has a disadvantage. 
- The capacitors tend to leak charge and may therefore lose the stored information. 
- It is therefore very important to periodically top the capacitors up with fresh charge. 
- They are refreshed at regular intervals of time. 
- It is because of this refreshing requirement that this type of RAM is called "dynamic". 
- The main memory (physical memory) of most computers is built from dynamic RAM.
- Apart from desktops, DRAM is also used in workstations, laptops, video game consoles, etc. 
- Structural simplicity is one of the biggest advantages of DRAM. 
- It requires only one capacitor and one transistor per bit, whereas SRAM requires 4 to 6 transistors for the same purpose. 
- This enables dynamic RAM to attain very high density. 
- DRAM is a volatile memory, unlike flash memory, so it loses its data whenever the power supply is cut.
- The capacitors and transistors it uses are extremely small, so billions of them can easily be integrated into one single memory chip.
- DRAM consists of an array of charge storage cells arranged in a rectangular grid. 
- Each cell consists of one transistor and one capacitor. 
- Word lines are the horizontal lines that connect the cells in each row. 
- Each column of cells is composed of two bit lines, called the + and – bit lines.
- The manufacturers specify the rate at which the storage cell capacitors have to be refreshed. 
- Typically, the refresh interval is less than or equal to 64 ms. 
- The DRAM controller contains refresh logic that automates the periodic refresh; this job is not handled by any other software or hardware. 
- This makes the controller circuit quite complicated. 
- The capacity of DRAM per unit of chip area is greater than that of SRAM. 
- Some systems refresh one row at a time, while others refresh all the rows simultaneously every 64 ms.  
- Some systems use an external timer and refresh a portion of the memory whenever it fires. 
- Many DRAM chips have a counter that keeps track of which row is to be refreshed next; the arithmetic behind the per-row refresh interval is sketched below.
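A small worked example of the refresh arithmetic: if every row must be refreshed within 64 ms and the chip has 8192 rows (a typical figure, assumed here for illustration), a row must be refreshed roughly every 7.8 µs.

```python
# Refresh arithmetic for a row-at-a-time refresh scheme.
# The 64 ms window matches the article; the row count is an
# assumed typical value, not taken from a specific chip.

refresh_window = 64e-3      # seconds in which every row needs a refresh
rows = 8192                 # assumed number of rows in the DRAM array

interval_per_row = refresh_window / rows
print(interval_per_row * 1e6)   # ~7.81 microseconds between row refreshes
print(rows / refresh_window)    # 128000.0 row refreshes per second
```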
- However, there are some conditions under which the data can be recovered even if the DRAM has not been refreshed for a few minutes. 
- Bits in DRAM may also spontaneously flip to the opposite state because of electromagnetic interference in the system. 
- Background radiation is the major cause of the majority of these soft errors.
- These errors change the contents of memory cells without damaging the circuitry itself. 
- Redundant memory bits together with ECC-capable memory controllers are one potential solution to this problem. 
- These extra bits, located within the RAM modules, record parity and enable the reconstruction of corrupted data via an error-correcting code (ECC).

