
Friday, October 4, 2013

What is a substitution cipher method?

There are two classic methods of cryptography, namely the transposition cipher method and the substitution cipher method. In this article we discuss the latter, the substitution cipher method. 
- This method of encoding replaces the units or letters of the plain text with other units or letters. 
- The encoded text is then called the cipher text. 
- The replacement of units is made based upon some regular system. 
- These units might be individual letters, pairs or triplets of letters, and so on. 
- On the receiver’s side, an inverse substitution is required for deciphering the text. 
- We can compare transposition ciphers with substitution ciphers. 
- In transposition ciphers the plain text units are rearranged but left unchanged; the order of rearrangement is usually more complex than anything done in substitution ciphers.
- In substitution ciphers, by contrast, the sequence of the units remains the same, but the units themselves are altered. 

There are various types of substitution cipher as mentioned below:

Ø  Simple substitution ciphers: 
- This involves substitution of single letters, hence the term simple substitution. 
- The alphabet can be written out in some order to represent the substitution.
- This alphabet is referred to as the substitution alphabet. 
- The substitution alphabet might be reversed, shifted, or scrambled in some more complex manner. 
- In such cases it is termed a deranged alphabet or mixed alphabet. 
- A mixed alphabet is created by writing out a keyword, removing its repeated letters, and then appending the remaining letters of the alphabet in their usual sequence. 
- To avoid transmission errors, the cipher text is written in blocks, with the spaces and punctuation omitted. 
- This also helps disguise the boundaries between the words.
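As a concrete sketch (not from the original article; the keyword SECRET and the function names are invented for illustration), a keyword-mixed simple substitution with five-letter cipher-text blocks might look like this in Python:

```python
import string

def mixed_alphabet(keyword):
    """Keyword letters first (duplicates removed), then the rest of the alphabet."""
    seen = []
    for ch in keyword.upper() + string.ascii_uppercase:
        if ch not in seen:
            seen.append(ch)
    return "".join(seen)

def encipher(plaintext, keyword):
    alphabet = mixed_alphabet(keyword)
    table = str.maketrans(string.ascii_uppercase, alphabet)
    # Strip spaces/punctuation and regroup into blocks of five,
    # disguising word boundaries as described above.
    text = "".join(c for c in plaintext.upper() if c.isalpha())
    cipher = text.translate(table)
    return " ".join(cipher[i:i + 5] for i in range(0, len(cipher), 5))

def decipher(ciphertext, keyword):
    alphabet = mixed_alphabet(keyword)
    table = str.maketrans(alphabet, string.ascii_uppercase)
    return "".join(c for c in ciphertext if c.isalpha()).translate(table)
```

Note that deciphering cannot restore the original spacing, since the word boundaries were deliberately discarded.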

Ø Homophonic substitution: 
- This method is used to increase the difficulty of frequency analysis attacks. 
- The frequencies of the letters of the plain text are disguised by homophony. 
- Here each letter of the plain text may be mapped to several symbols of the cipher text. 
- Normally the plain text symbols with the highest frequencies are given more equivalents than their low-frequency counterparts. 
- This flattens the frequency distribution, which in turn makes frequency analysis more difficult. 
- A number of solutions are employed to obtain the larger alphabet this requires. 
- The simplest of these solutions is a numeric substitution alphabet. 
- Another method uses variations of the existing alphabet, e.g., writing it upside down, or mixing upper case and lower case. 
- The nomenclator is also a variant of homophonic substitution. 
- Two other types of homophonic ciphers are the straddling checkerboard and the book cipher.
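A minimal homophonic sketch (the homophone table below is invented for illustration, with more symbols for high-frequency letters) can be written as follows; each plain-text letter is enciphered as a randomly chosen one of its symbols:

```python
import random

# Hypothetical homophone table: frequent letters get more equivalents.
HOMOPHONES = {
    "E": ["17", "42", "58", "91"],
    "T": ["03", "66", "74"],
    "A": ["12", "29"],
    "N": ["55"],
    "D": ["08"],
}

def encipher(plaintext, rng=random):
    # Choosing a homophone at random flattens the cipher text's
    # frequency distribution, frustrating frequency analysis.
    return " ".join(rng.choice(HOMOPHONES[c]) for c in plaintext.upper())

def decipher(ciphertext):
    # Deciphering is unambiguous because every symbol belongs to one letter.
    reverse = {sym: letter
               for letter, syms in HOMOPHONES.items()
               for sym in syms}
    return "".join(reverse[sym] for sym in ciphertext.split())
```

A real system would of course cover the whole alphabet; this table only handles the letters it lists.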

Ø Polyalphabetic substitution: 
- It involves the use of multiple cipher alphabets. 
- To facilitate encryption, these alphabets are written out in a big table referred to as the tableau. 
- A particular polyalphabetic cipher is defined by the way the tableau is filled and by how the alphabet is chosen for each letter. 
- Some types of polyalphabetic ciphers are:
             1. Beaufort cipher
             2. Gronsfeld cipher
             3. Running key cipher
             4. Autokey cipher
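The Vigenère cipher, the best-known polyalphabetic cipher (not in the list above, but built on the same tableau idea), shows how each key letter selects a different shifted alphabet, i.e., a different row of the tableau:

```python
def vigenere(text, key, decrypt=False):
    """Each key letter selects one of 26 Caesar-shifted alphabets
    (one row of the tableau); the key repeats over the message."""
    A = ord("A")
    out = []
    for i, ch in enumerate(text.upper()):
        shift = ord(key[i % len(key)].upper()) - A
        if decrypt:
            shift = -shift
        out.append(chr((ord(ch) - A + shift) % 26 + A))
    return "".join(out)
```

For example, `vigenere("ATTACKATDAWN", "LEMON")` gives the classic cipher text `LXFOPVEFRNHR`.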

Ø  Polygraphic substitution: 
Here the letters of the plain text are substituted in larger groups rather than as individual letters.

Ø Mechanical substitution ciphers: 
Some examples of this type of substitution cipher are the Enigma and other rotor cipher machines.

Ø The one-time pad: 
This is a special substitution cipher that has been mathematically proven to be unbreakable.



Wednesday, September 25, 2013

What is meant by multiplexing?

- Multiplexing or muxing is a very important process in computer networks and telecommunications. 
- In this process, a number of digital data streams or analog message signals are combined into one signal and then transported over a common medium. 
- Multiplexing is used wherever an expensive resource must be shared. 
- The most common example of multiplexing is carrying several telephone calls over one wire. 
- Multiplexing dates back to the 1870s, when it originated in telegraphy.
- It is now used to a great extent in the field of communications. 
- In telephony, telephone carrier multiplexing was developed by George Owen Squier. 
- The communication channel over which the multiplexed signal is transmitted may be a physical transmission medium. 
- Multiplexing divides the capacity of the high-level communication channel into a number of low-level logical channels, one for each message signal or data stream. 
- Demultiplexing is the reverse process of multiplexing. 
- It is used for extracting the original signals on the receiving side.
- A multiplexer or MUX is the device that carries out multiplexing, and the demultiplexer or DEMUX is the device that performs demultiplexing. 
- IMUX or inverse multiplexing is a process whose aim is just the opposite of multiplexing.
- It breaks a single data stream into several streams and transfers them at the same time over several communication channels.
- The original stream is then recreated at the other end.

Types of Multiplexing

Many different types of multiplexing technologies are available today. Each has its own significance:

Ø SDM or space-division multiplexing: 
This technique relies on using separate point-to-point wires for individual communication channels. Examples include an analogue stereo audio cable, a multi-pair telephone cable, a switched star network and a mesh network. However, wired SDM is typically not considered multiplexing. In wireless SDM, multiple antennas form a phased array; examples include MIMO (multiple-input and multiple-output), SIMO (single-input and multiple-output), MISO (multiple-input and single-output) etc.

Ø  FDM or frequency-division multiplexing: 
This is considered an analog process: the signals are sent in different frequency ranges over a shared medium. An example is TV and radio broadcasting, whether from terrestrial or satellite stations through the earth's atmosphere or over cable. One cable serves each house, but many signals can be sent over it to different subscribers. To access the desired signal, a user needs to tune to that particular frequency. WDM or wavelength-division multiplexing is a variant of FDM.

Ø TDM or time-division multiplexing: 
Unlike FDM, TDM is a digital technology, though very rarely it may be used as an analog technology as well. The process involves putting bytes from each input stream into a sequence, one by one. This sequencing is done in such a way that the receiver can separate them appropriately. If this is done quickly enough, the receiver will not detect that another logical communication path was served in that circuit time.
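A toy sketch of this byte-interleaved TDM (the function names are invented): one byte from each input stream is placed into the frame per time slot in fixed round-robin order, and the receiver demultiplexes purely by position:

```python
def tdm_mux(streams):
    """Interleave equal-length byte streams: one byte per stream per time slot."""
    return bytes(b for slot in zip(*streams) for b in slot)

def tdm_demux(frame, n):
    """Recover n streams by taking every n-th byte, offset by slot position."""
    return [frame[i::n] for i in range(n)]
```

For instance, muxing the streams `b"AAAA"`, `b"BBBB"`, `b"CCCC"` yields the frame `b"ABCABCABCABC"`, and `tdm_demux(frame, 3)` recovers the three originals.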

Ø  CDM or code-division multiplexing: 
In this multiplexing technique, the same frequency spectrum is shared by several channels at the same time. The bandwidth of the spectrum is quite high compared to the symbol rate or bit rate. It is implemented in either of two forms, namely direct-sequence spread spectrum and frequency hopping.

Some other types of multiplexing techniques which are less prominent are:
Polarization-division multiplexing: Used in optical and radio communications.
Orbital angular momentum multiplexing


Thursday, August 22, 2013

What is a spanning tree?

The spanning tree is an important concept in both mathematics and computer science. Mathematically, we define a spanning tree T of an undirected, connected graph G as a tree consisting of all the vertices and some (or all) of the edges of G.
- A spanning tree is a selection of edges from G forming a tree that spans every vertex. 
- This means that every vertex of graph G is present in the spanning tree but there are no loops or cycles. 
- Also, every bridge of the given graph must be present in its spanning tree. 
- Equivalently, a spanning tree is a maximal set of edges of G containing no cycle, or a minimal set of edges connecting all the vertices. 
- In the field of graph theory, it is common to look for the MST, or minimum spanning tree, of a weighted graph. 
- A number of other optimization problems involve minimum spanning trees and other types of spanning trees. 

The other types of spanning trees include the following:
Ø  Maximum spanning tree
Ø  An MST spanning at least k vertices.
Ø  An MST having at most k edges per vertex, i.e., the degree-constrained spanning tree.
Ø  The spanning tree with the largest number of leaves (closely related to the “smallest connected dominating set”).
Ø  The spanning tree with the fewest leaves (closely related to the “Hamiltonian path problem”).
Ø  Minimum diameter spanning tree.
Ø  Minimum dilation spanning tree.

- One characteristic property of spanning trees is that they have no cycles. 
- This also means that adding just one edge to the tree creates a cycle. 
- We call this cycle a fundamental cycle. 
- For each edge not in the spanning tree there exists a distinct fundamental cycle, so there is a one-to-one correspondence between the non-tree edges and the fundamental cycles. 
- For a connected graph G with V vertices, any spanning tree has V-1 edges. 
- Therefore, a graph with E edges has E-V+1 fundamental cycles with respect to any of its spanning trees.
- These fundamental cycles form a basis of the cycle space of the graph. 
- Dual to the notion of the fundamental cycle is the notion of the fundamental cut set.  
- Deleting just one edge of the spanning tree partitions the vertices into two disjoint sets. 
- The fundamental cut set is the set of edges that must be removed from the graph G to partition its vertices into the same two disjoint sets. 
- For a given graph G there are V-1 fundamental cut sets, one corresponding to each spanning tree edge. 
- The relationship between cycles and cut sets can be established from the fact that the cycle edges not in the spanning tree appear only in the cut sets of the tree edges lying on the corresponding cycle.
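A short sketch of these facts (the function name is invented): grow a spanning tree by depth-first search, check that it has V-1 edges, and note that the remaining E-V+1 non-tree edges each close one fundamental cycle:

```python
def spanning_tree(vertices, edges):
    """Grow a spanning tree of an undirected connected graph by DFS."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    visited, tree = {vertices[0]}, []
    stack = [vertices[0]]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in visited:      # tree edge: first time we reach v
                visited.add(v)
                tree.append((u, v))
                stack.append(v)
    return tree
```

For the complete graph K4 (V = 4, E = 6), the tree has 3 edges and there are 6 - 4 + 1 = 3 fundamental cycles, one per non-tree edge.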

What is Spanning Forest?

- The spanning forest is a sub-graph that generalizes the spanning tree concept. 
- A spanning forest can be defined as a sub-graph containing a spanning tree of each connected component of the graph G; equivalently, it is a maximal cycle-free sub-graph.
- The number of spanning trees of a complete graph is counted by Cayley’s formula.



Saturday, July 13, 2013

Sliding Window Protocols? - Part 3

In the third part of this article we discuss the types of sliding window protocols and how these protocols can be extended.

1. Stop and Wait: 
- This is the simplest type among all the sliding window protocols. 
- Under this type, we have the stop–and–wait ARQ protocol as the simplest implementation.
- Both the transmit window and the receive window are 1 packet, and the number of sequence numbers required is 1+1 = 2. 
- The packets sent by the transmitter are marked alternately as odd and even. 
- Therefore, the ACK packets are in the series odd, even, odd, even and so on. 
- Now, suppose the transmitter sent an odd packet and, immediately without waiting for the odd ACK, sent the next even packet. 
- In such a case, it might receive an ACK saying that an odd packet is expected. 
- This would leave the transmitter in a state of ambiguity: did the receiver get both of the packets, or neither of them?

2. Go-Back-N ARQ: 
- This sliding window protocol has a transmit window wt greater than one and a receive window wr fixed at 1. 
- The receiver will not accept any packet other than the next expected one in the sequence. 
- If a packet gets damaged or lost during transmission, the packets following the lost one will not be accepted by the receiver until it receives the lost one through re-transmission. 
- This ensures a minimum loss penalty of 1 RTT (round-trip time). 
- This is why this protocol is inefficient on links where packet loss is frequent. 
- Suppose a 3-bit sequence number is being used, as in typical HDLC. 
- This gives 8 sequence numbers, from 0 to 7. 
- This also means we have 8 possibilities. 
- The transmitter requires enough ACK information to distinguish between those packets. 
- If 8 packets are sent back to back by the transmitter without stopping for an ACK, it will find itself in the same ambiguity as in the stop-and-wait case.
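A minimal sender-side sketch of the go-back behaviour (the function name and the loss model, a set of sequence numbers that are lost exactly once, are invented; real implementations use timers and explicit ACK frames):

```python
def go_back_n(n_packets, window, lost):
    """Simulate Go-Back-N: send up to `window` packets past the first
    unacknowledged one; on a loss, resume from the lost packet.
    Returns the full sequence of transmissions, retransmissions included."""
    transmissions = []
    base = 0                                   # first unacknowledged packet
    while base < n_packets:
        end = min(base + window, n_packets)
        for seq in range(base, end):           # transmit the whole window
            transmissions.append(seq)
        # Cumulative ACK: the receiver accepts in order up to the first loss.
        for seq in range(base, end):
            if seq in lost:
                lost.discard(seq)              # the retransmission will succeed
                break
            base = seq + 1
        else:
            base = end
    return transmissions
```

With 4 packets, a window of 2, and packet 1 lost once, the trace is `[0, 1, 1, 2, 3]`: the loss of packet 1 forces it to be sent again before transmission can move on.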

3. Selective repeat ARQ: 
- This is the most general case of the sliding window protocols. 
- It works with a receiver that can accept packets with sequence numbers greater than the current nr and store them until the gap is filled. 
- The advantage is that data already received need not be discarded before re-transmission.

Ways to extend these protocols

  1. The above types of sliding window protocols assume that the network does not reorder packets, ensuring that they are not delivered in the wrong order. If the maximum mis-ordering distance can be bounded, the protocols can be extended to tolerate reordering by expanding the sequence number modulus N accordingly.
  2. It is also possible not to acknowledge every packet, as long as an ACK is eventually sent after packets are received. For example, in TCP every 2nd packet may be acknowledged.
  3. Informing the transmitter immediately about a gap in the packet sequence is quite common; HDLC uses a special REJ packet for this purpose.
  4. During communication, the window sizes may change as long as their sum remains within the limit defined by N. Usually the transmit window size is reduced to slow down transmission, in order to keep pace with the link and prevent congestion or saturation. 


Sunday, June 2, 2013

Explain the various Disk Allocation methods? – Part 2

In this article we discuss the non-contiguous disk allocation methods, i.e., the linked allocation and the indexed allocation. 

What is a Linked Allocation?

- In linked allocation, a single file might be stored scattered all over the disk, with the scattered parts linked to each other just like a linked list. 
- A few bytes in each disk block are used for storing the address of the next linked block.
- This type of allocation has two major advantages mentioned below:
  1. Simplicity and
  2. No requirement for disk compaction
- Since the allocation method is non-contiguous, it does not lead to any external fragmentation of the disk space.
- And since all the blocks are linked to each other, a free block available anywhere on the disk can be used for satisfying the requests made by the processes. 
- The file size need not be declared for linked allocation when the file is created. 
- There is no problem even if the file continues to grow, as long as free blocks are available, since blocks can always be linked up. 
- As a consequence of all this, the need for disk compaction is eliminated. 

Disadvantages of Linked Allocation

But there are disadvantages of using linked allocation. They are:

Direct access to the disk blocks becomes slow: 
To find a particular block of the file, the search has to begin at the start of the file and follow the successive pointers until the destination block is reached.

Space required by the pointers: 
- Suppose that out of the 512 words of a disk block, 2 are required for storing the pointer; then 0.39 percent of the total disk is used for pointers rather than for data. 
- This adds to the space requirement of the file blocks.
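The overhead figure is a quick back-of-the-envelope computation, assuming 512-word blocks with 2 words per block spent on the pointer:

```python
block_words = 512     # words per disk block
pointer_words = 2     # words used to store the next-block pointer

overhead_pct = pointer_words / block_words * 100
print(f"pointer overhead: {overhead_pct:.2f}% of the disk")
```

This prints an overhead of 0.39%, small but paid on every block of every linked file.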

Reliability: 
- Because all the blocks are linked via pointers, if even one pointer gets damaged or corrupted, the successive blocks become inaccessible. 
- Most operating systems mitigate this problem by keeping redundant copies of these pointers in a special file. 
- The basic idea is to keep the list of pointers in the physical memory of the system. 
- This also allows faster access to the disk blocks.

What is Indexed allocation?

- The linked allocation method does not support direct access; this problem is solved by the indexed allocation method. 
- In this method, all the pointers are placed together in an index. 
- These pointers together form the index block. 
- The address of this index block is then stored in the directory. 
- The nth pointer in the index block points to the nth block of the associated file. 
- The purpose of the index blocks is somewhat similar to that of page map tables; however, the two are implemented differently.
- With multilevel indexing, the first-level index is used to locate the second-level index, the second to locate the third, and so on, possibly up to a fourth level. 
- In most cases the first two levels of index are sufficient. 
- With 512-byte blocks and 128 pointers per index block, a two-level index (128 x 128 entries) can address files of up to 8 MB. 
- Under the same assumptions, a third level of indexing allows files of up to 1 GB to be addressed.
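These limits follow from simple arithmetic, assuming 512-byte blocks and index blocks holding 128 pointers each (i.e., 4-byte pointers):

```python
block_size = 512        # bytes per data block (assumed)
ptrs_per_index = 128    # 4-byte pointers in a 512-byte index block

two_level_max = ptrs_per_index ** 2 * block_size    # bytes addressable
three_level_max = ptrs_per_index ** 3 * block_size

print(two_level_max // 2**20, "MB")    # maximum file size, two-level index
print(three_level_max // 2**30, "GB")  # maximum file size, three-level index
```

This prints 8 MB for a two-level index and 1 GB for a three-level index, matching the figures above.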

Advantages and Disadvantages of Indexed Allocation

- The major advantage of this allocation technique is that it does not give rise to external fragmentation and it offers efficient random access. 
- Also, using this technique, mapping can be done around disk blocks that are known to be bad. 
- A bit map can be used for tracking the free space. 

The biggest disadvantage of this allocation technique is the large number of disk accesses required to retrieve the address of the destination block. 


Saturday, June 1, 2013

Explain the various Disk Allocation methods? – Part 1

Management of the space of the secondary storage devices is one of the most important functions of the file system. This includes tracking which blocks of the disk are allocated to files and which blocks are free to be allocated. 
There are two main problems faced by the system during allocation of space to different files. Firstly, access to the files has to be fast, and secondly, the disk space has to be utilized effectively. Together these form the larger problem of disk management. Similar problems arise for the physical memory of the system. 

However, secondary storage also introduces 2 additional problems: long disk access times and larger blocks to deal with. In spite of all these problems, some considerations, such as contiguous versus non-contiguous space allocation, are common to both kinds of storage. 

Types of Disk Allocation Methods


The following are the 3 widely used allocation techniques:
Ø  Contiguous
Ø  Linked
Ø  Indexed
- The linked and the indexed allocation techniques fall under the category of non-contiguous space allocation. 
- All the methods have their own pros and cons.
- All input and output operations on the disk are carried out in terms of blocks. 
- Software is responsible for converting logical records to physical storage blocks.

Contiguous Allocation: 
- This allocation technique assigns only contiguous disk blocks to a file. 
- The size of the space required to hold the file is specified by the user in advance. 
- The file is then created only if that much contiguous space is available, otherwise not. 
- An advantage of the contiguous allocation technique is that all the successive file records are saved physically adjacent to each other. 
- This reduces the disk access time for the files. 
- This can be concluded from the fact that if a file's blocks were scattered all about the disk, it would take a lot more time to access them. 
- Accessing files that are organized in a proper order is quite easy.
- To access a file sequentially, the system remembers the address of the last block accessed and, if required, moves to the next one. 
- Both direct and sequential access are supported by the contiguous allocation technique. 
- The problem with this allocation method is that a contiguous space cannot always be found for a new file, for instance when a majority of the disk is already in use or when the file is larger than any available free region.  


Non–Contiguous Memory Allocation: 
- Very often, files grow or shrink in size with usage and time. 
- Therefore, it is difficult to determine the exact space required for a file, since users have no advance knowledge of how their files will grow. 
- This is why systems using the contiguous allocation technique have been replaced by ones with non-contiguous storage allocation, which is more practical and dynamic in nature. 
- The linked allocation technique mentioned above in this article is a disk-based implementation of the linked list. 
- Each file is considered to be a list of disk blocks linked to each other. 
- It does not matter whether the blocks are scattered about the disk. 
- In each block, a few bytes are reserved for storing the address of the next block in the chain.
- Pointers to the first and the last blocks of the file are stored in the directory. 


Tuesday, May 28, 2013

Concept of page fault in memory management

A page fault, also known as PF or #PF, can be thought of as a trap raised by the hardware for the software whenever a program tries to access a page that is mapped into its virtual address space but has not been loaded into the main memory. 

In most cases the page fault is handled by the operating system, either by making the required page accessible at an address in main (physical) memory, or by terminating the program if it makes an illegal attempt to access the page.

- The memory management unit, located in the processor, is the hardware responsible for detecting page faults. 
- The software that helps the memory management unit handle page faults is the exception-handling software, seen as a part of the OS. 
- A page fault is not always an error.
- Page faults play a necessary role in increasing the memory effectively available to programs. 
- This extra memory is made available to any application that makes use of the operating system's virtual memory for execution.
- Hard fault is the term used by Microsoft instead of page fault in the latest versions of its Resource Monitor.

Classification of Page Faults

Page faults can be classified in to three categories namely:

1. Minor: 
- This type of fault is also called a soft page fault and occurs when the page is already loaded into memory at the time the fault is generated, but the memory management unit has not marked it as loaded. 
- The operating system includes a page fault handler whose duty is to make an entry for the page in the memory management unit. 
- After making the entry, its task is to indicate that the page has been loaded. 
- It is not necessary for the page to be read into memory. 
- This is possible if different programs share memory and the page has already been loaded into memory for another application. 
- In operating systems that apply secondary page caching, a page can be removed from the working set of a process without being deleted or written to disk; touching it again causes only a minor fault.

2. Major: 
- A major fault is the mechanism many operating systems use to increase the memory available to a program on demand. 
- The operating system delays loading parts of the program from disk until the program attempts to use them and generates a page fault.
- In this case, the page fault handler has to find a free page in memory or, failing that, a non-free page to evict. 
- When a page is available, the operating system can read the data for the required page into it, thus making an entry for the required page.

3. Invalid: 
- This type of fault occurs whenever a reference is made to an address that does not exist in the virtual address space and therefore has no page corresponding to it in memory. 
- The page fault handler then has to terminate the code that made the reference and give an indication of the invalid reference. 


Thursday, May 2, 2013

What is a CPU Scheduler?


Scheduling is a very important concept when it comes to multi-tasking operating systems. 
- It is the method by which data flows, threads and processes are provided access to the shared resources of the computer system. 
- These resources include communications bandwidth, processor time, memory and so on. 
- Scheduling is important as it helps in balancing the load on the system processor and using its resources effectively. 
- It helps in achieving the target QoS or quality of service. 
But what gave rise to scheduling? 
- Almost all modern systems need to carry out multiple tasks (multi-tasking) as well as multiplexing, both of which require a scheduling algorithm. 
- Multiplexing here means transmission of multiple data flows at the same time. 
- There are some other things with which the scheduler is concerned. They are:
  1. Throughput: the number of processes whose execution is completed in a given amount of time.
  2. Latency: this factor can be sub-divided into two, namely response time and turnaround time. Response time is the time from the submission of a process until its first output is produced. The latter, the turnaround time, is the time elapsed between the submission of a process and its completion.
  3. Waiting time/fairness: giving each process an equal share of CPU time, or allocating time according to the priority of the processes. The time for which the processes wait in the ready queue is also counted here.
- In practice, conflicts may arise between these goals, such as latency versus throughput. 
- If such a case occurs, a suitable compromise has to be implemented by the scheduler. 
- The needs and objectives of the user decide which of the above concerns is given preference. 
- In real-time environments such as robotics or embedded systems, it is the duty of the scheduler to ensure that all processes meet their deadlines. 
- This is important for maintaining the stability of the system. 
- Mobile devices are given scheduled tasks which are then managed by an administrative back end.

Types of CPU Schedulers

There are many types of CPU schedulers as discussed below:
1. Long-term Schedulers: 
- These schedulers perform long-term scheduling and are also known as high-level schedulers or admission schedulers. 
- It is up to them to determine which processes and jobs are to be admitted to the ready queue. 
- When an attempt is made to execute a program, the long-term scheduler decides whether this program will be admitted to the currently executing set of processes. 
- Thus, this scheduler dictates which processes are to run and what degree of concurrency there is to be.
- It also decides how many processes are to be executed concurrently. 
- It also decides how the split between CPU-intensive and I/O-intensive processes is to be handled.  

2. Medium-term Schedulers: 
- This scheduler temporarily removes processes from the main memory and places them in secondary memory, and vice versa. 
- These operations are called “swapping out” and “swapping in”. 
- Usually this scheduler swaps out the following processes:
a)   processes that have been inactive for some time
b)   processes that have been page faulting frequently
c)   processes having a low priority
d) processes that take up large chunks of memory, in order to release main memory to other processes
- The scheduler later swaps these processes back in when sufficient memory is available and when the processes are unblocked and no longer waiting.

3. Short-term Schedulers: 
It decides which of the ready, in-memory processes is to be executed next, typically after every clock interrupt. 
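A toy round-robin sketch of short-term scheduling (the burst times, process names and quantum below are invented for illustration): at each dispatch the scheduler picks the process at the head of the ready queue, runs it for at most one quantum, and re-queues it if it is not finished:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Run processes round-robin; return the order in which they are dispatched.
    `bursts` maps process name -> total CPU time required."""
    ready = deque(bursts)          # ready queue, in arrival order
    remaining = dict(bursts)
    order = []
    while ready:
        p = ready.popleft()        # short-term scheduler picks the next process
        order.append(p)
        remaining[p] -= min(quantum, remaining[p])
        if remaining[p] > 0:
            ready.append(p)        # quantum expired: preempt, back of the queue
    return order
```

With bursts A=3, B=1, C=2 and a quantum of 2, the dispatch order is A, B, C, A: process A is preempted once and finishes on its second turn.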


Sunday, April 28, 2013

What is fragmentation? What are different types of fragmentation?


In the field of computer science, fragmentation is an important factor affecting the performance of a system. It has a great role to play in bringing down the performance of computers. 

What is Fragmentation?

- Fragmentation can be defined as a phenomenon involving inefficient use of storage space, which reduces the capacity of the system and also brings down its performance.  
- This phenomenon leads to wastage of memory, and the term itself refers to the ‘wasted space’.
- Fragmentation takes three different forms, as mentioned below:
  1. External fragmentation
  2. Internal fragmentation and
  3. Data fragmentation
- These forms of fragmentation might be present in conjunction with each other or in isolation. 
- In some cases, fragmentation might be accepted in exchange for simplicity and speed of the system. 

The basic principle behind the fragmentation concept is as follows: 
- The system allocates memory in the form of blocks or chunks whenever it is requested by some program. 
- When the program has finished executing, the allocated chunks are returned to the system memory. 
- The size of the memory chunks required varies from program to program.
- In its lifetime, a program may request any number of memory chunks and free them after use. 
- When a program begins its execution, the memory areas that are free to be allocated are contiguous and long. 
- After prolonged usage, these contiguous memory areas get fragmented into smaller and smaller parts. 
- Eventually a stage comes when it becomes almost impossible to serve the large memory demands of a program. 

Types of Fragmentation


1. External Fragmentation: 
- This type of fragmentation occurs when the available memory is divided into small blocks interspersed with allocated memory. 
- Certain memory allocation algorithms have the drawback that they are at times unable to order the memory used by programs in such a way that wastage is minimized. 
- This leads to an undesired situation where, even though free memory is available, it cannot be used effectively, since it is divided into pieces so small that individually they cannot satisfy the memory demands of the programs.  
- Since here the unusable storage lies outside the allocated memory regions, this type of fragmentation is called external fragmentation. 
- This type of fragmentation is also very common in file systems, since many files of different sizes are created and deleted. 
- The effect is worse if a file that was stored in many small pieces is deleted. 
- This is because it leaves similarly small free chunks of space, which might be of no use.

2. Internal Fragmentation: 
- There are certain rules that govern the process of memory allocation. 
- These can lead to the allocation of more memory than is required. 
- For example, a rule may require that the memory allocated to a program be divisible by 4, 8 or 16; so a program that actually requires 19 bytes gets 20 bytes. 
- This leads to the wastage of the extra 1 byte of memory. 
- In this case, the unusable memory is contained within the allocated region itself, and therefore this type of fragmentation is called internal fragmentation.
- In computer forensic investigation, this slack space is a very useful source of evidence. 
- However, it is often difficult to reclaim internal fragmentation. 
- Changing the design is the most effective way of preventing it. 
- Memory pools in dynamic memory allocation are an effective method of cutting down internal fragmentation. 
- Here the space overhead is spread over a large number of objects.
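The 19-byte example above generalizes: rounding each request up to a multiple of the allocation granularity wastes the difference as internal fragmentation (the request sizes and the 4-byte granularity below are arbitrary illustrations):

```python
def allocated(request, granularity=4):
    """Round a request up to the next multiple of the allocation unit."""
    return -(-request // granularity) * granularity   # ceiling division

for req in (19, 33, 64):
    waste = allocated(req) - req
    print(f"request {req} B -> allocated {allocated(req)} B, wasted {waste} B")
```

A 19-byte request gets 20 bytes (1 byte wasted), a 33-byte request gets 36 bytes (3 wasted), while a 64-byte request wastes nothing, since it is already a multiple of the unit.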

3. Data Fragmentation: 
This occurs when a piece of data is broken up into many pieces that lie far apart from each other.

