
Thursday, July 18, 2013

Infrastructure: The need to do a frequent cleanup of old builds from servers

This is a constant problem for organizations that build large products. If your product is around 500 MB or more in size, and you are in the development phase, the chances are quite high that you will be generating a build every day. Each build contains new fixes and code changes, so getting a new build every day ensures a quick turnaround in terms of defect closure, and ensures that new features reach the hands of testers almost as soon as the developers are done with them.
However, there is an infrastructure issue involved with generating so many builds. I am talking of cases where the typical release cycle of such a product is more than a few months. During this period, the team will generate a large number of builds that need to be hosted on servers so that they are accessible to team members (and they may need to be copied to additional servers if the team has members in different geographical locations, or if there are vendors who can only be given access to a separate server outside of the main employee-only server). There can also be additional build variants: for example, the same application may have a different structure for the DVD release versus the release on the company's online store, and distribution to vendors might require yet another version. In some cases, the different language versions of the product might need to be separate builds, which further increases the storage requirement.
Now, all of this places a lot of constraints on infrastructure. Central servers are typically set up in a RAID configuration, which means the raw space needed is actually much more than the build sizes suggest. Hard disk capacity is cheap, but you can be pretty sure that at some point, unless you do some optimization, the additional capacity required on a regular basis will start becoming costly, not only in terms of equipment but also in terms of the staff needed to maintain it; a growing archive also makes it harder to find what you want. It always makes sense to optimize the storage needed for builds: if a build is not being used, there is no need to keep storing it.
An initial thought might be that only recent builds need to be stored, but that is an oversimplification. There might be defects that were filed some time back, and for the purpose of evaluating those defects, the builds on which they were found need to be accessible. Further, during coding, errors can be inserted into the code but go undetected for some time (weeks or even months). Even though the change can be found by taking a diff in the source code repository, it may be necessary to test the build in which the code change first appeared to see what the change caused. There can be numerous such reasons why a specific build is needed at some point in the future; hence, there needs to be a defined process that lets the team control which builds get deleted from the server, leading to an optimization of server space. Here are some points that could help in this (a small automation sketch follows the list):
- If there are builds from an earlier release cycle, it is probable that those builds are no longer necessary. It might be enough to retain only the builds that were of significant interest in terms of milestones.
- If a build had a problem, such as not launching or being rejected for the purpose of testing, it need not be retained and can be deleted.
- When builds are older than a few months, the team can apply an agreed policy to decide whether such builds can be deleted, and so on.
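As a minimal sketch of how such a retention policy could be automated, assuming a layout where each build lives in its own directory under a hypothetical archive root and milestone builds are marked with a `KEEP` file (the paths, the marker name, and the 90-day cutoff are all illustrative assumptions):

```python
import os
import shutil
import time

BUILDS_DIR = "/srv/builds"   # hypothetical archive root, one directory per build
MAX_AGE_DAYS = 90            # assumed retention period; tune to the team's policy
KEEP_MARKER = "KEEP"         # presence of this file marks a milestone build

def cleanup_old_builds(builds_dir=BUILDS_DIR, max_age_days=MAX_AGE_DAYS, dry_run=True):
    """Delete build directories older than max_age_days unless marked to keep."""
    cutoff = time.time() - max_age_days * 86400
    for name in sorted(os.listdir(builds_dir)):
        path = os.path.join(builds_dir, name)
        if not os.path.isdir(path):
            continue
        if os.path.exists(os.path.join(path, KEEP_MARKER)):
            continue                      # milestone build: always retained
        if os.path.getmtime(path) < cutoff:
            print(("Would delete" if dry_run else "Deleting"), path)
            if not dry_run:
                shutil.rmtree(path)

if __name__ == "__main__":
    cleanup_old_builds(dry_run=True)      # review the output before a real run
```

Running it with dry_run=True first lets the team review the candidate list and flag any builds that are still needed (for example, ones tied to open defects) before anything is actually deleted.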


Friday, June 14, 2013

Explain the methods for free space management? – Part 2

- Managing free space is easy when the space to be managed is divided into units of fixed size.
- In that case, a list of these fixed-size units can be kept, and the first entry on the list can be returned when a client requests space.
- Managing free space becomes difficult when the space to be managed consists of units of variable sizes.
- This is the case with a memory-allocation library at the user level, and also in physical memory when segmentation is used to implement virtual memory.
- In such cases, external fragmentation is the main problem: the free space gets split up into pieces of variable size.
- Subsequent requests may then fail because no contiguous free chunk is large enough.
- For example, a request for 15 bytes can fail even when 20 bytes are available, because those 20 bytes are not contiguous (see the small demonstration below).
- Thus, the main problem is how free space should be managed while satisfying variable-sized requests, and how the allocation strategies can keep fragmentation under control.
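A tiny demonstration of that 15-byte example, using a toy free list of (start, size) pairs; the particular layout is invented for illustration:

```python
# Toy free list: two non-adjacent holes of 10 bytes each (20 bytes free in total).
free_list = [(0, 10), (30, 10)]

def can_allocate(free_list, size):
    """An allocation succeeds only if some single hole is big enough."""
    return any(length >= size for _, length in free_list)

print(sum(length for _, length in free_list))  # 20 bytes free overall
print(can_allocate(free_list, 15))             # False: no contiguous 15-byte hole
```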

Low-level Mechanisms:
- Most allocators use two common low-level mechanisms: splitting and coalescing.
- Here, the free list consists of a set of elements describing all the free chunks of space available in the heap.
- Once a pointer to a chunk has been handed over to the program, the allocator can no longer move that chunk, since it has no way to find and update all the copies of the pointer, which may be stored in registers or in variables at any point of execution.
- (Garbage-collected, strongly typed languages are the exception: they can track all references, which enables compaction as a measure for combating fragmentation.)
- Suppose the program makes a request for a single byte of memory; the action then performed by the allocator is called splitting.
- A free memory chunk large enough for the request is searched for and split into two: the first part is returned to the caller, and the second part stays on the free list.
- This approach is used in allocators when requests are for sizes smaller than any single free chunk.
- Most allocators also use a corollary mechanism called coalescing of free space.
- Suppose an application frees a chunk that sits in the middle of the heap, between two chunks that are already free: without coalescing, the free list would hold three small chunks instead of one large one, and a large request could fail even though the space is actually contiguous.
- Coalescing therefore merges such neighboring free chunks back into a single larger chunk whenever memory is returned (see the sketch after this list).
- All these mechanisms are driven by simple policies.
- An ideal allocator is one that both minimizes fragmentation and is fast.
- But since the stream of allocation and free requests can be arbitrary, any given strategy can perform badly on the wrong inputs; thus, no single best approach can be named.
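A minimal sketch of splitting and coalescing, with the free list kept as sorted (start, size) pairs; this representation is an assumption for illustration, not how any particular allocator stores its list:

```python
def split(free_list, size):
    """Allocate `size` bytes: take the first fitting chunk and split it."""
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            if length > size:
                free_list[i] = (start + size, length - size)  # leftover stays listed
            else:
                free_list.pop(i)                              # chunk consumed exactly
            return start                                      # address given to the caller
    return None                                               # no chunk is big enough

def coalesce(free_list, start, size):
    """Free a chunk, merging it with any adjacent free neighbors."""
    free_list.append((start, size))
    free_list.sort()
    merged = [free_list[0]]
    for s, l in free_list[1:]:
        prev_start, prev_len = merged[-1]
        if prev_start + prev_len == s:        # adjacent chunks: merge into one
            merged[-1] = (prev_start, prev_len + l)
        else:
            merged.append((s, l))
    free_list[:] = merged

# Heap of 30 bytes; the middle 10 bytes are in use, then get freed.
free_list = [(0, 10), (20, 10)]
coalesce(free_list, 10, 10)
print(free_list)             # [(0, 30)]: one large chunk again
print(split(free_list, 25))  # 0: the merged chunk can now satisfy a 25-byte request
print(free_list)             # [(25, 5)]: the remainder stays on the list
```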
For deciding which free chunk to use for a given request, there are four major approaches:

1. Best fit:
- This is the simplest strategy: it searches the free list for chunks of memory that are as big as or bigger than the requested size.
- The smallest chunk among those candidates is returned; this is known as the best-fitting (or smallest-fitting) chunk.
- One full pass through the free list is enough to find the appropriate block to return.

2. Worst fit:
- This one is just the opposite of best fit.
- It looks for the largest chunk, returns the requested amount out of it, and keeps the remaining space on the free list.

3. First fit:
- This one simply finds the first block that is big enough and allocates the requested amount out of it, keeping the remainder on the free list.

4. Next fit:
- Here, one extra pointer is kept at the location in the list where the last search ended, and the next search resumes from that point instead of starting again from the head of the list.
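A hedged sketch of all four policies over the same toy free list of (start, size) pairs; the list representation and the `last_index` argument for next fit are assumptions for illustration:

```python
def best_fit(free_list, size):
    """Index of the smallest chunk that fits, or None."""
    fits = [i for i, (_, l) in enumerate(free_list) if l >= size]
    return min(fits, key=lambda i: free_list[i][1]) if fits else None

def worst_fit(free_list, size):
    """Index of the largest chunk, if even that one fits."""
    if not free_list:
        return None
    i = max(range(len(free_list)), key=lambda i: free_list[i][1])
    return i if free_list[i][1] >= size else None

def first_fit(free_list, size):
    """Index of the first chunk scanned that fits."""
    for i, (_, l) in enumerate(free_list):
        if l >= size:
            return i
    return None

def next_fit(free_list, size, last_index):
    """Like first fit, but resume scanning where the previous search ended."""
    n = len(free_list)
    for k in range(n):
        i = (last_index + k) % n
        if free_list[i][1] >= size:
            return i
    return None

free_list = [(0, 8), (20, 30), (60, 12)]
print(best_fit(free_list, 10))     # 2: the 12-byte chunk wastes the least
print(worst_fit(free_list, 10))    # 1: the 30-byte chunk is the largest
print(first_fit(free_list, 10))    # 1: the first chunk scanned that fits
print(next_fit(free_list, 10, 2))  # 2: the scan resumes at index 2, which fits
```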


Saturday, June 8, 2013

Explain the methods for free space management? – Part 1

- For efficient working of programs and the entire operating system, it is important that the memory of the system be managed.
- When files and programs are allocated storage space, some free space is left over in the storage area, and these free spaces must be managed properly.
- Since disk space is limited, the same space has to be used again and again as files are deleted and new ones created.
- A free-space list is maintained by the operating system for keeping track of the available free space; all free disk blocks are listed in it.
- For the creation of a new file, this free-space list is searched for the amount of space needed, and if the space is available, it is allocated to the file to be created.
- In the case of deletion, after a file is deleted, its space is added back to the list of free spaces.

Methods for Free Space Management

There are four methods for the management of free space, namely:
- Bit vector
- Linked list
- Grouping
- Counting

What is a Bit Vector?
- Quite often, the free-space list mentioned above is implemented as a bit vector (also known as a bitmap).
- Here, 1 bit is used for representing each block.
- If a particular block has been allocated to some file or program, its representative bit is set to 0; when the block is free, the bit is set to 1.
- Consider an example: suppose the following disk blocks are free and the rest are allocated: 1, 2, 4, 5, 6, 7, 9, 10, 12, 13, 14, 18, 19, 21, 26, 27, 28.
- Then for this allocation we have the following free-space bitmap:
01101111011011100011010000111…
- This method of free-space management is relatively simple and efficient.
- In particular, it is efficient at locating the first free block, or n consecutive free blocks, in the storage area.
- But this method can be inefficient if the bitmap is not kept in the main memory of the system; the map can also be written to disk occasionally for recovery needs.
- Keeping the whole map in physical memory is easy when the disk is small, but this is not always possible for systems with larger disks.
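A small sketch of the two lookups described above, using the convention from the example (1 = free, 0 = allocated) and the bitmap given in the post:

```python
bitmap = "01101111011011100011010000111"

def first_free(bitmap):
    """Index of the first free block (first 1 bit), or -1 if none."""
    return bitmap.find("1")

def find_consecutive_free(bitmap, n):
    """Index of the first run of n consecutive free blocks, or -1."""
    return bitmap.find("1" * n)

print(first_free(bitmap))                # 1: block 0 is allocated, block 1 is free
print(find_consecutive_free(bitmap, 4))  # 4: blocks 4-7 form the first run of 4
```

A real file system would store the bitmap packed into machine words and scan a word at a time, but the string form keeps the idea visible.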

What is a Linked list?
- In this method, all the free blocks are linked together: a pointer to the first free block is kept cached in memory, the pointer to the second free block is stored inside the first free block, and so on through the chain.
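A toy simulation of this layout, where the "disk" is an array of blocks and each free block stores the index of the next free block; the specific block numbers are invented for illustration:

```python
NIL = -1                 # end-of-chain marker

# A disk of 8 blocks; free blocks 2, 5, and 6 are chained together.
disk = [None] * 8
head = 2                 # cached pointer to the first free block
disk[2] = 5              # each free block holds the next free block's index
disk[5] = 6
disk[6] = NIL

def allocate_block(disk, head):
    """Take the first free block off the chain; return (block, new head)."""
    if head == NIL:
        return NIL, NIL  # no free blocks left
    return head, disk[head]

block, head = allocate_block(disk, head)
print(block, head)       # 2 5: block 2 is allocated, block 5 becomes the head
```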

What is Grouping?
- This method is a modified version of the free-list approach: the first free block stores the addresses of n free blocks.
- The first n-1 of these blocks are actually free, while the last one holds the addresses of another n free blocks, and so on.
- This lets the system find the addresses of a large number of free blocks quickly, which is not possible with the plain linked-list approach.
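A toy sketch of grouping with n = 4; the disk contents are invented for illustration, and each index block lists three genuinely free blocks plus the address of the next index block:

```python
# Each index block holds n = 4 entries: the first n-1 are free blocks,
# the last points to the next index block (None ends the chain).
index_blocks = {
    2:  [5, 6, 9, 10],      # blocks 5, 6, 9 are free; block 10 is the next index
    10: [12, 13, 14, None]  # blocks 12, 13, 14 are free; the chain ends here
}

def all_free_blocks(index_blocks, first):
    """Walk the chain of index blocks, yielding every free block address."""
    block = first
    while block is not None:
        *free, nxt = index_blocks[block]
        yield from free
        block = nxt

print(list(all_free_blocks(index_blocks, 2)))  # [5, 6, 9, 12, 13, 14]
```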


What is Counting?
- This approach takes advantage of the fact that contiguous blocks are often allocated and freed together, whether through clustering or through a contiguous-allocation algorithm.
- Thus, rather than listing every free block, it only needs to keep the address of the first free block of each contiguous run, together with a count of how many free blocks follow it.
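A small sketch of counting as a list of (first block, count) pairs; applied to the free blocks from the bit vector example above, the run-length form is noticeably shorter:

```python
free_blocks = [1, 2, 4, 5, 6, 7, 9, 10, 12, 13, 14, 18, 19, 21, 26, 27, 28]

def to_runs(blocks):
    """Compress a sorted block list into (first block, count) pairs."""
    runs = []
    for b in blocks:
        if runs and runs[-1][0] + runs[-1][1] == b:
            runs[-1][1] += 1        # extends the current contiguous run
        else:
            runs.append([b, 1])     # starts a new run
    return [tuple(r) for r in runs]

print(to_runs(free_blocks))
# [(1, 2), (4, 4), (9, 2), (12, 3), (18, 2), (21, 1), (26, 3)]
```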


Tuesday, May 21, 2013

Define the Virtual Memory technique?


Modern operating systems come with multitasking kernels, and these kernels often run into problems related to memory management: because physical memory is fragmented, it does not suffice for executing the tasks assigned to them, so they have to take some additional space from secondary storage. But they cannot use this space directly. Virtual memory offers a solution to this problem.

What is the Virtual Memory technique?

- Using this technique makes the fragmented main memory available to the kernel and its tasks as if it were contiguous.
- Since it is not really the main memory but only appears to be, it is called virtual memory, and this technique is called the virtual memory technique.
- Since it helps in managing memory, it is essentially a memory-management technique.
- Main storage gets fragmented because of many programming and processing patterns.
- The virtual memory technique virtualizes the main memory available to processes and tasks, so that it appears to each process as a contiguous address space.
- Virtual address spaces such as these are managed by the operating system, and the operating system itself assigns real memory to the virtual memory.
- The virtual addresses of the allocated virtual address spaces are translated into physical addresses automatically by the CPU, with the help of memory-management hardware specially designed for this purpose.
- Processes continue to execute uninterrupted as long as this hardware properly translates their virtual addresses into real memory addresses.
- If it fails to do so at any point (a page fault), execution comes to a halt and control is transferred to the operating system.
- The duty of the operating system is then to move the requested memory page from the backing store into main memory; once done with this, it returns control to the process that was interrupted (a sketch of this translation step follows the list).
- This greatly simplifies the whole execution process: even if an application requires more code or data than would fit in real memory at once, the application itself does not have to shuttle pieces to and fro between the backing store and real memory.
- Furthermore, this technique also offers protection: processes are given distinct address spaces, and the memory allocated to each is isolated from other tasks.
- Application programming has been made a lot easier by the virtual memory technique, since it hides the fragmentation of real memory from the program.
- The burden of managing the memory hierarchy is delegated to the kernel, which eliminates the need for programs to handle overlays explicitly.
- Thus each process can execute in an address space that is dedicated to it, and the need to relocate program code, or to access memory through relative addressing, is obviated.
- The concept of virtual memory was later generalized under the name memory virtualization.
- Gradually, virtual memory has become an inseparable part of the architecture of modern computers, and dedicated hardware support, built into the CPU as some form of memory-management unit, is necessary for implementing it.
- If required, virtual machines and emulators may employ additional hardware support for boosting the performance of virtual memory.
- The older mainframe computers did not have any support for virtual memory.
- Under the virtual memory technique, each program accesses memory solely through virtual addresses.
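A minimal sketch of that translation step with a toy page table; the 4 KB page size, the table contents, and the frame chosen on a page fault are all assumptions for illustration:

```python
PAGE_SIZE = 4096                 # assumed page size (4 KB)

# Toy page table: virtual page number -> physical frame number.
# None models a page that currently lives on the backing store.
page_table = {0: 7, 1: 3, 2: None}

def translate(vaddr, page_table):
    """Translate a virtual address, handling a page fault along the way."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise MemoryError(f"page {vpn} is not mapped at all")
    if page_table[vpn] is None:  # page fault: the OS brings the page in
        page_table[vpn] = 9      # hypothetical frame picked by the OS
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234, page_table)))  # 0x3234: page 1 maps to frame 3
print(hex(translate(0x2010, page_table)))  # 0x9010: page 2 faults, then maps to frame 9
```

In real hardware the lookup and the fault are handled by the MMU and the kernel respectively; the function above collapses both into one place just to show the flow.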


Sunday, April 28, 2013

What is fragmentation? What are different types of fragmentation?


In the field of computer science, fragmentation is an important factor concerning the performance of a system: it has a great role to play in bringing down the performance of computers.

What is Fragmentation?

- It can be defined as a phenomenon involving the inefficient use of storage space, which in turn reduces the capacity of the system and also brings down its performance.
- This phenomenon leads to the wastage of memory; the term itself essentially means 'wasted space'.
- Fragmentation takes three different forms, as mentioned below:
  1. External fragmentation
  2. Internal fragmentation and
  3. Data fragmentation
- All these forms of fragmentation might be present in conjunction with each other or in isolation.
- In some cases, fragmentation might be accepted in exchange for simplicity and speed of the system.

Basic principle behind the fragmentation concept:
- Whenever a computer program requests memory, the system allocates it in the form of blocks or chunks.
- When the program has finished executing, the allocated chunks are returned to the pool of free system memory.
- The size of the memory chunks required varies from program to program.
- In its lifetime, a program may request any number of memory chunks and free them after use.
- When a program begins its execution, the memory areas that are free to be allocated are contiguous and long.
- After prolonged usage, these contiguous memory areas get fragmented into smaller parts.
- Later, a stage comes when it becomes almost impossible to serve the large memory demands of a program.

Types of Fragmentation


1. External Fragmentation:
- This type of fragmentation occurs when the available memory gets divided into smaller blocks that are interspersed with allocated memory.
- Certain memory-allocation algorithms have the weakness that they are at times unable to arrange the memory used by programs in a way that minimizes wastage.
- This leads to an undesirable situation where, even though free memory exists, it cannot be used effectively, since it is divided into very small pieces that alone cannot satisfy the memory demands of the programs.
- Since the unusable storage lies outside the allocated memory regions, this type of fragmentation is called external fragmentation.
- This type of fragmentation is also very common in file systems, since many files of different sizes are created and deleted there over time.
- The effect is worse if a deleted file was stored in many small pieces, because this leaves behind similarly small free chunks which might be of no use.

2. Internal Fragmentation:
- Certain rules govern the process of memory allocation, and these can lead to the allocation of more memory than is actually required.
- For example, a rule may state that the memory allocated to programs must be divisible by 4, 8, or 16; so if some program actually requires 19 bytes, it gets 20 bytes.
- This leads to the wastage of the extra 1 byte of memory (a small illustration follows this list).
- This wasted memory is unusable and is contained within the allocated region itself, which is why this type of fragmentation is called internal fragmentation.
- In computer forensic investigation, this slack space is a most useful source of evidence.
- However, it is often difficult to reclaim internal fragmentation; making a change in the design of the allocator is the most effective way of preventing it.
- Memory pools in dynamic memory allocation are among the most effective methods for cutting down internal fragmentation, since they spread the space overhead over a large number of objects.
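A tiny illustration of that rounding rule, assuming the 4-byte alignment used in the example above:

```python
ALIGN = 4   # assume allocations are rounded up to a multiple of 4 bytes

def allocated_size(requested):
    """Round a request up to the next alignment boundary."""
    return (requested + ALIGN - 1) // ALIGN * ALIGN

requested = 19
granted = allocated_size(requested)
print(granted, granted - requested)  # 20 1: one byte of internal fragmentation
```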

3. Data Fragmentation:
- This occurs when a piece of data gets broken up into many pieces that lie far apart from each other.


Wednesday, April 21, 2010

Overview of Nanotechnology and its applications

Nanotechnology, shortened to "nanotech", is the study of controlling matter on an atomic and molecular scale. Generally, nanotechnology deals with structures 100 nanometers or smaller in at least one dimension, and involves developing materials or devices within that size range.
With nanotechnology, a large set of materials and improved products rely on a change in the physical properties when the feature sizes are shrunk.

Nanotechnology Applications in Medicine
The biological and medical research communities have exploited the unique properties of nanomaterials for various applications. Terms such as biomedical nanotechnology, nanobiotechnology, and nanomedicine are used to describe this hybrid field.
- Nanotechnology-on-a-chip is one more dimension of lab-on-a-chip technology.
- Nanotechnology has been a boon to the medical field, enabling the delivery of drugs to specific cells using nanoparticles.
- Nanotechnology can help to reproduce or to repair damaged tissue. “Tissue engineering” makes use of artificially stimulated cell proliferation by using suitable nanomaterial-based scaffolds and growth factors.

Nanotechnology Applications in Electronics
Nanotechnology holds some answers for how we might increase the capabilities of electronics devices while we reduce their weight and power consumption.

Nanotechnology Applications in Space
Advancements in nanomaterials make lightweight solar sails and a cable for the space elevator possible. By significantly reducing the amount of rocket fuel required, these advances could lower the cost of reaching orbit and traveling in space. In addition, new materials combined with nanosensors and nanorobots could improve the performance of spaceships, spacesuits, and the equipment used to explore planets and moons, making nanotechnology an important part of the ‘final frontier.’

Nanotechnology Applications in Food
Nanotechnology is having an impact on several aspects of food science, from how food is grown to how it is packaged. Companies are developing nanomaterials that will make a difference not only in the taste of food, but also in food safety, and the health benefits that food delivers.

