
Monday, April 29, 2013

What is cache memory?


Cache memory is a small, fast memory that significantly speeds up a computer. 
- In cache memory, data is stored transparently so that future requests for that data can be served faster. 
- A cache may hold values that have already been computed, or duplicates of values stored elsewhere in memory. 
- Whenever some data is requested, it is first looked up in the cache. 
- If the data is found there, it is returned to the processor; this is called a 'cache hit'. 
- In this case the time taken to access the data is reduced, since the cache is faster than main memory. 
- The other case is a 'cache miss', when the required data is not found in the cache.
- The data then has to be fetched or recomputed from its original source or storage location, which is obviously slower (this lookup flow is sketched in code after the list). 
- The overall performance of the system increases in proportion to the number of requests that can be served from the cache.
- To remain cost-effective as well as efficient in data usage, the cache is kept relatively small compared to the main memory. 
- Even so, caches have proven their worth time and again because of their ability to exploit the access patterns of applications that have some locality of reference. 
- References exhibit temporal locality if data that was previously requested is requested again.
- References also exhibit spatial locality if the requested data is stored close to data that was previously requested.
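Below is a minimal Python sketch of that lookup flow. The slow_fetch function is a hypothetical stand-in for the slower backing store, and the request sequence is invented purely to show temporal locality; neither comes from the post.

# A minimal sketch of the cache lookup flow described above.
# slow_fetch() is a hypothetical stand-in for the slower backing store.

def slow_fetch(key):
    # Pretend this reads from main memory or disk.
    return key * 2

cache = {}
hits = 0
misses = 0

def lookup(key):
    global hits, misses
    if key in cache:           # cache hit: serve directly from the cache
        hits += 1
        return cache[key]
    misses += 1                # cache miss: go to the slow source...
    value = slow_fetch(key)
    cache[key] = value         # ...and keep a copy for future requests
    return value

for key in [1, 2, 1, 3, 1, 2]:   # temporal locality: 1 and 2 recur
    lookup(key)

print(f"hit rate: {hits / (hits + misses):.0%}")  # 3 of 6 requests hit

The repeated keys in the request sequence are what make the cache pay off: half of the six requests are served without ever touching slow_fetch.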

How is a cache implemented?

- The cache is implemented by the hardware as a block of memory that serves as temporary storage. 
- Only data that is likely to be accessed again and again is stored in it. 
- Caches are used not only by hard drives and CPUs but also by web servers and browsers. 
- A cache is made up of a pool of entries. 
- Each entry holds a datum, a copy of which is stored in the backing store. 
- Each entry is also tagged; the tag specifies the identity of the datum in the backing store.
- When a cache client (an operating system, CPU, or web browser) needs to access a datum that it thinks might be available in the backing store, it checks the cache first. 
- If the desired entry is found, it is returned for use. This is a cache hit.
- For example, a web browser might look in its local cache on disk to see if it already has the contents of a web page. 
- In this case the URL serves as the search tag and the page contents are the datum. 
- The fraction of accesses served successfully from the cache is known as the hit rate of the cache.
- In case of a cache miss, the uncached datum is copied into the cache so as to prevent future cache misses. 
- To make room for this datum, some already existing datum in the cache is removed. 
- Which datum is removed is determined by a replacement algorithm; a small least-recently-used sketch follows this list.
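One common replacement algorithm is least recently used (LRU), which evicts the entry that has gone unused the longest. Here is a small Python sketch; the capacity of two and the tags and datums stored are illustrative choices, not details from the post.

# A small LRU (least-recently-used) replacement sketch.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # tag -> datum, oldest first

    def get(self, tag):
        if tag not in self.entries:
            return None                # cache miss
        self.entries.move_to_end(tag)  # mark as most recently used
        return self.entries[tag]

    def put(self, tag, datum):
        if tag in self.entries:
            self.entries.move_to_end(tag)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[tag] = datum

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now most recently used
cache.put("c", 3)      # evicts "b", the least recently used entry
print(cache.get("b"))  # None: "b" was replaced

A real hardware cache implements the same idea with tag comparators and replacement bits rather than a dictionary, but the eviction decision is the same.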


Thursday, December 17, 2009

Introduction to Caching

Caching is a well-known concept: where programs continually access the same set of instructions, a massive performance benefit can be realized by storing those instructions in RAM. Rather than having to access the disk thousands or even millions of times during execution, the program can quickly retrieve them from RAM.
A cache is made up of a pool of entries. Each entry has a datum (a nugget of data), a copy of the same datum in some backing store. Each entry also has a tag, which specifies the identity of the datum in the backing store of which the entry is a copy.
When the cache client (a CPU, web browser, or operating system) needs to access a datum presumed to exist in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired datum, the datum in the entry is used instead. This situation is known as a cache hit. The alternative situation, when the cache is consulted and found not to contain a datum with the desired tag, is known as a cache miss. The previously uncached datum fetched from the backing store during miss handling is usually copied into the cache, ready for the next access.
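Python's standard library packages this exact pattern as functools.lru_cache, which memoizes a function: the arguments act as the tag and the return value as the datum. The fib function and the maxsize of 128 below are illustrative choices.

# functools.lru_cache memoizes a function: the arguments act as the
# tag and the return value as the datum. maxsize bounds the cache,
# evicting least-recently-used entries when it fills.
from functools import lru_cache

@lru_cache(maxsize=128)
def fib(n):
    # Each distinct n is computed once; repeated calls are cache hits.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(30)
print(fib.cache_info())  # reports hits, misses, maxsize, and current size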
When a system writes a datum to the cache, it must at some point write that datum to the backing store as well. The timing of this write is controlled by what is known as the write policy; the first two policies below are sketched in code after the list.
- In a write-through cache, every write to the cache causes a synchronous write to the backing store.
- In a write-back (or write-behind) cache, writes are not immediately mirrored to the store. Instead, the cache tracks which of its locations have been written to and marks these locations as dirty. The data in these locations is written back to the backing store when it is evicted from the cache, an effect referred to as a lazy write.
- No-write allocation is a cache policy which caches only processor reads, thus avoiding the need for write-back or write-through when the old value of the datum was absent from the cache prior to the write.
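As a rough illustration, here is a minimal sketch of the write-through and write-back policies. The backing_store dict standing in for the slower storage is an assumption of this example, as are the class and method names.

# Write-through vs. write-back, sketched over plain dicts.
backing_store = {}  # stand-in for the slow backing storage

class WriteThroughCache:
    def __init__(self):
        self.entries = {}

    def write(self, tag, datum):
        self.entries[tag] = datum
        backing_store[tag] = datum  # synchronous write on every store

class WriteBackCache:
    def __init__(self):
        self.entries = {}
        self.dirty = set()  # tags written to the cache but not yet mirrored

    def write(self, tag, datum):
        self.entries[tag] = datum
        self.dirty.add(tag)  # mark dirty; defer the slow write

    def evict(self, tag):
        if tag in self.dirty:  # the deferred "lazy write" happens here
            backing_store[tag] = self.entries[tag]
            self.dirty.discard(tag)
        del self.entries[tag]

Write-back trades durability for speed: if the cache contents are lost before eviction, the dirty data never reaches the backing store.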

