Cache memory is a fast memory that speeds up a computer considerably.
- Data is stored in the cache transparently so that future requests for it can be served faster.
- A cache may hold values that have already been computed, or duplicates of values stored elsewhere in memory.
- Whenever data is requested, the cache is checked first; this lookup flow is walked through in the sketch after this list.
- If the data is found there, it is returned to the processor; this is called a 'cache hit'.
- A hit reduces the access time, because reading from the cache is faster than reading from main memory.
- The other case is a 'cache miss', when the required data is not found in the cache.
- The data then has to be fetched or recomputed from its original storage location, which is obviously slower.
- Overall system performance improves in proportion to the number of requests that can be served from the cache.
- To keep the cache cost-effective and used efficiently, its size is kept small compared to main memory.
- Even so, caches have proven themselves time and again, because typical applications access data with some locality of reference.
- References exhibit temporal locality when data that was requested before is requested again.
- References exhibit spatial locality when the requested data is stored close to data that was requested earlier.
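The hit/miss flow and the effect of temporal locality described above can be illustrated with a minimal Python sketch. This is not any particular system's implementation: the cache is just a dictionary, and `slow_backing_store` is a hypothetical stand-in for main memory or a recomputation.

```python
# Minimal sketch of the hit/miss flow: requests are looked up in the cache
# first and fall back to the slow source only on a miss.

def slow_backing_store(address):
    # Hypothetical stand-in for main memory; assumed to be slow.
    return f"data@{address}"

cache = {}          # address -> datum
hits = misses = 0

def read(address):
    global hits, misses
    if address in cache:          # cache hit: served quickly
        hits += 1
        return cache[address]
    misses += 1                   # cache miss: go to the slow source
    datum = slow_backing_store(address)
    cache[address] = datum        # keep it for future requests
    return datum

# Temporal locality: the same address is requested repeatedly,
# so every request after the first one is a hit.
for _ in range(5):
    read(0x40)
print(f"hit rate = {hits / (hits + misses):.0%}")   # 80%
```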
How is cache implemented?
- The cache is implemented by hardware as a block of memory used for temporary storage.
- Only data that is likely to be accessed again and again is stored here.
- Caches are used not only by CPUs and hard drives but also by web servers and browsers.
- A cache is made up of a pool of entries.
- Each entry holds a datum that is a copy of a datum in the backing store.
- Each entry is also tagged with an identifier that specifies which datum in the backing store it corresponds to.
- When a cache client (an operating system, a CPU, or a web browser, for example) needs a datum that it believes is available in the backing store, it checks the cache first.
- If an entry with the matching tag is found, its datum is returned for use; this is a cache hit.
- For example, a web browser might look in its local on-disk cache to see whether it already has the contents of a web page.
- In that case the URL serves as the tag and the page contents are the datum.
- The fraction of accesses that result in a hit is known as the cache's hit rate.
- On a cache miss, the uncached datum is fetched from the backing store and copied into the cache so that future requests for it can hit.
- To make room for it, an existing entry may have to be removed from the cache.
- Which entry to evict is decided by a replacement algorithm; a sketch using one such policy follows this list.
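The entry/tag/backing-store model above can be sketched in Python. This is only a sketch under stated assumptions: a fixed-capacity cache, a dictionary standing in for the backing store, and LRU (least recently used) chosen here as one possible replacement algorithm; the text above leaves the choice of policy open.

```python
from collections import OrderedDict

class Cache:
    """A fixed-capacity cache of tagged entries backed by a slower store."""

    def __init__(self, backing_store, capacity=4):
        self.backing_store = backing_store      # e.g. dict of URL -> page contents
        self.capacity = capacity
        self.entries = OrderedDict()            # tag -> datum, kept in access order
        self.hits = self.accesses = 0

    def get(self, tag):
        self.accesses += 1
        if tag in self.entries:                 # cache hit
            self.hits += 1
            self.entries.move_to_end(tag)       # mark as most recently used
            return self.entries[tag]
        datum = self.backing_store[tag]         # cache miss: fetch from the source
        if len(self.entries) >= self.capacity:  # make room for the new entry
            self.entries.popitem(last=False)    # evict the least recently used one
        self.entries[tag] = datum
        return datum

    def hit_rate(self):
        return self.hits / self.accesses if self.accesses else 0.0

# Usage: as in the browser example above, the tag is a URL and the datum is
# the page contents (the URLs and pages here are made up for illustration).
pages = {f"https://example.com/{i}": f"<html>page {i}</html>" for i in range(10)}
cache = Cache(pages, capacity=3)
for url in ["https://example.com/1", "https://example.com/2",
            "https://example.com/1", "https://example.com/3"]:
    cache.get(url)
print(f"hit rate = {cache.hit_rate():.0%}")     # 25%: only the repeated URL hits
```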