Wiktionary defines a cache as a store of things that will be required in the future, and can be retrieved rapidly. A cache is a collection of temporary data that either duplicates data located elsewhere or is the result of a computation. Data that is already in the cache can be repeatedly accessed with minimal costs in terms of time and resources.
A cache entry consists of a key and its mapped data value within the cache. This is also sometimes referred to as a cache mapping.
When a data entry is requested from cache and the entry exists for the given key, it is referred to as a cache hit (or simply, a hit).
When a data entry is requested from cache and the entry does not exist for the given key, it is referred to as a cache miss (or simply, a miss).
Any time a data entry is requested from the cache, the cache is accessed, whether the outcome is a hit or a miss.
The hit ratio is the ratio of cache hits to cache accesses, or hits / accesses. For instance, a cache accessed 100 times that hit 90 times has a hit ratio of 90 / 100, i.e. 0.9 or 90%.
The miss ratio is the ratio of cache misses to cache accesses, or misses / accesses. For instance, a cache accessed 100 times that missed 30 times has a miss ratio of 30 / 100, i.e. 0.3 or 30%. It is the complement of the hit ratio, as hit ratio and miss ratio always sum to 1.0, or 100%.
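These counters and ratios can be tracked with a simple map-backed wrapper. The following is a minimal sketch; the `CountingCache` class and its method names are illustrative assumptions, not an Ehcache API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch, not an Ehcache API: a map-backed cache that
// counts hits and misses so the two ratios can be derived.
public class CountingCache<K, V> {
    private final Map<K, V> entries = new HashMap<>();
    private long hits;
    private long misses;

    public V get(K key) {
        V value = entries.get(key);
        if (value != null) {
            hits++;   // entry present for the key: a cache hit
        } else {
            misses++; // entry absent for the key: a cache miss
        }
        return value;
    }

    public void put(K key, V value) {
        entries.put(key, value);
    }

    // accesses = hits + misses, so hitRatio() + missRatio() is always 1.0
    public double hitRatio()  { return (double) hits / (hits + misses); }
    public double missRatio() { return (double) misses / (hits + misses); }
}
```

With 90 hits out of 100 accesses, `hitRatio()` returns 0.9, matching the example above.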
The system-of-record (SOR) is the authoritative source of truth for the data. The cache acts as a local copy of data retrieved from or stored to the SOR. The SOR is often a traditional database, although it might be a specialized file system or some other reliable long-term storage. It can also be a conceptual component such as an expensive computation.
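The relationship between a cache and its SOR is often expressed as a get-or-load step: consult the cache first, and on a miss fetch from the SOR and remember the result. A minimal cache-aside sketch, where the loader function standing in for the SOR is an illustrative assumption rather than Ehcache's API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative cache-aside sketch: on a miss, the value is loaded
// from the system-of-record (SOR) and cached for subsequent reads.
public class SorBackedCache<K, V> {
    private final Map<K, V> entries = new HashMap<>();
    private final Function<K, V> sorLoader; // stands in for a database read or expensive computation

    public SorBackedCache(Function<K, V> sorLoader) {
        this.sorLoader = sorLoader;
    }

    public V get(K key) {
        // hit: return the cached copy; miss: load from the SOR and cache it
        return entries.computeIfAbsent(key, sorLoader);
    }
}
```

A second read of the same key is served from the cache without touching the SOR again.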
Eviction is the removal of entries from the cache in order to make room for newer entries, typically when the cache has run out of storage capacity.
Expiration is the removal of entries from the cache after some amount of time has passed, typically as a strategy to avoid stale data in the cache.
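Expiration is often implemented as a time-to-live (TTL) check on read: each entry remembers when it was written, and a read past the TTL treats it as absent. This is a minimal sketch of that idea; the timestamping scheme is an illustrative assumption, not how Ehcache stores entry metadata:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative TTL expiration sketch: an entry older than the
// time-to-live is treated as stale and dropped on read.
public class ExpiringCache<K, V> {
    private static final class Timestamped<T> {
        final T value;
        final long writtenAtMillis;
        Timestamped(T value, long writtenAtMillis) {
            this.value = value;
            this.writtenAtMillis = writtenAtMillis;
        }
    }

    private final Map<K, Timestamped<V>> entries = new HashMap<>();
    private final long ttlMillis;

    public ExpiringCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void put(K key, V value) {
        entries.put(key, new Timestamped<>(value, System.currentTimeMillis()));
    }

    public V get(K key) {
        Timestamped<V> entry = entries.get(key);
        if (entry == null) return null;
        if (System.currentTimeMillis() - entry.writtenAtMillis > ttlMillis) {
            entries.remove(key); // expired: drop the stale entry
            return null;
        }
        return entry.value;
    }
}
```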
When large caches put too much pressure on the garbage collector (GC), a common resort is to store the cache's contents off-heap, i.e. still in the memory of the JVM process but out of reach of the GC. The off-heap implementation Ehcache uses is Terracotta's port of dlmalloc to Java, backed by NIO direct ByteBuffers.
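The building block behind such off-heap storage is the NIO direct buffer, whose memory lives outside the Java heap and therefore adds no GC pressure. A minimal sketch of copying bytes off-heap and back (the `OffHeapDemo` class is illustrative, not Ehcache's off-heap store):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative sketch: a direct ByteBuffer is allocated outside the
// Java heap, so its contents are invisible to the garbage collector.
public class OffHeapDemo {
    public static byte[] roundTrip(String text) {
        byte[] payload = text.getBytes(StandardCharsets.UTF_8);
        ByteBuffer offHeap = ByteBuffer.allocateDirect(payload.length);
        offHeap.put(payload); // copy the bytes off-heap
        offHeap.flip();       // switch the buffer from writing to reading
        byte[] readBack = new byte[offHeap.remaining()];
        offHeap.get(readBack); // copy them back on-heap
        return readBack;
    }
}
```

Real off-heap stores add an allocator (such as the dlmalloc port mentioned above) and serialization on top of these raw buffers.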
Data that has recently been used by an application is very likely to be accessed again soon. Such data is considered hot. A cache may attempt to keep the hottest data most quickly available, while attempting to choose the least hot data for eviction. Following the Pareto distribution, you ideally want all your hot data to fit into your caches.
According to Data Science Central, the Pareto distribution is a skewed distribution with heavy, or "slowly decaying", tails (i.e. much of the data is in the tails). This is more commonly known as the 80% / 20% rule. The entire concept of caching is based on the Pareto distribution, as caches are only effective when their hit ratio reaches a certain level: as a general rule of thumb, 80% of your transactions should be served with cached data and the remaining 20% by data coming from other, more expensive means.