Ehcache is fast. Really fast.
It was designed and built to be.
For example, a straight
get(key) from the cache should take under 500 ns.
That’s fast, and more than sufficient for most purposes.
Keep that in mind, especially when comparing caching frameworks with a benchmark. If something looks 10% faster, that actually means 50 ns faster. Do you care enough about those 50 ns to spend your precious time benchmarking?
That said, the way you configure Ehcache can have an impact on performance. This document is a work in progress. It describes the performance impact of typical Ehcache configuration choices. It also covers some advanced tuning possibilities.
In the future, we will add some figures showing what "slower" means. However, always do your own benchmarks.
You probably know that the fastest store is on-heap. Until you overwhelm the garbage collector.
Your next best bet is off-heap.
Try to avoid disk. Use a remote drive or even an HDD at your own risk.
We won’t talk about clustering here because it’s a different realm and its performance is based on many factors.
The next question is: "Should I use a single tier?" Is a single off-heap tier faster than two tiers? The answer depends on what you do with it. Having two tiers is a bit slower on writes. It is also a bit slower on reads when the data is not found in the caching tier (on-heap). However, it will be faster for an entry that is indeed found there.
So again, it depends. The more your usage follows the caching hypothesis that the same data is always reused (and is thus in the caching tier), the more interesting having two tiers becomes.
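As a sketch of the difference, here is how a single off-heap tier compares with a heap-plus-offheap configuration using the Ehcache 3 builders (the types and sizes are arbitrary, chosen only for illustration):

```java
import org.ehcache.config.CacheConfiguration;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.EntryUnit;
import org.ehcache.config.units.MemoryUnit;

// Single tier: every read pays the off-heap (deserialization) cost
CacheConfiguration<Long, String> singleTier = CacheConfigurationBuilder
    .newCacheConfigurationBuilder(Long.class, String.class,
        ResourcePoolsBuilder.newResourcePoolsBuilder()
            .offheap(64, MemoryUnit.MB))
    .build();

// Two tiers: hot entries are served from heap, misses fall through to off-heap
CacheConfiguration<Long, String> twoTiers = CacheConfigurationBuilder
    .newCacheConfigurationBuilder(Long.class, String.class,
        ResourcePoolsBuilder.newResourcePoolsBuilder()
            .heap(1_000, EntryUnit.ENTRIES)   // caching tier
            .offheap(64, MemoryUnit.MB))      // authoritative tier
    .build();
```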
An on-heap tier can be limited to a number of entries or a number of bytes. When sizing in bytes, Ehcache needs to calculate the size of every object added to the cache. This is of course much slower than counting entries.
Size calculation is done using the SizeOf library. This library has multiple magic tricks for doing so and selects the fastest one available in a given environment. Check which one is actually used, to confirm there is no faster option on your platform.
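In builder form, the two sizing options look like this (a sketch; the limits are arbitrary):

```java
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.EntryUnit;
import org.ehcache.config.units.MemoryUnit;

// Counting entries: cheap, no object sizing involved
ResourcePoolsBuilder byEntries = ResourcePoolsBuilder.newResourcePoolsBuilder()
    .heap(10_000, EntryUnit.ENTRIES);

// Counting bytes: every put triggers a SizeOf calculation of the mapping
ResourcePoolsBuilder byBytes = ResourcePoolsBuilder.newResourcePoolsBuilder()
    .heap(32, MemoryUnit.MB);
```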
Off-heap, disk and clustering need to serialize keys and values before storing them. By default, Java serialization is used. It is well-known for not being the fastest thing around. Ehcache uses it because it is supported out of the box. However, you can increase performance by providing your own serializers.
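Registering a custom value serializer looks roughly like this. `FastStringSerializer` is a hypothetical class standing in for your own implementation of `org.ehcache.spi.serialization.Serializer<String>`:

```java
import org.ehcache.config.CacheConfiguration;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.MemoryUnit;

CacheConfiguration<Long, String> config = CacheConfigurationBuilder
    .newCacheConfigurationBuilder(Long.class, String.class,
        ResourcePoolsBuilder.newResourcePoolsBuilder()
            .offheap(32, MemoryUnit.MB))
    // FastStringSerializer is hypothetical; it must implement
    // Serializer<String> (serialize, read, equals)
    .withValueSerializer(FastStringSerializer.class)
    .build();
```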
By default, on-heap storage stores entries by reference. You might want to use a copier to store entries by value for whatever reason. This can be much slower, so watch out.
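Storing by value is configured through a copier; a sketch, where `StringCopier` is a hypothetical implementation of `org.ehcache.spi.copy.Copier<String>`:

```java
import org.ehcache.config.CacheConfiguration;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.EntryUnit;

CacheConfiguration<Long, String> config = CacheConfigurationBuilder
    .newCacheConfigurationBuilder(Long.class, String.class,
        ResourcePoolsBuilder.newResourcePoolsBuilder()
            .heap(1_000, EntryUnit.ENTRIES))
    // StringCopier is hypothetical; its copyForRead/copyForWrite methods
    // run on every access, which is where the slowdown comes from
    .withValueCopier(StringCopier.class)
    .build();
```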
A loader-writer is interesting for many reasons. First, it protects you against the thundering herd problem. However, it needs to go through more complicated code to do so.
We expect it to be a tiny bit slower, but nothing noticeable enough to prevent you from using it.
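Wiring one in is a one-liner on the builder. `JdbcLoaderWriter` is a hypothetical implementation of `org.ehcache.spi.loaderwriter.CacheLoaderWriter<Long, String>`:

```java
import org.ehcache.config.CacheConfiguration;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.EntryUnit;

CacheConfiguration<Long, String> config = CacheConfigurationBuilder
    .newCacheConfigurationBuilder(Long.class, String.class,
        ResourcePoolsBuilder.newResourcePoolsBuilder()
            .heap(1_000, EntryUnit.ENTRIES))
    // JdbcLoaderWriter is hypothetical; on a miss, one thread loads the
    // value while the others wait for its result, which is what prevents
    // the thundering herd
    .withLoaderWriter(new JdbcLoaderWriter())
    .build();
```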
A cache with no expiration will always be faster.
If you need to set an expiration time, TTL (time-to-live) will be the faster one. This is because the expiration time of an entry is calculated and updated only when the entry is inserted or updated in the cache. It still requires an expiration check at access time, though.
So you can expect a 2% drop in performance when using TTL. Not bad.
TTI (time-to-idle) is slower than TTL because the expiration time must be recalculated and updated each time the entry is accessed.
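With the Ehcache 3 builders, the two flavors look like this (the durations are arbitrary):

```java
import java.time.Duration;
import org.ehcache.config.builders.ExpiryPolicyBuilder;
import org.ehcache.expiry.ExpiryPolicy;

// TTL: expiry computed at insert/update only, plus a cheap check on access
ExpiryPolicy<Object, Object> ttl =
    ExpiryPolicyBuilder.timeToLiveExpiration(Duration.ofMinutes(10));

// TTI: expiry recomputed and written back on every access, hence slower
ExpiryPolicy<Object, Object> tti =
    ExpiryPolicyBuilder.timeToIdleExpiration(Duration.ofMinutes(10));
```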
Ehcache won’t allocate any object during a simple get.
However, keep in mind that your configuration might do so.
For instance, let’s say you define an expiry policy that returns a new
Duration instance on every call. Every cache access will then allocate a fresh
Duration object. In this case, making the Duration a constant solves the problem.
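A sketch of such a policy (hypothetical code, not the original example from the test sources) with the Duration hoisted into a constant:

```java
import java.time.Duration;
import java.util.function.Supplier;
import org.ehcache.expiry.ExpiryPolicy;

class ConstantTtl implements ExpiryPolicy<Long, String> {
  // Allocated once; returning Duration.ofSeconds(60) directly from
  // getExpiryForCreation would instead allocate a Duration on every insert
  private static final Duration TTL = Duration.ofSeconds(60);

  @Override
  public Duration getExpiryForCreation(Long key, String value) {
    return TTL; // no allocation on the hot path
  }

  @Override
  public Duration getExpiryForAccess(Long key, Supplier<? extends String> value) {
    return null; // null means: keep the current expiry, nothing to allocate
  }

  @Override
  public Duration getExpiryForUpdate(Long key, Supplier<? extends String> oldValue,
                                     String newValue) {
    return TTL;
  }
}
```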
By default, Ehcache uses a
TimeSource that retrieves the system time at every call. It is fast, but not super
duper fast. It is, however, super duper accurate.
You can trade accuracy for speed by using a
TickingTimeSource. Please read its Javadoc for details, but the concept is
that a timer increases the time instead of retrieving the system time on every call.
TickingTimeSource, even with a granularity of 1 ms, can improve the performance of a
get by as much as 30%.
The drawback is that a timer runs continuously.
Also, the time might drift a bit from real time,
especially if the
systemUpdatePeriod granularity is large.
If your expiration needs to be tightly linked to real time, this can be a problem.
But in most cases, the drift doesn’t matter much.
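The idea behind TickingTimeSource can be sketched in plain Java (this is a conceptual illustration, not Ehcache’s actual implementation): a background timer refreshes a cached timestamp, so reading the time becomes a cheap volatile read instead of a system call.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class TickingClock implements AutoCloseable {
  private volatile long now = System.currentTimeMillis();
  private final ScheduledExecutorService timer =
      Executors.newSingleThreadScheduledExecutor(r -> {
        Thread t = new Thread(r, "ticking-clock");
        t.setDaemon(true); // don't keep the JVM alive for the timer
        return t;
      });

  TickingClock(long granularityMillis) {
    // The continuously running timer the text warns about
    timer.scheduleAtFixedRate(() -> now = System.currentTimeMillis(),
        granularityMillis, granularityMillis, TimeUnit.MILLISECONDS);
  }

  long currentTimeMillis() {
    // Cheap volatile read; may lag real time by up to the granularity
    return now;
  }

  @Override
  public void close() {
    timer.shutdownNow();
  }
}
```

The larger the granularity, the cheaper and the less accurate the clock, which is exactly the trade-off described above.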