Introduction to Cache Loaders and Writers

Ehcache clustering is not yet compatible with cache-through.

This section documents the specifics behind the cache-through implementation in Ehcache. Refer to the section Cache Usage Patterns if you are not familiar with terms like cache-through, read-through, write-through or system of record.

Ehcache merges the concepts of read-through and write-through behind a single interface, the CacheLoaderWriter.

As its API indicates, the interface groups its methods by role:

read-through

The load(K) and loadAll(Iterable<? extends K>) methods cover the read-through part of cache-through.

write-through

The write(K, V), writeAll(Iterable<? extends Map.Entry<? extends K, ? extends V>>), delete(K) and deleteAll(Iterable<? extends K>) methods cover the write-through part of cache-through.

The reasoning behind having a unified interface is that if you want a read-through only cache, you need to decide what to do about mutative method calls. What happens if someone calls put(K, V) on the cache? This risks making it inconsistent with the underlying system of record.

In this context, the unified interface forces you to make a choice: either implement the write and delete methods as no-ops, or have them throw to reject mutation.

A write-through only cache remains possible as well: simply implement the load* methods as no-ops.
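To make the choice concrete, here is a minimal sketch of a read-through-only implementation whose mutative methods throw. The LoaderWriter interface and ReadOnlyLoaderWriter class below are hypothetical, simplified stand-ins (single-key methods only), not the actual Ehcache API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for Ehcache's CacheLoaderWriter interface
// (single-key methods only; the real interface also has the *All variants).
interface LoaderWriter<K, V> {
    V load(K key);
    void write(K key, V value);
    void delete(K key);
}

// Read-through only: load delegates to the system of record, while the
// mutative methods throw so the cache cannot drift from the system of record.
class ReadOnlyLoaderWriter implements LoaderWriter<Long, String> {
    private final Map<Long, String> systemOfRecord = new ConcurrentHashMap<>();

    ReadOnlyLoaderWriter(Map<Long, String> initial) {
        systemOfRecord.putAll(initial);
    }

    @Override
    public String load(Long key) {
        return systemOfRecord.get(key);
    }

    @Override
    public void write(Long key, String value) {
        throw new UnsupportedOperationException("This cache is read-through only");
    }

    @Override
    public void delete(Long key) {
        throw new UnsupportedOperationException("This cache is read-through only");
    }
}
```

Throwing makes accidental mutation fail fast; the no-op alternative silently accepts the cache diverging from the system of record, which is rarely what you want.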

Write-behind

An additional feature provided by Ehcache is write-behind, where writes are made asynchronously to the backing system of record. In Ehcache this works by registering a wrapper around your provided CacheLoaderWriter implementation.

From there, you will have extra configuration options around batching and coalescing of writes.

Ehcache does not support retry of failed writes at the write-behind wrapper level. You, as the application developer and system of record owner, know best when and how a retry should happen. So if you need that functionality, make it part of your CacheLoaderWriter implementation.
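A retry policy inside your own implementation could be sketched as follows. The RetryingWriter class, its simulated-failure counter, and the map-backed system of record are all hypothetical, for illustration only:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical writer that retries a failed write a bounded number of times.
// The retry policy lives here, in your own code, not in Ehcache's
// write-behind wrapper.
class RetryingWriter {
    private final Map<Long, String> systemOfRecord = new ConcurrentHashMap<>();
    private int failuresToSimulate;  // simulates transient outages of the system of record
    int attempts = 0;

    RetryingWriter(int failuresToSimulate) {
        this.failuresToSimulate = failuresToSimulate;
    }

    // Attempt the write, retrying up to maxRetries additional times on failure.
    void write(Long key, String value, int maxRetries) throws Exception {
        Exception last = null;
        for (int i = 0; i <= maxRetries; i++) {
            try {
                attempts++;
                if (failuresToSimulate > 0) {
                    failuresToSimulate--;
                    throw new Exception("transient failure");
                }
                systemOfRecord.put(key, value);
                return;  // success
            } catch (Exception e) {
                last = e;
            }
        }
        throw last;  // retries exhausted: surface the last failure
    }

    String read(Long key) {
        return systemOfRecord.get(key);
    }
}
```

In a real implementation the retry loop would typically also back off between attempts and distinguish retryable from permanent failures.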

Write-behind introduces the following concepts:

queue size

Indicates how many pending write operations there can be before applying back pressure on cache operations.

concurrency level

Indicates how many parallel processing threads and queues there will be for write-behind. Effectively the maximum number of in-flight writes is concurrency level * queue size.

batching and batch size

Mutative operations will be grouped in batch size sets before reaching the CacheLoaderWriter. When batching, the queue size is effectively the number of pending batches there can be. This means that the maximum number of in-flight writes becomes concurrency level * queue size * batch size.

coalescing

When batching, coalescing means that you only send the latest mutation on a per key basis to the CacheLoaderWriter.

maximum write delay

When batching, you can indicate the maximum write delay for an incomplete batch. After this time has elapsed, the batch is processed even if incomplete.
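The coalescing behavior described above can be sketched with plain collections. The CoalescingBatch class and its map-backed system of record are hypothetical, shown only to illustrate the per-key semantics:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of coalescing within a write-behind batch: pending mutations are
// keyed by cache key, so a newer write for the same key replaces the older
// one and only the latest value per key reaches the system of record.
class CoalescingBatch {
    private final Map<Long, String> pending = new LinkedHashMap<>();

    // A second add() for the same key coalesces with the earlier one.
    void add(Long key, String value) {
        pending.put(key, value);
    }

    // Flush the batch to a (hypothetical) map-backed system of record.
    void flushTo(Map<Long, String> systemOfRecord) {
        systemOfRecord.putAll(pending);
        pending.clear();
    }

    int size() {
        return pending.size();
    }
}
```

With coalescing enabled, two writes to key 42 within one batch cost a single write to the system of record; intermediate values are never seen downstream, which is the trade-off to weigh if your system of record needs the full history.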

Implementing Cache-Through

CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder().build(true);

Cache<Long, String> writeThroughCache = cacheManager.createCache("writeThroughCache",
    CacheConfigurationBuilder.newCacheConfigurationBuilder(Long.class, String.class, ResourcePoolsBuilder.heap(10))
        .withLoaderWriter(new SampleLoaderWriter<>(singletonMap(41L, "zero"))) (1)
        .build());

assertThat(writeThroughCache.get(41L), is("zero")); (2)
writeThroughCache.put(42L, "one"); (3)
assertThat(writeThroughCache.get(42L), equalTo("one"));

cacheManager.close();
1 We register a sample CacheLoaderWriter that knows about the mapping (41L maps to "zero").
2 Since the cache has no content yet, this will delegate to the CacheLoaderWriter. The returned mapping will populate the cache and be returned to the caller.
3 While creating this cache mapping, the CacheLoaderWriter will be invoked to write the mapping into the system of record.
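The SampleLoaderWriter used in these examples is not shown here. A minimal map-backed sketch of what such a class might look like (simplified to single-key methods; not the actual class from the Ehcache samples) is:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical map-backed stand-in for a system of record, sketching what
// the SampleLoaderWriter used in the examples might do.
class SampleLoaderWriter<K, V> {
    private final Map<K, V> data = new ConcurrentHashMap<>();

    SampleLoaderWriter(Map<K, V> initial) {
        data.putAll(initial);
    }

    public V load(K key) {             // read-through: cache miss lands here
        return data.get(key);
    }

    public void write(K key, V value) { // write-through: cache put lands here
        data.put(key, value);
    }

    public void delete(K key) {         // write-through: cache remove lands here
        data.remove(key);
    }
}
```

In production the map would be replaced by calls to the real system of record, e.g. a database or remote service.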

Adding Write-Behind

CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder().build(true);

Cache<Long, String> writeBehindCache = cacheManager.createCache("writeBehindCache",
    CacheConfigurationBuilder.newCacheConfigurationBuilder(Long.class, String.class, ResourcePoolsBuilder.heap(10))
        .withLoaderWriter(new SampleLoaderWriter<>(singletonMap(41L, "zero"))) (1)
        .add(WriteBehindConfigurationBuilder (2)
            .newBatchedWriteBehindConfiguration(1, TimeUnit.SECONDS, 3) (3)
            .queueSize(3) (4)
            .concurrencyLevel(1) (5)
            .enableCoalescing()) (6)
        .build());

assertThat(writeBehindCache.get(41L), is("zero"));
writeBehindCache.put(42L, "one");
writeBehindCache.put(43L, "two");
writeBehindCache.put(42L, "This goes for the record");
assertThat(writeBehindCache.get(42L), equalTo("This goes for the record"));

cacheManager.close();
1 For write-behind you need a configured CacheLoaderWriter.
2 Additionally, register a WriteBehindConfiguration on the cache by using the WriteBehindConfigurationBuilder.
3 Here we configure write-behind with batching: a batch size of 3 and a maximum write delay of 1 second.
4 We also set the maximum size of the write-behind queue.
5 Define the concurrency level of write-behind queue(s). This indicates how many writer threads work in parallel to update the underlying system of record asynchronously.
6 Enable the write coalescing behavior, which ensures that only one update per key per batch reaches the underlying system of record.