Clustering is not yet compatible with event listeners.

Introduction

Cache listeners allow implementers to register callback methods that will be executed when a cache event occurs.

Listeners are registered at the cache level, and therefore only receive events for the caches they are registered with.

CacheEventListenerConfigurationBuilder cacheEventListenerConfiguration = CacheEventListenerConfigurationBuilder
    .newEventListenerConfiguration(new ListenerObject(), EventType.CREATED, EventType.UPDATED) (1)
    .unordered().asynchronous(); (2)

final CacheManager manager = CacheManagerBuilder.newCacheManagerBuilder()
    .withCache("foo",
        CacheConfigurationBuilder.newCacheConfigurationBuilder(String.class, String.class, ResourcePoolsBuilder.heap(10))
            .add(cacheEventListenerConfiguration) (3)
    ).build(true);

final Cache<String, String> cache = manager.getCache("foo", String.class, String.class);
cache.put("Hello", "World"); (4)
cache.put("Hello", "Everyone"); (5)
cache.remove("Hello"); (6)
1 Create a CacheEventListenerConfiguration using the builder, indicating the listener (a minimal ListenerObject sketch follows this list) and the events to receive (in this case creation and update events)
2 Optionally indicate the delivery mode - the defaults are asynchronous and unordered (for performance reasons)
3 Pass the listener configuration into the cache configuration
4 You will be notified on creation
5 And on update
6 But not on removal, because EventType.REMOVED was not included at step 1
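
The ListenerObject referenced above is not shown in the snippet. A minimal sketch, assuming it only needs to implement the CacheEventListener interface and simply log each event (the class name and output are illustrative), could be:

import org.ehcache.event.CacheEvent;
import org.ehcache.event.CacheEventListener;

public class ListenerObject implements CacheEventListener<Object, Object> {
  @Override
  public void onEvent(CacheEvent<? extends Object, ? extends Object> event) {
    // Print the event type together with the key and the old/new values
    System.out.println(event.getType() + " " + event.getKey()
        + ": " + event.getOldValue() + " -> " + event.getNewValue());
  }
}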

Created, updated, and removed events are triggered by user execution of mutative methods as outlined in the table below. Eviction and expiration events can be triggered by both internal processes and by user execution of methods targeting both related and unrelated keys within the cache.

Table 1. Cache entry event firing behaviors for mutative methods
Initial value | Operation                                | New value | Event {key, old-value, new-value}
{}            | put(K, V)                                | {K, V}    | created {K, null, V}
{K, V1}       | put(K, V2)                               | {K, V2}   | updated {K, V1, V2}
{}            | put(K, V) [immediately expired]          | {}        | none
{K, V1}       | put(K, V2) [immediately expired]         | {}        | none
{}            | putIfAbsent(K, V)                        | {K, V}    | created {K, null, V}
{}            | putIfAbsent(K, V) [immediately expired]  | {}        | none
{K, V1}       | replace(K, V2)                           | {K, V2}   | updated {K, V1, V2}
{K, V1}       | replace(K, V2) [immediately expired]     | {}        | none
{K, V1}       | replace(K, V1, V2)                       | {K, V2}   | updated {K, V1, V2}
{K, V1}       | replace(K, V1, V2) [immediately expired] | {}        | none
{K, V}        | remove(K)                                | {}        | removed {K, V, null}

Ehcache provides an abstract class CacheEventAdapter for convenient implementation of event listeners when you are interested only in specific events.
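
For instance, a listener interested only in creation and update events could extend CacheEventAdapter and override just those hooks. The sketch below assumes the onCreation/onUpdate callback signatures of CacheEventAdapter; the class name and output are illustrative.

import org.ehcache.event.CacheEventAdapter;

public class CreationAndUpdateListener extends CacheEventAdapter<String, String> {
  @Override
  protected void onCreation(String key, String newValue) {
    // Invoked only for CREATED events
    System.out.println("created " + key + " = " + newValue);
  }

  @Override
  protected void onUpdate(String key, String oldValue, String newValue) {
    // Invoked only for UPDATED events
    System.out.println("updated " + key + ": " + oldValue + " -> " + newValue);
  }

  // Removal, expiry and eviction events fall through to the no-op defaults
}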

Registering Event Listeners at runtime

Cache event listeners may also be added and removed while the cache is being used.

ListenerObject listener = new ListenerObject(); (1)
cache.getRuntimeConfiguration().registerCacheEventListener(listener, EventOrdering.ORDERED,
    EventFiring.ASYNCHRONOUS, EnumSet.of(EventType.CREATED, EventType.REMOVED)); (2)

cache.put(1L, "one");
cache.put(2L, "two");
cache.remove(1L);
cache.remove(2L);

cache.getRuntimeConfiguration().deregisterCacheEventListener(listener); (3)

cache.put(1L, "one again");
cache.remove(1L);
1 Create a CacheEventListener implementation instance.
2 Register it on the RuntimeConfiguration, indicating the delivery mode and the events of interest. The listener will receive events for the put() and remove() calls that follow.
3 Deregister the previously registered CacheEventListener instance. The subsequent put() and remove() calls are no longer delivered to the listener.

Event Processing Queues

Advanced users may want to tune the level of concurrency used to deliver events:

CacheConfiguration<Long, String> cacheConfiguration = CacheConfigurationBuilder.newCacheConfigurationBuilder(Long.class, String.class,
                                                                                      ResourcePoolsBuilder.heap(5L))
    .withDispatcherConcurrency(10) (1)
    .withEventListenersThreadPool("listeners-pool")
    .build();
1 Indicate the level of concurrency desired

This will enable parallel processing of events at the cost of more threads being required by the system.
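
The "listeners-pool" name passed to withEventListenersThreadPool(...) refers to a named thread pool that is expected to be configured on the cache manager. A minimal sketch of wiring such a pool, assuming the PooledExecutionServiceConfigurationBuilder API and illustrative pool names and sizes, could look like this:

import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.PooledExecutionServiceConfigurationBuilder;

PooledExecutionServiceConfigurationBuilder pools = PooledExecutionServiceConfigurationBuilder
    .newPooledExecutionServiceConfigurationBuilder()
    .defaultPool("default", 1, 4)   // fallback pool (name and sizes are illustrative)
    .pool("listeners-pool", 2, 4);  // the pool referenced by withEventListenersThreadPool(...)

CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
    .using(pools.build())
    .withCache("cache", cacheConfiguration) // the cacheConfiguration built above
    .build(true);

With this in place, listener events for the cache are delivered on threads drawn from the listeners-pool pool rather than the default event dispatching threads.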