Thursday 18 December 2014

Infinispan 7.0.2.Final is a certified JSR-107 1.0 implementation

The infinispan-jcache module in Infinispan 7.0.2.Final has been certified as a compatible implementation of the JSR-107 1.0 specification. To get started with Infinispan's JSR-107 support, check the Infinispan documentation section on the topic, and remember that Infinispan also implements the JSR-107 annotations for CDI injection of cached values.
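As a quick taste, here is a minimal sketch using the standard javax.cache API (the class name, cache name and key/value types are arbitrary choices for illustration); with infinispan-jcache on the classpath, Caching.getCachingProvider() should resolve to Infinispan's provider:

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class JCacheExample {
   public static void main(String[] args) {
      // Resolves to Infinispan's provider when infinispan-jcache is on the classpath
      CachingProvider provider = Caching.getCachingProvider();
      CacheManager cacheManager = provider.getCacheManager();

      // Create a cache with a plain JSR-107 configuration ("myCache" is arbitrary)
      Cache<String, String> cache = cacheManager.createCache(
            "myCache", new MutableConfiguration<String, String>());

      cache.put("hello", "world");
      System.out.println(cache.get("hello"));

      cacheManager.close();
   }
}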

Thanks to everyone who has contributed to the module, and to Greg Luck and Brian Oliver for their help in completing the certification.

Cheers,
Galder

Monday 15 December 2014

Hot Rod Remote Events #4: Clustering and Failover

This blog post is the last in a series that looks at the forthcoming Hot Rod Remote Events functionality included in Infinispan 7.0. The first article focused on how to get started receiving remote events from Hot Rod servers, the second looked at how Hot Rod remote events can be filtered, and the third showed how to customize the contents of events.

In this last article, we'll be focusing on how remote events are fired in a clustered environment and how failover situations are dealt with.

The most important thing to know about remote events in a clustered environment is that when a client adds a remote listener, the listener is installed in a single node in the cluster, and that node is in charge of sending events back to the client for all affected operations happening cluster-wide.

As a result of this, when filtering or event customization is applied, the org.infinispan.notifications.cachelistener.filter.CacheEventFilter and/or org.infinispan.notifications.cachelistener.filter.CacheEventConverter instances must be marshallable. This is necessary because when the client listener is installed in a cluster, the filter and/or converter instances are sent to the other nodes in the cluster so that filtering and conversion can happen right where the event originates, hence improving efficiency. These classes can be made marshallable by implementing Serializable, or by providing and registering a custom Externalizer for them.
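For illustration, a filter along these lines (the class name and the static key check are hypothetical, echoing the static filter idea from the second article) becomes marshallable simply by implementing Serializable:

import java.io.Serializable;

import org.infinispan.metadata.Metadata;
import org.infinispan.notifications.cachelistener.filter.CacheEventFilter;
import org.infinispan.notifications.cachelistener.filter.EventType;

public class StaticKeyCacheEventFilter
      implements CacheEventFilter<Integer, String>, Serializable {
   @Override
   public boolean accept(Integer key, String oldValue, Metadata oldMetadata,
         String newValue, Metadata newMetadata, EventType eventType) {
      // Only events for key 1 are sent back to the client; being Serializable,
      // this filter can be shipped to the other nodes in the cluster
      return key.equals(1);
   }
}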

Under normal circumstances, the code and examples shown in previous blog posts work the same way in a clustered environment. However, in a clustered environment, a decision needs to be made about how to deal with situations where nodes go down: if a node that does not have the client listener installed goes down, nothing happens. However, when the node containing the client listener goes down, the Hot Rod client implementation transparently fails over the client listener registration to a different node. As a result of this failover, there could be a gap in the event consumption. This gap can be bridged with one of the following solutions:

State Delivery


The @ClientListener annotation has an optional parameter called includeCurrentState. When this is enabled and the client listener is registered, before receiving any events for ongoing operations, the server sends the client ClientCacheEntryCreatedEvent instances (for methods annotated with @ClientCacheEntryCreated) for all existing cache entries. This offers the client an opportunity to construct some state or computation based on the contents of the clustered cache. When the Hot Rod client transparently fails over registered listeners, it re-registers them in a different node, and if includeCurrentState is enabled, clients can recompute their state or computation to reinstate it to what it was before the failover. The downside of includeCurrentState is that its performance is heavily dependent on the cache size, and hence it's disabled by default.
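For instance, a listener sketch along these lines (the class name is hypothetical) would receive a created event for every existing entry when it is registered, and again when it is re-registered after a failover:

import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheEntryCreatedEvent;

@ClientListener(includeCurrentState = true)
public class StatefulEventListener {
   @ClientCacheEntryCreated
   public void created(ClientCacheEntryCreatedEvent<Integer> event) {
      // Called once per existing entry on (re)registration,
      // then for each new entry created afterwards
      System.out.println("Created key: " + event.getKey());
   }
}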

@ClientCacheFailover


Alternatively, instead of relying on receiving state, users can define a method with the @ClientCacheFailover annotation that receives a ClientCacheFailoverEvent as parameter inside the client listener implementation, along the lines of this sketch (the class name and near-cache field are hypothetical):
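import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.infinispan.client.hotrod.annotation.ClientCacheFailover;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheFailoverEvent;

@ClientListener
public class FailoverAwareListener {
   // Hypothetical client-side near cache kept up to date via remote events
   private final Map<Integer, String> nearCache = new ConcurrentHashMap<>();

   @ClientCacheFailover
   public void handleFailover(ClientCacheFailoverEvent event) {
      // The node holding this listener went down and events might have been
      // missed, so clear any state derived from them
      nearCache.clear();
   }
}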


This method would be called back whenever the node that had this client listener goes down. This can be handy for situations when the end user just wants to clear up some local state as a result of the failover, e.g. clear a near or L1 cache. When events are received again, the near or L1 cache can be repopulated.

This callback method of dealing with client listener failover offers a simple, efficient solution for dealing with cluster topology changes affecting client listeners. Depending on the remote event use case, this method might be better suited than state delivery.

Final Words


This post marks the end of the remote event series. In future Infinispan versions, we'll continue improving the technology, adding some extra features, and more importantly, we'll start building higher level abstractions on top of remote events, such as Hot Rod client Near Caches.

Cheers,
Galder

Friday 5 December 2014

Infinispan 7.1 Codename: And the winner is...

Dear Infinispan community,

the votes have been counted and YOU have decided the name of the next Infinispan release, which is going to be:

Hoptimus Prime

The following is a breakdown of the votes:

Hoptimus Prime: 30%
Insanely Bad Elf: 28%
Aventinus: 23%
Amber Shock: 14%
Drake's Hopocalypse: 5%

Since Hoptimus Prime has been retired, we will have to celebrate by drinking one of the other wonderful runners-up.

Remember that, aside from choosing codenames, you can also contribute to Infinispan in many ways: code, documentation, bug reports, helping on the forum, discussing ideas on the mailing list, or just chatting with us on #infinispan on Freenode's IRC servers.