
Friday, 2 November 2018

Near caching with Spring-Boot and Infinispan


We have recently released infinispan-spring-boot-starter 2.0.0.Final. This version supports Spring Boot 2.1 and Infinispan 9.4.0.Final.

Before this release, some important features - such as near caching - were only configurable in code.
From now on, we can set all of the Hot Rod client configuration using the hotrod-client.properties file or the Spring application YAML. The latter was an important community request.

Let's see how to speed up our application's performance with near caching!


Hot Rod

 

Just as a quick reminder, Infinispan can be used embedded in your application or in client/server mode. To connect your application to a server you can use an Infinispan client and the Infinispan "Hot Rod" protocol. Other protocols are available, such as REST, but Hot Rod is the recommended way since it is the one that supports the most Infinispan functionality.

Near cache


From the Infinispan documentation: the Hot Rod client can keep a local cache that stores recently used data. Enabling near caching can significantly improve the performance of read operations get and getVersioned, since data can potentially be located locally within the Hot Rod client instead of having to go remote.

When should I use it? 


Near caching can improve the performance of an application when most of the accesses to a given cache are read-only and the accessed dataset is relatively small.
When an application does lots of writes to a cache, invalidations, evictions and updates to the near cache need to happen. In this scenario we probably won't get much benefit.

As I said in the introduction, the good news is that this feature can be activated purely by configuration. The code doesn't change, so we can measure the benefits, if any, in a very straightforward way.

Spring-Boot


I have created a very simple application, available here. Maven, Java 8 and an Infinispan server are required to run it. You can download the server or use Docker.


Docker: docker run -it -p 11222:11222 jboss/infinispan-server:9.4.0.Final

Standalone: PATH/infinispan-server-9.4.0.Final/bin/standalone.sh

Once the server is up and running, build the application using Maven:

>> infinispan-near-cache: mvn clean install

Writer 


This application loads the required data to a remote cache: a list of some of the Infinispan contributors over the last decade.

>> writer: mvn spring-boot:run


Reader 


The reader application performs 10,000 accesses to the contributors cache: using a random id, I call the get method 10,000 times. The job gets done on my laptop in ~4000 milliseconds.

>> reader-no-near-cache: mvn spring-boot:run


Activating the near cache


I need to configure two properties:
  • Near Cache Mode: DISABLED or INVALIDATED. The default value is DISABLED, so I turn it on with INVALIDATED.
  • Max Entries: Integer value that sets the maximum size of the near cache. There is no default value, so I set one.
The Hot Rod client configuration applies to each client, not to each cache (this might change in the future). With that in mind, note that configuring the previous properties will activate near caching for all caches. If you need to activate it just for some of them, add the following property:
  • Cache Name Pattern: String pattern. For example "i8n-.*" will activate near caching for all the caches whose names start with "i8n-".

Configuration can be placed in hotrod-client.properties, in the Spring Boot configuration, or in code.

hotrod-client.properties

infinispan.client.hotrod.near_cache.mode=INVALIDATED
infinispan.client.hotrod.near_cache.max_entries=40
infinispan.client.hotrod.near_cache.cache_name_pattern=i8n-.*

application.yaml (or properties)

infinispan:
   remote:
     near-cache-mode: INVALIDATED
     near-cache-max-entries: 10
     near-cache-cache-name-pattern: i8n-.*

code 

With the Infinispan Spring-Boot Starter, I can add custom configuration using the InfinispanRemoteCacheCustomizer.
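For reference, here's a minimal sketch of what such a customizer bean could look like. The class and bean names are made up for illustration, and it assumes the starter's InfinispanRemoteCacheCustomizer interface and the Hot Rod ConfigurationBuilder near-cache API:

import org.infinispan.client.hotrod.configuration.NearCacheMode;
import org.infinispan.spring.starter.remote.InfinispanRemoteCacheCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class NearCacheConfig {

    // Activate an invalidated near cache, bounded to 25 entries,
    // only for caches whose names match the pattern.
    @Bean
    public InfinispanRemoteCacheCustomizer nearCacheCustomizer() {
        return builder -> builder.nearCache()
                .mode(NearCacheMode.INVALIDATED)
                .maxEntries(25)
                .cacheNamePattern("i8n-.*");
    }
}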


Results


My dataset contains 25 contributors. If I activate the near cache with max 12 entries and I run my reader again, I get the job done in ~1900 milliseconds, which is already an improvement. If I configure it to hold the complete dataset, I get it done in ~220 milliseconds, which is a big one!

Conclusions


Near caching can help us speed up our client applications if configured properly. We can test our tuning easily because we only need to add some configuration to the client. Finally, the Infinispan Spring-Boot Starter helps us build services with Spring Boot and Infinispan.

Further work will be done to help Spring-Boot users work with Infinispan, so stay tuned! Any feedback on the starter or any requirement from the community is more than welcome. Find us on Zulip Chat for direct contact or post your questions on StackOverflow!




Monday, 5 March 2018

A SWIG based framework to build Hotrod client prototype in your preferred language

If you are working on a non Java/C++/C#/JS application and you need to interact with Infinispan via Hotrod, you may be interested in the idea behind the HotSwig[1] project.

HotSwig proposes a framework to build Hotrod client prototypes quickly for any SWIG[2]-supported language.
As people familiar with the C++ and C# Infinispan native clients know, SWIG plays a role in both projects:

  • it is used to build the base of the C# client, wrapping the C++ core with a C# layer;
  • it is used in the C++ project to run (part of) the Java test suite against the client, in this way: a Java wrapper is built via SWIG to make the C++ client look like its Java big brother, so it can be tested with the Java test suite.

The main goal was to produce, for a specific language, an almost complete client reusing the C++ core features, and the following workflow was set up to do that:

  • the whole C++ interface is processed by SWIG. The resulting wrapper exposes almost all the C++ functions;
  • a user-friendly adaptation layer is built on top of the SWIG result.

This approach doesn't work for the HotSwig goal, mainly because the effort needed by the second step is usually non-negligible and prevents the rapid development of prototypes in a generic language.

In the HotSwig approach, this limitation is removed by moving the adaptation layer from the target language to the C++ side and then letting SWIG generate a ready-to-use client prototype. So the HotSwig workflow is the following:

  • build an adaptation facade around the C++ core to make it SWIG friendly (doing the adaptation work once and for all on the C++ side);
  • explicitly define what we want in the produced SWIG wrapper (keeping things simple by excluding everything by default);
  • run SWIG to produce the client.

At the moment HotSwig is just a proof of concept, but you can try to run it and produce a ready-to-work Infinispan client for the language you need. Examples are already provided for Python, Ruby and Octave, but HotSwig should work with all SWIG-supported languages. If you get it to run in your preferred programming language, please share your experience with us.

I've listed here[3] some tasks for the roadmap, with the idea of testing the flexibility of the framework by trying to extend it in different directions. Maybe the idea is good and it can grow from a PoC into something that can really help devs. You can add your ideas, of course.

So if you need to do math against your Infinispan data set, why don't you try the Octave client? Or maybe you want to do analytics with R, or presentations with PHP. Or you just like parentheses and you want to use Lisp. Or you're working for the Klingon Empire and you must use ylDoghQo'[4]... well, ok, just joking now...

Thanks for reading!

Cheers
The Infinispan Team


[1] https://github.com/rigazilla/hotswig
[2] http://www.swig.org/
[3] https://github.com/rigazilla/hotswig/issues
[4] https://www.kli.org/about-klingon/klingon-phrases

Monday, 2 October 2017

Better Late than Never: Remote Cache collections

One of the main benefits of Infinispan extending the java.util.Map interface when we introduced our Cache interface was that users could immediately use a well-established and familiar API.

The unfortunate thing about this relationship is that the Cache interface also has to implement all of the other methods, such as keySet, values and entrySet. Originally Infinispan either didn't implement these collections or returned an immutable copy (requiring all elements to be in memory). Obviously, neither choice is desirable.

This all changed with ISPN-4836, which provided backing implementations of the keySet, values and entrySet collections. This means that all methods are now provided, these collections keep up to date with changes to the underlying Cache, and updates to the collections are persisted down to the Cache. The implementation also doesn't keep a copy of all contents, instead allowing for memory-efficient iteration. And if users still want a copy, they can make one themselves by iterating over the collection. This later became the springboard for our implementation of Distributed Streams as well.

The problem was that the RemoteCache was left in the old state, where some things weren't implemented and others were copies, just like embedded caches used to be.
Well, I can now gladly say that with the release of Infinispan 9.1, RemoteCache has backing implementations of keySet, values and entrySet, implemented via ISPN-7900. These collections support all methods and are backed by the underlying RemoteCache.

Unfortunately the Stream methods on these collections are not distributed like their embedded counterparts, but we hope to improve that some day as well. Instead these streams must iterate over the cache to perform the operations locally. By default they will pull 10,000 entries at a time to try to make sure that the client's memory is not overburdened. If you want to decrease this number (less memory, lower performance) or increase it (more memory, higher performance), you can tweak it by changing the batchSize parameter via ConfigurationBuilder, or infinispan.client.hotrod.batch_size if you use a property file.
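As a sketch, tuning the batch size programmatically could look like this (the host, port and cache name are made up for illustration):

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class BatchSizeExample {
   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer().host("127.0.0.1").port(11222)
             .batchSize(500); // pull 500 entries per batch instead of the default 10,000
      RemoteCacheManager rcm = new RemoteCacheManager(builder.build());
      RemoteCache<String, String> cache = rcm.getCache("myCache");
      // The key set is backed by the remote cache and iterated lazily.
      cache.keySet().forEach(System.out::println);
      rcm.stop();
   }
}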

You can read more about this and the remote iterator which drives these collections on our user guide.

We hope this improves your usage of RemoteCaches in the future by giving you backed collections that also let you take advantage of the Java 8 Stream improvements.

If you haven't yet, you can acquire Infinispan 9.1.1 or the latest stable version at http://infinispan.org/download/

Sunday, 10 September 2017

Multi-tenancy - Infinispan as a Service (also on OpenShift)

In recent years the Software as a Service concept has gained a lot of traction. I'm pretty sure you've used it many times before. Let's take a look at a practical example and explain what's going on behind the scenes.

Practical example - photo album application

Imagine a very simple photo album application hosted in the cloud. Upon first use you are asked to create an account. Once you sign up, a new tenant is created for you in the application, with all necessary details and some dedicated storage just for you. From this point on you can start using the album to download and upload photos.

The software provider that created the photo album application can also celebrate. They have a new client! But with a new client the system needs to increase its capacity to ensure it can store all those lovely photos. There are also other concerns: how do we prevent leaking photos and other data from one account into another? And finally, since all the content will be transferred through the Internet, how do we secure the transmission?

As you can see, multi-tenancy is not as easy as it might seem. The good news is that, properly configured and secured, it can be beneficial both for the client and for the software provider.

Multi-tenancy in Infinispan

Let's think again about our photo album application for a moment. Whenever a new client signs up, we need to create a new account for them and dedicate some storage. Translating that into Infinispan concepts, this would mean creating a new CacheContainer. Within a CacheContainer we can create multiple Caches for user details, metadata and photos.

You might be wondering why creating a new Cache is not sufficient. It turns out that when a Hot Rod client connects to a cluster, it connects to a CacheContainer exposed via a Hot Rod endpoint, and such a client has access to all of its Caches. Considering our example, your friends could possibly see your photos. That's definitely not good! So we need to create a CacheContainer per tenant.

Before we introduced multi-tenancy, you could expose each CacheContainer using a separate port (using a separate Hot Rod endpoint for each of them). In many scenarios this is impractical because of the proliferation of ports. For this reason we introduced the Router concept. It allows multiple clients to access their own CacheContainers through a single endpoint and also prevents them from accessing data which doesn't belong to them. The final piece of the puzzle is transmitting sensitive data through an unsecured channel such as the Internet. The use of TLS encryption solves this problem. The final outcome should look like the following:


The Router component on the diagram above is responsible for recognizing data from each client and redirecting it to the appropriate Hot Rod endpoint.
As the name implies, the router inspects incoming traffic and reroutes it to the appropriate underlying CacheContainer. To do this it can use two different strategies depending on the protocol: TLS/SNI for the Hot Rod protocol (matching each server certificate to a specific cache container), and path prefixes for REST.
The SNI strategy detects the SNI host name (which is used as the tenant) and also requires TLS certificates to match. By creating proper trust stores we can control which tenant can access which CacheContainers.
The URL path prefix is very easy to understand, but it is also less secure unless you enable authentication. For this reason it should not be used in production unless you know what you are doing (the SNI strategy for the REST endpoint will be implemented in the near future). Each client has its own unique REST path prefix that needs to be used for accessing the data (e.g. http://127.0.0.1:8080/rest/client1/fotos/2).

Confused? Let's clarify this with an example.

Foto application sample configuration

The first step is to generate proper key/trust stores for the server and client:


The next step is to configure the server. The snippet below shows only the most important parts:


Let's analyze the most critical lines:
  • 7, 15 - We need to add the generated key stores to the server identities
  • 25, 30 - It is highly recommended to use separate CacheContainers
  • 38, 39 - A Hot Rod connector (but without a socket binding) is required to provide the proper mapping to a CacheContainer. You can also use many useful settings at this level (like ignored caches or authentication).
  • 42 - The Router definition, which binds to the default Hot Rod and REST ports.
  • 44 - 46 - The most important bit, which states that only a client using SSLRealm1 (which uses the trust store corresponding to client_1_server_keystore.jks) and TLS/SNI host name client-1 can access the Hot Rod endpoint named multi-tenant-hotrod-1 (which points to CacheContainer multi-tenancy-1).

Improving the application by using OpenShift

Hint: You might be interested in looking at our previous blog posts about hosting Infinispan on OpenShift. You may find them at the bottom of the page.

So far we've learned how to create and configure a new CacheContainer per tenant. But we also need to remember that system capacity needs to increase with each new tenant. OpenShift is a perfect tool for scaling the system up and down. The configuration we created in the previous step almost matches our needs but requires some tuning.

As we mentioned earlier, we need to encrypt the transport between the client and the server. The main disadvantage is that the OpenShift Router will not be able to inspect the traffic and make routing decisions. A passthrough Route fits perfectly in this scenario, but it requires creating TLS/SNI host names as fully qualified application names. So if you start OpenShift locally (using oc cluster up), the tenant names will look like the following: client-1-fotoalbum.192.168.0.17.nip.io

We also need to think about how to store the generated key stores. The easiest way is to use Secrets:


Finally, a full DeploymentConfiguration:



If you're interested in playing with the demo yourself, you might find a working example here. It mainly targets OpenShift but the concept and configuration are also applicable to a local deployment.


Wednesday, 11 January 2017

Near Cache for native C++/C# Client example

Dear Readers,

As mentioned in our previous post about the new C++/C# release 8.1.0.Beta1, the clients are now equipped with near cache support.

The near cache is an additional cache level that keeps the most recently used cache entries in an "in memory" data structure. Near-cached objects are synchronized with the remote server value in the background and can be read as fast as a map[] operation.

So, does your client tend to periodically focus its operations on a subset of your entries? This feature could help: it's easy to use, just enable it and you'll have a near cache working seamlessly under the hood.

A C++ example of a cache with near cache configuration:
The last line does the magic: the INVALIDATED mode is the active mode for the near cache (the default mode is DISABLED, which means no near cache; see the Java docs), and maxEntries is the maximum number of entries that can be stored in the near cache. If the near cache is full, the oldest entry will be evicted. Set maxEntries=0 for an unbounded cache (do you have enough memory?).
Now a full example of an application that just does some gets and puts, and counts how many of them are served remotely and how many are served by the near cache. As you can see, the cache object is an instance of the "well known" RemoteCache class.
The entry values in the near cache are kept aligned with the remote cache state via the events subsystem: if something changes in the server, an update event (modified, expired, removed) is sent to the client, which updates the cache accordingly.

By the way, did you know that the C++/C# clients can subscribe listeners to events? In the next "native" post we will see how.

Cheers!
and thank you for reading.

Wednesday, 4 January 2017

Hotrod clients C++/C# 8.1.0.Beta1 released!

New Year, New (Beta) Clients!

I'm pleased to announce that the C++/C# clients version 8.1.0.Beta1 are out!
The big news in this release is:

  • Near Caching Support

Find the bits in the usual place: http://infinispan.org/hotrod-clients/

The features list for 8.1 is almost done... not bad :)
Feedback, proposals, hints and lines of code are welcome!

Happy New Year,
The Infinispan Team

Friday, 11 November 2016

Hotrod clients C++/C# 8.1.0.Alpha2 released!

Dear Infinispan community,

I'm pleased to announce that the C++/C# clients version 8.1.0.Alpha2 are out!

Some of the good news coming with this release:
  • more bugs fixed than added
  • SNI support
  • C++ Client listener for remote events

Download it from the usual link http://infinispan.org/hotrod-clients/


We're trying to keep track of the 8.1 trip at this Jira URL:
Features list for 8.1
Feedback, proposals and hints are welcome!

Cheers,
The Infinispan Team

Thursday, 1 September 2016

Hotrod clients C++/C# 8.0.0.Final released!

Dear Infinispan community,
I'm glad to announce the Final release of the C++ and C# clients version 8.0.0.

You can find the download on the Infinispan web site:

http://infinispan.org/hotrod-clients/

Major new features for this release are:
  • queries
  • remote script execution
  • asynchronous operations (C++ only)
plus several minor and internal updates that partially fill the gap between the C++/C# clients and the Java client.

Some posts about the 8 series of the C++/C# clients have already been published on this blog; you can revisit them by clicking through the list below.

The equivalent C# examples are collected here:

https://github.com/rigazilla/dotnet-client-examples

Enjoy!

Wednesday, 1 June 2016

HotRod C++ Native Client 8 Series

The Infinispan Team started the development of the new HotRod C++ Client (version 8) with two main goals in mind: update and refresh the code and reduce the feature gap between the C++ client and its Java big brother.

The work is still in progress, but since we're close to the 8.0.0.Final release, I would like to describe, in this and in the following posts, what's changed as of today.

Although there are a lot of changes and improvements in the code (protocol updates, segment topology, configurable balancing strategy... you can get a detailed view of the activity stream by browsing the Jira issues), I would like to focus on the following three big changes:
  • C++11 Standard
  • Remote Execution
  • Queries

C++11 Standard

Activities grouped under this title are motivated by a change in the development approach for new features. Until version 7 we followed the approach of keeping the baseline compiler requirements quite low to ensure broad client portability, even to platforms with old compilers/libraries, but when we started development for the 8 series we felt that this principle would excessively complicate the implementation of new features.

With this in mind, we have fully embraced the new C++11 language features (such as lambda functions in the asynchronous interface methods, or variadic templates) and pushed for extensive use of standard library container classes in lieu of our custom ones.

We know that in this way we may have limited the use of the client to more recent platforms (bye bye RHEL 6), but fortunately the source is open and we have a very good build procedure, based on CMake, that can easily generate builds for the most-used <compiler model, compiler version> pairs.

The work on C++11 language adoption is still in progress, and the goal on this front is to update the code wherever it results in improved readability (e.g. the auto keyword is a simple but powerful way to reduce code verbosity).

Because in this cycle we have added a few new features that required the introduction of some library dependencies and automatic code generation, the build process has become more complex, but we're doing our best to keep it manageable. We want to ensure that our packaging structure is what users expect on all of our platforms with respect to libraries, headers and documentation.

I will be glad to hear any thoughts and suggestions from you, especially on the portability issues.

In the next post I will show an example of the new Remote Script Execution features.

Cheers



Friday, 16 October 2015

Stored Script Execution

One of the questions we get asked a lot is: when will I be able to run Map/Reduce and DistExec jobs over HotRod?

I'm happy to say: now!

Infinispan Server comes with Stored Script Execution, which means that remote clients can invoke named scripts on the server. If you're familiar with the concept of Stored Procedures in the SQL world, then you already have an idea of what this feature is about. The types of scripts you can run are those handled by Java's scripting API. Out of the box this means Javascript (using the Nashorn engine on JDK 8+), but you can add many more (Groovy, Scala, JRuby, Jython, Lua, etc). Scripts are stored in a dedicated script cache ("___scriptcache") so that they can be easily created/modified using the standard cache operations (put/get/etc.).

Here's an example of a very simple script:
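(The original embedded snippet has not survived; based on the description below, it would be along these lines.)

// mode=local,language=javascript
cache.get('a');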

The script above just obtains the default cache, retrieves the value with key 'a' and returns it (the Javascript script engine uses the last evaluated expression of a script as its return value).
The first line of the script is special: it looks like a comment, but, like the first line in Unix shell scripts, it actually provides instructions on how the script should be run in the form of properties.

The mode property instructs the execution engine where we want to run the script: local for running the script on the node that is handling the request and distributed for running the script wrapped by a distributed executor. Bear in mind that you can certainly use clustered operations in local mode.

Scripts can also take named parameters which will "appear" as bindings in the execution scope.

Invoking it from a Java HotRod client would look like this:
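(The original snippet has not survived either; here is a minimal sketch of storing and invoking such a script, with the script name simple.js made up for illustration.)

import java.util.HashMap;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class ScriptExecMain {
   public static void main(String[] args) {
      RemoteCacheManager rcm = new RemoteCacheManager();

      // Store the named script in the dedicated script cache.
      String script = "// mode=local,language=javascript\n" + "cache.get('a');";
      rcm.getCache("___scriptcache").put("simple.js", script);

      RemoteCache<String, String> cache = rcm.getCache();
      cache.put("a", "hello");

      // Invoke the stored script; the last evaluated expression is returned.
      Object result = cache.execute("simple.js", new HashMap<String, Object>());
      System.out.println(result); // should print "hello"
      rcm.stop();
   }
}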

Server-side scripts will be evolving quite a bit in Infinispan 8.1 where we will add support for the broader concept of server-side tasks which will include both scripts and deployable code which can be invoked in the same way, all managed and configured by the upcoming changes in the Infinispan Server console.

Monday, 14 September 2015

Initial Support for Apache Avro and Gora

Avro and Gora are two Apache projects that belong to the Hadoop ecosystem. Avro is a data serialization framework that relies on JSON for defining data types and protocols, and serializes data in a compact binary format. Its primary use in Hadoop is to provide a serialization format for persistent data, and a wire format for communication between Hadoop nodes, and from client programs to the Hadoop services. Gora is an open-source software framework that provides an in-memory data model and persistence for big data. Gora supports persisting to column stores, key/value stores or databases, and analyzing the data with extensive Apache Hadoop MapReduce support.

In an effort to run Hadoop-based applications atop Infinispan, the LEADS EU FP7 project has developed an Avro backend (infinispan-avro) and a Gora module (gora-infinispan). The former allows storing, retrieving and querying Avro-defined types via the HotRod protocol. The latter allows Gora-based applications to use Infinispan as a storage backend for their MapReduce jobs. In the current state of the implementation, the two modules make use of Infinispan 8.0.0.Final, Avro 1.7.6 and Gora 0.6.

What's in it for you, the Infinispan user

There are several use cases in which you can benefit from these modules.
  • With Infinispan's Avro support, you can decide to persist your data in Infinispan using Avro's portable format instead of Infinispan's own format (or Java serialization's format). This might help you standardize on a common format for your data at rest.
  • If you use Apache Gora to store/query some of your data in, or even out of, the Hadoop ecosystem, you can use Infinispan as the backend and benefit from the Infinispan features you have come to know, like data distribution, partition handling and cross-site clustering.
  • The last use case is running legacy Hadoop applications using Infinispan as the primary storage. For instance, it is possible to run the Apache Nutch web crawler atop Infinispan. A recent paper at IEEE Cloud 2015 gives a detailed description of such an approach in a geo-distributed environment (a preprint is available here).



Tuesday, 11 August 2015

Infinispan 7.2.4.Final out including fixes for async store, Hot Rod...etc

Infinispan 7.2.4.Final is just out containing some important fixes in areas such as Hot Rod client and server, async cache store, key set iteration, as well as a Hibernate HQL parser upgrade. You can find more details about the issues fixed in our detailed release notes.

Happy hacking :)

Galder

Tuesday, 17 March 2015

Infinispan 7.2.0.Beta1 released

Dear Infinispan community,

We are proud to announce the release of Infinispan 7.2.0.Beta1 today.

Along the usual assortment of bug fixes, this release includes a few exciting new features:

  • Server-side scripting with JSR-223 (ISPN-5013)
  • Initial support for the JCache API over HotRod (ISPN-4955)
  • Improved size-based eviction, implemented on top of Doug Lea's ConcurrentHashMapV8 (ISPN-3023)

For a complete list of features and bug fixes included in this release, please refer to the release notes.  

Feel free to join us and shape the future releases on our forums, our mailing lists or our #infinispan IRC channel.

Many thanks to everyone who contributed to this release!

Monday, 15 December 2014

Hot Rod Remote Events #4: Clustering and Failover

This blog post is the last in a series that looks at the forthcoming Hot Rod Remote Events functionality included in Infinispan 7.0. The first article focused on how to get started receiving remote events from Hot Rod servers. The second article looked at how Hot Rod remote events can be filtered, and the third one showed how to customize the contents of events.

In this last article, we'll be focusing on how remote events are fired in a clustered environment and how failover situations are dealt with.

The most important thing to know about remote events in a clustered environment is that when a client adds a remote listener, it is installed on a single node in the cluster, and that node is in charge of sending events back to the client for all affected operations happening cluster-wide.

As a result of this, when filtering or event customization is applied, the org.infinispan.notifications.cachelistener.filter.CacheEventFilter and/or org.infinispan.notifications.cachelistener.filter.CacheEventConverter instances must be marshallable. This is necessary because when the client listener is installed in a cluster, the filter and/or converter instances are sent to the other nodes in the cluster so that filtering and conversion can happen right where the event originates, hence improving efficiency. These classes can be made marshallable by making them implement Serializable, or by providing and registering a custom Externalizer for them.

Under normal circumstances, the code and examples shown in previous blog posts work the same way in a clustered environment. However, in a clustered environment a decision needs to be made about how to deal with situations where nodes go down. If a node that does not have the client listener installed goes down, nothing happens. However, when the node containing the client listener goes down, the Hot Rod client implementation transparently fails over the client listener registration to a different node. As a result of this failover, there could be a gap in the event consumption. This gap can be solved using one of these solutions:

State Delivery


The @ClientListener annotation has an optional parameter called includeCurrentState. When this is enabled and the client listener is registered, before receiving any events for ongoing operations the server sends ClientCacheEntryCreatedEvent instances (for methods annotated with @ClientCacheEntryCreated) for all existing cache entries to the client. This offers the client an opportunity to construct some state or computation based on the contents of the clustered cache. When the Hot Rod client transparently fails over registered listeners, it re-registers them on a different node, and if includeCurrentState is enabled, clients can recompute their state or computation to reinstate it to what it was before the failover. The downside of includeCurrentState is that its performance is heavily dependent on the cache size, and hence it's disabled by default.

@ClientCacheFailover


Alternatively, instead of relying on receiving state, users can define a method annotated with @ClientCacheFailover that receives a ClientCacheFailoverEvent as a parameter inside the client listener implementation:
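(The original snippet has not survived; the following is a minimal sketch assuming a listener that keeps some local state to clear on failover.)

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.infinispan.client.hotrod.annotation.ClientCacheFailover;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheFailoverEvent;

@ClientListener
public class FailoverAwareListener {
   // Hypothetical local state kept in sync via remote events.
   private final Map<Integer, String> localCache = new ConcurrentHashMap<>();

   @ClientCacheFailover
   public void handleFailover(ClientCacheFailoverEvent e) {
      // Events may have been missed while failing over, so clear local state.
      localCache.clear();
   }
}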


This method would be called back whenever the node that had this client listener goes down. This can be handy for situations where the end user just wants to clear up some local state as a result of the failover, e.g. clear a near or L1 cache. When events are received again, the near or L1 cache can be repopulated.

This callback method of dealing with client listener failover offers a simple, efficient solution to dealing with cluster topology changes affecting client listeners. Depending on the remote event use case, this method might be better suited than state delivery.

Final Words


This post marks the end of the remote event series. In future Infinispan versions, we'll continue improving the technology adding some extra features, and more importantly, we'll start building higher level abstractions on top of remote events, such as Hot Rod client Near Caches.

Cheers,
Galder

Tuesday, 28 October 2014

Infinispan HotRod .NET Client 7.0.0.CR2

Dear community,

Infinispan HotRod .NET Client 7.0.0.CR2 is now available.

This is mostly a bug-fix release. For the complete list of changes please consult the release notes (which also include the changes from the corresponding version of the C++ client).
 
Visit our downloads section to find the latest release.
If you have any questions please check our forums, our mailing lists or ping us directly on IRC.

Thanks to everyone involved for the changes and bug reports contributed!

Monday, 27 October 2014

Infinispan HotRod C++ Client 7.0.0.CR2

Dear community,

Infinispan HotRod C++ Client 7.0.0.CR2 is now available.

This is mostly a bug-fix release. For the complete list of changes please consult the release notes.
Visit our downloads section to find the latest release.
If you have any questions please check our forums, our mailing lists or ping us directly on IRC.

Thanks to everyone involved for the changes and bug reports contributed!

Wednesday, 17 September 2014

Hot Rod Remote Events #3: Customizing events

This blog post is the third in a series that looks at the forthcoming Hot Rod Remote Events functionality included in Infinispan 7.0. In the first article we looked at how to get started receiving remote events from Hot Rod servers. In the second article, we saw how Hot Rod remote events can be filtered by providing key/value filter factories that create instances which filter the events sent to clients, and how these filters can act on client-provided information.

This time we are going to focus on how to customize the events sent to clients. Events generated by default contain just enough information to make the event relevant, while avoiding cramming too much information into them in order to reduce the cost of sending them. Normally, this information consists of the key and the type of event.

Optionally, the information shipped in these events can be customized to contain more information, such as values, or even less information. This customization is done with org.infinispan.notifications.cachelistener.filter.CacheEventConverter instances, which are created by implementing an org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory class. Each factory must have a name associated with it via the org.infinispan.filter.NamedFactory annotation.

When a listener is added, we can provide the name of a converter factory to use with it; the server will then look up the factory and invoke its getConverter method to get an org.infinispan.notifications.cachelistener.filter.CacheEventConverter instance to customize events server-side.

Here's a sample implementation which will send custom events containing value information back to clients for a cache of Integers and Strings:
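(The original snippet has not survived; the following is a minimal sketch, with the factory name and event class made up for illustration.)

import java.io.Serializable;
import org.infinispan.filter.NamedFactory;
import org.infinispan.metadata.Metadata;
import org.infinispan.notifications.cachelistener.filter.CacheEventConverter;
import org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory;
import org.infinispan.notifications.cachelistener.filter.EventType;

@NamedFactory(name = "value-added-converter-factory")
public class ValueAddedConverterFactory implements CacheEventConverterFactory {
   public CacheEventConverter<Integer, String, CustomEvent> getConverter(Object[] params) {
      return new ValueAddedConverter();
   }

   static class ValueAddedConverter
         implements CacheEventConverter<Integer, String, CustomEvent>, Serializable {
      public CustomEvent convert(Integer key, String oldValue, Metadata oldMetadata,
                                 String newValue, Metadata newMetadata, EventType eventType) {
         // Ship the value along with the key in the custom event.
         return new CustomEvent(key, newValue);
      }
   }

   // The custom payload must be marshallable to travel back to clients.
   public static class CustomEvent implements Serializable {
      public final Integer key;
      public final String value;
      public CustomEvent(Integer key, String value) {
         this.key = key;
         this.value = value;
      }
   }
}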

In the example above, the converter generates a new custom event which includes the value as well as the key. This results in bigger event payloads compared with default events but, combined with filtering, it can reduce the overall network bandwidth cost.

In another converter implementation, the user could decide to send back an event that contains no key or event type information. This would result in extremely lightweight events at the expense of richness of information provided by the event itself.

Plugging the server with this converter requires deploying this converter factory (and associated converter class) within a jar file including a service definition inside the META-INF/services/org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory file:
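Assuming the hypothetical factory above lives in a com.example package, the service file would contain a single line with its fully qualified class name:

com.example.ValueAddedConverterFactory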

With the server plugged with the converter, the next step is adding a remote client listener that will use this converter. Implementing a listener for custom events is slightly different from the listeners we've seen in the last couple of blog posts, because we now have to deal with customized events as opposed to the default ones. To do so, the same annotations are used as in previous blog posts, but the callbacks receive instances of org.infinispan.client.hotrod.event.ClientCacheEntryCustomEvent<T>, where T is the type of custom event we are sending from the server:
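(A minimal sketch of such a listener, reusing the hypothetical factory and event names from above.)

import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
import org.infinispan.client.hotrod.annotation.ClientCacheEntryModified;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheEntryCustomEvent;

@ClientListener(converterFactoryName = "value-added-converter-factory")
public class CustomEventLogListener {
   @ClientCacheEntryCreated
   @ClientCacheEntryModified
   public void handleCustomEvent(
         ClientCacheEntryCustomEvent<ValueAddedConverterFactory.CustomEvent> e) {
      // The converter's payload is available via getEventData().
      System.out.println(e.getEventData());
   }
}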

Now it's time to write a simple main java class which adds the remote event listener and executes some operations against the remote cache:
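(A sketch reusing the hypothetical listener above; the keys and values are arbitrary.)

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class CustomEventsMain {
   public static void main(String[] args) {
      RemoteCacheManager rcm = new RemoteCacheManager();
      RemoteCache<Integer, String> cache = rcm.getCache();
      cache.addClientListener(new CustomEventLogListener());
      cache.put(1, "one");  // fires a created event carrying the value
      cache.put(1, "uno");  // fires a modified event carrying the new value
      cache.remove(1);
      rcm.stop();
   }
}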

Once executed, we should see console output similar to this:

Similar to filters, converters can also act on client-provided information, enabling converter instances to customize events depending on the information given when the listener was added. The API provides an extra parameter for passing converter parameters when the listener is added. Given the similarities with filtering, this part is not covered by this blog post.

A final note on the marshalling aspects of this example. In order to facilitate both server and client writing against type-safe APIs, both the client and server need to be aware of the custom event type and be able to marshall it. Client side, this is done via an optional marshaller configurable through the RemoteCacheManager. Server side, this is done by a marshaller recently added to the Hot Rod server configuration.

In the next blog post in the Hot Rod remote events series, we will look at how to receive remote events in a clustered environment and how to deal with failover situations, etc.

Cheers,
Galder

Wednesday, 20 August 2014

Hot Rod Remote Events #2: Filtering events

This blog post is the second in a series that looks at the forthcoming Hot Rod Remote Events functionality included in Infinispan 7.0. In the first blog post we looked at how to get started receiving remote events from Hot Rod servers. This time we are going to focus on how to filter events directly on the server.

Sending events to remote clients has a cost, which increases with the number of clients. The more clients that register remote listeners, the more events the server has to send. This cost also goes up as more modifications are executed against the cache: the more cache modifications, the more events need to be sent.

A way to reduce this cost is by filtering the events to send server-side. If at the server level custom code decides that clients are not interested in a particular event, the event does not even need to leave the server, improving the overall performance of the system.

Remote event filters are created by implementing an org.infinispan.notifications.cachelistener.filter.CacheEventFilterFactory class. Each factory must have a name associated with it via the org.infinispan.filter.NamedFactory annotation.

When a listener is added, we can provide the name of a key/value filter factory to use with it; the server will then look up the factory and invoke its getFilter method to get an org.infinispan.notifications.cachelistener.filter.CacheEventFilter instance to filter events server-side.

Filtering can be done based on key or value information, and even based on cached entry metadata. Here's a sample implementation which will filter key "2" out of the events sent to clients:
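(The original snippet has not survived; the following is a minimal sketch with a made-up factory name.)

import java.io.Serializable;
import org.infinispan.filter.NamedFactory;
import org.infinispan.metadata.Metadata;
import org.infinispan.notifications.cachelistener.filter.CacheEventFilter;
import org.infinispan.notifications.cachelistener.filter.CacheEventFilterFactory;
import org.infinispan.notifications.cachelistener.filter.EventType;

@NamedFactory(name = "static-filter-factory")
public class StaticKeyFilterFactory implements CacheEventFilterFactory {
   public CacheEventFilter<Integer, String> getFilter(Object[] params) {
      return new StaticKeyFilter();
   }

   static class StaticKeyFilter implements CacheEventFilter<Integer, String>, Serializable {
      public boolean accept(Integer key, String oldValue, Metadata oldMetadata,
                            String newValue, Metadata newMetadata, EventType eventType) {
         // Drop all events whose key is 2.
         return !Integer.valueOf(2).equals(key);
      }
   }
}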

Plugging the server with this key value filter requires deploying this filter factory (and associated filter class) within a jar file including a service definition inside the META-INF/services/org.infinispan.notifications.cachelistener.filter.CacheEventFilterFactory file:
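As with the converter factory earlier, the service file contains a single line with the factory's fully qualified class name (again assuming a com.example package):

com.example.StaticKeyFilterFactory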

With the server plugged with the filter, the next step is adding a remote client listener that will use it. For this example, we'll extend the EventLogListener implementation from the first blog post in the series and override the @ClientListener annotation to indicate the filter factory to use with this listener:
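(A minimal sketch, assuming the EventLogListener from the first post and the factory name above.)

import org.infinispan.client.hotrod.annotation.ClientListener;

@ClientListener(filterFactoryName = "static-filter-factory")
public class StaticFilteredEventLogListener extends EventLogListener {
}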

Next, we add the listener via the RemoteCache API and execute some operations against the remote cache:



If we check the system output we'll see that the client receives events for all keys except those that have been filtered out:

Finally, with Hot Rod remote events we have tried to provide additional flexibility on the client side, which is why, when adding client listeners, users can provide parameters to the filter factory so that filter instances with different behaviours can be generated from a single filter factory based on client-side information. To show this in action, we are going to enhance the filter factory above so that, instead of filtering on a statically given key, it filters dynamically based on the key provided when the listener is added. Here's the revised version:
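(Again a sketch, extending the hypothetical factory above to read the key from the listener parameters.)

import java.io.Serializable;
import org.infinispan.filter.NamedFactory;
import org.infinispan.metadata.Metadata;
import org.infinispan.notifications.cachelistener.filter.CacheEventFilter;
import org.infinispan.notifications.cachelistener.filter.CacheEventFilterFactory;
import org.infinispan.notifications.cachelistener.filter.EventType;

@NamedFactory(name = "dynamic-filter-factory")
public class DynamicKeyFilterFactory implements CacheEventFilterFactory {
   public CacheEventFilter<Integer, String> getFilter(Object[] params) {
      // The key to filter out is supplied by the client when adding the listener.
      return new DynamicKeyFilter((Integer) params[0]);
   }

   static class DynamicKeyFilter implements CacheEventFilter<Integer, String>, Serializable {
      private final Integer filteredKey;

      DynamicKeyFilter(Integer filteredKey) {
         this.filteredKey = filteredKey;
      }

      public boolean accept(Integer key, String oldValue, Metadata oldMetadata,
                            String newValue, Metadata newMetadata, EventType eventType) {
         return !filteredKey.equals(key);
      }
   }
}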

Finally, here's how we can now filter by "3" instead of "2":
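(A sketch: the listener points at the dynamic factory, and the key is passed as a filter factory parameter when registering it; addClientListener takes filter factory and converter factory parameter arrays.)

@ClientListener(filterFactoryName = "dynamic-filter-factory")
public class DynamicFilteredEventLogListener extends EventLogListener {
}

// When registering the listener:
cache.addClientListener(new DynamicFilteredEventLogListener(), new Object[]{3}, null);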

And the output:


To summarise, we've seen how Hot Rod remote events can be filtered providing key/value filter factories that can create instances that filter which events are sent to clients, and how these filters can act on client provided information.

In the next blog post, we'll look at how to customize remote events in order to reduce the amount of information sent to the clients, or on the contrary, provide even more information to our clients.

Cheers,
Galder

Tuesday, 12 August 2014

Hot Rod Remote Events #1: Getting started

Shortly after the first Hot Rod server implementation was released in 2010, ISPN-374 was created requesting cache events to be forwarded back to connected clients. Even though embedded caches have had access to these events since Infinispan's first release, propagating them to remote clients has taken a while, due to the increased complexity involved.

For Infinispan 7.0, we've finally addressed this. This is the first post in a series that looks at Hot Rod Remote Events and the different functionality we've implemented for this release. In this first post, we show you how to get started with Hot Rod Remote Events with the most basic of examples:

Start by downloading the Server distribution for the latest 7.0 (or higher) release from Infinispan's download page. The server contains the Hot Rod server with which the client will communicate. Once downloaded, start it up by running the following from the root of the server:

./bin/standalone.sh

Next up, we need to write a little application that interacts with the Hot Rod server. If you're using Maven, create an application with this dependency, changing the version to 7.0.0.Beta1 or higher:
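(The original snippet has not survived; the dependency should look roughly like this, with infinispan-client-hotrod being the Hot Rod client artifact.)

<dependency>
  <groupId>org.infinispan</groupId>
  <artifactId>infinispan-client-hotrod</artifactId>
  <version>7.0.0.Beta1</version>
</dependency>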

If you're not using Maven, adjust according to your chosen build tool, or download the all distribution which contains all the Infinispan jars.

With the application dependencies in place, we need to start writing the client application. We'll start with a simple remote event listener that simply logs all events received:
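(The original snippet has not survived; a minimal sketch of such a listener could look like this.)

import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
import org.infinispan.client.hotrod.annotation.ClientCacheEntryModified;
import org.infinispan.client.hotrod.annotation.ClientCacheEntryRemoved;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheEntryCreatedEvent;
import org.infinispan.client.hotrod.event.ClientCacheEntryModifiedEvent;
import org.infinispan.client.hotrod.event.ClientCacheEntryRemovedEvent;

@ClientListener
public class EventLogListener {
   @ClientCacheEntryCreated
   public void handleCreated(ClientCacheEntryCreatedEvent<?> e) {
      System.out.println(e);
   }

   @ClientCacheEntryModified
   public void handleModified(ClientCacheEntryModifiedEvent<?> e) {
      System.out.println(e);
   }

   @ClientCacheEntryRemoved
   public void handleRemoved(ClientCacheEntryRemovedEvent<?> e) {
      System.out.println(e);
   }
}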
Now it's time to write a simple main java class which adds the remote event listener and executes some operations against the remote cache:
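(A sketch; the default RemoteCacheManager constructor points at a local server on the default port.)

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class RemoteEventsMain {
   public static void main(String[] args) {
      RemoteCacheManager rcm = new RemoteCacheManager();
      RemoteCache<Integer, String> cache = rcm.getCache();
      EventLogListener listener = new EventLogListener();
      cache.addClientListener(listener);
      try {
         cache.put(1, "one");  // fires a created event
         cache.put(1, "uno");  // fires a modified event
         cache.remove(1);      // fires a removed event
      } finally {
         cache.removeClientListener(listener);
         rcm.stop();
      }
   }
}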

Once executed, we should see console output similar to this:

As you can see from the output, by default events come with the key and the internal data version associated with the current value. The actual value is not shipped back to the client for performance reasons. Clearly, receiving remote events has a cost, and as the cache size increases and more operations are executed, more events will be generated. To avoid inundating Hot Rod clients, remote events can either be filtered server-side, or the event contents can be customized. In the next blog post in this series, we will see this functionality in action.

Cheers,
Galder

Wednesday, 14 May 2014

Infinispan 7.0.0.Alpha4 is out!

Dear Community,

It is our pleasure to announce the Alpha4 release of Infinispan 7.0.0.

The release highlights are:

* HotRod protocol now supports authorization and the SKIP_CACHE_LOAD flag;
* Distributed entry iterator, which allows iterating over all entries in the cluster;
* Object filtering and preview using query DSL;
* Apache Lucene 4.8.0 is now supported and JGroups was upgraded to 3.5.0.Beta5;
* Multiple improvements and bug fixes! 

For a complete list of features and bug fixes included in this release please refer to the release notes. Visit our downloads section to find the latest release.

If you have any questions please check our forums, our mailing lists or ping us directly on IRC.

Cheers,
The Infinispan team.