Wednesday 21 November 2018

The road to Infinispan 10 (Alpha1)

Dear all,

Today we are releasing 10.0.0.Alpha1 and 9.4.2.Final.

Infinispan 9.4.2.Final comes with a number of bug fixes and some small additional features:

  • ISPN-9655 REST Access Log headers
  • ISPN-8144 & ISPN-9661 Cross-Site replication statistics
  • ISPN-9708 Expose the executor services through JMX
  • ISPN-9732 Local iteration optimization with write behind is valid for non shared stores
  • ISPN-9717 Fix Integer overflow for lifespan and maxIdle

We have begun working on what will become Infinispan 10. As with all new major releases, this will come with a number of important changes.

  • New Server
    We are working on a new lightweight server, currently dubbed ServerNG, which will supersede the current WildFly-based offering. The new server will have a smaller disk and memory footprint, a new RESTful admin interface, and improved security. It will still use familiar components (Elytron for security, Narayana for transactions, etc.), but we hope the installation and usability experience will be much improved. A dedicated blog post will describe in detail what is coming.
  • Long-term Storage Format
    The persistent storage format will be changed so that future format changes can be made transparently, without requiring additional exporters/importers.
  • Non-blocking listeners
    The listener implementation will be replaced with a non-blocking implementation.
  • Asynchronous CacheLoader/Store
    Store operations will be run on a separate thread so that they do not block the main threads.
  • Improved statistics
    Infinispan statistics have traditionally been overly simplistic, offering mostly basic averages for writes and reads. We are going to implement percentiles on a histogram, as well as record tracing information, so that you will be able to see how much time is being spent in the various subsystems (clustering, persistence, etc.).
  • New API
    The current Infinispan API, based around Java's ConcurrentHashMap design, does not offer the flexibility required to support modern reactive designs as well as the various extensions we've added over the years (counters, multimaps, etc). We are therefore working on a new modern API design which we will be describing with a number of blog posts in the near future.
  • Agroal JDBC Connection Pool
    We are replacing the JDBC connection pool implementation with Agroal.
  • Kubernetes Operators
    Operators are all the rage in the Kubernetes world, and we are working on an Infinispan Operator which will take care of managing and monitoring the health of an Infinispan cluster, handle scale up/scale down safely, perform upgrades and more.

Infinispan 10.0.0.Alpha1 is the first release from our development branch. It currently includes the following features on top of what is in 9.4.2.Final:



Please report any issues in our issue tracker and join the conversation in our Zulip Chat to shape up our next release.

Monday 19 November 2018

Quick start Infinispan on Kubernetes

Last week we showed you how to easily run Infinispan on top of OpenShift. This week we're trying to do the same on Minikube, a tool that makes it easy to run vanilla Kubernetes locally.

Although we've already covered the topic in the past, we felt the descriptors needed a permanent location and an update to the latest Infinispan releases. Detailed instructions can be found in this repository.

With OpenShift, we took advantage of Templates which allow a set of objects to be parameterised.
Templates are OpenShift specific, so Kubernetes does not understand them. Instead, we provide you with the individual descriptors required to run Infinispan (Helm chart to come...). This includes:


Before applying the descriptors, download and install Minikube. Then, set a profile, select the VM driver, give it enough CPU and memory for your experiments, and start it.

Once Minikube is running and you have the corresponding kubectl command line tool installed, simply call:

$ kubectl apply -f .

Once all pods are ready, you should verify the 3-node cluster has formed correctly (find out how in the README file).

When ready, you can start storing and retrieving data. The HTTP REST endpoint is particularly useful for these initial tests, to verify everything works as expected:

$ kubectl exec \
  -it infinispan-server-0 \
  -- curl -v -u test:changeme -H 'Content-type: text/plain' -d 'test' infinispan-server-http:8080/rest/default/stuff

Then:

$ kubectl exec -it infinispan-server-1 \
  -- curl -v -u test:changeme infinispan-server-http:8080/rest/default/stuff

Go and try it out and let us know what you think. You can find us on this Zulip chat :)

Cheers,
Galder

Thursday 15 November 2018

Hotrod clients C++ and C# 8.3.0.Final are out!

Dear Infinispanners,

The C++ and C# 8.3.0.Final releases are out!

Main features contained in this release are:
  • Cache Admin Operations: create and remove caches at runtime;
  • Counters: cluster-wide counters;
  • Transactions: run a list of operations transactionally;
  • Media Types: use different media types to encode (key, value) pairs.
Source code, binaries and docs are available as usual at the links below.

Thank you for reading,
The Infinispan Team


[1] Release notes for the 8.3.0 series
[2++] C++ code for 8.3.0.Final
[2#] C# code for 8.3.0.Final
[3] Downloads

Monday 12 November 2018

The fastest path to running Infinispan on OpenShift!

Creating an Infinispan Server cluster in OpenShift has never been easier! We've just given the OpenShift templates for Infinispan server their biggest makeover yet which should help both Infinispan and OpenShift users:

The repository has been simplified and flattened out to leave only essential information. Minishift is the preferred way to get started with Infinispan and OpenShift, so we've tailored the instructions for this setup.

OpenShift templates are now YAML based which is less verbose, but more importantly, allows Infinispan Server XML configuration to be shown as-is. This makes it easier to directly modify the XML in the template itself.

The fastest way to get started with Infinispan and OpenShift is to simply fire up Minishift, set a profile, check out our Infinispan OpenShift repository and then call:

oc create -f infinispan-ephemeral.yaml
oc new-app infinispan-ephemeral

These two simple steps will get you a single-node Infinispan Server running! A more detailed getting started guide can be found in the repository's README file.

Go and try it out and let us know what you think. You can find us on this Zulip chat :)

Cheers,
Galder

Friday 9 November 2018

Infinispan 9.4.1.Final and Infinispan Spring Boot Starter 2.1.0.Final are out!

Dear Infinispan and Spring Boot users,

We have just released Infinispan 9.4.1.Final and Infinispan Spring Boot 2.1.0.Final.

Highlights of the Infinispan release include:

Complete release notes can be read here.

Highlights of the Infinispan-Spring-Boot release include:
  • Upgrade Spring-Boot version to 2.1.0
  • Upgrade Infinispan version to 9.4.1
  • Integration with Spring Actuator, to expose production ready metrics (ISPN-9668)
  • Bug fixes
  • Additional code examples
You can find these releases in the maven central repository.

Please report any issues in our issue tracker and join the conversation in our Zulip Chat to shape up our next release.

Enjoy,

The Infinispan Team

Friday 2 November 2018

Near caching with Spring-Boot and Infinispan


We have recently released infinispan-spring-boot-starter 2.0.0.Final. This version supports Spring Boot 2.1 and Infinispan 9.4.0.Final.

Before this release, some important features - such as near caching - were only configurable by code.
From now on, we can set all of the Hot Rod client configuration using the hotrod.properties file or the Spring application YAML. The latter was an important community request.

Let's see how to speed up our application's performance with near caching!


Hot Rod

 

Just as a quick reminder, Infinispan can be used embedded in your application or in client/server mode. To connect your application to a server you can use an Infinispan client and the Infinispan “Hot Rod” protocol. Other protocols are available, such as REST, but Hot Rod is the recommended one since it supports the most Infinispan functionality.
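
For reference, here is a minimal Java sketch of connecting to a server with the Hot Rod client (the server address and the "default" cache name are just assumptions for illustration):

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodQuickStart {
   public static void main(String[] args) {
      // Point the client at a running Infinispan server (host/port assumed: 127.0.0.1:11222)
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer().host("127.0.0.1").port(11222);

      RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
      // "default" is an example cache name
      RemoteCache<String, String> cache = cacheManager.getCache("default");
      cache.put("greeting", "hello from Hot Rod");
      System.out.println(cache.get("greeting"));
      cacheManager.stop();
   }
}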

Near cache


From the Infinispan documentation: the Hot Rod client can keep a local cache that stores recently used data. Enabling near caching can significantly improve the performance of read operations such as get and getVersioned, since data can potentially be located locally within the Hot Rod client instead of having to go remote.

When should I use it? 


Near caching can improve the performance of an application when most of the accesses to a given cache are read-only and the accessed dataset is relatively small.
When an application is doing lots of writes to a cache, invalidations, evictions and updates to the near cache need to happen. In this scenario we probably won't get much benefit.

As I said in the introduction, the good news is that this feature can be activated just by configuration. The code doesn't change, so we can measure the benefits, if any, in a very straightforward way.

Spring-Boot


I have created a very simple application, available here. Maven, Java 8 and an Infinispan server are required to run it. You can download the server or use docker.


Docker: docker run -it -p 11222:11222 jboss/infinispan-server:9.4.0.Final

Standalone: PATH/infinispan-server-9.4.0.Final/bin/standalone.sh

Once the server is up and running, build the application using Maven:

>> infinispan-near-cache: mvn clean install

Writer 


This application loads the required data into a remote cache: a list of some of the Infinispan contributors over the last decade.

>> writer: mvn spring-boot:run


Reader 


The reader application performs 10,000 accesses against the contributors cache: using a random id, I call the get method 10,000 times. On my laptop the job gets done in ~4000 milliseconds.

>> reader-no-near-cache: mvn spring-boot:run
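
For illustration, the reader's timing loop is roughly equivalent to this sketch (hypothetical code, not the actual sample application; the cache handle and dataset size are assumptions):

import java.util.Random;
import org.infinispan.client.hotrod.RemoteCache;

public class ReaderLoop {
   // Times 10,000 random get calls against the contributors cache
   static long timeReads(RemoteCache<Integer, String> contributors, int datasetSize) {
      Random random = new Random();
      long start = System.currentTimeMillis();
      for (int i = 0; i < 10_000; i++) {
         contributors.get(random.nextInt(datasetSize)); // read a random contributor id
      }
      return System.currentTimeMillis() - start;
   }
}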


Activating the near cache


I need to configure two properties:
  • Near Cache Mode: DISABLED or INVALIDATED. The default value is DISABLED, so I turn it on with INVALIDATED.
  • Max Entries: an integer value that sets the maximum size of the near cache. There is no default value, so I set one.
The Hot Rod client configuration applies per client, not per cache (this might change in the future). With that in mind, note that configuring the previous properties will activate near caching for all caches. If you need to activate it just for some of them, add the following property:
  • Cache Name Pattern: a string pattern. For example, "i8n-.*" will activate near caching for all caches whose names start with "i8n-".

Configuration can be placed in hotrod-client.properties, the Spring Boot configuration, or code.

hotrod-client.properties

infinispan.client.hotrod.near_cache.mode=INVALIDATED
infinispan.client.hotrod.near_cache.max_entries=40
infinispan.client.hotrod.near_cache.cache_name_pattern=i8n-.*

application.yaml (or properties)

infinispan:
   remote:
     near-cache-mode: INVALIDATED
     near-cache-max-entries: 10
     near-cache-cache-name-pattern: i8n-.*

code 

With the Infinispan Spring-Boot Starter, I can add custom configuration using the InfinispanRemoteCacheCustomizer.
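
For example, a customizer bean for the near-cache settings above might look like this (a sketch; the InfinispanRemoteCacheCustomizer package and the builder calls are assumed from the 2.x starter and the 9.4 Hot Rod client):

import org.infinispan.client.hotrod.configuration.NearCacheMode;
import org.infinispan.spring.starter.remote.InfinispanRemoteCacheCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class NearCacheConfig {

   @Bean
   public InfinispanRemoteCacheCustomizer nearCacheCustomizer() {
      // Mirrors the hotrod-client.properties example above
      return builder -> builder.nearCache()
            .mode(NearCacheMode.INVALIDATED)
            .maxEntries(40)
            .cacheNamePattern("i8n-.*");
   }
}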


Results


My dataset contains 25 contributors. If I activate the near cache with a maximum of 12 entries and run my reader again, the job gets done in ~1900 milliseconds, which is already an improvement. If I configure it to hold the complete dataset, it gets done in ~220 milliseconds, which is a big improvement!

Conclusions


Near caching can help us speed up our client applications if configured properly. We can test our tuning easily because we only need to add some configuration to the client. Finally, the Infinispan Spring-Boot Starter helps us build services with Spring-Boot and Infinispan.

Further work will be done to help Spring-Boot users work with Infinispan, so stay tuned! Any feedback on the starter or any requirement from the community is more than welcome. Find us in the Zulip Chat for direct contact or post your questions on StackOverflow!