Thursday 7 March 2019

Subatomic Infinispan Client

Today, the Quarkus project (https://quarkus.io/) was released as a public beta. For those of you not familiar, Quarkus lets you write your enterprise applications as you have in the past with Hibernate/JAX-RS, but also compile them to a GraalVM native image. Running as a native image allows the application to start up in mere milliseconds, depending on the app, while using much less memory.

The Infinispan team is proud to announce that the Hot Rod Java client can be used in Quarkus and also supports being compiled to a native image. This lets you start up and connect to a remote Infinispan server faster than ever before.

If you want a quick and simple example of how to get this working, take a look at the quick start at https://github.com/quarkusio/quarkus-quickstarts/tree/master/infinispan-client. It covers configuring the client connection, cache injection and simple get/put operations.
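To give a feel for what that looks like, here is a minimal sketch of injecting the client and doing a put/get. The cache name and class below are placeholders rather than code from the quickstart; the RemoteCacheManager injection relies on the extension as described further down.

// Minimal sketch: inject the auto-configured RemoteCacheManager and use plain Hot Rod API calls.
// "mycache" is a placeholder; the cache must already exist on the server.
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

@ApplicationScoped
public class GreetingStore {

    @Inject
    RemoteCacheManager cacheManager; // injected by the Infinispan Client extension

    public void store(String name, String greeting) {
        RemoteCache<String, String> cache = cacheManager.getCache("mycache");
        cache.put(name, greeting); // simple put against the remote server
    }

    public String retrieve(String name) {
        RemoteCache<String, String> cache = cacheManager.getCache("mycache");
        return cache.get(name); // and the matching get
    }
}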

In addition to providing an easy way to create a GraalVM native image with the Infinispan client, the Infinispan Client Quarkus extension also provides the following features to help you get things done more quickly:
  1. Automatically Inject Important Resources
    1. RemoteCache (named)
    2. RemoteCacheManager
    3. CounterManager
  2. User-based ProtoStream Marshalling
  3. Querying (Indexed / Non-Indexed)
  4. Continuous Query
  5. Near Cache
  6. Authentication/Authorization
  7. Encryption
  8. Counters

More details on these features, as well as how to configure them, can be found at https://quarkus.io/guides/infinispan-client-guide.
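As a hint of what that configuration looks like, the client connection is set in application.properties; the property name and address below are based on the guide and may differ between versions:

# Point the Hot Rod client at a remote Infinispan server (host and port are placeholders).
quarkus.infinispan-client.server-list=localhost:11222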

Please let us know about any questions, concerns or suggestions in the usual places: the forum or chat. We expect to continue enhancing this extension and would love your feedback.

Wednesday 6 March 2019

Triple cache store release: Cloud, MongoDB and Cassandra

Today we present to you a trifecta of cache store releases that align with Infinispan 9.x.

Cassandra Cache Store

The Cassandra cache store now implements the publishEntries/publishKeys methods.
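These methods return reactive-streams Publishers, so keys and entries can be streamed from Cassandra without materialising everything in memory at once. As a rough illustration of how such a Publisher is typically consumed (the store wiring is omitted and the types simplified; this is not Cassandra-specific code):

// Illustrative sketch only: consuming a reactive-streams Publisher of keys, such as
// the one returned by a store's publishKeys method, using RxJava 2.
import io.reactivex.Flowable;
import org.reactivestreams.Publisher;

public final class KeyPrinter {
    public static void printAll(Publisher<Object> keys) {
        Flowable.fromPublisher(keys)
                .blockingForEach(key -> System.out.println("key = " + key)); // stream keys one by one
    }
}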

Cloud Cache Store

The Cloud cache store uses the Apache jclouds library to store data on cloud storage providers such as Amazon's S3, Rackspace's Cloud Files or any other provider supported by jclouds.
The store has been updated to the Infinispan 9.x persistence SPI and uses jclouds 2.1.x.

MongoDB Cache Store

This cache store has also been updated to the Infinispan 9.x persistence SPI.

You can get documentation and Maven coordinates from our Cache Store page.

Tuesday 5 March 2019

Enhanced JGroups configuration

Infinispan uses JGroups as its underlying clustering layer. To configure the finer details of clustering (discovery, flow control, cross-site, etc.), you have to provide a separate XML file with the desired configuration and reference it from your Infinispan XML file as follows:
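Something along these lines, where the file and stack names are placeholders:

<infinispan>
   <jgroups>
      <stack-file name="my-stack" path="my-jgroups-tcp.xml"/>
   </jgroups>
   <cache-container>
      <transport cluster="mycluster" stack="my-stack"/>
   </cache-container>
</infinispan>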


For simple configurations this is usually fine, but configuring complex setups, such as cross-site replication, means juggling multiple files (one for the local stack, one for the cross-site stack and one for the relay configuration).

Starting with Infinispan 10 Alpha2 we have introduced a number of changes to make your life with JGroups configurations a lot easier.

Default stacks

Infinispan now comes with two pre-declared stacks: tcp and udp. Using them is as simple as just referencing their names in the <transport> element.
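For example, with the cluster name being a placeholder:

<cache-container>
   <transport cluster="mycluster" stack="udp"/>
</cache-container>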

Inline stacks

Inlining a stack means you can put the JGroups configuration inside the Infinispan one as follows:
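A sketch of what that can look like; the schema versions and the protocol list here are abbreviated and illustrative:

<infinispan xmlns="urn:infinispan:config:10.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:infinispan:config:10.0 http://www.infinispan.org/schemas/infinispan-config-10.0.xsd
                                urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.0.xsd">
   <jgroups>
      <stack name="mystack">
         <!-- JGroups protocols declared inline, validated against the JGroups schema -->
         <TCP bind_port="7800"/>
         <MPING/>
         <MERGE3/>
         <FD_ALL/>
         <!-- remaining protocols omitted for brevity -->
      </stack>
   </jgroups>
   <cache-container>
      <transport cluster="mycluster" stack="mystack"/>
   </cache-container>
</infinispan>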

You can use the full JGroups schema, and by using XML namespaces you get full validation.

Stack inheritance

Most of the time you want to reuse one of the pre-declared stacks but just override some of the parameters (e.g. discovery) to suit your environment. The following example creates a new tcpgossip stack which is based on the default tcp stack but replaces the discovery protocol with TCPGOSSIP:
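A sketch of such a stack; the gossip router address is a placeholder and the ispn prefix is assumed to be bound to the Infinispan configuration namespace on the root element:

<jgroups>
   <!-- tcpgossip inherits everything from the built-in tcp stack... -->
   <stack name="tcpgossip" extends="tcp">
      <!-- ...but swaps the default MPING discovery protocol for TCPGOSSIP -->
      <TCPGOSSIP initial_hosts="${jgroups.tunnel.gossip_router_hosts:localhost[12001]}"
                 ispn:stack.combine="REPLACE" ispn:stack.position="MPING"/>
   </stack>
</jgroups>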


In the above example you can see that we have enhanced the JGroups protocol declarations with two new attributes, ispn:stack.combine and ispn:stack.position, which control how and where protocol changes are applied to the parent configuration to obtain the new configuration. stack.combine can be one of COMBINE (the default, which overrides any specified attributes), REPLACE (which completely replaces the protocol and resets all attributes), REMOVE (which removes the protocol) and INSERT_AFTER (which places this protocol in the stack immediately after the protocol specified by stack.position).

Multiple stacks and Cross-site

The inline configuration really shows its usefulness in cross-site configurations. In fact, the JGroups stack declaration has been extended with a special element which replaces the need for a separate relay XML file and can reference other stacks just by name. The following configuration uses the default udp stack for the local cluster transport and uses the default tcp stack for connecting to a remote site:
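A sketch of such a configuration; site names are placeholders and the exact element names should be checked against the documentation:

<jgroups>
   <!-- Local cluster transport: the default udp stack extended with RELAY2 for cross-site -->
   <stack name="xsite" extends="udp">
      <relay.RELAY2 site="LON" xmlns="urn:org:jgroups"/>
      <!-- The remote site is reached over the default tcp stack, referenced simply by name -->
      <remote-sites default-stack="tcp">
         <remote-site name="NYC"/>
      </remote-sites>
   </stack>
</jgroups>
<cache-container>
   <transport cluster="mycluster" stack="xsite"/>
</cache-container>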

Having the entire configuration in a single place greatly simplifies management. Of course you can combine all of the above features to obtain the configuration you need for your environment. You can find more details and examples in the documentation.
Enjoy!
Tristan

Monday 4 March 2019

First OpenShift Operator pre-release for Infinispan is here!

The Infinispan Operator is a new method of packaging, deploying and managing Infinispan clusters on OpenShift. You can think of it as the runtime that manages Infinispan clusters on OpenShift.

We've just done our first Infinispan Operator pre-release, version 0.1.0, which allows you to easily boot up an Infinispan cluster on OpenShift.

Using the operator is as simple as installing the Infinispan Operator (requires admin access) on OpenShift and then creating a YAML descriptor that defines the Infinispan cluster. The example below shows how to create a 3-node Infinispan cluster:
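A minimal descriptor might look roughly like this; the field names follow the operator's custom resource definition, so check the tutorial linked below for the exact schema of this pre-release:

# example-infinispan.yaml: request a 3-node Infinispan cluster (field names are illustrative)
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 3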

And then call:

$ oc apply -f example-infinispan.yaml

A more detailed tutorial on using the Infinispan Operator can be found here. We highly recommend you give it a go and let us know what you think.

Over the next few versions we'll be adding more features that make the most of the capabilities the Operator framework offers to automatically manage the health and status of running Infinispan clusters.

Please also note that as we work towards the 1.0 release, some things might change :)

Cheers
Galder