Wednesday, 28 February 2018

Infinispan 9.2.0.Final


Infinispan 9.2.0.Final "Gaina" is out!


Our three-month time-boxing plan for minor releases got a little skewed this time in order to accommodate some additional overhauls. This also means that, for a minor release, this one is much meatier than usual.

Core improvements

  • Conflict resolution
    Automatic conflict resolution after a partition merge is now supported for all partition handling strategies and is enabled by default. Furthermore, it is now possible to deploy custom EntryMergePolicy implementations to the server.
  • Reactive streams-based distributed iteration improvements
    The distributed iterator now uses fewer threads and allows for efficient parallel retrieval, providing improved throughput.
  • Biased reads for scattered caches
    The originator can read the ‘backup’ copy locally until the data gets overwritten again. Together with improved read performance, this migrates data to the nodes that use it.
  • Off-heap sizing
    Off-heap storage now requires less overhead per entry and provides more accurate sizing, allowing you to make the most of your available memory.
  • Exception-based eviction
    A new "eviction" strategy that, instead of removing old entries, prevents new entries from being inserted (supported by all memory storage and eviction types).

API improvements

  • Multimap caches
    Available for both Embedded and Hot Rod, these are maps that can store multiple values for the same key.
  • Clustered Counters
    Clustered counters are now available for Hot Rod and in non-clustered deployments.
  • Clustered Locks
    Available in embedded mode, clustered locks allow synchronizing concurrent access between nodes in the same cluster.
  • Wildcard configurations
    Implicitly use a predefined configuration for all caches whose name matches a wildcard. This is particularly useful when using Infinispan through an API which doesn't allow for additional configuration properties (such as JCache).
  • Cluster-wide cache admin with optional persistence
    The CacheManager API has been enhanced with methods to create/destroy caches across a cluster, in both Embedded and Hot Rod scenarios (REST will come in 9.3). Optionally, configurations can be made persistent across restarts.
  • Cache Stream
    The collect() method is now overloaded to take a Supplier of a Collector, making collect() in clustered environments more user-friendly, as sketched below.
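
Since java.util.stream.Collectors instances are not serializable, the Supplier overload lets each node create its own Collector locally. A minimal sketch (the cache and its String/Integer contents are assumptions):

    import java.util.Map;
    import java.util.stream.Collectors;

    // Collect a distributed stream into a local Map; the Supplier is
    // invoked on each node, so the Collector itself never travels.
    Map<String, Integer> snapshot = cache.entrySet().stream()
          .collect(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));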

Data Interoperability


Transcoding is a powerful new feature which allows for transparent conversion between a number of formats across different endpoints. For example, it is now possible to write ProtoBuf-encoded data through the Hot Rod endpoint and retrieve that same data as a JSON document through the REST endpoint and vice versa. Additionally, such data is also indexable and queryable.
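
As a sketch of the REST half of that round trip (cache name, key and port are hypothetical), a client only needs content negotiation to get the JSON view of a value written through Hot Rod:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Scanner;

    // Ask the REST endpoint to transcode the stored value to JSON
    URL url = new URL("http://localhost:8080/rest/books/978-3-16");
    HttpURLConnection con = (HttpURLConnection) url.openConnection();
    con.setRequestProperty("Accept", "application/json");
    try (Scanner s = new Scanner(con.getInputStream()).useDelimiter("\\A")) {
       System.out.println(s.next());   // the entry as a JSON document
    }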

Indexing and Query

  • POJO queries over Hot Rod
    It is now possible to directly use Hibernate Search-annotated objects over Hot Rod through JBoss Marshalling/Java serialization, without the need for ProtoBuf (see the sketch after this list).
  • Broadcast queries
    Clustered queries have been unified with non-clustered queries under a single API, making their use transparent.
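
As a sketch, a remote query looks like this (the Book entity and cache name are hypothetical; with POJO queries the entity carries Hibernate Search annotations rather than ProtoBuf metadata):

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.Search;
    import org.infinispan.query.dsl.Query;
    import org.infinispan.query.dsl.QueryFactory;
    import java.util.List;

    RemoteCache<String, Book> books = remoteCacheManager.getCache("books");
    QueryFactory qf = Search.getQueryFactory(books);
    Query query = qf.create("from com.example.Book b where b.author = 'Sanne'");
    List<Book> results = query.list();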


Infinispan Server

  • Rebased on WildFly 11
    The server baseline has been updated to WildFly 11.
  • Async Hot Rod server
    The Hot Rod server now uses async ops, sparing CPU cycles lost to context switching and reducing latency.
  • Queries over REST
    The REST endpoint now supports running Ickle queries. This is fully integrated with the above-mentioned JSON support, so your results will be returned to you as JSON documents.
  • Netty Hot Rod Client
    The Hot Rod Java client network layer has been completely rewritten to use Netty, bringing true asynchronous calls and some performance benefits.


Management, monitoring and logging

  • Console support for counters
  • Improved remote protocol access logging
  • Jolokia integrated as part of the server


Infinispan on OpenShift


We have been doing a lot of work in making Infinispan a first-class citizen of OpenShift. Check out the OpenShift templates for more details.

Integrations

  • JCache 1.1
    This release is now aligned with JCache 1.1.
  • Hibernate second-level cache provider
    Traditionally shipped by our friends on the Hibernate ORM team, this component has now moved over to us. This release includes a provider for both Hibernate 5.1 and 5.2.
  • Azure cloud discovery
    Courtesy of JGroups' extras, we now support node discovery in Azure.


The codename


In the grand old tradition of giving major and minor Infinispan releases a beer-themed codename, 9.2 is no exception.

"Gaina", which means "chicken" in the milanese dialect, also happens to be one of the great beers of the Birrificio Lambrate in Milan.


Onwards to 9.3


We have already started working on our next release, 9.3, which should be with you at the end of May. This will continue the work of making Infinispan fully asynchronous from the inside out, reducing resource usage and increasing performance. We are also working on a new modular API which will improve usability, increase interoperability between embedded and remote scenarios and take advantage of reactive designs. Transactions should finally make their appearance in Hot Rod, and security will be greatly enhanced by taking advantage of the great work done by our friends over on the Elytron team. We have much more planned, so please consult our roadmap for details.


Download, learn and play


You will find downloads, documentation, tutorials, quickstarts and demos over on our website.

Please let us know on our forum, on IRC or on our issue tracker if you have any issues with this release, if there is any feature you would like to see in the future, or if you just want to chat.


Wednesday, 21 February 2018

Infinispan 9.2.0.CR3

This should have been the announcement for Final, but we discovered a number of performance regressions as well as a few important bugs that needed fixing. We also slipped in a few features and improvements. So, without further ado, here's what is new and noteworthy in Infinispan 9.2.0.CR3:
  • Various component upgrades
    • Netty 4.1.21
    • Hibernate Search 5.9.0.Final
    • Protostream 4.2.0.CR1
  • Features/Enhancements
    • Azure discovery
    • Use async ops in the Hot Rod server
    • Simplified client configuration when security is enabled
  • Lots of documentation updates
    • REST server changes
    • Data Encoding
    • Server tasks
  • And many bugfixes

Get your artifacts from maven, the distributions from our download page, the fixed issues from our issue tracker and read the updated documentation. Come and talk to us on IRC (#infinispan on Freenode) or ask questions on the forum.

Monday, 19 February 2018

Distributed iteration improvements

Infinispan hasn't always provided a way to iterate over the entries of a distributed cache. In fact, distributed iteration didn't arrive until Infinispan 7. Then in Infinispan 8, with the move to Java 8, we fully integrated iteration into distributed streams, which brought some minor performance improvements.

We are proud to announce that Infinispan 9.2 brings even more improvements. This release contains no API changes, although those will surely come in the future. This one is purely about performance and resource utilization.

New implementation details


There are a few different aspects that have been changed. A lot of these revolve around the number of entries retrieved at once, which, if you are familiar with distributed streams, can be configured via the distributedBatchSize method, as shown below. Note that if this is not specified, it defaults to the chunk size in state transfer.
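
For reference, a minimal sketch of setting the batch size on a distributed stream iterator (the cache type and size are illustrative):

    import java.util.Iterator;
    import java.util.Map;

    // Retrieve at most 128 entries per batch while iterating
    Iterator<Map.Entry<String, String>> it = cache.entrySet().stream()
          .distributedBatchSize(128)
          .iterator();
    while (it.hasNext()) {
       Map.Entry<String, String> entry = it.next();
       // process the entry
    }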

Entry retrieval is now pull based instead of push

Infinispan core (embedded) has added rxjava2 and reactive streams as dependencies, and all of the old push-style iterator code has been rewritten in a pull style to fully utilize the Publisher and Subscriber interfaces.

With this we only pull up to batchSize entries at a time from any set of nodes. The old style used push with call-stack blocking, which could return up to two times that number of entries. Also, since we aren't performing call-stack blocking, we don't have to waste threads: the calls to retrieve entries are done asynchronously and finish very quickly irrespective of user interaction. The old method required multiple threads to be reserved for this purpose.

Streamed batches

The responses from a remote node are written directly to the output stream, so no intermediate collections are allocated. This means we only have to iterate over the data once, as we retain the iterator between requests. On the originator we still have to store the batches in a collection to be enqueued for the user to pull.

Rewritten Parallel Distribution

Great care was taken to implement parallel distribution in a way to vastly reduce contention and ensure that we properly follow the batchSize configuration.

When parallel distribution is in use, the new implementation starts 4 remote node requests sharing the batch size (so each one gets 1/4). This way we can guarantee that we only hold the desired number of entries irrespective of the number of nodes in the cluster. The old implementation would request batchSize entries from all nodes at the same time, so not only did it reserve a thread per node, it could easily swamp your JVM memory, causing OutOfMemoryErrors (which no one likes). The latter alone made us force the default to be sequential distribution when using an iterator.

The old implementation would write entries from all nodes (including local) to the same shared queue. The new implementation has a different queue for each request, which allows for faster queues with no locking to be used.

Due to these changes and improved isolation between threads, we can now make parallel distribution the default setting for the iterator method. And as you will see, this has improved performance nicely.

Performance


We have written a JMH test harness specifically for this blog post, testing the 9.1.5.Final build against the latest 9.2.0.SNAPSHOT. The test runs by default with 4GB of heap and 6 nodes in a distributed cache with 2 owners, with varying entry counts, entry sizes and distributed batch sizes.

Due to the variance in each test, a large number of tests were run with different permutations to make sure they covered a large number of cases. The JMH test that was run can be found at github. All the default settings were used for the run, except -t4 (run with 4 worker threads). This was all run on my measly laptop (i7-4810MQ and 16 GB) - maxing out the CPU was not a hard task.

CAVEAT: The tests don't do anything with the iterator contents; they just pull entries as fast as they can. Obviously, if you do a lot of processing between iterations, you will likely not see as large a performance increase.

The full results can be found here. They show the operations per second for each permutation and the difference between versions (green is a gain of 5% or more, red a loss of 5% or more).


Operation                   | Average Gain | Code
Specified Distribution Mode | 3.5%         | .entrySet().stream().sequentialDistribution().iterator()
Default                     | 11%          | .entrySet().iterator()
No Rehash                   | 14%          | .entrySet().stream().disableRehashAware().iterator()

The above 3 rows show a few different ways you could have been invoking the iterator method. The second row is probably by far the most common case. Here you should see around an 11% increase in performance (results will vary). This is due to the new pulling method as well as parallel distribution becoming the default running mode. It is unlikely you were using the other 2 methods, but they are provided for a more complete view.

If you were specifying a distribution mode manually, whether sequential or parallel, you will only see a run a few percent faster (3.5%), but every little bit helps! Also, if you can switch to parallel, you may want to think about doing so.

You can also see that if you were running with rehash disabled before, the gains are even larger (14%). That doesn't even include the fact that no-rehash was already 28% faster than the default, which means it is now about 32% faster in general. So if you can get away with an at-most-once guarantee, disabling rehash will provide the best throughput.

What's next?


As was mentioned, this is not exposed to the user directly. You still interact with the iterator as you normally would. We should remedy this at some point.

Expose new method

We would love to eventually expose a method that returns a Publisher directly to the user, so that they can get the full benefits of having a pull-based implementation underneath.

This way, any intermediate operations applied to the stream beforehand would be distributed, and anything applied to the Publisher would be done locally. And just like the iterator method, this publisher would be fully rehash-aware if configured to do so, making sure all entries are delivered in an exactly-once fashion (rehash disabled guarantees at most once).

Another side benefit is that the Subscriber methods could be called on different threads, so there is no overhead required on the Infinispan side for coordinating results into queues. Thus a Subscriber should be able to retrieve all entries faster than the plain iterator.
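
Purely as a hypothetical sketch of what such an API might look like (entrySetPublisher() is an invented name; nothing like it exists in 9.2), the result could be consumed straight from rxjava2:

    import io.reactivex.Flowable;
    import org.reactivestreams.Publisher;
    import java.util.Map;

    // HYPOTHETICAL: entrySetPublisher() does not exist yet; this only
    // illustrates how a Publisher-returning method could be consumed.
    Publisher<Map.Entry<String, String>> publisher = cache.entrySetPublisher();
    Flowable.fromPublisher(publisher)
            .filter(e -> e.getValue().length() > 10)   // applied locally
            .subscribe(e -> System.out.println(e.getKey()));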

Java 9 Flow

Also, many of you may be wondering why we aren't using the new Flow API introduced in Java 9. Luckily, the Flow API is a 1:1 mirror of reactive streams, so whenever Infinispan starts supporting Java 9 interfaces/classes, we hope to properly expose these as the JDK classes.

Segment Based Iteration 

With Infinispan 9.3, we hope to introduce segment-aware iteration for the data container and cache stores. This means that when iterating over either, we would only have to process entries that map to a given segment. This should reduce the time and processing required for iteration substantially, especially for cache stores. Keep your eyes out for a future blog post detailing this as 9.3 development commences.

Give us Feedback

We hope you find a bit more performance when working with distributed iteration. We also value feedback on what you want our APIs to look like, as well as any bug reports. As always, let us know at any of the places listed here.

Sunday, 18 February 2018

Thanks JFokus!!

We're now back from JFokus and we'd like to thank organizers, attendees, volunteers and sponsors for making JFokus a very enjoyable experience! :)

From an Infinispan perspective, we started the week with a Streaming Data deep-dive session presented together with Clement Escoffier. This was a 3-hour session, so there was plenty to go through, but we managed to do it on time. The final demo did not fully work, but this is something we will be improving in the near future. Slides can be found in [1] [2] [3] [4] [5] [6] and the code can be found here. This session was not recorded.

The next day I gave a talk on streaming data analysis on top of Kubernetes, where I went through some of the topics explained in the deep dive. This was mostly a live-coding session showing how to work with streaming data on top of OpenShift/Kubernetes running on Google Cloud. This session was recorded; I'll keep an eye out for when the video becomes available and share it here. The code from this session can be found here, the slides here and the live-coding instructions here.

The rest of the conference was a blast, with many networking opportunities. During this networking I started working on an RxJava2 API facade for the Infinispan remote API, which would make it easier to fit with other reactive toolkits out there, such as Vert.x :). More news on this soon.

Cheers,
Galder

Thursday, 15 February 2018

Hotrod clients C++/C# 8.2.0.Beta1 are out!

Dear Infinispanners,
C++ and C# 8.2.0.Beta1 releases are available!

These releases contain all the 8.2.0 features.

Worth a mention is an improvement in the remote execution API: we moved the basic JBossMarshaller implementation from the tests to the distribution in order to simplify data management on the application side. Test examples [1] and [2] have been updated accordingly.

The next step will be a CR release containing improvements to the API docs (Doxygen).

Check the release notes, browse the source code (C++, C#) or download the releases!

Cheers,
The Infinispan Team

Wednesday, 7 February 2018

Data Container Changes Part 3

Just over a year ago we detailed some improvements to the data container, including the availability of off-heap storage, in part 2. Infinispan 9.2 brings quite a few fixes for off-heap, especially around memory size estimation. There is also a brand new "eviction" strategy with a bit of a twist.

Eviction Strategy Resurrected


Some of you may remember that Infinispan used to have an eviction strategy. It was originally used to decide which eviction algorithm to use, such as LRU or LIRS, and was removed when the new data container was introduced. Well... it is back again, but it will be used for a slightly different purpose.

The eviction strategy still has NONE & MANUAL, which work exactly as before.

Remove strategy


There is a new REMOVE strategy that is configured by default if the eviction size is greater than 0. This strategy essentially enables eviction, removing old entries as new ones are inserted.

Exception strategy


We have a brand new "eviction" strategy that provides new functionality. It is unique in that it doesn't really evict, but rather prevents entries from being inserted. This is the EXCEPTION strategy, which blocks new entries from being inserted (or updated, if they exceed the memory size) by throwing a ContainerFullException when the configured size is reached.

This strategy only works on transactional caches with two-phase commit always enabled. It can be useful if you want to cap the number of entries and give priority to the entries already present. It also performs better than REMOVE, since it doesn't have to do the bookkeeping required to know which entries to remove.

Note this strategy works across all storage types (OBJECT, BINARY and OFFHEAP) and with both MEMORY and SIZE based "eviction" types. This makes it just as flexible as the REMOVE eviction strategy, and we hope people find uses for it.

How to Configure EXCEPTION Strategy


This is how you can enable MEMORY-based EXCEPTION "eviction" using XML configuration:
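
A minimal sketch, assuming the 9.2 configuration schema (the cache name and size are illustrative; note the transactional requirement mentioned above):

    <local-cache name="exception-cache">
       <transaction mode="NON_XA"/>
       <memory>
          <!-- BINARY storage so entry sizes can be measured; throw
               ContainerFullException once ~10 MB is stored -->
          <binary size="10000000" eviction="MEMORY" strategy="EXCEPTION"/>
       </memory>
    </local-cache>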
And this is how you configure the same thing programmatically:
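
Again as a sketch, using the programmatic builder (method names per the 9.2 API):

    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.cache.StorageType;
    import org.infinispan.eviction.EvictionStrategy;
    import org.infinispan.eviction.EvictionType;
    import org.infinispan.transaction.TransactionMode;

    ConfigurationBuilder builder = new ConfigurationBuilder();
    builder.transaction()
           .transactionMode(TransactionMode.TRANSACTIONAL);  // EXCEPTION needs transactions
    builder.memory()
           .storageType(StorageType.BINARY)
           .evictionType(EvictionType.MEMORY)                // size below is in bytes
           .evictionStrategy(EvictionStrategy.EXCEPTION)
           .size(10_000_000L);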

Off Heap Memory Size Allocations & Estimations


Previously, off-heap memory-based eviction only counted the allocated memory chunks for the stored entries themselves. This unfortunately meant that the size estimate was a bit lower than it should have been. We have improved a few things since then, including reducing the overhead of our allocations. Note that none of the following requires configuration changes; users get the benefits automatically.

Reduced per object overhead


Previously, for immutable entries with eviction, Infinispan itself used to allocate two chunks of memory: one of 28 bytes, plus 8 bytes added to the actual object. Now we only allocate an additional 16 bytes in the object's memory block itself when using eviction, saving the extra allocation and requiring less on the object. Due to memory allocation overhead this saves much more than 20 bytes, as the allocator also has its own per-allocation overhead.

We also shaved 4 bytes off all entries when expiration is not used, meaning an immutable cache entry without eviction requires only 21 bytes of overhead from Infinispan when using off-heap (retained in the same allocation block).

Per allocation memory sizing estimations


Internally, Infinispan allocates a new chunk of memory for each object. This is currently done to let the underlying OS allocator handle concerns such as fragmentation or compaction (if the allocator does so). Unfortunately, this means that each object carries its own overhead from the allocator. We now take that into account when estimating the memory used, by adding 8 bytes of overhead and aligning to 16 bytes, which seems to be a pretty common way for allocators to work. We could conceivably allow these values to be tweaked, but they are currently hard-coded.
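
In code, the estimate amounts to something like this (a sketch of the arithmetic described above, not the actual internal method):

    // Estimate what an allocation of objectBytes really costs natively,
    // assuming an 8-byte allocator header and 16-byte block alignment.
    static long estimateAllocation(long objectBytes) {
       long withHeader = objectBytes + 8;   // allocator bookkeeping
       return (withHeader + 15) & ~15L;     // round up to a multiple of 16
    }
    // e.g. a 52-byte entry is estimated as (52 + 8) rounded up to 64 bytes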

Accounting for Address Count


As was mentioned in the prior blog post about off-heap, we allocate a single block of memory to hold address counters for our lookups when using off-heap storage. Unfortunately, we didn't account for that block in the memory eviction count. We do now, which means your memory eviction size must be greater than the address count, rounded up to the nearest power of 2, multiplied by 8. What a mouthful... For example, an address count of 2^20 (already a power of two) reserves 8 MB for the counter block alone.

Wrap up


Off-heap has been overhauled quite a bit to reduce memory usage, fix bugs and more accurately estimate the memory used. We hope that these changes, along with the new eviction strategy, are welcome additions to your applications.

Please make sure to contact us if you have any feedback, find any bugs or have any questions! You can get in contact with us at the various places listed on our website.

Friday, 2 February 2018

A different kind of template: wildcards

Infinispan's configuration templates are an extremely flexible way to create multiple caches using the same configuration. Configuration inheritance works by explicitly declaring the configuration a specific cache should use.

This works fine when you know the caches you are going to use upfront, but in more dynamic scenarios, this might not be possible. Additionally, if you are using the JCache API, there is no way for you to specify the configuration template you want to use.

Infinispan 9.2 introduces an alternative way to apply templates to caches: wildcards. By creating a template with a wildcard in its name, e.g. `basecache*`, any cache whose name matches the template name will inherit that configuration.

Let's show an example:
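
A sketch, assuming the 9.2 schema (the expiration setting is just an illustrative payload for the template):

    <infinispan>
        <cache-container default-cache="default">
            <local-cache name="default"/>
            <!-- any cache whose name matches basecache* inherits this -->
            <local-cache-configuration name="basecache*">
                <expiration lifespan="60000"/>
            </local-cache-configuration>
        </cache-container>
    </infinispan>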

Above, caches `basecache-1` and `basecache-2` will use the `basecache*` configuration. This behaviour also applies when retrieving caches programmatically:
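
For instance (a sketch; the cache names simply need to match the wildcard):

    import org.infinispan.Cache;
    import org.infinispan.manager.DefaultCacheManager;
    import org.infinispan.manager.EmbeddedCacheManager;

    EmbeddedCacheManager manager = new DefaultCacheManager("infinispan.xml");
    // Both caches are created on demand and inherit the basecache* template
    Cache<String, String> cache1 = manager.getCache("basecache-1");
    Cache<String, String> cache2 = manager.getCache("basecache-2");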


When using the JCache API, using the XML file above and the following code will achieve the same result:
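
Something along these lines (a sketch; since JCache offers no way to name a template, the wildcard does the work):

    import javax.cache.Cache;
    import javax.cache.CacheManager;
    import javax.cache.Caching;
    import javax.cache.configuration.MutableConfiguration;
    import java.net.URI;

    CacheManager cacheManager = Caching.getCachingProvider().getCacheManager(
          URI.create("infinispan.xml"), Thread.currentThread().getContextClassLoader());
    // The name matches basecache*, so the wildcard template is applied
    Cache<String, String> cache = cacheManager.createCache(
          "basecache-3", new MutableConfiguration<String, String>());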


NOTE: If a cache name matches multiple wildcards, i.e. it is ambiguous, an exception will be thrown.

I will be introducing other new features that Infinispan 9.2 brings to cache configuration in an upcoming blog post. Stay tuned!

Infinispan coming to JFokus!!



The Infinispan team is on the move again! After Katia's trip to Snowcamp, it's my turn to head to JFokus, Sweden's largest developer conference.

For JFokus we've morphed the streaming data workshop we delivered last year at Devoxx Belgium and Codemotion Madrid into a 3-hour deep-dive tutorial. It will be delivered on Monday 5th February at 13:30 local time.

On top of that, I'll be delivering a talk on streaming data analysis with Kubernetes where Infinispan will be featured. If you're interested, make sure you come on Tuesday, 6th February at 14:00 local time.

So, if you're coming to JFokus and you're interested in data grids, streaming data or similar topics, make sure you attend our talks.

Cheers,
Galder

Thursday, 1 February 2018

Executing Code in the Grid

Infinispan has quite a few spectacular ways of executing code in the grid. But I bet there are some you haven't heard of or aren't really familiar with, which is disappointing. I hope to fix this, however: we have added more information to the user guide, and I wanted to detail that here in this blog.

As I am sure you are aware, Infinispan can be used in embedded mode (in your JVM) and remote mode (in a standalone server). Unfortunately, this means there are different ways of executing code depending on which mode you are in.

Embedded

The embedded mode has the most features available and is the easiest to use. The appropriate section can be found here.

One question that seems to come up more than others is how a user can perform cache operations on all data, such as removing all elements that match a given filter. If you are curious about this one, you should check out the Examples section, and in particular the example named "Remove specific entries", as it details how a user would do exactly that. A sketch of that approach follows.
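
Roughly along these lines (a sketch; the predicate is hypothetical):

    import org.infinispan.Cache;

    static void removeStaleEntries(Cache<String, String> cache) {
       // The BiConsumer overload of forEach hands each node its local
       // Cache reference, so removal happens where the data lives.
       cache.entrySet().parallelStream()
            .filter(e -> "stale".equals(e.getValue()))
            .forEach((c, e) -> c.remove(e.getKey()));
    }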

I should also point out the new Cluster Executor section. Just as Streams replaced Map Reduce, Cluster Executor is here to replace the old Distributed Executor. With Cluster Executor and Distributed Streams there is a clearer distinction between executing code on nodes (Cluster Executor) and executing code based on data (Distributed Streams). A sketch follows.
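
A minimal sketch of running code on every node with Cluster Executor:

    import org.infinispan.manager.EmbeddedCacheManager;

    static void greetCluster(EmbeddedCacheManager cacheManager) {
       // The function runs on each node; the TriConsumer receives every
       // node's address, result and any exception back on the caller.
       cacheManager.executor().submitConsumer(
             ecm -> "Hello from " + ecm.getAddress(),
             (address, greeting, throwable) -> {
                if (throwable == null) {
                   System.out.println(address + " said: " + greeting);
                }
             });
    }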

Server

The server is a bit more interesting and, unlike embedded, usually requires configuration ahead of time. The details can be found in this section. The benefit of the server is that most of these mechanisms can invoke embedded operations internally.

Scripting is by far the easiest to use - just insert your script and execute - but it has some limitations that we haven't been able to fix yet. A sketch follows.
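
A sketch of registering and running a script over Hot Rod (the script name, body and parameters are hypothetical):

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import java.util.HashMap;
    import java.util.Map;

    RemoteCacheManager rcm = new RemoteCacheManager();
    // Scripts are registered by storing them in the special script cache
    RemoteCache<String, String> scripts = rcm.getCache("___script_cache");
    scripts.put("multiply.js",
          "// mode=local,language=javascript\n" +
          "multiplicand * multiplier");

    Map<String, Object> params = new HashMap<>();
    params.put("multiplicand", 10);
    params.put("multiplier", 20);
    Object result = rcm.getCache().execute("multiply.js", params);  // 200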

Server tasks can run pretty much any Java code, but require registering classes beforehand. Unfortunately, this section of the documentation still needs to be filled in; it should be added in the near future. Until then, if you are interested, you can look at some tests in github.

Takeaway

I hope this has helped you find out more about the various ways of executing arbitrary code near your data. If you have any questions or need more clarification about the features highlighted here, please don't hesitate to let us know at any of these places.