Thursday 31 January 2013

Infinispan 5.2.0 Final has landed!

Dear Infinispan community,

I am pleased to announce the much-awaited final release of Infinispan 5.2.0. With more than 100 new features and enhancements and 150 bug fixes, this is the most stable Infinispan version to date.
This release has spanned a period of 8 months, with a total of 4 Alpha, 6 Beta and 3 CR releases: a sustained effort from the core development team, the QA team and our growing community - a BIG thanks to everybody involved!

Remember to visit our downloads section to find the latest release and if you have any questions please check our forums, our mailing lists or ping us directly on IRC.

Cheers,
Mircea






Friday 25 January 2013

Infinispan 5.2.0.CR3 gets rid of RHQ annotations

The amount of feedback we've had on Infinispan 5.2.0.CR2 has been tremendous, so we decided that Infinispan was not quite ready to go Final and cut another candidate release, 5.2.0.CR3.

In this candidate release we've gotten rid of the RHQ annotations dependency, so Infinispan Core now has one fewer dependency thanks to the integration of RHQ annotations with our own JMX annotations.

The areas containing the most important fixes are Distributed Caches and the Hot Rod server, so if you're a user of these features, we'd highly recommend that you give CR3 a go. Check the full release notes for detailed information on the issues fixed.

Remember to visit our downloads section to find the latest release, and if you have any questions please check our forums, our mailing lists or ping us directly on IRC.

Cheers,
Galder

Wednesday 23 January 2013

Infinispan AS 7.x modules

The latest Infinispan 5.2.0.CR2 release includes a set of modules for JBoss AS 7.x. By installing these modules, it is possible to deploy user applications without packaging the Infinispan JARs within the deployments (WARs, EARs, etc), thus minimizing their size. In order not to conflict with the Infinispan modules which are already present within an AS installation, the modules provided by the Infinispan distribution are located within their own slot identified by the major.minor versions (e.g. slot="5.2").
In order to tell the AS deployer that we want to use the Infinispan APIs within our application, we need to add explicit dependencies to the deployment's MANIFEST:
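For example, a MANIFEST.MF along these lines pulls in the Infinispan module from the 5.2 slot (a sketch: the module name/slot must match your installation, and the services flag makes the module's service loader files visible to the deployment):

```
Manifest-Version: 1.0
Dependencies: org.infinispan:5.2 services
```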
If you are using Maven to generate your artifacts, mark the Infinispan dependencies as provided and configure your artifact archiver to generate the appropriate MANIFEST.MF file:
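A minimal sketch of the relevant POM fragments (the version and the use of the WAR plugin are illustrative - adjust them to your build): the Infinispan dependency is marked as provided so it is not bundled, and the archiver is configured to emit the Dependencies manifest entry:

```xml
<dependency>
   <groupId>org.infinispan</groupId>
   <artifactId>infinispan-core</artifactId>
   <version>5.2.0.CR2</version>
   <scope>provided</scope>
</dependency>

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-war-plugin</artifactId>
   <configuration>
      <archive>
         <manifestEntries>
            <Dependencies>org.infinispan:5.2 services</Dependencies>
         </manifestEntries>
      </archive>
   </configuration>
</plugin>
```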

Saturday 19 January 2013

Infinispan 5.2.0.CR2 is out!

Dear Infinispan users,

This is hopefully the last CR release of the long-expected Infinispan 5.2 series. It contains some final touches and bug fixes, especially around the new non-blocking state transfer functionality, as well as a very useful enhancement to the Hot Rod protocol (and the Java client) which allows users to fetch the list of keys existing in the cluster - a big thanks to Ray Tsang for contributing this feature!
For the complete list of features please refer to the release notes.
You can download the distribution or the maven artifact. If you have any questions please check our forums, our mailing lists or ping us directly on IRC!

Cheers,
Mircea

Saturday 12 January 2013

Infinispan memory overhead

Have you ever wondered how much Java heap memory is actually consumed when data is stored in Infinispan cache? Let's look at some numbers obtained through real measurement.

The strategy was the following:

1) Start Infinispan server in local mode (only one server instance, eviction disabled)
2) Keep calling full garbage collection (via JMX or directly via System.gc() when Infinispan is deployed as a library) until the difference in consumed memory by the running server gets under 100kB between two consecutive runs of GC
3) Load the cache with 100MB of data via respective client (or directly store in the cache when Infinispan is deployed as a library)
4) Keep calling the GC until the used memory is stabilised
5) Measure the difference between the final values of consumed memory after the first and second cycle of GC runs
6) Repeat steps 3, 4 and 5 four times to get an average value (first iteration ignored)
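The stabilization loop in steps 2 and 4 can be sketched as follows (a minimal illustration, not the actual measurement harness; the class and method names are made up here, and the real runs read the numbers from the verbose GC log rather than from Runtime):

```java
// Sketch of the "GC until stable" loop described above.
public class MemoryProbe {
    private static final long THRESHOLD_BYTES = 100 * 1024; // 100 kB, as in step 2

    // Force full GCs until two consecutive readings differ by less than 100 kB,
    // then return the stabilized amount of used heap memory in bytes.
    public static long stabilizedUsedMemory() {
        Runtime rt = Runtime.getRuntime();
        long previous = Long.MAX_VALUE;
        long current = used(rt);
        while (Math.abs(previous - current) >= THRESHOLD_BYTES) {
            previous = current;
            System.gc();
            try {
                Thread.sleep(200); // give the collector a moment to finish
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            current = used(rt);
        }
        return current;
    }

    private static long used(Runtime rt) {
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.out.println("Stabilized used memory: " + stabilizedUsedMemory() + " B");
    }
}
```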

The amount of consumed memory was obtained from a verbose GC log (related JVM options: -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log)

The test output looks like this: https://gist.github.com/4512589

The operating system (Ubuntu) as well as JVM (Oracle JDK 1.6) were 64-bit. Infinispan 5.2.0.Beta6. Keys were kept intentionally small (10 character Strings) with byte arrays as values. The target entry size is a sum of key size and value size.

Memory overhead of Infinispan accessed through clients


HotRod client

entry size -> overall memory
512B       -> 137144kB
1kB        -> 120184kB
10kB       -> 104145kB
1MB        -> 102424kB

So how much additional memory is consumed on top of each entry?

entry size/actual memory per entry -> overhead per entry
512B/686B                -> ~174B
1kB(1024B)/1202B         -> ~178B
10kB(10240B)/10414B      -> ~176B
1MB(1048576B)/1048821B   -> ~245B
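The per-entry figures above are derived from the overall numbers; as a worked example (the helper class is hypothetical, the input values come from the HotRod table), 100MB of 512B entries means 204800 entries, so 137144kB overall works out to ~686B per entry, i.e. ~174B of overhead:

```java
// Worked example: deriving the per-entry overhead from the measured totals.
public class OverheadCalc {
    public static long overheadBytes(long overallKB, long entrySizeBytes, long dataBytes) {
        long entries = dataBytes / entrySizeBytes;       // number of entries loaded
        double perEntry = overallKB * 1024.0 / entries;  // memory actually used per entry
        return Math.round(perEntry) - entrySizeBytes;    // subtract the payload itself
    }

    public static void main(String[] args) {
        long dataBytes = 100L * 1024 * 1024; // 100 MB of payload, as in the test setup
        // 512 B entries, 137144 kB overall -> ~686 B per entry -> ~174 B overhead
        System.out.println(overheadBytes(137144, 512, dataBytes)); // prints: 174
    }
}
```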

Memcached client (text protocol, SpyMemcached client)

entry size -> overall memory
512B       -> 139197kB
1kB        -> 120517kB
10kB       -> 104226kB
1MB        -> N/A (SpyMemcached allows max. 20kB per entry)

entry size/actual memory per entry -> overhead per entry
512B/696B               -> ~184B
1kB(1024B)/1205B        -> ~181B
10kB(10240B)/10422B     -> ~182B

REST client (Content-Type: application/octet-stream)

entry size -> overall memory
512B       -> 143998kB
1kB        -> 122909kB
10kB       -> 104466kB
1MB        -> 102412kB

entry size/actual memory per entry -> overhead per entry
512B/720B               -> ~208B
1kB(1024B)/1229B        -> ~205B
10kB(10240B)/10446B     -> ~206B
1MB(1048576B)/1048698B  -> ~123B

The memory overhead for individual entries seems to be more or less constant across different cache entry sizes.

Memory overhead of Infinispan deployed as a library


Infinispan was deployed to JBoss Application Server 7 using Arquillian.

entry size -> overall memory/overall with storeAsBinary
512B       -> 132736kB / 132733kB
1kB        -> 117568kB / 117568kB
10kB       -> 103953kB / 103950kB
1MB        -> 102414kB / 102415kB

There was almost no difference in overall consumed memory when enabling or disabling storeAsBinary.
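For context, storeAsBinary is a per-cache setting; a minimal configuration sketch, assuming the 5.2-era XML schema (the cache name is illustrative, and element names may differ slightly between versions):

```xml
<namedCache name="testCache">
   <storeAsBinary enabled="true"/>
</namedCache>
```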

entry size/actual memory per entry-> overhead per entry (w/o storeAsBinary)
512B/663B               -> ~151B
1kB(1024B)/1175B        -> ~151B
10kB(10240B)/10395B     -> ~155B
1MB(1048576B)/1048719B  -> ~143B

As you can see, the overhead per entry is roughly constant across different entry sizes, at about 151 bytes.

Conclusion


The memory overhead is slightly more than 150 bytes per entry when storing data into the cache locally. When accessing the cache via remote clients, the memory overhead is a little bit higher and ranges from ~170 to ~250 bytes, depending on remote client type and cache entry size. If we ignored the statistics for 1MB entries, which could be affected by a small number of entries (100) stored in the cache, the range would have been even narrower.


Cheers,
Martin

Tuesday 8 January 2013

Infinispan 5.2.0.CR1 is out!

Hi Infinispan users,

I'm very glad to announce the first CR from the 5.2 branch. It contains a handful of fixes and enhancements, especially around the non-blocking state transfer functionality (refer to the release notes for the complete list).

Also, here's a summary of the main features that have been developed in Infinispan 5.2:
  • Non-blocking state transfer, a much more efficient and flexible implementation of the functionality that allows Infinispan to serve requests while nodes join or leave
  • Cross-site replication, which allows backing up data between geographically distributed clusters in order to protect against catastrophic failures
  • Rolling upgrades of Hot Rod clusters (zero downtime for upgrades)
  • Various fixes and improvements for the Map/Reduce framework
You can download the distribution or the maven artifact. If you have any questions please check our forums, our mailing lists or ping us directly on IRC!


Cheers,
Mircea

Friday 4 January 2013

JSR 347 in 2013

Happy new year, everyone.

One of my goals for 2013 is to push JSR 347 into action again.  To kick start this, I propose a meeting among expert group members - anyone else with an interest in the JSR is welcome to attend as well.

Details are in my post to the mailing list.  Please respond to the mail list if you are interested in participating.

Cheers,
Manik