
Thursday, 24 May 2012

How to configure Infinispan with transactions, backed by relational DB on JBoss AS 7 vs. Tomcat 7


Migrating projects from one container to another is often problematic. Not so much with Infinispan. This article is about configuring Infinispan with a Transaction Manager for demarcating transaction boundaries, while keeping the data both in memory and in a relational database - stored via a JDBC cache store. I'll demonstrate all the features with code snippets.


A complete application is located at https://github.com/mgencur/infinispan-examples and is called carmart-tx-jdbc. It's a web application based on JSF 2, Seam 3 and Infinispan 5.1.4.FINAL, fully working and tested with JBoss Application Server 7.1.1.Final and Tomcat 7.0.27. There is one prerequisite, though: an installed and working MySQL database. The database should be called carmartdb and accessible by a user with carmart/carmart as the username/password.
 

First, look at what we need to configure for JBoss Application Server 7. 

Configuring transactions and JDBC cache store on JBoss AS 7

Infinispan will be configured via the new fluent API using builders, hence the call to the .build() method at the end. We need to configure the aspects related to transactions and cache loaders. The configuration API for cache loaders is likely going to change in the not-so-far future; it should become fluent, more intuitive and generally easier to use than the current one.

I purposely do not show the XML configuration. Configuration examples can be found at https://github.com/infinispan/infinispan/blob/master/core/src/main/resources/config-samples/sample.xml. In order to configure transactions and cache loaders, look for the tags called <transaction> and <loaders> and modify that sample file according to the configuration below. Tag and attribute names are very similar for both XML and Java configuration. If that is not enough, there is always a schema in the Infinispan distribution.

The configuration of Infinispan is as follows: 
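Something along these lines (a sketch only - the JDBC store property values, table/column names and the datasource JNDI name are illustrative, not copied from the project):

// imports from org.infinispan.configuration.cache, org.infinispan.transaction,
// org.infinispan.transaction.lookup and org.infinispan.loaders.jdbc.stringbased omitted
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.transaction()
       .transactionMode(TransactionMode.TRANSACTIONAL)
       .transactionManagerLookup(new GenericTransactionManagerLookup());
builder.loaders().addCacheLoader()
       .cacheLoader(new JdbcStringBasedCacheStore())
       // ManagedConnectionFactory looks up a container-managed datasource in JNDI
       .addProperty("connectionFactoryClass",
                    "org.infinispan.loaders.jdbc.connectionfactory.ManagedConnectionFactory")
       .addProperty("datasourceJndiLocation", "java:jboss/datasources/MySQLDS")
       // table/column definitions for the string-based JDBC store (illustrative values)
       .addProperty("stringsTableNamePrefix", "carmart")
       .addProperty("idColumnName", "id")
       .addProperty("idColumnType", "VARCHAR(255)")
       .addProperty("dataColumnName", "datum")
       .addProperty("dataColumnType", "BLOB")
       .addProperty("timestampColumnName", "version")
       .addProperty("timestampColumnType", "BIGINT");
Configuration config = builder.build();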



The settings that differ in other containers/configurations are the transaction manager lookup, the connection factory and the datasource location, as you'll see in a minute. The code above implies that we need to specify a proper TransactionManagerLookup implementation, which is, in this case, GenericTransactionManagerLookup. We also need to say: "Hey, I wanna use ManagedConnectionFactory as the connectionFactoryClass". OK, here we go. I should also explain how to configure a datasource properly, right? In JBoss AS 7, this is configured as a subsystem in $JBOSS_HOME/standalone/configuration/standalone.xml:
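A datasource definition for our MySQL database might look roughly like this (the JNDI name and driver name are illustrative):

<subsystem xmlns="urn:jboss:domain:datasources:1.0">
    <datasources>
        <datasource jndi-name="java:jboss/datasources/MySQLDS" pool-name="MySQLDS" enabled="true">
            <connection-url>jdbc:mysql://localhost:3306/carmartdb</connection-url>
            <!-- the driver name matches the MySQL connector jar deployed to standalone/deployments -->
            <driver>mysql-connector-java-5.1.20.jar</driver>
            <security>
                <user-name>carmart</user-name>
                <password>carmart</password>
            </security>
        </datasource>
    </datasources>
</subsystem>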


The usage of transactions is very simple as we can obtain a transaction object by injection.
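In a CDI/Seam bean it could look something like this (the Car entity, cache field and method are just illustrative names, not copied from the project):

@Inject
private UserTransaction utx;    // provided by the application server

public void addCar(Car car) throws Exception {
    utx.begin();
    try {
        // the entry goes to memory and, through the JDBC cache store, to MySQL
        carCache.put(car.getNumberPlate(), car);
        utx.commit();
    } catch (Exception e) {
        utx.rollback();
        throw e;
    }
}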


Sources: https://github.com/mgencur/infinispan-examples/blob/master/carmart-tx-jdbc/src/jbossas/java/org/infinispan/examples/carmart/session/CarManager.java

Quite easy, isn't it ...if you know how to do it. The only problem is that it does not work (at least not completely) :-) If you deploy the app, you'll find out that when storing a key-value pair in the cache, an exception is thrown. This exception indicates that the operation against the DB (and the JDBC cache store) failed. The exception says:


A complete stack trace looks similar to https://gist.github.com/2777348
There's still an open issue in JIRA (ISPN-604) and it is being worked on. 

Configuring transactions and JDBC cache store on JBoss AS 7 - c3p0

But how do we cope with this inconvenience for now? By not using a managed datasource but rather a third-party library called c3p0 (JDBC3 Connection and Statement Pooling, more information at http://www.mchange.com/projects/c3p0/index.html). Infinispan allows you to use this library for connecting to the database. If you want to use it, you need to choose a different connectionFactoryClass, which is, in this case, PooledConnectionFactory.

Infinispan configuration looks like this:
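Roughly like this - note the different connectionFactoryClass; the connection property names are my best guess at the 5.1 JDBC cache store properties and the values are illustrative:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.transaction()
       .transactionMode(TransactionMode.TRANSACTIONAL)
       .transactionManagerLookup(new GenericTransactionManagerLookup());
builder.loaders().addCacheLoader()
       .cacheLoader(new JdbcStringBasedCacheStore())
       // PooledConnectionFactory creates its own c3p0-backed connection pool
       .addProperty("connectionFactoryClass",
                    "org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory")
       .addProperty("connectionUrl", "jdbc:mysql://localhost:3306/carmartdb")
       .addProperty("driverClass", "com.mysql.jdbc.Driver")
       .addProperty("userName", "carmart")
       .addProperty("password", "carmart");
Configuration config = builder.build();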


Transactions are accessible in the same way as in the previous use case. Now let's look at configuration for Tomcat servlet container. 

Configuring transactions and JDBC cache store on Tomcat 7

Tomcat does not ship with a Transaction Manager, so we have to bundle one with the application. For the purpose of this exercise, we choose JBoss Transactions (http://www.jboss.org/jbosstm). See the dependencies at the end.

The cache manager and cache configuration take this form:
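A sketch of what changes on Tomcat - only two things differ from the JBoss AS 7 version: the transaction manager lookup (presumably JBossStandaloneJTAManagerLookup, which handles a standalone JBoss Transactions manager) and the JNDI location of the datasource; the remaining JDBC store properties stay the same:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.transaction()
       .transactionMode(TransactionMode.TRANSACTIONAL)
       // lookup for the standalone JBoss Transactions manager bundled with the app
       .transactionManagerLookup(new JBossStandaloneJTAManagerLookup());
builder.loaders().addCacheLoader()
       .cacheLoader(new JdbcStringBasedCacheStore())
       .addProperty("connectionFactoryClass",
                    "org.infinispan.loaders.jdbc.connectionfactory.ManagedConnectionFactory")
       // Tomcat exposes resources from context.xml under java:comp/env/...
       .addProperty("datasourceJndiLocation", "java:comp/env/jdbc/carmartdb");
DefaultCacheManager cacheManager = new DefaultCacheManager(builder.build());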



For Tomcat, we need to specify a different transactionManagerLookup implementation and datasourceJndiLocation. Tomcat simply places objects under slightly different JNDI locations. The datasource is defined in a context.xml file which has to be on the classpath. This file might look like this:
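Something like the following (the resource name and pool settings are illustrative):

<Context>
    <Resource name="jdbc/carmartdb"
              type="javax.sql.DataSource"
              auth="Container"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/carmartdb"
              username="carmart"
              password="carmart"
              maxActive="10"/>
</Context>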


How do we get the transaction manager in the application then? Let's obtain it directly from a cache.

Infinispan knows how to find the manager and we need to know how to obtain it from Infinispan.
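It is a one-liner on the AdvancedCache API; a minimal sketch of the demarcation then looks like this (hypothetical key/value):

TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
tm.begin();
cache.put("key", "value");
tm.commit();   // or tm.rollback() if something went wrong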



Sources: https://github.com/mgencur/infinispan-examples/blob/master/carmart-tx-jdbc/src/tomcat/java/org/infinispan/examples/carmart/session/CarManager.java

The transaction manager provides standard methods for transactions, such as begin(), commit() and rollback().

Now is the time for dependencies

So... which dependencies do we always need when using Infinispan with JDBC cache stores and transactions? These are infinispan-core, infinispan-cachestore-jdbc and javax.transaction.jta. The scope of the jta dependency, as defined in Maven, is different for JBoss AS and Tomcat.

Common dependencies for JBossAS and Tomcat
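Roughly (the version is the one this article was tested with):

<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
    <version>5.1.4.FINAL</version>
</dependency>
<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-cachestore-jdbc</artifactId>
    <version>5.1.4.FINAL</version>
</dependency>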



Of course, our application needs a few more dependencies, but these are not directly related to Infinispan, so let's ignore them in this article. JBoss AS 7 provides a managed datasource that is accessible from Infinispan. The only specific dependency (related to transactions or Infinispan) is JTA.

Dependencies specific to JBossAS - using managed Datasource (managed by the server)
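Only the JTA API, marked as provided so that it is not bundled into the WAR (the exact artifact coordinates may differ in the project):

<dependency>
    <groupId>javax.transaction</groupId>
    <artifactId>jta</artifactId>
    <version>1.1</version>
    <scope>provided</scope>
</dependency>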



Dependencies specific to JBossAS - using c3p0
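The c3p0 pool plus the MySQL connector, bundled with the application (versions are illustrative):

<dependency>
    <groupId>c3p0</groupId>
    <artifactId>c3p0</artifactId>
    <version>0.9.1.2</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.20</version>
</dependency>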


Yes, you also need to bundle the MySQL connector. On the other hand, for the Tomcat use case and for JBoss AS with a managed datasource, this jar file needs to be deployed to the server separately. For Tomcat, do this simply by copying the jar file to $TOMCAT_HOME/lib. For JBoss AS 7, copy the jar file into $JBOSS_HOME/standalone/deployments.

Dependencies specific to Tomcat - using JBoss Transactions
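The standalone JBoss Transactions manager plus the JTA API, this time in the default compile scope since Tomcat provides neither (coordinates and versions are my best guess, treat them as a sketch):

<dependency>
    <groupId>org.jboss.jbossts</groupId>
    <artifactId>jbossjta</artifactId>
    <version>4.16.2.Final</version>
</dependency>
<dependency>
    <groupId>javax.transaction</groupId>
    <artifactId>jta</artifactId>
    <version>1.1</version>
</dependency>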



That's it. I hope you've found this article helpful. Any feedback is welcome, especially the positive kind :-) If you find any problem with the application, feel free to comment here or participate in the Infinispan forums (http://www.jboss.org/infinispan/forums).

Martin

Monday, 23 April 2012

Back from Portugal with goodies

The trip started in Coimbra where Sanne Grinovero and I discussed Infinispan and OGM at the Portugal JUG. The interest was obvious from the amount of questions and the discussions that followed and ended up late in the night. A big thanks to Samuel Santos for organising this!
The trip then continued in Lisbon where we met the CloudTM team for two good days of hacking and brainstorming. The result is a set of awesome features the CloudTM team is about to contribute to Infinispan, especially around transactions:
  • Total Order Broadcast (TOB) - a transaction protocol for replicated caches built on atomic broadcast. It relies on JGroups' SEQUENCER protocol for achieving total order. The benchmarks comparing it with the current 2PC-based transaction protocol are rather promising and we hope to integrate it in our 5.2 release
  • Total Order Multicast (TOM) - simply put, TOB for distributed caches. This is next in the pipeline to be integrated after TOB. Again, preliminary performance comparisons look rather promising
  • The CloudTM team also extended Infinispan's transactions to support one-copy serializability (that's a fancy name for SERIALIZABLE transactions in replicated systems). The implementation is based on keeping multiple versions of the data for each entry, together with some housekeeping code. More on this to come!
Big thanks to the CloudTM team for their effort and all the great ideas (and patches!!) they submitted to Infinispan! It was a great meeting with very good results and burning whiteboards (see below)!
Cheers,
Mircea

Tuesday, 17 January 2012

Infinispan 5.1.0.CR4 is out!

Over the past week we've been busy profiling Infinispan. In particular, we've been trying to maximise the performance of Infinispan's transactional caches, which received a major overhaul in the 5.1 'Brahma' series. The result is that instead of going final, we've decided to cut another candidate release, Infinispan 5.1.0.CR4, which is now available for download from the usual place. We're confident that these improvements will be greatly appreciated by Infinispan consumers such as the Hibernate second-level cache and JBoss Application Server 7 HTTP session replication.

As part of the configuration and parser work, new configuration documentation has been generated, which you can now find at http://docs.jboss.org/infinispan/5.1/configdocs/.

As always, give this new candidate release a spin and let us know what you think. You can find detailed information on the issues fixed in our release notes.

Cheers,
Galder

Wednesday, 23 November 2011

More on transaction performance: use1PcForAutoCommitTransactions

What's use1PcForAutoCommitTransactions all about?



Don't be scared by the name: use1PcForAutoCommitTransactions is a new feature (5.1.CR1) that does quite a cool thing: it increases your transactions' performance.
Let me explain.
Before Infinispan 5.1 you could access the cache both transactionally and non-transactionally. Naturally, non-transactional access is faster and offers fewer consistency guarantees. But we don't support mixed access in Infinispan 5.1, so what's to be done when you need the speed of non-transactional access and you are ready to trade some consistency guarantees for it?
Well, here is where use1PcForAutoCommitTransactions helps you. What it does is force an induced (autoCommit=true) transaction to commit in a single phase. So only 1 RPC instead of 2 RPCs, as in the case of a full 2 Phase Commit (2PC).

At what cost?


You might end up with inconsistent data if multiple transactions modify the same key concurrently. But if you know that's not the case, or you can live with it, then use1PcForAutoCommitTransactions will help your performance considerably.

An example


Let's say you do a simple put operation outside the scope of a transaction:
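Something as simple as this (hypothetical key/value, cache being a transactional cache with autoCommit enabled):

cache.put("car", "Ford Mustang");   // no tm.begin()/commit() around it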


Now let's see how this would behave if the cache has several different transaction configurations:

Not using 1PC...
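A sketch of the cache configuration (builder method names as in the 5.1 API):

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.transaction()
       .transactionMode(TransactionMode.TRANSACTIONAL)
       .autoCommit(true)
       .use1PcForAutoCommitTransactions(false);   // the default: full 2PC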



The put will happen in two RPCs/steps: a prepare message is sent around and then a commit.

Using 1PC...
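The same sketch, with the flag switched on:

builder.transaction()
       .transactionMode(TransactionMode.TRANSACTIONAL)
       .autoCommit(true)
       .use1PcForAutoCommitTransactions(true);   // prepare and commit merged into a single RPC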



The put happens in one RPC as the prepare and the commit are merged. Better performance.

Not using autoCommit
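And the same sketch with autoCommit disabled:

builder.transaction()
       .transactionMode(TransactionMode.TRANSACTIONAL)
       .autoCommit(false);   // the bare put above is no longer wrapped in a transaction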



An exception is thrown, as this is a transactional cache and invocations must happen within the scope of a transaction.

Enjoy!
Mircea

Friday, 11 November 2011

Some worth mentioning improvements for pessimistic transactions

Pessimistic transactions were added in 5.1 and are the "rebranding" of eager transactions from previous Infinispan releases. But besides the rebranding, the code also brought some performance optimisations worth mentioning:
  • a single RPC happens for acquiring the lock on a key, regardless of the number of invocations. So if you call cache.put(k,v) in a loop, within the scope of the same transaction, there is only one remote call to the owner of k (see the sketch after this list).
  • if the key you want to lock/write maps to the local node then no remote locks are acquired. In other words there won't be any RPCs for writing to a key that maps locally. This can be very powerful when used in conjunction with the KeyAffinityService, as it allows you to control the locality of your keys.
  • during the two-phase commit (2PC), the prepare phase doesn't perform any RPCs: this optimisation is based on the fact that locks are already acquired on each write. This means that the number of RPCs during a transaction's lifespan is reduced by one.
  • for some writes to the cache (e.g. cache.put(k,v)) two RPCs were performed: one to acquire the remote lock and one to fetch the previous value. The obvious optimisation in this case was to make a single RPC for both operations - which we do starting with 5.1.
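To make the first point concrete, here's a small sketch (tm obtained from the cache as usual; the key and values are made up) - the lock RPC for k happens only once, on the first put:

tm.begin();
for (int i = 0; i < 100; i++) {
    cache.put(k, "value-" + i);   // only the first iteration acquires the remote lock on k
}
tm.commit();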
Enjoy!
Mircea

Thursday, 10 November 2011

Fewer deadlocks, higher throughput

Here's the problem: the first transaction (T1) writes to keys a and b, in this order. The second transaction (T2) writes to keys b and a - again, the order is relevant. Now, with some "right timing", T1 manages to acquire the lock on a and T2 acquires the lock on b. And then they wait for each other to release the locks so that they can progress. This is what is called a deadlock and it is really bad for your system's throughput - but I won't insist on this aspect, as I've mentioned it a lot in my previous posts.

What I want to talk about though is a way to solve this problem. Quite a simple way - just force an order on your transaction writes and you're guaranteed not to deadlock: if both T1 and T2 write to a then b (lexicographic order) there won't be any deadlock. Ever.
But there's a catch. It's not always possible to define this order, simply because you can't or because you don't know all your keys at the very beginning of the transaction.
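When you can control the order, a sketch of doing it by hand looks like this (keysToWrite and valueFor are placeholders):

// write the keys in a fixed (lexicographic) order inside every transaction
SortedSet<String> orderedKeys = new TreeSet<String>(keysToWrite);
tm.begin();
for (String key : orderedKeys) {
    cache.put(key, valueFor(key));
}
tm.commit();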

Now here's the good news: Infinispan orders the keys touched in a transaction for you. It even defines an order so that you won't have to. Actually, you don't have to do anything, not even enable this feature, as it is already enabled for you by default.
Does it sound too good to be true? That's because it's only partially true. That is, lock reordering only works if you're using optimistic locking. For pessimistic locking you still have to do it the old way - order your locks (that's of course if you can).

Wanna know more about it? Read this.

Expect and enjoy this feature in our next release 5.1.0.BETA5.

Stay tuned!
Mircea

Wednesday, 9 November 2011

Single lock owner: an important step forward

The single lock owner is a highly requested Infinispan improvement. The basic idea behind it is that, when writing to a key, locks are no longer acquired on all the nodes that own that key, but only on a single designated node (named "main owner").

How does it help me?


Short version: if you use transactions that concurrently write to the same keys, this improvement significantly increases your system's throughput.


Long version: If you're using Infinispan with transactions that modify the same key(s) concurrently then you can easily end up in a deadlock. A deadlock can also occur if two transactions modify the same key at the same time - which is both inefficient and counter-intuitive. Such a deadlock means that one transaction (or both) eventually rolls back, but also that the lock on the key is held for the duration of the lockAcquisitionTimeout config option (defaults to 10 seconds). These deadlocks reduce throughput significantly as transaction threads are held inactive during the deadlock time. On top of that, other transactions that want to operate on that key are also delayed, potentially resulting in a cascade effect.

What's the added performance penalty?


The only performance penalty we encountered is during cluster topology changes. At that point the cluster needs to perform some additional computation (no RPC involved) to fail over the acquired locks from the previous owners to the new ones.
Another noticeable aspect is that locks are now released asynchronously, after the transaction commits. This doesn't add any burden to the transaction duration, but it means that locks are held slightly longer. That's not something to be concerned about if you're not using transactions that compete for the same locks, though.
We plan to benchmark this feature using the Radargun benchmark tool - we'll report back!

Want to know more?


You can read the single lock owner design wiki and/or follow the JIRA discussions.

Tuesday, 4 October 2011

5.1.0.BETA1 "Brahma" is out with reworked transaction handling

It's been a frantic couple of weeks at chez Infinispan with loads of hacking, presentation preparations, team meetings etc., and we're now proud to release Infinispan 5.1.0.BETA1 "Brahma".

For this first beta release, the transaction layer has been redesigned as explained by Mircea in this blog post. This is a very important step in the process of implementing some key locking improvements, so we're very excited about this! Thanks Mircea :)

There's a bunch of other little improvements, such as avoiding the use of thread locals for cache operations with flags. As a result, optimisations like the following are now viable:
AdvancedCache cache = ...
Cache forceWLCache = cache.withFlags(Flag.FORCE_WRITE_LOCK);
forceWLCache.get("voo");
forceWLCache.put("voo", "doo");
...
Previously each cache invocation would have required withFlags() to be called, but now you only need to do it once and you can cache the "flagged" cache and reuse it.

Another interesting little improvement is available for JDBC cache store users. Basically, database tables can now be discovered within an implicit schema. So, if each user has a different schema, the tables will be created within their own space. This makes it easier to manage environments where the JDBC cache store is used by multiple caches at the same time because management is limited to adding a user per application, as opposed to adding a user plus prefixing table names. Thanks to Nicolas Filotto for bringing this up.

Please keep the feedback coming, and as always, you can download the release from here and you get further details on the issues addressed in the changelog.

Cheers,
Galder

Monday, 3 October 2011

Transaction remake in Infinispan 5.1

If you ever used Infinispan in a transactional way you might be very interested in this article as it describes some very significant improvements in version 5.1 "Brahma" (released with 5.1.Beta1):
  • starting with this release an Infinispan cache can be accessed either transactionally or non-transactionally. The mixed access mode is no longer supported (backward compatibility is still maintained, see below). There are several reasons for going down this path, but the most important result of this decision is cleaner semantics for how concurrency is managed between multiple requestors for the same cache entry.

  • starting with 5.1 the supported transaction models are optimistic and pessimistic. The optimistic model is an improvement over the existing default transaction model in that it completely defers lock acquisition to transaction prepare time. That reduces lock acquisition duration and increases throughput; it also avoids deadlocks. With the pessimistic model, cluster-wide locks are acquired on each write and only released after the transaction has completed (see below).


Transactional or non transactional cache?


It's up to you as a user to decide whether you want to define a cache as transactional or not. By default, Infinispan caches are non-transactional. A cache can be made transactional by changing the transactionMode attribute:
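In XML it's a one-attribute change on the <transaction> element (sketched against the 5.1 schema; cache name and lookup class are illustrative):

<namedCache name="cars">
    <transaction transactionMode="TRANSACTIONAL"
                 transactionManagerLookupClass="org.infinispan.transaction.lookup.GenericTransactionManagerLookup"/>
</namedCache>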

transactionMode can only take two values: TRANSACTIONAL and NON_TRANSACTIONAL. The same thing can also be achieved programmatically:
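For example, using the fluent ConfigurationBuilder (a sketch):

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.transaction().transactionMode(TransactionMode.TRANSACTIONAL);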

Important: for transactional caches it is required to configure a TransactionManagerLookup.

Backward compatibility


The autoCommit attribute was added in order to ensure backward compatibility. If a cache is transactional and autoCommit is enabled (it defaults to true) then any call that is performed outside of a transaction's scope is transparently wrapped within a transaction. In other words, Infinispan adds the logic for starting a transaction before the call and committing it after the call.

So if your code accesses a cache both transactionally and non-transactionally, all you have to do when migrating to Infinispan 5.1 is mark the cache as transactional and enable autoCommit (that's actually enabled by default, so just don't disable it :)

The autoCommit feature can be managed through configuration:
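Sketched against the 5.1 schema:

<transaction transactionMode="TRANSACTIONAL" autoCommit="true"/>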

or programmatically:
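For instance (same builder as above):

builder.transaction()
       .transactionMode(TransactionMode.TRANSACTIONAL)
       .autoCommit(true);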


Optimistic Transactions


With optimistic transactions, locks are acquired at transaction prepare time and are only held up to the point the transaction commits (or rolls back). This is different from the 5.0 default locking model where local locks are acquired on writes and cluster locks are acquired during prepare time.

Optimistic transactions can be enabled in the configuration file:
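Sketched against the 5.1 schema (lockingMode is the relevant attribute):

<transaction transactionMode="TRANSACTIONAL" lockingMode="OPTIMISTIC"/>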

or programmatically:
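For instance:

builder.transaction()
       .transactionMode(TransactionMode.TRANSACTIONAL)
       .lockingMode(LockingMode.OPTIMISTIC);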

By default, a transactional cache is optimistic.

Pessimistic Transactions


From a lock acquisition perspective, pessimistic transactions obtain locks on keys at the time the key is written. E.g.
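A small sketch (tm is the TransactionManager, k1/v1 and k2/v2 are placeholders):

tm.begin();
cache.put(k1, v1);   // k1 is locked cluster-wide from this point on
cache.put(k2, v2);   // ...and now k2 as well
tm.commit();         // all locks are released here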

When cache.put(k1,v1) returns, k1 is locked and no other transaction running anywhere in the cluster can write to it. Reading k1 is still possible. The lock on k1 is released when the transaction completes (commits or rolls back).

Pessimistic transactions can be enabled in the configuration file:
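Sketched against the 5.1 schema:

<transaction transactionMode="TRANSACTIONAL" lockingMode="PESSIMISTIC"/>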

or programmatically:
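For instance:

builder.transaction()
       .transactionMode(TransactionMode.TRANSACTIONAL)
       .lockingMode(LockingMode.PESSIMISTIC);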


What do I need - pessimistic or optimistic transactions?


From a use case perspective, optimistic transactions should be used when there's not a lot of contention between multiple transactions running at the same time. That is because optimistic transactions roll back if data has changed between the time it was read and the time it was committed (writeSkewCheck).

On the other hand, pessimistic transactions might be a better fit when there is high contention on the keys and transaction rollbacks are less desirable. Pessimistic transactions are more costly by their nature: each write operation potentially involves an RPC for lock acquisition.

The path ahead


This major transaction rework has opened the way for several other transaction related improvements:

  • The single node locking model is a major step forward in avoiding deadlocks and increasing throughput, by only acquiring locks on a single node in the cluster, regardless of the number of redundant copies (numOwners) on which data is replicated

  • Lock acquisition reordering is a deadlock avoidance technique that will be used for optimistic transactions

  • Incremental locking is another technique for minimising deadlocks.




Stay tuned!
Mircea

Wednesday, 10 August 2011

Transactions enhancements in 5.0

Besides other cool features such as Map reduce and distributed executors, Infinispan 5.0.0 "Pagoa" brings some significant improvements around transactional functionality:
  • transaction recovery is now supported, with a set of tools that allow state reconciliation in case the transaction fails during the 2nd phase of 2PC. This is especially useful in the case of transactions spanning Infinispan and another resource manager, e.g. a database (distributed transactions). You can find out more on how to enable and use transaction recovery here.
  • Synchronization enlistment is another important feature in this release. This allows Infinispan to enlist in a transaction as a Synchronization rather than an XAResource. This enlistment allows the TransactionManager to optimize 2PC with a 1PC when only one other resource is enlisted with that transaction (the last resource commit optimization). This is particularly important when using Infinispan as a 2nd level cache in Hibernate. You can read more about this feature here, and see the configuration sketch after this list.
  • besides that, several bugs were fixed, particularly when it comes to the integration with a transaction manager - a BIG thanks to the community for reporting and testing them!
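As a rough illustration of the second point, enlisting as a Synchronization is a single switch in the cache's transaction configuration (attribute name as I recall it from the 5.0 schema, so treat this as a sketch):

<transaction transactionManagerLookupClass="org.infinispan.transaction.lookup.GenericTransactionManagerLookup"
             useSynchronization="true"/>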
To summarise, Infinispan can participate in a transaction in 3 ways:
  1. as a fully fledged XAResource that supports recovery
  2. as an XAResource, but without recovery. This is the default configuration
  3. and as a Synchronization
In order to analyze the performance of running Infinispan in different transactional modes I've enhanced and used Radargun. The diagram below shows a performance comparison between running Infinispan in all the 3 modes described. The fourth plot in the chart shows the performance of running Infinispan without transactions - this gives an idea of the cost of using transactions vs. raw operations.



The benchmark was run on this Radargun configuration, using Infinispan 5.0.0.CR5 configured as shown here. JBossTS 4.15.0.FINAL was used as the TransactionManager, configured with a VolatileStore as shown here. Each node was a 4-core Intel(R) Xeon(R) CPU E5640 @ 2.67GHz, with 4GB RAM.
Each transaction spanned only one put operation. The chart shows the following:
  • a non-transactional put is about 40% faster than a transactional one
  • Synchronization-enlisted transactions outperform XAResource-enlisted ones by about 20%
  • A recoverable cache has about the same performance as a non-recoverable cache when it comes to transactions.
And that's not all! During Infinispan 5.0.0 development we've been thinking a lot about how we can improve transactional throughput, especially in scenarios in which multiple transactions are writing to the same key. As a result we've come up with some improvement suggestions summarised here: please feel free to take a look and comment!

Cheers,
Mircea

Tuesday, 7 June 2011

Faster Infinispan-based Second Level Cache coming to a JPA app near you!

Starting with Hibernate 4.0.0.Beta1, the Infinispan second-level cache interacts with the transaction manager as a JTA Synchronization instead of an XA resource. Based on some testing we've done in JBoss AS 7, this has resulted in a huge performance increase, thanks to the optimisations the transaction manager can apply to Synchronizations, which work very well when Infinispan is used as a cache rather than as an authoritative data store.

From an Infinispan configuration perspective, nothing needs changing. The Infinispan provider still uses the same base configuration by default. Transactional configuration happens within the cache provider itself, and it's here that Infinispan is configured with Hibernate's transaction manager and set up to participate as a JTA Synchronization. This is the default configuration, so from a user perspective, there's nothing you have to do to take advantage of this new change.

However, you can always switch back to the previous behaviour, where Infinispan interacted as an XA resource, via a dedicated Hibernate property called hibernate.cache.infinispan.use_synchronization.

By default this property is set to true. If you set it to false, Infinispan will interact with the transaction manager as an XA resource.
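For example, when passing Hibernate properties programmatically, switching back might look like this (a sketch; setting it in your persistence/Hibernate configuration file works just as well):

// revert the Infinispan second-level cache to XA enlistment
Properties props = new Properties();
props.setProperty("hibernate.cache.infinispan.use_synchronization", "false");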

For more detailed information, check the "Using Infinispan As JPA Hibernate Second Level Cache Provider" wiki.

Cheers,
Galder

Monday, 27 July 2009

Increase transactional throughput with deadlock detection

Deadlock detection is a new feature in Infinispan. It is about increasing the number of transactions that can be concurrently processed. Let's start with the problem first (the deadlock), then discuss some design details and performance.

So, the by-the-book deadlock example is the following:
  • Transaction one (T1) performs the following operation sequence: (write key_1, write key_2)
  • Transaction two (T2) performs the following sequence: (write key_2, write key_1).
Now, if T1 and T2 happen at the same time and both have executed their first operation, then they will wait for each other virtually forever to release the locks they own on the keys. In the real world, the waiting period is defined by a lock acquisition timeout (LAT) - which defaults to 10 seconds - that allows the system to overcome such scenarios and respond to the user one way (success) or the other (failure): after a period of LAT, one transaction (or both) will roll back, allowing the other to continue working.

Deadlocks are bad for both the system's throughput and the user experience. System throughput is affected because during the deadlock period (which might extend up to the LAT) no other thread will be able to update either key_1 or key_2. Even worse, access to any other keys that were modified by T1 or T2 will be similarly restricted. The user experience is affected by the fact that the call(s) will freeze for the entire deadlock period, and there's also a chance that both T1 and T2 will roll back by timing out.

As a side note, in the previous example, if the code running the transactions could enforce some sort of ordering on the keys accessed within the transaction, then the deadlock would be avoided. E.g. if the application code ordered the operations based on the lexicographic ordering of the keys, both T1 and T2 would execute the sequence (write key_1, write key_2), and no deadlock would result. This is a best practice and should be followed whenever possible.
Enough with the theory! The way Infinispan performs deadlock detection is based on an algorithm designed by Jason Greene and Manik Surtani, which is detailed here. The basic idea is to split the LAT into smaller cycles, as follows:

lock(long lockAcquisitionTimeout) {
   long startTime = currentTime();
   while (currentTime() < startTime + lockAcquisitionTimeout) {
      // try to acquire the lock for a small fraction of the overall timeout
      if (acquire(smallTimeout)) break;
      // check whether another transaction both holds this key and waits on one of ours
      testForDeadlock(globalTransaction, key);
   }
}

What testForDeadlock(globalTransaction, key) does is check whether there is another transaction that satisfies both of these conditions:
  1. it holds a lock on key, and
  2. it intends to lock a key that is currently locked by this transaction.
If such a transaction is found then this is a deadlock, and one of the running transactions will be interrupted; the decision of which transaction to interrupt is based on a coin toss, a random number that is associated with each transaction. This ensures that only one transaction rolls back and that the decision is deterministic: nodes and transactions do not need to communicate with each other to determine the outcome.

Deadlock detection in Infinispan comes in two flavors: detecting deadlocks in transactions that span several caches, and deadlock detection in transactions running on a single (local) cache.
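Enabling it is a one-liner in the cache configuration; something along these lines (element and attribute names as I recall them from the XML schema of that time, so treat this as a sketch):

<namedCache name="transactionalCache">
    <deadlockDetection enabled="true" spinDuration="1000"/>
</namedCache>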

Let's see some performance figures as well. A class for benchmarking the performance of the deadlock detection functionality was created and can be seen here. Test description (from the javadoc):

We use a fixed-size pool of keys (KEY_POOL_SIZE) on which each transaction operates. A number of threads (THREAD_COUNT) repeatedly start transactions and try to acquire locks on a random subset of this pool, by executing put operations on each key. If all locks were successfully acquired then the tx tries to commit: only if it succeeds is this tx counted as successful. The number of elements in this subset is the transaction size (TX_SIZE). The greater the transaction size, the higher the chance of a deadlock occurring. On each thread these transactions are repeatedly executed (each time on a different, random key set) for a given time interval (BENCHMARK_DURATION). At the end, the number of successful transactions from each thread is accumulated, and this defines the throughput (successful tx) per time unit (by default one minute).

Disclaimer: The following figures are for a scenario especially designed to force very high contention. This is not typical, and you shouldn't expect to see this level of performance increase for applications with lower contention (which is most likely the case). Please feel free to tune the above benchmark class to fit the contention level of your application; sharing your experience would be very useful!

The following diagram shows the performance degradation resulting from running the deadlock detection code by itself, in a scenario where no contention/deadlocks are present.
Some clues on when to enable deadlock detection: a high number of transactions rolling back due to org.infinispan.util.concurrent.TimeoutException is an indicator that this functionality might help. TimeoutException might have other causes as well, but deadlocks will always result in this exception being thrown. Generally, when you have high contention on a set of keys, deadlock detection may help. But the best way is not to guess at the performance improvement but to benchmark and monitor it: you can access statistics (e.g. the number of deadlocks detected) through JMX, as they are exposed via the DeadlockDetectingLockManager MBean.