
Sunday, 10 September 2017

Multi-tenancy - Infinispan as a Service (also on OpenShift)

In recent years the Software as a Service concept has gained a lot of traction. I'm pretty sure you've used it many times before. Let's take a look at a practical example and explain what's going on behind the scenes.

Practical example - photo album application

Imagine a very simple photo album application hosted in the cloud. On first use you are asked to create an account. Once you sign up, a new tenant is created for you in the application, with all the necessary details and some dedicated storage just for you. From this point on you can use the album to upload and download photos.

The software provider that created the photo album application can also celebrate: they have a new client! But with a new client the system needs to increase its capacity to ensure it can store all those lovely photos. There are other concerns as well: how do we prevent photos and other data leaking from one account into another? And finally, since all the content will be transferred over the Internet, how do we secure the transmission?

As you can see, multi-tenancy is not as easy as it might seem. The good news is that, properly configured and secured, it can benefit both the client and the software provider.

Multi-tenancy in Infinispan

Let's think again about our photo album application for a moment. Whenever a new client signs up, we need to create a new account for them and dedicate some storage. Translated into Infinispan concepts, this means creating a new CacheContainer. Within a CacheContainer we can create multiple Caches for user details, metadata and photos.

You might wonder why creating a new Cache is not sufficient. It turns out that when a Hot Rod client connects to a cluster, it connects to a CacheContainer exposed via a Hot Rod endpoint, and such a client has access to all Caches in that container. In our example, your friends could potentially see your photos. That's definitely not good! So we need to create a CacheContainer per tenant.

Before we introduced multi-tenancy, you could only expose each CacheContainer on a separate port (using a separate Hot Rod endpoint for each of them). In many scenarios this is impractical because of the proliferation of ports. For this reason we introduced the Router concept: it allows multiple clients to access their own CacheContainers through a single endpoint and prevents them from accessing data that doesn't belong to them. The final piece of the puzzle is transmitting sensitive data over an unsecured channel such as the Internet; TLS encryption solves this problem. The final outcome should look like the following:


The Router component on the diagram above inspects incoming traffic, recognizes which client it belongs to, and reroutes it to the appropriate underlying CacheContainer. To do this it can use two different strategies depending on the protocol: TLS/SNI for the Hot Rod protocol (matching each server certificate to a specific cache container), and path prefixes for REST.
The SNI strategy detects the SNI host name (which is used as the tenant name) and also requires TLS certificates to match; by creating proper trust stores we control which tenants can access which CacheContainers.
The URL path prefix strategy is very easy to understand, but it is also less secure unless you enable authentication. For this reason it should not be used in production unless you know what you are doing (an SNI strategy for the REST endpoint will be implemented in the near future). Each client has its own unique REST path prefix that must be used to access the data (e.g. http://127.0.0.1:8080/rest/client1/fotos/2).

Confused? Let's clarify this with an example.

Foto application sample configuration

The first step is to generate proper key/trust stores for the server and client:
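
A minimal sketch using the JDK keytool (the store names, passwords and the CN below are assumptions; the important detail is that the certificate's CN matches the TLS/SNI host name, client-1 in this example), repeated analogously for each tenant:

    # Generate the server key pair for tenant client-1; the CN must match the TLS/SNI host name
    keytool -genkeypair -alias client-1 -keyalg RSA -keysize 2048 \
            -dname "CN=client-1" -keystore client_1_server_keystore.jks \
            -storepass secret -keypass secret

    # Export the certificate and import it into the trust store used by the client
    keytool -exportcert -alias client-1 -keystore client_1_server_keystore.jks \
            -storepass secret -file client-1.cer
    keytool -importcert -alias client-1 -file client-1.cer \
            -keystore client_1_truststore.jks -storepass secret -noprompt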


The next step is to configure the server. The snippet below shows only the most important parts:
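
A trimmed-down sketch of what that configuration might look like, based on the Infinispan 9 server schema (realm, store and container names follow the example above; verify the element names against your server version):

    <security-realm name="SSLRealm1">
       <server-identities>
          <ssl>
             <keystore path="client_1_server_keystore.jks"
                       relative-to="jboss.server.config.dir"
                       keystore-password="secret"/>
          </ssl>
       </server-identities>
    </security-realm>

    <!-- One CacheContainer per tenant -->
    <cache-container name="multi-tenancy-1"> ... </cache-container>
    <cache-container name="multi-tenancy-2"> ... </cache-container>

    <!-- Hot Rod connectors without socket bindings, each mapped to a tenant's CacheContainer -->
    <hotrod-connector name="multi-tenant-hotrod-1" cache-container="multi-tenancy-1"/>
    <hotrod-connector name="multi-tenant-hotrod-2" cache-container="multi-tenancy-2"/>

    <!-- The router binds to the default Hot Rod and REST ports -->
    <router-connector hotrod-socket-binding="hotrod" rest-socket-binding="rest">
       <hotrod name="multi-tenant-hotrod-1">
          <sni host-name="client-1" security-realm="SSLRealm1"/>
       </hotrod>
       <hotrod name="multi-tenant-hotrod-2">
          <sni host-name="client-2" security-realm="SSLRealm2"/>
       </hotrod>
    </router-connector>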


Let's analyze the most critical parts:
  • The generated key stores need to be added to the server's security realm identities.
  • It is highly recommended to use a separate CacheContainer per tenant.
  • A Hot Rod connector without a socket binding is required to provide the mapping to a CacheContainer. You can also apply many useful settings at this level (like ignored caches or authentication).
  • The router definition binds to the default Hot Rod and REST ports.
  • The most important bit is the SNI mapping, which states that only a client using SSLRealm1 (backed by the trust store corresponding to client_1_server_keystore.jks) and TLS/SNI host name client-1 can access the Hot Rod endpoint named multi-tenant-hotrod-1 (which points to CacheContainer multi-tenancy-1).
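
On the client side, a Java Hot Rod client then connects with TLS/SNI enabled. A minimal sketch, assuming the trust store generated earlier:

    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    ConfigurationBuilder builder = new ConfigurationBuilder();
    builder.addServer().host("127.0.0.1").port(11222)
           .security().ssl().enable()
              // The SNI host name identifies the tenant to the router
              .sniHostName("client-1")
              .trustStoreFileName("client_1_truststore.jks")
              .trustStorePassword("secret".toCharArray());
    RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());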

Improving the application by using OpenShift

Hint: You might be interested in looking at our previous blog posts about hosting Infinispan on OpenShift. You may find them at the bottom of the page.

So far we've learned how to create and configure a new CacheContainer per tenant. But we also need to remember that system capacity has to grow with each new tenant. OpenShift is a perfect tool for scaling the system up and down. The configuration we created in the previous step almost matches our needs, but requires some tuning.

As we mentioned earlier, we need to encrypt the transport between the client and the server. The main consequence is that the OpenShift Router will not be able to inspect the traffic and make routing decisions based on it. A passthrough Route fits perfectly in this scenario, but it requires the TLS/SNI host names to be Fully Qualified Application Names. So if you start OpenShift locally (using oc cluster up), the tenant names will look like the following: client-1-fotoalbum.192.168.0.17.nip.io
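
A hypothetical passthrough Route for the first tenant could then be declared as follows (the service name fotoalbum is an assumption):

    apiVersion: v1
    kind: Route
    metadata:
      name: client-1-fotoalbum
    spec:
      host: client-1-fotoalbum.192.168.0.17.nip.io
      tls:
        termination: passthrough   # the OpenShift Router passes TLS through untouched
      to:
        kind: Service
        name: fotoalbum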

We also need to think about how to store the generated key stores. The easiest way is to use Secrets:
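
For example (the secret and file names are illustrative):

    oc create secret generic keystores \
       --from-file=client_1_server_keystore.jks \
       --from-file=client_2_server_keystore.jks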


Finally, a full DeploymentConfiguration:
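
A trimmed-down sketch of such a DeploymentConfiguration, mounting the Secret into the server pod (the image, names and paths are assumptions):

    apiVersion: v1
    kind: DeploymentConfig
    metadata:
      name: fotoalbum
    spec:
      replicas: 1
      selector:
        app: fotoalbum
      template:
        metadata:
          labels:
            app: fotoalbum
        spec:
          containers:
          - name: infinispan-server
            image: jboss/infinispan-server:9.1.0.Final
            volumeMounts:
            - name: keystores
              mountPath: /keystores    # referenced by the server configuration
              readOnly: true
          volumes:
          - name: keystores
            secret:
              secretName: keystores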



If you're interested in playing with the demo yourself, you might find a working example here. It mainly targets OpenShift, but the concept and configuration are also applicable to local deployments.

Links

Monday, 19 June 2017

Cache operations impersonation: do as I say (or maybe as she says)

The implementation of cache authorization in Infinispan has traditionally followed the JAAS model of wrapping calls in a PrivilegedAction invoked through Subject.doAs(). This led to the following cumbersome pattern:
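
A minimal sketch of that pattern, assuming a cache and an authenticated Subject are already in scope:

    import java.security.PrivilegedExceptionAction;
    import javax.security.auth.Subject;

    // Every secured invocation has to be wrapped in a privileged action
    Subject.doAs(subject, new PrivilegedExceptionAction<Void>() {
       @Override
       public Void run() throws Exception {
          cache.put("key", "value");
          return null;
       }
    });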


We also provided an implementation which, instead of relying on enabling the SecurityManager, could use a lighter and faster ThreadLocal for storing the Subject:
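
Sketched with Infinispan's org.infinispan.security.Security helper, the call site looks the same; only the entry point changes:

    import java.security.PrivilegedExceptionAction;
    import org.infinispan.security.Security;

    // Security.doAs stores the Subject in a ThreadLocal instead of relying on a SecurityManager
    Security.doAs(subject, new PrivilegedExceptionAction<Void>() {
       @Override
       public Void run() throws Exception {
          cache.put("key", "value");
          return null;
       }
    });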


While this solves the performance issue, it still leads to unreadable code.
This is why, in Infinispan 9.1, we have introduced a new way to perform authorization on caches:
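
A sketch of the new style, which simply decorates the cache with the Subject via AdvancedCache.withSubject:

    // All operations on the returned cache are performed as the given Subject
    cache.getAdvancedCache().withSubject(subject).put("key", "value");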


Obviously, for multiple invocations, you can hold on to the "impersonated" cache and reuse it:
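
For example:

    // Keep the decorated cache around and perform every operation as the given Subject
    AdvancedCache<String, String> asUser = cache.getAdvancedCache().withSubject(subject);
    asUser.put("photo-1", "...");
    asUser.put("photo-2", "...");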

We hope this will make your life simpler and your code more readable!

Thursday, 23 February 2017

Node.js client 0.4.0 released with encryption and cross-site failover

We've just released Infinispan Node.js Client version 0.4.0 which comes with encrypted client connectivity via SSL/TLS (with optional TLS/SNI support), as well as cross-site client failover.

Thanks to the encryption integration, Node.js Hot Rod clients can talk to Hot Rod servers via an encrypted channel, allowing trusted and/or authenticated clients to connect. Check the documentation for information on how to enable encryption in the Node.js Hot Rod client.

Also, we've added the possibility for the client to connect to multiple clusters. Normally the client is connected to a single cluster, but if all nodes fail to respond, the client can fail over to a different cluster, as long as one or more initial addresses have been provided for it. On top of that, clients can manually switch clusters using the switchToCluster and switchToDefaultCluster APIs. Check the documentation for more info.
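
An illustrative sketch of both features together (the option names follow the client documentation of this era and should be double-checked against your version):

    var infinispan = require('infinispan');

    infinispan.client({port: 11222, host: '127.0.0.1'}, {
      // TLS/SSL options for the encrypted channel
      ssl: {enabled: true, trustCerts: ['out/ssl/ca/ca.pem']},
      // Alternative cluster available for cross-site failover
      clusters: [{name: 'NYC', servers: [{port: 11322, host: '127.0.0.1'}]}]
    }).then(function (client) {
      return client.put('key', 'value')
        .then(function () { return client.switchToCluster('NYC'); })    // manual switch
        .then(function () { return client.switchToDefaultCluster(); })  // back to the original cluster
        .then(function () { return client.disconnect(); });
    });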

On top of that, we've applied several bug fixes that further tighten the inner workings of the Node.js client.

If you're a Node.js user and want to store data remotely in Infinispan Server instances, please give the client a go and tell us what you think of it via our forum, via our issue tracker or via IRC on the #infinispan channel on Freenode.

Wednesday, 14 May 2014

Infinispan 7.0.0.Alpha4 is out!

Dear Community,

It is our pleasure to announce the Alpha4 release of Infinispan 7.0.0.

The release highlights are:

* HotRod protocol now supports authorization and the SKIP_CACHE_LOAD flag;
* Distributed entry iterator, which allows iterating over all entries in the cluster;
* Object filtering and preview using query DSL;
* Apache Lucene 4.8.0 is now supported and JGroups was upgraded to 3.5.0.Beta5;
* Multiple improvements and bug fixes! 

For a complete list of features and bug fixes included in this release please refer to the release notes. Visit our downloads section to find the latest release.

If you have any questions please check our forums, our mailing lists or ping us directly on IRC.

Cheers,
The Infinispan team.

Friday, 11 April 2014

Infinispan 7.0.0.Alpha3 is out!

Hi,
 
The Alpha3 release of Infinispan 7.0.0 is now available.


Highlights:

  • authorization at both CacheManager and Cache levels
  • some important enhancements to Map/Reduce usability, like the ability to use an intermediate cache during Map/Reduce execution and for storing the final results of Map/Reduce tasks
  • a much welcomed revamp of the Infinispan embedded configuration, which has been aligned with the server

For a complete list of features and bug fixes included in this release please refer to the release notes. Visit our downloads section to find the latest release.

If you have any questions please check our forums, our mailing lists or ping us directly on IRC.

Cheers,
Mircea