Thursday, 18 August 2016

Running Infinispan cluster on Kubernetes

In the previous post we looked at how to run Infinispan on OpenShift. Today our goal is exactly the same, but this time we'll focus on Kubernetes.

Running Infinispan on Kubernetes requires using a proper discovery protocol. This blog post uses Kubernetes PING, but it's also possible to use a Gossip Router.

Our goal

We'd like to build an Infinispan cluster on Kubernetes hosted locally (using Minikube). We will expose a service and route it to our local machine. Finally, we will use it to put data into the grid.




Spinning up a local Kubernetes cluster

There are many ways to spin up a local Kubernetes cluster, and one of my favorites is Minikube. First you will need the 'minikube' binary, which can be downloaded from its GitHub releases page. I usually copy it into '/usr/bin', which makes it very convenient to use. The next step is to download the 'kubectl' binary; I usually use the Kubernetes GitHub releases page for this. The 'kubectl' binary is stored inside the release archive under 'kubernetes/platforms/<your_platform>/<your_architecture>/kubectl'. I'm using linux/amd64 since I'm running Fedora F23. I copy that binary to '/usr/bin' as well.

We are ready to spin up Kubernetes:

$ minikube start
Starting local Kubernetes cluster...
Kubernetes is available at https://192.168.99.100:8443.
Kubectl is now configured to use the cluster.

Deploying Infinispan cluster

This time we'll focus on automation, so there will be no 'kubectl edit' commands. Below is the yaml file which creates all the necessary components in the Kubernetes cluster:

apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations: {}
    labels:
      run: infinispan-server
    name: infinispan-server
    namespace: default
  spec:
    replicas: 3
    selector:
      matchLabels:
        run: infinispan-server
    template:
      metadata:
        labels:
          run: infinispan-server
      spec:
        containers:
        - args:
          - cloud
          - -Djboss.default.jgroups.stack=kubernetes
          env:
          - name: OPENSHIFT_KUBE_PING_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          image: jboss/infinispan-server:9.0.0.Alpha4
          name: infinispan-server
          ports:
          - containerPort: 8080
            protocol: TCP
          - containerPort: 8181
            protocol: TCP
          - containerPort: 8888
            protocol: TCP
          - containerPort: 9990
            protocol: TCP
          - containerPort: 11211
            protocol: TCP
          - containerPort: 11222
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      run: infinispan-server
    name: infinispan-server
  spec:
    ports:
    - name: rest
      nodePort: 32348
      port: 8080
      protocol: TCP
      targetPort: 8080
    selector:
      run: infinispan-server
    sessionAffinity: None
    type: NodePort
  status:
    loadBalancer: {}
kind: List
metadata: {}
  • We added additional arguments ('cloud -Djboss.default.jgroups.stack=kubernetes') to the bootstrap script
  • We used the Downward API to pass the current namespace to Infinispan (the OPENSHIFT_KUBE_PING_NAMESPACE variable)
  • We defined all the ports used by the Pod
  • We created a Service for port 8080 (the REST interface)
  • We used the NodePort service type, which we will expose via Minikube in the next paragraph

Save it somewhere on disk and execute the 'kubectl create' command:

$ kubectl create -f infinispan.yaml
deployment "infinispan-server" created
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32348) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "infinispan-server" created
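Before exposing the service, it's worth confirming that all three replicas actually reached the Running state. Below is a small, optional Python sketch (the 'running_pods' helper is my own name, not part of any Infinispan tooling) that counts matching pods in the JSON printed by 'kubectl get pods -o json':

```python
import json

def running_pods(kubectl_json, label_value="infinispan-server"):
    """Count pods labelled run=<label_value> that have reached the Running phase.

    Expects the JSON document printed by `kubectl get pods -o json`.
    """
    pods = json.loads(kubectl_json)["items"]
    return sum(
        1
        for pod in pods
        if pod["metadata"].get("labels", {}).get("run") == label_value
        and pod["status"].get("phase") == "Running"
    )
```

Feed it the output of 'kubectl get pods -o json'; it should report 3 once the Deployment settles.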

Exposing the service port

One of Minikube's limitations is that it can't use the Ingress API to expose services to the outside world. Thankfully there's another way - the NodePort service type. With this simple trick we will be able to access the service at '<minikube_ip>:<node_port_number>'. The port number was specified in the yaml file (we could have left it blank and let Kubernetes assign a random one). The node port can easily be checked using the following command:

$ kubectl get service infinispan-server --output='jsonpath="{.spec.ports[0].nodePort}"'
"32348"
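jsonpath expressions are easy to get wrong (note the lower-case 'n' in 'nodePort'), so here's an alternative sketch that digs the same value out of 'kubectl get service infinispan-server -o json'. The 'node_port' helper is just an illustrative name of mine:

```python
import json

def node_port(service_json, port_name="rest"):
    """Return the nodePort of the named port from `kubectl get service -o json` output."""
    for port in json.loads(service_json)["spec"]["ports"]:
        if port.get("name") == port_name:
            return port["nodePort"]
    raise KeyError(f"no port named {port_name!r}")
```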

In order to obtain the Kubernetes node IP, use the following command:

$ minikube ip
192.168.99.100

Testing the setup

Testing is quite simple and the only thing to remember is to use the proper address - <minikube_ip>:<node_port>:

$ curl -X POST -H 'Content-type: text/plain' -d 'test' 192.168.99.100:32348/rest/default/test
$ curl 192.168.99.100:32348/rest/default/test
test
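The same PUT/GET round-trip can of course be done from code. Here's a minimal Python sketch of a client for the REST endpoint, assuming the Minikube IP and node port obtained above (the helper names are mine):

```python
import urllib.request

def entry_url(host, port, cache, key):
    """Infinispan REST URL for a single cache entry: /rest/<cache>/<key>."""
    return f"http://{host}:{port}/rest/{cache}/{key}"

def put(host, port, cache, key, value):
    """Store a plain-text value under the given key; returns the HTTP status code."""
    req = urllib.request.Request(
        entry_url(host, port, cache, key),
        data=value.encode("utf-8"),
        headers={"Content-type": "text/plain"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

def get(host, port, cache, key):
    """Read the value stored under the given key as text."""
    with urllib.request.urlopen(entry_url(host, port, cache, key)) as resp:
        return resp.read().decode("utf-8")
```

Calling put('192.168.99.100', 32348, 'default', 'test', 'test') and then get(...) with the same coordinates mirrors the two curl commands above.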

Clean up

Minikube has an all-in-one command to do the clean-up:

$ minikube delete
Deleting local Kubernetes cluster...
Machine deleted.

Conclusion

The Kubernetes setup is almost identical to the OpenShift one, but there are a couple of differences to keep in mind:
  • OpenShift's DeploymentConfigurations are similar to Kubernetes Deployments with ReplicaSets
  • OpenShift's Services work the same way as Kubernetes Services
  • OpenShift's Routes are similar to Kubernetes Ingresses
Happy scaling and don't forget to check if Infinispan formed a cluster (hint - look into the previous post).

Friday, 12 August 2016

Infinispan Cloud Cachestore 8.0.1.Final

After bringing the MongoDB cache store up-to-date a few days ago, this time it's the turn of the Cloud Cache Store, our JClouds-based store which allows you to use any of the JClouds BlobStore providers to persist your cache data. This includes AWS S3, Google Cloud Storage, Azure Blob Storage and Rackspace Cloud Files.
In a perfect world this would have been 8.0.0.Final, but Sod's law rules, so I give you 8.0.1.Final instead :) So head on over to our store download page and try it out.

The actual configuration of the cachestore depends on the provider, so refer to the JClouds documentation. The following is a programmatic example using the "transient" provider:
 
// Requires the ConfigurationBuilder from infinispan-core and the
// CloudStoreConfigurationBuilder shipped with the cloud cachestore module.
ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence().addStore(CloudStoreConfigurationBuilder.class)
      .provider("transient")
      .location("test-location")
      .identity("me")
      .credential("s3cr3t")
      .endpoint("http://test.endpoint")
      .compress(true);

And this is how you'd configure it declaratively:
<cache-container default-cache="default">
   <local-cache name="default">
      <persistence passivation="false">
         <cloud-store xmlns="urn:infinispan:config:store:cloud:8.0"
                      provider="transient"
                      location="test-location"
                      identity="me"
                      credential="s3cr3t"
                      container="test-container"
                      endpoint="http://test.endpoint"
                      compress="true"/>
      </persistence>
   </local-cache>
</cache-container>

This will work with any Infinispan 8.x release.

Enjoy !

Infinispan 8.2.4.Final released!

Dear Infinispan community,

We are proud to announce a new micro release of our stable 8.2 branch. Download it here and try it out!

This maintenance release includes a handful of bug fixes and a bonus new feature. If you are using any other 8.x release, we recommend upgrading to 8.2.4.Final.

Check out the fixed issues, download the release and tell us all about it on the forum, on our issue tracker or on IRC on the #infinispan channel on Freenode.

We are currently busy working on the upcoming beta release of the 9.0 stream.

Cheers,
The Infinispan team

Tuesday, 9 August 2016

Running Infinispan cluster on OpenShift

Did you know that it's extremely easy to run Infinispan in OpenShift? Infinispan 9.0.0.Alpha4 adds out of the box support for OpenShift (and Kubernetes) discovery!

Our goal

We'd like to build an Infinispan cluster on top of OpenShift and expose a Service for it (you may think of Services as load balancers). A Service can be exposed to the outside world using Routes. Finally, we will use the REST interface to PUT and GET some data from the cluster.


Accessing the OpenShift cloud

Of course, before playing with Infinispan you will need an OpenShift cluster. There are a number of options you can investigate; I will use the simplest path - a local OpenShift cluster.

The first step is to download the OpenShift Client Tools for your platform. You can find them on the OpenShift releases GitHub page. Once you download and extract the 'oc' binary, make it accessible in your $PATH. I usually copy such things into my '/usr/bin' directory (I'm using Fedora F23).

Once everything is set up, spin up the cluster:

$ oc cluster up
-- Checking Docker client ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:latest image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ...
WARNING: Binding DNS on port 8053 instead of 53, which may be not be resolvable from all clients.
-- Checking type of volume mount ...
Using nsenter mounter for OpenShift volumes
-- Checking Docker version ... OK
-- Creating volume share ... OK
-- Finding server IP ...
Using 192.168.0.17 as the server IP
-- Starting OpenShift container ...
Creating initial OpenShift configuration
Starting OpenShift using container 'origin'
Waiting for API server to start listening
OpenShift server started
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://192.168.0.17:8443
You are logged in as:
User: developer
Password: developer
To login as administrator:
oc login -u system:admin

Note that you have been automatically logged in as 'developer' and your project has been automatically set to 'myproject'. 

Spinning up an Infinispan cluster

The first step is to create an Infinispan app:

$ oc new-app jboss/infinispan-server
--> Found Docker image 9f3d9bf (13 minutes old) from Docker Hub for "jboss/infinispan-server"
* An image stream will be created as "infinispan-server:latest" that will track this image
* This image will be deployed in deployment config "infinispan-server"
* Ports 11211/tcp, 11222/tcp, 57600/tcp, 7600/tcp, 8080/tcp, 8181/tcp, 8888/tcp, 9990/tcp will be load balanced by service "infinispan-server"
* Other containers can access this service through the hostname "infinispan-server"
--> Creating resources with label app=infinispan-server ...
imagestream "infinispan-server" created
deploymentconfig "infinispan-server" created
service "infinispan-server" created
--> Success
Run 'oc status' to view your app.

Now you need to modify the Deployment Configuration (use 'oc edit dc/infinispan-server' for this) and tell Infinispan to boot up with the Kubernetes discovery protocol stack, using the proper namespace to look up other nodes. Unfortunately this step cannot be automated; otherwise a newly created Infinispan node might try to join an existing cluster, which is something you might not want. Here's my modified Deployment Configuration:

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: infinispan-server
  namespace: myproject
  selfLink: /oapi/v1/namespaces/myproject/deploymentconfigs/infinispan-server
  uid: eaf22ecc-5afa-11e6-bcb7-54ee751d46e3
  resourceVersion: '865'
  generation: 6
  creationTimestamp: '2016-08-05T10:54:01Z'
  labels:
    app: infinispan-server
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxUnavailable: 25%
      maxSurge: 25%
    resources: {}
  triggers:
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - infinispan-server
      from:
        kind: ImageStreamTag
        namespace: myproject
        name: 'infinispan-server:latest'
      lastTriggeredImage: 'jboss/infinispan-server@sha256:52b4fcb1530159176ceb81ea8d9638fa69b8403c8ca5ac8aea1cdbcb645beb9a'
  replicas: 1
  test: false
  selector:
    app: infinispan-server
    deploymentconfig: infinispan-server
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: infinispan-server
        deploymentconfig: infinispan-server
      annotations:
        openshift.io/container.infinispan-server.image.entrypoint: '["docker-entrypoint.sh"]'
        openshift.io/generated-by: OpenShiftNewApp
    spec:
      containers:
      - name: infinispan-server
        image: 'jboss/infinispan-server@sha256:52b4fcb1530159176ceb81ea8d9638fa69b8403c8ca5ac8aea1cdbcb645beb9a'
        args:
        - cloud
        - '-Djboss.default.jgroups.stack=kubernetes'
        ports:
        - containerPort: 8181
          protocol: TCP
        - containerPort: 8888
          protocol: TCP
        - containerPort: 9990
          protocol: TCP
        - containerPort: 11211
          protocol: TCP
        - containerPort: 11222
          protocol: TCP
        - containerPort: 57600
          protocol: TCP
        - containerPort: 7600
          protocol: TCP
        - containerPort: 8080
          protocol: TCP
        env:
        - name: OPENSHIFT_KUBE_PING_NAMESPACE
          valueFrom: {fieldRef: {apiVersion: v1, fieldPath: metadata.namespace}}
        resources: {}
        terminationMessagePath: /dev/termination-log
        imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
status:
  latestVersion: 5
  observedGeneration: 6
  replicas: 1
  updatedReplicas: 1
  availableReplicas: 1
  details:
    causes:
    - type: ConfigChange

There is one final step - the Kubernetes PING protocol uses the Kubernetes API to look up other nodes in the Infinispan cluster. By default, API access is disabled in OpenShift and needs to be enabled. This can be done with this simple command:

$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)

Now we can redeploy the application (to ensure that all changes were applied) and scale it out (to 3 nodes):

$ oc deploy infinispan-server --latest -n myproject
$ oc scale --replicas=3 dc/infinispan-server
deploymentconfig "infinispan-server" scaled

Now let's check if everything looks good - you can do it either through the OpenShift web console or by using 'oc get pods' and 'oc logs' commands:

$ oc logs infinispan-server-6-lfiy9 | grep -i "Received new cluster view"
11:45:58,151 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-2,infinispan-server-6-lfiy9) ISPN000094: Received new cluster view for channel clustered: [infinispan-server-6-lfiy9|8] (3) [infinispan-server-6-lfiy9, infinispan-server-6-07vtk, infinispan-server-6-6ts14]
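If you'd rather script that check than eyeball the log, the member list can be pulled out of the ISPN000094 line. A small Python sketch (the 'cluster_view' helper is my own invention):

```python
import re

def cluster_view(log_line):
    """Parse the '(N) [member1, member2, ...]' tail of an ISPN000094 cluster-view log line.

    Returns the list of member names, or None if the line is not a cluster view.
    """
    match = re.search(r"\((\d+)\)\s*\[([^\]]*)\]\s*$", log_line)
    if not match:
        return None
    members = [name.strip() for name in match.group(2).split(",")]
    # The count in parentheses should agree with the number of members listed.
    assert int(match.group(1)) == len(members)
    return members
```

For the log line above it returns the three member names, so len(cluster_view(line)) is a quick cluster-size check.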


Accessing the cluster

In order to access the Infinispan cluster from the outside world we need a Route:

$ oc expose svc/infinispan-server
route "infinispan-server" exposed

The newly created Route needs a small change - we need to point its target port at 8080 (the REST service). The 'oc edit route/infinispan-server' command is perfect for this. Below is my updated configuration:

apiVersion: v1
kind: Route
metadata:
  annotations:
    openshift.io/host.generated: "true"
  creationTimestamp: 2016-08-05T11:57:14Z
  labels:
    app: infinispan-server
  name: infinispan-server
  namespace: myproject
  resourceVersion: "1373"
  selfLink: /oapi/v1/namespaces/myproject/routes/infinispan-server
  uid: c00bf8b5-5b03-11e6-bcb7-54ee751d46e3
spec:
  host: infinispan-server-myproject.192.168.0.17.xip.io
  port:
    targetPort: 8080-tcp
  to:
    kind: Service
    name: infinispan-server
status:
  ingress:
  - conditions:
    - lastTransitionTime: 2016-08-05T11:57:14Z
      status: "True"
      type: Admitted
    host: infinispan-server-myproject.192.168.0.17.xip.io
    routerName: router
  • We changed the target port to 8080-tcp (the REST interface)

Testing the setup

You can easily see how to access the cluster by describing the Route:

$ oc get route/infinispan-server
NAME                HOST/PORT                                         PATH      SERVICE                      TERMINATION   LABELS
infinispan-server   infinispan-server-myproject.192.168.0.17.xip.io             infinispan-server:7600-tcp                 app=infinispan-server

Now let's try to play with the data:

$ curl -X POST -H 'Content-type: text/plain' -d 'test' infinispan-server-myproject.192.168.0.17.xip.io/rest/default/test
$ curl -X GET -H 'Content-type: text/plain' infinispan-server-myproject.192.168.0.17.xip.io/rest/default/test
test

Cleaning up

Finally, when you are done experimenting, you can remove everything using the 'oc delete' command:

$ oc delete all -l "app=infinispan-server"
imagestream "infinispan-server" deleted
deploymentconfig "infinispan-server" deleted
route "infinispan-server" deleted
service "infinispan-server" deleted
pod "infinispan-server-6-4fdvm" deleted
pod "infinispan-server-6-z686g" deleted

Conclusion

Running an Infinispan cluster inside an OpenShift cloud is really simple - just three steps to remember:
  1. Create an Infinispan app ('oc new-app')
  2. Tell it to use the Kubernetes JGroups stack and in which project to look for other cluster members ('oc edit dc/infinispan-server')
  3. Allow access to the OpenShift API ('oc policy add-role-to-user')
Happy scaling!

Friday, 5 August 2016

MongoDB Cache Store 8.2.1.Final

In the storm of the persistence SPI rework that happened during Infinispan 6.0, the MongoDB cache store, among others, was left in a state of semi-abandonment for a long time.

Fortunately, a few brave souls came to its rescue and have breathed new life into it so that it can be used with Infinispan 8.x.

In particular I wish to thank Kurt Lehrke for doing most of the work !!!

Get it from the dedicated cache store download page.

Thursday, 4 August 2016

Infinispan 9.0.0.Alpha4

Dear Infinispan users,

I am glad to announce that we have released 9.0.0.Alpha4 for you!


A brand new Alpha release from our development branch: 9.0.0.Alpha4 has a slew of bug fixes and some more enhancements, among which we single out the transactional JDBC cache store and Kubernetes PING support. We have also added quite a bit of documentation around querying to help users better understand how to use it.

Download it now, try it and tell us what you think on the infinispan forums or come and meet us on IRC: channel #infinispan on Freenode.