Thursday, June 28, 2012

Cloud ecosystems with OSGi

One of the areas where I think the dynamic services architecture of OSGi can really shine is the cloud. What I have in mind is a cloud ecosystem comprised of multiple nodes in a cloud, or possibly across clouds, where each node potentially has a different role from the others. In such a system the various nodes need to work together to perform some function, and hooking the pieces together is really where the fun starts: how do you know, from inside one cloud VM, where the other ones are? Various people are working on solutions for this, ranging from elastic IP addresses to plugging in variables when launching a VM, and various others. While I agree that these solutions provide value, I think they should not necessarily bleed into the space of the developer or even the deployer. The deployer should simply be able to select a cloud, create a few instances and associate them together. At that point they should nicely work together.

This is where OSGi Services come in. OSGi Services implement a Java interface (we might see OSGi services in other languages too in the not too distant future) and are registered in the OSGi Service Registry. Consumers of these services are not tied to the provider, as they select the service by its interface or other properties. The provider could be any other bundle in the OSGi Framework or, when using OSGi Remote Services, it could be in a different framework. The OSGi Remote Services specs also describe a discovery mechanism which makes it possible to find remote OSGi services using the standard OSGi Service Registry mechanisms (or component frameworks such as Blueprint, DS, etc.).

So I started prototyping such a cloud ecosystem using Red Hat's OpenShift cloud combined with OSGi Remote Services. However you'll see that my bundles are pure OSGi bundles that don't depend on any type of cloud - they simply use the OSGi Service Registry as normal...

In the diagram each OSGi Framework is potentially running in its own Cloud VM, although multiple frameworks could also share VMs (this would be a deployment choice and doesn't affect the architecture).

Before diving into the details, my setup allows me to:
  • register OSGi Services to be shared with other OSGi frameworks within the ecosystem.
  • see what other frameworks are running in this ecosystem. This would be useful information for a provisioning agent.
What's so special about this? Isn't this just OSGi Remote Services? Yes, I'm using those, but the interesting bit is the discovery component, which binds the cloud ecosystem together. The user of the Remote Services doesn't need to know where they physically are. Similarly, the provider of the remoted service doesn't need to know how it's distributed.

As with any of my blog articles, I'm sharing the details of all this below, so do try this at home :) Most of the work here relates to setting up the infrastructure. Hopefully we'll see cloud vendors provide something like this in the not too distant future, which would give you a nice and clean deployment infrastructure for creating dynamic OSGi-based cloud ecosystems (i.e. an OSGi PaaS).

The view from inside an OSGi bundle

Very simple. The provider of a service marks it as shared for use in the cloud ecosystem by adding 2 extra service registration properties. I'm using the standard OSGi Remote Services property service.exported.interfaces for this, combined with a special config type. This config type is picked up by the infrastructure to mean that the service needs to be shared in the current cloud ecosystem.

I wrote a set of demo bundles to show the OSGi cloud ecosystem in action. One of the demo bundles registers a TestService, using the standard BundleContext API and adds these properties:

public class Activator implements BundleActivator {
  public void start(BundleContext context) throws Exception {
    TestService dr = new TestServiceImpl();
    Dictionary<String, Object> props = new Hashtable<String, Object>();
    props.put("service.exported.interfaces", "*");
    // (the special config type property discussed above is set here too)
    context.registerService(TestService.class.getName(), dr, props);
  }

  public void stop(BundleContext context) throws Exception {}
}
You can see the full provider class here:

Consuming the service is completely non-intrusive. My demo also contains a Servlet that provides a simple Web UI to test the service and makes invocations on it. It doesn't need to specify anything special to use an OSGi service that might be in another framework instance. It uses a standard OSGi ServiceTracker to look up the TestService:

ServiceTracker testServiceTracker = new ServiceTracker(context, TestService.class.getName(), null) {
  public Object addingService(ServiceReference reference) {
    // called when a TestService appears anywhere in the ecosystem
    return super.addingService(reference);
  }

  public void removedService(ServiceReference reference,
                             Object service) {
    // called when a TestService disappears again
    super.removedService(reference, service);
  }
};
testServiceTracker.open();
For the whole thing, see here:

I used plain OSGi Service APIs here, but you can also use Blueprint, DS or whatever OSGi Component technology to work with the services...
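For illustration, the consumer side in Blueprint could look roughly like the sketch below. The interface and class names here are made up for the example, they're not the actual ones from the demo project:

```xml
<blueprint xmlns="">
  <!-- Injects a TestService from anywhere in the ecosystem; Blueprint's
       service damping handles the dynamics when the remote instance moves -->
  <reference id="testService" interface="org.example.demo.TestService"/>

  <bean id="webUI" class="org.example.demo.WebUIServlet">
    <property name="testService" ref="testService"/>
  </bean>
</blueprint>
```

The nice thing is that nothing in this file says the service is remote; the discovery infrastructure takes care of that.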

The main point is that we are doing purely Service Oriented Programming. As long as the services are available somewhere in the ecosystem their consumers will find them. If a cloud VM dies or another is added, the dynamic capabilities of OSGi Services will rebind the consumers to the changed service locations. The code that deals with the services doesn't deal with the physical cloud topology at all.
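To make that rebinding behaviour concrete, here's a tiny plain-Java toy (this is not the OSGi API, just an illustration of the dynamics): consumers are bound to whichever provider is available, and when the provider they're using goes away they are transparently rebound to another one.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy stand-in for the OSGi service registry: a "consumer" is bound to
// whichever provider is available and is rebound when providers come and go.
// Illustration only - the real OSGi API looks nothing like this.
public class ToyRegistry {
    private final Map<String, Object> services = new LinkedHashMap<String, Object>();
    Object bound; // what the consumer currently uses

    void register(String id, Object svc) {
        services.put(id, svc);
        rebind();
    }

    void unregister(String id) {
        services.remove(id);
        rebind();
    }

    // Rebind the consumer to any remaining provider, or null if none is left
    private void rebind() {
        bound = services.isEmpty() ? null : services.values().iterator().next();
    }

    public static void main(String[] args) {
        ToyRegistry reg = new ToyRegistry();
        reg.register("osgi1", "TestService@osgi1");
        reg.register("osgi2", "TestService@osgi2");
        System.out.println(reg.bound); // prints TestService@osgi1
        reg.unregister("osgi1");       // that cloud VM dies...
        System.out.println(reg.bound); // ...prints TestService@osgi2
    }
}
```

In real OSGi the ServiceTracker callbacks shown earlier (or Blueprint/DS) give you exactly this behaviour for free.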

Try it out!

As always on this blog I'm providing detailed steps so you can try this out yourself. Note that I'm using Red Hat's OpenShift, which gives you 3 cloud instances for development purposes for free. The rest is all open-source software, so you can get going straight away.

Also note that you can set this up using other clouds too, or even across different clouds; the OSGi bundles aren't affected by this at all. So if you prefer another cloud, the only thing you need to do there is set up the Discovery system for that cloud; the same OSGi bundles will work.

Cloud instances

For this example I'm using 3 cloud VMs to create my ecosystem, all based on the OpenShift 'DIY' cartridge as explained in my previous posting. They have the following names:
  • discoserver - provides the Discovery functionality
  • osgi1 and osgi2 - two logically identical OSGi frameworks


The Discovery functionality is based on Apache ZooKeeper and runs in its own cloud VM. Everything you need is available from the osgi-cloud-discovery github project.

Here's how I get it into my cloud image (same as described in my previous post):
$ git clone ssh:// (this is the URL given to you when you created the OpenShift vm)
$ cd discoserver
$ git fetch
$ git merge -Xtheirs FETCH_HEAD 

then launch the VM:
$ git push
... after a while you'll see:
Starting zookeeper ... STARTED
Done - I've got my discovery system started in the cloud.

I didn't replicate discovery (for fault tolerance) here for simplicity; that can be added later.

The OSGi Frameworks

For the OSGi Frameworks I'm starting off with 2 identical frameworks which contain the baseline infrastructure. I put this infrastructure in the osgi-cloud-infra github project. To get it into the VM clone provided by the OpenShift 'DIY' cartridge, do something similar to the above:
$ git clone ssh://
$ cd osgi1
$ git fetch
$ git merge -Xtheirs FETCH_HEAD 

At this point it gets a little tricky: I'm setting up an SSH tunnel to the discovery instance to make this VM part of the discovery domain. To do this, I create an SSH key and add it to my OpenShift account; each instance that's part of my ecosystem then uses that key to set up the SSH tunnel.

Create the key and upload it to OpenShift:
$ cd disco-tunnel
$ ssh-keygen -t rsa -f disco_id_rsa
$ rhc sshkey add -i disco -k 

Create a script that sets up the tunnel. For this we also need to know the SSH URL of the discovery VM. This is the identifier (or whatever OpenShift provided to you). In the disco-tunnel directory is a template for this script; copy it and set the DISCOVERY_VM variable in the script to that identifier:
$ vi
... set the DISCOVERY_VM variable ...

finally add the new files in here to git:
$ git add disco_id_rsa*

For any further OSGi framework instances, you can simply copy the files added to git here ( and disco_id_rsa*) and add them to the git repo.

As you can see, this bit is quite OpenShift-specific. It's a one-off thing that needs setting up and it's not really ideal; I hope that cloud vendors will make something like this easier in the future :)

Add Demo bundles

At this point I have my cloud instances set up as far as the infrastructure goes. However, they don't do much yet given that I don't have any application bundles. I want to deploy my TestService as described above and I'm also going to deploy the little Servlet-based Web UI that invokes it so that we can see it happening. The demo bundles are hosted in a source project: osgi-cloud-disco-demo.

To deploy, clone and build the demo bundles:
$ git clone git://
$ cd osgi-cloud-disco-demo
$ mvn install

The next thing we need to do is deploy the bundles. For now I'm using static deployment, but I'm planning to expand to dynamic deployment in the future.

I'm deploying the Servlet-based Web UI bundle first. The osgi-cloud-disco-demo source tree contains a script that copies the bundles over and updates the configuration to deploy them in the framework:
$ ./ ~/clones/osgi1

In the osgi1 clone I can now see that the bundles have been added and the configuration to deploy them updated:
$ git status
#    modified:   osgi/equinox/config/config.ini
# Untracked files:
#    osgi/equinox/bundles/cloud-disco-demo-api-1.0.0-SNAPSHOT.jar
#    osgi/equinox/bundles/cloud-disco-demo-web-ui-1.0.0-SNAPSHOT.jar
Add them all to git and commit and push the git repo:
$ git add osgi/equinox
$ git commit -m "An OSGi Framework Image"
$ git push

The cloud VM is started as part of the 'git push'.

Let's try the demo web UI: go to the /webui context of the domain that OpenShift provided to you and it will display the OSGi Frameworks known to the system and all the TestService instances.
There is 1 framework known (the one running the webui) and no TestService instances yet. So far so good.

Next we'll make the TestService available in another cloud VM.
Create another cloud VM (e.g. osgi2) identical to osgi1, but without the demo bundles.

Then deploy the demo service provider bundles:
$ ./ ~/clones/osgi2

In the osgi2 clone I can now see that the bundles have been added and the configuration to deploy them updated:
$ git status
# On branch master
#    modified:   osgi/equinox/config/config.ini
# Untracked files:
#    osgi/equinox/bundles/cloud-disco-demo-api-1.0.0-SNAPSHOT.jar
#    osgi/equinox/bundles/cloud-disco-demo-provider-1.0.0-SNAPSHOT.jar

Add them all to git and commit and push the git repo:
$ git add osgi/equinox 
$ git commit -m "An OSGi Framework Image"
$ git push

Give the system a minute or so, then refresh the web UI:

You can now see that there are 2 OSGi frameworks available in the ecosystem. The web UI (running in osgi1) invokes the test service (running in osgi2) which, as a return value, reports its UUID to show that it's running in the other instance.

Programming model

The nice thing here is that I stayed within the OSGi programming model. My bundles simply use an OSGi ServiceTracker to look up the framework instances (which are represented as services) and the TestService. I don't have any configuration code to wire up the remote services. This all goes through the OSGi Remote Services-based discovery mechanism.
Also, the TestService is invoked as a normal OSGi Service. The only 'special' thing I did here was to mark the TestService as exported in the cloud with some service properties.


This is just a start... I think it opens up some very interesting possibilities and I intend to write more posts in the near future about dynamic provisioning in this context, service monitoring and other cloud topics. The example I've been running here is on my employer's (Red Hat) OpenShift cloud - but it can work on any cloud or even across clouds and the bundles providing the functionality generally don't need to know at all what cloud they're in...

Some additional notes

Cloud instances typically have a limited set of ports they can open to the outside world. In the case of OpenShift you currently get only one: port 8080 which is mapped to external port 80. If you're running multiple applications in a single cloud VM this can sometimes be a problem as they each may want to have their own port. OSGi actually helps here too. It provides the OSGi HTTP Service where bundles can register any number of resources, servlets etc on different contexts of the same port. So in my example I'm running my Servlet on /webui but I'm also using Apache CXF-DOSGi as the Remote Services implementation which exposes the OSGi Services on the same port but different contexts. As many web-related technologies in OSGi are built on the HTTP Service they can all happily coexist on a single port.
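To sketch what this looks like in code: the fragment below registers a servlet under the /webui alias via the HTTP Service. This is an illustrative sketch only — it needs to run inside an OSGi framework with an HTTP Service implementation, and the WebUIServlet class name is made up:

```java
import org.osgi.framework.BundleContext;
import org.osgi.service.http.HttpService;
import org.osgi.util.tracker.ServiceTracker;

public class WebUIRegistrar {
    public void register(BundleContext context) throws Exception {
        // Track the HTTP Service provided by the framework's web container
        ServiceTracker tracker =
            new ServiceTracker(context, HttpService.class.getName(), null);;

        HttpService http = (HttpService) tracker.waitForService(5000);

        // Register our servlet on the /webui alias; other bundles (such as
        // the CXF-DOSGi endpoints) register their own aliases on the same port
        http.registerServlet("/webui", new WebUIServlet(), null, null);
    }
}
```

Each alias gets its own context on the single shared port, which is how the web UI and the remote service endpoints coexist.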