In this post I'm starting to look at the provisioning of such ecosystems. OSGi frameworks are dynamic: they can be assigned a function on the fly and, when the need arises, be repurposed. This repurposing is interesting in particular because it saves you from sending a cloud VM image over the network again, and those images can be big. You can reuse the image that you started with, changing its function and role over time.
In my previous post I was using pre-configured VM images, each representing a different deployment in the ecosystem. In this post I will start out with a number of identical images. In addition, I will deploy one other special image: the provisioner. However, since that one is also running in an OSGi framework, even that one can be repurposed or moved if the need arises.
You'll see that, as in the previous posts, none of the OSGi bundles are statically configured to know where the remote services they use are located. They are all part of the dynamic OSGi Cloud Ecosystem which binds all services together via the OSGi Remote Services compliant Discovery mechanism.
I wrote a very simple demo provisioner which keeps an eye on what frameworks are up, provisions them with OSGi bundles, and if one goes down it redeploys services to an alternative.
Altogether, the setup is as follows:
Provisioning an ecosystem of initially identical OSGi frameworks
To be able to work with such a dynamic ecosystem of OSGi frameworks, the provisioner needs to monitor the frameworks available in it. Since these frameworks are represented as OSGi services, this monitoring can simply be done through OSGi service mechanics. In my case I'm using a simple ServiceTracker, but you could do the same with Blueprint or Declarative Services or any other OSGi services-based framework.
Filter filter = bundleContext.createFilter(
"(&(objectClass=org.coderthoughts...api.OSGiFramework)(service.imported=*))");
frameworkTracker = new ServiceTracker(bundleContext, filter, null) {
    @Override
    public Object addingService(ServiceReference reference) {
        handleTopologyChange();
        return super.addingService(reference);
    }

    @Override
    public void removedService(ServiceReference reference, Object service) {
        handleTopologyChange();
        super.removedService(reference, service);
    }
};
frameworkTracker.open();
My ServiceTracker listens for services with OSGiFramework as objectClass which have the service.imported property set. This means that it listens to remote frameworks only. Whenever a framework appears or disappears, handleTopologyChange() will be called.

For this post I wrote a really simple demo provisioner with the following rules, which are evaluated when the topology changes (frameworks arrive in and disappear from the ecosystem):
- Deploy the web bundles to a framework whose host name starts with 'web-'.
- Deploy one instance of the service provider bundles to the framework with the largest amount of memory available.
- Ensure that if the vm hosting the service provider bundles disappears for some reason, they get redeployed elsewhere. Note that by virtue of using the OSGi Remote Services Discovery functionality the client will automatically find the moved service.
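The rule evaluation in handleTopologyChange() can be sketched in plain Java. This is a hypothetical, simplified stand-in for the demo provisioner's logic, not its real code: FrameworkInfo, chooseWebTarget and chooseProviderTarget are illustrative names, and the real provisioner works against OSGiFramework service references rather than a plain list.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative sketch of the demo provisioner's selection rules.
public class ProvisionerRules {
    // Minimal stand-in for the metadata a framework service could expose.
    record FrameworkInfo(String name, long freeMemory) {}

    // Rule 1: web bundles go to the framework whose name starts with "web-".
    static Optional<FrameworkInfo> chooseWebTarget(List<FrameworkInfo> fwks) {
        return fwks.stream().filter(f -> f.name().startsWith("web-")).findFirst();
    }

    // Rule 2: the service provider bundles go to the framework
    // with the most free memory.
    static Optional<FrameworkInfo> chooseProviderTarget(List<FrameworkInfo> fwks) {
        return fwks.stream().max(Comparator.comparingLong(FrameworkInfo::freeMemory));
    }

    public static void main(String[] args) {
        List<FrameworkInfo> topology = List.of(
            new FrameworkInfo("web-coderthoughts", 128),
            new FrameworkInfo("osgi1", 512),
            new FrameworkInfo("osgi2", 256));

        System.out.println(chooseWebTarget(topology).get().name());      // web-coderthoughts
        System.out.println(chooseProviderTarget(topology).get().name()); // osgi1

        // Rule 3: if the provider's framework disappears, re-run the
        // selection over the remaining frameworks to redeploy elsewhere.
        List<FrameworkInfo> afterFailure =
            List.of(topology.get(0), topology.get(2));
        System.out.println(chooseProviderTarget(afterFailure).get().name()); // osgi2
    }
}
```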
Obviously these aren't real production rules, but they give you an idea what you can do in a management/provisioning agent:
- Decide what to deploy based on image characteristics. I used the image name for the web component, but you can also use other metadata such as image type (Red Hat OpenShift/Amazon/Azure/etc), location (e.g. the country where the VM is hosted) or other custom metadata.
- Dynamically select a target image based on a runtime characteristic; in my case I'm using available memory.
- Keep an eye on the topology. When images disappear or new ones arrive the provisioner gets notified - simply through OSGi service dynamics - and changes can be made accordingly. For example, a redeploy action can be taken when a framework disappears.
- No hardcoded references to where the target services live. The service consumers use ordinary OSGi service APIs to look up and invoke the services in the cloud ecosystem. I'm not using a component system like Blueprint or DS here, but you can of course use those too.
The demo provisioner code is here: DemoProvisioner.java
As before, I'm using Red Hat's free OpenShift cloud instances, so anyone can play with this stuff, just create a free account and go.
My basic provisioner knows the sets of bundles that need to be deployed in order to get a resolvable system. In the future I'll write some more about the OSGi Repository and Resolver services which can make this process more declarative and highly flexible.
However, the first problem I had was: what will I use to do the remote deployment to my OSGi frameworks? There are a number of OSGi specs that deal with remote management, most notably JMX. But remote JMX deployment doesn't really work that well in my cloud environment as it requires me to open an extra network port, which I don't have. Probably the best solution would be based on the REST API that is being worked on in the EEG (RFC 182) and hook that in with the OSGi HTTP service. For the moment, I decided to create a simple deployment service API that works well with OSGi Remote Services:
public interface RemoteDeployer {
    long getBundleID(String location);
    String getSymbolicName(long id);
    long installBundle(String location, byte[] base64Data);
    long[] listBundleIDs();
    void startBundle(long id);
    void stopBundle(long id);
    void uninstallBundle(long id);
}
This gives me very basic management control over remote frameworks and allows me to find out what they have installed. It's a simple API that's designed to work well with Remote Services. Note that the installBundle() method takes the actual bundle content as base-64 encoded bytes. This saves me from having to host the bundles as downloadable URLs from my provisioner framework, and makes it really easy to send over the bytes.
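The base-64 handling on both sides of installBundle() amounts to a simple round-trip with the JDK's Base64 class. The sketch below shows the idea; BundleEncoder and its methods are hypothetical helper names, not part of the RemoteDeployer API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

// Illustrative helpers for shipping bundle bytes through installBundle().
public class BundleEncoder {
    // Provisioner side: read a bundle jar from disk and encode it
    // so it can be passed as the base64Data argument.
    static String encode(Path bundleJar) throws IOException {
        return Base64.getEncoder().encodeToString(Files.readAllBytes(bundleJar));
    }

    // RemoteDeployer side: decode back to the raw jar bytes
    // before handing them to the local OSGi framework.
    static byte[] decode(String base64Data) {
        return Base64.getDecoder().decode(base64Data);
    }

    public static void main(String[] args) throws IOException {
        // Simulate a tiny "bundle" file; real input would be a built jar.
        Path tmp = Files.createTempFile("demo-bundle", ".jar");
        Files.write(tmp, new byte[] {0x50, 0x4b, 0x03, 0x04}); // zip/jar magic bytes
        String wire = encode(tmp);
        byte[] roundTrip = decode(wire);
        System.out.println(roundTrip.length); // 4
        Files.delete(tmp);
    }
}
```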
I'm registering my RemoteDeployer service in each framework in the ecosystem using the Remote Services based cloud ecosystem service distribution I described in my previous post. This way the provisioner can access the service in the remote frameworks and deploy bundles to each.
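For reference, the standard OSGi Remote Services way to mark a service for export is the service.exported.interfaces service property. The snippet below is a minimal sketch of building such properties; the actual distribution described in my previous post may use additional configuration, and DeployerRegistration is an illustrative name.

```java
import java.util.Dictionary;
import java.util.Hashtable;

// Sketch of the service properties that mark a service for remote export.
// In a real bundle these would be passed to
// bundleContext.registerService(RemoteDeployer.class.getName(), impl, props).
public class DeployerRegistration {
    static Dictionary<String, Object> exportProperties() {
        Dictionary<String, Object> props = new Hashtable<>();
        // Standard Remote Services property: export all of the
        // service's interfaces to remote consumers.
        props.put("service.exported.interfaces", "*");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(exportProperties().get("service.exported.interfaces"));
    }
}
```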
That's pretty much it! I've simplified the web servlet a little from the previous post to only show a single TestService invocation, let's try it out...
Try it out!
First thing I'm doing is start the Discovery Server in the ecosystem. This is done exactly as in my earlier post: Cloud Ecosystems with OSGi.
Having the discovery server running, I begin by launching my provisioner framework. After that I'll be adding the empty OSGi frameworks one-by-one to see how it all behaves.
Finally I'll kill a VM to see redeployment at work.
The common OSGi Framework bits
The provisioner framework is created by first adding the baseline infrastructure to a new OpenShift DIY clone:
$ git clone ssh://blahblahblah@provisioner-coderthoughts.rhcloud.com/~/git/provisioner.git/
$ cd provisioner
$ git fetch https://github.com/bosschaert/osgi-cloud-infra.git
$ git merge -Xtheirs FETCH_HEAD
The specific Provisioner bits
Specific to the provisioner vm are the provisioner bundles. The demo provisioner I wrote can be built and deployed from the osgi-cloud-prov-src github project:
$ git clone https://github.com/bosschaert/osgi-cloud-prov-src.git
$ cd osgi-cloud-prov-src
$ mvn install
$ ./copy-to-provisioner.sh .../provisioner
Finally, add the changed configuration and provisioner bundle to git in the provisioner clone, then commit and push:
$ cd .../provisioner
$ git add osgi
$ git commit -m "Provisioner VM"
$ git push
The provisioner image will be uploaded to the cloud and if you have a console visible (which you'll get by default by doing a git push on OpenShift) you will eventually see the OSGi Framework coming up.
Adding the identical OSGi Frameworks
I'm launching 3 identical cloud images all containing an unprovisioned OSGi framework. As they become active my demo provisioner will exercise the RemoteDeployer API to provision them with bundles.
First add one called web-something. Mine is called web-coderthoughts. The demo provisioner looks for one with the 'web-' prefix and will deploy the servlet there; this allows us to use a fixed URL in the browser to see what's going on. The setup is exactly the same as described above in the common OSGi Framework bits:
$ git clone ssh://blahblahblah@web-coderthoughts.rhcloud.com/~/git/web.git/
$ cd web
$ git fetch https://github.com/bosschaert/osgi-cloud-infra.git
$ git merge -Xtheirs FETCH_HEAD
And don't forget to set up the SSH tunnel. Once the web framework is started, the provisioner starts reacting. If you have a console, you'll see messages appear such as:
remote: *** Found web framework
remote: *** Web framework bundles deployed
Great! Let's look at the servlet page:
So far so good. We are seeing 2 frameworks in the ecosystem: the web framework and the provisioner. No invocations were made on the TestService because it isn't deployed yet.
So let's deploy our two other frameworks. The steps are the same as for the web framework, except that the VM names are osgi1 and osgi2. Once they're up, reload the web page:
Reacting to failure
Let's kill the framework hosting the TestService and see what happens :)
OpenShift has the following command for this; if you're using a different cloud provider, it will have a similar means to stop a cloud VM image.
$ rhc app stop -a osgi1
Give the system a few moments to react, then refresh the web page:
Repurposing
Obviously the provisioner in this post was very simplistic and really only an example of how you could write one. Things will become even more interesting when you start using an OSGi Repository Service to provide the content to the provisioner. That will bring a highly declarative model to the provisioner and allows you to create a much more generic management agent. I'll write about that in a future post.
Another element not worked out in this post is the repurposing of OSGi frameworks. However, it's easy to see that with the primitives available this can be done. Initially the frameworks in the cloud ecosystem are identical, and although the provisioner will fill them with different content, that content can be removed (the bundles can be uninstalled) and the framework can be given a different role by the provisioner when the need arises.
Note that all the code in the various github projects used in this post has been tagged 0.3.