Tuesday, October 22, 2013

Role-based access control for Karaf shell commands and OSGi services

In a previous post I outlined how role-based access control was added to JMX in Apache Karaf. While JMX is one way to remotely manage a Karaf instance, another management mechanism is provided via the Karaf console/shell. Up until now, security for console commands was very coarse-grained: once in the console, you had access to all the commands. For example, it was not possible to give certain users access to merely changing their own configuration without also giving them access to shutting down the whole Karaf instance.

With commit r1534467 this has now changed (thanks again to JB Onofré for reviewing and applying my pull request). You can now define the roles required for each shell command, and even have different roles depending on the arguments used with a certain command. This is achieved by using a relatively advanced OSGi feature: Service Registry Hooks. These hooks give you a lot of control over how the OSGi service registry behaves. I blogged about them before. They enable you to:
  • see what service consumers are looking for, so you can register these services on-the-fly. This is often used to import remote services from discovery, but only if there is actually a client for them.
  • hide services from certain service consumers
  • change the service properties the client sees for a service by providing an alternative registration
  • proxy the original service
Every Karaf command is in reality an Apache Felix Gogo command, registered as an OSGi service. Every command has two service registration properties: osgi.command.scope and osgi.command.function. These properties define the name of the command and its scope. With the use of the OSGi Service Registry hooks I can replace the original service with a proxy that adds the role-based security. 

When I originally floated this idea on the Karaf mailing list, Christian Schneider said: "why don't we enable this for all services?" Good idea! So that's how I ended up implementing it. I first added a mechanism to add role-based access control to OSGi services in general and then applied this mechanism to get role-based access control for the Karaf commands.

Under the hood

the original service is hidden by OSGi Service Registry Hooks
The theory is quite simple. As mentioned above you can use OSGi Service Registry hooks to hide a service from certain consuming bundles and effectively replace it with another. In my case the replacement is a proxy of the original service with the same service registration properties (and some extra ones, see below). It will delegate an invocation to the original service, but before it does so it will check the ACL for the service being invoked to find out what the permitted roles are. Then it checks the roles of the current user by looking at the Subject in the current AccessControlContext. If the user doesn't have any of the permitted roles the service invocation is aborted with a SecurityException.
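To make the mechanism concrete, here's a minimal sketch of such a guarding proxy in plain Java. It uses java.lang.reflect.Proxy and, as a simplification, a thread-local role set standing in for the JAAS Subject in the AccessControlContext; the MyAPI interface and the ACL map are illustrative, not Karaf's actual implementation:

```java
import java.lang.reflect.Proxy;
import java.util.*;

public class GuardedServiceDemo {
  // Stand-in for the roles on the JAAS Subject in the current
  // AccessControlContext (simplified to a thread-local for this sketch).
  public static final ThreadLocal<Set<String>> CURRENT_ROLES =
      ThreadLocal.withInitial(HashSet::new);

  public interface MyAPI {
    String doit(String s);
  }

  // Replace the original service with a proxy that consults the ACL
  // (method name -> permitted roles) before delegating to the original.
  public static MyAPI guard(final MyAPI original, final Map<String, Set<String>> acl) {
    return (MyAPI) Proxy.newProxyInstance(
        MyAPI.class.getClassLoader(),
        new Class<?>[] { MyAPI.class },
        (proxy, method, args) -> {
          Set<String> permitted =
              acl.getOrDefault(method.getName(), Collections.emptySet());
          // No common role between the ACL and the current user: abort.
          if (Collections.disjoint(permitted, CURRENT_ROLES.get()))
            throw new SecurityException("Insufficient roles for " + method.getName());
          return method.invoke(original, args);
        });
  }

  public static void main(String[] args) {
    MyAPI original = s -> "did " + s;
    Map<String, Set<String>> acl = new HashMap<>();
    acl.put("doit", Collections.singleton("manager"));
    MyAPI guarded = guard(original, acl);

    CURRENT_ROLES.get().add("manager");
    System.out.println(guarded.doit("foo")); // permitted, prints "did foo"

    CURRENT_ROLES.get().clear();
    try {
      guarded.doit("foo");
    } catch (SecurityException expected) {
      System.out.println("denied without the manager role");
    }
  }
}
```

The real implementation registers the proxy with the same service properties as the original (plus some extras, see below), so consumers are none the wiser.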

How do I configure ACLs for OSGi services?

ACLs for OSGi services are defined in a way similar to how they are defined for JMX access: through the OSGi Configuration Admin service. The PID needs to start with org.apache.karaf.service.acl. but the exact PID value isn't important. The service to which the ACL applies is found through the service.guard configuration value. Configuration Admin is very flexible with regard to how configuration is stored, but by default in Karaf these configurations are stored as .cfg files in the etc/ directory. Let's say I have a service in my system that implements the following API and is registered in the OSGi service registry under the org.acme.MyAPI interface:
  package org.acme;

  public interface MyAPI {
    void doit(String s);
  }
If I want to specify an ACL to say that only clients that have the manager role can invoke this service, I have to do two things:
  1. First I need to enable the role-based access for this service by including it in the filter specified in the etc/system.properties in the karaf.secured.services property:
      karaf.secured.services=(|(objectClass=org.acme.MyAPI)(...what was there already...))
    Only services matching this filter are enabled for role-based access control; other services are left alone.
  2. Define the ACL for this service as Config Admin configuration, for example by creating a file etc/org.apache.karaf.service.acl.myapi.cfg:
      service.guard = (objectClass=org.acme.MyAPI)
      doit = manager
    So the actual PID of the configuration is not really important, as long as it starts with the org.apache.karaf.service.acl. prefix. The service the ACL applies to is then selected by matching the filter in the service.guard property.
There are some additional rules. There is a special role of *, which means that ACLs are disabled for this method. Similar to the JMX ACLs, you can also specify function arguments that require specific roles. For more details see the commit message.
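For illustration, here is what an ACL combining a method-level rule with an argument-specific rule could look like. The argument-matching syntax below follows the style of the JMX ACLs from the previous post and should be treated as an assumption; the commit message is the authoritative reference:

```properties
service.guard = (objectClass=org.acme.MyAPI)
# Anyone with the manager role may invoke doit()...
doit = manager
# ...but invoking it with the literal argument "shutdown" requires admin
# (argument-matching syntax assumed to mirror the JMX ACL style)
doit["shutdown"] = admin
```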

Setting roles for service invocation

The service proxy checks the roles in the current AccessControlContext against the required ones. So when invoking a service that has role-based access control enabled, you need to set these roles. This is normally done as follows:
  import javax.security.auth.Subject;
  import org.apache.karaf.jaas.boot.principal.RolePrincipal;
  // ... 
  Subject s = new Subject();
  s.getPrincipals().add(new RolePrincipal("manager"));
  Subject.doAs(s, new PrivilegedAction<Object>() {
    public Object run() {
      svc.doit("foo"); // invoke the service
      return null;
    }
  });
This example uses a Karaf built-in role. You can also use your own role implementations by specifying them using the className:roleName syntax in the ACL.

Note however that javax.security.auth.Subject is a very powerful API. You should give bundles that import it extra scrutiny to ensure that they don't give clients access that they shouldn't really have...

Applied to Shell Commands

Next step was to apply these concepts to the Karaf shell commands. As all the shell commands are registered with the osgi.command.function and osgi.command.scope properties, I enabled them in the default Karaf configuration with the following system property:
  karaf.secured.services=(&(osgi.command.scope=*)(osgi.command.function=*))
The next thing is to configure command ACLs. However, that presented a slight usability problem: most of the command services in Karaf are implemented (via OSGi Blueprint) using the Function interface, which means that the actual method name is always execute. It also means that you would need to create a separate Configuration Admin PID for each command, which is quite cumbersome. You really want to configure this stuff on a per-scope level, with all the commands for a single scope in a single configuration file. To allow this, the command-integration code contains a configuration transformer which creates service ACLs as described above, but based on scope-level configuration files.
The command scope configuration file must have a PID that follows this structure: org.apache.karaf.command.acl.<scope>. So if you want to create such a file for the feature scope, the config file would be etc/org.apache.karaf.command.acl.feature.cfg:
  list = viewer
  info = viewer
  install = admin
  uninstall = admin
In this example only users with the admin role can do install/uninstall operations, while viewers can list features etc. Note that by using groups (as outlined in this previous post) users added to the admingroup will also have viewer permissions, so they will be able to do everything. For a more complex configuration file, have a look at this one.
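Conceptually, for each line in the scope file the transformer generates a service ACL of the kind described earlier, keyed on the command's service properties. A sketch of what the generated ACL for the install command could look like (the exact generated configuration is an assumption):

```properties
# Hypothetical service ACL derived from "install = admin" in the feature scope
service.guard = (&(osgi.command.scope=feature)(osgi.command.function=install))
execute = admin
```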

Can I find out what roles a service requires?

It can be useful to know in advance what roles are required to invoke a service. For example, the shell supports tab-style command completion, and you don't want to show commands that are not available to the user's roles. For this purpose an additional service registration property is added to the proxy service registration: org.apache.karaf.service.guard.roles=[role1,role2]. The value of this property is the collection of roles that can possibly invoke a method on the service. Since each command maps to a single service, we can have a command processor that only selects the commands applicable to the roles of the current user. This means that commands the user doesn't have the right roles for are automatically hidden from autocompletion etc. When I'm logged in as an admin I can see all the feature commands (I removed ones not mentioned in the config for brevity):
  karaf@root()> feature: <hit TAB>
  info            install         list            uninstall
while Joe, a viewer, only sees the feature commands available to viewers:
  joe@root()> feature: <hit TAB>
  info            list
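The role-based filtering behind this tab completion can be sketched in a few lines of plain Java: keep a command if the roles in its org.apache.karaf.service.guard.roles property intersect the user's roles. The command map below is illustrative; this is not Karaf's actual completer code:

```java
import java.util.*;

public class CommandFilterDemo {
  // Keep only commands whose guard roles (taken from the
  // org.apache.karaf.service.guard.roles service property)
  // intersect the current user's roles.
  public static List<String> visibleCommands(
      Map<String, Set<String>> commandRoles, Set<String> userRoles) {
    List<String> result = new ArrayList<>();
    for (Map.Entry<String, Set<String>> e : commandRoles.entrySet()) {
      if (!Collections.disjoint(e.getValue(), userRoles))
        result.add(e.getKey());
    }
    Collections.sort(result);
    return result;
  }

  public static void main(String[] args) {
    Map<String, Set<String>> commands = new HashMap<>();
    commands.put("feature:install", Collections.singleton("admin"));
    commands.put("feature:uninstall", Collections.singleton("admin"));
    commands.put("feature:list", new HashSet<>(Arrays.asList("manager", "viewer")));
    commands.put("feature:info", new HashSet<>(Arrays.asList("manager", "viewer")));

    // Joe only has the viewer role, so completion shows list and info only.
    System.out.println(visibleCommands(commands, Collections.singleton("viewer")));
    // prints [feature:info, feature:list]
  }
}
```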

In some cases the commands have roles associated with particular values being passed in. For example the config admin shell commands require admin rights for certain PIDs but not all. So Joe can safely edit his own configuration but is prevented from editing system level configuration:
  joe@root(config)> edit org.acme.foo
  joe@root(config)> property-set somekey someval
  joe@root(config)> update
So Joe can edit the org.acme.foo PID, but when he tries to edit the jmx.acl PID access is denied:
  joe@root(config)> edit jmx.acl
  Error executing command: Insufficient credentials.

Where are we with this stuff today?

The first commits to enable the above have just gone into Karaf trunk, and although I wrote lots of unit tests for it, more use is needed to see whether it all works as users would expect. Also, the default ACL configuration files may need a bit more attention. What's there now is really a start; the idea is to refine as we go along and have this as a proper feature for Karaf 3.

The power of OSGi services

One thing that this approach shows is the power and flexibility of OSGi services. None of the code of the actual commands was changed; the ability to build role-based access control on top of them in a non-intrusive way was really enabled by the OSGi service registry design and its capabilities.

Friday, October 18, 2013

Running pure Jasmine unit tests through Maven

I have always really liked writing unit tests. For the simple reason that with those I know that I did all I could to ensure my algorithms worked as planned. Sure, even with high code coverage there is still a chance that you're missing a situation in your tests, but at least once you know this you can fill the gap by adding an additional test. And, of course, you want to run these tests automatically as part of a regular build. No manual testing please :)

So when I started looking at some projects that use JavaScript I wanted to use the same ideas. Write unit tests that are automatically run during a headless build.
I started using Jasmine, as it seems to be the most popular JavaScript testing framework today. Since the project I was working with was using Maven already I wanted to integrate my Jasmine testing as part of the ordinary Maven test cycle.
Additionally, I wanted the setup of my environment to be trivial. I really don't want any developer to install additional software besides what they already have to run Maven. And I don't want to depend on any platform-specific software, if possible.

This got me looking around on the internet and I found a really good post by Code Cop that describes how you can do something like this for Apache Ant. What he did was test JavaScript logic using Jasmine, outside of the browser. So you don't have the browser JavaScript environment present, but you can test all your algorithms. This is precisely what I was looking for too. Another nice thing about his work is that the test results are stored in the same XML format as JUnit uses, so you can inspect these files with any tool that works with ordinary JUnit output XML files (e.g. you can open them in Eclipse and view them in the JUnit view).

I started with the code by Code Cop, and reduced it to the bare minimum, only supporting Jasmine (Code Cop's work also supports other JS test frameworks). You can find this minimal ant-jasmine test integration at coderthoughts/jasmine-ant. The next step: get it working in Maven.

There were a couple of things that needed to be changed to be able to do this:
  1. I wanted to obtain the Java-level dependencies via Maven: the original Rhino scripting engine (can't use the one in the JRE, because JavaAdapter was removed, see here) and js-engine.jar that adds Rhino as the rhino-nonjdk scripting language.
  2. I want to have the source .js files in src/main/js and the tests in test/main/js, the usual locations in Maven.
  3. I needed to make the output directory configurable so that the results are written to target/surefire-reports, where Maven expects these files.
In the end I got things going. I'm still using Ant inside Maven to actually do the Jasmine test running, using a slightly modified version of Code Cop's Jasmine runner Ant task. But the whole end result fits nicely with the rest of the Maven setup.


  <packaging>war</packaging> <!-- your JavaScript will likely end up in a .war file -->

      <!-- Bring in the original Rhino implementation that contains the JavaAdapter class -->

      <!-- Adds the 'rhino-nonjdk' language to the supported scripting languages -->
      <!-- Obtained from the repository at http://dist.codehaus.org/mule/dependencies/maven2/ -->

                <property name="jasmine.dir" location="lib/jasmine-ant" />
                <property name="script.classpath" refid="maven.test.classpath" />

                <scriptdef name="jasmine" src="${jasmine.dir}/jasmineAnt.js"
                  language="rhino-nonjdk" classpath="${script.classpath}">
                  <!-- Jasmine (jasmine-rhino.js) needs pure Rhino because 
                       JDK-Rhino does not define JavaAdapter. -->
                  <attribute name="options" />
                  <attribute name="ignoredGlobalVars" />
                  <attribute name="haltOnFirstFailure" />
                  <attribute name="jasmineSpecRunnerPath" />
                  <attribute name="testOutputDir" />
                  <element name="fileset" type="fileset" />
                </scriptdef>

                <jasmine options="{verbose:true}"
                  testOutputDir="target/surefire-reports" haltOnFirstFailure="false">
                  <fileset dir="test" includes="**/*Spec.js" />
                </jasmine>

      <!-- ... other plugins ... -->


A couple of things to note here:
  • I couldn't find the js-engine.jar in Maven Central. Fortunately it was available in the Mule repo at codehaus.org.
  • I added the testOutputDir as a configuration attribute for where the test results go.
  • No setup whatsoever required, no platform specific binaries needed, if you can run Maven you can run these Jasmine tests.
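For reference, the dependency section of the pom might look roughly like the following. The rhino:js coordinates are in Maven Central, while the js-engine coordinates are my best recollection of the Mule-repository artifact, so verify both before use:

```xml
<dependencies>
  <!-- The original Rhino implementation, which still contains JavaAdapter -->
  <dependency>
    <groupId>rhino</groupId>
    <artifactId>js</artifactId>
    <version>1.7R2</version>
    <scope>test</scope>
  </dependency>
  <!-- Registers the 'rhino-nonjdk' scripting language.
       Coordinates are an assumption for the Mule-repository artifact. -->
  <dependency>
    <groupId>javax.script</groupId>
    <artifactId>js-engine</artifactId>
    <version>1.0</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```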
When I run it, it looks like this:

$ mvn test
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building jasmine-maven-example 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
 T E S T S
[INFO] ...
[INFO] --- maven-antrun-plugin:1.7:run (default) @ jasmine-maven-example ---
[INFO] Executing tasks

  [jasmine] Spec: main/js/RomanNumeralsSpec.js
  [jasmine] Tests run: 7, Failures: 0, Errors: 0
[INFO] Executed tasks
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.551s

Of course, the build fails when a test fails, and the test reports can be processed using anything that can process JUnit test reports, such as mvn surefire-report:report

I find it pretty handy. A minimal project that does this that you can try yourself is available here: coderthoughts/jasmine-maven

So it's a little different from the jasmine-maven-plugin in that it doesn't fork a browser and is hence a bit faster. It should be possible to speed it up even further by writing a proper Maven plugin for it...
It's more of a pure unit-testing environment, whereas the jasmine-maven-plugin is closer to a system-test setup...
And of course, thanks again to Code Cop for providing an excellent starting point for this stuff.

Thursday, October 3, 2013

JMX role-based access control for Karaf

Recently I worked on adding role-based access control to Karaf management operations. This work is split into two parts: one part focuses on adding role-based access to JMX. Another part focuses on the Karaf shell/console. In this post I'm looking at how JMX access is secured.

JMX plays an important role in Karaf as a remote management mechanism. A number of management clients are built on top of JMX, hawtio probably being the most popular one right now. While hawtio uses JMX through Jolokia, which exposes the JMX API over a REST interface, other clients use JMX locally (e.g. via JConsole) or over a remote connector.

Most functionality available in Karaf can be managed via MBeans, but up until now it suffered from one issue: there was really only one level of access. If you were given access rights, you had access to all the MBeans. It was not possible to give users access to certain areas in JMX while restricting access to other areas.

Role-based Access Control

With commit r1528587 my JMX role-based access control has been added to Karaf trunk (extra kudos and thanks to Jean-Baptiste Onofré for additional testing, finding a number of bugs, fixing those and actually applying the commits!). It means that an administrator can now declare the roles required to access certain Karaf MBeans. And it also applies to MBeans registered outside of Karaf but running in the same MBean server, so JRE-provided MBeans and MBeans coming from OSGi bundles installed on top of Karaf are also covered.

How does it work?

It works by inserting a JMX Guard, which is configured via a JVM-wide MBeanServerBuilder. The Karaf launching scripts are updated to contain the following argument: -Djavax.management.builder.initial=org.apache.karaf.management.boot.KarafMBeanServerBuilder
This global JVM-level MBeanServerBuilder calls into an OSGi bundle that contains the JMX Guard for each JMX invocation made. The Guard in turn looks up the ACL of the accessed MBean in the OSGi Configuration Admin Service and checks the roles required for this MBean against the RolePrincipal objects present in the Subject in the current AccessControlContext. If no matching role is present, the JMX invocation is blocked with a SecurityException.
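A minimal sketch of the guard idea in plain JDK code: wrap an MBeanServer in a proxy whose invoke() consults an ACL before delegating. The Counter MBean, the ACL map and the role set are all illustrative; Karaf's real guard hooks in JVM-wide via the MBeanServerBuilder system property and reads its ACLs from Configuration Admin:

```java
import java.lang.reflect.Proxy;
import java.util.*;
import javax.management.*;

public class JmxGuardDemo {
  public interface CounterMBean {
    int getCount();
    void purge();
  }

  public static class Counter implements CounterMBean {
    public int getCount() { return 42; }
    public void purge() {}
  }

  // Roles of the current user (Karaf gets these from the JAAS Subject).
  public static final Set<String> CURRENT_ROLES = new HashSet<>();

  // ACL: operation name -> roles permitted to invoke it.
  public static final Map<String, Set<String>> ACL = new HashMap<>();
  static {
    ACL.put("purge", Collections.singleton("admin"));
  }

  // Wrap an MBeanServer so invoke() checks the ACL before delegating.
  // Operations without an ACL entry are denied here (default deny).
  public static MBeanServer guard(final MBeanServer delegate) {
    return (MBeanServer) Proxy.newProxyInstance(
        MBeanServer.class.getClassLoader(),
        new Class<?>[] { MBeanServer.class },
        (proxy, method, args) -> {
          if (method.getName().equals("invoke")) {
            String operation = (String) args[1];
            Set<String> permitted =
                ACL.getOrDefault(operation, Collections.emptySet());
            if (Collections.disjoint(permitted, CURRENT_ROLES))
              throw new SecurityException("Insufficient credentials: " + operation);
          }
          return method.invoke(delegate, args);
        });
  }

  public static void main(String[] args) throws Exception {
    MBeanServer guarded = guard(MBeanServerFactory.newMBeanServer());
    ObjectName name = new ObjectName("test:type=Counter");
    guarded.registerMBean(new Counter(), name);

    CURRENT_ROLES.add("viewer");
    System.out.println(guarded.getAttribute(name, "Count")); // attribute reads pass through
    try {
      guarded.invoke(name, "purge", new Object[0], new String[0]);
    } catch (SecurityException e) {
      System.out.println("purge denied for viewer");
    }
  }
}
```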

How can I define my ACLs?

The Access Control Lists are stored in OSGi Configuration Admin. This means that they can be defined in whatever way the currently configured Config Admin implementation stores its information, which could be a database, nosql, etc... In the case of Karaf this information is normally stored in the etc/ directory in .cfg text files. The file name (excluding the .cfg extension) represents the Config Admin PID. JMX ACLs are mapped to Config Admin PIDs by prefixing them with jmx.acl. Then the Object Name as it appears in the JConsole tree is used to identify the MBean. So the ActiveMQ QueueA MBean as in the screenshot below would map to the PID jmx.acl.org.apache.activemq.Broker.amq-broker.Queue.QueueA
The 'purge' operation is denied if the user does not have the required role
However, having to write a configuration file for every MBean isn't really that user-friendly. It would be nice if we could define this stuff at a slightly higher level. Therefore the code that looks up the ACL PIDs follows a hierarchical approach. If it cannot find any matching definitions for the operation invoked under the ...QueueA PID, it goes up the tree and looks for definitions in jmx.acl.org.apache.activemq.Broker.amq-broker.Queue, then jmx.acl.org.apache.activemq.Broker.amq-broker, and so on. So if you want to specify an ACL for all queues on all ActiveMQ brokers you can do this in the jmx.acl.org.apache.activemq.Broker.cfg file. For example:
  browse*          = manager, viewer
  getMessage       = manager, viewer
  purge            = admin
  remove*          = admin
  copy*            = manager
  sendTextMessage* = manager
Note that this example uses wildcards for method names, so browse* covers browse(), browseAsTable() and browseMessages(). Additionally, even though the admin role has access to all APIs, it's not explicitly listed everywhere. This is not because the admin role is special, but because administrators are expected to be part of the admingroup, which has all the roles in the system.
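The wildcard matching itself is straightforward; here is a sketch (illustrative, not Karaf's actual matching code) where a trailing * in an ACL key acts as a prefix wildcard:

```java
import java.util.*;

public class JmxAclMatchDemo {
  // Collect the roles for an invoked operation: an ACL key ending in '*'
  // matches any operation name starting with the part before the '*'.
  public static Set<String> rolesFor(Map<String, Set<String>> acl, String operation) {
    Set<String> roles = new HashSet<>();
    for (Map.Entry<String, Set<String>> e : acl.entrySet()) {
      String key = e.getKey();
      boolean match = key.endsWith("*")
          ? operation.startsWith(key.substring(0, key.length() - 1))
          : operation.equals(key);
      if (match) roles.addAll(e.getValue());
    }
    return roles;
  }

  public static void main(String[] args) {
    Map<String, Set<String>> acl = new LinkedHashMap<>();
    acl.put("browse*", new HashSet<>(Arrays.asList("manager", "viewer")));
    acl.put("purge", Collections.singleton("admin"));

    System.out.println(rolesFor(acl, "browseMessages")); // manager and viewer
    System.out.println(rolesFor(acl, "purge"));          // admin only
    System.out.println(rolesFor(acl, "sendTextMessage")); // no roles: denied
  }
}
```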


To keep the ACLs manageable I used the concept of JAAS groups. Typically you want to give an administrator access to everything in the system, but it's very cumbersome (and ugly) to add 'admin' to every single ACL definition in the system. Therefore the idea is that an administrator is not directly assigned the admin role, but is rather added to the admingroup. This group then has all the roles defined in the system. And no, it's not magic. If you decide to add a new group then the admingroup needs to be updated. Here's what the definition of some users might look like:
  karaf@root()> jaas:realm-manage --realm karaf
  karaf@root()> jaas:user-list
  User Name | Group        | Role
  karaf     | admingroup   | admin
  karaf     | admingroup   | manager
  karaf     | admingroup   | viewer
  joe       | managergroup | manager
  joe       | managergroup | viewer
  mo        |              | viewer

So in this example, the karaf user is in the admingroup and because of that has the roles admin, manager and viewer.

Default Configuration

There is default configuration that applies to any MBean if it doesn't have specific configuration. This can be found at the top of the hierarchy in the jmx.acl.cfg file:
  list* = viewer
  get*  = viewer
  is*   = viewer
  set*  = admin
  *     = admin
So the default is that any operation on any MBean starting with 'list', 'get' or 'is' is assumed to be an operation that you only need the viewer role for, while set* or any other operation name requires the admin role by default. This also maps well to MBeans that define JMX attributes. Obviously these defaults don't apply if a more specific definition for the MBean can be found...
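The hierarchical lookup described above can be sketched as a simple walk from the most specific PID up to the jmx.acl default (illustrative, not Karaf's actual code):

```java
import java.util.*;

public class JmxAclPidDemo {
  // Candidate ACL PIDs for an MBean path, most specific first,
  // ending with the jmx.acl default at the top of the hierarchy.
  public static List<String> candidatePids(String mbeanPath) {
    List<String> pids = new ArrayList<>();
    String current = mbeanPath;
    while (!current.isEmpty()) {
      pids.add("jmx.acl." + current);
      int idx = current.lastIndexOf('.');
      current = idx < 0 ? "" : current.substring(0, idx);
    }
    pids.add("jmx.acl");
    return pids;
  }

  public static void main(String[] args) {
    // Walks from the QueueA PID all the way up to jmx.acl.
    for (String pid : candidatePids("org.apache.activemq.Broker.amq-broker.Queue.QueueA"))
      System.out.println(pid);
  }
}
```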

Redefine to suit

While the Karaf distro comes with some predefined configuration in the form of jmx.acl.**.cfg files, it may be that this doesn't map 100% to the roles used in your organization. Therefore all of this can be changed by the administrator. Nothing is hard-coded, so feel free to add new roles, new groups and new ACLs to suit your organizational structure.

ACL definition details

The ACL examples in this posting are on the method level, but in some cases you want to define roles based on the arguments being passed into the operation. For example, you might need admin rights to uninstall a Karaf system bundle, but maybe the manager role is enough to uninstall other bundles. Therefore you can define roles based on arguments passed into the JMX operation, either as literal arguments or using regular expressions. For more information on this, see the original commit message on GitHub.

What MBeans can I use?

If you're writing a rich client or other tool over JMX it can be nice to know in advance whether the current user can invoke certain operations or not. It allows the tool to only show the relevant widgets (buttons, menus etc) if it's actually possible to use the associated MBeans. For this use-case I added an MBean org.apache.karaf:type=security,area=jmx that has a number of canInvoke() operations. It allows you to check whether the currently logged in user can invoke any methods on a given MBean at all, or whether it can invoke a certain method. There is also a bulk query operation that lets you check a whole bunch of operations in one go.

The nice thing about this approach is that the client doesn't need to know anything about how the roles are mapped by the administrator. It simply checks whether the currently logged in user has the appropriate roles for the operations requested. This means that if the administrator decides to revamp the whole role-mapping on the back-end, the client console will automatically adapt: no duplication of information or hard-coded role names needed.

For more details about the canInvoke() method see: https://github.com/bosschaert/karaf/blob/f793e70612c47d16a95ef12287514c603613f2c0/management/server/src/main/java/org/apache/karaf/management/JMXSecurityMBean.java

Changing permissions at Runtime

As with nearly everything in OSGi, the Configuration Admin service is dynamic, which means that you can change the information at runtime. You can change the role mappings while the system is running, even for a user that is logged in. You can add or take away privileges dynamically; for example, if a trusted user is all of a sudden causing havoc, you can remove the rights associated with that user's roles and stop any further damage instantly.

What's next?

I am also working on implementing RBAC for Karaf shell/console commands and will write another post about that when available on trunk.

Friday, April 19, 2013

Using OSGi Subsystems to deploy your Applications

One of the major new specs in the OSGi R5 Enterprise release is the Subsystem specification. While this spec itself is quite large and covers a number of angles and use-cases, I find the simplest way to explain Subsystems is really as something like application deployment for OSGi, where an application is comprised of a number of bundles.

OSGi always encourages modular development, and during development this is indeed great, because you can focus on the module at hand and have clear visibility of the impact of any changes that you make. However, once you want to deploy your application of 150 bundles this may become a little bit complicated - you certainly don't want to hand the person performing the deployment 150 different files to deploy. You want to put them together in some way or form. This has caused many projects to come up with their own solutions: Karaf and Eclipse have features, Eclipse Virgo has plans and Apache Aries has applications. The OSGi Subsystem specification now provides a standard for combining a number of bundles into a single deployable (an .esa file), which means that an .esa file can be deployed in any compliant Subsystem implementation.
I am really happy that the good people at Apache Aries have recently released version 1.0 of the Aries Subsystem implementation, so you can now use this without having to build an implementation yourself.

I'm going to look at how to create and use Subsystems later in this post, but first let's get an OSGi framework set up with Subsystems. The Aries implementation with its dependencies consists of 15 bundles. In my example below I'm using Equinox as the OSGi framework, but it should obviously work just as well on any other OSGi R5 compliant framework.

Setting up the Subsystem infrastructure
I'm going to be using the following:
  • Equinox 3.8.2 (which comes with Eclipse 4.2.2 or 3.8.2)
  • The gogo shell (which comes with Equinox and also with Felix)
  • The Aries Subsystem 1.0 implementation
  • dependencies of the above...
Let's start by getting Equinox up and running with the shell. Add the following bundles to your Equinox runtime; these all come shipped with Eclipse, so if you're working in Eclipse you can simply select them in the 'OSGi Framework' launch configuration:


Now add the following bundles to install the Aries Subsystem implementation (the links below can be used to download them from Maven Central):

I'll leave it to the reader to find a convenient way to install all these (see also the comments section with a note about how to install this on a framework other than Equinox; you need an extra bundle). You can do it with a script, using a repository, etc. There is also a subsystem-bundle artifact that may help. In any case, this alone validates one of the key reasons why Subsystems were designed in the first place: if you have an application that is formed by a number of bundles, you really want a nice and convenient way to deploy them. Once we have a subsystem implementation in place we can do exactly that, and start deploying large applications that consist of many bundles by simply deploying a single Subsystem archive file.

With the above bundles started the Subsystem Service is registered and ready to deploy subsystems:
osgi> services (objectClass=*Subsystem)
  {subsystem.id=0, subsystem.state=ACTIVE, subsystem.version=1.0.0,
   subsystem.symbolicName=org.osgi.service.subsystem.root, ...}
  "Registered by bundle:" org.apache.aries.subsystem.core_1.0.0 [6]

There is one issue though: we have no tool yet that utilizes the subsystem service so we can interact with it. It would be really nice if we could add a command to the OSGi console to do this. Using the extensible Gogo command shell this is child's play. So let's add a few subsystem commands.

Add some Subsystem commands to Gogo
Gogo is becoming the de-facto standard for shell commands in an OSGi framework. It's used by Equinox, Felix, Karaf and other OSGi distributions these days. Gogo is extensible and adding a few new commands to it is as simple as registering an OSGi service.

I created a bundle with only a single class, the activator, which provides the following commands:
  subsystem:install <url>
  subsystem:uninstall <id>
  subsystem:start <id>
  subsystem:stop <id>
  subsystem:list

import java.io.IOException;
import java.net.URL;
import java.util.*;
import org.osgi.framework.*;
import org.osgi.service.subsystem.Subsystem;

public class Activator implements BundleActivator {
  private BundleContext bundleContext;

  public void start(BundleContext context) throws Exception {
    bundleContext = context;
    Dictionary<String, Object> props = new Hashtable<String, Object>();
    props.put("osgi.command.function",
      new String [] {"install", "uninstall", "start", "stop", "list"});
    props.put("osgi.command.scope", "subsystem");
    context.registerService(getClass().getName(), this, props);
  }

  public void install(String url) throws IOException {
    Subsystem rootSubsystem = getSubsystem(0);
    Subsystem s = rootSubsystem.install(url, new URL(url).openStream());
    System.out.println("Subsystem successfully installed: " +
      s.getSymbolicName() + "; id: " + s.getSubsystemId());
  }

  public void uninstall(long id) {
    getSubsystem(id).uninstall();
  }

  public void start(long id) {
    getSubsystem(id).start();
  }

  public void stop(long id) {
    getSubsystem(id).stop();
  }

  public void list() throws InvalidSyntaxException {
    for (ServiceReference<Subsystem> ref :
         bundleContext.getServiceReferences(Subsystem.class, null)) {
      Subsystem s = bundleContext.getService(ref);
      if (s != null) {
        System.out.printf("%d\t%s\t%s\n",
          s.getSubsystemId(), s.getState(), s.getSymbolicName());
      }
    }
  }

  private Subsystem getSubsystem(long id) {
    try {
      for (ServiceReference<Subsystem> ref :
           bundleContext.getServiceReferences(Subsystem.class, "(subsystem.id=" + id + ")")) {
        Subsystem svc = bundleContext.getService(ref);
        if (svc != null)
          return svc;
      }
    } catch (InvalidSyntaxException e) {
      throw new RuntimeException(e);
    }
    throw new RuntimeException("Unable to find subsystem " + id);
  }

  public void stop(BundleContext context) throws Exception {}
}

I shared a bundle that contains this command, you can get it from here: http://coderthoughts.googlecode.com/files/subsystem-gogo-command-1.0.0.jar

Once the above bundles are installed and everything is started I have the following bundles in my framework:
0  ACTIVE org.eclipse.osgi_3.8.2.v20130124-134944
1  ACTIVE org.eclipse.osgi.services_3.3.100.v20120522-1822
2  ACTIVE org.apache.felix.gogo.runtime_0.8.0.v201108120515
3  ACTIVE org.apache.felix.gogo.shell_0.8.0.v201110170705
4  ACTIVE org.eclipse.equinox.console_1.0.0.v20120522-1841
5  ACTIVE org.apache.aries.subsystem.api_1.0.0
6  ACTIVE org.apache.aries.subsystem.core_1.0.0
7  ACTIVE org.apache.aries.subsystem.obr_1.0.0
8  ACTIVE org.apache.aries.application.api_1.0.0
9  ACTIVE org.apache.aries.application.modeller_1.0.0
10 ACTIVE org.apache.aries.application.utils_1.0.0
11 ACTIVE org.apache.aries.blueprint_1.1.0
12 ACTIVE org.apache.aries.proxy_1.0.1
13 ACTIVE org.apache.aries.util_1.1.0
14 ACTIVE org.apache.felix.bundlerepository_1.6.6
15 ACTIVE org.apache.felix.resolver_1.0.0
16 ACTIVE org.eclipse.equinox.coordinator_1.1.0.v20120522-1841
17 ACTIVE org.eclipse.equinox.region_1.1.0.v20120522-1841
18 ACTIVE slf4j.api_1.7.5, Fragments=19
19 RESOLV slf4j.simple_1.7.5, Master=18
20 ACTIVE org.osgi.service.subsystem.region.context.0_1.0.0
21 ACTIVE subsystem-gogo-command_1.0.0

Note that bundle 20 is a synthesized bundle created automatically by the subsystem implementation. We can safely ignore it.

Now I can start doing something. Let's list the available subsystems using our new command from the subsystem-gogo-command bundle:
osgi> subsystem:list
0 ACTIVE org.osgi.service.subsystem.root 

At this point there is only a single subsystem: the root one.

Working with Subsystems
Let's create some sample subsystems to look at what you can do.

I'm going to create two basic subsystems that should allow us to play with it. The subsystem specification defines a number of different subsystem types. In this post I will be looking at the feature subsystem type, which deploys all the bundles from the subsystem in a shared space, as if you had installed all the bundles in a plain framework. (Note: other subsystem types provide isolation between subsystems.)
Subsystem archives typically use the .esa file extension. Both my example subsystems contain three bundles. The subsystem1.esa file contains Bundle A, Bundle B and a bundle called Shared Bundle. subsystem2.esa contains Bundle C, Bundle D and the same Shared Bundle. Both subsystems package the Shared Bundle because they both depend on it, so in order to get a fully working system for either subsystem I need that Shared Bundle. However, since these are feature subsystems, where everything is shared, I only need the Shared Bundle deployed once.

Creating a subsystem file is pretty easy. The .esa file is really just a zip file that contains the embedded bundles in the root. Additionally it contains a subsystem manifest. I created mine simply using the jar command, but you can also use tools such as the esa-maven-plugin. Here's what you'll find inside:

$ jar tvf subsystem1.esa
    99 Fri Apr 19 08:34:08 IST 2013 OSGI-INF/SUBSYSTEM.MF
  1181 Fri Apr 19 08:33:06 IST 2013 BundleA_1.0.0.jar
  1058 Fri Apr 19 08:33:06 IST 2013 BundleB_1.0.0.jar
   906 Fri Apr 19 08:33:06 IST 2013 SharedBundle_1.0.0.jar

As you can see the zip file contains the relevant bundles in the root plus a subsystem manifest. Here's what the SUBSYSTEM.MF file in subsystem1.esa looks like:
  Subsystem-SymbolicName: subsystem1
  Subsystem-Version: 1.0.0
  Subsystem-Type: osgi.subsystem.feature
It looks a bit like a Bundle Manifest. Most of the information in there is optional...
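Since an .esa really is just a zip archive, one can even produce it from plain Java. Below is a minimal, hedged sketch; the EsaWriter class and its method are mine, not part of any tooling, and a real build would more likely use the jar command or the esa-maven-plugin:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class EsaWriter {
    // Write a minimal .esa: a zip containing OSGI-INF/SUBSYSTEM.MF plus the
    // embedded bundle jars in the root of the archive.
    public static void write(Path esaFile, String manifest, Path... bundles) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(esaFile.toFile()))) {
            zos.putNextEntry(new ZipEntry("OSGI-INF/SUBSYSTEM.MF"));
            zos.write(manifest.getBytes(StandardCharsets.UTF_8));
            zos.closeEntry();
            for (Path bundle : bundles) {
                // Embedded bundles go into the root of the archive
                zos.putNextEntry(new ZipEntry(bundle.getFileName().toString()));
                zos.write(Files.readAllBytes(bundle));
                zos.closeEntry();
            }
        }
    }
}
```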

The subsystem2.esa file is very similar. You can download the sample subsystem files from here: subsystem1.esa and subsystem2.esa.

Let's deploy a subsystem:
  osgi> subsystem:install http://coderthoughts.googlecode.com/files/subsystem1.esa
  Subsystem successfully installed: subsystem1; id: 1

If we list the bundles, we can see that the three bundles from subsystem1.esa have been added:
  22 INSTALLED SharedBundle_1.0.0
  23 INSTALLED BundleA_1.0.0
  24 INSTALLED BundleB_1.0.0

Now let's start the subsystem:
  osgi> subsystem:start 1
  22 ACTIVE SharedBundle_1.0.0
  23 ACTIVE BundleA_1.0.0
  24 ACTIVE BundleB_1.0.0
This is pretty handy: starting the subsystem will start all of the bundles that it contains!

Let's add the other subsystem:
  osgi> subsystem:install http://coderthoughts.googlecode.com/files/subsystem2.esa
  Subsystem successfully installed: subsystem2; id: 2
  osgi> subsystem:start 2

Now both subsystems are active:
  22 ACTIVE SharedBundle_1.0.0
  23 ACTIVE BundleA_1.0.0
  24 ACTIVE BundleB_1.0.0
  25 ACTIVE BundleC_1.0.0
  26 ACTIVE BundleD_1.0.0
And we can see that the SharedBundle was only deployed once, because it could be shared across subsystems.

You can also list all the subsystems known to the system:
  osgi> subsystem:list
  0 ACTIVE org.osgi.service.subsystem.root
  1 ACTIVE subsystem1
  2 ACTIVE subsystem2

Another interesting aspect is how stopping and uninstallation work, especially in relation to the SharedBundle. I'll leave it as an exercise for the reader, but you can see that the Subsystems implementation keeps track of the bundle sharing. If you only stop subsystem1, the SharedBundle will remain ACTIVE. Only when both subsystems that use the bundle are stopped will it move to the RESOLVED state. Uninstalling works similarly. When you uninstall subsystem1, BundleA and BundleB will be uninstalled, but the SharedBundle won't be, as it is still being used by subsystem2. Only when subsystem2 is uninstalled as well are all of the bundles associated with subsystem1 and subsystem2 uninstalled.

There is a lot more to talk about in relation to subsystems. For example, subsystems don't have to actually embed their dependencies. They can also download them from an OSGi Repository service. In that case your .esa file can be limited to just a SUBSYSTEM.MF that lists your root application bundles. The subsystem implementation can also use the Repository service to automatically find transitive dependencies.
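For illustration, the manifest of such a dependency-free archive might look along these lines. The Subsystem-Content header lists the root bundles to provision; the bundle names and version ranges below are made up for the example:

```
Subsystem-SymbolicName: subsystem1
Subsystem-Version: 1.0.0
Subsystem-Type: osgi.subsystem.feature
Subsystem-Content: BundleA;version="[1.0.0,1.1.0)",
 BundleB;version="[1.0.0,1.1.0)"
```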

In my little example the subsystems only contain three bundles each, but .esa files become really handy when your application grows large and contains tens or hundreds of bundles. You can even nest them, so subsystems can contain other subsystems, becoming building blocks of higher-level subsystems.

OSGi Subsystems should make the distribution and deployment of larger OSGi applications much easier. The .esa file provides a portable format which allows you to hand your users a single artifact to deploy, regardless of how many bundles your application is made up of.

For more information about OSGi Subsystems see chapter 134 of the OSGi R5 Enterprise specification: http://www.osgi.org/Download/Release5

Monday, March 11, 2013

HTML5 video fun

I started playing with HTML5 and found that it contains some very cool stuff. Take for example the native ability to play video with the <video> tag.

But your browser can do more natively these days - it can access your computer's webcam too! In fact, connecting to your webcam and streaming it to a video tag in an HTML page can be done in only a few lines of HTML with some embedded JavaScript!

Take a look at the following small html page:
    <html>
    <head>
    <title>HTML5 Video with no plugins!</title>
    <script type="text/javascript">
      function setup() {
        navigator.myGetMedia = (navigator.getUserMedia ||
          navigator.webkitGetUserMedia ||
          navigator.mozGetUserMedia ||
          navigator.msGetUserMedia);
        navigator.myGetMedia({video: true}, connect, error);
      }

      function connect(stream) {
        var video = document.getElementById("my_video");
        video.src = window.URL ? window.URL.createObjectURL(stream) : stream;
        video.play();
      }

      function error(e) { console.log(e); }

      addEventListener("load", setup);
    </script>
    </head>
    <body>
    <header><h1>HTML5 Video with no plugins!</h1></header>
    <video id="my_video"></video>
    </body>
    </html>

When the page is loaded (on the load event) the setup() function is executed. This connects to your webcam through the navigator.getUserMedia() API. Currently browsers still ship this API under various vendor prefixes. When the WebRTC spec is finalized they will most likely all be unified to getUserMedia (without any prefix), which will take another 5 lines out of the above code. Once connected to the webcam, the script obtains a URL for this connection and sets it as the source of the video tag. That's it!
Audio works similarly too so you can also create a combined audio/video stream.

Running the webpage shows what your webcam sees, right on the browser page:

And, it works on my Android phone too!

You can try it yourself or you can launch the page from here.

This opens up some really interesting possibilities. For example, the people from html5videoguide have used this to create a video conferencing system, powered by your browser and a few lines of server-side JavaScript. You can use these APIs to record video or audio, or turn your browser into a camera app! Combine them with the new canvas APIs and you can do photo editing too! All right there in your browser, all without the need for additional plugins.

The code above still contains some variations to cater for slight differences between browsers; these should all be resolved as soon as the WebRTC spec goes final.

Thursday, January 3, 2013

A mobile device photo organizer (using OSGi)

When people think about OSGi applications they often think of complex server-side applications or embedded programs running as part of a set-top box of some sort. Or they think of Eclipse-based RCP applications, which are also based on OSGi.

When I started writing a little program to organize photos from the various mobile devices in my home, it was unlike many of those applications: it was small, it had a (Swing-based) GUI and it wasn't running on the server side. Still, using OSGi helped me enormously with the development. In this blog post I will discuss how.

The application
I'm sure I'm not the only person whose house is full of devices that can take photos. I like to store these pictures centrally on a NAS drive where they are neatly organized. Most devices come with a photo management solution of some sort, but these often only work with one particular device and not with others, and they all store the photos on disk in different ways. On top of that, I want to keep the photos in a directory structure of the form year/date-taken (the date the photo was taken, not the date it was copied). All in all I couldn't find a solution that did this the way I wanted, so I started a little tool for this over the holiday season.
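To make the directory scheme concrete, here is a tiny self-contained sketch of mapping a photo's date-taken to its target directory. The class and method names are mine, not taken from the actual project:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class PhotoPaths {
    // Compute the relative target directory for a photo, e.g. "2013/2013-04-19".
    // The date passed in is the date the photo was taken, not the file date.
    public static String targetDir(Date dateTaken) {
        return new SimpleDateFormat("yyyy").format(dateTaken) + "/"
            + new SimpleDateFormat("yyyy-MM-dd").format(dateTaken);
    }
}
```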

While most of the actual work is done by headless processing logic, I needed a little GUI to kick it off. While there are tons of options available, I chose to create a simple Java Swing GUI, made to look nice with one of the awesome look and feels from the JTattoo guys.

While the project itself (which is available here) has the ASL2 license, I am using a variety of libraries to extract information from photo files, video files and for example to access my Android device, because I want to be able to copy photos directly from that.

I need modularization
Most of the libraries that I'm using have a license compatible with my ASL2 license, but I really wanted to access my Android phone directly. Android phones, as well as other mobile devices, cannot easily be mounted as a directory on the file system; they need to be accessed using MTP. I found a library that allowed me to access my phone from Java: jusbpmp. It worked fine for the most part, but there was one issue: it's GPL-licensed. I don't want to get into which open source license is better, but the viral effects of the GPL are well known and I had already decided that my application was to be ASL2-licensed. I didn't want to relicense my whole project because one dependency has this other license.

If I could single out the functionality that uses this library, license just that piece under the GPL (as required), and plug it into the rest of my application, that would limit the amount of code that has to carry the GPL license. Ideally that component would be loosely coupled, so that it can be downloaded separately and plugged into the main application.

OSGi Bundles to the rescue. And in particular OSGi Services! OSGi Services use a contribution model: service implementations are contributed to the Service Registry, and consumers find them there. It all works by defining the service API in a separate module through which both sides communicate.
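The pattern can be illustrated without any OSGi machinery at all. The toy registry below is entirely made up and far simpler than the real service registry, but it shows the contribute-and-discover idea: providers register implementations of a shared API and consumers look them up without knowing who contributed them (in real OSGi this is BundleContext.registerService() and getServiceReferences()):

```java
import java.util.ArrayList;
import java.util.List;

// A toy stand-in for the OSGi service registry. Providers contribute
// implementations of a shared API; consumers discover them later
// without compile-time knowledge of the implementations.
public class ToyRegistry {
    public interface PhotoSource { String getLabel(); }

    private static final List<PhotoSource> services = new ArrayList<>();

    // A provider bundle would call this to contribute its implementation.
    public static void register(PhotoSource s) { services.add(s); }

    // A consumer asks for all contributed implementations.
    public static List<PhotoSource> lookup() { return new ArrayList<>(services); }
}
```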

The MTP library allows me to access the files on my Android phone. So I started by defining an API module with an Iterable that can serve up photo entries from any type of source. The PhotoIterable interface can represent a file-directory-based photo store, but also one backed by an MTP device or some other source. It looks somewhat like this:

public interface PhotoIterable extends Iterable<PhotoIterable.Entry> {
  /**
   * Get a human-readable identification of the location.
   * @return The location string.
   */
  String getLocationString();

  /**
   * This interface represents a photo object that can be read.
   */
  public interface Entry {
    /**
     * Returns the file name to use for the photo.
     * @return The file name without path information, for example IMG_01429.JPG
     */
    String getName();

    /**
     * @return The stream to read the photo bytes from.
     */
    InputStream getInputStream();
  }
}
Now I need a way to obtain such a PhotoIterable. Typically the user wants to select a location on the device to copy the photos from (instead of getting all the image files on the device), and for this I defined the PhotoSource interface. This is what I will register in the OSGi Service Registry. For every supported type of source a corresponding service is registered. The core bundle ships with one for loading photos from a file system directory, and the phototools.mtp bundle registers one to handle MTP devices.

public interface PhotoSource {
  /**
   * The label for this source, for example 'File System Directory' or 'Android'.
   * @return The label to use.
   */
  String getLabel();

  /**
   * Calling this method should open a selection window where the user can select where the
   * photos are to be copied or downloaded from.
   * @return A PhotoIterable to obtain photos from the selected location.
   */
  PhotoIterable getPhotoIterable();
}

At this stage I have 3 bundles: the API bundle, the core implementation bundle and the MTP implementation bundle.

This separation is nice because:
  • The API bundle is small. Anyone who wants to write support for another mobile device has only a very small number of interfaces to look at. No distracting implementation code to get in the way.
  • I can find all the PhotoSource implementations by looking them up in the OSGi Service Registry, for example with bundleContext.getServiceReferences(PhotoSource.class, null)
  • I can contribute support for additional devices without changing the rest of the code. Just add the bundle that contains the support (and registers the PhotoSource service) and it will appear (even the GUI will react to this).
  • I didn't have to write my own plugin mechanism. OSGi Bundles and Services provide that to me.
  • I could isolate the functionality that depends on a GPL library in a separate bundle. This means that the main application is still ASL2 licensed. The GPL-based bundle is optional and can be provided separately if I want to create a pure ASL2-based product.
The PhotoSource that the MTP provider bundle contributes is visible as a widget on the main screen (the Mobile Device via USB radio button). When I click Select I can see the custom MTP selection GUI in action:

More OSGi Bundles and Services to keep things clean
While the phototools.core bundle can deal with extracting metadata from some of the photo formats (thanks to Drew's metadata-extractor), I also want my application to handle movie files. I found another nice library that could handle mp4 files for me: Sebastian's mp4parser. Although mp4parser is ASL2-licensed, I didn't want to include it in the core bundle, which was already getting fairly heavy on embedded libraries; besides, a number of photo/video formats would still be unsupported. Getting the core bundle to support them all didn't seem the right thing to do. Allowing separate bundles to contribute a format handler did! So I defined an additional Service API:

public interface PhotoMetadataProvider {
  /**
   * Get metadata for a photo or video file.
   * @param f The file to process.
   * @return The metadata found.
   */
  Metadata getMetaData(File f);

  public interface Metadata {
    /**
     * Obtain the date the photo was taken (not necessarily the file date).
     * @return The date taken.
     */
    Date getDateTaken();

    /**
     * Obtain a small preview file for the photo or movie. If available,
     * the preview file will always be a JPEG file.
     * @return The preview file.
     */
    File getPreviewFile();
  }
}

I can find an appropriate metadata provider for my photos by looking up one from the OSGi Service Registry that is registered for the relevant extension. This is done using OSGi service registration properties: each PhotoMetadataProvider registers the formats it can handle in the format property. Then I can look one up by querying on the extension of the file I want to process, e.g.:
  Collection<ServiceReference<PhotoMetadataProvider>> refs = bundleContext.getServiceReferences(
    PhotoMetadataProvider.class, "(format=.jpeg)");
When I find a service that can handle my format, I get it to process the file:
  ServiceReference<PhotoMetadataProvider> sref = refs.iterator().next();
  PhotoMetadataProvider p = bundleContext.getService(sref);
  Metadata metadata = p.getMetaData(myPhotoOrVideoFile);
  Date dateTaken = metadata.getDateTaken();
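The registration side is symmetrical. Below is a hedged, framework-free sketch; the class and helper names are mine. It shows the kind of properties a JPEG provider might register with, and how a consumer could derive the lookup filter from a file name. In OSGi, a filter such as (format=.jpeg) matches a service whose format property is a String[] containing that value:

```java
import java.util.Hashtable;
import java.util.Locale;

public class FormatFilters {
    // Registration properties a JPEG metadata provider might use; in the real
    // bundle these would be passed to BundleContext.registerService().
    public static Hashtable<String, Object> jpegProviderProps() {
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("format", new String[] {".jpg", ".jpeg"});
        return props;
    }

    // Turn a file name such as "IMG_01429.JPEG" into the LDAP-style filter
    // used for the service lookup: "(format=.jpeg)".
    public static String filterFor(String fileName) {
        String ext = fileName.substring(fileName.lastIndexOf('.')).toLowerCase(Locale.ROOT);
        return "(format=" + ext + ")";
    }
}
```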

The mp4 handling code is now nicely separated in its own bundle. In addition, if I ever want to add support for other formats (.AVI for example) I can do this by simply adding another bundle, which drastically reduces the scope of my changes and also reduces the amount of code that I may have to look at.

Conclusion? Well, my little project is not finished yet; it's still a work in progress. But OSGi really helped me by providing a nice plug-in architecture, and its modularity almost forced me to write nice interface-based components which will be easier to maintain in the long run. Because the bundles have a clear scope they tend to be quite small, and when making changes the amount of code you have to look at as a developer is much smaller than it would be in a monolithic application. This is great because I generally only sporadically have time to go back to my hobby projects, and having less code to refresh my brain is good :) Oh, and the fact that I'm using OSGi is completely hidden from the end user. It's really just an architectural choice under the covers.

Happy new year, everyone.