Tuesday, March 25, 2014

Apache Felix Framework 4.4.0 provides full OSGi Core R5 support

Felix Framework 4.4.0 was released today (http://felix.apache.org/news.html) and now provides full OSGi Core R5 support. From a features point of view, it means that the following R5 features are now fully supported:
  • New org.osgi.framework.UnfilteredServiceListener interface
  • New org.osgi.framework.VersionRange class
  • The Resource API and its use in the Wiring API
  • New osgi.identity namespace
  • New value and new default for org.osgi.framework.bsnversion framework property
  • Support for static valueOf methods in the Filter
  • Enhanced Bundle.adapt() to support AccessControlContext
  • Updates to the Bundle Hook Specification (Bundle Collision Hook)
But the main benefit, as I see it, is that we can now run OSGi Subsystems, such as the Apache Aries implementation, on Apache Felix! For more details on that see this blog post: http://coderthoughts.blogspot.com/2014/01/osgi-subsytems-on-apache-felix.html

Tuesday, January 28, 2014

How I learned to stop worrying (about power cuts) and love my Raspberry PI

Like many other hobbyists, I got myself a Raspberry PI a few months ago. It's very cool that such a functional piece of hardware can be so cheap. I'm using mine as a backup server for my photos, documents, videos etc; it is my Samba server, FTP server and, most importantly, an XBMC network server.

I'll outline some of the things that I did to make mine work the way I wanted, but first I'd like to address the biggest issue that I experienced with my Raspberry PI: the fact that it doesn't deal well with power cuts. The PI itself isn't really to blame for that; it's a general issue with Linux and other unix-type operating systems: sudden power losses can make their boot/OS disk unreadable. With Linux running on a laptop you don't really have that issue, as your laptop has a built-in UPS, but my PI doesn't have that. As a result I would often find my device in a state where it wouldn't boot up again, apparently because we had a power blip.
While some companies are starting to sell UPS solutions for the PI, the main issue with those is that the UPS often costs more than the PI itself. If you don't want to spend that extra money, the setup I describe here has given me the ability to survive PI boot disk corruption without much work or cost.

Backup your PI boot SD card

Some people suggested that the best way to protect your PI from boot disk corruption is to make the disk read-only. While certainly a good idea, this doesn't really work for me as I often experiment with my PI and change its installed packages regularly, so I didn't want to restrict myself that much.
What I did instead was use Billw's cloning script to make a backup copy of the main SD card onto a second one. I used an old SD card that I still had lying around and got a USB SD card adapter, which sells on ebay for about £1 including shipping. Using that script I make regular backups of my main SD card onto its clone. When one of those regular power blips hits and the PI doesn't want to start up, I simply swap the SD cards - the one from the USB adapter goes into the main SD card port and vice versa! That allows me to start up the PI again with all my configuration as I had it. Then I simply run Billw's script again to clone the SD card that's still working back onto the corrupted one, at which point I have two working SD cards again!
I've used this mechanism for a few months now and it works pretty well. It's certainly much cheaper than a UPS, and although it's a little bit of work (you need to ensure that the backup SD card stays up-to-date) I found that it works great. You can also automate the cloning so that it happens every night or so, as shown below.
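For example, a cron entry along the following lines runs the clone unattended every night. This is just a sketch: the wrapper script name and log file are made up, so point them at wherever you keep Billw's script and adjust for the device name your backup card shows up as.
  # /etc/crontab entry: clone the running SD card to the card in the USB adapter
  # every night at 3am. Script path and log file are placeholders.
  0 3 * * * root /home/pi/bin/clone-sd-card.sh >> /var/log/sd-clone.log 2>&1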

In addition to the backup script there are a few other things that I tweaked for my PI. They might be a little different than what you're used to on Linux, so I'm outlining them here.

Get a Hard Disk with power-save

If you're planning to use your PI as a file server, like I do, get a USB hard drive that has a power-save feature. This means that you can have your system running 24/7, but there won't be any spinning parts if nobody's using it, and power consumption is very low in that case too. I got this Seagate 2TB one for about £60 and it works great! The only thing you need to do is enable the power-save feature once, which you have to do on a Windows PC using the program that comes on the drive. Once power-save is enabled, you can connect it to the PI and it works perfectly. Note that you do need to sudo apt-get install ntfs-3g to use most pre-formatted USB drives.

Use UUIDs to mount your drives

Another issue that I was experiencing was that my USB drives didn't always get the same disk identifier after a PI reboot. One disk would sometimes be sdb and other times it would be sdc, for example. This is problematic if you use the standard 
  /dev/sda1 /disks/x1 auto noatime 0 0
in your /etc/fstab to mount your drives to a directory, as your mounted directory would sometimes be mounted to one disk and sometimes to the other...
A good solution to that is to mount your drives by UUID instead. If you look in /dev/disk/by-uuid/ you can see the UUIDs of all the available disks. Find the right one and use that to mount your disk. The following makes sure that the same disk is always mounted at /disks/x1:
  /dev/disk/by-uuid/5A22DB2D2DB0CBF /disks/x1 auto noatime 0 0
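To see which UUID belongs to which device, just list that directory; the output below is purely illustrative and your UUIDs and device names will differ:
  $ ls -l /dev/disk/by-uuid/
  lrwxrwxrwx 1 root root 10 Jan 28 09:15 5A22DB2D2DB0CBF -> ../../sdb1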

Rotate the logs

If you're doing a lot of file transfer to and from your PI, the Samba/FTP log files can get pretty big. If you, like me, run off a crappy old 4GB SD card, it might make sense to keep the size of the log files down. Logrotate allows you to do this, and the standard Raspbian distribution comes with it installed. I just tweaked it to keep the log files smaller, by only keeping one week of logs. To do this, edit /etc/logrotate.conf; I'm using these settings:
  # rotate log files weekly
  weekly

  # keep 1 weeks worth of backlogs
  rotate 1

  # create new (empty) log files after rotating old ones
  create

  # compress log files
  compress


Serve XBMC throughout the house

Many people use the PI to run XBMC directly. XBMC is a wonderful media playback application that gives you a really nice interface to your media collection, and a PI connected to an audio/video system can certainly be used for running it. However, I was looking for something different: I wanted to use my video library on any device in the house. Maybe a laptop, maybe our multimedia PC which is connected to the TV, maybe even an Android smartphone. XBMC has features that allow you to do this.

To get this working, you don't actually need XBMC on the Raspberry PI at all! You need the video files on it, a network file access protocol such as Samba, and an installation of MySQL, which is what the XBMC installations on the client computers communicate with to show your video library etc. The whole setup is really nice as it allows you to watch your videos on any device in the house, and it even remembers where you left off if you stopped halfway, so that you can easily continue watching on another computer.

Once you've installed Samba and MySQL for XBMC on the PI, you need to install XBMC on all the client devices that you want to use. There are builds available for most platforms. Next you need to configure XBMC to use MySQL. I actually use different profiles to organize my video files, for example you may have holiday videos in one profile and educational videos in another (or any other separation of your choice ;). You can do this by putting the advancedsettings.xml file in the .../userdata/profile/<Profile Name> directory and naming the database in it to separate the content, like this:
  <advancedsettings>
    <videodatabase>
      <type>mysql</type>
      <host>my_pi_host</host>
      <port>3306</port>
      <user>mysql_username</user>
      <pass>mysql_password</pass>
      <name>my_holiday_movies</name>
    </videodatabase>
  </advancedsettings>
The name tag in the configuration is used to connect to separate MySQL databases running in the same database server. That allows you to serve different media depending on the XBMC profile you're in. In the Holidays profile it lists my holiday videos, in the Education profile it has my educational ones.
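One more thing worth mentioning: the MySQL server on the PI needs an account that the XBMC clients can connect with. Something along these lines should do; it is only a sketch where the user name and password are placeholders matching the mysql_username/mysql_password values in the configuration above, and XBMC creates the databases it needs by itself:
  $ mysql -u root -p
  mysql> CREATE USER 'mysql_username' IDENTIFIED BY 'mysql_password';
  mysql> GRANT ALL ON *.* TO 'mysql_username';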

Finally, when you add videos to your database, make sure to add the directory on the PI using a network protocol that works on all your devices. For example a Samba URL:
  smb://my_pi_host/share/holiday/videos/
And voilà, you will get all your movies on all your XBMC devices.

Ok, so this was a random collection of bits and pieces that I did to get my Raspberry PI to do what I wanted. If you have a nugget of PI goodness, leave a comment to share :)

Monday, January 13, 2014

OSGi Subsytems on Apache Felix

Last year I blogged about running the OSGi Subsystems implementation from Apache Aries. At the time Equinox was the only OSGi framework with a full Core R5 implementation, which is needed in order to run OSGi Subsystems.

With some recent work on the Apache Felix container, it is now very close to supporting the full OSGi R5 specification. One of the first things that I tried with that is running the Aries Subsystems implementation on it. And it works :)

OSGi Subsystems are a great way to package a number of OSGi bundles together, for example if you want to distribute an OSGi-based application. Subsystems provide a really convenient deployment model, without compromising the modularity of your application. For more background on Subsystems, see my earlier post about how to use OSGi Subsystems to deploy your applications.

What I'm doing here is taking the example from that blog post and running it on Apache Felix with Aries Subsystems.

First of all you need the latest and greatest Felix Framework. I always build it as follows:

  svn co http://svn.apache.org/repos/asf/felix/trunk felix
  cd felix/framework
  mvn install 
  cd ../main
  mvn install 

This will give you a framework runtime in the felix/main folder.
Then add the Aries Subsystems implementation and its dependencies to the bundle subdirectory. These correspond to bundles 1 to 8, 10, 11 and 15 to 18 in the framework listing further down.

All of these bundles are available as downloads, except for the Felix Coordinator implementation, as this one is very new. You can just build it from the coordinator directory of your Felix checkout. Or, if you prefer to use released components, you can also use the Equinox implementation org.eclipse.equinox.coordinator_1.1.0.v20120522-1841, but in order to run that on Felix you also need the Equinox supplement bundle org.eclipse.equinox.supplement_1.5.0.v20130812-2109.
Finally, I'm adding my little gogo command bundle as described in my older post, to add subsystem:list, subsystem:install, subsystem:uninstall, subsystem:start and subsystem:stop commands to Gogo. You can also download that as a bundle here: subsystem-gogo-command-1.0.0.jar

Ok, let's start up Felix and look what's inside:
.../felix/main $ java -jar bin/felix 
... some log messages appear ...
____________________________
Welcome to Apache Felix Gogo
g! lb
START LEVEL 1
   ID|State      |Level|Name
    0|Active     |    0|System Bundle (4.3.0.SNAPSHOT)
    1|Active     |    1|Apache Aries Application API (1.0.0)
    2|Active     |    1|Apache Aries Application Modelling (1.0.0)
    3|Active     |    1|Apache Aries Application Utils (1.0.0)
    4|Active     |    1|Apache Aries Blueprint Bundle (1.1.0)
    5|Active     |    1|Apache Aries Proxy Bundle (1.0.1)
    6|Active     |    1|Apache Aries Subsystem API (1.0.0)
    7|Active     |    1|Apache Aries Subsystem Core (1.0.0)
    8|Active     |    1|Apache Aries Util (1.1.0)
    9|Active     |    1|Apache Felix Bundle Repository (1.6.6)
   10|Active     |    1|Apache Felix Configuration Admin Service (1.8.0)
   11|Active     |    1|Apache Felix Coordinator Service (0.0.1.SNAPSHOT)
   12|Active     |    1|Apache Felix Gogo Command (0.12.0)
   13|Active     |    1|Apache Felix Gogo Runtime (0.10.0)
   14|Active     |    1|Apache Felix Gogo Shell (0.10.0)
   15|Active     |    1|Apache Felix Resolver (1.0.0)
   16|Active     |    1|Region Digraph (1.1.0.v20120522-1841)
   17|Active     |    1|slf4j-api (1.7.5)
   18|Resolved   |    1|slf4j-simple (1.7.5)
   19|Active     |    1|Subsystem Gogo Command (1.0.0)
   20|Active     |    1|org.osgi.service.subsystem.region.context.0 (1.0.0)

g! subsystem:list
0 ACTIVE org.osgi.service.subsystem.root

Looking at the bundles, you can see the Gogo command line ones that come with Felix, as well as its Bundle Repo. You can also see bundle 20, which is a synthesised bundle that the Subsystems implementation added and which represents the root subsystem. The subsystem:list command reports that there is one subsystem: the root one.

Now let's look at my example applications again. I have two subsystems, subsystem1 and subsystem2.

Subsystems are a great way to distribute multi-bundle applications, and my two subsystems each contain two specific bundles as well as a shared bundle.
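As a reminder from that post, each subsystem ships as an .esa archive: a zip file with the bundles in its root plus a small subsystem manifest. For example, subsystem1's OSGI-INF/SUBSYSTEM.MF looks like this:
  Subsystem-SymbolicName: subsystem1
  Subsystem-Version: 1.0.0
  Subsystem-Type: osgi.subsystem.feature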

Let's install them and see what happens:

g! subsystem:install http://coderthoughts.googlecode.com/files/subsystem1.esa
Installing subsystem: http://coderthoughts.googlecode.com/files/subsystem1.esa
Subsystem successfully installed: subsystem1; id: 1
g! subsystem:start 1
g! lb
START LEVEL 1
   ID|State      |Level|Name
...
   21|Active     |    1|SharedBundle (1.0.0)
   22|Active     |    1|BundleA (1.0.0)
   23|Active     |    1|BundleB (1.0.0)
The three bundles needed by subsystem1 are installed and all started with the subsystem:start command.

And let's add the second subsystem...
g! subsystem:install http://coderthoughts.googlecode.com/files/subsystem2.esa
Installing subsystem: http://coderthoughts.googlecode.com/files/subsystem2.esa
Subsystem successfully installed: subsystem2; id: 2
g! subsystem:start 2
g! lb
START LEVEL 1
   ID|State      |Level|Name
...
   21|Active     |    1|SharedBundle (1.0.0)
   22|Active     |    1|BundleA (1.0.0)
   23|Active     |    1|BundleB (1.0.0)
   24|Active     |    1|BundleC (1.0.0)
   25|Active     |    1|BundleD (1.0.0)

As expected, the two new bundles for subsystem2 are now also installed. And, because we're talking about a feature subsystem here, where everything is shared, the SharedBundle (21) is not installed a second time, but rather reused from subsystem1. 

The topic of subsystems is much bigger. Subsystems can provide a certain level of isolation, they can work with Repositories for provisioning, and you have a lot of options with regard to what you can put inside an .esa file: you can put all the bundles in there that your application needs, or you can just have a textual descriptor that declares your main bundles and let the OSGi Resolver and Repository find the dependencies and deploy these for you. Details about these various options can be found in the OSGi Subsystem spec (chapter 134 in the OSGi R5 Enterprise spec).

All in all good news - Subsystems now works in Apache Felix as well. Right now you need to build the latest snapshot, but hopefully we'll have a Felix Framework release for this soon!

Tuesday, October 22, 2013

Role-based access control for Karaf shell commands and OSGi services

In a previous post I outlined how role-based access control was added to JMX in Apache Karaf. While JMX is one way to remotely manage a Karaf instance, another management mechanism is provided via the Karaf Console/Shell. Up until now security for console commands was very coarse-grained. Once in the console you had access to all the commands. For example, it was not possible to give certain users access to merely changing their own configuration without also giving them access to shutting down the whole karaf instance.

With commit r1534467 this has now changed (thanks again to JB Onofré for reviewing and applying my pull request). You can now define roles required for each shell command and even have different roles depending on the arguments used with a certain command. This is achieved by using a relatively advanced feature of OSGi: Service Registry Hooks. These hooks give you a lot of control over how the OSGi service registry behaves. I blogged about them before. They enable you to:
  • see what service consumers are looking for, so you can register these services on-the-fly. This is often used to import remote services from discovery, but only if there is actually a client for them.
  • hide services from certain service consumers
  • change the service properties the client sees for a service by providing an alternative registration
  • proxy the original service
Every Karaf command is in reality an Apache Felix Gogo command, registered as an OSGi service. Every command has two service registration properties: osgi.command.scope and osgi.command.function. These properties define the name of the command and its scope. With the use of the OSGi Service Registry hooks I can replace the original service with a proxy that adds the role-based security. 
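To make that concrete, here is roughly what such a command registration looks like. This is just a sketch: the GreetCommand class and the greeting:greet command are made-up names, but the two service properties are the ones described above (the same pattern is used by the subsystem command bundle in the earlier Subsystems post).
  import java.util.Dictionary;
  import java.util.Hashtable;
  import org.osgi.framework.BundleContext;

  public class GreetCommand {
    // Invoked from the shell as: greeting:greet <name>
    public void greet(String name) {
      System.out.println("Hello " + name);
    }

    // Register the command service; typically done from a BundleActivator.
    static void register(BundleContext context) {
      Dictionary<String, Object> props = new Hashtable<String, Object>();
      props.put("osgi.command.scope", "greeting");
      props.put("osgi.command.function", new String[] {"greet"});
      context.registerService(GreetCommand.class.getName(), new GreetCommand(), props);
    }
  }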

When I originally floated this idea on the Karaf mailing list, Christian Schneider said: "why don't we enable this for all services?" Good idea! So that's how I ended up implementing it. I first added a mechanism to add role-based access control to OSGi services in general and then applied this mechanism to get role-based access control for the Karaf commands.

Under the hood

[Diagram: the original service is hidden by OSGi Service Registry Hooks]
The theory is quite simple. As mentioned above you can use OSGi Service Registry hooks to hide a service from certain consuming bundles and effectively replace it with another. In my case the replacement is a proxy of the original service with the same service registration properties (and some extra ones, see below). It will delegate an invocation to the original service, but before it does so it will check the ACL for the service being invoked to find out what the permitted roles are. Then it checks the roles of the current user by looking at the Subject in the current AccessControlContext. If the user doesn't have any of the permitted roles the service invocation is aborted with a SecurityException.

How do I configure ACLs for OSGi services?

ACLs for OSGi services are defined in a way similar to how these are defined for JMX access: through the OSGi Configuration Admin service. The PID needs to start with org.apache.karaf.service.acl. but the exact PID value isn't important. The service to which the ACL is matched is found through the service.guard configuration value. Configuration Admin is very flexible with regard to how configuration is stored, but by default in Karaf these configurations are stored as .cfg files in the etc/ directory. Let's say I have a service in my system that implements the following API and is registered in the OSGi service registry under the org.acme.MyAPI interface:
  package org.acme;

  public interface MyAPI {
    void doit(String s);
  }
If I want to specify an ACL to say that only clients that have the manager role can invoke this service, I have to do two things:
  1. First I need to enable the role-based access for this service by including it in the filter specified in the karaf.secured.services property in etc/system.properties:
      karaf.secured.services=(|(objectClass=org.acme.MyAPI)(...what was there already...))
    only services matching this property are enabled for role-based access control. Other services are left alone.
  2. Define the ACL for this service as Config Admin configuration. For example by creating a file etc/org.apache.karaf.service.acl.myapi.cfg:
      service.guard=(objectClass=org.acme.MyAPI)
      doit=manager

    So the actual PID of the configuration is not really important, as long as it starts with the prefix. The service it applies to is then selected by matching the filter in the service.guard property.
There are some additional rules. There is a special role of * which means that ACLs are disabled for this method. Similar to the JMX ACLs you can also specify function arguments that require specific roles. For more details see the commit message.
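For example, to open up a single method of the service to everyone while keeping the rest of the ACL in place, a line like the following could be added to the same configuration file (the method name is purely illustrative):
  # ACL checking is disabled for this method: any client can invoke it
  getStatus=*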

Setting roles for service invocation

The service proxy checks the roles in the current AccessControlContext against the required ones. So when invoking a service that has role-based access control enabled, you need to set these roles. This is normally done as follows:
  import javax.security.auth.Subject;
  import org.apache.karaf.jaas.boot.principal.RolePrincipal;
  // ... 
  Subject s = new Subject();
  s.getPrincipals().add(new RolePrincipal("manager"));
  Subject.doAs(s, new PrivilegedAction<Object>() {
    public Object run() {
      svc.doit("foo"); // invoke the service
      return null;
    }
  });
This example uses a Karaf built-in role. You can also use your own role implementations by specifying them using the className:roleName syntax in the ACL.

Note, however, that javax.security.auth.Subject is a very powerful API. You should give bundles that import it extra scrutiny to ensure that they don't give clients access that they shouldn't really have...

Applied to Shell Commands

The next step was to apply these concepts to the Karaf shell commands. As all the shell commands are registered with the osgi.command.function and osgi.command.scope properties, I enabled them in the default Karaf configuration with the following system property:
  karaf.secured.services=(&(osgi.command.scope=*)(osgi.command.function=*))

The next thing is to configure command ACLs. However, that presented a slight usability problem. Most of the command services in Karaf are implemented (via OSGi Blueprint) using the Function interface, which means that the actual method name is always execute. It also means that you would need to create a separate Configuration Admin PID for each command, which is quite cumbersome. You really want to configure this stuff on a per-scope level, with all the commands for a single scope in a single configuration file. To allow this, the command-integration code contains a configuration transformer which creates service ACLs as described above, but based on command-scope-level configuration files.
The command scope configuration file must have a PID that follows this structure: org.apache.karaf.command.acl.<scope>. So if you want to create such a file for the feature scope, the config file would be etc/org.apache.karaf.command.acl.feature.cfg:
  list = viewer
  info = viewer
  install = admin
  uninstall = admin
In this example only users with the admin role can do install/uninstall operations while viewers can list features etc... Note that by using groups (as outlined in this previous post) users added to an admin group will also have viewer permissions, so will be able to do everything. For a more complex configuration file, have a look at this one.

Can I find out what roles a service requires?

It can be useful to know in advance which roles are required to invoke a service. For example, the shell supports tab-style command completion, and you don't want to show commands to the user that are not available to the user's roles. For this purpose an additional service registration property is added to the proxy service registration: org.apache.karaf.service.guard.roles=[role1,role2]. The value of this property is the Collection of roles that can possibly invoke a method on the service.

Since each command maps to a single service, we can have a Command Processor that only selects the commands applicable to the roles of the current user. This means that commands that this user doesn't have the right roles for are automatically hidden from autocompletion etc. When I'm logged in as an admin I can see all the feature commands (I removed ones not mentioned in the config for brevity):
  karaf@root()> feature: <hit TAB>
  info            install         list            uninstall
while Joe, a viewer, only sees the feature commands available to viewers:
  joe@root()> feature: <hit TAB>
  info            list

In some cases the commands have roles associated with particular values being passed in. For example the config admin shell commands require admin rights for certain PIDs but not all. So Joe can safely edit his own configuration but is prevented from editing system level configuration:
  joe@root(config)> edit org.acme.foo
  joe@root(config)> property-set somekey someval
  joe@root(config)> update
So Joe can edit the org.acme.foo PID, but when he tries to edit the jmx.acl PID access is denied:
  joe@root(config)> edit jmx.acl
  Error executing command: Insufficient credentials.

Where are we with this stuff today?

The first commits to enable the above have just gone into Karaf trunk, and although I wrote lots of unit tests for it, more use is needed to see whether it all works as users would expect. Also, the default ACL configuration files may need a bit more attention. What's there now is really a start; the idea is to refine as we go along and have this as a proper feature for Karaf 3.

The power of OSGi services

One thing that this approach shows is really the power and flexibility of OSGi services. None of the code of the actual commands was changed. The ability to build role-based access on top of them in a non-intrusive way was really enabled by the OSGi service registry design and its capabilities.

Friday, October 18, 2013

Running pure Jasmine unit tests through Maven

I have always really liked writing unit tests, for the simple reason that with those I know that I did all I could to ensure my algorithms work as planned. Sure, even with high code coverage there is still a chance that you're missing a situation in your tests, but at least once you know this you can fill the gap by adding an additional test. And, of course, you want to run these tests automatically as part of a regular build. No manual testing please :)

So when I started looking at some projects that use JavaScript I wanted to use the same ideas. Write unit tests that are automatically run during a headless build.
I started using Jasmine, as it seems to be the most popular JavaScript testing framework today. Since the project I was working with was using Maven already I wanted to integrate my Jasmine testing as part of the ordinary Maven test cycle.
Additionally, I wanted the setup of my environment to be trivial. I really don't want any developer to have to install additional software besides what they already need to run Maven. And I don't want to depend on any platform-specific software, if possible.

This got me looking around on the internet and I found a really good post by Code Cop that describes how you can do something like this for Apache Ant. What he did was test JavaScript logic using Jasmine, outside of the browser. So you don't have the browser JavaScript environment present, but you can test all your algorithms. This is precisely what I was looking for too. Another nice thing about his work is that the test results are stored in the same XML format as JUnit uses, so you can inspect these files with any tool that can work with ordinary JUnit output XML files (e.g. you can open them in Eclipse and view them in the JUnit view).
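If you haven't seen Jasmine before, a spec is simply a JavaScript file describing expectations. A minimal sketch is shown below; the romanize() function is made up here, purely to illustrate the shape of a spec such as the RomanNumeralsSpec.js that appears in the test output further down:
  // test/main/js/RomanNumeralsSpec.js - illustrative only
  describe("Roman numerals", function() {
    it("converts 4 to IV", function() {
      expect(romanize(4)).toBe("IV"); // romanize() would live under src/main/js
    });
  });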

I started with the code by Code Cop, and reduced it to the bare minimum, only supporting Jasmine (Code Cop's work also supports other JS test frameworks). You can find this minimal ant-jasmine test integration at coderthoughts/jasmine-ant. The next step: get it working in Maven.

There were a couple of things that needed to be changed to be able to do this:
  1. I wanted to obtain the Java-level dependencies via Maven: the original Rhino scripting engine (can't use the one in the JRE, because JavaAdapter was removed, see here) and js-engine.jar that adds Rhino as the rhino-nonjdk scripting language.
  2. I want to have the source .js files in src/main/js and the tests in test/main/js, the usual locations in Maven.
  3. I needed to make the output directory configurable so that the results are written to target/surefire-reports, where Maven expects these files.
In the end I got things going. I'm still using Ant inside Maven to actually do the Jasmine test running, using a slightly modified version of Code Cop's Jasmine runner Ant task. But the whole end result fits nicely with the rest of the Maven setup.

<project>
  <modelVersion>4.0.0</modelVersion>

  <groupId>org.coderthoughts</groupId>
  <artifactId>jasmine-maven-example</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>war</packaging> <!-- your JavaScript will likely end up in a .war file -->

  <dependencies>
    <dependency>
      <!-- Bring in the original Rhino implementation that contains the JavaAdapter class -->
      <groupId>org.mozilla</groupId>
      <artifactId>rhino</artifactId>
      <version>1.7R3</version>
      <scope>test</scope>
    </dependency>

    <dependency>
      <!-- Adds the 'rhino-nonjdk' language to the supported scripting languages -->
      <!-- Obtained from the repository at http://dist.codehaus.org/mule/dependencies/maven2/ -->
      <groupId>javax.script</groupId>
      <artifactId>js-engine</artifactId>
      <version>1.0</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <version>1.7</version>
        <executions>
          <execution>
            <phase>test</phase>
            <configuration>
              <target>
                <property name="jasmine.dir" location="lib/jasmine-ant" />
                <property name="script.classpath" refid="maven.test.classpath" />

                <scriptdef name="jasmine" src="${jasmine.dir}/jasmineAnt.js"
                  language="rhino-nonjdk" classpath="${script.classpath}">
                  <!-- Jasmine (jasmine-rhino.js) needs pure Rhino because 
                       JDK-Rhino does not define JavaAdapter. -->
                  <attribute name="options" />
                  <attribute name="ignoredGlobalVars" />
                  <attribute name="haltOnFirstFailure" />
                  <attribute name="jasmineSpecRunnerPath" />
                  <attribute name="testOutputDir" />
                  <element name="fileset" type="fileset" />
                </scriptdef>

                <jasmine options="{verbose:true}"
                  testOutputDir="target/surefire-reports" haltOnFirstFailure="false"
                  jasmineSpecRunnerPath="${jasmine.dir}/AntSpecRunner.js">
                  <fileset dir="test" includes="**/*Spec.js" />
                </jasmine>
              </target>
            </configuration>
            <goals>
              <goal>run</goal>
            </goals>
          </execution>
        </executions>
      </plugin>

      <!-- ... other plugins ... -->

    </plugins>
  </build>
  
  <repositories>
    <repository>
      <id>codehaus-mule</id>
      <url>http://dist.codehaus.org/mule/dependencies/maven2/</url>
    </repository>
  </repositories>
</project>

A couple of things to note here:
  • I couldn't find the js-engine.jar in Maven Central. Fortunately it was available in the Mule repo at codehaus.org.
  • I added the testOutputDir as a configuration attribute for where the test results go.
  • No setup whatsoever is required and no platform-specific binaries are needed: if you can run Maven you can run these Jasmine tests.
When I run it, it looks like this:

$ mvn test
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building jasmine-maven-example 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
...
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
[INFO] ...
[INFO] --- maven-antrun-plugin:1.7:run (default) @ jasmine-maven-example ---
[INFO] Executing tasks

main:
  [jasmine] Spec: main/js/RomanNumeralsSpec.js
  [jasmine] Tests run: 7, Failures: 0, Errors: 0
  [jasmine]
[INFO] Executed tasks
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.551s


Of course, the build fails when a test fails, and the test reports can be processed using anything that can handle JUnit test reports, such as mvn surefire-report:report.

I find it pretty handy. A minimal project that does this that you can try yourself is available here: coderthoughts/jasmine-maven

So it's a little different from the jasmine-maven-plugin in that this doesn't fork a browser and is hence a little bit faster. It should be possible to speed it up even further by writing a proper Maven plugin for it...
It's more of a pure unit testing environment, whereas the jasmine-maven-plugin is closer to a system test setup...
And of course, thanks again to Code Cop for providing an excellent starting point for this stuff.

Thursday, October 3, 2013

JMX role-based access control for Karaf

Recently I worked on adding role-based access control to Karaf management operations. This work is split into two parts: one part focuses on adding role-based access to JMX. Another part focuses on the Karaf shell/console. In this post I'm looking at how JMX access is secured.

JMX plays an important role in Karaf as a remote management mechanism. A number of management clients are built on top of JMX, hawtio being probably the most popular one right now. While hawtio uses JMX through Jolokia, which exposes the JMX API over a REST interface, other clients use JMX locally (e.g. via JConsole) or over a remote connector.

Most functionality available in Karaf can be managed via MBeans, but up until now it suffered from one issue: there was really only one level of access. If you were given access rights at all, you had access to all the MBeans. It was not possible to give users access to certain areas in JMX while restricting access to other areas.


Role-based Access Control

With commit r1528587 my JMX role-based access control has been added to Karaf trunk (extra kudos and thanks to Jean-Baptiste Onofré for additional testing, finding a number of bugs, fixing those and actually applying the commits!). It means that an administrator can now declare the roles required to access certain Karaf MBeans. And it also applies to MBeans that are registered outside of Karaf, but running in the same MBean server. So JRE-provided MBeans and MBeans coming from OSGi bundles that are installed on top of Karaf are also covered.

How does it work?

It works by inserting a JMX Guard which is configured via a JVM-wide MBeanServerBuilder. The Karaf launching scripts are updated to contain the following argument: -Djavax.management.builder.initial=org.apache.karaf.management.boot.KarafMBeanServerBuilder
This global JVM-level MBeanServerBuilder calls into an OSGi bundle that contains the JMX Guard for each JMX invocation made. The Guard in turn looks up the ACL of the accessed MBean in the OSGi Configuration Admin Service and checks the required roles for this MBean with the RolePrincipal objects present in the Subject in the current AccessControlContext. If no matching role is present, the JMX invocation will be blocked with a SecurityException.

How can I define my ACLs?

The Access Control Lists are stored in OSGi Configuration Admin. This means that they can be defined in whatever way the currently configured Config Admin implementation stores its information, which could be a database, nosql, etc... In the case of Karaf this information is normally stored in the etc/ directory in .cfg text files. The file name (excluding the .cfg extension) represents the Config Admin PID. JMX ACLs are mapped to Config Admin PIDs by prefixing them with jmx.acl. Then the Object Name as it appears in the JConsole tree is used to identify the MBean. So the ActiveMQ QueueA MBean as in the screenshot below would map to the PID jmx.acl.org.apache.activemq.Broker.amq-broker.Queue.QueueA
[Screenshot: the 'purge' operation is denied if the user does not have the required role]
However, having to write a configuration file for every MBean isn't really that user-friendly. It would be nice if we could define this stuff on a slightly higher level. Therefore the code that looks for the ACL PIDs follows a hierarchical approach. If it cannot find any matching definitions for the operation invoked on the ...QueueA PID, it goes up the tree and looks for definitions in jmx.acl.org.apache.activemq.Broker.amq-broker.Queue and then jmx.acl.org.apache.activemq.Broker.amq-broker and so on. So if you want to specify an ACL for all queues on all ActiveMQ brokers you could do this in the jmx.acl.org.apache.activemq.Broker.cfg file. For example:
  browse*          = manager, viewer
  getMessage       = manager, viewer
  purge            = admin
  remove*          = admin
  copy*            = manager
  sendTextMessage* = manager
Note that this example uses wildcards for method names, so browse* covers browse(), browseAsTable() and browseMessages(). Additionally, even though the admin role has access to all APIs, it's not explicitly listed everywhere. This is not because the admin role is special; it is because administrators are expected to be part of the admingroup, which has all the roles in the system.

Groups

To keep the ACLs manageable I used the concept of JAAS groups. Typically you want to give an administrator access to everything in the system, but it's very cumbersome (and ugly) to add 'admin' to every single ACL definition in the system. Therefore the idea is that an administrator is not directly assigned the admin role, but is rather added to the admingroup. This group then has all the roles defined in the system. And no, it's not magic: if you decide to add a new role then the admingroup needs to be updated. Here's what the definition of some users might look like:
  karaf@root()> jaas:realm-manage --realm karaf
  karaf@root()> jaas:user-list
  User Name | Group        | Role
  ----------------------------------
  karaf     | admingroup   | admin
  karaf     | admingroup   | manager
  karaf     | admingroup   | viewer
  joe       | managergroup | manager
  joe       | managergroup | viewer
  mo        |              | viewer

So in this example, the karaf user is in the admingroup and because of that has the roles admin, manager and viewer.

Default Configuration

There is default configuration that applies to any MBean if it doesn't have specific configuration. This can be found at the top of the hierarchy in the jmx.acl.cfg file:
  list* = viewer
  get*  = viewer
  is*   = viewer
  set*  = admin
  *     = admin
So the default is that any operation on any MBean starting with 'list', 'get' or 'is' is assumed to be an operation that you only need the viewer role for, while set* or any other operation name requires the admin role by default. This also maps well to MBeans that define JMX attributes. Obviously these defaults don't apply if a more specific definition for the MBean can be found...

Redefine to suit

While the Karaf distro comes with some predefined configuration in the form of jmx.acl.**.cfg files, it may well be that this doesn't map 100% to the roles used in your organization. Therefore all of this can be changed by the administrator. Nothing is hard-coded, so feel free to add new roles, new groups and new ACLs to suit your organizational structure.

ACL definition details

The ACL examples in this posting are on the method level, but in some cases you want to define roles based on the arguments being passed into the operation. For example, you might need admin rights to uninstall a Karaf system bundle, but maybe the manager role is enough to uninstall other bundles. Therefore you can define roles based on arguments passed in to the JMX operation, either as literal arguments or using regular expressions. For more information on this, see the original commit message on GitHub.

What MBeans can I use?

If you're writing a rich client or other tool over JMX it can be nice to know in advance whether the current user can invoke certain operations or not. It allows the tool to only show the relevant widgets (buttons, menus etc) if it's actually possible to use the associated MBeans. For this use-case I added an MBean org.apache.karaf:type=security,area=jmx that has a number of canInvoke() operations. These allow you to check whether the currently logged-in user can invoke any methods on a given MBean at all, or whether they can invoke a certain method. There is also a bulk query operation that lets you check a whole bunch of operations in one go.

The nice thing about this approach is that the client doesn't need to know anything about how the roles are mapped by the administrator. It simply checks whether the currently logged-in user has the appropriate roles for the operations requested. This means that if the administrator decides to revamp the whole role-mapping on the back-end, the client console will automatically adapt: no duplication of information or hard-coded role names needed. For more details about the canInvoke() method see: https://github.com/bosschaert/karaf/blob/f793e70612c47d16a95ef12287514c603613f2c0/management/server/src/main/java/org/apache/karaf/management/JMXSecurityMBean.java
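As an illustration, a JMX client could use it roughly as follows. This is only a sketch under assumptions: the two-argument canInvoke(objectName, methodName) signature is my reading of the JMXSecurityMBean source linked above, and the MBean being checked would be whichever MBean your tool is about to use.
  import javax.management.MBeanServerConnection;
  import javax.management.ObjectName;

  public class CanInvokeCheck {
    // Ask the Karaf security MBean whether the current user may invoke the given
    // method on the given MBean. NOTE: the two-argument canInvoke signature is an
    // assumption based on the JMXSecurityMBean source linked above.
    static boolean canInvoke(MBeanServerConnection conn, String mbean, String method)
        throws Exception {
      ObjectName security = new ObjectName("org.apache.karaf:type=security,area=jmx");
      return (Boolean) conn.invoke(security, "canInvoke",
          new Object[] {mbean, method},
          new String[] {String.class.getName(), String.class.getName()});
    }
  }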

Changing permissions at Runtime

As with nearly everything in OSGi, the Configuration Admin service is dynamic, which means that you can change the information at runtime. This means that you can change the role mappings while the system is running and even for a user that is logged in. You can add or take away privileges dynamically, for example if a trusted user is all of a sudden causing havoc, you can remove the rights associated with the roles of that user dynamically and stop any further damage instantly.

What's next?

I am also working on implementing RBAC for Karaf shell/console commands and will write another post about that when available on trunk.

Friday, April 19, 2013

Using OSGi Subsystems to deploy your Applications

One of the major new specs in the OSGi R5 Enterprise release is the Subsystem specification. While the spec itself is quite large and covers a wide range of angles and use-cases, I find the simplest way to explain Subsystems is really as something like application deployment for OSGi, where an application is comprised of a number of bundles.

OSGi always encourages modular development and during development this is indeed great, because you can focus on the module at hand and have clear visibility of the impact of any changes that you make. However, once you want to deploy your application of 150 bundles this may become a little bit complicated - you certainly don't want to hand the person performing the deployment 150 different files to deploy. You want to put them together in some shape or form. This has caused many projects to come up with their own solutions for this. Karaf and Eclipse have features, Eclipse Virgo has plans and Apache Aries has applications. The OSGi Subsystem specification now provides a standard for combining a number of bundles into a single deployable (an .esa file), which means that an .esa file can be deployed in any compliant Subsystem implementation.
I am really happy that the good people at Apache Aries have recently released version 1.0 of the Aries Subsystem implementation, so you can now use this without having to build an implementation yourself.

I'm going to look at how to create and use Subsystems later in this post, but first let's get an OSGi framework set up with Subsystems. The Aries implementation with its dependencies consists of 15 bundles. In my example below I'm using Equinox as the OSGi framework, but it should obviously work just as well on any other OSGi R5 compliant framework.

Setting up the Subsystem infrastructure
I'm going to be using the following:
  • Equinox 3.8.2 (which comes with Eclipse 4.2.2 or 3.8.2)
  • The gogo shell (which comes with Equinox and also with Felix)
  • The Aries Subsystem 1.0 implementation
  • dependencies of the above...
Let's start by getting Equinox up and running with the shell; add the following bundles to your Equinox runtime. These all come shipped with Eclipse, so if you're working in Eclipse you can simply select them in the 'OSGi Framework' launch configuration:

org.eclipse.osgi_3.8.2.v20130124-134944
org.eclipse.osgi.services_3.3.100.v20120522-1822
org.apache.felix.gogo.runtime_0.8.0.v201108120515
org.apache.felix.gogo.shell_0.8.0.v201110170705
org.eclipse.equinox.console_1.0.0.v20120522-1841

Now add the bundles needed for the Aries Subsystem implementation and its dependencies; they can be downloaded from Maven Central and correspond to bundles 5 through 19 in the bundle listing further down.

I'll leave it to the reader to find a convenient way to install all these (see also the comments section for a note about how to install this on a framework other than Equinox; you need an extra bundle). You can do it with a script, using a repository, etc... There is also a subsystem-bundle artifact that may help. In any case, this alone validates one of the key points of why Subsystems were designed in the first place: if you have an application that is formed by a number of bundles, you really want a nice and convenient way to deploy them. Once we have a subsystem implementation in place we can do this and start deploying large applications that consist of many bundles by simply deploying a single Subsystem archive file.

With the above bundles started the Subsystem Service is registered and ready to deploy subsystems:
osgi> services (objectClass=*Subsystem)
  {org.osgi.service.subsystem.Subsystem}=
  {subsystem.id=0, subsystem.state=ACTIVE, subsystem.version=1.0.0,
   subsystem.type=osgi.subsystem.application,
   subsystem.symbolicName=org.osgi.service.subsystem.root, ...}
  "Registered by bundle:" org.apache.aries.subsystem.core_1.0.0 [6]

There is one issue though - we have no tool yet that utilizes the subsystem service so we can interact with it. It would be really nice if we could add a command to the OSGi console to do this. Using the extensible Gogo command shell this is child's play. So let's add a few subsystem commands.

Add some Subsystem commands to Gogo
Gogo is becoming the de-facto standard for shell commands in an OSGi framework. It's used by Equinox, Felix, Karaf and other OSGi distributions these days. Gogo is extensible and adding a few new commands to it is as simple as registering an OSGi service.

I created a bundle with only a single class, the activator, which provides the following commands:
  subsystem:list
  subsystem:install <url>
  subsystem:uninstall <id>
  subsystem:start <id>
  subsystem:stop <id>

import java.io.IOException;
import java.net.URL;
import java.util.*;
import org.osgi.framework.*;
import org.osgi.service.subsystem.Subsystem;

public class Activator implements BundleActivator {
  private BundleContext bundleContext;

  public void start(BundleContext context) throws Exception {
    bundleContext = context;
    Dictionary<String, Object> props = new Hashtable<String, Object>();
    props.put("osgi.command.function",
      new String [] {"install", "uninstall", "start", "stop", "list"});
    props.put("osgi.command.scope", "subsystem");
    context.registerService(getClass().getName(), this, props);
  }

  public void install(String url) throws IOException {
    Subsystem rootSubsystem = getSubsystem(0);
    Subsystem s = rootSubsystem.install(url, new URL(url).openStream());
    System.out.println("Subsystem successfully installed: " +
      s.getSymbolicName() + "; id: " + s.getSubsystemId());
  }

  public void uninstall(long id) {
    getSubsystem(id).uninstall();
  }

  public void start(long id) {
    getSubsystem(id).start();
  }

  public void stop(long id) {
    getSubsystem(id).stop();
  }

  public void list() throws InvalidSyntaxException {
    for (ServiceReference<Subsystem> ref :
         bundleContext.getServiceReferences(Subsystem.class, null)) {
      Subsystem s = bundleContext.getService(ref);
      if (s != null) {
        System.out.printf("%d\t%s\t%s\n", s.getSubsystemId(), s.getState(), s.getSymbolicName());
      }
    }
  }

  private Subsystem getSubsystem(long id) {
    try {
      for (ServiceReference<Subsystem> ref :
           bundleContext.getServiceReferences(Subsystem.class, "(subsystem.id=" + id + ")")) {
        Subsystem svc = bundleContext.getService(ref);
        if (svc != null)
          return svc;
      }
    } catch (InvalidSyntaxException e) {
      throw new RuntimeException(e);
    }
    throw new RuntimeException("Unable to find subsystem " + id);
  }

  public void stop(BundleContext context) throws Exception {}
}

I shared a bundle that contains this command, you can get it from here: http://coderthoughts.googlecode.com/files/subsystem-gogo-command-1.0.0.jar

Once the above bundles are installed and everything is started I have the following bundles in my framework:
0  ACTIVE org.eclipse.osgi_3.8.2.v20130124-134944
1  ACTIVE org.eclipse.osgi.services_3.3.100.v20120522-1822
2  ACTIVE org.apache.felix.gogo.runtime_0.8.0.v201108120515
3  ACTIVE org.apache.felix.gogo.shell_0.8.0.v201110170705
4  ACTIVE org.eclipse.equinox.console_1.0.0.v20120522-1841
5  ACTIVE org.apache.aries.subsystem.api_1.0.0
6  ACTIVE org.apache.aries.subsystem.core_1.0.0
7  ACTIVE org.apache.aries.subsystem.obr_1.0.0
8  ACTIVE org.apache.aries.application.api_1.0.0
9  ACTIVE org.apache.aries.application.modeller_1.0.0
10 ACTIVE org.apache.aries.application.utils_1.0.0
11 ACTIVE org.apache.aries.blueprint_1.1.0
12 ACTIVE org.apache.aries.proxy_1.0.1
13 ACTIVE org.apache.aries.util_1.1.0
14 ACTIVE org.apache.felix.bundlerepository_1.6.6
15 ACTIVE org.apache.felix.resolver_1.0.0
16 ACTIVE org.eclipse.equinox.coordinator_1.1.0.v20120522-1841
17 ACTIVE org.eclipse.equinox.region_1.1.0.v20120522-1841
18 ACTIVE slf4j.api_1.7.5, Fragments=19
19 RESOLV slf4j.simple_1.7.5, Master=18
20 ACTIVE org.osgi.service.subsystem.region.context.0_1.0.0
21 ACTIVE subsystem-gogo-command_1.0.0

Note that bundle 20 is a synthesized bundle created automatically by the subsystem implementation. We can safely ignore it.

Now I can start doing something. Let's list the available subsystems using the new command from the subsystem-gogo-command bundle:
osgi> subsystem:list
0 ACTIVE org.osgi.service.subsystem.root 

At this point there is only a single subsystem: the root one.

Working with Subsystems
Let's create some sample subsystems to look at what you can do.

I'm going to create two basic subsystems that should allow us to play with it. The subsystem specification defines a number of different subsystem types. In this post I will be looking at the feature subsystem type, which deploys all the bundles from the subsystem in a shared space, as if you were just installing all the bundles in a plain framework. (Note: other subsystem types provide isolation for the subsystems.)
Subsystem archives typically use the .esa file extension. Both my example subsystems contain 3 bundles. The subsystem1.esa file contains Bundle A, Bundle B and a bundle called Shared Bundle. subsystem2.esa contains Bundle C, Bundle D and also the same Shared Bundle. Both subsystems package the Shared Bundle as they both have a dependency on it. So in order to get a fully working system for either subsystem I need that Shared Bundle. However, since these are feature subsystems, where everything is shared, I only need the Shared Bundle deployed once.

Creating a subsystem file is pretty easy. The .esa file is really just a zip file that contains the embedded bundles in the root. Additionally it contains a subsystem manifest. I created mine simply using the jar command, but you can also use tools such as the esa-maven-plugin. Here's what you'll find inside:

$ jar tvf subsystem1.esa
    99 Fri Apr 19 08:34:08 IST 2013 OSGI-INF/SUBSYSTEM.MF
  1181 Fri Apr 19 08:33:06 IST 2013 BundleA_1.0.0.jar
  1058 Fri Apr 19 08:33:06 IST 2013 BundleB_1.0.0.jar
   906 Fri Apr 19 08:33:06 IST 2013 SharedBundle_1.0.0.jar


As you can see the zip file contains the relevant bundles in the root plus a subsystem manifest. Here's what the SUBSYSTEM.MF file in subsystem1.esa looks like:
  Subsystem-SymbolicName: subsystem1
  Subsystem-Version: 1.0.0
  Subsystem-Type: osgi.subsystem.feature
It looks a bit like a Bundle Manifest. Most of the information in there is optional...
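For reference, this is roughly how such an archive can be put together with the jar command. It is a sketch that assumes the bundle jars and the OSGI-INF/SUBSYSTEM.MF file are in the current directory; the M flag prevents jar from adding a default META-INF/MANIFEST.MF:
  $ jar cvfM subsystem1.esa OSGI-INF/SUBSYSTEM.MF \
        BundleA_1.0.0.jar BundleB_1.0.0.jar SharedBundle_1.0.0.jar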

The subsystem2.esa file is very similar. You can download the sample subsystem files from here: subsystem1.esa and subsystem2.esa.

Let's deploy a subsystem:
  osgi> subsystem:install http://coderthoughts.googlecode.com/files/subsystem1.esa
  Subsystem successfully installed: subsystem1; id: 1

If we list the bundles, we can see that the three bundles that were in subsystem1.esa have now been added:
  22 INSTALLED SharedBundle_1.0.0
  23 INSTALLED BundleA_1.0.0
  24 INSTALLED BundleB_1.0.0

Now let's start the subsystem:
  osgi> subsystem:start 1
...
  22 ACTIVE SharedBundle_1.0.0
  23 ACTIVE BundleA_1.0.0
  24 ACTIVE BundleB_1.0.0
This is pretty handy: starting the subsystem will start all of the bundles that it contains!

Let's add the other subsystem:
  osgi> subsystem:install http://coderthoughts.googlecode.com/files/subsystem2.esa
  Subsystem successfully installed: subsystem2; id: 2
  osgi> subsystem:start 2

Now both subsystems are active:
  22 ACTIVE SharedBundle_1.0.0
  23 ACTIVE BundleA_1.0.0
  24 ACTIVE BundleB_1.0.0
  25 ACTIVE BundleC_1.0.0
  26 ACTIVE BundleD_1.0.0
And we can see that the SharedBundle was only deployed once, because it could be shared across subsystems.

You can also query the subsystems known in the system:
  osgi> subsystem:list
  0 ACTIVE org.osgi.service.subsystem.root
  1 ACTIVE subsystem1
  2 ACTIVE subsystem2

Another interesting aspect is how stopping and uninstallation work, especially in relation to the SharedBundle. I'll leave it as an exercise for the reader, but you can see that the Subsystems implementation keeps track of the bundle sharing. If you only stop subsystem1, the SharedBundle will remain ACTIVE; only when both subsystems that use the bundle are stopped will the bundle move to the RESOLVED state. Uninstalling works similarly. When you uninstall subsystem1, BundleA and BundleB will be uninstalled, but the SharedBundle won't be, as it is still being used by subsystem2. Only when subsystem2 is uninstalled as well are all of the bundles associated with subsystem1 and subsystem2 uninstalled.

There is a lot more to talk about in relation to subsystems. For example, subsystems don't have to actually embed their dependencies. They can also download them from an OSGi Repository service. In that case your .esa file can be limited to only contain a SUBSYSTEM.MF which lists what your root application bundles should be. The subsystem implementation can also use the Repository Service to automatically find transitive dependencies.
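A sketch of what such a content-only descriptor could look like is shown below; the Subsystem-Content header comes from the Subsystems spec, while the bundle symbolic names and version ranges are made up for illustration:
  Subsystem-SymbolicName: my-application
  Subsystem-Version: 1.0.0
  Subsystem-Type: osgi.subsystem.feature
  Subsystem-Content: org.acme.bundlea;version="[1.0.0,2.0.0)",
   org.acme.bundleb;version="[1.0.0,2.0.0)"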

In my little example, the subsystems only contain 3 bundles each, but using .esa files can become really handy when your application becomes large and contains tens or hundreds of bundles. You can even nest them, so subsystems can contain other subsystems - becoming building blocks of higher-level subsystems.

OSGi Subsystems should make the distribution and deployment of larger OSGi applications much easier. The .esa file provides a portable format which allows you to hand your users a single artifact to deploy, regardless of how many bundles your application is made up of.

For more information about OSGi Subsystems see chapter 134 of the OSGi R5 Enterprise specification: http://www.osgi.org/Download/Release5