Chapter 8. Tools for building and testing

This chapter covers

  • Strategies for building OSGi bundles
  • How to choose a set of tools that works for you
  • Useful command-line OSGi tools
  • Using Maven and Ant to build OSGi bundles and EBA applications
  • How to unit test OSGi bundles
  • How to run tests on bundles inside an OSGi framework

As you’ve been working through the OSGi examples in this book, we’ve cheerfully assumed that you didn’t need much help building your example bundles. After all, an OSGi bundle is a JAR with some extra metadata, and even an enterprise OSGi bundle is only an OSGi bundle with even more extra metadata. But just because you can build OSGi bundles the same way that you build ordinary JARs doesn’t mean you should. A lot of specialist tools are out there that can make the process of building your OSGi application easier and more robust.

With a few notable exceptions, most of the tools we’ll discuss aren’t specific to enterprise OSGi. This is partly because you don’t necessarily need enterprise tools—the most important thing is support for core OSGi concepts like compile classpath and launching an OSGi framework. A second reason we don’t cover many enterprise OSGi tools is that enterprise OSGi itself is new, and the tools are still catching up!

We won’t specifically discuss IDEs—we’ll get to them in chapter 9. But we can’t ignore IDEs, because for building OSGi bundles your choice of command-line tooling and your choice of IDE are interconnected. Which command-line tooling you use will influence which IDE works best for you, and the opposite is true as well; you may find your choice of IDE makes the decision about your command-line build for you. In particular, you’ll need to decide early on whether you want to use manifest-first tools or code-first tools.

8.1. Manifest-first or code-first?

One of the great debates in the OSGi world is whether it’s better to write manifests or generate them automatically. Unlike a conventional JAR manifest, which is pretty boring, an OSGi manifest is absolutely critical to how an OSGi bundle works. Because manifests are so pivotal, some people think they’re much too important to be left to a computer to write. Others argue that they’re too important to be left in the hands of software developers!

Writing the manifest for a complex bundle can be hard, and getting it right can be even harder. For this reason, many developers prefer to write and compile their Java code as though it were going to run in a normal Java environment. Tools can then use this code to generate a manifest that’s guaranteed to accurately reflect the code’s dependencies. This style of development is known as code-first. Some developers, on the other hand, prefer to be more involved with the OSGi side of things. In particular, they want to see and control what’s going into their manifests.

Opponents of code-first development argue that although it’s easy to automatically produce an accurate list of packages a bundle should import, it’s much harder to produce a list of packages the bundle should export. Generated manifests often export more packages than you might have intended, particularly if you’re using a service-oriented style of OSGi. To keep your bundles as private as they should be, you may find yourself having to pay as much attention to package exports as you would if you were writing the manifest yourself.

Even the bundle imports might not turn out how you want them. Because the packages you use are hidden with code-first development, you may end up using packages you would have avoided if you’d had to introduce an explicit dependency on them. This can create bundles that fail to resolve at deploy time because of code dependencies you don’t even want or need. Manual tweaking is also required to ensure that optional dependencies are flagged appropriately.

Manifest-first development, on the other hand, has its own difficulties. Development can be slower because you may find yourself constantly interrupting coding to go add required imports to your manifest. Although the tools should automatically tell you if you’re missing required imports, they won’t, in general, flag unused imports. This means you can end up with bundles with dangerously bloated package imports, unless you make a point of cleaning up regularly. Maintaining accurate versions on your exported packages is also almost impossible without some level of bytecode analysis and partial manifest generation.

Whether you prefer manifest-first or code-first development, one thing is certain: for projects of any size, you’ll need some sort of OSGi-aware tooling. Managing small manifests by hand is reasonable, but it rapidly becomes impossible without a compiler to either tell you that you got it wrong or generate the manifest for you.

OSGi and the dreaded ClassNotFoundException

It’s often said that OSGi eliminates class-not-found exceptions. This statement needs to be qualified—OSGi can only eliminate class-not-found exceptions if all the bundle manifests are correct. A bundle that forgets to import packages is guaranteed to fail if it tries to use classes from those packages. Both styles of OSGi development try to guarantee accurate manifests, but a determined developer can introduce manifest errors with either process!

The flagship tool for code-first OSGi development is a command-line tool, bnd. Bnd is also well integrated into a higher-level stack of more general build tools like Ant and Maven, and IDE tools. The star tool for manifest-first development, on the other hand, is Eclipse’s built-in OSGi tooling, Eclipse PDE (we’ll discuss PDE much more in chapter 9). Eclipse PDE itself has only limited support for command-line builds, but several tools integrate with PDE to support command-line building. We’ll begin with a survey of the command-line tools available, starting with bnd.

8.2. Building OSGi applications

Although setting up a command-line build may not be the first thing you do when you start developing a new project, you’ll almost certainly need an automated build sooner or later. Even if you’re not wading into build scripts as your first development step, thinking in advance about what kind of build you want can help you make the right choices about what kind of IDE tools are best suited for your project.

Your choice of which command-line tools to use will usually be guided by whether you prefer a manifest-first or code-first style of development. (Alternatively, if you already know which build tools you want to use, that may make the decision about manifest-first or code-first for you!) Figure 8.1 shows how the various styles of development and tools we’ll discuss in this chapter connect to one another.

Figure 8.1. Which build tools are right for you depends on whether you want to control your manifests yourself, and on which build tools you already prefer.

8.2.1. Bnd

We’ll start our discussion with the bnd tool. If you’re sharp-eyed, you’ll notice that bnd only appears in one path in figure 8.1. Nonetheless, if you opt for a code-first style of OSGi development, you’ll almost certainly use either bnd directly or—more likely—one of the many tools that incorporate bnd under the covers. For this reason, it’s important to understand what bnd is and what it can do for you. Bnd analyzes a bundle’s bytecode to work out what packages it would need to import to work properly, and then it uses that information to generate a manifest. Bnd is extremely powerful and could easily fill a chapter of its own. In this chapter, we’ll cover the bits of bnd you’re most likely to use at build time, but we’ll come back to some of bnd’s other features again in section 12.1.3.

Why are we only getting to bnd at this late stage?

The bnd tool is widely used for OSGi development; some people argue that OSGi development on any kind of serious scale is impossible without bnd, and will be wondering why we’ve waited so long to introduce bnd. In our view, using bnd is like using a calculator. When you learn addition and multiplication in school, you don’t use a calculator, because it’s important to properly understand the basic principles; after you’ve got the basics mastered and move on to more advanced math, you use a calculator to handle the basics for you. In our opinion, even if you opt for a code-first style of development, you must understand OSGi manifests so that you can understand what the tools have generated for you.

Building with Bnd

Bnd works from a configuration file that it uses to guide the generation of your manifest. Simple .bnd files are almost indistinguishable from MANIFEST.MF files. Let’s have a look at the bnd file for the fancyfoods.persistence bundle in the next listing.

Listing 8.1. The bnd.bnd file for building the fancyfoods.persistence JAR
Bundle-Name: Fancy Foods Persistence Bundle
Bundle-SymbolicName: fancyfoods.persistence
Bundle-Version: 1.0.0
Meta-Persistence:
Private-Package: fancyfoods.persistence
Import-Package: *

This is almost exactly the same as the manifest you wrote in chapter 3. One small difference is that the bnd file is parsed as a properties file, so long lines are continued by escaping the line break with a backslash, rather than by indenting the following line with a space as you would in a manifest. But the two most important differences are the extra header, Private-Package:, and that you’re allowed to use the value * for pattern matching.
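To illustrate both differences, here’s a hypothetical, more elaborate bnd file fragment; the package names and version range are made up for the example:

```properties
# Long values continue with a trailing backslash,
# not a leading space as in MANIFEST.MF
Import-Package: javax.persistence;version="[1.0,2)",\
 fancyfoods.food,\
 *
# Wildcards work in the package headers too
Private-Package: fancyfoods.persistence*
```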

Private-Package: is used to indicate that a package should be considered private to the bundle, and that it should not be exported. Bnd will assume that any code not mentioned in a standard Export-Package: or bnd-specific Private-Package: header should be ignored; bnd won’t put code into the bundle it builds unless you explicitly tell it to do so. This may surprise you the first time you do a bnd-based build.

Even though there are three package headers to manage, getting the declarations right isn’t as time consuming as you might initially guess, because of the second difference between the bnd file and a manifest—patterns. Notice how listing 8.1 uses a wildcard for the package imports. You can also use wildcards and other pattern constructs, such as the ! negation, when setting your private and exported packages.

Creating a Bundle

To see how the bnd file works, let’s get bnd to produce the fancyfoods.persistence bundle. Navigate to the folder that contains the compiled fancyfoods.persistence classes (or to the folder containing your Eclipse .classpath file, if you imported fancyfoods.persistence into Eclipse) and create the bnd.bnd file there. (If you name the file fancyfoods.persistence.bnd instead, bnd will automatically work out the bundle symbolic name and JAR name.) Then type the following:

java -jar ${bnd.path}/biz.aQute.bnd.jar buildx -classpath ${bin.dir} bnd.bnd

where ${bin.dir} is the location of the compiled fancyfoods.persistence classes. If you have the fancyfoods.persistence bundle set up as an Eclipse project, you can use the following command from the root of the Eclipse project:

java -jar ${bnd.path}/biz.aQute.bnd.jar bnd.bnd

Bnd will create a fancyfoods.persistence.jar file in the current directory. Open it up and have a look at which classes were included, and at the manifest. You’ll see that bnd has helpfully included what the Private-Package: header looked like after wildcard expansion. It has also added required manifest headers such as Bundle-ManifestVersion.
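Your generated manifest won’t be identical to the following, but it will have roughly this shape. The Import-Package list shown here is an illustrative guess; bnd computes the real one from your bytecode:

```text
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Fancy Foods Persistence Bundle
Bundle-SymbolicName: fancyfoods.persistence
Bundle-Version: 1.0.0
Private-Package: fancyfoods.persistence
Import-Package: fancyfoods.food,javax.persistence
```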

Warning: When Bnd Goes Bad

When using bnd, it’s essential to inspect both your bundle contents and your manifest after building, at least while you’re getting started. Small misunderstandings between you and bnd can result in

  • Every package in your bundle being exported
  • All of your dependencies being included in your bundle
  • None of your classes being repackaged in your bundle

The authors are aware of these possibilities because we’ve made all of these mistakes ourselves! You’ll quickly discover if your classes haven’t been included in your bundle, but the other two issues may take much longer to notice; your built bundle will be perfectly valid, but it won’t be at all encapsulated, or at all modular, so it’s not in the spirit of OSGi.

We’ve been talking about building using bnd, but what you’re doing here is really halfway between building and packaging. The way you’re using it, the bnd tool isn’t compiling anything; all it’s doing is using the precompiled code to generate a manifest, and then packaging everything up into a JAR.

Package Versions

When you used bnd to build the fancyfoods.persistence bundle, it added in package imports, but no versions. Because specifying versions is an OSGi best practice, it would be annoying if bnd always ignored them. Luckily, bnd will take good care of your versions if provided with the right information.

Bnd can’t guess the version of exported packages from their bytecode, so the version must be specified somewhere. One option is to explicitly list the packages in the Export-Package and provide a version for each one, but this is manual, and undoes some of the benefits of bnd wildcards. A nicer solution is to make use of bnd’s support for Java package-info.java files. A package-info file is a special class whose only content is an annotation specifying the version. Because the package-info files are right next to the source, it’s easier to remember to update them when making code changes.

An alternative to package-info.java

Some people dislike the extra overhead of maintaining a class to represent information about their Java packages. As an alternative option, bnd also looks for text files called packageinfo in each package. These packageinfo files use the properties format, but otherwise can contain exactly the same information as package-info.java.

Any of the tricks we show you for managing package-info.java will work equally well in a packageinfo file.
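For example, assuming bnd’s packageinfo syntax, a one-line text file named packageinfo in the fancyfoods.food package is enough to set the export version:

```text
version 1.0.0
```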

Bnd will automatically detect package-info files and use them to determine the version of exported packages. For example, to set the version of the fancyfoods.food package to 1.0.0 (as we did in chapter 2), it’s sufficient to create the following package-info.java file in that package:

@Version("1.0.0")
package fancyfoods.food;

import aQute.bnd.annotation.Version;

But you can do even better than that. The semantic versioning scheme suggests adding a fourth qualifier part to the version string to identify the individual build. While generating the manifest, bnd will automatically expand variables in the package-info.java files:

@Version("1.0.0.${buildLabel}")
package fancyfoods.food;

import aQute.bnd.annotation.Version;

To set the build label, add the following to the bnd.bnd file:

buildLabel = build-${tstamp}

Even more usefully, bnd will automatically add version ranges to its package imports. It does this based not on information you provide, but on what’s in the manifests of the bundles on the build path. Bnd will work out an appropriate version range based on the semantic versioning policy. Providers of an API implementation are less tolerant of changes in the API than consumers of the API, so you’ll need to give bnd a hint to choose the right range for API providers. Explicitly including the API package in the Import-Package list and adding the provide:=true directive will do the trick. API providers will import package versions from the one on the classpath up to, but not including, the next minor increment. Other consumers of an API can handle versions up to, but not including, the next major version. If you’re at all confused about the difference between consumers and providers, or why they need different version ranges, we suggest taking a look at appendix A (A.2.5), as well as looking back at section 1.2.1.
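For example, a bundle that implements the fancyfoods.food API could hint this in its bnd file as follows. (This snippet is a sketch, not taken from the book’s sample code.)

```properties
# provide:=true tells bnd to generate the narrower,
# provider-style version range for this import
Import-Package: fancyfoods.food;provide:=true, *
```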

Setting the Classpath for Bnd

How does bnd work out what version you’re importing? When you repackaged the fancyfoods.persistence bundle, no versions were specified for the imported packages, because bnd didn’t have access to them (or rather, to their exporting bundles). To allow bnd to work out package versions, bnd needs to see both the compiled classes and the compile-time classpath. These two groups of classes are collapsed into a single bnd classpath. Bnd will read Eclipse .classpath files to work out a default classpath, but it won’t unpack Eclipse classpath containers (like those used by Eclipse PDE). If you’re not using Eclipse, or if bnd is struggling to interpret the Eclipse classpath files, classpaths can be specified using the -classpath option.

The lack of a distinction between the compiled classes for the target bundle and the classpath used to compile those classes is why it’s necessary to specify the Private-Package header. Otherwise, bnd won’t have any way of knowing which classes belong to the bundle it’s building. It also means caution must be exercised when using wildcards in the Private-Package header—make the regular expression too general, and bnd will package up the entire classpath into a single bundle. (This brings us neatly back to the importance of double-checking the bundles produced by bnd in case of disaster!)

Warning: Wrong Version? Wrong Classpath

Bnd’s automatic versioning of package imports is incredibly useful, but if you get things wrong it can leave you with bundles that won’t start when you deploy them in production. If you compile against a package with version 1.1.0, bnd will (correctly) set the minimum version for that package import as version 1.1.0. What happens if you then deploy into an environment where that package is only available at version 1.0.0? The OSGi framework will refuse to start your bundle because its minimum requirements aren’t met, even if all your code really needed was version 1.0.0. You can fix this problem by manually specifying the version range for the import in your bnd.bnd file, but a cleaner solution is to make sure that what you’re compiling against lines up with what you’re deploying against. You’ll want to compile against the lowest compatible version of a bundle.
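If you do need to override bnd’s computed range (for example, because you compiled against version 1.1.0 but know your code only needs 1.0.0), you can pin the range by hand in bnd.bnd. The package name here is illustrative:

```properties
# An explicit range overrides the version bnd would infer
# from the compile-time classpath
Import-Package: fancyfoods.food;version="[1.0.0,2.0.0)", *
```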

Let’s look in more depth at what can go into a bnd configuration file.

Making the Most of the Bnd File

The bnd file is an extremely flexible and powerful configuration tool. As well as the wildcards we’ve already seen, it supports variable substitution, macros, inheritance, and even Declarative Services.

The Import-Package, Export-Package, and Private-Package headers all allow wildcards. This means bundle exports can be specified concisely. For example, if you adopt a naming convention that assumes packages ending with .impl are private, the following .bnd snippet will automatically export only what it should:

Export-Package: !*.impl, *
Private-Package: *

If the same package is included in both Export-Package and Private-Package, the export takes precedence.

One nice thing about this pattern support is that you can include as much or as little detail as you like. You can specify nothing but *, or copy and paste whole import declarations from existing manifests, or add in package versions or other package directives where needed. Even if you do explicitly list out a bundle’s package imports, it’s a good idea to add a catch-all * at the end of the list to import anything you’ve forgotten—or didn’t even know you needed. If you don’t import everything you need, bnd will issue warnings to tell you so. If you choose to ignore them, you expose yourself once again to the dreaded NoClassDefFoundError.

So far all the bnd functionality we’ve seen has been about generating manifests—better, smarter, cleaner manifests, but still manifests. Bnd can also use the information in the bnd.bnd file to generate other types of resources. In particular, it can be used for Declarative Services.

Declarative Services

If you’re planning to use both bnd and Declarative Services, you may find the bnd support for Declarative Services handy. Service components can be declared within bnd files; bnd will generate the component XML files. The bnd format for service components is another syntax to learn, but it’s more concise than the XML.

For example, the cheese bundle can be packaged so that it uses bnd Declarative Services instead of Blueprint, with the following bnd file. (We’ve switched from the persistence bundle to the cheese department bundle both for variety and because Declarative Services can’t do the sort of container-managed JPA that Blueprint makes possible.)

-include= ~../common.props

Bundle-SymbolicName: fancyfoods.department.cheese
Export-Package:
Private-Package: fancyfoods.dept.cheese*

Service-Component=fancyfoods.cheese.offer; \
 implementation:=fancyfoods.dept.cheese.offers.DesperateCheeseOffer; \
 provide:=fancyfoods.offers.SpecialOffer; \
 enabled:=true; \
 inventory=fancyfoods.food.Inventory

If you drop the rebuilt cheese bundle into your Aries assembly’s load directory in place of the original cheese bundle, you should find everything works exactly as before. The cheese offer gets to the Service Registry by a different mechanism, but the service is the same. (Don’t forget, you’ll need to add a Declarative Services implementation to your Aries assembly.)

Just as there’s more to bnd files than generating better manifests, there’s more to bnd than building. Bnd is also useful for working with existing conventional JARs and bundles. You’ll see more about these parts of bnd in section 12.1.3.

Although bnd on its own is useful, its mechanisms for specifying classpaths and build paths are fairly limited. Some large projects build with bnd alone, but most opt to use one of the bnd integrations with more general build tools. The bnd project provides Ant tasks, and it’s also extremely well integrated with Maven through the bundle plug-in.

8.2.2. The Maven bundle plug-in

Maven considerably simplifies the dependency management required when building with Ant. If Maven is your build tool of choice, you’ll find that the decision about whether to control your manifests directly or generate manifests automatically has been mostly made for you. Although it’s technically possible to use Maven to build bundles while using existing manifests—and the sample code for the earlier chapters of this book did that—it’s not a natural way of using Maven. (If you need convincing, you need only to look at the build scripts packaged with the sample code!)

In many ways, Maven is a natural fit with OSGi, because Maven’s modules and dependencies map relatively neatly to OSGi bundles. Both modules and bundles are uniquely identified by their symbolic names and versions. Maven’s bundle plug-in combines module-level (or bundle-level) dependencies declared in the pom.xml with bnd bytecode analysis to produce OSGi manifests that declare package-level dependencies. The convenient thing about this process is that it involves almost no extra work compared to normal Maven JAR packaging.

Let’s see what the pom.xml file looks like for a simple bundle with no external dependencies, fancyfoods.api, in this listing.

Listing 8.2. The pom.xml build file for the fancyfoods.api bundle

The manifest generation is controlled by the plug-in configuration for the bundle plug-in. We’ve kept all the plug-in configuration in the same file for clarity, but it’s more likely you’d want to move the generic parts of it up to a parent pom. Almost anything that can go into a bnd file can be added—in XML form—to the bundle plug-in configuration, and the cautions that apply to writing bnd files also apply to configuring the bundle plug-in.
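As a sketch of the general shape of such a pom (the coordinates and the exported package here are illustrative assumptions, not the book’s actual listing):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>fancyfoods</groupId>
  <artifactId>fancyfoods.api</artifactId>
  <version>1.0.0</version>
  <!-- The bundle packaging type hands JAR building over to the bundle plug-in -->
  <packaging>bundle</packaging>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <extensions>true</extensions>
        <configuration>
          <!-- bnd-style instructions, expressed as XML elements -->
          <instructions>
            <Export-Package>fancyfoods.food;version="1.0.0"</Export-Package>
          </instructions>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
```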

Taking Advantage of Defaults

The Maven bundle plug-in uses the information available elsewhere in the pom file to provide some nice defaults. If you don’t specify the bundle symbolic name, the plug-in will construct it from the group ID and artifact ID. In general, it will be ${pom.groupId}.${pom.artifactId}, but overlaps between the group ID and artifact ID will be stripped out. Maven will automatically use the artifact ID and version for the JAR name, but it’s good OSGi practice to use the symbolic name (and version) for JAR names. We therefore suggest ensuring that your artifact ID matches your intended bundle symbolic name. For example, to get a bundle symbolic name of fancyfoods.api and a JAR name of fancyfoods.api-1.0.0.jar, choose an artifact ID of fancyfoods.api and a group ID of fancyfoods.

The bundle version will be set to the module version. Similarly, the bundle name, description, and license will all be taken from ones specified elsewhere in the pom.

The bundle plug-in also overrides some of the more counterintuitive bnd defaults for what’s included in and exported by the bundle, because it’s able to distinguish between your local Java source and the binaries on your classpath. It assumes that you do want to export packages you provided the source code for, but don’t want to export all the other packages on your classpath. Packages containing impl or internal in the name won’t be exported. Recent versions of the bundle plug-in default <Private-Package> to include classes in the module, rather than the empty default you get with raw bnd.

Warning: The Importance of Double-Checking

As with plain bnd, it’s a good idea to validate your configuration of the bundle plug-in by having a look in the bundle that comes out and making sure the manifest is what you hoped for, and there aren’t too many or too few classes packaged into the JAR.

You may find that, with bundles that are using service dependencies instead of package dependencies, even the bundle plug-in’s defaults are too generous and you’ll need to restrict the exports further. In listing 8.2, they’re listed explicitly. Versions will be inferred from package-info files, as we discussed in section 8.2.1.

Enterprise OSGi and the Bundle Plug-in

You can also add any other custom headers you need as XML elements. This lets you use enterprise OSGi headers like Meta-Persistence:, to enable container-managed JPA, or Bundle-Blueprint:, to point to a Blueprint file.
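For instance, the instructions section of the bundle plug-in configuration might declare these headers as follows; the values shown are the conventional default locations, not taken from the book’s listings:

```xml
<configuration>
  <instructions>
    <!-- Enables container-managed JPA; points at the persistence descriptor -->
    <Meta-Persistence>META-INF/persistence.xml</Meta-Persistence>
    <!-- Points at the Blueprint XML; the plug-in can also detect this automatically -->
    <Bundle-Blueprint>OSGI-INF/blueprint/*.xml</Bundle-Blueprint>
  </instructions>
</configuration>
```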

To see how the nondefault headers work in a pom, let’s have a look at the pom file for the persistence bundle. This is a more complex pom than the API pom, because the persistence bundle has a Blueprint configuration and dependencies on other bundles. But using the bundle plug-in’s defaults can reduce the amount of configuration it needs, as shown next.

Listing 8.3. A sample pom.xml for the fancyfoods.department.cheese bundle

You’ll notice we didn’t add a <Bundle-Blueprint> element. Although there’s nothing stopping you adding this header, the bundle plug-in is smart enough to hunt out Blueprint files and automatically add the Bundle-Blueprint: headers for you.

The org.osgi.service.blueprint dependency

If you look at the manifest generated for you by the bundle plug-in, you’ll see an extra imported package, org.osgi.service.blueprint. Where did this import come from? When the Maven bundle plug-in detects that your bundle uses Blueprint, it will automatically add that package dependency. Although this package doesn’t include any code, the Blueprint specification encourages Blueprint bundles to import it to ensure that Blueprint is available in their runtime environment. (The specification also insists that Blueprint implementations must export this package.) If you need your bundle to run in non-Blueprint environments too, you can make the package optional by specifying it explicitly in your pom:

<Import-Package>
  org.osgi.service.blueprint;resolution:=optional
</Import-Package>

If you’re using code-first development, you’ve got a lot of build tools open to you. You can use bnd on its own, you can use it in combination with Ant, or you can use the powerful Maven build plug-in. If you’re doing manifest-first development, you have a similar choice between Ant and Maven. Several tools allow manifest-first building using Ant, although most involve some degree of duplication of information with the IDE. There’s also a nice set of Maven plug-ins that neatly share information with the Eclipse PDE IDE.

8.2.3. Ant and Eclipse PDE

With manifest-first development, one of the main challenges of building an OSGi bundle vanishes; there’s no need for the tools to work out a manifest, because you’ve already written one. But it’s still necessary to work out a classpath for compiling. Ideally, this classpath should be the same as the one used by the Eclipse IDE, without having to duplicate it between IDE and command-line environments. For bonus points, the compile stage should pay attention to the manifest and only allow imported packages onto the compile-time classpath.

The Eclipse Headless Build

It turns out that this is a challenging set of requirements. The only Ant tool that fully satisfies them is a miniaturized headless version of the IDE.

Despite running without a GUI, the Eclipse headless build is fairly heavyweight. Even projects that want to take advantage of Eclipse’s metadata often try to build without the direct dependency on the Eclipse runtime. The Eclipse headless build is also fairly inflexible, so integrating extra build steps like coverage instrumentation or custom packaging can be painful.

Osgijc

An alternative to the Eclipse headless build is a third-party tool called osgijc. Osgijc is a replacement for the Ant <javac> task that reads the contents of an Eclipse .classpath file and adds it to the compile classpath. Users will need to manually add in their own classpath entries for bundles that haven’t been explicitly included in the .classpath file. The most convenient way of doing this is to add the directory containing the target runtime to the Ant classpath.

Unlike the Eclipse headless build, osgijc doesn’t validate that a bundle imports all the packages it needs to compile, or that those packages have been exported by some other bundle. Despite the name, osgijc does a conventional Java compile in a flat and open class space.

The osgijc tool, therefore, relies on manifest validation having been done earlier, in the IDE environment. If team members inadvertently deliver code that couldn’t compile in Eclipse because of missing manifest imports or exports, osgijc will build the broken bundle without complaint. Problems will only be discovered during the test phase when classes can’t be loaded.

Rolling Your Own, Cheating, and Other Options

The osgijc tool isn’t complex, and many teams opt to roll their own Ant tasks to consume Eclipse metadata instead. Parsing the Eclipse .classpath file allows you to identify what other projects need to be on an individual project’s classpath, and then you can bulk-add the bundles from the target OSGi environment. If you’re feeling enthusiastic, you can even read in all the .classpath files to work out the right order to build the projects, so that all the dependencies get built in the right order. But unless you parse the MANIFEST.MF files themselves (at which point you’re venturing dangerously close to writing your own OSGi framework), you’ll still be dependent on the Eclipse IDE to validate that the manifests are as they should be.

A more basic approach, which can be effective for many smaller projects, is to manually maintain both the IDE .classpath and Ant or Maven build files in parallel. Although this violates the software engineering practice of not writing anything down more than once, you may find it’s a simple and pragmatic solution to getting things building. (This is how the Fancy Foods sample code is built, for example.)

8.2.4. Maven Tycho

Although most attempts to build OSGi bundles from Eclipse projects have focused on Ant, a promising new project called Eclipse Tycho brings together Eclipse and Maven. Tycho reuses Eclipse metadata to keep pom files small and ensure that automated builds behave the same way as builds within the IDE. Tycho appears to be providing a true manifest-first OSGi build, rather than building in a flat classpath and relying on the IDE to catch manifest problems.

Because it’s so tightly integrated with Eclipse PDE, Tycho is a peculiar hybrid of Maven and Eclipse. Many familiar Maven idioms have disappeared. The default disk layout for source and resources is an Eclipse layout, rather than a Maven one. You no longer need to use <dependency> sections to declare your dependencies—Tycho ignores them. Although Maven repositories still have their place, how they’re used and what gets put into them are different. If you’re a long-time Maven user, you may find the Tycho experience unsettling; but if your heart lies with Eclipse, Tycho will feel warm and comforting, like an old pair of slippers.

Because it’s so different from conventional Maven, Tycho works best if you’ve already got your projects set up in Eclipse, but haven’t yet written an automated build for them. If you already have everything laid out on disk in the standard Maven way, you can make Tycho work with Maven’s build layouts, but it will require extra plug-in configuration.

Tycho is a Maven plug-in, so getting started with Tycho is easy. All you’ll need is Maven 3 and a pom file. One of the nice things about Tycho is that Tycho poms are small, especially if you put the plug-in configuration in a parent pom (which we haven’t done here!). Almost all of the information needed to build the plug-in is shared between the IDE and the build tooling.

Listing 8.4. The pom.xml file for a Tycho build of the fancyfoods.business bundle
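A minimal Tycho pom for the bundle takes roughly this shape (a sketch: the group ID, version, and ${tycho-version} property are illustrative assumptions, not the book’s exact values):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <!-- Illustrative coordinates; the artifact ID must match the
         Bundle-SymbolicName in the bundle manifest. -->
    <groupId>fancyfoods</groupId>
    <artifactId>fancyfoods.business</artifactId>
    <!-- Must match the Bundle-Version in the manifest. -->
    <version>1.0.0</version>
    <!-- The eclipse-plugin packaging tells Tycho to build an OSGi bundle. -->
    <packaging>eclipse-plugin</packaging>

    <build>
        <plugins>
            <plugin>
                <groupId>org.eclipse.tycho</groupId>
                <artifactId>tycho-maven-plugin</artifactId>
                <version>${tycho-version}</version>
                <extensions>true</extensions>
            </plugin>
        </plugins>
    </build>
</project>
```

Everything else, including the compile classpath, is worked out from the manifest and the Eclipse project metadata.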

Like Eclipse PDE, Tycho uses the terms plug-in and bundle somewhat interchangeably, so to build a normal bundle, you’ll have to use Tycho’s eclipse-plugin packaging type. Don’t worry, Tycho will build you a standard OSGi bundle, despite the name!

Almost all of the information about how your bundle should be built is drawn from the bundle manifest and the Eclipse metadata, rather than the pom. This avoids duplication, and it has the added bonus that you can use Eclipse’s nice tools (which we’ll come to in section 9.1.1) to control things. The symbolic name and version strings in the pom should exactly match the ones in your manifest, but that’s the only information you’ll need to duplicate.

You may need to adjust your Eclipse build.properties file (using the Eclipse GUI or a text editor) to ensure nonstandard resources like the OSGI-INF folder are included in the built JAR.

Unlike many OSGi-aware build solutions, Tycho fully honors your bundle manifest at compile time. It doesn’t take the common shortcut of working out what bundles should be on the classpath, and then treating that classpath as a flat classpath after that. Tycho uses the rules of OSGi to work out what’s visible to your plug-in.

But how does Tycho work out what should be on the classpath in the first place? You’ll notice that there’s no <dependency> section in the pom.xml—any <dependency> elements are ignored. The answer is that Tycho provisions as it builds your plug-ins.

Provisioning

Provisioning is nothing new for Maven users (every time Maven downloads a JAR from a Maven repository, it’s provisioning), but the way Tycho provisions is both sophisticated and convenient. Dependencies are implicitly declared by Import-Package statements, rather than explicitly declared with <dependency> elements. Tycho looks for bundles with matching package exports in its repositories.
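For example, suppose a bundle’s manifest imports the Fancy Foods API packages (the package names follow the sample application; the version ranges are illustrative):

```
Import-Package: fancyfoods.food;version="[1.0.0,2.0.0)",
 fancyfoods.offers;version="[1.0.0,2.0.0)"
```

Tycho treats each imported package as a dependency to be satisfied, and searches its repositories for bundles whose Export-Package lists match.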

Where do the repositories come from? Tycho can’t provision against a normal Maven repository, because it doesn’t know which bundles export which packages. Clearly, downloading everything in the Maven repository to read all the bundle manifests isn’t practical. (You may suspect your normal Maven builds already download everything, but trust us, there’s more in there!)

When Tycho runs the install goal, it adds extra metadata to the local Maven repository along with the built bundles. This metadata allows it to quickly identify and download bundles with appropriate package exports when building. For example, this allows it to work out that it should add the fancyfoods.api bundle to the classpath when building the fancyfoods.cheese.department bundle (assuming you built the api bundle first, using Tycho!).

But you’re not going to build all your external dependencies using Tycho before trying to build your product. Tycho can be configured to provision against external repositories. Unfortunately, these repositories must be in the p2 format. (For a refresher on p2, see section 7.3.2.) Although p2 repositories are widely available for Eclipse-based projects, the format isn’t common outside the Eclipse ecosystem, which limits the utility of Tycho for non-Eclipse-based OSGi development. But with some elbow grease you can make Tycho provisioning work for enterprise OSGi, and you may find the benefits of Tycho outweigh the clunkiness of getting the provisioning going.

You have two options for making your external dependencies visible to Tycho’s provisioner. The first is to step back to a slightly modified version of Maven’s normal dependency declarations. The other is to generate your own p2 repository.

Using Maven Dependencies

Although Tycho normally ignores Maven’s <dependency> elements, it can be configured to consider them in its provisioning by adding the following plug-in configuration:

<plugin>
    <groupId>org.eclipse.tycho</groupId>
    <artifactId>target-platform-configuration</artifactId>
    <version>${tycho-version}</version>
    <configuration>
        <resolver>p2</resolver>
        <pomDependencies>consider</pomDependencies>
    </configuration>
</plugin>

Sadly, at the time of writing, Tycho didn’t resolve transitive dependencies (the dependencies of your dependencies). This means you’ll have to list more dependencies in your pom files than you would if you weren’t using Tycho! But you may find that listing the dependencies isn’t too onerous if you declare all the external dependencies for your project in a single parent pom file. Internal dependencies will still be handled automatically by Tycho.

You may find that managing your dependency list quickly becomes tedious. One way of managing it is to write scripts that autogenerate it from your runtime environment. The pom.properties file in each JAR’s META-INF/maven folder records the JAR’s group ID, artifact ID, and version, so you can generate a complete <dependencies> element by scanning every JAR in the deploy directory.
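As a sketch of that approach, the following standalone program scans a directory of JARs and prints a <dependencies> element built from the pom.properties files that Maven embeds in the JARs it builds (the class name and the deploy-directory argument are hypothetical):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Enumeration;
import java.util.Properties;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class DependencyListGenerator {

    // Format one Maven <dependency> element from a JAR's coordinates.
    static String formatDependency(String group, String artifact,
                                   String version) {
        return "    <dependency>\n"
             + "        <groupId>" + group + "</groupId>\n"
             + "        <artifactId>" + artifact + "</artifactId>\n"
             + "        <version>" + version + "</version>\n"
             + "    </dependency>\n";
    }

    // Scan every JAR in the deploy directory, reading the
    // META-INF/maven/<group>/<artifact>/pom.properties entry that
    // Maven embeds in each JAR it builds.
    static String generate(Path deployDir) throws IOException {
        StringBuilder xml = new StringBuilder("<dependencies>\n");
        try (DirectoryStream<Path> jars =
                 Files.newDirectoryStream(deployDir, "*.jar")) {
            for (Path jar : jars) {
                try (JarFile jarFile = new JarFile(jar.toFile())) {
                    Enumeration<JarEntry> entries = jarFile.entries();
                    while (entries.hasMoreElements()) {
                        JarEntry entry = entries.nextElement();
                        if (entry.getName().startsWith("META-INF/maven/")
                            && entry.getName().endsWith("/pom.properties")) {
                            Properties p = new Properties();
                            try (InputStream in =
                                     jarFile.getInputStream(entry)) {
                                p.load(in);
                            }
                            xml.append(formatDependency(
                                p.getProperty("groupId"),
                                p.getProperty("artifactId"),
                                p.getProperty("version")));
                        }
                    }
                }
            }
        }
        return xml.append("</dependencies>").toString();
    }

    public static void main(String[] args) throws IOException {
        // Pass the path of your runtime's deploy directory.
        if (args.length > 0) {
            System.out.println(generate(Paths.get(args[0])));
        }
    }
}
```

JARs that weren’t built by Maven won’t carry a pom.properties file, so you’d still need to handle those few by hand.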

Generating your own p2 repository

Another option is to bypass the <pomDependencies> route entirely and generate your own p2 repository. Eclipse provides command-line tools for doing this. Generating your repository directly from your runtime environment has some advantages over normal Maven dependency management; because you know the compile-time environment is identical to the runtime environment, you don’t risk building bundles that compile against the latest and greatest available from a Maven repository, but then fail miserably at runtime when required bundles are either back-level or missing entirely.

To generate the p2 repository, you’ll first need to copy all your runtime bundles into a folder called plugins, inside a holding folder, ${bundles.dir}. Then run the following command:

${eclipse.home}/eclipse -nosplash -application
  org.eclipse.equinox.p2.publisher.FeaturesAndBundlesPublisher
  -metadataRepository file:${repository.dir}
  -artifactRepository file:${repository.dir}
  -source ${bundles.dir} -compress -publishArtifacts

where ${eclipse.home} is an Eclipse installation and ${repository.dir} is the output directory for your repository.

You now have a p2 repository that exactly represents the bundles available in your runtime environment. To configure Tycho to use it, add the following to your pom.xml (or to a parent pom.xml):

<repositories>
    <repository>
        <id>runtime-environment</id>

        <layout>p2</layout>
        <url>file://${repository.dir}</url>
    </repository>
</repositories>

In section 8.2.2, we described how the Maven bundle plug-in could automatically find Blueprint files. As far as we know, the Tycho plug-in hasn’t achieved this level of support for enterprise OSGi. But whether you’re using Tycho or the bundle plug-in, Maven does have some more enterprise OSGi tricks up its sleeve.

8.2.5. The Maven EBA plug-in

Apache Aries provides a useful Maven plug-in for building EBA archives. All dependencies listed in the EBA pom are zipped up into a generated .eba file.

Listing 8.5. The pom.xml file to build the Fancy Foods EBA

By default, the EBA plug-in will look for an APPLICATION.MF file in src/main/resources/META-INF. If none is found, it won’t include an application manifest (don’t worry, the manifest is optional, so your application will still work). To have the EBA plug-in automatically generate a manifest based on the .eba contents and the pom symbolic name, set generateManifest to true.
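A sketch of the shape such a pom takes (the eba packaging type and generateManifest flag come from the Aries EBA plug-in; the group IDs, versions, and dependency list are illustrative):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>fancyfoods</groupId>
    <artifactId>fancyfoods.application</artifactId>
    <version>1.0.0</version>
    <!-- The eba packaging type is contributed by the plug-in below. -->
    <packaging>eba</packaging>

    <!-- Every dependency listed here is zipped into the .eba file. -->
    <dependencies>
        <dependency>
            <groupId>fancyfoods</groupId>
            <artifactId>fancyfoods.api</artifactId>
            <version>1.0.0</version>
        </dependency>
        <!-- ...one dependency per application bundle... -->
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.aries</groupId>
                <artifactId>eba-maven-plugin</artifactId>
                <extensions>true</extensions>
                <configuration>
                    <!-- Generate APPLICATION.MF from the pom and contents. -->
                    <generateManifest>true</generateManifest>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```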

Whichever style of OSGi build you go for (manifest-first or code-first, Maven or Ant, tightly integrated with your IDE or relatively standalone), compiling and packaging up your OSGi bundles and applications should only be the first half of your build automation. The second, equally important, half is automated testing. As with compiling, testing OSGi bundles presents some unique challenges, and a range of tools have sprung up to help.

8.3. Testing OSGi applications

If you’re like a lot of developers, you’ll probably divide your testing into a few phases. The lowest-level tests you’ll run are simple unit tests that exercise individual classes, but not their interactions with one another. The next group of tests in your test hierarchy are the integration tests, which test the operation of your application. Finally, at the highest level, you may have system tests, which test the complete system, preferably in an environment that’s close to your production one.

When you’re testing OSGi bundles, each of these types of testing requires a different approach—different from the other phases, and also different from how you’d do similar testing for an application intended to run on a Java EE server or standalone. We’ll start by discussing unit testing, because that’s the simplest case in many ways. We’ll then show you some tools and strategies that we hope you’ll find useful for integration and system testing.

Unit testing OSGi bundles is straightforward, but you’ll find you’ve got choices to make, and a bewildering array of tool options when you start looking at integration testing. The good news is that if you’ve already decided how to build your bundles, some of the choices about how best to test them will have been made for you. The bad news is that you’ve still got a few choices! Figure 8.2 shows some of these choices.

Figure 8.2. Testing OSGi applications can be done in many ways. Any test process should include a simple unit test phase, but the best way to do integration testing depends on a number of factors, including which build tools are already being used.

8.3.1. Unit testing OSGi

By definition, unit tests are designed to run in the simplest possible environment. The purest unit tests have only the class you’re testing and any required interfaces on the classpath. In practice, you’ll probably also include classes that are closely related to the class that’s being tested, and possibly even some external implementation classes. But unit tests needn’t—and shouldn’t—require big external infrastructure like an OSGi framework.

How can code that’s written to take advantage of the great things OSGi offers work without OSGi? How can such code be tested? Luckily, the answer is: easily.

Mocking Out

If you followed our advice in chapter 5, you probably don’t have many explicit dependencies on OSGi classes in your code. The enterprise OSGi features of your runtime environment will ideally be handling most of the direct contact with the OSGi libraries for you.

Even if you need to reference OSGi classes directly, you may feel more comfortable writing your application so that it doesn’t have too many explicit OSGi dependencies. After all, loose coupling is good, even when the coupling is to a framework as fabulous as OSGi.

If you do have direct OSGi dependencies, using mocking frameworks to mock out OSGi classes like BundleContext can be helpful. You can even mock up service lookups, although if you’re using Blueprint or Declarative Services you’ll probably rarely have cause to use the OSGi services API directly. Using Blueprint has a lot of testability advantages beyond eliminating direct OSGi dependencies.

Blueprint-Driven Testing

One of the nice side effects of dependency injection frameworks like Blueprint is that unit testing becomes much easier. Although Blueprint won’t do your testing for you, testing code that was written with Blueprint in mind is an awful lot easier than otherwise.

Separating out the logic to look up or construct dependencies from the work of doing things with those dependencies allows dependencies to be stubbed out without affecting the main control flow you’re trying to test. The key is to realize that although you certainly need a Blueprint framework to be present to run your application end to end, you don’t need it at all to test components of the application in isolation. Instead, your test harness can inject carefully managed dependencies into the code you’re trying to test. You can even inject mock objects instead of real objects, which would be almost impossible if your intercomponent links were all hard-wired.

To see this in action, look at the DesperateCheeseOffer again. It depends on the Inventory, which requires container-managed JPA and a functioning database. You don’t want to get a bunch of JPA entities and a database going for a unit test. Instead, you’ll mock up an Inventory object and inject it through the same setter methods you created for Blueprint injection.

Listing 8.6. Using mocked injection to unit test the desperate cheese offer
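The listing takes roughly the following shape. This sketch uses hand-rolled stubs rather than a mocking library, and simplified stand-ins for the Fancy Foods Inventory and offer types (the real interfaces have more methods, so treat the names here as illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class DesperateCheeseOfferTest {

    // Simplified stand-ins for the real Fancy Foods types.
    interface Food {
        String getName();
    }

    interface Inventory {
        List<Food> getFoodsWhoseNameContains(String name, int max);
    }

    // A trimmed-down offer. At runtime, Blueprint injects the
    // Inventory; in the test, we drive the same setter by hand.
    static class DesperateCheeseOffer {
        private Inventory inventory;

        public void setInventory(Inventory inventory) {
            this.inventory = inventory;
        }

        public Food getOfferFood() {
            List<Food> cheeses =
                inventory.getFoodsWhoseNameContains("cheese", 1);
            return cheeses.isEmpty() ? null : cheeses.get(0);
        }
    }

    public static void main(String[] args) {
        // Stub Inventory: no JPA, no database, just a canned answer.
        Inventory stubInventory = (name, max) ->
            Arrays.asList((Food) () -> "Wensleydale cheese");

        DesperateCheeseOffer offer = new DesperateCheeseOffer();
        offer.setInventory(stubInventory);

        // The offer should surface the cheese the stub returned.
        if (!"Wensleydale cheese".equals(offer.getOfferFood().getName())) {
            throw new AssertionError("Unexpected offer food");
        }
    }
}
```

With a real mocking library like Mockito, the stub setup collapses to a couple of lines, but the injection pattern is the same.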

This sort of testing allows you to verify that if the injected Inventory object behaves as expected, the DesperateCheeseOffer does the right thing. You could also add more complex tests that confirmed that the cheese offer tolerated the case when Inventory had no cheeses in it at all, or the case when there was more than one cheese present in the inventory.

Although tests running outside an OSGi framework are straightforward and can spot a range of problems, there are several classes of problems they can’t detect. In particular, unit tests will still continue to run cleanly, even if bundles fail to start or Blueprint services never get injected. To catch these issues, you’ll also want to test the end-to-end behavior of your application, inside an OSGi framework.

How closely this OSGi framework matches your production environment is a matter of taste. You may choose to do automated integration testing in a minimal environment, and follow it up with a final system test in a mirror of your production system. Alternatively, you may find you flush more problems out more quickly if you test in a more fully featured environment. The tools and techniques we’ll describe for testing inside an OSGi environment are broadly suitable for both integration and system testing.

8.3.2. Pax Exam

The gold standard for OSGi testing tools is a tool called Pax Exam. Pax Exam is part of a suite of OSGi-related tools developed by the OPS4J open source community. In contrast to other open source communities like the Apache and Eclipse Foundations, OPS4J has an interestingly flat structure that emphasizes open participation as well as open consumption. There’s no notion of a committer, no barrier to committing source changes, and little internal hierarchy.

Pax Exam builds on other tools developed by OPS4J, such as Pax Runner, to provide a sophisticated framework for launching JUnit—or TestNG—tests inside an OSGi framework and collecting the results. Under the covers, the Pax Exam framework wraps test classes into a bundle (using bnd to generate the manifest), and then automatically exposes the tests as OSGi services. Pax Exam then invokes each test in turn and records the results.

How Clean is Your Framework?

By default, Pax Exam will start a fresh framework for each test method, which means Pax Exam tests may run slowly if you’ve got a lot of them. In recent versions, you can speed things up (at the risk of interesting side effects) by specifying an @ExamReactorStrategy annotation. You can also choose whether Pax Exam launches the OSGi framework inside the main JVM, or forks a new JVM for each framework and runs the tests over RMI (Remote Method Invocation). Not spawning a new JVM makes things far faster, and it also means you can debug your tests without having to attach remote debuggers. But many of the more useful options for configuring frameworks are only supported in the remote framework case.

Which container is used is determined by which one you declare as a Maven dependency. To use the quicker nonforking container, add the following dependency:

<dependency>
    <groupId>org.ops4j.pax.exam</groupId>
    <artifactId>pax-exam-container-native</artifactId>
    <version>${paxexamversion}</version>
    <scope>test</scope>
</dependency>

To use the more powerful, but slower, Pax Runner–based container, specify the following:

<dependency>
    <groupId>org.ops4j.pax.exam</groupId>
    <artifactId>pax-exam-container-paxrunner</artifactId>
    <version>${paxexamversion}</version>
    <scope>test</scope>
</dependency>

Enabling Tests for Pax Exam

A JUnit test intended for Pax Exam has a few key differences from one that runs standalone. Running your test code in an entirely different JVM from the one used to launch the test, with RMI and all sorts of network communication going on in the middle, isn’t something the normal JUnit test runner is designed to handle. You’ll need to run with a special Pax Exam runner instead by adding a class-level annotation:

@RunWith(org.ops4j.pax.exam.junit.JUnit4TestRunner.class)

Pax Exam can also inject a bundle context into your test class:

import javax.inject.Inject;

@Inject
protected BundleContext ctx;

Warning: Why is Nothing Being Injected?

If you’re using Pax Exam injection, make sure to use the javax.inject.Inject annotation and not org.ops4j.pax.exam.Inject. The Pax Exam annotation is nonfunctional in Pax Exam 2.0 and up.

It’s a good pattern to use the bundle context for querying the state of the framework, but to use Pax Exam’s API to configure the framework and install bundles.

Configuring a Framework

Pax Exam configuration of the framework is done in a method annotated with @Configuration. Pax Exam provides a fluent API for building up configuration options, giving you detailed control over the contents and configuration of your OSGi framework. You can specify the OSGi framework implementation (Equinox, Felix, Knopflerfish) and version, or any combination of implementations and versions, along with system properties, JVM options, and the list of bundles to install. All of these can be controlled using methods statically imported from org.ops4j.pax.exam.CoreOptions and combined into an array using the CoreOptions.options() method:

@Configuration
public static Option[] configuration() {
    MavenArtifactProvisionOption foodGroup = mavenBundle().groupId(
            "fancyfoods");
    Option[] fancyFoodsBundles = options(
            foodGroup.artifactId("fancyfoods.department.cheese").
               version("1.0.0"),
            foodGroup.artifactId("fancyfoods.api").
               version("1.0.0"),
            foodGroup.artifactId("fancyfoods.persistence").
               version("1.0.0"),
            foodGroup.artifactId("fancyfoods.datasource").
               version("1.0.0"));
    Option[] server = PaxConfigurer.getServerPlatform();
    Option[] options = OptionUtils.combine(fancyFoodsBundles,
                                     server);

    return options;
}

Here you’re installing the fancyfoods.department.cheese bundle, along with its dependencies and the bundles that make up the hosting server. Most of your tests will probably run on the same base platform (or platforms), so it’s worth pulling common configuration code out into a configuration utility, PaxConfigurer in this case. If you do this, you can use OptionUtils.combine() to merge an existing option array and extra Option varargs into one big array.

Using Maven

Pax Exam is well integrated with Maven, so one of the most convenient ways of specifying bundles to install is using their Maven coordinates and the CoreOptions.mavenBundle() method. Versions can be explicitly specified, pulled from a pom.xml using versionAsInProject(), or left implicit to default to the latest version.

Using an existing server install

As well as your application bundles themselves, you’ll need to list all the other bundles in your target runtime environment. If you think about using mavenBundle() calls to specify every bundle in your server runtime, you may start to feel uneasy. Do you really need to list out the Maven coordinates of every bundle in your Aries assembly—or worse yet, every bundle in your full-fledged application server?

Luckily, the answer is no—Pax Exam does provide alternate ways of specifying what should get installed into your runtime. You can install Pax Runner or Karaf features if any exist for your server, or point Pax Exam at a directory that contains your server. For testing applications intended to run on a server, this is the most convenient option—you probably don’t need to test your application with several different OSGi frameworks or see what happens with different Blueprint implementations, because your application server environment will be well defined. Listing 8.7 shows how to configure a Pax Exam environment based on the Aries assembly you’ve been using for testing.

Using profiles

Pax Exam also provides some methods to reference convenient sets of bundles, such as a webProfile() and a set of junitBundles(). Remember that Pax Exam installs only the bundles you tell it to install—your server probably doesn’t ship JUnit, so if you point Pax Exam at your server directory, you’ll need to add in your test class’s JUnit dependencies separately. Because it can be complex, even with profiles, we find it can be convenient to share the code for setting up the test environment. The following listing shows a utility class for setting up a test environment that reproduces the Aries assembly we’ve been using throughout.

Listing 8.7. A class that provides options that can be shared between tests

Here ${aries.assembly} should be replaced with a real path; there’s no clever variable substitution going on!

Warning: One OSGi Framework Good, Two OSGi Frameworks Bad

Pax Exam will install all the bundles in the scanned directory into an OSGi framework. Because the scanned directory contains its own OSGi framework bundle, this means you may get slightly more than you bargained for. Installing one OSGi framework into another is possible, but the classloading gets complicated! To avoid installing multiple frameworks, you can either copy all your server bundles except for the OSGi framework itself to another directory, or rename the OSGi bundle so that it’s not captured by your filter. For example, if you rename the bundle to osgi.jar (no dash) and then specify the filter "*-*.jar", Pax Exam will install every JAR except your framework JAR (assuming all the other JARs have dashes before their version numbers).

All this setup might seem like a lot of work, and it is. Luckily, once you’ve done it for one test, writing all your other tests will be much easier.

The Pax Exam Test

What kind of things should you be doing in your tests? The first thing to establish is that your bundles are present, and that they’re started. (And if not, why not!) Bundles that haven’t started are a common cause of problems in OSGi applications, and if anything, these problems are even more common with Pax Exam because of the complexity of setting up the environment. Despite this, checking bundle states isn’t a first-class Pax Exam diagnostic feature. It’s worth adding some utility methods in your tests that try to start the bundles in your framework to make sure everything is started, and fail the test if any bundles can’t start.

After this setup verification, what you test will depend on the exact design of your application. Verifying that all your services, including Blueprint ones, are present in the Service Registry is a good next step—and the final step is to make sure your services all behave as expected, for a variety of inputs. The following listing shows a test for the cheese bundle.

Listing 8.8. A Pax Exam integration test

The ServiceTracker

The eagle-eyed among you will have noticed that the test in listing 8.8 uses a ServiceTracker to locate the service. We did mention this briefly in section 6.2.1, but for a more detailed explanation we suggest that you look at appendix A.

Unlike normal JUnit test methods, Pax Exam test methods can take an optional BundleContext parameter. You can use it to locate the bundles and services you’re trying to test.

Warning: Blueprint and Fast Tests

An automated test framework will generally start running tests as soon as the OSGi framework is initialized. This can cause fatal problems when testing Blueprint bundles, because Blueprint initializes asynchronously. At the time you run your first test, half your services may not have been registered! You may find you suffer from perplexing failures, reproducible or intermittent, unless you slow Pax Exam down. Waiting for a Blueprint-driven service is one way of ensuring things are mostly ready, but unfortunately just because one service is enabled doesn’t mean they all will be. In the case of the cheese test, waiting for the SpecialOffer service will do the trick, because that’s the service you’re testing.

Running the Tests

If you’re using Maven, and you keep your test code in Maven’s src/test/java folder, your tests will automatically be run when you invoke the integration-test or install goals. You’ll need one of the Pax Exam containers declared as a Maven dependency. If you’re using Ant instead, don’t worry—Pax Exam also supports Ant.

8.3.3. Tycho test

Although Pax Exam is a popular testing framework, it’s not the only one. In particular, if you’re using Tycho to build your bundles, you’re better off using Tycho to test them as well. Tycho offers a nice test framework that in many ways is less complex than Pax Exam.

Tycho is a Maven-based tool, but like Tycho build, Tycho test uses an Eclipse PDE directory layout. Instead of putting your tests in a test folder inside your main bundle, Tycho expects a separate bundle. It relies on naming conventions to find your tests, so you’ll need to name your test bundle with a .tests suffix.

Fragments and OSGi unit testing

Tools like Pax Exam will generate a bundle for your tests, but with Tycho you’ve got control of your test bundle. To allow full white-box unit testing of your bundle, you may find it helpful to make the test bundle a fragment of the application bundle. This will allow it to share a classloader with the application bundle and drive all classes, even those that aren’t exported.
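For example, a test fragment for the Fancy Foods business bundle might carry a manifest like this (the bundle and fragment names are illustrative):

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: fancyfoods.business.tests
Bundle-Version: 1.0.0
Fragment-Host: fancyfoods.business
```

Because the fragment shares its host’s classloader, the tests can exercise even packages the host doesn’t export.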

Configuring a Test Framework

Tycho will use your bundle’s package imports to provision required bundles into the test platform. Like any provisioning that relies solely on package dependencies, it’s unlikely that all the bundles your application needs to function properly will be provisioned. API bundles will be provisioned, but service providers probably won’t be. It’s certain that the runtime environment for your bundle won’t be exactly the same as the server you eventually intend to deploy on.

To ensure Tycho provisions all your required bundles, you can add them to your test bundle’s manifest using Require-Bundle. (At this point, you may be remembering that in chapter 5 we strongly discouraged using Require-Bundle. We’re making an exception here because it’s such a handy provisioning shortcut, and it’s only test code. We won’t tell, if you don’t!)

Tycho will automatically provision any dependencies of your required bundles, so you won’t need to include your entire runtime platform. But you may find the list is long enough that it’s a good idea to make one shared test.dependencies bundle whose only function is to require your test platform. All your other test bundles can require the test.dependencies bundle.
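A sketch of such a shared test.dependencies manifest (the Aries bundle symbolic names shown are real bundles, but the right list depends entirely on your target runtime):

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: fancyfoods.test.dependencies
Bundle-Version: 1.0.0
Require-Bundle: org.apache.aries.blueprint,
 org.apache.aries.jpa.container,
 org.apache.aries.transaction.manager
```

Each test bundle then requires just fancyfoods.test.dependencies, and Tycho pulls in the rest.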

Your test bundles will almost certainly have a dependency on JUnit, so you’ll need to add in one of the main Eclipse p2 repositories to your pom so that Tycho can provision the JUnit bundle:

<repository>
    <id>eclipse-helios</id>
    <layout>p2</layout>
    <url>http://download.eclipse.org/releases/helios</url>
</repository>

As with Pax Exam, you may find it takes you a few tries to get the Tycho runtime platform right. Until you’ve got a reliable platform definition, you may spend more time debugging platform issues than legitimate failures. Don’t be deterred—having a set of solid tests will pay back the effort many times over, we promise!

8.3.4. Rolling your own test framework

So far, the OSGi testing tools we’ve discussed have all provided some way of integrating a unit test framework, like JUnit, into an OSGi runtime. To do this they’ve required you to do some fairly elaborate definition and configuration of this runtime, either in test code (Pax Exam) or in Maven scripts and manifest files (Tycho). Sometimes the benefits of running JUnit inside an OSGi framework don’t justify the complexity of getting everything set up to achieve this.

Instead of using a specialized OSGi testing framework, some developers rely on much more low-tech or hand-rolled solutions. A bundle running inside an OSGi framework can’t interact with a JUnit runner without help, but this doesn’t mean it can’t interact with the outside world. It can provide a bundle activator or eager Blueprint bean to write out files, or it can expose a servlet and write status to a web page. Your test code can find the file or hit the servlet and parse the output to work out if everything is behaving as expected. It can even use a series of JUnit tests that hit different servlets or read different log files, so that you can take advantage of all your existing JUnit reporting infrastructure.

This method of testing OSGi applications always feels a little inelegant to us, but it can work well as a pragmatic test solution. Separating out the launching of the framework from the test code makes it much easier to align the framework with your production system. You can use your production system as is, to host the test bundles. It’s also much easier to debug configuration problems if the framework doesn’t start as expected. Both authors have witnessed the sorry spectacle of confused Pax Exam users shouting at an innocent laptop, “Where’s my bundle? And why won’t you tell me that six bundles in my framework failed to start?” at the end of a long day’s testing.

If you’re planning to deploy your enterprise OSGi application on one of the bigger and more muscular application servers, you may find that it’s more application server than you need for your integration testing. In this case, you may be better off preparing a sandbox environment for testing, either by physically assembling one, or by working out the right minimum set of dependencies.

The Aries assembly you’ve been using to run the examples through the course of the book is a good example of a hand-assembled OSGi runtime. You’ve started with a simple Equinox OSGi framework and added in the bundles needed to support the enterprise OSGi programming model. Alternatively, there are several open source projects that aim to provide lightweight and flexible OSGi runtimes.

Pax Runner

Pax Runner is a slim container for launching OSGi frameworks and provisioning bundles. We’ve already met Pax Runner in section 8.3.2 because Pax Exam relies heavily on Pax Runner to configure and launch its OSGi framework. You may find it easier to bypass Pax Exam and use Pax Runner directly as a test environment. Pax Runner comes with a number of predefined profiles, which can be preloaded into the framework. For example, to launch an OSGi framework that’s capable of running simple web applications, it’s sufficient to use this command:

pax-run.sh --profiles=web

At the time of writing, there isn’t a Pax Runner profile that fully supports the enterprise OSGi programming model. But profiles are text files that list bundle locations, so it’s simple to write your own profiles. These profiles can be shared between members of your development team, which is nice, and fed to Pax Exam for automated testing, which is even nicer. Pax Runner also integrates with Eclipse PDE, so you can use the same framework in your automated testing and your development-time testing:

pax-run.sh --file:///yourprofile.txt

Although Pax Runner is described as a provisioning tool, it can’t provision bundles like the provisioners we discussed in chapter 7. You’ll need to list every bundle you want to include in your profile file.
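A profile file is nothing more exotic than a list of bundle URLs, one per line. As a sketch, a homemade profile that pulls in some Aries bundles might look something like the following (the Maven coordinates and versions here are illustrative, so check which versions you actually need):

mvn:org.apache.aries/org.apache.aries.util/0.3
mvn:org.apache.aries.proxy/org.apache.aries.proxy/0.3
mvn:org.apache.aries.blueprint/org.apache.aries.blueprint/0.3

Pax Runner’s mvn: URL handler resolves each coordinate from a Maven repository and installs the resulting bundle into the framework.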

Karaf

An alternative to Pax Runner, with slightly more dynamic provisioning behavior, is Apache Karaf. As the name implies, Karaf is a little OSGi container. Karaf has some handy features that are missing in Pax Runner, like hot deployment and runtime provisioning. This functionality makes Karaf suitable for use as a production runtime, rather than just as a test container. Karaf is so suitable for production, in fact, that it underpins the Apache Geronimo server. But Karaf isn’t as well integrated into existing unit test frameworks as Pax Runner, so if you use Karaf for your testing you’ll mostly be limited to the “use JUnit to scrape test result logs” approach we described earlier.

Karaf has a lot of support for Apache Aries, which makes it a convenient environment for developing and testing enterprise OSGi applications. So well integrated is Karaf with Aries that it comes with Blueprint support by default. Even better, if you list the installed bundles using the osgi:list command, there’s a special column that shows each bundle’s Blueprint status.

Hot deployment

Like the little Aries assembly we’ve been using, Karaf can install bundles from the filesystem dynamically. Karaf monitors the ${karaf.home}/deploy directory, and will silently install any bundles (or feature definitions) dropped into it. To confirm which bundles are installed and see their status, you can use the list command.
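For example, assuming Karaf is installed in ${KARAF_HOME} and you have a bundle JAR handy (the bundle name here is hypothetical), deployment is nothing more than a copy:

cp fancyfoods.web_1.0.0.jar ${KARAF_HOME}/deploy/

A few moments later, the bundle should appear, started, in the output of the list command.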

Karaf features

Karaf features, like Pax Runner profiles, are precanned groups of bundles that can easily be installed. As an added bonus, Karaf features have been written for many of the Apache Aries components. To see all the available features, type features:list. To install a new feature, type features:install -v (the -v option gives a verbose install). To get a working Apache Aries runtime with Karaf features, it’s sufficient to execute the following commands:

features:install -v war
features:install -v jpa
features:install -v transaction
features:install -v jndi

You’ll also need to enable the HTTP server by creating a file called ${karaf.home}/etc/org.ops4j.pax.web.cfg and specifying an HTTP port:

org.osgi.service.http.port=8080

To get the Fancy Foods application going, the final step is to install a database by copying one into the deploy directory, and then copy your Fancy Foods JARs into the deploy directory. You can also install the bundles from the console using Maven coordinates.

The Karaf console is nice, but it doesn’t lend itself so well to automation. Luckily, repositories can be defined, features installed, and bundles started by changing files in ${karaf.home}. If you’re keen to automate, the Karaf console can even be driven programmatically.
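For example, features to be installed at startup can be listed in ${karaf.home}/etc/org.apache.karaf.features.cfg. A minimal sketch follows (the repository URL and version are illustrative, and the exact property values vary between Karaf releases):

featuresRepositories=mvn:org.apache.karaf.assemblies.features/standard/2.2.0/xml/features
featuresBoot=config,war,jpa,transaction,jndi

When Karaf starts, it installs everything listed in featuresBoot automatically, so checking this file into source control gives every team member, and your build system, the same runtime.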

8.4. Collecting coverage data

When you’re running tests against your application, how do you measure the effectiveness of your tests? How do you make sure you haven’t forgotten anything? Code coverage tools instrument your application classes to produce reports on which code was exercised. We feel collecting coverage data should be an integral part of your development and testing process.

The good news is that a range of coverage tools are available. Most either make Ant tasks available or integrate neatly into Maven. The bad news is that instrumenting classes is different in an OSGi environment, so most coverage tools will spectacularly fail to work out of the box.

Why doesn’t bytecode modification work with OSGi? The problem is that code that’s instrumented to collect coverage data will have extra dependencies. In a non-OSGi environment, the classpath is flat, so it’s sufficient to have the coverage library on the classpath. In an OSGi environment, with its structured classpath and limited class visibility, a bundle must explicitly import any new dependencies added at runtime.

OSGi support for load-time weaving

Version 4.3 of the OSGi specification includes new support for load-time bytecode weaving of classes. Although at the time of writing no coverage tools exploit this capability, expect future tools to take advantage of this extremely useful facility.

If you want to collect coverage data for your enterprise OSGi application (you do), there are two options. A few tools have been specifically designed to support OSGi coverage collection. Alternatively, you can carry on using your favorite tools by doing creative classpath manipulation.

The best-known coverage tool with built-in OSGi support is EclEmma. But as the name suggests, EclEmma supports only Eclipse Equinox.

8.4.1. Getting coverage tools onto the classpath

If you’re fond of another tool, like Cobertura, or you’re using Felix as your OSGi framework, there are ways to make non-OSGi-aware tools work. One option is to explicitly import the coverage library’s packages in the manifests of your instrumented bundles. You’ll either need to produce two versions of each bundle, one for testing and one for production, or make the coverage imports optional. Another option, which is less disruptive to your manifests, is to add the coverage classes to your boot classpath and use boot delegation to ensure all bundles can see them.
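As a concrete sketch, classes instrumented by Cobertura call into the net.sourceforge.cobertura.coveragedata package, so the optional-import approach adds a line like this to the manifests of your instrumented bundles (harmless in production, where the package is simply absent):

Import-Package: net.sourceforge.cobertura.coveragedata;resolution:=optional

The boot delegation approach instead puts the Cobertura JAR on the boot classpath and sets a framework property along these lines:

org.osgi.framework.bootdelegation=net.sourceforge.cobertura.*

Either way, the instrumented classes can see their extra dependency at runtime without any change to your production manifests.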

8.5. Summary

The OSGi tools ecosystem is big, and many of the tools are powerful (or complicated!). We could probably write a whole book on OSGi tools. Hopefully this chapter has given you an overview of what kind of tool options are out there, and some starting points to find more information on individual tools.

The key differentiator between many of the tool stacks is whether your OSGi development starts with an OSGi manifest or starts with plain code. After you’ve made this decision, you’ll find the choice between tools like bnd and Tycho starts to get made for you. As you move on to test your bundles, you’ll find that many of your build-time tools have natural testing extensions. In particular, almost all testing tools offer some level of Maven integration. How do all these tools fit together?

Figure 8.3 shows how the tools we’ve been discussing relate to one another and the broader tooling ecosystem. We’ll cover the middle section of figure 8.3, IDE tools, in the next chapter.

Figure 8.3. A range of tools for building, developing, and testing OSGi applications is available. Many of the tools build on or integrate with other tools, or even several other tools.
