Because of the above reasons, the OSGi Compendium Release 8.1 contains the Whiteboard Specification for Jakarta™ RESTful Web Services. Additionally, the OSGi Technology Whiteboard Implementation for Jakarta RESTful Web Services is available as reference implementation in the Eclipse namespace.
The following tutorial is an update of the old one, showing how to create RESTful Web Services with OSGi using the new specification and reference implementation.
Currently there are no Maven archetypes for OSGi that are really helpful. The enRoute archetypes are outdated and only generate skeletons for OSGi R7 projects. The org.eclipse.osgitech.rest.archetype
generates the skeleton for a single project, which is helpful to identify the dependencies, but not helpful for a multi-module project.
If you want to try out the org.eclipse.osgitech.rest.archetype
provided by the reference implementation, you can use the following command:
mvn archetype:generate \
-DarchetypeGroupId=org.eclipse.osgi-technology.rest \
-DarchetypeArtifactId=org.eclipse.osgitech.rest.archetype \
-DarchetypeVersion=1.2.2 \
-DgroupId=org.fipro.modifier \
-DartifactId=jakartars \
-Dversion=1.0.0-SNAPSHOT \
-Dpackage=org.fipro.modifier.jakartars
As mentioned, this creates a single jakartars project. Unfortunately the project structure generated by the above command is not valid and shows compile errors, as explained in this GitHub Issue.
We will not use the mentioned archetype, as we want to build a multi-module project. So let's create the project structure using the default Maven archetypes, similar to Multi-Module Project with Maven:
mvn archetype:generate \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DgroupId=org.fipro.service.modifier \
-DartifactId=jakartars \
-Dversion=1.0.0-SNAPSHOT \
-DinteractiveMode=false
cd jakartars
rmdir src /s      (Windows)
rm -r src         (Linux/macOS)
Then set the packaging in the generated jakartars/pom.xml
to pom and create the modules with the following commands:
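For reference, after this change the relevant part of jakartars/pom.xml could look like the following sketch. The module entries are appended automatically as the child modules below are generated; the names shown here simply match this tutorial:

```xml
<!-- jakartars/pom.xml: switch the packaging to pom so the project
     can aggregate the child modules created in the next steps -->
<groupId>org.fipro.service.modifier</groupId>
<artifactId>jakartars</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>pom</packaging>

<modules>
  <module>api</module>
  <module>impl</module>
  <module>rest</module>
  <module>app</module>
</modules>
```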
mvn archetype:generate \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DgroupId=org.fipro.service.modifier \
-DartifactId=api \
-Dversion=1.0.0-SNAPSHOT \
-Dpackage=org.fipro.service.modifier.api \
-DinteractiveMode=false
mvn archetype:generate \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DgroupId=org.fipro.service.modifier \
-DartifactId=impl \
-Dversion=1.0.0-SNAPSHOT \
-Dpackage=org.fipro.service.modifier.impl \
-DinteractiveMode=false
mvn archetype:generate \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DgroupId=org.fipro.service.modifier \
-DartifactId=rest \
-Dversion=1.0.0-SNAPSHOT \
-Dpackage=org.fipro.service.modifier.rest \
-DinteractiveMode=false
mvn archetype:generate \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DgroupId=org.fipro.service.modifier \
-DartifactId=app \
-Dversion=1.0.0-SNAPSHOT \
-Dpackage=org.fipro.service.modifier.app \
-DinteractiveMode=false
Now the projects can be imported into the IDE of your choice. As the projects are plain Maven-based Java projects, you can use any IDE. My choice, of course, is Eclipse with Bndtools.
Open the jakartars/pom.xml parent pom file and add the following configurations:
<properties>
<java.version>17</java.version>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<bnd.version>7.0.0</bnd.version>
<jakartars.whiteboard.version>1.2.2</jakartars.whiteboard.version>
<jersey.version>3.1.5</jersey.version>
</properties>
Note:
We will use the newest Bndtools 7.0.0, which requires Java 17 for execution. If you need to use Java 11 in your setup, use Bndtools 6.4.0.
At the time of writing this blog post, the current released version of the org.eclipse.osgi-technology.rest
artefacts is 1.2.2. Double-check whether a newer version has been published in the meantime. In case you want to test a SNAPSHOT version, you need to add the following snippet to your Maven settings.xml:
<profiles>
<profile>
<id>oss-sonatype-snapshots</id>
<repositories>
<repository>
<id>OSSRH</id>
<name>Maven OSSRH Snapshots</name>
<url>https://oss.sonatype.org/content/repositories/snapshots/</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>true</enabled>
</releases>
</repository>
</repositories>
</profile>
</profiles>
<activeProfiles>
<activeProfile>oss-sonatype-snapshots</activeProfile>
</activeProfiles>
Note:
On Windows there is a formatting issue when using the archetypes: for every additional module you create, an empty line with some spaces is added between the content lines. If you followed the tutorial, you will therefore see several empty lines between the content lines. To clean this up and make the jakartars/pom.xml file readable again, you can do a search and replace via regular expression in an editor of your choice. Use the following regex and replace it with nothing:
^(?:[\t ]*(?:\r?\n|\r))+
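If you prefer to do the cleanup programmatically instead of via an editor dialog, the same regular expression can be applied with Java's String.replaceAll. The (?m) inline flag enables the multiline mode that the editor's Find/Replace dialog usually toggles via a checkbox:

```java
public class BlankLineCleanup {
    public static void main(String[] args) {
        // Sample content with a run of whitespace-only lines in the middle
        String content = "  <modules>\n   \n\t\n    <module>api</module>\n";
        // Remove runs of blank (whitespace-only) lines, keeping indented content
        String cleaned = content.replaceAll("(?m)^(?:[\\t ]*(?:\\r?\\n|\\r))+", "");
        System.out.println(cleaned);
    }
}
```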
Remove the dependencies section from the jakartars/pom.xml. Then add a dependencyManagement section similar to the following snippet:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>osgi.core</artifactId>
<version>8.0.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>osgi.annotation</artifactId>
<version>8.1.0</version>
<scope>provided</scope>
</dependency>
<!-- The OSGi framework RI is Equinox -->
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.osgi</artifactId>
<version>3.18.600</version>
<scope>runtime</scope>
</dependency>
<!-- Declarative Services -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.component</artifactId>
<version>1.5.1</version>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.component.annotations</artifactId>
<version>1.5.1</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.scr</artifactId>
<version>2.2.6</version>
<scope>runtime</scope>
<exclusions>
<exclusion>
<groupId>org.codehaus.mojo</groupId>
<artifactId>animal-sniffer-annotations</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Configuration Admin -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.cm</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.configadmin</artifactId>
<version>1.9.26</version>
<scope>runtime</scope>
</dependency>
<!-- OSGi Configurator -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.configurator</artifactId>
<version>1.0.1</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.configurator</artifactId>
<version>1.0.18</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.cm.json</artifactId>
<version>2.0.2</version>
<scope>runtime</scope>
</dependency>
<!-- Event Admin -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.event</artifactId>
<version>1.4.1</version>
</dependency>
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.equinox.event</artifactId>
<version>1.6.200</version>
<scope>runtime</scope>
</dependency>
<!-- Log Stream Service -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.log</artifactId>
<version>1.5.0</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.equinox.log.stream</artifactId>
<version>1.1.100</version>
<scope>runtime</scope>
</dependency>
<!-- Metatype -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.metatype</artifactId>
<version>1.4.1</version>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.metatype.annotations</artifactId>
<version>1.4.1</version>
</dependency>
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.equinox.metatype</artifactId>
<version>1.6.300</version>
<scope>runtime</scope>
</dependency>
<!-- OSGi Converter -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.util.converter</artifactId>
<version>1.0.9</version>
<scope>runtime</scope>
</dependency>
<!-- OSGi Function -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.util.function</artifactId>
<version>1.2.0</version>
<scope>runtime</scope>
</dependency>
<!-- OSGi Promise -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.util.promise</artifactId>
<version>1.3.0</version>
<scope>runtime</scope>
</dependency>
<!-- OSGi PushStream -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.util.pushstream</artifactId>
<version>1.1.0</version>
<scope>runtime</scope>
</dependency>
<!-- Jakarta Servlet Whiteboard -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.servlet</artifactId>
<version>2.0.0</version>
</dependency>
<!-- Jakarta RESTful Web Services Whiteboard -->
<dependency>
<groupId>jakarta.ws.rs</groupId>
<artifactId>jakarta.ws.rs-api</artifactId>
<version>3.1.0</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.jakartars</artifactId>
<version>2.0.0</version>
</dependency>
<!-- The whiteboard implementation -->
<dependency>
<groupId>org.eclipse.osgi-technology.rest</groupId>
<artifactId>org.eclipse.osgitech.rest</artifactId>
<version>${jakartars.whiteboard.version}</version>
<scope>runtime</scope>
</dependency>
<!-- The whiteboard implementation default configuration, when you want to use it -->
<dependency>
<groupId>org.eclipse.osgi-technology.rest</groupId>
<artifactId>org.eclipse.osgitech.rest.config</artifactId>
<version>${jakartars.whiteboard.version}</version>
<scope>runtime</scope>
</dependency>
<!-- An optional fragment for the use of server sent events -->
<dependency>
<groupId>org.eclipse.osgi-technology.rest</groupId>
<artifactId>org.eclipse.osgitech.rest.sse</artifactId>
<version>${jakartars.whiteboard.version}</version>
<scope>runtime</scope>
</dependency>
<!-- The adapter to run the implementation with Jetty -->
<dependency>
<groupId>org.eclipse.osgi-technology.rest</groupId>
<artifactId>org.eclipse.osgitech.rest.jetty</artifactId>
<version>${jakartars.whiteboard.version}</version>
<scope>runtime</scope>
</dependency>
<!-- The adapter to run the implementation with the OSGi Servlet Whiteboard -->
<dependency>
<groupId>org.eclipse.osgi-technology.rest</groupId>
<artifactId>org.eclipse.osgitech.rest.servlet.whiteboard</artifactId>
<version>${jakartars.whiteboard.version}</version>
<scope>runtime</scope>
</dependency>
<!-- Jersey - explicitly added to be able to update the dependency that is provided by org.eclipse.osgi-technology.rest -->
<dependency>
<groupId>org.glassfish.jersey</groupId>
<artifactId>jersey-bom</artifactId>
<version>${jersey.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<!-- Condition Service -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.condition</artifactId>
<version>1.0.0</version>
</dependency>
<!-- Tracker -->
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.util.tracker</artifactId>
<version>1.5.4</version>
</dependency>
<!-- Jetty -->
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-bom</artifactId>
<version>11.0.20</version>
<type>pom</type>
</dependency>
<!--
org.apache.felix.http.jetty:
- implementation of the R8.1 OSGi Servlet Service, the R7 OSGi Http Service and the R7 OSGi Http Whiteboard Specification
- has itself the dependencies to Eclipse Jetty, which makes those bundles transitively available in our setup
-->
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.http.jetty</artifactId>
<version>5.1.8</version>
<scope>runtime</scope>
</dependency>
<!-- Http Servlet 3.1 API with contract -->
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.http.servlet-api</artifactId>
<version>2.1.0</version>
<!-- <version>3.0.0</version> -->
<scope>runtime</scope>
</dependency>
<!-- Java XML -->
<dependency>
<groupId>jakarta.xml.bind</groupId>
<artifactId>jakarta.xml.bind-api</artifactId>
<version>4.0.1</version>
</dependency>
<dependency>
<groupId>com.sun.xml.bind</groupId>
<artifactId>jaxb-osgi</artifactId>
<version>4.0.4</version>
<scope>runtime</scope>
</dependency>
<!-- JSON Support -->
<dependency>
<groupId>jakarta.json</groupId>
<artifactId>jakarta.json-api</artifactId>
<version>2.1.3</version>
</dependency>
<dependency>
<groupId>jakarta.json.bind</groupId>
<artifactId>jakarta.json.bind-api</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.16.0</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.jakarta.rs</groupId>
<artifactId>jackson-jakarta-rs-json-provider</artifactId>
<version>2.16.0</version>
</dependency>
<dependency>
<groupId>org.eclipse.parsson</groupId>
<artifactId>jakarta.json</artifactId>
<version>1.1.5</version>
</dependency>
<!-- extender that facilitates the use of JRE SPI providers -->
<dependency>
<groupId>org.apache.aries.spifly</groupId>
<artifactId>org.apache.aries.spifly.dynamic.framework.extension</artifactId>
<version>1.3.7</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.glassfish.hk2</groupId>
<artifactId>osgi-resource-locator</artifactId>
<version>1.0.3</version>
<scope>runtime</scope>
</dependency>
<!-- Several implementations need to log using SLF4J -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.36</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.2.12</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-core</artifactId>
<version>1.2.12</version>
<scope>runtime</scope>
</dependency>
<!-- The Web Console -->
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.webconsole</artifactId>
<version>4.8.8</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.webconsole.plugins.ds</artifactId>
<version>2.2.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.inventory</artifactId>
<version>1.1.0</version>
<scope>test</scope>
</dependency>
<!-- The Gogo Shell -->
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.gogo.shell</artifactId>
<version>1.1.4</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.gogo.runtime</artifactId>
<version>1.1.6</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.gogo.command</artifactId>
<version>1.1.2</version>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.core</artifactId>
</exclusion>
<exclusion>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.compendium</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
</dependencyManagement>
Add a build section similar to the following snippet:
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.11.0</version>
<configuration>
<release>${java.version}</release>
</configuration>
</plugin>
<!-- Use the bnd-maven-plugin and assemble the symbolic names -->
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-maven-plugin</artifactId>
<version>${bnd.version}</version>
<configuration>
<bnd>
<![CDATA[
Bundle-SymbolicName: ${project.groupId}.${project.artifactId}
-sources: true
-contract: *
]]>
</bnd>
</configuration>
<executions>
<execution>
<goals>
<goal>bnd-process</goal>
</goals>
</execution>
</executions>
</plugin>
<!-- Required to make the maven-jar-plugin pick up the bnd
generated manifest. Also avoid packaging empty Jars -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>3.2.0</version>
<configuration>
<archive>
<manifestFile>
${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile>
</archive>
<skipIfEmpty>true</skipIfEmpty>
</configuration>
</plugin>
<!-- Setup the indexer for running and testing -->
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-indexer-maven-plugin</artifactId>
<version>${bnd.version}</version>
<configuration>
<localURLs>REQUIRED</localURLs>
<attach>false</attach>
</configuration>
<executions>
<execution>
<id>index</id>
<goals>
<goal>index</goal>
</goals>
<configuration>
<indexName>${project.artifactId}</indexName>
</configuration>
</execution>
<execution>
<id>test-index</id>
<goals>
<goal>index</goal>
</goals>
<configuration>
<indexName>${project.artifactId} Test</indexName>
<outputFile>${project.build.directory}/test-index.xml</outputFile>
<scopes>
<scope>test</scope>
</scopes>
</configuration>
</execution>
</executions>
</plugin>
<!-- Define the version of the resolver plugin we use -->
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-resolver-maven-plugin</artifactId>
<version>${bnd.version}</version>
<configuration>
<failOnChanges>false</failOnChanges>
<bndruns></bndruns>
</configuration>
<executions>
<execution>
<goals>
<goal>resolve</goal>
</goals>
</execution>
</executions>
</plugin>
<!-- Define the version of the export plugin we use -->
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-export-maven-plugin</artifactId>
<version>${bnd.version}</version>
<configuration>
<resolve>true</resolve>
<failOnChanges>false</failOnChanges>
</configuration>
<executions>
<execution>
<goals>
<goal>export</goal>
</goals>
</execution>
</executions>
</plugin>
<!-- Define the version of the testing plugin that we use -->
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-testing-maven-plugin</artifactId>
<version>${bnd.version}</version>
<executions>
<execution>
<goals>
<goal>testing</goal>
</goals>
</execution>
</executions>
</plugin>
<!-- Define the version of the baseline plugin we use and
avoid failing when no baseline jar exists. (for example before the first
release) -->
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-baseline-maven-plugin</artifactId>
<version>${bnd.version}</version>
<configuration>
<failOnMissing>false</failOnMissing>
</configuration>
<executions>
<execution>
<goals>
<goal>baseline</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</pluginManagement>
</build>
Note:
In Eclipse you might see errors in the pom.xml files if you don't have the bndtools.m2e connector as well as the m2e.pde.connector enabled in your workspace. To solve this open the pom.xml files with the error, set the cursor to the line with the error, press CTRL+1 (Quick Fix) and select Ignore M2E PDE Connector.
Open the api/pom.xml file and replace the dependencies section with the following one:
<dependencies>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>osgi.core</artifactId>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>osgi.annotation</artifactId>
</dependency>
</dependencies>
Add the following build section:
<build>
<plugins>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-baseline-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
Create the package org.fipro.service.modifier.api and the StringModifier interface:
public interface StringModifier {
String modify(String input);
}
Delete the App.java file which was created by the archetype. Then create the file package-info.java in the org.fipro.service.modifier.api package. It configures that the package is exported. If this file is missing, the package is a Private-Package and therefore not usable by other OSGi bundles.
@org.osgi.annotation.bundle.Export
@org.osgi.annotation.versioning.Version("1.0.0")
package org.fipro.service.modifier.api;
Also delete the src/test/java folder that was created by the archetype.
The package-info.java file and its content are part of the Bundle Annotations introduced with R7. Here are some links if you are interested in more detailed information:
Open the impl/pom.xml file and add the following to the dependencies section:
<dependency>
<groupId>org.fipro.service.modifier</groupId>
<artifactId>api</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>osgi.core</artifactId>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>osgi.annotation</artifactId>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.component.annotations</artifactId>
</dependency>
Add the following build section:
<build>
<plugins>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
Create the package org.fipro.service.modifier.impl and implement the StringInverter service:
import org.fipro.service.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;

@Component
public class StringInverter implements StringModifier {
@Override
public String modify(String input) {
return new StringBuilder(input).reverse().toString();
}
}
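The inversion logic itself is plain Java, so it can be sanity-checked outside of an OSGi runtime. This small illustrative snippet (class name invented for the sketch) exercises the same StringBuilder call the service uses:

```java
public class InverterCheck {
    public static void main(String[] args) {
        // Same logic as StringInverter.modify(), without the OSGi service wiring
        String reversed = new StringBuilder("OSGi").reverse().toString();
        System.out.println(reversed); // iGSO
    }
}
```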
Delete the App class that was created by the archetype.
A Jakarta RESTful Web Services resource can be registered with the Jakarta RESTful Web Services Whiteboard by registering it as a whiteboard service. In other words, a Jakarta REST resource can simply be registered with the Jakarta REST Whiteboard if it is implemented as an OSGi service.
After the projects are imported to the IDE and the OSGi service to consume is available, we can start implementing the REST based service.
Open the rest/pom.xml file and add the following to the dependencies section:
<dependency>
<groupId>org.fipro.service.modifier</groupId>
<artifactId>api</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>osgi.core</artifactId>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>osgi.annotation</artifactId>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.component.annotations</artifactId>
</dependency>
<dependency>
<groupId>jakarta.ws.rs</groupId>
<artifactId>jakarta.ws.rs-api</artifactId>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.service.jakartars</artifactId>
</dependency>
Add the following build section:
<build>
<plugins>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
Create the package org.fipro.service.modifier.rest and implement the ModifierRestService:
- Add the @Component annotation to the class definition and use the service parameter to register it as a service, not as an immediate component.
- Add the @JakartarsResource annotation to the class definition to mark it as a Jakarta-RS whiteboard resource. It sets the service property osgi.jakartars.resource=true, which means this service must be processed by the Jakarta-RS whiteboard. @JakartarsResource itself has the @RequireJakartarsWhiteboard annotation, which adds the requirement for a Jakarta RESTful Web Services Whiteboard implementation. Therefore it is not needed to use the @RequireJakartarsWhiteboard annotation on your REST service implementation.
- Optionally add the @JakartarsName annotation. It sets the service property osgi.jakartars.name, which defines a user defined name that can be used to identify a Jakarta RESTful Web Services whiteboard service.
- Add the Path annotation on class level to specify the URI path for which the resource class will serve requests.
- Get a StringModifier injected using the @Reference annotation.
- Implement a method that delegates to the StringModifier.
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;

import org.fipro.service.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ServiceScope;
import org.osgi.service.jakartars.whiteboard.propertytypes.JakartarsName;
import org.osgi.service.jakartars.whiteboard.propertytypes.JakartarsResource;

@JakartarsResource
@JakartarsName("modifier")
@Component(service=ModifierRestService.class, scope = ServiceScope.PROTOTYPE)
@Path("/")
public class ModifierRestService {
@Reference
StringModifier modifier;
@GET
@Path("modify/{input}")
public String modify(@PathParam("input") String input) {
return modifier.modify(input);
}
}
When you read the specification, you will see that the example service is using the PROTOTYPE scope. The example services in the OSGi enRoute tutorials do not use the PROTOTYPE scope. So I was wondering when to use the PROTOTYPE scope for Jakarta-RS Whiteboard services. I checked the specification and asked on the OSGi mailing list. Thanks to Raymond Augé who helped me understand it better.

In short: if your component implementation is stateless and you get all necessary information injected into the Jakarta-RS resource methods, you can avoid the PROTOTYPE scope. If you have a stateful implementation, that for example gets Jakarta-RS context objects for a request or session injected into a field, you have to use the PROTOTYPE scope to ensure that this information is only used by that single request. The example service in the specification is very simple and therefore does not need to specify the PROTOTYPE scope. But it is also not wrong to use the PROTOTYPE scope even for simpler services. This aligns the OSGi service design (where typically every component instance is a singleton) with the Jakarta-RS design, as Jakarta-RS natively expects to re-create resources on every request.
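As an illustration of the stateful case, the following hypothetical resource (class name and path are invented for this sketch) gets a per-request Jakarta-RS context object injected into a field, which is exactly the situation that requires the PROTOTYPE scope:

```java
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.Context;
import jakarta.ws.rs.core.HttpHeaders;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ServiceScope;
import org.osgi.service.jakartars.whiteboard.propertytypes.JakartarsResource;

// Hypothetical example: HttpHeaders is request-scoped state stored in a field,
// so a fresh component instance is needed per request -> PROTOTYPE scope.
@JakartarsResource
@Component(service = UserAgentResource.class, scope = ServiceScope.PROTOTYPE)
@Path("useragent")
public class UserAgentResource {

    @Context
    HttpHeaders headers;

    @GET
    public String userAgent() {
        return headers.getHeaderString("User-Agent");
    }
}
```

If the same information were instead injected as a method parameter, the component could stay stateless and the PROTOTYPE scope could be omitted.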
There are currently two adapters you can choose from to run the OSGi Technology Whiteboard Implementation for Jakarta RESTful Web Services:
- org.eclipse.osgitech.rest.jetty - runs the whiteboard directly on a Jetty server
- org.eclipse.osgitech.rest.servlet.whiteboard - runs the whiteboard on top of an OSGi Servlet Whiteboard implementation
The following section describes how to run directly on a Jetty server.
In the application project we need to ensure that our service is available. In case the StringInverter
from above was implemented, the impl module needs to be added to the dependencies
section of the app/pom.xml file. If you want to use another service that can be consumed via Maven, you of course need to add that dependency.
Add the following to the app/pom.xml dependencies section. Remember to remove the version for dependencies that are defined in the dependencyManagement section of the parent pom.xml:
<dependency>
<groupId>org.fipro.service.modifier</groupId>
<artifactId>impl</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.fipro.service.modifier</groupId>
<artifactId>rest</artifactId>
<version>${project.version}</version>
</dependency>
<!-- The whiteboard implementation -->
<dependency>
<groupId>org.eclipse.osgi-technology.rest</groupId>
<artifactId>org.eclipse.osgitech.rest</artifactId>
</dependency>
<!-- The whiteboard implementation default configuration, when you want to use it -->
<dependency>
<groupId>org.eclipse.osgi-technology.rest</groupId>
<artifactId>org.eclipse.osgitech.rest.config</artifactId>
</dependency>
<!-- An optional fragment for the use of server sent events -->
<dependency>
<groupId>org.eclipse.osgi-technology.rest</groupId>
<artifactId>org.eclipse.osgitech.rest.sse</artifactId>
</dependency>
<!-- The adapter to run the implementation with Jetty -->
<dependency>
<groupId>org.eclipse.osgi-technology.rest</groupId>
<artifactId>org.eclipse.osgitech.rest.jetty</artifactId>
</dependency>
Additionally add slf4j-simple to at least see the log statements on the console:
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.36</version>
</dependency>
Add the following build section:
<build>
<plugins>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-indexer-maven-plugin</artifactId>
<configuration>
<includeJar>true</includeJar>
</configuration>
</plugin>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-export-maven-plugin</artifactId>
<configuration>
<bndruns>
<bndrun>app.bndrun</bndrun>
</bndruns>
</configuration>
</plugin>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-resolver-maven-plugin</artifactId>
<configuration>
<bndruns>
<bndrun>app.bndrun</bndrun>
</bndruns>
</configuration>
</plugin>
</plugins>
</build>
Remove the junit dependency from the pom.xml and delete the App class and the package under src/main/java. Then create the configuration file configurator.json in the folder src/main/resources/OSGI-INF/configurator with the following content:
{
":configurator:resource-version": 1,
"JakartarsWhiteboardComponent": {
"jersey.port": 8080,
"jersey.jakartars.whiteboard.name" : "Jetty REST",
"jersey.context.path" : ""
}
}
The following properties are supported for configuring the Whiteboard on Jersey:
| Parameter | Description | Default |
|---|---|---|
| jersey.schema | The schema under which the services should be available. | http |
| jersey.host | The host under which the services should be available. | localhost |
| jersey.port | The port under which the services should be available. | 8181 |
| jersey.context.path | The base context path of the whiteboard. | /rest |
| jersey.jakartars.whiteboard.name | The name of the whiteboard. | Jersey REST |
| jersey.disable.sessions | Enable/disable session handling in Jetty. Disabled by default as REST services are stateless. | true |
The definition of these properties is located in JerseyConstants.
Note:
The default value for jersey.context.path
is /rest
. So if you don’t configure a value via the configurator.json file, your services will be available via the rest
context path. This is also the case for a custom Jakarta-RS application. If you don’t want to use a context path, you explicitly have to set it to an empty value, as in the example above.
Create the file package-info.java in the folder src/main/java/config with the following content:
@RequireConfigurator
@RequireConfigurationAdmin
package config;
import org.osgi.service.configurator.annotations.RequireConfigurator;
import org.osgi.service.cm.annotations.RequireConfigurationAdmin;
By using these annotations you declare that the Configurator extender and a Configuration Admin implementation are required. Further information about the Configurator can be found in the OSGi Compendium Configurator Specification.
Create the file app.bndrun in the app project with the following content:
index: target/index.xml;name="app"
-standalone: ${index}
-runrequires: \
bnd.identity;id='org.fipro.service.modifier.rest',\
bnd.identity;id='org.fipro.service.modifier.app',\
bnd.identity;id='org.eclipse.parsson.jakarta.json',\
bnd.identity;id='slf4j.simple'
-runfw: org.eclipse.osgi
-runee: JavaSE-17
-resolve.effective: active
-runblacklist: bnd.identity;id='org.apache.felix.http.jetty'
Note:
We add the bundle org.apache.felix.http.jetty
to the Run Blacklist to prevent it from being used in the resolve process. This is necessary because we explicitly want to use the default Jetty bundles instead of the repackaged Felix Jetty bundle.
Note:
If the Run Bundles stay empty, or you see the bundles and shortly afterwards they are gone again, try to set the Resolution to Auto and save the file. This should solve the issue.
Note:
Eclipse Parsson provides an implementation of the Jakarta JSON Processing specification. It is required by the Jakarta RESTful Web Services implementation if you configure it via the OSGi Compendium Configurator specification, but unfortunately there is no direct requirement on an implementation. Therefore it is not resolved automatically and needs to be specified as a Run Requirement explicitly.
Note:
If you see the following warning and want to get rid of it, you need to add com.sun.xml.bind.jaxb-osgi
to the Run Requirements and Resolve again.
JAXBContext implementation could not be found. WADL feature is disabled.
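Once the application is resolved and running, a quick smoke test can be done from the command line. The URL below assumes the configurator.json shown earlier (port 8080, empty context path) and the ModifierRestService paths from above:

```shell
# Call the modify resource with the path parameter "OSGi";
# the StringInverter service should return the reversed string
curl http://localhost:8080/modify/OSGi
# expected response: iGSO
```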
The following section describes how to run using the OSGi Servlet Whiteboard.
If you want to try out both variants, I suggest creating a new module app-http. This will be helpful later on to test and compare the differences.
mvn archetype:generate \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DgroupId=org.fipro.service.modifier \
-DartifactId=app-http \
-Dversion=1.0.0-SNAPSHOT \
-Dpackage=org.fipro.service.modifier.apphttp \
-DinteractiveMode=false
In the application project we need to ensure that our service is available. In case the StringInverter
from above was implemented, the impl module needs to be added to the dependencies
section of the application pom.xml file. If you want to use another service that can be consumed via Maven, you of course need to add that dependency.
Add the following to the app-http/pom.xml dependencies section. Remember to remove the version for dependencies that are defined in the dependencyManagement section of the parent pom.xml:
<dependency>
<groupId>org.fipro.service.modifier</groupId>
<artifactId>impl</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.fipro.service.modifier</groupId>
<artifactId>rest</artifactId>
<version>${project.version}</version>
</dependency>
<!-- The whiteboard implementation -->
<dependency>
<groupId>org.eclipse.osgi-technology.rest</groupId>
<artifactId>org.eclipse.osgitech.rest</artifactId>
</dependency>
<!-- The adapter to run the implementation with the OSGi Servlet Whiteboard -->
<dependency>
<groupId>org.eclipse.osgi-technology.rest</groupId>
<artifactId>org.eclipse.osgitech.rest.servlet.whiteboard</artifactId>
</dependency>
Add slf4j-simple to at least see the log statements on the console:
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.36</version>
</dependency>
Add the bnd Maven plugins to the build section:
<build>
<plugins>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-indexer-maven-plugin</artifactId>
<configuration>
<includeJar>true</includeJar>
</configuration>
</plugin>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-export-maven-plugin</artifactId>
<configuration>
<bndruns>
<bndrun>app.bndrun</bndrun>
</bndruns>
</configuration>
</plugin>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-resolver-maven-plugin</artifactId>
<configuration>
<bndruns>
<bndrun>app.bndrun</bndrun>
</bndruns>
</configuration>
</plugin>
</plugins>
</build>
- Remove the junit dependency from the pom.xml.
- Delete the generated App class and the package under src/main/java.
- Create the Configurator JSON configuration with the following content:
{
":configurator:resource-version": 1,
"org.apache.felix.http~modifier":
{
"org.osgi.service.http.port": "8080",
"org.osgi.service.http.host": "localhost",
"org.apache.felix.http.context_path": "",
"org.apache.felix.http.name": "Modify REST Service",
"org.apache.felix.http.runtime.init.id": "modify"
},
"JakartarsServletWhiteboardRuntimeComponent~modifier":
{
"jersey.context.path" : "",
"jersey.jakartars.whiteboard.name" : "Servlet REST",
"osgi.http.whiteboard.target" : "(id=modify)"
}
}
The first block org.apache.felix.http~modifier
is used to configure the Apache Felix HTTP Service service factory. Details about the configuration options are available in the Apache Felix HTTP Service Wiki.
The second block JakartarsServletWhiteboardRuntimeComponent~modifier
is used to configure the whiteboard service factory with the Servlet Whiteboard. The following properties are supported for configuring the Whiteboard on Servlet Whiteboard:
| Parameter | Description | Default |
|---|---|---|
| jersey.context.path | The base context path of the whiteboard. | / |
| jersey.jakartars.whiteboard.name | The name of the whiteboard. | Jersey REST |
| osgi.http.whiteboard.target | Service property specifying the target filter to select the Http Whiteboard implementation to process the service. The value is an LDAP style filter that points to the id defined in org.apache.felix.http.runtime.init.id. | - |
The definition of these properties is located in JerseyConstants.
Create a package-info.java in the folder src/main/java/config with the following content:
@RequireConfigurator
@RequireConfigurationAdmin
package config;
import org.osgi.service.configurator.annotations.RequireConfigurator;
import org.osgi.service.cm.annotations.RequireConfigurationAdmin;
By using these annotations you declare that the Configurator extender and a Configuration Admin implementation are required. Further information about the Configurator can be found in the OSGi Compendium Configurator Specification.
index: target/index.xml;name="app-http"
-standalone: ${index}
-runrequires: \
bnd.identity;id='org.fipro.service.modifier.rest',\
bnd.identity;id='org.fipro.service.modifier.app-http',\
bnd.identity;id='org.eclipse.parsson.jakarta.json',\
bnd.identity;id='slf4j.simple',\
bnd.identity;id='org.apache.felix.http.jetty'
-runfw: org.eclipse.osgi
-runee: JavaSE-17
-resolve.effective: active
# Avoid to have the default Jetty run at port 8080
-runproperties: \
org.osgi.service.http.port=-1
Note:
If the Run Bundles stay empty, or you see the bundles and shortly afterwards they are gone again, try setting the Resolution to Auto and save the file. This should resolve the issue.
Note:
Eclipse Parsson provides an implementation of the Jakarta JSON Processing Specification. It is required by the Jakarta RESTful Web Services implementation if you configure it via the OSGi Compendium Configurator Specification, but unfortunately there is no direct requirement on an implementation. Therefore it is not resolved automatically and needs to be specified explicitly as a Run Requirement.
Note:
Compared to the Jetty usage, the default value for jersey.context.path
with the Servlet Whiteboard is /
. So if you don’t want to use a context path, you can simply omit the value in the configurator.json file.
If you specify org.apache.felix.http.context_path
and jersey.context.path
, the path to the service is combined, e.g.
"org.apache.felix.http.context_path": "http"
...
"jersey.context.path" : "demo"
This would result in the path http://localhost:8080/http/demo/modify/fubar.
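The way the two context paths combine can be sketched as a simple slash-join. The following is a plain-Java illustration, not part of the whiteboard API; the segment values are the ones from the example above:

```java
public class PathCombination {

    // Joins path segments with single slashes, mirroring how the servlet
    // context path and the Jersey context path are combined in the final URL.
    static String join(String... segments) {
        StringBuilder sb = new StringBuilder();
        for (String segment : segments) {
            String s = segment.replaceAll("^/+|/+$", ""); // trim surrounding slashes
            if (!s.isEmpty()) {
                sb.append('/').append(s);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "http" = org.apache.felix.http.context_path, "demo" = jersey.context.path
        System.out.println("http://localhost:8080" + join("http", "demo", "modify/fubar"));
        // → http://localhost:8080/http/demo/modify/fubar
    }
}
```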
It is also possible to register the Jakarta-RS Whiteboard Service with the default Jetty. In this case the configuration is much simpler:
{
":configurator:resource-version": 1,
"JakartarsServletWhiteboardRuntimeComponent":
{
"jersey.jakartars.whiteboard.name" : "Servlet REST",
"jersey.context.path" : "rest"
}
}
And of course you need to remove org.osgi.service.http.port=-1
from the runproperties
, otherwise the default Jetty instance doesn’t start. It is important that you provide a configuration, either via Configurator or even manually via ConfigurationAdmin, as the JakartarsServletWhiteboardRuntimeComponent
requires a configuration.
The following snippet shows how you could provide a configuration programmatically via Immediate Component:
@Component
public class JakartaRsConfiguration {
@Reference
ConfigurationAdmin admin;
@Activate
void activate() throws IOException {
Dictionary<String, String> properties = new Hashtable<>();
properties.put("jersey.jakartars.whiteboard.name", "Servlet REST");
properties.put("jersey.context.path", "rest");
Configuration config =
admin.getConfiguration("JakartarsServletWhiteboardRuntimeComponent", "?");
config.update(properties);
}
}
Note:
If you see the following warning and want to get rid of it
JAXBContext implementation could not be found. WADL feature is disabled.
you need to add com.sun.xml.bind.jaxb-osgi
to the Run Requirements and Resolve again.
In Jakarta RESTful Web Services you can add Providers that are responsible for various cross-cutting concerns such as filtering requests, converting representations into Java objects, mapping exceptions to responses, etc. Such Jakarta RESTful Web Services Extensions can be registered with the Jakarta RESTful Web Services Whiteboard by registering them as Whiteboard services. This is explained in more detail in the OSGi Compendium Specification Jakarta RESTful Web Services Whiteboard.
The following interfaces are supported by the specification:
- ContainerRequestFilter and ContainerResponseFilter extensions are used to alter the HTTP request and response parameters.
- ReaderInterceptor and WriterInterceptor extensions are used to alter the incoming or outgoing objects for the call.
- MessageBodyReader and MessageBodyWriter extensions are used to deserialize/serialize objects to the wire for a given media type, for example application/json.
- ContextResolver extensions are used to provide objects for injection into other Jakarta RESTful Web Services resources and extensions.
- ExceptionMapper extensions are used to map exceptions thrown by Jakarta RESTful Web Services resources into responses.
- ParamConverterProvider extensions are used to map rich parameter types to and from String values.
- Feature and DynamicFeature extensions are used as a way to register multiple extension types with the Jakarta RESTful Web Services container. Dynamic Features further allow the extensions to be targeted to specific resources within the Jakarta RESTful Web Services container.

For a Jakarta-RS Extension Whiteboard Service, there are basically two important annotations:
- @JakartarsExtension
- @JakartarsExtensionSelect
As an example we will implement a WriterInterceptor. This tutorial contains further examples in the following chapters.

Create a HtmlWriterInterceptor class in the rest module:
- Add the @Component annotation to the class definition to specify it as a service.
- Add the @JakartarsExtension annotation to the class definition to mark the service as a Jakarta-RS Whiteboard Extension type that should be processed by the Jakarta-RS Whiteboard.
- Implement the WriterInterceptor interface and wrap the String in the entity reference with HTML tags.

@Component
@JakartarsExtension
public class HtmlWriterInterceptor implements WriterInterceptor {
@Override
public void aroundWriteTo(WriterInterceptorContext ctx)
throws WebApplicationException, IOException {
Object entity = ctx.getEntity();
if (entity instanceof String result) {
String html = "<html><head></head><body><ul>";
String[] split = result.split(";");
for (String string : split) {
html += "<li>" + string + "</li>";
}
html += "</ul></body>";
ctx.setEntity(html);
}
ctx.proceed();
}
}
Note:
We use the list markup processing already as a preparation for later steps.
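The wrapping logic of the interceptor can be tried out in isolation. The following is a plain-Java sketch of the same transformation; the input value `FUBAR;rabuf` is just an assumed example:

```java
public class HtmlWrapper {

    // Mirrors HtmlWriterInterceptor#aroundWriteTo: splits the semicolon
    // separated result string and wraps each part in an HTML list item.
    static String wrap(String result) {
        StringBuilder html = new StringBuilder("<html><head></head><body><ul>");
        for (String string : result.split(";")) {
            html.append("<li>").append(string).append("</li>");
        }
        return html.append("</ul></body>").toString();
    }

    public static void main(String[] args) {
        System.out.println(wrap("FUBAR;rabuf"));
        // → <html><head></head><body><ul><li>FUBAR</li><li>rabuf</li></ul></body>
    }
}
```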
By default Jakarta-RS Extensions are applied to every request and response. In cases where an extension should not apply to every Jakarta-RS Resource, but only to a subset, it is possible to limit the usage via name binding. Let’s evaluate this with the following modifications. First create a name binding annotation:
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@NameBinding
public @interface HtmlModification{}
Modify the HtmlWriterInterceptor to add the name binding annotation to the class definition:
@Component
@JakartarsExtension
@HtmlModification
public class HtmlWriterInterceptor implements WriterInterceptor { ... }
Modify the ModifierRestService and add a new resource method that uses the name binding annotation and explicitly returns the media type text/html:
@JakartarsResource
@JakartarsName("modifier")
@Component(service=ModifierRestService.class, scope = ServiceScope.PROTOTYPE)
@Path("/")
public class ModifierRestService {
@Reference
StringModifier modifier;
@GET
@Path("modify/{input}")
public String modify(@PathParam("input") String input) {
return modifier.modify(input);
}
@GET
@Path("modifyhtml/{input}")
@Produces(MediaType.TEXT_HTML)
@HtmlModification
public String modifyHtml(@PathParam("input") String input) {
return modifier.modify(input);
}
}
The Jakarta RESTful Web Services Whiteboard registers a default Jakarta REST Web Service Application with the name .default
. Typically it is sufficient to register Jakarta-RS Resources and Jakarta-RS Extensions as Whiteboard Services and implicitly use the default application. There are two use cases where it makes sense to register a Jakarta-RS Application as Whiteboard Service:
For a Jakarta-RS Application Whiteboard Service, there are basically two important annotations:
- @JakartarsApplicationBase
- @JakartarsApplicationSelect
@JakartarsApplicationBase("mod")
@JakartarsName("modifyApplication")
@Component(service=Application.class)
public class ModifyApplication extends Application { ... }
@Component(service=ModifierRestService.class, scope = ServiceScope.PROTOTYPE)
@JakartarsResource
@JakartarsApplicationSelect("(osgi.jakartars.name=modifyApplication)")
@Path("/")
@Produces(MediaType.APPLICATION_JSON)
public class ModifierRestService { ... }
If you need to use the @JakartarsApplicationSelect
annotation on multiple Jakarta-RS Resources and Jakarta-RS Extensions, it is helpful to define a Custom Component Property Annotation.
@ComponentPropertyType
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.TYPE)
public @interface TargetModifyApp {
String osgi_jakartars_application_select() default "(osgi.jakartars.name=modifyApplication)";
}
You can then use the @TargetModifyApp
annotation instead:
@JakartarsResource
@Component(service=ModifierRestService.class, scope = ServiceScope.PROTOTYPE)
@TargetModifyApp
@Path("/")
@Produces(MediaType.APPLICATION_JSON)
public class ModifierRestService { ... }
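The translation from the annotation method name `osgi_jakartars_application_select` to the service property name `osgi.jakartars.application.select` follows the Declarative Services component property name mangling rules: a single underscore becomes a dot, a double underscore a literal underscore. A simplified sketch of that rule (the full spec also covers `$` escaping, which is omitted here):

```java
public class PropertyNameMangling {

    // Simplified DS component property type name mangling:
    // "__" -> "_", a single "_" -> "." ("$" handling is omitted)
    static String toPropertyName(String methodName) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < methodName.length(); i++) {
            char c = methodName.charAt(i);
            if (c == '_') {
                if (i + 1 < methodName.length() && methodName.charAt(i + 1) == '_') {
                    sb.append('_');
                    i++; // consume the second underscore
                } else {
                    sb.append('.');
                }
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toPropertyName("osgi_jakartars_application_select"));
        // → osgi.jakartars.application.select
    }
}
```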
Note:
By default Jakarta-RS Resource Whiteboard Services and Jakarta-RS Extension Whiteboard Services are registered with the .default
Jakarta-RS Application provided by the Whiteboard implementation. They are not automatically assigned to all published Applications. This means, if you have a custom Application in your runtime and want to add Resources and Extensions to that Application, you need to target them via @JakartarsApplicationSelect
.
An example is shown later in this tutorial, to give a better idea of how this can look.
Further information is available in the OSGi Compendium Spec Registering RESTful Web Service Applications.
There are use cases where returning a plain String as result of a web service is not sufficient. In the following section we extend our setup to return the result as JSON. We will use Jackson for this.
Update the ModifierRestService. First we configure the Jakarta-RS Resource so it produces JSON:
- Add the @Produces(MediaType.APPLICATION_JSON) annotation to the ModifierRestService class definition to specify that JSON responses are created.
- Get all available StringModifier services injected and return a List of Strings as a result of the REST resource.

@JakartarsResource
@JakartarsName("modifier")
@Component(service=ModifierRestService.class, scope = ServiceScope.PROTOTYPE)
@Path("/")
@Produces(MediaType.APPLICATION_JSON)
public class ModifierRestService {
@Reference
private volatile List<StringModifier> modifier;
@GET
@Path("modify/{input}")
public List<String> modify(@PathParam("input") String input) {
return modifier.stream()
.map(mod -> mod.modify(input))
.collect(Collectors.toList());
}
@GET
@Path("modifyhtml/{input}")
@Produces(MediaType.TEXT_HTML)
@HtmlModification
public String modifyHtml(@PathParam("input") String input) {
return modifier.stream()
.map(mod -> mod.modify(input))
.collect(Collectors.joining(";"));
}
}
Note:
If you change the return value to List
without further configuration, you will see an error like this:
MessageBodyWriter not found for media type=text/html, type=class java.util.ArrayList, genericType=java.util.List<java.lang.String>
Create an additional StringModifier in the impl module:
@Component
public class Upper implements StringModifier {
@Override
public String modify(String input) {
return input.toUpperCase();
}
}
Jersey provides support for common media type representations, e.g. Jersey - JSON - Jackson (2.x). By adding the bundle org.glassfish.jersey.media.jersey-media-json-jackson
to the runtime, the necessary providers are automatically registered to all Jakarta-RS Applications in the runtime.
Add the following to the dependencies section:
<dependency>
<groupId>org.glassfish.jersey.media</groupId>
<artifactId>jersey-media-json-jackson</artifactId>
</dependency>
- As we modified the ModifierRestService to consume a collection of StringModifier, you need to add the bundle org.fipro.service.modifier.impl explicitly to the Run Requirements.
- Add the org.glassfish.jersey.media.jersey-media-json-jackson bundle to the Run Requirements.

-runrequires: \
bnd.identity;id='org.fipro.service.modifier.impl',\
bnd.identity;id='org.fipro.service.modifier.rest',\
bnd.identity;id='org.fipro.service.modifier.app',\
bnd.identity;id='org.eclipse.parsson.jakarta.json',\
bnd.identity;id='slf4j.simple',\
bnd.identity;id='com.sun.xml.bind.jaxb-osgi',\
bnd.identity;id='org.glassfish.jersey.media.jersey-media-json-jackson'
If you have also created the app-http module, perform the above modifications also in the app-http/app.bndrun:
-runrequires: \
bnd.identity;id='org.fipro.service.modifier.impl',\
bnd.identity;id='org.fipro.service.modifier.rest',\
bnd.identity;id='org.fipro.service.modifier.app-http',\
bnd.identity;id='org.eclipse.parsson.jakarta.json',\
bnd.identity;id='slf4j.simple',\
bnd.identity;id='org.apache.felix.http.jetty',\
bnd.identity;id='com.sun.xml.bind.jaxb-osgi',\
bnd.identity;id='org.glassfish.jersey.media.jersey-media-json-jackson'
Note:
If the execution of Resolve does not take the new changes into account, you need to execute a Maven build mvn clean verify
, update the projects via Right Click -> Maven -> Update Project…, and then trigger Resolve from the Bnd Run File Editor again.
Note:
The org.glassfish.jersey.jackson.JacksonFeature is automatically registered with all applications in the server. This way the OSGi requirement on the JSON media type via the osgi.jakartars.media.type=application/json service property is not satisfied. If you want to use the org.glassfish.jersey.jackson.JacksonFeature and make use of the OSGi capability mechanism, you can register it via a Jakarta-RS Feature Whiteboard Extension (see below).
JacksonJsonProvider via Jakarta-RS Feature Whiteboard Extension

A Jakarta RESTful Web Service Feature is a special type of Jakarta RESTful Web Service Provider that implements the Feature interface and can be used to configure a Jakarta-RS implementation. Features are useful for grouping sets of properties and providers (including other features) that are logically related and must be enabled as a unit (see Configurable Types).
Add the following to the dependencies section:
<dependency>
<groupId>com.fasterxml.jackson.jakarta.rs</groupId>
<artifactId>jackson-jakarta-rs-json-provider</artifactId>
</dependency>
Create the JacksonJsonFeature:
- Add the @Component annotation to the class definition.
- Add the @JakartarsExtension annotation to the class definition to mark the service as a Jakarta-RS Whiteboard Extension type that should be processed by the Jakarta-RS Whiteboard.
- Add the @JakartarsMediaType(APPLICATION_JSON) annotation to the class definition to mark the component as providing a serializer capable of supporting the named media type, in this case the standard media type for JSON.
- Register the com.fasterxml.jackson.jakarta.rs.json.JacksonJsonProvider in the configure(FeatureContext) method.

@Component
@JakartarsExtension
@JakartarsMediaType(MediaType.APPLICATION_JSON)
public class JacksonJsonFeature implements Feature {
@Override
public boolean configure(FeatureContext context) {
context.register(JacksonJsonProvider.class);
return true;
}
}
Remove org.glassfish.jersey.media.jersey-media-json-jackson from the Run Requirements.

As our Feature provides the capability to support the media type JSON via @JakartarsMediaType(APPLICATION_JSON), we can configure our service to require that capability via the @JSONRequired annotation.
Add the @JSONRequired annotation to the ModifierRestService class definition to mark this class to require JSON media type support. In OSGi terms it means that a service needs to be available that provides the service property osgi.jakartars.media.type=application/json, which we provided in our custom entity provider via @JakartarsMediaType(MediaType.APPLICATION_JSON). This way our Jakarta REST Resource will only be available if the media type support service is available in the runtime.

@JakartarsResource
@JakartarsName("modifier")
@Component(service = ModifierRestService.class, scope = ServiceScope.PROTOTYPE)
@Path("/")
@Produces(MediaType.APPLICATION_JSON)
@JSONRequired
public class ModifierRestService { ... }
In this section we will implement a Custom Entity Provider and use Jackson for this. We will first register it directly as a Jakarta-RS Whiteboard Extension.
Add the following dependencies to the dependencies section of the rest module:
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.util.converter</artifactId>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency>
Note:
Remember to remove org.glassfish.jersey.media.jersey-media-json-jackson
from the Run Requirements in the app/app.bndrun and Resolve in case you haven’t done so already in a previous section.
Create the JacksonJsonConverter:
- Add the @Component annotation to the class definition and specify the PROTOTYPE scope parameter to ensure that multiple instances can be requested.
- Add the @JakartarsExtension annotation to the class definition to mark the service as a Jakarta-RS Whiteboard Extension type that should be processed by the Jakarta-RS Whiteboard.
- Add the @JakartarsMediaType(APPLICATION_JSON) annotation to the class definition to mark the component as providing a serializer capable of supporting the named media type, in this case the standard media type for JSON.
- Add the @Consumes(MediaType.WILDCARD) annotation to define the media types the jakarta.ws.rs.ext.MessageBodyReader can accept. In this case */* to also support “non-standard” JSON variants as input.
- Add the @Produces(MediaType.APPLICATION_JSON) annotation to define the media type the jakarta.ws.rs.ext.MessageBodyWriter can produce. In this case application/json.
- Add the @Provider annotation to support automatic discovery of the provider class by the Jakarta-RS runtime.

package org.fipro.service.modifier.rest;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.List;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ServiceScope;
import org.osgi.service.jakartars.whiteboard.propertytypes.JakartarsExtension;
import org.osgi.service.jakartars.whiteboard.propertytypes.JakartarsMediaType;
import org.osgi.service.log.Logger;
import org.osgi.service.log.LoggerFactory;
import org.osgi.util.converter.Converter;
import org.osgi.util.converter.ConverterFunction;
import org.osgi.util.converter.Converters;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.WebApplicationException;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.MultivaluedMap;
import jakarta.ws.rs.ext.MessageBodyReader;
import jakarta.ws.rs.ext.MessageBodyWriter;
import jakarta.ws.rs.ext.Provider;
@JakartarsExtension
@JakartarsMediaType(MediaType.APPLICATION_JSON)
@Component(scope = ServiceScope.PROTOTYPE)
@Consumes(MediaType.WILDCARD)
@Produces(MediaType.APPLICATION_JSON)
@Provider
public class JacksonJsonConverter implements MessageBodyReader<Object>, MessageBodyWriter<Object> {
@Reference(service = LoggerFactory.class)
private Logger logger;
private final Converter converter = Converters.newConverterBuilder()
.rule(String.class, this::toJson)
.rule(this::toObject)
.build();
private ObjectMapper mapper = new ObjectMapper();
private String toJson(Object value, Type targetType) {
try {
return mapper.writeValueAsString(value);
} catch (JsonProcessingException e) {
logger.error("error on JSON creation", e);
return e.getLocalizedMessage();
}
}
private Object toObject(Object o, Type t) {
try {
if (List.class.getName().equals(t.getTypeName())) {
return this.mapper.readValue((String) o, List.class);
}
return this.mapper.readValue((String) o, String.class);
} catch (IOException e) {
logger.error("error on JSON parsing", e);
}
return ConverterFunction.CANNOT_HANDLE;
}
@Override
public boolean isWriteable(Class<?> c, Type t, Annotation[] a, MediaType mediaType) {
return MediaType.APPLICATION_JSON_TYPE.isCompatible(mediaType)
|| mediaType.getSubtype().endsWith("+json");
}
@Override
public boolean isReadable(Class<?> c, Type t, Annotation[] a, MediaType mediaType) {
return MediaType.APPLICATION_JSON_TYPE.isCompatible(mediaType)
|| mediaType.getSubtype().endsWith("+json");
}
@Override
public void writeTo(
Object o, Class<?> type, Type genericType,
Annotation[] annotations, MediaType mediaType,
MultivaluedMap<String, Object> httpHeaders, OutputStream out)
throws IOException, WebApplicationException {
String json = converter.convert(o).to(String.class);
out.write(json.getBytes());
}
@Override
public Object readFrom(
Class<Object> type, Type genericType,
Annotation[] annotations, MediaType mediaType,
MultivaluedMap<String, String> httpHeaders, InputStream in)
throws IOException, WebApplicationException {
BufferedReader reader = new BufferedReader(new InputStreamReader(in));
return converter.convert(reader.readLine()).to(genericType);
}
}
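The return value ConverterFunction.CANNOT_HANDLE signals the OSGi Converter to fall through to the next rule or to its default behavior. The rule-chain idea behind this can be sketched without the OSGi API; the names below are illustrative and not the real Converter implementation:

```java
import java.util.List;
import java.util.function.BiFunction;

public class RuleChain {

    // Sentinel mirroring the role of ConverterFunction.CANNOT_HANDLE
    static final Object CANNOT_HANDLE = new Object();

    // Tries each rule in order; a rule returning CANNOT_HANDLE passes the
    // value on to the next one, similar to the OSGi Converter rule chain.
    static Object convert(Object value, Class<?> target,
            List<BiFunction<Object, Class<?>, Object>> rules) {
        for (BiFunction<Object, Class<?>, Object> rule : rules) {
            Object result = rule.apply(value, target);
            if (result != CANNOT_HANDLE) {
                return result;
            }
        }
        throw new IllegalArgumentException("No rule could handle " + value);
    }

    public static void main(String[] args) {
        BiFunction<Object, Class<?>, Object> intRule =
                (v, t) -> t == Integer.class ? Integer.valueOf(v.toString()) : CANNOT_HANDLE;
        BiFunction<Object, Class<?>, Object> stringRule =
                (v, t) -> t == String.class ? v.toString() : CANNOT_HANDLE;

        // The first rule cannot handle a String target, the second one can
        System.out.println(convert(42, String.class, List.of(intRule, stringRule)));
        // → 42
    }
}
```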
In the previous section we created a Custom Entity Provider to return the media type JSON as service response. And we registered it via the OSGi Jakarta RESTful Web Service Extension mechanism. Basically this means, the Custom Entity Provider needs to be an OSGi service itself. But what about cases where the Custom Entity Provider already exists and is maintained by a project that is not OSGi aware?
How can you make use of Jakarta Extensions that are not whiteboard enabled?
The easiest approach is to create a Jakarta Feature as a Jakarta RESTful Web Service Extension. Similar to what we have done to register the JacksonJsonProvider
(see above).
To show how this works, modify the JacksonJsonConverter so it is no longer a whiteboard service:
- Remove the @Component, @JakartarsExtension and @JakartarsMediaType annotations from the class definition.
- Change the logger from org.osgi.service.log.Logger to org.slf4j.Logger, as the OSGi Logger can no longer be injected via @Reference.
@Consumes(MediaType.WILDCARD)
@Produces(MediaType.APPLICATION_JSON)
@Provider
public class JacksonJsonConverter implements MessageBodyReader<Object>, MessageBodyWriter<Object> {
private Logger logger = LoggerFactory.getLogger(JacksonJsonConverter.class);
...
}
You need to add the slf4j-api
to the dependencies
of the rest/pom.xml to get rid of the compile errors:
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<scope>compile</scope>
</dependency>
Create the JacksonJsonFeature:
- Add the @Component annotation to the class definition.
- Add the @JakartarsExtension annotation to the class definition to mark the service as a Jakarta-RS Whiteboard Extension type that should be processed by the Jakarta-RS Whiteboard.
- Add the @JakartarsMediaType(APPLICATION_JSON) annotation to the class definition to mark the component as providing a serializer capable of supporting the named media type, in this case the standard media type for JSON.
- Register the JacksonJsonConverter in the configure(FeatureContext) method.

@Component
@JakartarsExtension
@JakartarsMediaType(MediaType.APPLICATION_JSON)
public class JacksonJsonFeature implements Feature {
@Override
public boolean configure(FeatureContext context) {
context.register(JacksonJsonConverter.class);
return true;
}
}
If you start the application again with the JacksonJsonFeature
, the service should work again as expected.
In case you want to have a more static definition of the Jakarta-RS Resources and Jakarta-RS Extensions, and for example you also want to add Extensions that are not whiteboard enabled, you can also use a custom Jakarta-RS Application and register it as a Whiteboard Service.
Create the ModifyApplication:
- Add the @Component annotation to the class definition to mark it as an OSGi DS component.
- Add the @JakartarsApplicationBase annotation to the class definition to mark the service as a Jakarta-RS Whiteboard Application type that should be processed by the Jakarta-RS Whiteboard. It also defines the URI, relative to the root context of the whiteboard, at which the Application should be registered.
- Add the @JakartarsName annotation to the class definition to specify a user defined name that can be used to identify a Jakarta RESTful Web Services whiteboard service.
- Register the JacksonJsonConverter in the getClasses() method.

@JakartarsApplicationBase("mod")
@JakartarsName("modifyApplication")
@Component(service=Application.class)
public class ModifyApplication extends Application {
@Override
public Set<Class<?>> getClasses() {
return Set.of(JacksonJsonConverter.class);
}
}
Update the ModifierRestService:
- Add the @JakartarsApplicationSelect annotation to select the Jakarta RESTful Web Services Application with which this Whiteboard service should be associated.
- Remove the @JSONRequired annotation, as the converter does not provide the necessary capability.

@JakartarsResource
@JakartarsName("modifier")
@Component(service = ModifierRestService.class, scope = ServiceScope.PROTOTYPE)
@Path("/")
@Produces(MediaType.APPLICATION_JSON)
@JakartarsApplicationSelect("(osgi.jakartars.name=modifyApplication)")
public class ModifierRestService { ... }
After this change you will notice that the REST resource is not available anymore with the default application. This is because we selected the modifyApplication
as the application where the resource should be available. To register the resource with the default application and the modifyApplication
, you can either configure it to select all Applications in the whiteboard:
@JakartarsApplicationSelect("(osgi.jakartars.name=*)")
or provide an LDAP filter that selects the two explicitly:
@JakartarsApplicationSelect("(|(osgi.jakartars.name=.default)(osgi.jakartars.name=modifyApplication))")
If you need to use the @JakartarsApplicationSelect
annotation on multiple Jakarta-RS Resources and Jakarta-RS Extensions, it is helpful to define a Custom Component Property Annotation.
@ComponentPropertyType
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.TYPE)
public @interface TargetModifyApp {
String osgi_jakartars_application_select() default "(osgi.jakartars.name=modifyApplication)";
}
You can then use the @TargetModifyApp annotation instead in the ModifierRestService:
@JakartarsResource
@JakartarsName("modifier")
@Component(service = ModifierRestService.class, scope = ServiceScope.PROTOTYPE)
@Path("/")
@Produces(MediaType.APPLICATION_JSON)
@TargetModifyApp
public class ModifierRestService { ... }
As you can see, you have multiple ways to register a Jakarta-RS Extension.
Note:
Many thanks to Tim Ward who helped me in understanding the Jakarta RESTful Web Services Whiteboard and especially the Extension mechanisms better!
With Jackson you can control the format of the JSON structure via an ObjectMapper
. In case of a Custom Entity Provider like the one above, you are in full control of the ObjectMapper
instance. To make this more dynamic you could also provide an ObjectMapper
via dependency injection. For this you need a Jakarta-RS ContextResolver extension for an ObjectMapper.
To make the effect visible, let’s first extend the ModifierRestService
with a resource method that returns a more complex data structure:
@JakartarsResource
@JakartarsName("modifier")
@Component(service = ModifierRestService.class, scope = ServiceScope.PROTOTYPE)
@Path("/")
@Produces(MediaType.APPLICATION_JSON)
@JakartarsApplicationSelect("(osgi.jakartars.name=*)")
public class ModifierRestService {
@Reference
private volatile List<StringModifier> modifier;
@GET
@Path("modify/{input}")
public List<String> modify(@PathParam("input") String input) {
return modifier.stream()
.map(mod -> mod.modify(input))
.collect(Collectors.toList());
}
@GET
@Path("modifyhtml/{input}")
@Produces(MediaType.TEXT_HTML)
@HtmlModification
public String modifyHtml(@PathParam("input") String input) {
return modifier.stream()
.map(mod -> mod.modify(input))
.collect(Collectors.joining(";"));
}
@GET
@Path("pretty/{input}")
public Result pretty(@PathParam("input") String input) {
List<String> result = modifier.stream()
.map(mod -> mod.modify(input))
.collect(Collectors.toList());
return new Result(input, result);
}
public static record Result(String input, List<String> result) {};
}
The new endpoint initially returns the compact default format:
{"input":"fubar","result":["FUBAR","rabuf"]}
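The property names in that JSON come from the record components. A quick plain-Java check of the accessors (no Jackson involved, just illustrating where the names and values come from; the class and value names mirror the example above):

```java
import java.util.List;

public class ResultDemo {

    // Same shape as the Result record nested in ModifierRestService
    public record Result(String input, List<String> result) {}

    public static void main(String[] args) {
        Result r = new Result("fubar", List.of("FUBAR", "rabuf"));
        // Jackson maps the component names to the JSON properties: input, result
        System.out.println(r.input() + " -> " + r.result());
    }
}
```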
Create a ContextResolver for the ObjectMapper in the rest module:
- Add the @Component annotation to the class definition to mark it as an OSGi DS component.
- Add the @JakartarsExtension annotation to the class definition to mark the service as a Jakarta-RS Whiteboard Extension type that should be processed by the Jakarta-RS Whiteboard.
- Add the @Provider annotation to mark the implementation of an extension interface that should be discoverable by the Jakarta-RS runtime during a provider scanning phase.

@JakartarsExtension
@Component
@Provider
public class CustomObjectMapperProvider implements ContextResolver<ObjectMapper> {
private ObjectMapper mapper;
public CustomObjectMapperProvider() {
this.mapper = new ObjectMapper();
this.mapper.enable(SerializationFeature.INDENT_OUTPUT);
}
@Override
public ObjectMapper getContext(Class<?> clazz) {
return mapper;
}
}
If the Custom Entity Provider JacksonJsonConverter
is still in place in your setup, you need to modify it to get the ObjectMapper
injected. This can be done by using the jakarta.ws.rs.ext.Providers
:
@Consumes(MediaType.WILDCARD)
@Produces(MediaType.APPLICATION_JSON)
@Provider
public class JacksonJsonConverter implements MessageBodyReader<Object>, MessageBodyWriter<Object> {
private Logger logger = LoggerFactory.getLogger(JacksonJsonConverter.class);
private final Converter converter = Converters.newConverterBuilder()
.rule(String.class, this::toJson)
.rule(this::toObject)
.build();
@Context
private Providers providers;
private ObjectMapper mapper;
private ObjectMapper getObjectMapper() {
if (this.mapper == null) {
if (providers != null) {
this.mapper = providers
.getContextResolver(ObjectMapper.class, MediaType.APPLICATION_JSON_TYPE)
.getContext(ObjectMapper.class);
} else {
this.mapper = new ObjectMapper();
}
}
return this.mapper;
}
private String toJson(Object value, Type targetType) {
try {
return getObjectMapper().writeValueAsString(value);
} catch (JsonProcessingException e) {
logger.error("error on JSON creation", e);
return e.getLocalizedMessage();
}
}
private Object toObject(Object o, Type t) {
try {
if (List.class.getName().equals(t.getTypeName())) {
return getObjectMapper().readValue((String) o, List.class);
}
return getObjectMapper().readValue((String) o, String.class);
} catch (IOException e) {
logger.error("error on JSON parsing", e);
}
return ConverterFunction.CANNOT_HANDLE;
}
...
}
{
"input" : "fubar",
"result" : [ "FUBAR", "rabuf" ]
}
If you also have the custom `Application` deployed, try to navigate to http://localhost:8080/mod/pretty/fubar. Here you will now see an error, because the `ContextResolver` by default is only registered with the `.default` application. This can be solved by either adding the `@JakartarsApplicationSelect` annotation to the `CustomObjectMapperProvider`, or simply by adding the class to `ModifyApplication#getClasses()`. If you have a custom `Application`, the latter is probably the better fitting way of solving this.
@JakartarsApplicationBase("mod")
@JakartarsName("modifyApplication")
@Component(service=Application.class)
public class ModifyApplication extends Application {
@Override
public Set<Class<?>> getClasses() {
return Set.of(
CustomObjectMapperProvider.class,
JacksonJsonConverter.class);
}
}
Now http://localhost:8080/mod/pretty/fubar should also produce the correct output without an error.
As explained before with the Custom Entity Provider, you can register the Jakarta Extension as a Whiteboard Extension as above, or as a plain Jakarta Extension via a Feature or an Application. This depends on the use case you want to solve. Also note that the `CustomObjectMapperProvider` registered as a Whiteboard Extension Service is also resolved by the `com.fasterxml.jackson.jakarta.rs.json.JacksonJsonProvider` or the `org.glassfish.jersey.media.jersey-media-json-jackson` module. To verify this, change the `JacksonJsonFeature` back to return the `com.fasterxml.jackson.jakarta.rs.json.JacksonJsonProvider` instead of your custom `JacksonJsonConverter`. Or even disable the `JacksonJsonFeature` and add the `org.glassfish.jersey.media.jersey-media-json-jackson` bundle back to the Run Requirements of the app module.
For simple use cases like the one in this tutorial, registering Jakarta Extensions as a Whiteboard Extension is the easiest approach. In more advanced setups, or if you need to consume Jakarta Extensions that are provided by non-OSGi environments, the usage of a Feature as Whiteboard Extension or a whiteboard enabled Jakarta Application is usually more efficient.
In the past I had to implement file processing services as part of the API. This means you upload a file, process it and download the result. This way you can for example migrate model files to a newer version, perform a static analysis of a model and even transform a model to some executable format and execute the result for simulation scenarios.
Using the Jakarta RESTful Web Service Specification and Jersey as implementation, this becomes quite easy. The multipart support is provided via a Jersey Module.
Add the following dependency to the `dependencies` section:

<dependency>
    <groupId>org.glassfish.jersey.media</groupId>
    <artifactId>jersey-media-multipart</artifactId>
</dependency>
Note:
From Jersey 3.1.0 on, the `MultiPartFeature` no longer needs to be registered manually, as it is registered automatically. So there is no need for an additional Jakarta-RS Feature or the registration via a Jakarta-RS Application. See Jersey Documentation - Multipart for further information.
Open the `ModifierRestService` and add a Resource method that supports a file upload:

- Annotate the method with `@Consumes(MediaType.MULTIPART_FORM_DATA)` and `@Produces(MediaType.TEXT_PLAIN)`.
- Use `@FormParam("file")` to inject the form parameter as `EntityPart`, `InputStream` or `String` data-types, or as a `List<EntityPart>`.

// get the EntityPart and the InputStream form parameter with name "file"
// received by a multipart/form-data POST request
@POST
@Path("modify/upload")
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Produces(MediaType.TEXT_PLAIN)
public Response upload(
@FormParam("file") EntityPart part,
@FormParam("file") InputStream input) throws IOException {
if (part != null
&& part.getFileName().isPresent()) {
StringBuilder inputBuilder = new StringBuilder();
try (InputStream is = input;
BufferedReader br =
new BufferedReader(new InputStreamReader(is))) {
String line;
while ((line = br.readLine()) != null) {
inputBuilder.append(line).append("\n");
}
}
// modify file content
String inputString = inputBuilder.toString();
List<String> modified = modifier.stream()
.map(mod -> mod.modify(inputString))
.collect(Collectors.toList());
String resultString = part.getFileName().get() + "\n\n";
resultString += String.join("\n", modified);
return Response.ok(resultString).build();
}
return Response.status(Status.PRECONDITION_FAILED).build();
}
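The stream-reading and modification logic in the body of upload() can be exercised in isolation. The following plain-Java sketch (with `UnaryOperator<String>` standing in for the `StringModifier` service) shows the same transformation on an in-memory stream:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.util.List;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;

public class UploadTransformSketch {
    // reads the stream line by line and applies each modifier to the whole content,
    // mirroring the body of the upload() resource method
    static String transform(InputStream input, List<UnaryOperator<String>> modifier) {
        StringBuilder inputBuilder = new StringBuilder();
        try (BufferedReader br = new BufferedReader(new InputStreamReader(input))) {
            String line;
            while ((line = br.readLine()) != null) {
                inputBuilder.append(line).append("\n");
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        String inputString = inputBuilder.toString();
        List<String> modified = modifier.stream()
                .map(mod -> mod.apply(inputString))
                .collect(Collectors.toList());
        return String.join("\n", modified);
    }

    public static void main(String[] args) {
        InputStream in = new ByteArrayInputStream("fubar".getBytes());
        System.out.println(transform(in, List.of(String::toUpperCase)));
    }
}
```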
Add the `org.glassfish.jersey.media.jersey-media-multipart` bundle to the Run Requirements:

-runrequires: \
bnd.identity;id='org.fipro.service.modifier.impl',\
bnd.identity;id='org.fipro.service.modifier.rest',\
bnd.identity;id='org.fipro.service.modifier.app',\
bnd.identity;id='org.eclipse.parsson.jakarta.json',\
bnd.identity;id='slf4j.simple',\
bnd.identity;id='com.sun.xml.bind.jaxb-osgi',\
bnd.identity;id='org.glassfish.jersey.media.jersey-media-multipart'
If you are using a tool like Postman, you can test if the multipart upload is working by executing a POST request on http://localhost:8080/modify/upload. Name the form-data parameter file and check that it is a File and not a Text.

Note:
To return a file instead of plain text, you can return an `EntityPart` and change the `@Produces` annotation to `MediaType.MULTIPART_FORM_DATA`.
@POST
@Path("modify/change")
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Produces(MediaType.MULTIPART_FORM_DATA)
public Response change(
@FormParam("file") EntityPart part,
@FormParam("file") InputStream input) throws IOException {
if (part != null
&& part.getFileName().isPresent()) {
StringBuilder inputBuilder = new StringBuilder();
try (InputStream is = input;
BufferedReader br =
new BufferedReader(new InputStreamReader(is))) {
String line;
while ((line = br.readLine()) != null) {
inputBuilder.append(line).append("\n");
}
}
// modify file content
String inputString = inputBuilder.toString();
List<String> modified = modifier.stream()
.map(mod -> mod.modify(inputString))
.collect(Collectors.toList());
String resultString = String.join("\n", modified);
return Response
.ok(EntityPart
.withFileName("changed.txt")
.content(resultString)
.build())
.build();
}
return Response.status(Status.PRECONDITION_FAILED).build();
}
If you have asked yourself when to use a deployment on Jetty and when to use the OSGi Servlet Whiteboard, this part of the tutorial gives you an answer. We will publish a simple form as a static resource in our application. This way we are able to test the file upload even without additional tools.
To register an HTML form as a static resource with our REST service, we use the Whiteboard Specification for Jakarta™ Servlet (formerly known as Http Whiteboard Specification).
Add the required dependency to the `dependencies` section. Then add the `@HttpWhiteboardResource` annotation to the `ModifierRestService` class definition:
@JakartarsResource
@JakartarsName("modifier")
@Component(service = ModifierRestService.class, scope = ServiceScope.PROTOTYPE)
@Path("/")
@Produces(MediaType.APPLICATION_JSON)
@JakartarsApplicationSelect("(osgi.jakartars.name=*)")
@HttpWhiteboardResource(pattern = "/files/*", prefix = "static")
public class ModifierRestService { ... }
Important:
Once you add the `@HttpWhiteboardResource` annotation, your application won't resolve anymore with the deployment on Jetty setup. The reason is that the `@HttpWhiteboardResource` annotation itself uses `@RequireHttpWhiteboard`, which means an implementation of the Jakarta Servlet Whiteboard is required. Either replace the annotation with the corresponding Component Properties to avoid the additional requirement (see below), or ensure to run the example on the OSGi Jakarta Servlet Whiteboard.
@JakartarsResource
@JakartarsName("modifier")
@Component(
service = ModifierRestService.class,
scope = ServiceScope.PROTOTYPE,
// use component properties instead of component property type annotation
// this way we avoid the requirement on the Servlet Whiteboard and the service also works on a Jetty
property = {
"osgi.http.whiteboard.resource.pattern=/files/*",
"osgi.http.whiteboard.resource.prefix=static"
})
@Path("/")
@Produces(MediaType.APPLICATION_JSON)
@JakartarsApplicationSelect("(osgi.jakartars.name=*)")
public class ModifierRestService { ... }
With this configuration all requests to URLs with the `/files` path are mapped to resources in the `static` folder. The next step is therefore to add the static form to the project:
<html>
<body>
<h1>File Upload to Jakarta RESTful Web Service</h1>
<form
action="http://localhost:8080/modify/upload"
method="post"
enctype="multipart/form-data">
<p>
Select a file : <input type="file" name="file" size="45"/>
</p>
<input type="submit" value="Upload It"/>
</form>
</body>
</html>
After starting the app via app-http/app.bndrun you can open a browser and navigate to http://localhost:8080/files/upload.html. If you decided to modify the app project for deployment via OSGi Servlet Whiteboard, you of course need to start the application via app/app.bndrun. Now you can select a file (don’t use a binary file) and upload it to see the modification result of the REST service.
To debug your REST based service you can start the application by using Debug OSGi instead of Run OSGi in the app.bndrun. But in the OSGi context you often face issues even before you can debug code. In such situations you usually use an OSGi console to inspect the runtime. Apache Felix provides two types of OSGi consoles to inspect the OSGi runtime:
Note:
The Webconsole only works on a deployment via OSGi Servlet Whiteboard. An alternative for OSGi Runtime Inspection would be OSGi.fx, which I plan to cover in an upcoming blog post.
To clearly separate the target application runtime from a debug runtime, it is best practice to create an additional .bndrun file. This file includes the app.bndrun and extends it with configurations to enable the inspection capabilities.
-include: ~app.bndrun
test-index: target/test-index.xml;name="app Test"
-standalone: ${index},${test-index}
-runproperties: \
osgi.console=,\
osgi.console.enable.builtin=false
-runrequires.debug: osgi.identity;filter:='(osgi.identity=org.apache.felix.gogo.shell)',\
osgi.identity;filter:='(osgi.identity=org.apache.felix.gogo.runtime)',\
osgi.identity;filter:='(osgi.identity=org.apache.felix.gogo.command)'
-resolve: manual
The -runproperties
configuration will start the console in interactive mode. The -runrequires.debug
configuration adds the necessary console bundles to the runtime.
The Gogo Shell becomes available in the Console View of the IDE. You can now interact with the runtime in the Console View, e.g. list all bundles in the runtime via
lb
As mentioned before, with the OSGi Jakarta Servlet Whiteboard, you also have the option to use the Felix Webconsole. To demonstrate this, we create a debug configuration in the app-http module.
-include: ~app.bndrun
test-index: target/test-index.xml;name="app Test"
-standalone: ${index},${test-index}
-runproperties: \
osgi.console=,\
osgi.console.enable.builtin=false,\
org.osgi.service.http.port=-1
-runrequires.debug: osgi.identity;filter:='(osgi.identity=org.apache.felix.webconsole)',\
osgi.identity;filter:='(osgi.identity=org.apache.felix.webconsole.plugins.ds)',\
osgi.identity;filter:='(osgi.identity=org.apache.felix.gogo.shell)',\
osgi.identity;filter:='(osgi.identity=org.apache.felix.gogo.runtime)',\
osgi.identity;filter:='(osgi.identity=org.apache.felix.gogo.command)'
-resolve: manual
Compared to the app/debug.bndrun file, we include the Felix Webconsole bundles and ensure that the default Jetty in the Felix Jetty bundle is not started.
Now you can open a browser and navigate to http://localhost:8080/system/console. Login with the default username/password admin/admin. Using the Webconsole you can check which bundles are installed and in which state they are. You can also inspect the available OSGi DS Components and check the active configurations.
As the project setup is a plain Java/Maven project, the build is pretty easy:
Enter `clean verify` in the Goals field.

From the command line:
mvn clean verify
Note:
It can happen that an error occurs on building the app module if you followed the steps in this tutorial exactly. The reason is that the build detects a change in the Run Bundles of the app.bndrun file. But it is just a difference in the ordering of the bundles. To solve this open the app.bndrun file, remove all entries from the Run Bundles and hit Resolve again. After that the order of the Run Bundles will be the same as the one in the build. This could also be avoided by configuring the `bnd-export-maven-plugin` to set the `failOnChanges` parameter to `false`.
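The failOnChanges option mentioned above is a documented parameter of the bnd-export-maven-plugin. A hedged sketch of the corresponding pom.xml configuration could look like this:

```xml
<plugin>
    <groupId>biz.aQute.bnd</groupId>
    <artifactId>bnd-export-maven-plugin</artifactId>
    <configuration>
        <!-- do not fail the build when the resolved Run Bundles differ, e.g. only in order -->
        <failOnChanges>false</failOnChanges>
        <bndruns>
            <bndrun>app.bndrun</bndrun>
        </bndruns>
    </configuration>
</plugin>
```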
Note:
This build process works because we used the Eclipse IDE with Bndtools. If you are using another IDE or working only on the command line, have a look at the OSGi enRoute Microservices Tutorial that explains the separate steps for building from command line.
After the build succeeds you will find the resulting app.jar
in jakartars/app/target. Execute the following line to start the self-executable jar from the command line if you are located in the jakartars folder:
java -jar app/target/app.jar
If you also want to build the debug configuration, you need to enable this in the pom.xml file of the app and/or the app-http module:
build/plugins
section update the bnd-resolver-maven-plugin
and the bnd-export-maven-plugin
and add the debug.bndrun to the bndruns
.<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-export-maven-plugin</artifactId>
<configuration>
<bndruns>
<bndrun>app.bndrun</bndrun>
<bndrun>debug.bndrun</bndrun>
</bndruns>
</configuration>
</plugin>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-resolver-maven-plugin</artifactId>
<configuration>
<bndruns>
<bndrun>app.bndrun</bndrun>
<bndrun>debug.bndrun</bndrun>
</bndruns>
</configuration>
</plugin>
Executing the build again, you will now also find a debug.jar in the target folder of the app module, which you can use to inspect the OSGi runtime.
Implementing Jakarta™ RESTful Web Services with the OSGi Compendium Specification Release 8.1 and the corresponding reference implementations is similar to approaches with other frameworks like Spring Boot, Quarkus or Microprofile. And if you want to wrap existing OSGi services, it is definitely the most comfortable one. If consuming OSGi services is not needed, then every framework has its pros and cons.
With this tutorial I hope I can help developers to get started with the OSGi Jakarta™ RESTful Web Services Whiteboard and to get a better understanding of the mechanisms. Writing it at least helped me a lot in that area.
In the last years I published several blog posts and gave several talks related to OSGi, and often the topic OSGi Remote Services was raised, but never really covered in detail. Scott Lewis, the project lead of the Eclipse Communication Framework, was really helpful whenever I encountered issues with Remote Services. I promised to write a blog post about that topic as a favour for all the support. And with this blog post I finally want to keep my promise. That said, let’s start with OSGi Remote Services.
First I want to explain the motivation for having a closer look at OSGi Remote Services. Looking at general software architecture discussions in the past, service oriented architectures and microservices are a huge topic. Per definition the idea of a microservices architecture is to have
While new frameworks and tools came up over the years, the OSGi specifications have covered these topics for a long time. Via the service registry and the service dynamics you can build up very small modules. Those modules can then be integrated into small runtimes and deployed in different environments (apart from the needed JVM or a database if required). The services in those small independent deployments can then be accessed in different ways, like using the HTTP Whiteboard or JAX-RS Whiteboard. This satisfies the aspect of communication between services via lightweight mechanisms. For inhomogeneous environments the usage of those specifications is a good match. But it means that you need to implement the access layer on the provider side (e.g. the JAX-RS wrapper to access the service via REST) and you need to implement the service access on the consumer side by using a corresponding framework to execute the REST calls.
Ideally the developer of the service as well as the developer of the service consumer should not need to think about the infrastructure of the whole application. Well, it is always good that everybody in a project knows about everything, but the idea is to not make your code dependent on infrastructure. And this is where OSGi Remote Services come in. You develop the service and the service consumer as if they were executed in the same runtime. In the deployment the lightweight communication will be added to support service communication over a network.
And as initially mentioned, I want to have a look at ways to possibly get rid of the networking issues I faced in the presentations in the past.
To understand this blog post you should be familiar with OSGi services and ideally with OSGi Declarative Services. If you are not familiar with OSGi DS, you can get an introduction by reading my blog post Getting Started with OSGi Declarative Services.
In short, the OSGi Service Layer specifies a Service Producer that publishes a service, and a Service Consumer that listens and retrieves a service. This is shown in the following picture:
With OSGi Remote Services this picture is basically the same. The difference is that the services are registered and consumed across network boundaries. For OSGi Remote Services the above picture could be extended to look like the following:
To understand the above picture and the following blog post better, here is a short glossary for the used terms:
To get a slightly better understanding, the following picture shows some more details inside the Remote Service Implementation block.
Note:
Actually this picture is still a simplified version, as internally there are Endpoint Event Listener and Remote Service Admin Listener that are needed to trigger all the necessary actions. But to get an idea how things play together this picture should be sufficient.
Now let’s explain the picture in more detail:
To simplify the picture again, the important takeaways are the Distribution Provider and the Discovery. The Distribution Provider is responsible for exporting and importing the service, the Discovery is responsible for announcing and discovering the service. The other terms are needed for a deeper understanding, but for a high level understanding of OSGi Remote Services, these two are sufficient.
Now it is time to get our hands dirty and play with OSGi Remote Services. This tutorial has several steps:
There are different ways and tools available for OSGi development. In this tutorial I will use Bndtools. I also published this tutorial with other toolings if you don’t want to use Bndtools:
While the implementation and export of an OSGi service as a Remote Service is trivial at first glance, the definition of the runtime can become quite complicated. Especially collecting the necessary bundles is not that easy without some guidance.
As a reference, with Equinox as underlying OSGi framework the following bundles need to be part of the runtime as a basis:
With the above basic runtime configuration the Remote Services will not yet work. There are still two things missing, the Discovery and the Distribution Provider. ECF provides different implementations for both. Which implementations to use needs to be defined by the project. In this tutorial we will use Zeroconf/JmDNS for the Discovery and the Generic Distribution Provider:
Note:
You can find the list of different implementations with the documentation about the bundles, configuration types and intents in the ECF Wiki:
The project setup with Bndtools is different compared to the PDE tooling. With Bndtools you set up a Workspace and configure the repositories to use. The ECF project provides Workspace/Project Templates to make the setup easier.
Further details on the Bndtools support provided by the ECF project can be found in the Eclipse Wiki.
As an alternative to using the provided ECF Bndtools Templates, you can configure the workspace manually. This might be useful as the ECF Templates add everything ECF provides to the workspace (including examples). That is perfect for getting started and learning the topics, but for more experienced setups this is probably too much, as you want to limit your repository to what you really need.
For the manual setup you create a BND OSGi Workspace by using the default bndtools/workspace:
To add the ECF related artifacts you need to modify some files in the workspace:
# ECF
org.eclipse.platform:org.eclipse.core.jobs:3.12.0
org.eclipse.platform:org.eclipse.equinox.common:3.15.100
org.eclipse.platform:org.eclipse.equinox.concurrent:1.2.100
org.eclipse.ecf:org.eclipse.ecf:3.10.0
org.eclipse.ecf:org.eclipse.ecf.console:1.3.100
org.eclipse.ecf:org.eclipse.ecf.discovery:5.1.1
org.eclipse.ecf:org.eclipse.ecf.identity:3.9.402
org.eclipse.ecf:org.eclipse.ecf.osgi.services.distribution:2.1.600
org.eclipse.ecf:org.eclipse.ecf.osgi.services.remoteserviceadmin:4.9.3
org.eclipse.ecf:org.eclipse.ecf.osgi.services.remoteserviceadmin.console:1.3.0
org.eclipse.ecf:org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy:1.0.101
org.eclipse.ecf:org.eclipse.ecf.remoteservice.asyncproxy:2.1.200
org.eclipse.ecf:org.eclipse.ecf.remoteservice:8.14.0
org.eclipse.ecf:org.eclipse.ecf.sharedobject:2.6.200
org.eclipse.ecf:org.eclipse.osgi.services.remoteserviceadmin:1.6.300
# ECF Discovery Zeroconf
org.eclipse.ecf:org.eclipse.ecf.provider.jmdns:4.3.301
# ECF Distribution Provider - Generic
org.eclipse.ecf:org.eclipse.ecf.provider:4.9.1
org.eclipse.ecf:org.eclipse.ecf.provider.remoteservice:4.6.1
There are of course more artifacts provided by ECF. But for this example we keep the minimum needed.
Note:
Since the ECF artifacts are available on Maven Central you could also simply edit the existing central.maven file and add the ECF artifacts there, but for a better separation we split it here.
Now add the created ecfatcentral.maven file to the workspace build:
-plugin.10.ECFATCENTRAL: \
aQute.bnd.repository.maven.provider.MavenBndRepository; \
releaseUrl=https://repo.maven.apache.org/maven2/; \
index=${.}/ecfatcentral.maven; \
name="ECF Remote Services"
Bndtools also provides the option to include a p2 repository directly as explained here. To use the ECF p2 repository directly add the following instruction to the build.bnd file instead:
-plugin.11.p2: \
aQute.bnd.repository.p2.provider.P2Repository; \
url = https://download.eclipse.org/rt/ecf/3.14.31/site.p2; \
name = ECF Remote Services p2
Note:
If the newly added repositories do not show up in the Repositories view (bottom left in the default Bndtools Perspective), click on Reload workspace in the Bndtools Explorer (the circle arrows in the upper left corner).
Ensure that you have switched to the Bndtools Perspective for the following steps.
StringModifier
in the package org.fipro.modifier.api
package org.fipro.modifier.api;
public interface StringModifier {
String modify(String input);
}
StringInverter
into the package org.fipro.modifier.inverter
package org.fipro.modifier.inverter;
import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
@Component(property= {
"service.exported.interfaces=*",
"service.exported.configs=ecf.generic.server" }
)
public class StringInverter implements StringModifier {
@Override
public String modify(String input) {
return (input != null)
? new StringBuilder(input).reverse().toString()
: "No input given";
}
}
The only thing that needs to be done additionally in comparison to creating a local OSGi service, is to configure that the service should be exported as Remote Service. This is done by setting the component property service.exported.interfaces
. The value of this property needs to be a list of types for which the class is registered as a service. For a simple use case like the above, the asterisk can be used, which means to export the service for all interfaces under which it is registered, but to ignore the classes. For more detailed information have a look at the Remote Service Properties section of the OSGi Compendium Specification.
The other component property used in the above example is service.exported.configs
. This property is used to specify the configuration types, for which the Distribution Provider should create Endpoints. If it is not specified, the Distribution Provider is free to choose the default configuration type for the service.
Note:
In the above example we use the ECF Generic Provider. This one by default chooses an SSL configuration type, so without additional configuration the example would not work if we don't specify the configuration type.
Additionally you can specify Intents via the service.exported.intents
component property to constrain the possible communication mechanisms that a distribution provider can choose to distribute a service. An example will be provided at a later step.
Now you can start the org.fipro.modifier.inverter.app via the Run OSGi button in the upper right corner of the editor. With the console bundles in the Run Requirements the console will be available, apart from that you won’t see anything now.
Note:
The creation of a dedicated application project is not mandatory, but a recommended best practice to separate the application runtime from the service implementation. Especially if you consider that an application typically consists of several services, it doesn’t make much sense to have the launch configuration in one service bundle project. For this tutorial and for testing you can of course also edit the .bndrun file in the Service Implementation project.
Note:
If you used the ECF Project Templates to create the Service Implementation project, you will find two pre-configured .bndrun files in the project root that can be used to start the Service Provider Runtime. Open the file org.fipro.modifier.inverter.zeroconf.generic.bndrun and click on Resolve to calculate the Run Bundles. Once the result is accepted via Update in the dialog, the Service Provider Runtime can be started via Run OSGi.
The implementation of a Remote Service Consumer is also quite simple. From the development perspective there is nothing special to consider. The service consumer is implemented without any additions. Only the runtime needs to be extended to contain the necessary bundles for Discovery and Distribution.
The simplest way of implementing a service consumer is a Gogo Shell command.
ModifyCommand
into the package org.fipro.modifier.client
package org.fipro.modifier.client;
import java.util.List;
import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
@Component(
property= {
"osgi.command.scope:String=fipro",
"osgi.command.function:String=modify"},
service=ModifyCommand.class
)
public class ModifyCommand {
@Reference
volatile List<StringModifier> modifier;
public void modify(String input) {
if (modifier.isEmpty()) {
System.out.println("No StringModifier registered");
} else {
modifier.forEach(m -> System.out.println(m.modify(input)));
}
}
}
If you now click on Run OSGi on the Run tab of the editor, the Gogo Shell becomes available in the Console view of the IDE. Once the application is started you can execute the created Gogo Shell command via
modify <input>
If services are available, it will print out the modified results. Otherwise the message “No StringModifier registered” will be printed.
Note:
If you used the ECF Project Templates to create the Service Consumer project, you will find two pre-configured .bndrun files in the project root that can be used to start the Service Consumer Runtime. Open the file org.fipro.modifier.client.zeroconf.generic.bndrun and click on Resolve to calculate the Run Bundles. Once the result is accepted via Update in the dialog, the Service Consumer Runtime can be started via Run OSGi.
There are several events with regard to importing and exporting Remote Services that are fired by the Remote Service Admin synchronously once they happen. These events are posted asynchronously via the OSGi Event Admin under the topic
org/osgi/service/remoteserviceadmin/<type>
Where <type>
can be one of the following:
A simple event listener that prints to the console on any Remote Service Admin Event could look like this:
@Component(property = EventConstants.EVENT_TOPIC + "=org/osgi/service/remoteserviceadmin/*")
public class RemoteServiceEventListener implements EventHandler {
@Override
public void handleEvent(Event event) {
System.out.println(event.getTopic());
for (String objectClass : ((String[])event.getProperty("objectClass"))) {
System.out.println("\t"+objectClass);
}
}
}
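The wildcard in the component's EVENT_TOPIC property relies on the Event Admin topic matching. A simplified plain-Java sketch (not the real Event Admin implementation) illustrates the effect of a subscription ending in `/*`:

```java
public class TopicMatchSketch {
    // simplified: a subscription ending in "/*" matches any topic under that prefix,
    // otherwise the topic must match exactly
    static boolean matches(String subscription, String topic) {
        if (subscription.endsWith("/*")) {
            String prefix = subscription.substring(0, subscription.length() - 1);
            return topic.startsWith(prefix);
        }
        return subscription.equals(topic);
    }

    public static void main(String[] args) {
        String sub = "org/osgi/service/remoteserviceadmin/*";
        // EXPORT_REGISTRATION is one of the event types defined in chapter 122.7
        System.out.println(matches(sub, "org/osgi/service/remoteserviceadmin/EXPORT_REGISTRATION")); // true
        System.out.println(matches(sub, "org/osgi/service/event/OTHER")); // false
    }
}
```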
For further details on the Remote Service Admin Events have a look at the OSGi Compendium Specification Chapter 122.7.
If you need to react synchronously on these events, you can implement a RemoteServiceAdminListener
. I typically would not recommend this, unless you really want blocking calls on import/export events. Typically it is intended to be used internally by the Remote Service Admin. But for debugging purposes the ECF project also provides a DebugRemoteServiceAdminListener
. It writes the endpoint description via a Writer to support debugging of Remote Services. Via the following class you could easily register a DebugRemoteServiceAdminListener
via OSGi DS that prints the information on the console.
@Component
public class DebugListener
extends DebugRemoteServiceAdminListener
implements RemoteServiceAdminListener {
// register the DebugRemoteServiceAdminListener via DS
}
To test this you can either add the above components to one of the existing bundles, or create a new bundle and add that bundle to the runtimes.
The ECF project provides several ways for runtime inspection and runtime debugging. This is mainly done via Gogo Shell commands provided via separate bundles. To enable the OSGi console and the ECF console commands, you need to add the following bundles to your runtime:
With the ECF Console bundles added to the runtime, there are several commands to inspect and interact with the Remote Service Admin. As an overview the available commands are listed in the wiki: Gogo Commands for Remote Services Development
Additionally the DebugRemoteServiceAdminListener
described above is activated by default with the ECF Console bundles. It can be activated or deactivated in the runtime via the command
ecf:rsadebug <true/false>
One of the biggest issues I faced when working with Remote Services is networking, as mentioned in the introduction. In the above example the ECF Generic Distribution Provider is used for a simpler setup. But for example in a corporate network with enabled firewalls somewhere in the network setup, the example will probably not work. As said before, the ECF project provides multiple Distribution Provider implementations, which gives the opportunity to configure the setup to match the project needs. One interesting implementation in that area is the JAX-RS Distribution Provider. Using that one could probably help solve several of the networking issues related to firewalls. But as with the whole Remote Service topic, the complexity of the setup is quite high because of the increased number of dependencies that need to be resolved.
The JAX-RS Distribution Provider implementation is available for Eclipse Jersey and Apache CXF. It uses the OSGi HttpService to register the JAX-RS resource, and of course it then also needs a Servlet container like Eclipse Jetty to provide the JAX-RS resource. I will show the usage of the Jersey based implementation in the following sections.
Unfortunately the JAX-RS Distribution Provider is not available via Maven Central. As Bndtools supports p2 Repositories, we can add the one from GitHub to make it available in our workspace. The p2 support is only able to add the whole repository, so you will see everything from that p2 repository in the workspace. But as there is no availability on Maven Central, the only other option would be to download the artifacts locally and place them in a local structure (which is actually what the ECF Bndtools Workspace Template does). If you used the ECF Bndtools Workspace Templates, the JAX-RS Distribution Provider and its dependencies are already available in the workspace. There are no additional steps necessary for consuming the JAX-RS Distribution Provider and its dependencies.
If you have chosen the manual project setup I recommend using the p2 repository:
-plugin.12.p2: \
aQute.bnd.repository.p2.provider.P2Repository; \
url = https://raw.githubusercontent.com/ECF/JaxRSProviders/master/build/; \
name = ECF JAX-RS Distribution Provider p2
Additionally we need a server that publishes the JAX-RS resource. We will use a Jetty server.
org.apache.felix:org.apache.felix.http.jetty:4.1.14
Note:
The ECF Bndtools Workspace Template used a local repository approach in the past. That means the artifacts are physically located in subfolders of the cnf directory. To update them you needed to download the artifacts from the respective GitHub repositories and add/replace the jars in the local repository structure. This was recently changed to also make use of the p2 repository support. If you created an ECF Bndtools Workspace in the past you might want to check if the usage of p2 repositories could improve your project setup.
Note:
The local repository approach and the limitation with regards to updates can also be seen as an advantage. The JAX-RS Distribution Provider is not yet released and published officially. So the p2 update site is generic and if the libraries are updated there, the updates will be directly consumed on a workspace update. Anyhow I personally don’t like having jars locally in my Bnd OSGi Workspace as these artifacts also need to be checked into the repository. I’d rather configure the remote repositories and go into the “offline mode” in case I have to work without an internet connection.
The implementation of the service already looks different compared to what you have seen so far. Instead of only adding the necessary Component Properties to configure the service as a Remote Service, the service implementation does directly contain the JAX-RS annotations. That of course also means that the annotations need to be available.
Add the following UppercaseModifier snippet to the project:
package org.fipro.modifier.uppercase;
import java.util.Locale;
import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
//The JAX-RS path annotation for this service
@Path("/modify")
//The OSGi DS component annotation
@Component(
immediate = true,
property = {
"service.exported.interfaces=*",
"service.exported.intents=jaxrs"})
public class UppercaseModifier implements StringModifier {
@GET
// The JAX-RS annotation to specify the result type
@Produces(MediaType.TEXT_PLAIN)
// The JAX-RS annotation to specify that the last part
// of the URL is used as method parameter
@Path("/{value}")
@Override
public String modify(@PathParam("value") String input) {
return (input != null)
? input.toUpperCase(Locale.getDefault())
: "No input given";
}
}
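Since the remote-service machinery can obscure the plain Java behavior, the body of modify can be checked outside OSGi with a small standalone class (illustrative only, not part of the tutorial projects):

```java
import java.util.Locale;

public class UppercaseCheck {

    // mirrors the body of UppercaseModifier#modify
    static String modify(String input) {
        return (input != null)
                ? input.toUpperCase(Locale.getDefault())
                : "No input given";
    }

    public static void main(String[] args) {
        // the JAX-RS path parameter arrives as a plain method argument
        System.out.println(modify("remoteservice")); // REMOTESERVICE
        System.out.println(modify(null));            // No input given
    }
}
```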
For the JAX-RS annotations, please have a look at the various existing tutorials and blog posts on the internet.
About the OSGi DS configuration:
service.exported.interfaces=* marks all interfaces under which the service is registered for export.
service.exported.intents=jaxrs tells the Topology Manager to distribute the service via a Distribution Provider that supports the jaxrs intent.
Note:
As mentioned earlier there is a bug in ECF 3.14.26, which is integrated in the Eclipse 2021-12 SimRel repo. The service.exported.intents
property is not enough to get the JAX-RS resource registered. Additionally it is necessary to set service.exported.configs=ecf.jaxrs.jersey.server
to make it work. This was fixed shortly after I reported it and is included with the current ECF 3.14.31 release. The basic idea of the intent configuration is to make the service independent of the underlying JAX-RS Distribution Provider implementation (Jersey vs. Apache CXF).
For the JAX-RS Distribution Provider Runtime a lot more dependencies are required. The following list should cover the additional necessary base dependencies:
For the Service Provider we need the following dependencies, which are the JAX-RS Jersey Distribution Provider Server bundles, the Jetty as embedded server and the HTTP Whiteboard:
For the Service Consumer we need the following dependencies, which are the JAX-RS Jersey Distribution Provider Client bundles and the HttpClient to be able to access the JAX-RS resource:
org.osgi.service.http.port=8181
Note:
With the latest version of the JAX-RS Distribution Provider, the .bndrun configuration is much more comfortable than before. There were several improvements to make the definition of a runtime more user friendly, so if you are already familiar with the JAX-RS Distribution Provider and used it in the past, be sure to update it to the latest version to benefit from the latest modifications.
Now you can start the Uppercase JAX-RS Service Runtime from the Overview tab via Launch an Eclipse application. After the runtime is started the service will be available as JAX-RS resource and can be accessed in a browser, e.g. http://localhost:8181/modify/remoteservice
Note:
Unfortunately with the above setup, you will see a 404 instead of the service result. It seems that using Jetty 9 the usage of the base URL is not working for Remote Services. Maybe it is only a configuration issue that I was not able to solve as part of this tutorial. There are two options to handle this issue, either configure additional path segments or use Jetty 10.
Note:
Don’t worry if you see a SelectContainerException
in the console. It is only an information that tells that the service from the first part of the tutorial can not be imported in the runtime of this part of the tutorial and vice versa. The first service is distributed via the Generic Provider, while the second service is distributed by the JAX-RS Provider. But both are using the JmDNS Discovery Provider.
The URL path is defined via the JAX-RS annotations, “modify” via @Path("/modify")
on the class, “remoteservice” is the path parameter defined via @Path("/{value}")
on the method (if you change that value, the result will change accordingly). You can extend the URL via configurations shown below:
Add ecf.jaxrs.server.pathPrefix=<value> as OSGi framework property to the .bndrun file (e.g. ecf.jaxrs.server.pathPrefix=/services)
Add ecf.jaxrs.server.pathPrefix=<value> to the @Component annotation, e.g.
@Component(
immediate = true,
property = {
"service.exported.interfaces=*",
"service.exported.intents=jaxrs",
"ecf.jaxrs.server.pathPrefix=/upper"})
If all of the above configurations are added, the new URL to the service is, e.g. http://localhost:8181/services/upper/modify/remoteservice
Additional information about available component properties can be found here: Jersey Service Properties
With the above setup the bundle org.apache.felix.http.jetty
is integrated in the runtime. That bundle combines the following:
This makes the integration very easy. If you want to update to Jetty 10 the setup is more complicated, as that is not available as combined Felix bundle. In that case you need the following bundles:
First you need to add the necessary artifacts to the workspace:
org.eclipse.platform:org.eclipse.osgi.services:jar:3.10.200
org.eclipse.platform:org.eclipse.equinox.http.jetty:jar:3.8.100
org.eclipse.platform:org.eclipse.equinox.http.servlet:jar:1.7.200
org.eclipse.jetty:jetty-http:jar:10.0.8
org.eclipse.jetty:jetty-io:jar:10.0.8
org.eclipse.jetty:jetty-security:jar:10.0.8
org.eclipse.jetty:jetty-server:jar:10.0.8
org.eclipse.jetty:jetty-servlet:jar:10.0.8
org.eclipse.jetty:jetty-util:jar:10.0.8
org.eclipse.jetty:jetty-util-ajax:jar:10.0.8
jakarta.servlet:jakarta.servlet-api:jar:4.0.4
After that you can create a new Service Provider Runtime project that includes Jetty 10:
org.osgi.service.http.port=8181
launch.activation.eager=true
Note:
The OSGi Framework property launch.activation.eager=true
is necessary because of the activation policy set in the Equinox Jetty Http Service bundle. It is configured to be activated lazy, which means it will only be activated if someone requests something from that bundle. But as Equinox does collect all OSGi service interfaces in org.eclipse.osgi.services, actually nobody ever will request something from that bundle, which leaves it in the STARTING state forever. With launch.activation.eager
property set, the lazy activation will be ignored and all bundles will simply be started. Bug 530076 was created to discuss if the lazy activation could be dropped.
Note:
Unfortunately you can not include the org.apache.felix.webconsole
in a Jetty 10 runtime. The reason is the Servlet API version dependency of webconsole. org.apache.felix.webconsole
requires javax.servlet;version="[2.4,4)"
even in its latest version, while org.eclipse.jetty.servlet
requires javax.servlet;version="[4.0.0,5)"
. So if you want to use the webconsole in your JAX-RS Remote Service, you need to stick with Jetty 9.
Note:
It is currently not possible to use Jetty 11 for OSGi development, as the OSGi implementations are not updated to the jakarta
namespace.
For an overview on the Jetty versions and dependencies, have a look at the Jetty Downloads page.
To consume the Remote Service provided via JAX-RS Distribution Provider, the runtime needs to be extended to include the additional dependencies:
If you now start the Service Consumer Runtime and have the Service Provider Runtime also running, you can execute the following command
modify jax
This will actually lead to an error if you followed my tutorial step by step:
ServiceException: Service exception on remote service proxy
The reason is that the Service Interface does not contain the JAX-RS annotations that the service implementation does, and therefore the mapping is not working. So while the interface does not need to be modified for providing the service, it has to be modified on the consumer side.
Add the jakarta.ws.rs-api dependency to the Service API project.
Modify the StringModifier class and add the JAX-RS annotations to be exactly the same as for the Service Implementation:
package org.fipro.modifier.api;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
@Path("/modify")
public interface StringModifier {
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/{value}")
String modify(@PathParam("value") String input);
}
If you now start the Uppercase Service Provider Runtime and the Service Consumer Runtime again, the error should be gone and you should see the expected result.
After the Service Interface was extended to include the JAX-RS annotations, the first Service Provider Runtime will not resolve anymore because of missing dependencies. To fix this:
Now you can start that Service Provider Runtime again. If the other Service Provider and the Service Consumer are also active, executing the modify command will now output the result of both services.
In the tutorial we used JmDNS/Zeroconf as Discovery Provider. This way there is not much we have to do as a developer or administrator despite adding the according bundle to the runtime. This kind of Discovery is using a broadcast mechanism to announce the service in the network. In cases this doesn’t work, e.g. firewall rules that block broadcasting, it is also possible that you use a static file-based discovery. This can be done using the Endpoint Description Extender Format (EDEF) and is also supported by ECF.
Let’s create an additional service that is distributed via JAX-RS. But this time we exclude the org.eclipse.ecf.provider.jmdns bundle, so there is no additional discovery inside the Service Provider Runtime. We also add the console bundles to be able to inspect the runtime.
Note:
If you don’t want to create another service, you can also modify the previous uppercase service. In that case remove the org.eclipse.ecf.provider.jmdns bundle from the product configuration and ensure that the console bundles are added to be able to inspect the remote service runtime via the OSGi Console.
Add the following CamelCaseModifier snippet to the project:
package org.fipro.modifier.camelcase;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
@Path("/modify")
@Component(
immediate = true,
property = {
"service.exported.interfaces=*",
"service.exported.intents=jaxrs",
"ecf.jaxrs.server.pathPrefix=/camelcase"})
public class CamelCaseModifier implements StringModifier {
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/{value}")
@Override
public String modify(@PathParam("value") String input) {
StringBuilder builder = new StringBuilder();
if (input != null) {
for (int i = 0; i < input.length(); i++) {
char currentChar = input.charAt(i);
if (i % 2 == 0) {
builder.append(Character.toUpperCase(currentChar));
} else {
builder.append(Character.toLowerCase(currentChar));
}
}
}
else {
builder.append("No input given");
}
return builder.toString();
}
}
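The alternating-case loop above can be verified detached from OSGi and JAX-RS with a small standalone class (illustrative only, not part of the tutorial projects):

```java
public class CamelCaseCheck {

    // mirrors the loop in CamelCaseModifier#modify:
    // characters at even positions become upper case,
    // characters at odd positions become lower case
    static String modify(String input) {
        StringBuilder builder = new StringBuilder();
        if (input != null) {
            for (int i = 0; i < input.length(); i++) {
                char c = input.charAt(i);
                builder.append(i % 2 == 0
                        ? Character.toUpperCase(c)
                        : Character.toLowerCase(c));
            }
        } else {
            builder.append("No input given");
        }
        return builder.toString();
    }

    public static void main(String[] args) {
        // same value as in the example URL of the tutorial
        System.out.println(modify("remoteservice")); // ReMoTeSeRvIcE
    }
}
```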
org.osgi.service.http.port=8282
ecf.jaxrs.server.pathPrefix=/services
Once the runtime is started via Run OSGi the service should be available via http://localhost:8282/services/camelcase/modify/remoteservice
You probably noticed a console output on startup that shows the Endpoint Description XML. This is actually what we need for the EDEF file. You can also get the endpoint description at runtime via the ECF Gogo Command listexports <endpoint.id>:
osgi> listexports
endpoint.id |Exporting Container ID |Exported Service Id
5918da3a-a971-429f-9ff6-87abc70d4742 |http://localhost:8282/services/camelcase |38
osgi> listexports 5918da3a-a971-429f-9ff6-87abc70d4742
<endpoint-descriptions xmlns="http://www.osgi.org/xmlns/rsa/v1.0.0">
<endpoint-description>
<property name="ecf.endpoint.id" value-type="String" value="http://localhost:8282/services/camelcase"/>
<property name="ecf.endpoint.id.ns" value-type="String" value="ecf.namespace.jaxrs"/>
<property name="ecf.endpoint.ts" value-type="Long" value="1642667915518"/>
<property name="ecf.jaxrs.server.pathPrefix" value-type="String" value="/camelcase"/>
<property name="ecf.rsvc.id" value-type="Long" value="1"/>
<property name="endpoint.framework.uuid" value-type="String" value="80778aff-63c7-448d-92a5-7902eb6782ae"/>
<property name="endpoint.id" value-type="String" value="5918da3a-a971-429f-9ff6-87abc70d4742"/>
<property name="endpoint.package.version.org.fipro.modifier" value-type="String" value="1.0.0"/>
<property name="endpoint.service.id" value-type="Long" value="38"/>
<property name="objectClass" value-type="String">
<array>
<value>org.fipro.modifier.StringModifier</value>
</array>
</property>
<property name="remote.configs.supported" value-type="String">
<array>
<value>ecf.jaxrs.jersey.server</value>
</array>
</property>
<property name="remote.intents.supported" value-type="String">
<array>
<value>passByValue</value>
<value>exactlyOnce</value>
<value>ordered</value>
<value>osgi.async</value>
<value>osgi.private</value>
<value>osgi.confidential</value>
<value>jaxrs</value>
</array>
</property>
<property name="service.imported" value-type="String" value="true"/>
<property name="service.imported.configs" value-type="String">
<array>
<value>ecf.jaxrs.jersey.server</value>
</array>
</property>
<property name="service.intents" value-type="String">
<array>
<value>jaxrs</value>
</array>
</property>
</endpoint-description>
</endpoint-descriptions>
The endpoint description is needed by the Service Consumer to discover the new service. Without a Discovery that is broadcasting, the service needs to be discovered statically via an EDEF file. As the EDEF file is registered via manifest header, we create a new bundle. You could also place it in an existing bundle like org.fipro.modifier.client, but for some more OSGi dynamics fun, let’s create a new bundle.
-includeresource: edef=edef
Remote-Service: edef/camelcase.xml
If you start the Service Consumer Runtime, the service will directly be available. This is because the new org.fipro.modifier.client.edef bundle is activated automatically by the bnd launcher (a big difference compared to Equinox). Let’s deactivate it via the console. First we need to find the bundle-id via lb and then stop it via stop <bundle-id>. The output should look similar to the following snippet:
g! lb edef
START LEVEL 1
ID|State |Level|Name
50|Active | 1|org.fipro.modifier.client.edef (0.0.0)|0.0.0
g! stop 50
Now the service becomes unavailable via the modify
command. If you start the bundle, the service becomes available again.
The EDEF specification itself would not be sufficient for productive usage. For example, the values of the endpoint description properties need to match on the importer and the exporter side. For the endpoint.id this would be really problematic, as that value is a randomly generated UUID that changes on each runtime start. So if the Service Provider Runtime is restarted, there is a new endpoint.id value. ECF includes a mechanism to support the discovery and the distribution even if the endpoint.id of the importer and the exporter do not match. This actually makes the EDEF file support work in productive environments.
ECF also provides a mechanism to create an endpoint description using a properties file. All the necessary endpoint description properties need to be included as properties with the respective types and values. The following example shows the properties representation for the EDEF XML of the above example. Note that for endpoint.id and endpoint.framework.uuid the type is set to uuid
and the value is 0. This way ECF will generate a random UUID and the matching feature will ensure that the distribution will work even without matching id values.
ecf.endpoint.id=http://localhost:8282/services/camelcase
ecf.endpoint.id.ns=ecf.namespace.jaxrs
ecf.endpoint.ts:Long=1642761763599
ecf.jaxrs.server.pathPrefix=/camelcase
ecf.rsvc.id:Long=1
endpoint.framework.uuid:uuid=0
endpoint.id:uuid=0
endpoint.package.version.org.fipro.modifier.api=1.0.0
endpoint.service.id:Long=38
objectClass:array=org.fipro.modifier.api.StringModifier
remote.configs.supported:array=ecf.jaxrs.jersey.server
remote.intents.supported:array=passByValue,exactlyOnce,ordered,osgi.async,osgi.private,osgi.confidential,jaxrs
service.imported:boolean=true
service.imported.configs:array=ecf.jaxrs.jersey.server
service.intents:array=jaxrs
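To make the key:Type=value convention above more tangible, here is a standalone sketch that converts such lines into typed Java values. This is purely an illustration of the format, not ECF's actual parsing code, and the special handling of the uuid type is omitted:

```java
import java.util.AbstractMap;
import java.util.Map;

public class TypedPropertiesSketch {

    // Parses a single "key[:Type]=value" line of the properties-based
    // endpoint description format into a typed key/value pair.
    static Map.Entry<String, Object> parse(String line) {
        int eq = line.indexOf('=');
        String key = line.substring(0, eq);
        String value = line.substring(eq + 1);
        String type = "String";
        int colon = key.indexOf(':');
        if (colon >= 0) {
            type = key.substring(colon + 1);
            key = key.substring(0, colon);
        }
        Object converted;
        if ("Long".equals(type)) {
            converted = Long.valueOf(value);
        } else if ("boolean".equals(type)) {
            converted = Boolean.valueOf(value);
        } else if ("array".equals(type)) {
            // array values are comma separated
            converted = value.split(",");
        } else {
            // untyped keys default to String; 'uuid' entries would
            // additionally trigger the UUID generation described above
            converted = value;
        }
        return new AbstractMap.SimpleImmutableEntry<>(key, converted);
    }

    public static void main(String[] args) {
        System.out.println(parse("ecf.rsvc.id:Long=1"));
        System.out.println(parse("service.intents:array=jaxrs").getKey());
    }
}
```

Note how array values are comma separated and untyped keys default to String, matching the listing above.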
Properties files can be used to override values in an underlying XML EDEF file, or even as an alternative so that the XML file is not needed anymore. It is even possible to override property values for different environments, which makes it very interesting in a productive environment. So there can be a default Properties file for the basic endpoint description, then an endpoint description per service that derives from the basic settings, and even profile specific settings that change for example the ecf.endpoint.id URLs per profile (DEV/INT/PROD). More details on that topic can be found in the ECF Wiki.
Alternatively you can also trigger a remote service import via EDEF programmatically using classes from the org.osgi.service.remoteserviceadmin
package (see below). This way it is possible to dynamically import and close remote service registrations at runtime (without operating via low level OSGi bundle operations). The following snippet is an example for the programmatic registration of the service above:
// 'admin' is the RemoteServiceAdmin service from the
// org.osgi.service.remoteserviceadmin package
Map<String, Object> properties = new HashMap<>();
properties.put("ecf.endpoint.id", "http://localhost:8282/services/camelcase");
properties.put("ecf.endpoint.id.ns", "ecf.namespace.jaxrs");
properties.put("ecf.endpoint.ts", 1642489801532L);
properties.put("ecf.jaxrs.server.pathPrefix", "/camelcase");
properties.put("ecf.rsvc.id", 1L);
properties.put("endpoint.framework.uuid", "0");
properties.put("endpoint.id", "0");
properties.put("endpoint.package.version.org.fipro.modifier.api", "1.0.0");
properties.put("endpoint.service.id", 38L);
properties.put("objectClass", new String[] { "org.fipro.modifier.api.StringModifier" });
properties.put("remote.configs.supported", new String[] { "ecf.jaxrs.jersey.server" });
properties.put("remote.intents.supported", new String[] { "passByValue", "exactlyOnce", "ordered", "osgi.async", "osgi.private", "osgi.confidential", "jaxrs" });
properties.put("service.imported", "true");
properties.put("service.intents", new String[] { "jaxrs" });
properties.put("service.imported.configs", new String[] { "ecf.jaxrs.jersey.server" });
EndpointDescription desc = new EndpointDescription(properties);
ImportRegistration importRegistration = admin.importService(desc);
The OSGi specification has several chapters and implementations that support a microservice architecture. The Remote Service and Remote Service Admin specifications are among them, and probably the most complicated ones, which was confirmed by several OSGi experts I talked with at conferences. Also the specification itself is not easy to understand, but I hope that this blog post helps to get a better understanding.
While Remote Services are pretty easy to implement, the complicated part is the setup of the runtime by collecting all necessary bundles. While the ECF project provides several examples and also tries to provide support for better bundle resolving, it is still not a trivial task. I hope this tutorial also helps a little with that topic.
Of course at runtime you might face networking issues, as I did in every talk for example. The typical fallacies are even referred to in the Remote Service Specification. With the usage of JAX-RS and HTTP for the distribution of services and EDEF for a static file-based discovery, this might be less problematic. Give them a try if you are running into trouble.
At the end I again want to thank Scott Lewis for his continuous work on ECF and his support whenever I faced issues with my examples and had questions on some details. If you need an extension or if you have other requests regarding ECF or the JAX-RS Distribution Provider, please get in touch with him.
In the last years I published several blog posts and gave several talks related to OSGi, and often the topic OSGi Remote Services was raised, but never really covered in detail. Scott Lewis, the project lead of the Eclipse Communication Framework, was really helpful whenever I encountered issues with Remote Services. I promised to write a blog post about that topic as a favour for all the support. And with this blog post I finally want to keep my promise. That said, let’s start with OSGi Remote Services.
First I want to explain the motivation for having a closer look at OSGi Remote Services. Looking at general software architecture discussions in the past, service oriented architectures and microservices are a huge topic. Per definition the idea of a microservices architecture is to have
While new frameworks and tools came up over the years, the OSGi specifications have covered these topics for a long time. Via the service registry and the service dynamics you can build up very small modules. Those modules can then be integrated into small runtimes and deployed in different environments (apart from the needed JVM or a database, if needed). The services in those small independent deployments can then be accessed in different ways, like using the HTTP Whiteboard or JAX-RS Whiteboard. This satisfies the aspect of communication between services via lightweight mechanisms. For inhomogeneous environments the usage of those specifications is a good match. But it means that you need to implement the access layer on the provider side (e.g. the JAX-RS wrapper to access the service via REST) and you need to implement the service access on the consumer side by using a corresponding framework to execute the REST calls.
Ideally the developer of the service as well as the developer of the service consumer should not need to think about the infrastructure of the whole application. Well, it is always good that everybody in a project knows about everything, but the idea is to not make your code dependent on infrastructure. And this is where OSGi Remote Services come in. You develop the service and the service consumer as if they were executed in the same runtime. In the deployment, the lightweight communication will be added to support service communication over a network.
And as initially mentioned, I want to have a look at ways to possibly get rid of the networking issues I faced in past presentations.
To understand this blog post you should be familiar with OSGi services and ideally with OSGi Declarative Services. If you are not familiar with OSGi DS, you can get an introduction by reading my blog post Getting Started with OSGi Declarative Services.
In short, the OSGi Service Layer specifies a Service Producer that publishes a service, and a Service Consumer that listens and retrieves a service. This is shown in the following picture:
With OSGi Remote Services this picture is basically the same. The difference is that the services are registered and consumed across network boundaries. For OSGi Remote Services the above picture could be extended to look like the following:
To understand the above picture and the following blog post better, here is a short glossary for the used terms:
To get a slightly better understanding, the following picture shows some more details inside the Remote Service Implementation block.
Note:
Actually this picture is still a simplified version, as internally there are Endpoint Event Listener and Remote Service Admin Listener that are needed to trigger all the necessary actions. But to get an idea how things play together this picture should be sufficient.
Now let’s explain the picture in more detail:
To simplify the picture again, the important takeaways are the Distribution Provider and the Discovery. The Distribution Provider is responsible for exporting and importing the service, the Discovery is responsible for announcing and discovering the service. The other terms are needed for a deeper understanding, but for a high level understanding of OSGi Remote Services, these two are sufficient.
Now it is time to get our hands dirty and play with OSGi Remote Services. This tutorial has several steps:
There are different ways and tools available for OSGi development. In this tutorial I will use the OSGi enRoute Maven Archetypes. I also published this tutorial with other toolings if you don’t want to use enRoute:
While the implementation and export of an OSGi service as a Remote Service is trivial in the first place, the definition of the runtime can become quite complicated. Especially collecting the necessary bundles is not that easy without some guidance.
As a reference, with Equinox as underlying OSGi framework the following bundles need to be part of the runtime as a basis:
With the above basic runtime configuration the Remote Services will not yet work. There are still two things missing, the Discovery and the Distribution Provider. ECF provides different implementations for both. Which implementations to use needs to be defined by the project. In this tutorial we will use Zeroconf/JmDNS for the Discovery and the Generic Distribution Provider:
Note:
You can find the list of different implementations with the documentation about the bundles, configuration types and intents in the ECF Wiki:
By using Maven and the OSGi enRoute archetypes you create plain Maven-Java projects. This way you can use any IDE if you are not comfortable with Eclipse and Bndtools. The first step is to create the projects from command line.
Switch to a folder in which you want to create the projects. Create a minimal enRoute OSGi workspace by using the project-bare archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=project-bare \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = enroute
version = 1.0-SNAPSHOT
package = org.fipro.modifier
After setting the correct values and accepting them with ‘y’ a subfolder named enroute is created that contains a basic minimal pom.xml file.
Change into the newly created folder enroute and create the Service API project by using the api archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=api \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = api
version = 1.0-SNAPSHOT
package = org.fipro.modifier.api
After setting the correct values and accepting them with ‘y’ a subfolder named api is created that contains the api project structure, and the api project is added as module to the parent pom.xml file.
Create the Service Implementation project by using the ds-component archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=ds-component \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = inverter
version = 1.0-SNAPSHOT
package = org.fipro.modifier.inverter
After setting the correct values and accepting them with ‘y’ a subfolder named inverter is created that contains the service implementation project structure, and the inverter project is added as module to the parent pom.xml file.
With the OSGi enRoute Archetypes we create a composite application to put the modules together. This is done via the application archetype. Execute the following command in the enroute folder:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=application \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = inverter-app
version = 1.0-SNAPSHOT
package = org.fipro.modifier
impl-artifactId = inverter
impl-groupId = org.fipro.modifier
impl-version = 1.0-SNAPSHOT
target-java-version = 11
If you do not set the target-java-version, the default target-java-version = 8 will be used. After setting the correct values and accepting them with ‘y’ a subfolder named inverter-app is created that contains .bndrun files and preparations for configuring the application.
To be able to test the Remote Service, we directly create the Service Consumer project by again using the ds-component archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=ds-component \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = client
version = 1.0-SNAPSHOT
package = org.fipro.modifier.client
After setting the correct values and accepting them with ‘y’ a subfolder named client is created that contains the service consumer project structure, and the client project is added as module to the parent pom.xml file.
The consumer will be a command line application. Therefore create an application project with the application archetype similar to creating the service application:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=application \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = client-app
version = 1.0-SNAPSHOT
package = org.fipro.modifier
impl-artifactId = client
impl-groupId = org.fipro.modifier
impl-version = 1.0-SNAPSHOT
target-java-version = 11
If you do not set the target-java-version, the default target-java-version = 8 will be used. After setting the correct values and accepting them with ‘y’ a subfolder named client-app is created that contains .bndrun files and preparations for configuring the application.
Now the projects can be imported into the IDE of your choice. As the projects are plain Maven based Java projects, you can use any IDE. But still my choice is Eclipse with Bndtools.
Unfortunately the archetypes are some years old and have not been updated since. Using the enRoute OSGi Maven Archetypes you get project skeletons that are configured for Java 8, Bndtools 4.1.0 and OSGi R7. For this tutorial it is sufficient to use OSGi R7, but let’s update to Java 11 and the current Bndtools 6.2.0.
Note:
On Windows there is some formatting issue when using the archetypes. For every additional module you create, an empty line with some spaces is added between the content lines. If you followed the tutorial and created 5 modules, you will see 5 empty lines between every content line. To clean this up and make the enroute/pom.xml file readable again, you can do a search and replace via regular expression in an editor of your choice. Use the following regex and replace it with nothing
^(?:[\t ]*(?:\r?\n|\r))+
In the Find/Replace dialog of your editor, enable the regular expressions option and replace matches of this pattern with an empty string.
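The same cleanup can also be scripted. The following plain-Java sketch (the class name PomCleanup is ours, for illustration only) applies the regex above with MULTILINE semantics, which is what the editor's Find/Replace does:

```java
import java.util.regex.Pattern;

// Sketch: remove the empty lines the archetype inserts on Windows,
// using the same regex as in the Find/Replace dialog.
public class PomCleanup {

    // MULTILINE makes ^ match at the start of every line
    static final Pattern EMPTY_LINES =
            Pattern.compile("^(?:[\\t ]*(?:\\r?\\n|\\r))+", Pattern.MULTILINE);

    public static String clean(String pomContent) {
        return EMPTY_LINES.matcher(pomContent).replaceAll("");
    }

    public static void main(String[] args) {
        String garbled = "<modules>\n\n   \n  <module>api</module>\n\n  <module>inverter</module>\n</modules>\n";
        System.out.println(clean(garbled));
    }
}
```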
Open the enroute/pom.xml file and apply the following updates:
- In the properties section, update bnd.version from 4.1.0 to 6.2.0 and set maven.compiler.source and maven.compiler.target to 11.
- In the dependencyManagement section, add the ECF dependencies:
<!-- ECF dependencies -->
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.core.jobs</artifactId>
<version>3.12.0</version>
</dependency>
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.equinox.concurrent</artifactId>
<version>1.2.100</version>
</dependency>
<!-- ECF -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf</artifactId>
<version>3.10.0</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.discovery</artifactId>
<version>5.1.1</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.identity</artifactId>
<version>3.9.402</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.distribution</artifactId>
<version>2.1.600</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin</artifactId>
<version>4.9.3</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy</artifactId>
<version>1.0.101</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.remoteservice.asyncproxy</artifactId>
<version>2.1.200</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.remoteservice</artifactId>
<version>8.14.0</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.sharedobject</artifactId>
<version>2.6.200</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.osgi.services.remoteserviceadmin</artifactId>
<version>1.6.300</version>
</dependency>
<!-- ECF Discovery - Zeroconf -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jmdns</artifactId>
<version>4.3.301</version>
</dependency>
<!-- ECF Distribution Provider - Generic -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider</artifactId>
<version>4.9.1</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.remoteservice</artifactId>
<version>4.6.1</version>
</dependency>
Note:
Unfortunately the ECF project does not have the dependencies configured in its pom.xml files, so the automated resolving of transitive dependencies in Maven does not work. The reason is obviously the usage of Tycho and the resolving of dependencies based on the MANIFEST file. While the MANIFEST-first approach is nice at development time, it makes you a bad Maven citizen by default. If a project also wants to be a good Maven citizen, it has to maintain the dependencies twice: in the MANIFEST for PDE based development and Tycho builds, and in the dependencies section of the pom.xml file, which is not actually used in the build and creates warnings in the Tycho build.
For this example simply use the snippet above, which should help in managing the dependencies. But keep in mind that by the time the versions might have increased and need to be updated.
In the pluginManagement section, the maven-compiler-plugin needs to be configured like below:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.1</version>
<configuration>
<release>11</release>
</configuration>
</plugin>
Add the api project to the dependencies of the inverter project by adding the following to the dependencies section of inverter/pom.xml:
<dependency>
<groupId>org.fipro.modifier</groupId>
<artifactId>api</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
Add the api project to the dependencies of the client project by adding the following to the dependencies section of client/pom.xml:
<dependency>
<groupId>org.fipro.modifier</groupId>
<artifactId>api</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
Modify the api project:
- Delete the ConsumerInterface and the ProviderInterface created by the archetype.
- Create the interface StringModifier in the api project:

package org.fipro.modifier.api;
public interface StringModifier {
String modify(String input);
}
Modify the inverter project:
- Delete the ComponentImpl class.
- Create the class StringInverter in the inverter project:

package org.fipro.modifier.inverter;
import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
@Component(property= {
"service.exported.interfaces=*",
"service.exported.configs=ecf.generic.server" }
)
public class StringInverter implements StringModifier {
@Override
public String modify(String input) {
return (input != null)
? new StringBuilder(input).reverse().toString()
: "No input given";
}
}
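Before looking at the Remote Service configuration, note that the service logic itself is plain Java and can be exercised without any OSGi runtime. This standalone sketch (the class name InverterLogic is ours, not part of the tutorial projects) mirrors the modify method:

```java
// Standalone sketch of the StringInverter logic; InverterLogic is an
// illustrative class name, not part of the tutorial projects.
public class InverterLogic {

    // Mirrors StringInverter#modify: reverse the input, guard against null
    public static String modify(String input) {
        return (input != null)
                ? new StringBuilder(input).reverse().toString()
                : "No input given";
    }

    public static void main(String[] args) {
        System.out.println(modify("fipro")); // orpif
        System.out.println(modify(null));    // No input given
    }
}
```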
The only thing that needs to be done additionally in comparison to creating a local OSGi service, is to configure that the service should be exported as Remote Service. This is done by setting the component property service.exported.interfaces
. The value of this property needs to be a list of types for which the class is registered as a service. For a simple use case like the above, the asterisk can be used, which means to export the service for all interfaces under which it is registered, but to ignore the classes. For more detailed information have a look at the Remote Service Properties section of the OSGi Compendium Specification.
The other component property used in the above example is service.exported.configs
. This property is used to specify the configuration types, for which the Distribution Provider should create Endpoints. If it is not specified, the Distribution Provider is free to choose the default configuration type for the service.
Note:
In the above example we use the ECF Generic Provider. By default it chooses an SSL configuration type, so without explicitly specifying the configuration type the example would not work.
Additionally you can specify Intents via the service.exported.intents
component property to constrain the possible communication mechanisms that a distribution provider can choose to distribute a service. An example will be provided at a later step.
The implementation of a Remote Service Consumer is also quite simple. From the development perspective there is nothing special to consider. The service consumer is implemented without any additions. Only the runtime needs to be extended to contain the necessary bundles for Discovery and Distribution.
The simplest way of implementing a service consumer is a Gogo Shell command.
Modify the client project:
- Delete the ComponentImpl class.
- Create the class ModifyCommand in the client project:

package org.fipro.modifier.client;
import java.util.List;
import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
@Component(
property= {
"osgi.command.scope:String=fipro",
"osgi.command.function:String=modify"},
service=ModifyCommand.class
)
public class ModifyCommand {
@Reference
volatile List<StringModifier> modifier;
public void modify(String input) {
if (modifier.isEmpty()) {
System.out.println("No StringModifier registered");
} else {
modifier.forEach(m -> System.out.println(m.modify(input)));
}
}
}
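Because the @Reference is declared on a List, the command prints one result per bound StringModifier, and the list grows and shrinks dynamically as services come and go. The dispatch logic, reduced to plain Java (all names here are illustrative, UnaryOperator stands in for the StringModifier service interface):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Plain-Java sketch of the ModifyCommand dispatch over all bound services.
public class CommandDispatchSketch {

    // Stand-in for the dynamically bound List<StringModifier>
    static List<UnaryOperator<String>> modifier = new ArrayList<>();

    static List<String> modify(String input) {
        List<String> results = new ArrayList<>();
        if (modifier.isEmpty()) {
            results.add("No StringModifier registered");
        } else {
            // one result per currently bound service
            modifier.forEach(m -> results.add(m.apply(input)));
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(modify("osgi")); // no services bound yet
        modifier.add(s -> new StringBuilder(s).reverse().toString());
        modifier.add(s -> s.toUpperCase());
        System.out.println(modify("osgi")); // one entry per bound service
    }
}
```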
Now the ECF bundles need to be added to the dependencies section of the inverter-app/pom.xml. You can find the ECF bundles on Maven Central.
After the Maven Dependencies are updated, the .bndrun configuration can be updated to include the necessary bundles:
Add the dependencies to the ECF bundles as shown below (the versions are already configured in the parent pom.xml dependencyManagement section):
<!-- ECF dependencies -->
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.core.jobs</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.equinox.concurrent</artifactId>
</dependency>
<!-- ECF -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.discovery</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.identity</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.distribution</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.remoteservice.asyncproxy</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.remoteservice</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.sharedobject</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<!-- ECF Discovery - Zeroconf -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jmdns</artifactId>
</dependency>
<!-- ECF Distribution Provider - Generic -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.remoteservice</artifactId>
</dependency>
Now you can start the inverter-app via the Run OSGi button in the upper right corner of the editor. As there is nothing included in the runtime that would show up somewhere, you won’t see anything now.
The client app is a simple command line application that uses the Gogo Shell. To get the Gogo Shell up and running some additional steps need to be performed in the client-app. By default the Gogo Shell bundles are included in the project setup for the test
scope and for debugging. To make them available in the compile
scope:
Add the ECF bundles to the dependencies section (the versions are already configured in the parent pom.xml dependencyManagement section):
<!-- ECF dependencies -->
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.core.jobs</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.equinox.concurrent</artifactId>
</dependency>
<!-- ECF -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.discovery</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.identity</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.distribution</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.remoteservice.asyncproxy</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.remoteservice</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.sharedobject</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<!-- ECF Discovery - Zeroconf -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jmdns</artifactId>
</dependency>
<!-- ECF Distribution Provider - Generic -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.remoteservice</artifactId>
</dependency>
Add the Gogo Shell bundles to the dependencies section (actually copied from org.osgi.enroute:debug-bundles, so the versions are probably outdated, but sufficient for this example):
<!-- The Gogo Shell -->
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.gogo.shell</artifactId>
<version>1.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.gogo.runtime</artifactId>
<version>1.0.10</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.gogo.command</artifactId>
<version>1.0.2</version>
<exclusions>
<exclusion>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.core</artifactId>
</exclusion>
<exclusion>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.compendium</artifactId>
</exclusion>
</exclusions>
</dependency>
Add the following snippet to the client-app .bndrun file to enable the Gogo Shell in the runtime:
-runproperties: \
osgi.console=,\
osgi.console.enable.builtin=false
If you now click on Run OSGi on the Run tab of the editor, the Gogo Shell becomes available in the Console view of the IDE. Once the application is started you can execute the created Gogo Shell command via
modify <input>
If services are available, it will print out the modified results. Otherwise the message “No StringModifier registered” will be printed.
There are several events with regard to importing and exporting Remote Services that are fired by the Remote Service Admin synchronously once they happen. These events are posted asynchronously via the OSGi Event Admin under the topic
org/osgi/service/remoteserviceadmin/<type>
Where <type>
can be one of the event type names defined by the Remote Service Admin specification: IMPORT_REGISTRATION, EXPORT_REGISTRATION, IMPORT_UPDATE, EXPORT_UPDATE, IMPORT_UNREGISTRATION, EXPORT_UNREGISTRATION, IMPORT_ERROR, EXPORT_ERROR, IMPORT_WARNING, EXPORT_WARNING.
A simple event listener that prints to the console on any Remote Service Admin Event could look like this:
import org.osgi.service.component.annotations.Component;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;

@Component(property = EventConstants.EVENT_TOPIC + "=org/osgi/service/remoteserviceadmin/*")
public class RemoteServiceEventListener implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println(event.getTopic());
        for (String objectClass : ((String[]) event.getProperty("objectClass"))) {
            System.out.println("\t" + objectClass);
        }
    }
}
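The EVENT_TOPIC property of the listener ends with a wildcard, so the handler receives every topic below the org/osgi/service/remoteserviceadmin/ prefix. A simplified plain-Java approximation of that matching rule (a sketch, not the actual Event Admin implementation):

```java
// Simplified sketch of Event Admin's wildcard topic matching:
// a subscription ending in "/*" matches every topic below that prefix.
public class TopicMatchSketch {

    public static boolean matches(String subscription, String topic) {
        if (subscription.endsWith("/*")) {
            // keep the trailing '/' so "foo/*" does not match "foobar"
            String prefix = subscription.substring(0, subscription.length() - 1);
            return topic.startsWith(prefix);
        }
        return subscription.equals(topic);
    }

    public static void main(String[] args) {
        String sub = "org/osgi/service/remoteserviceadmin/*";
        System.out.println(matches(sub, "org/osgi/service/remoteserviceadmin/EXPORT_REGISTRATION"));
        System.out.println(matches(sub, "org/osgi/framework/ServiceEvent/MODIFIED"));
    }
}
```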
For further details on the Remote Service Admin Events have a look at the OSGi Compendium Specification Chapter 122.7.
If you need to react synchronously on these events, you can implement a RemoteServiceAdminListener
. I typically would not recommend this, unless you really want blocking calls on import/export events. Typically it is intended to be used internally by the Remote Service Admin. But for debugging purposes the ECF project also provides a DebugRemoteServiceAdminListener
. It writes the endpoint description via a Writer to support debugging of Remote Services. Via the following class you could easily register a DebugRemoteServiceAdminListener
via OSGi DS that prints the information on the console.
@Component
public class DebugListener
extends DebugRemoteServiceAdminListener
implements RemoteServiceAdminListener {
// register the DebugRemoteServiceAdminListener via DS
}
To test this you can either add the above components to one of the existing bundles, or create a new bundle and add that bundle to the runtimes.
The ECF project provides several ways for runtime inspection and runtime debugging. This is mainly done via Gogo Shell commands provided via separate bundles. To enable the OSGi console and the ECF console commands, you need to add the ECF Console bundles listed in the dependency snippet below to your runtime.
With the ECF Console bundles added to the runtime, there are several commands to inspect and interact with the Remote Service Admin. As an overview the available commands are listed in the wiki: Gogo Commands for Remote Services Development
Additionally the DebugRemoteServiceAdminListener
described above is activated by default with the ECF Console bundles. It can be activated or deactivated in the runtime via the command
ecf:rsadebug <true/false>
To add the ECF Console bundles to the project, add the following snippet to the dependencyManagement
section of the enroute/pom.xml file:
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.console</artifactId>
<version>1.3.100</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.console</artifactId>
<version>1.3.0</version>
</dependency>
One of the biggest issues I faced when working with Remote Services is networking, as mentioned in the introduction. In the above example the ECF Generic Distribution Provider is used for a simpler setup. But in a corporate network with firewalls enabled somewhere in the network setup, the example will probably not work. As said before, the ECF project provides multiple Distribution Provider implementations, which gives the opportunity to configure the setup to match the project needs. One interesting implementation in that area is the JAX-RS Distribution Provider. Using it could help solve several of the networking issues related to firewalls. But as with the whole Remote Service topic, the complexity of the setup is quite high because of the increased number of dependencies that need to be resolved.
The JAX-RS Distribution Provider implementation is available for Eclipse Jersey and Apache CXF. It uses the OSGi HttpService to register the JAX-RS resource, and of course it then also needs a Servlet container like Eclipse Jetty to provide the JAX-RS resource. I will show the usage of the Jersey based implementation in the following sections.
Unfortunately the JAX-RS Distribution Provider is not available via Maven Central. The only way to get the project setup done is to install the artifacts in the local repository. This can be done by installing the artifacts locally via mvn clean install
. Alternatively you can use the maven-install-plugin, which can even be integrated into your Maven build if you add the artifact to install to the source code repository. For this tutorial we use the manual installation of artifacts, as it is the easier approach for now.
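As a sketch of the build-integrated alternative, the maven-install-plugin's install-file goal can be bound to an early lifecycle phase so that checked-in jars are installed automatically. The libs/ path and the chosen phase are assumptions for illustration; the version numbers must match the artifacts you downloaded:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-install-plugin</artifactId>
  <version>2.5.2</version>
  <executions>
    <execution>
      <id>install-ecf-jaxrs</id>
      <!-- run before compilation so the artifact is resolvable -->
      <phase>initialize</phase>
      <goals>
        <goal>install-file</goal>
      </goals>
      <configuration>
        <!-- hypothetical path to the downloaded artifact in the repository -->
        <file>${project.basedir}/libs/org.eclipse.ecf.provider.jaxrs_1.7.1.202202112253.jar</file>
        <groupId>org.eclipse.ecf</groupId>
        <artifactId>org.eclipse.ecf.provider.jaxrs</artifactId>
        <version>1.7.1</version>
        <packaging>jar</packaging>
      </configuration>
    </execution>
  </executions>
</plugin>
```

One execution block per artifact would be needed, mirroring the manual install commands below.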
Note:
The artifact versions in the below snippets rely on the JAX-RS Distribution Provider 1.14.6 which was the most current version at the time this tutorial was written. If there is a newer version available in the meantime you need to update the snippets.
mvn install:install-file \
-Dfile=org.eclipse.ecf.provider.jaxrs_1.7.1.202202112253.jar \
-DgroupId=org.eclipse.ecf \
-DartifactId=org.eclipse.ecf.provider.jaxrs \
-Dversion=1.7.1 \
-Dpackaging=jar
mvn install:install-file \
-Dfile=org.eclipse.ecf.provider.jaxrs.server_1.11.1.202202112253.jar \
-DgroupId=org.eclipse.ecf \
-DartifactId=org.eclipse.ecf.provider.jaxrs.server \
-Dversion=1.11.1 \
-Dpackaging=jar
mvn install:install-file \
-Dfile=org.eclipse.ecf.provider.jersey.server_1.11.1.202202112253.jar \
-DgroupId=org.eclipse.ecf \
-DartifactId=org.eclipse.ecf.provider.jersey.server \
-Dversion=1.11.1 \
-Dpackaging=jar
mvn install:install-file \
-Dfile=org.eclipse.ecf.provider.jaxrs.client_1.8.1.202202112253.jar \
-DgroupId=org.eclipse.ecf \
-DartifactId=org.eclipse.ecf.provider.jaxrs.client \
-Dversion=1.8.1 \
-Dpackaging=jar
mvn install:install-file \
-Dfile=org.eclipse.ecf.provider.jersey.client_1.8.2.202202112253.jar \
-DgroupId=org.eclipse.ecf \
-DartifactId=org.eclipse.ecf.provider.jersey.client \
-Dversion=1.8.2 \
-Dpackaging=jar
Add the following snippet to the dependencyManagement section of the enroute/pom.xml file:
<!-- ECF JAX-RS Distribution Provider -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jaxrs</artifactId>
<version>1.7.1</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jaxrs.server</artifactId>
<version>1.11.1</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jersey.server</artifactId>
<version>1.11.1</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jaxrs.client</artifactId>
<version>1.8.1</version>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jersey.client</artifactId>
<version>1.8.2</version>
</dependency>
<!-- ECF JAX-RS Distribution Provider Dependencies -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.10.1</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.jaxrs</groupId>
<artifactId>jackson-jaxrs-json-provider</artifactId>
<version>2.10.1</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.containers</groupId>
<artifactId>jersey-container-servlet</artifactId>
<version>2.30.1</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.containers</groupId>
<artifactId>jersey-container-servlet-core</artifactId>
<version>2.30.1</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.core</groupId>
<artifactId>jersey-client</artifactId>
<version>2.30.1</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.media</groupId>
<artifactId>jersey-media-json-jackson</artifactId>
<version>2.30.1</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.inject</groupId>
<artifactId>jersey-hk2</artifactId>
<version>2.30.1</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.http.jetty</artifactId>
<version>4.1.14</version>
</dependency>
Note:
I have chosen the same versions for the dependencies as the JAX-RS Distribution Provider has. There are already newer versions available, so you can check if newer versions would work. Also note that the above snippet is the minimal necessary configuration. All other dependencies are resolved transitively. I have chosen this approach to minimize the snippet.
The implementation of the service already looks different compared to what you have seen so far. Instead of only adding the necessary Component Properties to configure the service as a Remote Service, the service implementation does directly contain the JAX-RS annotations. That of course also means that the annotations need to be available.
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=ds-component \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = uppercase
version = 1.0-SNAPSHOT
package = org.fipro.modifier.uppercase
After accepting the inserted values with ‘y’ a subfolder named uppercase
is created that contains the service implementation project structure and the uppercase
project is added as module to the parent pom.xml file.
Add the api project and jakarta.ws.rs-api to the dependencies of the uppercase project by adding the following to the dependencies section of uppercase/pom.xml:
<dependency>
<groupId>org.fipro.modifier</groupId>
<artifactId>api</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>jakarta.ws.rs</groupId>
<artifactId>jakarta.ws.rs-api</artifactId>
<version>2.1.6</version>
</dependency>
Modify the uppercase project:
- Delete the ComponentImpl class.
- Create the class UppercaseModifier in the uppercase project:

package org.fipro.modifier.uppercase;
import java.util.Locale;
import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
//The JAX-RS path annotation for this service
@Path("/modify")
//The OSGi DS component annotation
@Component(
immediate = true,
property = {
"service.exported.interfaces=*",
"service.exported.intents=jaxrs"})
public class UppercaseModifier implements StringModifier {
@GET
// The JAX-RS annotation to specify the result type
@Produces(MediaType.TEXT_PLAIN)
// The JAX-RS annotation to specify that the last part
// of the URL is used as method parameter
@Path("/{value}")
@Override
public String modify(@PathParam("value") String input) {
return (input != null)
? input.toUpperCase(Locale.getDefault())
: "No input given";
}
}
For the JAX-RS annotations, please have a look at the various existing tutorials and blog posts available on the internet.
About the OSGi DS configuration:
- service.exported.interfaces=* marks the service for export under all interfaces it is registered with.
- service.exported.intents=jaxrs constrains the distribution to a provider that supports the jaxrs intent, i.e. the JAX-RS Distribution Provider.
Note:
As mentioned earlier there is a bug in ECF 3.14.26, which is integrated in the Eclipse 2021-12 SimRel repo. The service.exported.intents
property is not enough to get the JAX-RS resource registered. Additionally it is necessary to set service.exported.configs=ecf.jaxrs.jersey.server
to make it work. This was fixed shortly after I reported it and is included with the current ECF 3.14.31 release. The basic idea of the intent configuration is to make the service independent of the underlying JAX-RS Distribution Provider implementation (Jersey vs. Apache CXF).
For the JAX-RS Distribution Provider runtime a lot more dependencies are required than for the Generic Provider. For the Service Provider these are the JAX-RS Jersey Distribution Provider Server bundles, Jetty as embedded server and the HTTP Whiteboard implementation. For the Service Consumer these are the JAX-RS Jersey Distribution Provider Client bundles and the HttpClient to be able to access the JAX-RS resource.
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=application \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = uppercase-app
version = 1.0-SNAPSHOT
package = org.fipro.modifier
impl-artifactId = uppercase
impl-groupId = org.fipro.modifier
impl-version = 1.0-SNAPSHOT
target-java-version = 11

First you need to decline the properties configuration, as by default target-java-version = 8 will be used. After setting the correct values and accepting them with ‘y’ a subfolder named uppercase-app is created that contains .bndrun files and preparations for configuring the application.

Add the dependencies to the ECF bundles as shown below (the versions are already configured in the parent pom.xml dependencyManagement section):
<!-- ECF dependencies -->
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.core.jobs</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.equinox.concurrent</artifactId>
</dependency>
<!-- ECF -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.discovery</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.identity</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.distribution</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.remoteservice.asyncproxy</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.remoteservice</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.sharedobject</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<!-- ECF Discovery - Zeroconf -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jmdns</artifactId>
</dependency>
<!-- ECF JAX-RS Distribution Provider -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jaxrs</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jaxrs.server</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jersey.server</artifactId>
</dependency>
<!-- ECF JAX-RS Distribution Provider Dependencies -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.jaxrs</groupId>
<artifactId>jackson-jaxrs-json-provider</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.containers</groupId>
<artifactId>jersey-container-servlet</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.containers</groupId>
<artifactId>jersey-container-servlet-core</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.core</groupId>
<artifactId>jersey-client</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.media</groupId>
<artifactId>jersey-media-json-jackson</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.inject</groupId>
<artifactId>jersey-hk2</artifactId>
</dependency>
Open the uppercase-app/uppercase-app.bndrun file and add the following run property to configure the port of the embedded Jetty server:
org.osgi.service.http.port=8181
Note:
With the latest version of the JAX-RS Distribution Provider, the .bndrun configuration is much more comfortable than before. There were several improvements to make the definition of a runtime more user friendly, so if you are already familiar with the JAX-RS Distribution Provider and used it in the past, be sure to update it to the latest version to benefit from the latest modifications.
Now you can start the Uppercase JAX-RS Service Runtime via the Run OSGi button in the .bndrun editor. After the runtime is started the service will be available as a JAX-RS resource and can be accessed in a browser, e.g. http://localhost:8181/modify/remoteservice
Note:
Unfortunately with the above setup, you will see a 404 instead of the service result. It seems that with Jetty 9 the base URL does not work for Remote Services. Maybe it is only a configuration issue that I was not able to solve as part of this tutorial. There are two options to handle this issue: either configure additional path segments or use Jetty 10.
Note:
Don’t worry if you see a SelectContainerException in the console. It is only an information that tells that the service from the first part of the tutorial cannot be imported in the runtime of this part of the tutorial, and vice versa. The first service is distributed via the Generic Provider, while the second service is distributed by the JAX-RS Provider. But both are using the JmDNS Discovery Provider.
The URL path is defined via the JAX-RS annotations, “modify” via @Path("/modify")
on the class, “remoteservice” is the path parameter defined via @Path("/{value}")
on the method (if you change that value, the result will change accordingly). You can extend the URL via configurations shown below:
Add the runtime property ecf.jaxrs.server.pathPrefix=<value> to the .bndrun file (e.g. ecf.jaxrs.server.pathPrefix=/services).
Add the component property ecf.jaxrs.server.pathPrefix=<value> via the @Component annotation, e.g.:
@Component(
immediate = true,
property = {
"service.exported.interfaces=*",
"service.exported.configs=ecf.jaxrs.jersey.server",
"service.exported.intents=jaxrs",
"ecf.jaxrs.server.pathPrefix=/upper"})
If all of the above configurations are added, the new URL to the service is, e.g. http://localhost:8181/services/upper/modify/remoteservice
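The final URL is simply the concatenation of the two pathPrefix configurations and the JAX-RS annotation paths. The following standalone sketch (a hypothetical helper, not part of the tutorial projects; the values match the tutorial setup) illustrates the composition:

```java
public class ServiceUrlDemo {

    public static void main(String[] args) {
        String host = "http://localhost:8181";
        String runtimePrefix = "/services"; // ecf.jaxrs.server.pathPrefix in the .bndrun
        String componentPrefix = "/upper";  // ecf.jaxrs.server.pathPrefix via @Component
        String classPath = "/modify";       // @Path("/modify") on the class
        String pathParam = "remoteservice"; // @Path("/{value}") on the method

        // prints the URL under which the remote service resource is reachable
        System.out.println(host + runtimePrefix + componentPrefix + classPath + "/" + pathParam);
    }
}
```

If you change any of the prefixes or the path parameter, the resulting URL changes accordingly.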
Additional information about available component properties can be found here: Jersey Service Properties
With the above setup the bundle org.apache.felix.http.jetty
is integrated in the runtime. That bundle combines the following:
This makes the integration very easy. If you want to update to Jetty 10 the setup is more complicated, as that is not available as combined Felix bundle. In that case you need the following bundles:
First you create a new Service Provider Runtime project that includes Jetty 10:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=application \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = uppercase-app-jetty10
version = 1.0-SNAPSHOT
package = org.fipro.modifier
impl-artifactId = uppercase
impl-groupId = org.fipro.modifier
impl-version = 1.0-SNAPSHOT
target-java-version = 11
First you need to decline the proposed property configuration, as by default target-java-version = 8 will be used. After setting the correct values and accepting them with ‘y’ a subfolder named uppercase-app-jetty10 is created that contains .bndrun files and preparations for configuring the application.
Update the dependencies section to include ECF, Jetty 10 and the Equinox Http bundles as shown below (of course you can also configure the versions for Jetty 10 etc. in the enroute/pom.xml dependencyManagement section as described before):
<!-- ECF dependencies -->
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.core.jobs</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.equinox.concurrent</artifactId>
</dependency>
<!-- ECF -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.discovery</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.identity</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.distribution</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.remoteservice.asyncproxy</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.remoteservice</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.sharedobject</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<!-- ECF Discovery - Zeroconf -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jmdns</artifactId>
</dependency>
<!-- ECF JAX-RS Distribution Provider -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jaxrs</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jaxrs.server</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jersey.server</artifactId>
</dependency>
<!-- ECF JAX-RS Distribution Provider Dependencies -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.jaxrs</groupId>
<artifactId>jackson-jaxrs-json-provider</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.containers</groupId>
<artifactId>jersey-container-servlet</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.containers</groupId>
<artifactId>jersey-container-servlet-core</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.core</groupId>
<artifactId>jersey-client</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.media</groupId>
<artifactId>jersey-media-json-jackson</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.inject</groupId>
<artifactId>jersey-hk2</artifactId>
</dependency>
<!-- Equinox OSGi Http Service and Http Whiteboard -->
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.osgi.services</artifactId>
<version>3.10.200</version>
</dependency>
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.equinox.http.jetty</artifactId>
<version>3.8.100</version>
</dependency>
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.equinox.http.servlet</artifactId>
<version>1.7.200</version>
</dependency>
<!-- Jetty 10 -->
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-http</artifactId>
<version>10.0.8</version>
</dependency>
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-io</artifactId>
<version>10.0.8</version>
</dependency>
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-security</artifactId>
<version>10.0.8</version>
</dependency>
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-server</artifactId>
<version>10.0.8</version>
</dependency>
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-servlet</artifactId>
<version>10.0.8</version>
</dependency>
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-util</artifactId>
<version>10.0.8</version>
</dependency>
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-util-ajax</artifactId>
<version>10.0.8</version>
</dependency>
<!-- Jetty 10 Dependencies -->
<dependency>
<groupId>jakarta.servlet</groupId>
<artifactId>jakarta.servlet-api</artifactId>
<version>4.0.4</version>
</dependency>
<dependency>
<groupId>jakarta.xml.bind</groupId>
<artifactId>jakarta.xml.bind-api</artifactId>
<version>2.3.3</version>
</dependency>
<!-- Gogo Shell & ECF Console - optionally -->
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.gogo.shell</artifactId>
<version>1.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.gogo.runtime</artifactId>
<version>1.0.10</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.gogo.command</artifactId>
<version>1.0.2</version>
<exclusions>
<exclusion>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.core</artifactId>
</exclusion>
<exclusion>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.compendium</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.console</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.console</artifactId>
</dependency>
org.osgi.service.http.port=8181
launch.activation.eager=true
osgi.console=
osgi.console.enable.builtin=false
Note:
The OSGi Framework property launch.activation.eager=true is necessary because of the activation policy set in the Equinox Jetty Http Service bundle. It is configured to be activated lazily, which means it will only be activated if someone requests something from that bundle. But as Equinox collects all OSGi service interfaces in org.eclipse.osgi.services, nobody will ever actually request something from that bundle, which leaves it in the STARTING state forever. With the launch.activation.eager property the lazy activation is ignored and all bundles are simply started. Bug 530076 was created to discuss if the lazy activation could be dropped.
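For reference, in a bnd launch descriptor these framework properties typically live in the -runproperties instruction of the .bndrun file. The following sketch shows how the four properties above could look there (assuming bnd's comma-separated -runproperties syntax; adapt it to your generated .bndrun):

```
-runproperties: \
	org.osgi.service.http.port=8181,\
	launch.activation.eager=true,\
	osgi.console=,\
	osgi.console.enable.builtin=false
```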
Note:
Unfortunately you cannot include the org.apache.felix.webconsole in a Jetty 10 runtime. The reason is the Servlet API version dependency of the webconsole: org.apache.felix.webconsole requires javax.servlet;version="[2.4,4)" even in its latest version, while org.eclipse.jetty.servlet requires javax.servlet;version="[4.0.0,5)". So if you want to use the webconsole in your JAX-RS Remote Service, you need to stick with Jetty 9.
Note:
It is currently not possible to use Jetty 11 for OSGi development, as the OSGi implementations are not updated to the jakarta
namespace.
For an overview on the Jetty versions and dependencies, have a look at the Jetty Downloads page.
To consume the Remote Service provided via JAX-RS Distribution Provider, the runtime needs to be extended to include the additional dependencies:
Add the following to the dependencies section:
<!-- ECF JAX-RS Distribution Provider -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jaxrs</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jaxrs.client</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jersey.client</artifactId>
</dependency>
<!-- ECF JAX-RS Distribution Provider Dependencies -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.jaxrs</groupId>
<artifactId>jackson-jaxrs-json-provider</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.containers</groupId>
<artifactId>jersey-container-servlet</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.containers</groupId>
<artifactId>jersey-container-servlet-core</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.core</groupId>
<artifactId>jersey-client</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.media</groupId>
<artifactId>jersey-media-json-jackson</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.inject</groupId>
<artifactId>jersey-hk2</artifactId>
</dependency>
<!-- ECF Console -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.console</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.console</artifactId>
</dependency>
If you now start the Service Consumer Runtime and have the Service Provider Runtime also running, you can execute the following command
modify jax
This will actually lead to an error if you followed my tutorial step by step:
ServiceException: Service exception on remote service proxy
The reason is that the Service Interface does not contain the JAX-RS annotations that the service implementation has, and therefore the mapping is not working. So while for providing the service the interface does not need to be modified, it has to be for the consumer side.
Add the following to the dependencies section:
<dependency>
<groupId>jakarta.ws.rs</groupId>
<artifactId>jakarta.ws.rs-api</artifactId>
<version>2.1.6</version>
</dependency>
Open the StringModifier class and add the JAX-RS annotations to be exactly the same as for the Service Implementation:
package org.fipro.modifier.api;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
@Path("/modify")
public interface StringModifier {
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/{value}")
String modify(@PathParam("value") String input);
}
If you now start the Uppercase Service Provider Runtime and the Service Consumer Runtime again, the error should be gone and you should see the expected result.
After the Service Interface was extended to include the JAX-RS annotations, the first Service Provider Runtime will not resolve anymore because of missing dependencies. To fix this:
Now you can start that Service Provider Runtime again. If the other Service Provider and the Service Consumer are also active, executing the modify command will now output the result of both services.
In the tutorial we used JmDNS/Zeroconf as Discovery Provider. This way there is not much we have to do as a developer or administrator apart from adding the according bundle to the runtime. This kind of Discovery uses a broadcast mechanism to announce the service in the network. In cases where this doesn’t work, e.g. firewall rules that block broadcasting, you can also use a static file-based discovery. This can be done using the Endpoint Description Extender Format (EDEF), which is also supported by ECF.
Let’s create an additional service that is distributed via JAX-RS. But this time we exclude the org.eclipse.ecf.provider.jmdns bundle, so there is no additional discovery inside the Service Provider Runtime. We also add the console bundles to be able to inspect the runtime.
Note:
If you don’t want to create another service, you can also modify the previous uppercase service. In that case remove the org.eclipse.ecf.provider.jmdns bundle from the product configuration and ensure that the console bundles are added to be able to inspect the remote service runtime via the OSGi Console.
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=ds-component \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = camelcase
version = 1.0-SNAPSHOT
package = org.fipro.modifier.camelcase
After accepting the inserted values with ‘y’ a subfolder named camelcase
is created that contains the service implementation project structure and the camelcase
project is added as module to the parent pom.xml file.
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=application \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = camelcase-app
version = 1.0-SNAPSHOT
package = org.fipro.modifier
impl-artifactId = camelcase
impl-groupId = org.fipro.modifier
impl-version = 1.0-SNAPSHOT
target-java-version = 11
First you need to decline the properties configuration, as by default target-java-version = 8
will be used. After setting the correct values and accepting them with ‘y’ a subfolder named camelcase-app
is created that contains .bndrun
files and preparations for configuring the application.
Click Finish.
Add the api project and jakarta.ws.rs-api to the dependencies section of the camelcase project:
<dependency>
<groupId>org.fipro.modifier</groupId>
<artifactId>api</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>jakarta.ws.rs</groupId>
<artifactId>jakarta.ws.rs-api</artifactId>
<version>2.1.6</version>
</dependency>
Replace the generated ComponentImpl class with the class CamelCaseModifier in the camelcase project:
package org.fipro.modifier.camelcase;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
@Path("/modify")
@Component(
immediate = true,
property = {
"service.exported.interfaces=*",
"service.exported.intents=jaxrs",
"ecf.jaxrs.server.pathPrefix=/camelcase"})
public class CamelCaseModifier implements StringModifier {
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/{value}")
@Override
public String modify(@PathParam("value") String input) {
StringBuilder builder = new StringBuilder();
if (input != null) {
for (int i = 0; i < input.length(); i++) {
char currentChar = input.charAt(i);
if (i % 2 == 0) {
builder.append(Character.toUpperCase(currentChar));
} else {
builder.append(Character.toLowerCase(currentChar));
}
}
}
else {
builder.append("No input given");
}
return builder.toString();
}
}
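The transformation logic itself is plain Java and can be verified without any OSGi runtime. The following standalone sketch (a hypothetical helper, not part of the tutorial projects) replicates the alternating upper/lower case logic of CamelCaseModifier:

```java
public class CamelCaseDemo {

    // Same alternating logic as CamelCaseModifier#modify:
    // characters at even indices upper case, odd indices lower case
    static String modify(String input) {
        StringBuilder builder = new StringBuilder();
        if (input != null) {
            for (int i = 0; i < input.length(); i++) {
                char c = input.charAt(i);
                builder.append(i % 2 == 0
                        ? Character.toUpperCase(c)
                        : Character.toLowerCase(c));
            }
        } else {
            builder.append("No input given");
        }
        return builder.toString();
    }

    public static void main(String[] args) {
        // the path parameter used throughout the tutorial
        System.out.println(modify("remoteservice"));
    }
}
```

Calling the service with the path parameter remoteservice should therefore return ReMoTeSeRvIcE.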
Update the dependencies section to include the ECF bundles as shown below (the versions are already configured in the parent pom.xml dependencyManagement section):
<!-- ECF dependencies -->
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.core.jobs</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.equinox.concurrent</artifactId>
</dependency>
<!-- ECF -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.discovery</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.identity</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.distribution</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.remoteservice.asyncproxy</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.remoteservice</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.sharedobject</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<!-- ECF JAX-RS Distribution Provider -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jaxrs</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jaxrs.server</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.provider.jersey.server</artifactId>
</dependency>
<!-- ECF JAX-RS Distribution Provider Dependencies -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.jaxrs</groupId>
<artifactId>jackson-jaxrs-json-provider</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.containers</groupId>
<artifactId>jersey-container-servlet</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.containers</groupId>
<artifactId>jersey-container-servlet-core</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.core</groupId>
<artifactId>jersey-client</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.media</groupId>
<artifactId>jersey-media-json-jackson</artifactId>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.inject</groupId>
<artifactId>jersey-hk2</artifactId>
</dependency>
<!-- The Gogo Shell -->
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.gogo.shell</artifactId>
<version>1.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.gogo.runtime</artifactId>
<version>1.0.10</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.gogo.command</artifactId>
<version>1.0.2</version>
<exclusions>
<exclusion>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.core</artifactId>
</exclusion>
<exclusion>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.compendium</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- ECF Console -->
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.console</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.ecf</groupId>
<artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.console</artifactId>
</dependency>
osgi.console=
osgi.console.enable.builtin=false
org.osgi.service.http.port=8282
ecf.jaxrs.server.pathPrefix=/services
Once the runtime is started via Run OSGi the service should be available via http://localhost:8282/services/camelcase/modify/remoteservice
You probably noticed a console output on startup that shows the Endpoint Description XML. This is actually what we need for the EDEF file. You can also get the endpoint description at runtime via the ECF Gogo Command listexports <endpoint.id>:
osgi> listexports
endpoint.id |Exporting Container ID |Exported Service Id
5918da3a-a971-429f-9ff6-87abc70d4742 |http://localhost:8282/services/camelcase |38
osgi> listexports 5918da3a-a971-429f-9ff6-87abc70d4742
<endpoint-descriptions xmlns="http://www.osgi.org/xmlns/rsa/v1.0.0">
<endpoint-description>
<property name="ecf.endpoint.id" value-type="String" value="http://localhost:8282/services/camelcase"/>
<property name="ecf.endpoint.id.ns" value-type="String" value="ecf.namespace.jaxrs"/>
<property name="ecf.endpoint.ts" value-type="Long" value="1642667915518"/>
<property name="ecf.jaxrs.server.pathPrefix" value-type="String" value="/camelcase"/>
<property name="ecf.rsvc.id" value-type="Long" value="1"/>
<property name="endpoint.framework.uuid" value-type="String" value="80778aff-63c7-448d-92a5-7902eb6782ae"/>
<property name="endpoint.id" value-type="String" value="5918da3a-a971-429f-9ff6-87abc70d4742"/>
<property name="endpoint.package.version.org.fipro.modifier" value-type="String" value="1.0.0"/>
<property name="endpoint.service.id" value-type="Long" value="38"/>
<property name="objectClass" value-type="String">
<array>
<value>org.fipro.modifier.StringModifier</value>
</array>
</property>
<property name="remote.configs.supported" value-type="String">
<array>
<value>ecf.jaxrs.jersey.server</value>
</array>
</property>
<property name="remote.intents.supported" value-type="String">
<array>
<value>passByValue</value>
<value>exactlyOnce</value>
<value>ordered</value>
<value>osgi.async</value>
<value>osgi.private</value>
<value>osgi.confidential</value>
<value>jaxrs</value>
</array>
</property>
<property name="service.imported" value-type="String" value="true"/>
<property name="service.imported.configs" value-type="String">
<array>
<value>ecf.jaxrs.jersey.server</value>
</array>
</property>
<property name="service.intents" value-type="String">
<array>
<value>jaxrs</value>
</array>
</property>
</endpoint-description>
</endpoint-descriptions>
The endpoint description is needed by the Service Consumer to discover the new service. Without a Discovery that is broadcasting, the service needs to be discovered statically via an EDEF file. As the EDEF file is registered via manifest header, we create a new bundle. You could also place it in an existing bundle like org.fipro.modifier.client, but for some more OSGi dynamics fun, let’s create a new bundle.
Use the Header OSGi Bundle Annotation to add the Remote-Service header to the OSGi metadata:
@org.osgi.annotation.bundle.Header(name="Remote-Service", value="edef/camelcase.xml")
package edef;
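When bnd processes this package-info annotation, it adds the corresponding header to the generated bundle manifest, so the MANIFEST.MF should contain an entry like:

```
Remote-Service: edef/camelcase.xml
```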
Set the artifactId of the new project to client-edef and configure the pom.xml as shown below:
<dependencies>
<dependency>
<groupId>org.osgi.enroute</groupId>
<artifactId>osgi-api</artifactId>
<type>pom</type>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-baseline-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
Note:
If you see an error on the project after the modification on the pom.xml file, execute a right-click on the project -> Maven -> Update Project… -> select the client-edef project or even all projects in the dialog and click Update.
Add the new client-edef bundle to the dependencies section:
<dependency>
<groupId>org.fipro.modifier</groupId>
<artifactId>client-edef</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
If you start the Service Consumer Runtime, the service will directly be available. This is because the new org.fipro.modifier.client-edef bundle is activated automatically by the bnd launcher (a big difference compared to Equinox). Let’s deactivate it via the console. First we need to find the bundle-id via lb
and then stop it via stop <bundle-id>
. The output should look similar to the following snippet:
g! lb edef
START LEVEL 1
ID|State |Level|Name
49|Active | 1|client-edef (1.0.0.202202180929)|1.0.0.202202180929
g! stop 49
Now the service becomes unavailable via the modify
command. If you start the bundle, the service becomes available again.
The EDEF specification itself would not be sufficient for productive usage. For example, the values of the endpoint description properties need to match. For the endpoint.id this would be really problematic, as that value is a randomly generated UUID and changes on each runtime start. So if the Service Provider Runtime is restarted, there is a new endpoint.id value. ECF includes a mechanism to support the discovery and the distribution even if the endpoint.id of the importer and the exporter do not match. This actually makes the EDEF file support work in productive environments.
ECF also provides a mechanism to create an endpoint description using a properties file. All the necessary endpoint description properties need to be included as properties with the respective types and values. The following example shows the properties representation for the EDEF XML of the above example. Note that for endpoint.id and endpoint.framework.uuid the type is set to uuid
and the value is 0. This way ECF will generate a random UUID and the matching feature will ensure that the distribution will work even without matching id values.
ecf.endpoint.id=http://localhost:8282/services/camelcase
ecf.endpoint.id.ns=ecf.namespace.jaxrs
ecf.endpoint.ts:Long=1642761763599
ecf.jaxrs.server.pathPrefix=/camelcase
ecf.rsvc.id:Long=1
endpoint.framework.uuid:uuid=0
endpoint.id:uuid=0
endpoint.package.version.org.fipro.modifier.api=1.0.0
endpoint.service.id:Long=38
objectClass:array=org.fipro.modifier.api.StringModifier
remote.configs.supported:array=ecf.jaxrs.jersey.server
remote.intents.supported:array=passByValue,exactlyOnce,ordered,osgi.async,osgi.private,osgi.confidential,jaxrs
service.imported:boolean=true
service.imported.configs:array=ecf.jaxrs.jersey.server
service.intents:array=jaxrs
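To illustrate the name:Type=value convention used in these files, here is a small standalone Java sketch (purely illustrative, not ECF's actual parser) that splits such an entry into key, type, and value, with String as the default type:

```java
public class EdefPropertyKeyDemo {

    // Splits "name:Type=value" into its three parts;
    // a missing ":Type" suffix defaults to String
    static String[] parse(String line) {
        int eq = line.indexOf('=');
        String key = line.substring(0, eq);
        String value = line.substring(eq + 1);
        String type = "String";
        int colon = key.indexOf(':');
        if (colon >= 0) {
            type = key.substring(colon + 1);
            key = key.substring(0, colon);
        }
        return new String[] { key, type, value };
    }

    public static void main(String[] args) {
        for (String line : new String[] {
                "ecf.rsvc.id:Long=1",
                "endpoint.id:uuid=0",
                "ecf.jaxrs.server.pathPrefix=/camelcase" }) {
            String[] p = parse(line);
            System.out.println(p[0] + " | " + p[1] + " | " + p[2]);
        }
    }
}
```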
Properties files can be used to override values in an underlying XML EDEF file, or even as an alternative so the XML file is not needed anymore. It is even possible to override property values for different environments, which makes it very interesting in a productive environment. So there can be a default properties file for the basic endpoint description, then an endpoint description per service that derives from the basic settings, and even profile-specific settings that change for example the ecf.endpoint.id URLs per profile (DEV/INT/PROD). More details on that topic can be found in the ECF Wiki.
Alternatively you can also trigger a remote service import via EDEF programmatically using classes from the org.osgi.service.remoteserviceadmin
package (see below). This way it is possible to dynamically import and close remote service registrations at runtime (without operating via low level OSGi bundle operations). The following snippet is an example for the programmatic registration of the service above:
Map<String, Object> properties = new HashMap<>();
properties.put("ecf.endpoint.id", "http://localhost:8282/services/camelcase");
properties.put("ecf.endpoint.id.ns", "ecf.namespace.jaxrs");
properties.put("ecf.endpoint.ts", 1642489801532L);
properties.put("ecf.jaxrs.server.pathPrefix", "/camelcase");
properties.put("ecf.rsvc.id", 1L);
properties.put("endpoint.framework.uuid", "0");
properties.put("endpoint.id", "0");
properties.put("endpoint.package.version.org.fipro.modifier.api", "1.0.0");
properties.put("endpoint.service.id", 38L);
properties.put("objectClass", new String[] { "org.fipro.modifier.api.StringModifier" });
properties.put("remote.configs.supported", new String[] { "ecf.jaxrs.jersey.server" });
properties.put("remote.intents.supported", new String[] { "passByValue", "exactlyOnce", "ordered", "osgi.async", "osgi.private", "osgi.confidential", "jaxrs" });
properties.put("service.imported", "true");
properties.put("service.intents", new String[] { "jaxrs" });
properties.put("service.imported.configs", new String[] { "ecf.jaxrs.jersey.server" });
EndpointDescription desc = new EndpointDescription(properties);
// 'admin' is the org.osgi.service.remoteserviceadmin.RemoteServiceAdmin service
ImportRegistration importRegistration = admin.importService(desc);
The OSGi specification has several chapters and implementations to support a microservice architecture. The Remote Service and Remote Service Admin specifications are among them, and probably the most complicated ones, which was confirmed by several OSGi experts I talked with at conferences. Also the specification itself is not easy to understand, but I hope that this blog post helps to get a better understanding.
While Remote Services are pretty easy to implement, the complicated part is the setup of the runtime by collecting all necessary bundles. While the ECF project provides several examples and also tries to provide support for better bundle resolving, it is still not a trivial task. I hope this tutorial also helps a little bit with that topic.
Of course at runtime you might face networking issues, as I did in every talk for example. The typical fallacies are even referenced in the Remote Service Specification. With the usage of JAX-RS and HTTP for the distribution of services and EDEF for a static file-based discovery, this might be less problematic. Give them a try if you are running into trouble.
At the end I again want to thank Scott Lewis for his continuous work on ECF and his support whenever I faced issues with my examples and had questions on some details. If you need an extension or if you have other requests regarding ECF or the JAX-RS Distribution Provider, like publishing the JAX-RS Distribution Provider on Maven Central and providing dependencies via pom.xml, please get in touch with him.
In the last years I published several blog posts and gave several talks related to OSGi, and often the topic OSGi Remote Services was raised, but never really covered in detail. Scott Lewis, the project lead of the Eclipse Communication Framework, was really helpful whenever I encountered issues with Remote Services. I promised to write a blog post about that topic as a favour for all the support. And with this blog post I finally want to keep my promise. That said, let’s start with OSGi Remote Services.
First I want to explain the motivation for having a closer look at OSGi Remote Services. Looking at general software architecture discussions in the past, service oriented architectures and microservices are a huge topic. By definition, the idea of a microservices architecture is to have
While new frameworks and tools came up over the years, the OSGi specifications have covered these topics for a long time. Via the service registry and the service dynamics you can build up very small modules. Those modules can then be integrated into small runtimes and deployed in different environments (apart from the needed JVM or, if needed, a database). The services in those small independent deployments can then be accessed in different ways, like using the HTTP Whiteboard or JAX-RS Whiteboard. This satisfies the aspect of communication between services via lightweight mechanisms. For inhomogeneous environments the usage of those specifications is a good match. But it means that you need to implement the access layer on the provider side (e.g. the JAX-RS wrapper to access the service via REST) and you need to implement the service access on the consumer side by using a corresponding framework to execute the REST calls.
Ideally the developer of the service as well as the developer of the service consumer should not need to think about the infrastructure of the whole application. Well, it is always good that everybody in a project knows about everything, but the idea is to not make your code dependent on infrastructure. And this is where OSGi Remote Services come in. You develop the service and the service consumer as if they were executed in the same runtime. In the deployment, the lightweight communication is added to support service communication over a network.
And as initially mentioned, I want to have a look at ways to possibly get rid of the networking issues I faced in past presentations.
To understand this blog post you should be familiar with OSGi services and ideally with OSGi Declarative Services. If you are not familiar with OSGi DS, you can get an introduction by reading my blog post Getting Started with OSGi Declarative Services.
In short, the OSGi Service Layer specifies a Service Producer that publishes a service, and a Service Consumer that listens and retrieves a service. This is shown in the following picture:
With OSGi Remote Services this picture is basically the same. The difference is that the services are registered and consumed across network boundaries. For OSGi Remote Services the above picture could be extended to look like the following:
To understand the above picture and the following blog post better, here is a short glossary for the used terms:
To get a slightly better understanding, the following picture shows some more details inside the Remote Service Implementation block.
Note:
Actually this picture is still a simplified version, as internally there are Endpoint Event Listener and Remote Service Admin Listener that are needed to trigger all the necessary actions. But to get an idea how things play together this picture should be sufficient.
Now let’s explain the picture in more detail:
To simplify the picture again, the important takeaways are the Distribution Provider and the Discovery. The Distribution Provider is responsible for exporting and importing the service, the Discovery is responsible for announcing and discovering the service. The other terms are needed for a deeper understanding, but for a high level understanding of OSGi Remote Services, these two are sufficient.
Now it is time to get our hands dirty and play with OSGi Remote Services. This tutorial has several steps:
There are different ways and tools available for OSGi development. In this tutorial I will use the Eclipse PDE Tooling (Plug-in Development Environment). I also published this tutorial with other toolings if you don’t want to use PDE:
Note:
Remember to activate the PDE DS Annotation Processing via Window → Preferences → Plug-in Development → DS Annotations.
While the implementation and export of an OSGi service as a Remote Service is trivial at first, the definition of the runtime can become quite complicated. Especially collecting the necessary bundles is not that easy without some guidance.
As a reference, with Equinox as underlying OSGi framework the following bundles need to be part of the runtime as a basis:
With the above basic runtime configuration the Remote Services will not yet work. There are still two things missing, the Discovery and the Distribution Provider. ECF provides different implementations for both. Which implementations to use needs to be defined by the project. In this tutorial we will use Zeroconf/JmDNS for the Discovery and the Generic Distribution Provider:
Note:
You can find the list of different implementations with the documentation about the bundles, configuration types and intents in the ECF Wiki:
With the Eclipse PDE tooling (Plug-in Development Environment) it is a best practice to create a Target Definition. This way you explicitly specify what to consume for building your application. For this tutorial all needed plug-ins and features are available via p2 update sites, so the creation of the Target Definition is straightforward.
The source of the .target file should look similar to the following snippet, just in case you are using the Generic Text Editor for creating and editing a Target Definition instead of the wizard based PDE Target Definition Editor.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?pde version="3.8"?>
<target name="org.fipro.remoteservice.target">
<locations>
<location includeAllPlatforms="false" includeConfigurePhase="true" includeMode="planner" includeSource="true" type="InstallableUnit">
<repository location="https://download.eclipse.org/releases/2021-12"/>
<unit id="org.eclipse.equinox.compendium.sdk.feature.group" version="3.22.200.v20211021-1418"/>
<unit id="org.eclipse.equinox.core.sdk.feature.group" version="3.23.200.v20211104-1730"/>
<unit id="org.eclipse.equinox.executable.feature.group" version="3.8.1400.v20211117-0650"/>
</location>
<location includeAllPlatforms="false" includeConfigurePhase="true" includeMode="planner" includeSource="true" type="InstallableUnit">
<repository location="https://download.eclipse.org/rt/ecf/3.14.31/site.p2"/>
<unit id="org.eclipse.ecf.remoteservice.sdk.feature.feature.group" version="3.14.31.v20220116-0708"/>
</location>
</locations>
</target>
Note:
The Eclipse SimRel p2 repository https://download.eclipse.org/releases/2021-12 also contains ECF, but in the older version 3.14.26. That version has a bug (which I will mention later) that was fixed with 3.14.31. The current ECF version can be found via the ECF Download page.
After the creation of the Target Platform project, we need to create the Service API project and the Service Implementation project.
Create the interface StringModifier in the created package:

package org.fipro.modifier.api;
public interface StringModifier {
String modify(String input);
}
Create the class StringInverter in the created package:

package org.fipro.modifier.inverter;
import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
@Component(property= {
"service.exported.interfaces=*",
"service.exported.configs=ecf.generic.server" }
)
public class StringInverter implements StringModifier {
@Override
public String modify(String input) {
return (input != null)
? new StringBuilder(input).reverse().toString()
: "No input given";
}
}
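Stripped of the OSGi annotations, the modifier logic is plain Java and can be tried standalone. The following sketch (a hypothetical demo class, not part of the tutorial projects) shows the expected behaviour:

```java
// Standalone sketch of the StringInverter logic, without the OSGi parts.
public class StringInverterDemo {

    static String modify(String input) {
        return (input != null)
                ? new StringBuilder(input).reverse().toString()
                : "No input given";
    }

    public static void main(String[] args) {
        System.out.println(modify("remoteservice")); // prints "ecivresetomer"
        System.out.println(modify(null));            // prints "No input given"
    }
}
```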
The only thing that needs to be done additionally, in comparison to creating a local OSGi service, is to configure that the service should be exported as a Remote Service. This is done by setting the component property service.exported.interfaces
. The value of this property needs to be a list of types for which the class is registered as a service. For a simple use case like the above, the asterisk can be used, which means to export the service for all interfaces under which it is registered, but to ignore the classes. For more detailed information have a look at the Remote Service Properties section of the OSGi Compendium Specification.
The other component property used in the above example is service.exported.configs
. This property is used to specify the configuration types, for which the Distribution Provider should create Endpoints. If it is not specified, the Distribution Provider is free to choose the default configuration type for the service.
Note:
In the above example we use the ECF Generic Provider. This one by default chooses an SSL configuration type, so without additional configuration the example would not work if we don't specify the configuration type.
Additionally you can specify Intents via the service.exported.intents
component property to constrain the possible communication mechanisms that a distribution provider can choose to distribute a service. An example will be provided at a later step.
In a PDE based project you either create a launch configuration or a product configuration. With the later you are even able to build an executable runtime from the command line via Tycho that you can then deploy.
Now you can save the changes and start the Inverter Service Runtime from the Overview tab via Launch an Eclipse application. But actually you won't see anything yet, except a running process in the background.
The implementation of a Remote Service Consumer is also quite simple. From the development perspective there is nothing special to consider. The service consumer is implemented without any additions. Only the runtime needs to be extended to contain the necessary bundles for Discovery and Distribution, which is covered in the next section.
The simplest way of implementing a service consumer is a Gogo Shell command.
Create the class ModifyCommand in the created package:

package org.fipro.modifier.client;
import java.util.List;
import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
@Component(
property= {
"osgi.command.scope:String=fipro",
"osgi.command.function:String=modify"},
service=ModifyCommand.class
)
public class ModifyCommand {
@Reference
volatile List<StringModifier> modifier;
public void modify(String input) {
if (modifier.isEmpty()) {
System.out.println("No StringModifier registered");
} else {
modifier.forEach(m -> System.out.println(m.modify(input)));
}
}
}
Creating a Product Project with a Product Configuration for the Service Consumer is similar to the Service Runtime. Just change the project and configuration name to org.fipro.modifier.client.product
. And of course instead of org.fipro.modifier.inverter
you need to add org.fipro.modifier.client
and the console bundles to the Contents of the Product Configuration.
Now you can save and start the Service Consumer Runtime from the Overview tab via Launch an Eclipse application. Once the application is started you can execute the created Gogo Shell command via
modify <input>
If services are available, it will print out the modified results. Otherwise the message “No StringModifier registered” will be printed.
Note:
I have configured the bare minimum autostarting configuration which should actually start all required bundles based on the bundle configurations and dependencies. If you face any issues, try to check if all bundles are Active. Otherwise add additional entries in the Start Levels section.
There are several events related to importing and exporting Remote Services that are fired by the Remote Service Admin synchronously once they happen. These events are posted asynchronously via the OSGi Event Admin under the topic
org/osgi/service/remoteserviceadmin/<type>
Where <type>
can be one of the following:
A simple event listener that prints to the console on any Remote Service Admin Event could look like this:
import org.osgi.service.component.annotations.Component;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;

@Component(property = EventConstants.EVENT_TOPIC + "=org/osgi/service/remoteserviceadmin/*")
public class RemoteServiceEventListener implements EventHandler {
@Override
public void handleEvent(Event event) {
System.out.println(event.getTopic());
for (String objectClass : ((String[])event.getProperty("objectClass"))) {
System.out.println("\t"+objectClass);
}
}
}
For further details on the Remote Service Admin Events have a look at the OSGi Compendium Specification Chapter 122.7.
If you need to react synchronously on these events, you can implement a RemoteServiceAdminListener
. I would typically not recommend this, unless you really need blocking calls on import/export events. It is intended to be used internally by the Remote Service Admin. But for debugging purposes the ECF project also provides a DebugRemoteServiceAdminListener
. It writes the endpoint description via a Writer to support debugging of Remote Services. Via the following class you could easily register a DebugRemoteServiceAdminListener
via OSGi DS that prints the information on the console.
@Component
public class DebugListener
extends DebugRemoteServiceAdminListener
implements RemoteServiceAdminListener {
// register the DebugRemoteServiceAdminListener via DS
}
To test this you can either add the above components to one of the existing bundles, or create a new bundle and add that bundle to the runtimes.
The ECF project provides several ways for runtime inspection and runtime debugging. This is mainly done via Gogo Shell commands provided in separate bundles. To enable the OSGi console and the ECF console commands, you need to add the following bundles to your runtime:
If you add those bundles to the Service Provider Runtime, you also need to add the -console parameter to the Program Arguments of the Product Configuration (Launching tab) to activate the OSGi Console in interactive mode. Of course adding the ECF Console bundles to the Service Consumer Runtime is also very helpful for debugging.
With the ECF Console bundles added to the runtime, there are several commands to inspect and interact with the Remote Service Admin. As an overview the available commands are listed in the wiki: Gogo Commands for Remote Services Development
Additionally the DebugRemoteServiceAdminListener
described above is activated by default with the ECF Console bundles. It can be activated or deactivated in the runtime via the command
ecf:rsadebug <true/false>
One of the biggest issues I faced when working with Remote Services is networking, as mentioned in the introduction. In the above example the ECF Generic Distribution Provider is used for a simpler setup. But for example in a corporate network with firewalls enabled somewhere in the network setup, the example will probably not work. As said before, the ECF project provides multiple Distribution Provider implementations, which gives the opportunity to configure the setup to match the project needs. One interesting implementation in that area is the JAX-RS Distribution Provider. Using that one could probably help solve several of the networking issues related to firewalls. But as with the whole Remote Service topic, the complexity of the setup is quite high because of the increased number of dependencies that need to be resolved.
The JAX-RS Distribution Provider implementation is available for Eclipse Jersey and Apache CXF. It uses the OSGi HttpService to register the JAX-RS resource, and of course it then also needs a Servlet container like Eclipse Jetty to provide the JAX-RS resource. I will show the usage of the Jersey based implementation in the following sections.
As a first step the JAX-RS Distribution Provider needs to be consumed. In PDE this means to add it to the Target Definition. Unfortunately it is not officially released via the Eclipse Foundation infrastructure, but the p2 update site is available via the GitHub project.
The source of the .target file should look similar to the following snippet, just in case you are using the Generic Text Editor for creating and editing a Target Definition instead of the wizard based PDE Target Definition Editor.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?pde version="3.8"?>
<target name="org.fipro.remoteservice.target">
<locations>
<location includeAllPlatforms="false" includeConfigurePhase="true" includeMode="planner" includeSource="true" type="InstallableUnit">
<repository location="https://download.eclipse.org/releases/2021-12"/>
<unit id="org.eclipse.equinox.compendium.sdk.feature.group" version="3.22.200.v20211021-1418"/>
<unit id="org.eclipse.equinox.core.sdk.feature.group" version="3.23.200.v20211104-1730"/>
<unit id="org.eclipse.equinox.executable.feature.group" version="3.8.1400.v20211117-0650"/>
<unit id="org.eclipse.equinox.server.jetty.feature.group" version="1.10.900.v20211021-1418"/>
</location>
<location includeAllPlatforms="false" includeConfigurePhase="true" includeMode="planner" includeSource="true" type="InstallableUnit">
<repository location="https://raw.githubusercontent.com/ECF/JaxRSProviders/master/build/"/>
<unit id="org.eclipse.ecf.provider.jersey.client.feature.feature.group" version="0.0.0"/>
<unit id="org.eclipse.ecf.provider.jersey.server.feature.feature.group" version="0.0.0"/>
</location>
<location includeAllPlatforms="false" includeConfigurePhase="true" includeMode="planner" includeSource="true" type="InstallableUnit">
<repository location="https://download.eclipse.org/rt/ecf/3.14.31/site.p2"/>
<unit id="org.eclipse.ecf.remoteservice.sdk.feature.feature.group" version="3.14.31.v20220116-0708"/>
</location>
</locations>
</target>
The implementation of the service already looks different compared to what you have seen so far. Instead of only adding the necessary Component Properties to configure the service as a Remote Service, the service implementation does directly contain the JAX-RS annotations. That of course also means that the annotations need to be available.
Create the UppercaseModifier class in that package:

package org.fipro.modifier.uppercase;
import java.util.Locale;
import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
//The JAX-RS path annotation for this service
@Path("/modify")
//The OSGi DS component annotation
@Component(
immediate = true,
property = {
"service.exported.interfaces=*",
"service.exported.intents=jaxrs"})
public class UppercaseModifier implements StringModifier {
@GET
// The JAX-RS annotation to specify the result type
@Produces(MediaType.TEXT_PLAIN)
// The JAX-RS annotation to specify that the last part
// of the URL is used as method parameter
@Path("/{value}")
@Override
public String modify(@PathParam("value") String input) {
return (input != null)
? input.toUpperCase(Locale.getDefault())
: "No input given";
}
}
For the JAX-RS annotations, please have a look at the various existing tutorials and blog posts on the internet, for example
About the OSGi DS configuration:
service.exported.interfaces=*
service.exported.intents=jaxrs
Note:
As mentioned earlier there is a bug in ECF 3.14.26 which is integrated in the Eclipse 2021-12 SimRel repo. The service.exported.intents
property is not enough to get the JAX-RS resource registered. Additionally it is necessary to set service.exported.configs=ecf.jaxrs.jersey.server
to make it work. This was fixed shortly after I reported it and is included with the current ECF 3.14.31 release. The basic idea of the intent configuration is to make the service independent of the underlying JAX-RS Distribution Provider implementation (Jersey vs. Apache CXF).
For the JAX-RS Distribution Provider Runtime a lot more dependencies are required. The following list should cover the additional necessary base dependencies:
For the Service Provider we need the following dependencies, which are the JAX-RS Jersey Distribution Provider Server bundles, the Jetty as embedded server and the HTTP Whiteboard:
For the Service Consumer we need the following dependencies, which are the JAX-RS Jersey Distribution Provider Client bundles to be able to access the JAX-RS resource:
Now you can start the Uppercase JAX-RS Service Runtime from the Overview tab via Launch an Eclipse application. After the runtime is started the service will be available as JAX-RS resource and can be accessed in a browser, e.g. http://localhost:8181/modify/remoteservice
Note:
Don’t worry if you see a SelectContainerException
in the console. It is only an information that tells that the service from the first part of the tutorial can not be imported in the runtime of this part of the tutorial and vice versa. The first service is distributed via the Generic Provider, while the second service is distributed by the JAX-RS Provider. But both are using the JmDNS Discovery Provider.
The URL path is defined via the JAX-RS annotations, “modify” via @Path("/modify")
on the class, “remoteservice” is the path parameter defined via @Path("/{value}")
on the method (if you change that value, the result will change accordingly). You can extend the URL via configurations shown below:
Via the system property -Decf.jaxrs.server.pathPrefix=<value>, a path prefix is added for all JAX-RS Remote Services in the runtime (e.g. -Decf.jaxrs.server.pathPrefix=/services)
Via the component property ecf.jaxrs.server.pathPrefix=<value> in the @Component annotation, a path prefix is added for a single service,
e.g.
@Component(
immediate = true,
property = {
"service.exported.interfaces=*",
"service.exported.intents=jaxrs",
"ecf.jaxrs.server.pathPrefix=/upper"})
If all of the above configurations are added, the new URL to the service is, e.g. http://localhost:8181/services/upper/modify/remoteservice
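Conceptually, the final URL is just the concatenation of host, global path prefix, component path prefix, class-level @Path and the path parameter. The following hypothetical helper (not an ECF API, only an illustration) shows the composition:

```java
// Hypothetical sketch of how the final service URL is composed from the
// configured prefixes and the JAX-RS @Path values; not an ECF API.
public class ServiceUrlDemo {

    static String serviceUrl(String host, String globalPrefix,
            String componentPrefix, String classPath, String value) {
        return host + globalPrefix + componentPrefix + classPath + "/" + value;
    }

    public static void main(String[] args) {
        String url = serviceUrl("http://localhost:8181",
                "/services",      // -Decf.jaxrs.server.pathPrefix
                "/upper",         // ecf.jaxrs.server.pathPrefix component property
                "/modify",        // @Path on the class
                "remoteservice"); // @Path("/{value}") path parameter
        System.out.println(url);
        // prints http://localhost:8181/services/upper/modify/remoteservice
    }
}
```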
Additional information about available component properties can be found here: Jersey Service Properties
Note:
Especially the auto-start configuration is quite annoying with the Equinox launcher when you know that the Bnd launcher or the Felix launcher have configuration attributes for auto-starting all bundles. The Equinox launcher does not have such a configuration AFAIK, but you could achieve something similar by either implementing a custom Configurator or by registering a BundleListener that starts all bundles in RESOLVED state. I stick to the Equinox default to avoid additional topics here, but for the interested, have a look at the provided links.
Note:
With the latest version of the JAX-RS Distribution Provider, the autostart configuration is much more comfortable than before. There were several improvements to make the definition of a runtime more user friendly, so if you are already familiar with the JAX-RS Distribution Provider and used it in the past, be sure to update it to the latest version to benefit from those modifications.
To consume the Remote Service provided via JAX-RS Distribution Provider, the runtime needs to be extended to include the additional dependencies:
If you now start the Service Consumer Runtime and have the Service Provider Runtime also running, you can execute the following command
modify jax
This will actually lead to an error if you followed my tutorial step by step:
ServiceException: Service exception on remote service proxy
The reason is that the Service Interface does not contain the JAX-RS annotations as the service implementation does, and therefore the mapping is not working. So while the interface does not need to be modified for providing the service, it has to be for the consumer side.
Note:
I sometimes encountered a Circular reference detected
error. After some investigation this issue seems to be related to autostarting org.apache.felix.scr
. If you have auto-start set to true for that bundle and see that issue, try to remove the autostart configuration for that bundle.
Also ensure that the workspace data is cleared on start, as the previous execution might have left some cached data that conflicts with the updated runtime configuration. To do this:
If that doesn’t help, try to delete the run configuration and create a new one via the Product Configuration.
Modify the StringModifier interface and add the JAX-RS annotations to be exactly the same as for the Service Implementation:

package org.fipro.modifier.api;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
@Path("/modify")
public interface StringModifier {
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/{value}")
String modify(@PathParam("value") String input);
}
If you now start the Uppercase Service Provider Runtime and the Service Consumer Runtime again, the error should be gone and you should see the expected result.
After the Service Interface was extended to include the JAX-RS annotations, the first Service Provider Runtime will not resolve anymore because of missing dependencies. To fix this:
Now you can start that Service Provider Runtime again. If the other Service Provider and the Service Consumer are also active, executing the modify command will now output the result of both services.
In the tutorial we used JmDNS/Zeroconf as Discovery Provider. This way there is not much we have to do as a developer or administrator apart from adding the according bundle to the runtime. This kind of Discovery uses a broadcast mechanism to announce the service in the network. In case this doesn't work, e.g. because firewall rules block broadcasting, it is also possible to use a static file-based discovery. This can be done using the Endpoint Description Extender Format (EDEF), which is also supported by ECF.
Let’s create an additional service that is distributed via JAX-RS. But this time we exclude the org.eclipse.ecf.provider.jmdns bundle, so there is no additional discovery inside the Service Provider Runtime. We also add the console bundles to be able to inspect the runtime.
Note: If you don’t want to create another service, you can also modify the previous uppercase service. In that case remove the org.eclipse.ecf.provider.jmdns bundle from the product configuration and ensure that the console bundles are added to be able to inspect the remote service runtime via the OSGi Console.
Create the Service Implementation plug-in project
Create the CamelCaseModifier class in that package:

package org.fipro.modifier.camelcase;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
@Path("/modify")
@Component(
immediate = true,
property = {
"service.exported.interfaces=*",
"service.exported.intents=jaxrs",
"ecf.jaxrs.server.pathPrefix=/camelcase"})
public class CamelCaseModifier implements StringModifier {
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/{value}")
@Override
public String modify(@PathParam("value") String input) {
StringBuilder builder = new StringBuilder();
if (input != null) {
for (int i = 0; i < input.length(); i++) {
char currentChar = input.charAt(i);
if (i % 2 == 0) {
builder.append(Character.toUpperCase(currentChar));
} else {
builder.append(Character.toLowerCase(currentChar));
}
}
}
else {
builder.append("No input given");
}
return builder.toString();
}
}
Once the runtime is started the service should be available via http://localhost:8282/services/camelcase/modify/remoteservice
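The expected response for that URL can be reproduced standalone with the camel case loop from the service implementation (a hypothetical demo class, without the OSGi and JAX-RS parts):

```java
// Standalone sketch of the CamelCaseModifier loop, without OSGi/JAX-RS.
public class CamelCaseDemo {

    static String modify(String input) {
        if (input == null) {
            return "No input given";
        }
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            // upper case on even positions, lower case on odd positions
            builder.append(i % 2 == 0
                    ? Character.toUpperCase(c)
                    : Character.toLowerCase(c));
        }
        return builder.toString();
    }

    public static void main(String[] args) {
        System.out.println(modify("remoteservice")); // prints "ReMoTeSeRvIcE"
    }
}
```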
You probably noticed a console output on startup that shows the Endpoint Description XML. This is actually what we need for the EDEF file. You can also get the endpoint description at runtime via the ECF Gogo Command listexports <endpoint.id>:
osgi> listexports
endpoint.id |Exporting Container ID |Exported Service Id
5918da3a-a971-429f-9ff6-87abc70d4742 |http://localhost:8282/services/camelcase |38
osgi> listexports 5918da3a-a971-429f-9ff6-87abc70d4742
<endpoint-descriptions xmlns="http://www.osgi.org/xmlns/rsa/v1.0.0">
<endpoint-description>
<property name="ecf.endpoint.id" value-type="String" value="http://localhost:8282/services/camelcase"/>
<property name="ecf.endpoint.id.ns" value-type="String" value="ecf.namespace.jaxrs"/>
<property name="ecf.endpoint.ts" value-type="Long" value="1642667915518"/>
<property name="ecf.jaxrs.server.pathPrefix" value-type="String" value="/camelcase"/>
<property name="ecf.rsvc.id" value-type="Long" value="1"/>
<property name="endpoint.framework.uuid" value-type="String" value="80778aff-63c7-448d-92a5-7902eb6782ae"/>
<property name="endpoint.id" value-type="String" value="5918da3a-a971-429f-9ff6-87abc70d4742"/>
<property name="endpoint.package.version.org.fipro.modifier" value-type="String" value="1.0.0"/>
<property name="endpoint.service.id" value-type="Long" value="38"/>
<property name="objectClass" value-type="String">
<array>
<value>org.fipro.modifier.StringModifier</value>
</array>
</property>
<property name="remote.configs.supported" value-type="String">
<array>
<value>ecf.jaxrs.jersey.server</value>
</array>
</property>
<property name="remote.intents.supported" value-type="String">
<array>
<value>passByValue</value>
<value>exactlyOnce</value>
<value>ordered</value>
<value>osgi.async</value>
<value>osgi.private</value>
<value>osgi.confidential</value>
<value>jaxrs</value>
</array>
</property>
<property name="service.imported" value-type="String" value="true"/>
<property name="service.imported.configs" value-type="String">
<array>
<value>ecf.jaxrs.jersey.server</value>
</array>
</property>
<property name="service.intents" value-type="String">
<array>
<value>jaxrs</value>
</array>
</property>
</endpoint-description>
</endpoint-descriptions>
The endpoint description is needed by the Service Consumer to discover the new service. Without a Discovery that is broadcasting, the service needs to be discovered statically via an EDEF file. As the EDEF file is registered via a manifest header, we create a new plug-in. You could also place it in an existing bundle like org.fipro.modifier.client, but for some more OSGi dynamics fun, let's create a new plug-in.
Remote-Service: edef/camelcase.xml
If you start the Service Consumer Runtime, the service will not be available. This is because the new org.fipro.modifier.client.edef bundle is not activated as nobody requires it (the Equinox default!). But we can activate it via the console. First we need to find the bundle-id via lb
and then start it via start <bundle-id>
. The output should look similar to the following snippet:
osgi> lb edef
START LEVEL 6
ID|State |Level|Name
63|Resolved | 4|EDEF Discovery Configuration (1.0.0.qualifier)|1.0.0.qualifier
osgi> start 63
Now the service should be available via the modify
command. If you stop the bundle, the service becomes unavailable again.
The EDEF specification itself would not be sufficient for productive usage. For example, the values of the endpoint description properties need to match. For the endpoint.id this would be really problematic, as that value is a randomly generated UUID and changes on each runtime start. So if the Service Provider Runtime is restarted, there is a new endpoint.id value. ECF includes a mechanism to support the discovery and the distribution even if the endpoint.id of the importer and the exporter do not match. This actually makes the EDEF file support work in productive environments.
ECF also provides a mechanism to create an endpoint description using a properties file. All the necessary endpoint description properties need to be included as properties with the respective types and values. The following example shows the properties representation for the EDEF XML of the above example. Note that for endpoint.id and endpoint.framework.uuid the type is set to uuid
and the value is 0. This way ECF will generate a random UUID and the matching feature will ensure that the distribution will work even without matching id values.
ecf.endpoint.id=http://localhost:8282/services/camelcase
ecf.endpoint.id.ns=ecf.namespace.jaxrs
ecf.endpoint.ts:Long=1642761763599
ecf.jaxrs.server.pathPrefix=/camelcase
ecf.rsvc.id:Long=1
endpoint.framework.uuid:uuid=0
endpoint.id:uuid=0
endpoint.package.version.org.fipro.modifier.api=1.0.0
endpoint.service.id:Long=38
objectClass:array=org.fipro.modifier.api.StringModifier
remote.configs.supported:array=ecf.jaxrs.jersey.server
remote.intents.supported:array=passByValue,exactlyOnce,ordered,osgi.async,osgi.private,osgi.confidential,jaxrs
service.imported:boolean=true
service.imported.configs:array=ecf.jaxrs.jersey.server
service.intents:array=jaxrs
Properties files can be used to override values in an underlying XML EDEF file, or even as an alternative, so the XML file is not needed anymore. It is also possible to override property values for different environments, which makes it very interesting in a productive environment. So there can be a default properties file for the basic endpoint description, then an endpoint description per service that derives from the basic settings, and even profile specific settings that change for example the ecf.endpoint.id URLs per profile (DEV/INT/PROD). More details on that topic can be found in the ECF Wiki.
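To illustrate the key:Type=value syntax used in such a properties file, here is a minimal parser sketch. This is a hypothetical demo only; ECF's actual properties handling is more complete and supports more types:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the key:Type=value syntax used in the properties
// based endpoint description; a hypothetical demo, not ECF's real parser.
public class EndpointPropsDemo {

    static Map.Entry<String, Object> parseLine(String line) {
        int eq = line.indexOf('=');
        String key = line.substring(0, eq);
        String value = line.substring(eq + 1);
        String type = "String";
        int colon = key.indexOf(':');
        if (colon >= 0) {
            type = key.substring(colon + 1);
            key = key.substring(0, colon);
        }
        Object parsed = switch (type) {
            case "Long" -> Long.valueOf(value);
            case "boolean" -> Boolean.valueOf(value);
            case "array" -> value.split(",");
            default -> value; // String, uuid etc. kept as-is here
        };
        return Map.entry(key, parsed);
    }

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        for (String line : new String[] {
                "ecf.rsvc.id:Long=1",
                "service.imported:boolean=true",
                "service.intents:array=jaxrs" }) {
            Map.Entry<String, Object> e = parseLine(line);
            props.put(e.getKey(), e.getValue());
        }
        System.out.println(props.get("ecf.rsvc.id")); // prints 1
    }
}
```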
Alternatively you can also trigger a remote service import via EDEF programmatically using classes from the org.osgi.service.remoteserviceadmin
package (see below). This way it is possible to dynamically import and close remote service registrations at runtime, without low level OSGi bundle operations. The following snippet is an example for the programmatic import of the service above:
// admin is the RemoteServiceAdmin service, e.g. injected via @Reference
Map<String, Object> properties = new HashMap<>();
properties.put("ecf.endpoint.id", "http://localhost:8282/services/camelcase");
properties.put("ecf.endpoint.id.ns", "ecf.namespace.jaxrs");
properties.put("ecf.endpoint.ts", 1642489801532L);
properties.put("ecf.jaxrs.server.pathPrefix", "/camelcase");
properties.put("ecf.rsvc.id", 1L);
properties.put("endpoint.framework.uuid", "0");
properties.put("endpoint.id", "0");
properties.put("endpoint.package.version.org.fipro.modifier.api", "1.0.0");
properties.put("endpoint.service.id", 38L);
properties.put("objectClass", new String[] { "org.fipro.modifier.api.StringModifier" });
properties.put("remote.configs.supported", new String[] { "ecf.jaxrs.jersey.server" });
properties.put("remote.intents.supported", new String[] { "passByValue", "exactlyOnce", "ordered", "osgi.async", "osgi.private", "osgi.confidential", "jaxrs" });
properties.put("service.imported", "true");
properties.put("service.intents", new String[] { "jaxrs" });
properties.put("service.imported.configs", new String[] { "ecf.jaxrs.jersey.server" });

EndpointDescription desc = new EndpointDescription(properties);
ImportRegistration importRegistration = admin.importService(desc);
The OSGi specification has several chapters and implementations to support a microservice architecture. The Remote Service and Remote Service Admin specifications are two of these, and probably the most complicated ones, which was confirmed by several OSGi experts I talked with at conferences. The specification itself is also not easy to understand, but I hope that this blog post helps to get a better understanding.
While Remote Services are pretty easy to implement, the complicated steps are in the setup of the runtime by collecting all necessary bundles. While the ECF project provides several examples and also tries to provide support for better bundle resolving, it is still not a trivial task. I hope this tutorial helps also in solving that topic a little bit.
Of course at runtime you might face networking issues, as I did in every talk for example. The typical fallacies of distributed computing are even referenced in the Remote Services Specification. With the usage of JAX-RS and HTTP for the distribution of services and EDEF for a static file-based discovery, this might be less problematic. Give them a try if you are running into trouble.
At the end I again want to thank Scott Lewis for his continuous work on ECF and his support whenever I faced issues with my examples and had questions on some details. If you need an extension or if you have other requests regarding ECF or the JAX-RS Distribution Provider, please get in touch with him.
The Eclipse Platform is actually designed to be extensible, and there exist many products that are based on the Eclipse IDE or can be installed into the Eclipse IDE as additional plug-ins. But to create such extensions you need to know the base you want to extend. In a setup with multiple partners that use different technology stacks and have different levels of experience with Eclipse based technology, you can't assume that everything works easily. There are partners that have experience with either Eclipse 3 or Eclipse 4, partners that are aware of neither the Eclipse 3 nor the Eclipse 4 platform, and even partners that do not want to care about the underlying platform at all. Therefore we needed to find a way to make it easy for anyone to contribute new features, without having too many platform dependencies to care about.
As a big fan of OSGi Declarative Services (as you might know if you read some of my previous blog posts), I searched for a way to contribute a new feature to the user interface by implementing and providing an OSGi service. As an Eclipse Platform committer I know that the Eclipse 4 programming model fits very well for connecting the OSGi layer with the Eclipse layer, something that doesn't work that easily with the Eclipse 3 programming model. I called the solution I developed the Extended Contribution Pattern, which I want to describe here in more detail. And I hope with the techniques I show here, I can convince more people to use OSGi Declarative Services and the Eclipse 4 programming model in their daily work when creating Eclipse based products.
The main idea is that an integration layer is implemented with the Eclipse 4 programming model. That integration layer is responsible for the contribution to the Eclipse 3 based application (again, this is Eclipse 4 + Compatibility layer). Additionally it takes and processes the contributions provided via OSGi DS.
For people knowing the Eclipse 3 programming model, this sounds pretty similar to how Extension Points work. And the idea is actually the same. But in comparison, as a developer of the integration layer you do not need to implement the processing of the ExtensionRegistry, which is quite some code that is also not type safe. And as a contributor to the integration layer, you only need to implement and register an OSGi service, without dealing with the platform integration at all.
Note:
The Integration Layer is not needed for connecting the OSGi layer with the Eclipse layer. You can directly consume OSGi services easily via injection in Eclipse 4. The Integration Layer is used to abstract out the UI integration.
In this example I will show how the Extended Contribution Pattern can be used to contribute menu items to the context menu of the navigator views. Of course this could also be achieved by either contributing to the Eclipse 3 extension point or by directly contributing via Eclipse 4 model fragments. But the idea is that contributors of functionality should not have to care about the integration into the platform.
This step is the easiest one. The service interface needs to be a simple marker interface that will be used to mark a contribution class as an OSGi service.
NavigatorMenuContribution
package org.fipro.contribution.integration;
public interface NavigatorMenuContribution { }
Now let’s implement a service for a functionality we want to contribute. The Integration Layer is not complete yet and typically you would not show the contribution service implementation at this point. But to get a better understanding of the next steps in the Integration Layer, it is good to see how the contribution will look like.
Note:
Don’t forget to enable the DS Annotation processing in the Preferences. Otherwise the necessary OSGi Component Descriptions are not generated. As it is not enabled by default, it is a common pitfall when implementing OSGi Declarative Services with PDE tooling.
First we need to define the dependencies. Switch to the Dependencies tab and add the following packages to the Imported Packages:
- javax.annotation (needed for the @PostConstruct annotation)
- org.fipro.contribution.integration (1.0.0)
- org.osgi.service.component.annotations [1.3.0,2.0.0) optional
- org.eclipse.jface
- org.eclipse.core.resources
- org.eclipse.core.runtime
Note:
Typically I recommend using Import-Package instead of Require-Bundle. For plain OSGi this is the best solution. But I learned over the years that especially in the context of Eclipse IDE contributions being that strict doesn't work out, especially because of some split package issues in the Eclipse Platform. My personal rule for PDE based projects is: use Import-Package where possible, and use Require-Bundle for Eclipse Platform bundles that are affected by split packages.
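Expressed in MANIFEST.MF syntax, the imported packages listed above look like the following sketch (the resolution:=optional directive corresponds to the optional marker in the list):

```
Import-Package: javax.annotation,
 org.fipro.contribution.integration;version="1.0.0",
 org.osgi.service.component.annotations;version="[1.3.0,2.0.0)";resolution:=optional
```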
Now create the service:
FileSizeContribution
@Component(property = {
"name = File Size",
"description = Show the size of the selected file" })
public class FileSizeContribution implements NavigatorMenuContribution {
@PostConstruct
public void showFileSize(IFile file, Shell shell) {
URI uri = file.getRawLocationURI();
Path path = Paths.get(uri);
try {
long size = Files.size(path);
MessageDialog.openInformation(
shell,
"File size",
String.format("The size of the selected file is %d bytes", size));
} catch (IOException e) {
MessageDialog.openError(
shell,
"Failed to retrieve the file size",
"Exception occurred on retrieving the file size: "
+ e.getLocalizedMessage());
}
}
}
The important things to notice in the above snippet are:
- The class implements the marker interface NavigatorMenuContribution and is annotated with @Component to mark it as an OSGi DS component.
- The @Component annotation has two properties to specify the name and the description. They will later be used for the user interface integration. In my opinion these two properties are component configurations and should therefore be specified as such. You could on the other hand argue that this information could also be provided via some dedicated methods, but implementing methods to provide configurations for the service instance feels incorrect.
- The method that performs the operation is annotated with @PostConstruct. The first method parameter defines for which type the service is responsible.

For a contributor the rules are pretty simple:
- Implement the marker interface and annotate the class with @Component to register it as an OSGi Declarative Service.
- Annotate the method that should be executed with @PostConstruct.
A contributor does not need to take care about the infrastructure in the Eclipse application and can focus on the feature that should be contributed.
Back to the Integration Layer now. To provide as much flexibility on the contributor side, there needs to be a mechanism that can map that flexibility to the real integration. For this we create a registry that consumes the contributions in first place and stores them for further usage. For the storage we introduce a wrapper around the service, that stores the type for which the service should be registered and the properties that should be used in the user interface (e.g. name and description). For the service properties the issue is that the properties are provided on OSGi DS injection level and can be retrieved from the ServiceRegistry
, but they are not easily accessible in the Eclipse layer. By keeping the information in a wrapper that is populated when the service becomes available, the problem can be handled.
The wrapper class looks similar to the following snippet:
public class NavigatorMenuContributionWrapper {
private final String id;
private final NavigatorMenuContribution instance;
private final String name;
private final String description;
private final String type;
    public NavigatorMenuContributionWrapper(
            String id, NavigatorMenuContribution instance,
            String name, String description, String type) {
        this.id = id;
        this.instance = instance;
        this.name = name;
        this.description = description;
        this.type = type;
    }

    // getters for all fields omitted for brevity
}
Note:
If you are sure that the IDE you are contributing to is always started with Java >= 16, you can of course also implement that wrapper as a Java Record, which avoids quite some boilerplate code. In that case the accessor methods are different, as they are not prefixed with get.
public record NavigatorMenuContributionWrapper(
String id,
NavigatorMenuContribution serviceInstance,
String name,
String description,
String type) { }
In this tutorial I will stick with the old POJO approach, so people that are not yet on the latest Java version can follow easily.
The registry that consumes the NavigatorMenuContribution services and stores them locally has the following characteristics:
- It is registered as a service itself via the service parameter on the @Component annotation.
- It uses method injection (bind/unbind event methods) for the NavigatorMenuContribution services. The reason is that we need to create the wrapper instances with the component properties. Field injection would not work here.
- It extracts the contribution type from the first parameter of the @PostConstruct method via reflection. To avoid reflection you could support a component property that gets evaluated, but that would make the contribution not so intuitive, as you would need to specify the same information twice. And actually the reflection is only executed once per service binding, so it should not really have an effect at runtime.
- It gets the Logger via method injection of the LoggerFactory. This is due to the fact that PDE does not support DS 1.4 annotation processing. With that support you could get the Logger directly via field injection. Alternatively you can of course use a logging framework like SLF4J and not use the OSGi logging at all.

The complete implementation looks like this:
@Component(service = NavigatorMenuContributionRegistry.class)
public class NavigatorMenuContributionRegistry {
LoggerFactory factory;
Logger logger;
private ConcurrentHashMap<String, Map<String, NavigatorMenuContributionWrapper>> registry
= new ConcurrentHashMap<>();
@Reference(
cardinality = ReferenceCardinality.MULTIPLE,
policy = ReferencePolicy.DYNAMIC)
protected void bindService(
NavigatorMenuContribution service, Map<String, Object> properties) {
String className = getClassName(service, properties);
if (className != null) {
Map<String, NavigatorMenuContributionWrapper> services =
this.registry.computeIfAbsent(
className,
key -> new ConcurrentHashMap<String, NavigatorMenuContributionWrapper>());
String id = (String) properties.getOrDefault("id", service.getClass().getName());
if (!services.containsKey(id)) {
services.put(id,
new NavigatorMenuContributionWrapper(
id,
service,
(String) properties.getOrDefault("name", service.getClass().getSimpleName()),
(String) properties.getOrDefault("description", null),
className));
} else {
if (this.logger != null) {
this.logger.error("A NavigatorMenuContribution with the ID {} already exists!", id);
} else {
System.out.println("A NavigatorMenuContribution with the ID " + id + " already exists!");
}
}
} else {
if (this.logger != null) {
this.logger.error(
"Unable to extract contribution class name for NavigatorMenuContribution {}",
service.getClass().getName());
} else {
System.out.println(
"Unable to extract contribution class name for NavigatorMenuContribution "
+ service.getClass().getName());
}
}
}
protected void unbindService(
NavigatorMenuContribution service, Map<String, Object> properties) {
String className = getClassName(service, properties);
String id = (String) properties.getOrDefault("id", service.getClass().getName());
if (className != null) {
Map<String, NavigatorMenuContributionWrapper> services =
this.registry.getOrDefault(className, new HashMap<>());
services.remove(id);
}
}
@SuppressWarnings("unchecked")
public List<NavigatorMenuContributionWrapper> getServices(Class<?> clazz) {
Set<String> classNames = new LinkedHashSet<>();
if (clazz != null) {
classNames.add(clazz.getName());
List<Class<?>> allInterfaces = ClassUtils.getAllInterfaces(clazz);
classNames.addAll(
allInterfaces.stream()
.map(Class::getName)
.collect(Collectors.toList()));
}
return classNames.stream()
.filter(Objects::nonNull)
.flatMap(name -> this.registry.getOrDefault(name, new HashMap<>()).values().stream())
.collect(Collectors.toList());
}
public NavigatorMenuContributionWrapper getService(String className, String id) {
return this.registry.getOrDefault(className, new HashMap<>()).get(id);
}
/**
* Extracts the class name for which the service should be
* registered. Returns the first parameter of the method annotated with
* {@link PostConstruct}.
*
* @param service The service for which the contribution class name
* should be returned.
* @param properties The component properties map of the
* service object.
* @return The contribution class name for which the service should be
* registered.
*/
private String getClassName(NavigatorMenuContribution service, Map<String, Object> properties) {
String className = null;
// find method annotated with @PostConstruct
Class<?> contributionClass = service.getClass();
Method[] methods = contributionClass.getMethods();
for (Method method : methods) {
if (method.isAnnotationPresent(PostConstruct.class)) {
Class<?>[] parameterTypes = method.getParameterTypes();
if (parameterTypes.length > 0) {
if (Collection.class.isAssignableFrom(parameterTypes[0])) {
// extract generic information for List support
Type[] genericParameterTypes = method.getGenericParameterTypes();
if (genericParameterTypes[0] instanceof ParameterizedType) {
Type[] typeArguments =
((ParameterizedType)genericParameterTypes[0]).getActualTypeArguments();
className = typeArguments.length > 0 ? typeArguments[0].getTypeName() : null;
}
} else {
className = parameterTypes[0].getName();
}
break;
}
}
}
return className;
}
@Reference(
cardinality = ReferenceCardinality.OPTIONAL,
policy = ReferencePolicy.DYNAMIC)
void setLogger(LoggerFactory factory) {
this.factory = factory;
this.logger = factory.getLogger(getClass());
}
void unsetLogger(LoggerFactory loggerFactory) {
if (this.factory == loggerFactory) {
this.factory = null;
this.logger = null;
}
}
}
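The reflective type extraction in getClassName can be tried in isolation. The following self-contained sketch mimics that logic with a local marker annotation, since javax.annotation.PostConstruct is no longer part of the JDK; all class and annotation names here are illustrative:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.Collection;
import java.util.List;

public class TypeExtractionDemo {

    // stand-in for javax.annotation.PostConstruct
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Marker { }

    static class SingleContribution {
        @Marker
        public void process(String value) { }
    }

    static class ListContribution {
        @Marker
        public void process(List<Integer> values) { }
    }

    // same extraction logic as getClassName in the registry above
    static String extractType(Class<?> contributionClass) {
        for (Method method : contributionClass.getMethods()) {
            if (method.isAnnotationPresent(Marker.class)) {
                Class<?>[] parameterTypes = method.getParameterTypes();
                if (parameterTypes.length > 0) {
                    if (Collection.class.isAssignableFrom(parameterTypes[0])) {
                        // extract generic information for List support
                        Type generic = method.getGenericParameterTypes()[0];
                        if (generic instanceof ParameterizedType) {
                            Type[] args = ((ParameterizedType) generic).getActualTypeArguments();
                            return args.length > 0 ? args[0].getTypeName() : null;
                        }
                    }
                    return parameterTypes[0].getName();
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(extractType(SingleContribution.class)); // java.lang.String
        System.out.println(extractType(ListContribution.class));   // java.lang.Integer
    }
}
```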
Remember to update the Dependencies in the MANIFEST.MF to include the necessary packages.
With the above implementation we need to add additional dependencies. To avoid complications at implementation time in the next step, we update the plug-in dependencies in advance. As we know that we want to consume OSGi services and operate on Eclipse resources, we know what dependencies we need. In a real-world project the dependencies typically grow while implementing.
Switch to the Dependencies tab and add the following packages to the Imported Packages if they are not included yet:
- javax.annotation (needed for the @PostConstruct annotation)
- javax.inject (1.0.0)
- org.apache.commons.lang (2.6.0) (needed for ClassUtils in the inspection)
- org.osgi.service.component.annotations [1.3.0,2.0.0) optional
- org.osgi.service.log (1.5.0)
Add the following plug-ins to the Required Plug-ins section if they are not included yet:
- org.eclipse.e4.core.contexts (e.g. for IEclipseContext)
- org.eclipse.e4.core.di (e.g. for @Evaluate)
- org.eclipse.e4.core.di.extensions (e.g. for @Service)
- org.eclipse.e4.ui.di (e.g. for @AboutToShow)
- org.eclipse.e4.ui.model.workbench (e.g. for MMenuElement)
- org.eclipse.e4.ui.services (e.g. for IServiceConstants)
- org.eclipse.e4.ui.workbench (e.g. for EModelService)
- org.eclipse.jface
- org.eclipse.core.resources
- org.eclipse.core.runtime
After the services and the Integration Layer are specified, let's have a look at how to use it. For this create a Model Fragment to contribute a dynamic menu contribution to the context menus.
org.fipro.contribution.integration
The wizard that opens will do the following three things:
Since Eclipse 2021-06 (4.20) it is also possible to register a Model Fragment via Manifest header. To make use of this follow these steps:
- Add the following header to the MANIFEST.MF file:
  Model-Fragment: fragment.e4xmi;apply=always
- You could remove the dependency to org.eclipse.e4.ui.model.workbench from the MANIFEST.MF file if not needed. In this example we will not remove it, as we need it in another use case for dynamically creating model elements.

Note:
Adding support for the new Model-Fragment header in the PDE tooling is currently ongoing, e.g. via Bug 572946. So with the next Eclipse 2021-12 (4.22) the manual modification will not be necessary anymore. Eclipse 2021-12 M3 already includes the support. Using that version you will see this wizard:
The next step is to define the model contributions. This example is about contributing a Dynamic Menu Contribution to the context menu of the Navigators. Therefore it is necessary to contribute a Command, a Handler and the Menu Contribution. To do this start by opening the fragment.e4xmi file.
After the above steps the model fragment is prepared for the contributions and the corresponding classes are generated. The next step is to implement the Imperative Expression, the Handler and the Dynamic Menu Contribution.
Imperative Expressions are the replacement for Core Expressions if you want to rely on plain Eclipse 4 without the plugin.xml. Using Imperative Expressions you have the option to implement an expression rather than describing it in an XML format. As in my opinion the definition of a Core Expression in the plugin.xml was never really intuitive, I really like the Imperative Expression in Eclipse 4. One could argue that the declarative way of the Core Expressions is more powerful, but actually I have not yet found a case where an Imperative Expression is not a suitable replacement.
The following code shows the implementation of the ResourceExpression
that checks if a single element is selected and that element is an IResource
and there is at least one contribution service registered for that type.
public class ResourceExpression {

    @Evaluate
    public boolean evaluate(
            @Service NavigatorMenuContributionRegistry registry,
            @Named(IServiceConstants.ACTIVE_SELECTION) IStructuredSelection selection) {
        // single selection of an IResource for which at least one
        // contribution service is registered
        return selection != null
                && selection.size() == 1
                && selection.getFirstElement() instanceof IResource
                && !registry.getServices(selection.getFirstElement().getClass()).isEmpty();
    }
}
The Dynamic Menu Contribution implementation takes the selected element and tries to retrieve the registered contribution services from the registry. If services for the selected type are registered it creates the menu items that should be added to the context menu.
public class DynamicMenuContribution {
@AboutToShow
public void aboutToShow(
List<MMenuElement> items,
EModelService modelService,
MApplication app,
@Service NavigatorMenuContributionRegistry registry,
@Named(IServiceConstants.ACTIVE_SELECTION) IStructuredSelection selection) {
List<NavigatorMenuContributionWrapper> services =
registry.getServices(selection.getFirstElement().getClass());
services.forEach(s -> {
            MHandledMenuItem menuItem =
                modelService.createModelElement(MHandledMenuItem.class);
            menuItem.setLabel(s.getName());
            menuItem.setTooltip(s.getDescription());
            // bind the command contributed via the model fragment and add
            // "contribution.type" and "contribution.id" as command parameters
            items.add(menuItem);
        });
    }
}
The handler is triggered by selecting the generated menu item and therefore gets the provided command parameters. It is then using the ContextInjectionFactory
to execute the method annotated with @PostConstruct
in the service instance. The following code shows what this could look like.
public class FileNavigatorActionHandler {
@Execute
public void execute(
@Named("contribution.type") String type,
@Named("contribution.id") String id,
@Named(IServiceConstants.ACTIVE_SELECTION) IStructuredSelection selection,
@Service NavigatorMenuContributionRegistry registry,
IEclipseContext context) {
        NavigatorMenuContributionWrapper wrapper = registry.getService(type, id);
        if (wrapper != null) {
            // make the selected element injectable under the type the service
            // was registered for, then execute its @PostConstruct method
            IEclipseContext child = context.createChild();
            child.set(wrapper.getType(), selection.getFirstElement());
            ContextInjectionFactory.invoke(
                wrapper.getInstance(), PostConstruct.class, child);
        }
    }
}
Let’s verify that everything works as intended. For this simply right click on one of the projects and select Run As -> Eclipse Application.
This will start an Eclipse IDE that has the plug-ins from the workspace installed.
In the newly opened Eclipse instance create a new project. In that project create a directory and a file. If you right click on the created directory, you should not see any additional menu entry. But on performing a right click on the created file, you should find the menu entry Navigator Contributions, which is a sub-menu that contains the File Size entry. Selecting that should open a dialog that shows the size of the selected file. Hovering the File Size menu entry should also open the tooltip with the description that is provided via service property.
Note:
For this example use a simple text file. Creating for example a Java source file will not work, as a Java source file is a CompilationUnit
, which is not an IResource
.
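The reason is the type matching in the registry: getServices collects the class of the selected element and all interfaces it implements, so a contribution registered for IFile matches the concrete file implementation, while a CompilationUnit does not implement IResource at all. A JDK-only sketch of such an interface walk (the registry uses ClassUtils.getAllInterfaces from commons-lang instead; the helper names here are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.Set;

public class InterfaceWalkDemo {

    // collects the interfaces of the class, its superclasses,
    // and all super-interfaces, similar to ClassUtils.getAllInterfaces
    static Set<String> allInterfaceNames(Class<?> clazz) {
        Set<String> names = new LinkedHashSet<>();
        for (Class<?> c = clazz; c != null; c = c.getSuperclass()) {
            collect(c, names);
        }
        return names;
    }

    private static void collect(Class<?> type, Set<String> names) {
        for (Class<?> iface : type.getInterfaces()) {
            if (names.add(iface.getName())) {
                collect(iface, names);
            }
        }
    }

    public static void main(String[] args) {
        // an ArrayList is matched by services registered for java.util.List
        System.out.println(allInterfaceNames(ArrayList.class).contains("java.util.List")); // true
    }
}
```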
Now let’s extend the example and contribute some more features to verify if the Extended Contribution Pattern works.
Switch to the Dependencies tab and add the following packages to the Imported Packages:
- javax.annotation (needed for the @PostConstruct annotation)
- org.fipro.contribution.integration (1.0.0) (needed for the previously created marker interface)
- org.osgi.service.component.annotations [1.3.0,2.0.0) optional (needed for the OSGi DS annotations)
- org.eclipse.jface (needed for showing dialogs)
- org.eclipse.core.resources (needed for the Eclipse Core Resources API to access the Eclipse resources)
- org.eclipse.core.runtime (needed as transitive dependency for operating on the resources)

Then create a contribution for IFile handling, e.g. FileCopyContribution as shown below:
@Component(property = {
"name = File Copy",
"description = Create a copy of the selected file" })
public class FileCopyContribution implements NavigatorMenuContribution {
@PostConstruct
public void copyFile(IFile file, Shell shell) {
URI uri = file.getRawLocationURI();
Path path = Paths.get(uri);
Path toPath = Paths.get(
path.getParent().toString(),
"CopyOf_" + file.getName());
try {
Files.copy(path, toPath);
// refresh the navigator
file.getParent().refreshLocal(IResource.DEPTH_INFINITE, null);
} catch (IOException | CoreException e) {
MessageDialog.openError(
shell,
"Failed to copy the file",
"Exception occurred on copying the file: "
+ e.getLocalizedMessage());
}
}
}
And a contribution for IFolder, e.g. FolderContentContribution as shown below:
@Component(property = {
"name = Folder Content",
"description = Show the number of files in the selected folder" })
public class FolderContentContribution implements NavigatorMenuContribution {
@PostConstruct
public void showFolderContent(IFolder folder, Shell shell) {
URI uri = folder.getRawLocationURI();
Path path = Paths.get(uri);
try (Stream<Path> files = Files.list(path)) {
long count = files.count();
MessageDialog.openInformation(
shell,
"Folder Content",
String.format("The folder contains %d files", count));
} catch (IOException e) {
MessageDialog.openError(
shell,
"Failed to retrieve the folder content",
"Exception occurred on retrieving the folder content: "
+ e.getLocalizedMessage());
}
}
}
If you start the application again like before, you will see an additional menu entry in the context menu for a file, and there is now even a menu entry in the context menu of a folder.
A nice side effect is that the solution supports OSGi dynamics. That means a contribution can come and go at runtime without the need to restart the Eclipse IDE. To verify this, open the Host OSGi Console (open the Console view and switch to the Host OSGi Console via the view menu).
Enter the following command to find the id of the org.fipro.contribution.extended
bundle:
lb fipro
Then stop that bundle via
stop <id>
The console looks like this in my environment as an example:
osgi> lb fipro
START LEVEL 6
ID|State |Level|Name
649|Active | 4|Service (1.0.0.qualifier)
685|Active | 4|Integration Layer (1.0.0.qualifier)
689|Active | 4|Extended (1.0.0.qualifier)
osgi> stop 689
Now verify that the menu contributions for the folder and the File Copy menu entry are gone. If you start the bundle again via start <id>
the menu entries are available again.
Maybe it is only me that is so excited about that, but supporting OSGi Dynamics more and more in the Eclipse IDE itself feels good.
With the Extended Contribution Pattern it is possible to create a framework that eases the collaboration between heterogeneous organisations. While you only need a few people that manage the Integration Layer and therefore know about Eclipse Platform details, every developer in the collaboration is able to contribute a functionality. As you can see above, the implementation of a contribution service is simple in terms of integration. This is by the way similar to how popular web frameworks are designed.
As I said in the introduction, the APP4MC project uses the Extended Contribution Pattern in various places. We have implemented a Model Visualization that shows a visualization of a selected AMALTHEA Model element, e.g. via JavaFX, PlantUML or plain SWT. You can get some more details in the APP4MC Online Help.
APP4MC 2.0 will also include context sensitive actions on selected AMALTHEA Model elements. So it is possible to contribute processing actions for a selected model element or actions to create model elements in a selected model element container.
You can also see that the combination of OSGi Declarative Services and the Eclipse 4 programming model brings a lot of benefits. And there has been quite some progress over the last years to improve this. Actually the implementation and usage of OSGi services becomes really convenient with the Eclipse 4 programming model, as you can easily consume services via injection (note the @Service annotation). The only thing to remember is that the PROTOTYPE scope is not yet supported in the Eclipse injection, which means the services are single instances. This prevents you from keeping state in your services for the Extended Contribution Pattern.
Finally some words about Eclipse 3.x vs. Eclipse 4.x. As an Eclipse Platform committer I have been using the Eclipse 4 programming model for several years. Since 2015 I have published articles about the migration from Eclipse 3 to Eclipse 4 and talked about that topic at conferences. But still people rely on the Eclipse 3 programming model and ask questions about Eclipse 4 migrations. IMHO there are several reasons why Eclipse 3 is still active in so many places:
In this blog post you should have noticed that it is possible, and not even complicated, to extend an Eclipse 3.x based application like the Eclipse IDE with plain Eclipse 4.x mechanisms. If you look at techniques like Imperative Expressions and the contribution of model fragments via manifest header, your contributing bundle does not contain a single Eclipse 3.x mechanism like extension points and the corresponding plugin.xml file.
This also means, if you still ask yourself whether you should migrate from Eclipse 3.x to Eclipse 4.x, just give it a try. Start with a small part and test what you can do. A migration scenario is not a “big bang”; you can do it incrementally. And remember, you probably won’t be able to get rid of everything, e.g. file based editors linked to the navigators, but you can improve several spots in your project.
Here are some useful links to previous blog posts if you are not yet familiar with all the topics included here:
The sources of this blog post can be found here.
This topic is of course not new and there are already some explanations like this blog post or this topic on the equinox-dev mailing list. But as it still took me a while to get it working, I write this blog post to share my findings with others. And of course to persist my findings in my “external memory” if I need it in the future again. :-)
The first step is to add the necessary bundles to your target platform. You can either consume it from an Eclipse p2 Update Site or directly from a Maven repository using the m2e PDE Integration feature.
Note:
If you open the .target file with the Generic Text Editor, you can simply paste one of the below blocks and then resolve the target definition, instead of using the Target Editor.
Using an Eclipse p2 Update Site you can add the necessary dependencies by adding the following block to your target definition.
<location includeAllPlatforms="true" includeConfigurePhase="false" includeMode="slicer" includeSource="true" type="InstallableUnit">
<repository location="https://download.eclipse.org/releases/2020-12/"/>
<unit id="jakarta.xml.bind" version="2.3.3.v20201118-1818"/>
<unit id="com.sun.xml.bind" version="2.3.3.v20201118-1818"/>
<unit id="javax.activation" version="1.2.2.v20201119-1642"/>
<unit id="javax.xml" version="1.3.4.v201005080400"/>
</location>
Note:
The jakarta.xml.bind bundle from Orbit is a re-bundled version of the original bundle in Maven Central and unfortunately specifies a version constraint on some javax.xml
packages. As the Java runtime does not specify a version on the javax.xml
packages, the configuration will fail to resolve. To solve this you need to add the javax.xml
bundle to your target definition and the product configuration.
For consuming the libraries directly from a Maven repository you can add the following block if you have the m2e PDE Integration feature installed. This way you could even use newer versions that are not yet available via p2 update site.
<location includeDependencyScope="compile" includeSource="true" missingManifest="generate" type="Maven">
<groupId>com.sun.xml.bind</groupId>
<artifactId>jaxb-impl</artifactId>
<version>2.3.3</version>
<type>jar</type>
</location>
<location includeDependencyScope="compile" includeSource="true" missingManifest="generate" type="Maven">
<groupId>jakarta.xml.bind</groupId>
<artifactId>jakarta.xml.bind-api</artifactId>
<version>2.3.3</version>
<type>jar</type>
</location>
Note:
If you don’t have a JavaSE-1.8 mapped in your Eclipse IDE, or your bundle has JavaSE-11 or higher set as Execution Environment, you need to specify version constraints on the Import-Package statements to make PDE happy. Otherwise you will see some strange errors.
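For example, a constrained import could look like the following sketch (the concrete version range is an assumption and depends on the bundle versions you consume):

```
Import-Package: javax.xml.bind;version="[2.3.0,3.0.0)",
 javax.xml.bind.annotation;version="[2.3.0,3.0.0)"
```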
Note:
The Bundle-SymbolicName of the required bundles in Maven Central is different to the re-bundled versions in the Eclipse p2 Update Site. This needs to be kept in mind when including the bundles to the product. I will use the symbolic names of the bundles from Maven Central in the further sections.
Once the bundles are available in the target platform there are different ways to make JAXB work with Java 11 in your OSGi / Eclipse application.
This is the variant that is most often described.
- Add the package com.sun.xml.bind.v2 to the imported packages of the bundle that uses JAXB.
- Create the JAXBContext by using the classloader of the model object:
JAXBContext context =
JAXBContext.newInstance(
MyClass.class.getPackageName(),
MyClass.class.getClassLoader());
The context is created via the JAXBContext#newInstance(String, ClassLoader) method.
The following bundles need to be added to the product in order to make JAXB work with Java 11 in OSGi:
- jakarta.activation-api
- jakarta.xml.bind-api
- com.sun.xml.bind.jaxb-impl
The downside of this variant is obviously that you have to modify code and you have to add a dependency to a JAXB implementation in all places where JAXB is used. In case third-party-libraries are part of your product that you don’t have under your control, this solution is probably not suitable. And you can also not exchange the JAXB implementation easily with this approach.
In this variant you create a fragment named jaxb.impl.binding for the jakarta.xml.bind-api bundle that adds the package com.sun.xml.bind.v2 to the imported packages:
- Set jakarta.xml.bind-api as the Fragment-Host.
- Add com.sun.xml.bind.v2 to the Import-Package manifest header.

The resulting MANIFEST.MF should look similar to the following snippet:
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: JAXB Impl Binding
Bundle-SymbolicName: jaxb.impl.binding
Bundle-Version: 1.0.0.qualifier
Fragment-Host: jakarta.xml.bind-api;bundle-version="2.3.3"
Automatic-Module-Name: jaxb.impl.binding
Bundle-RequiredExecutionEnvironment: JavaSE-11
Import-Package: com.sun.xml.bind.v2
The following bundles need to be added to the product in order to make JAXB work with Java 11 in OSGi:
jakarta.activation-api
jakarta.xml.bind-api
com.sun.xml.bind.jaxb-impl
jaxb.impl.binding
This variant seems to me the most comfortable one. There are no modifications required in the existing bundles and the dependency to the JAXB implementation is encapsulated in a fragment, which makes it easy to exchange if needed.
Note:
There still might be issues at runtime when trying to execute JAXB code. In such cases try to change the Import-Package statement to either an import with a version or a DynamicImport-Package:
Import-Package: com.sun.xml.bind.v2;version="2.3.3"
or
DynamicImport-Package: com.sun.xml.bind.*
If even this does not solve the issue, try to start the application clean to ensure that no bundle caching issue exists!
With this variant you add the necessary bundles to the classloader the framework is started with. Using bndtools this can be done via the [-runpath](https://bnd.bndtools.org/instructions/runpath.html) instruction. The Equinox launcher does not know such an instruction, so for an Eclipse RCP application you need to create a system.bundle fragment. Such a fragment contains the necessary jar files and exports the packages of the wrapped jars:
jakarta.activation-api-1.2.2.jar
jakarta.xml.bind-api-2.3.3.jar
jaxb-impl-2.3.3.jar
- Use the Bundle-ClassPath manifest header to add the jars to the bundle classpath.
- Use the Fragment-Host manifest header so the fragment is added to the system.bundle.
- Export the packages of the wrapped jars via the Export-Package manifest header.
The resulting MANIFEST.MF should look similar to the following snippet:
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Extension
Bundle-SymbolicName: jaxb.extension
Bundle-Version: 1.0.0.qualifier
Fragment-Host: system.bundle; extension:=framework
Automatic-Module-Name: jaxb.extension
Bundle-RequiredExecutionEnvironment: JavaSE-11
Bundle-ClassPath: lib/jakarta.activation-api-1.2.2.jar,
lib/jakarta.xml.bind-api-2.3.3.jar,
lib/jaxb-impl-2.3.3.jar,
.
Export-Package: com.sun.istack,
com.sun.istack.localization,
com.sun.istack.logging,
com.sun.xml.bind,
com.sun.xml.bind.annotation,
com.sun.xml.bind.api,
com.sun.xml.bind.api.impl,
com.sun.xml.bind.marshaller,
com.sun.xml.bind.unmarshaller,
com.sun.xml.bind.util,
com.sun.xml.bind.v2,
com.sun.xml.bind.v2.bytecode,
com.sun.xml.bind.v2.model.annotation,
com.sun.xml.bind.v2.model.core,
com.sun.xml.bind.v2.model.impl,
com.sun.xml.bind.v2.model.nav,
com.sun.xml.bind.v2.model.runtime,
com.sun.xml.bind.v2.model.util,
com.sun.xml.bind.v2.runtime,
com.sun.xml.bind.v2.runtime.output,
com.sun.xml.bind.v2.runtime.property,
com.sun.xml.bind.v2.runtime.reflect,
com.sun.xml.bind.v2.runtime.reflect.opt,
com.sun.xml.bind.v2.runtime.unmarshaller,
com.sun.xml.bind.v2.schemagen,
com.sun.xml.bind.v2.schemagen.episode,
com.sun.xml.bind.v2.schemagen.xmlschema,
com.sun.xml.bind.v2.util,
com.sun.xml.txw2,
com.sun.xml.txw2.annotation,
com.sun.xml.txw2.output,
javax.activation,
javax.xml.bind,
javax.xml.bind.annotation,
javax.xml.bind.annotation.adapters,
javax.xml.bind.attachment,
javax.xml.bind.helpers,
javax.xml.bind.util
If you add this system.bundle
fragment to the product, JAXB works the same way it did with Java 8.
This variant has the downside that you have to manage the JAXB libraries that are wrapped by the system.bundle fragment yourself, instead of simply consuming them from a repository.
For me the creation of a jakarta.xml.bind-api
fragment as shown in Variant 2 seems to be the most comfortable variant. At least it worked in my scenarios, and also the build using Tycho 2.2 and the resulting Eclipse RCP product worked.
If you need to support Java 8 and Java 11 with your product at the same time, you should consider specifying the binding fragment as multi-release jar as explained in this blog post. Further information about multi-release jars can be found here:
If you see any issues with the jakarta.xml.bind-api
fragment approach that I have not identified yet, please let me know. Maybe I am missing something important that was not covered by my tests.
At the OSGi Summit 2022 I learned that there is another variant that is even more comfortable than using the jakarta.xml.bind-api
fragment as shown in Variant 2.
In this variant you simply add the bundle org.glassfish.hk2.osgi-resource-locator to your application. This way the ServiceLocator is activated, which publishes the Java services that are included in the com.sun.xml.bind.jaxb-impl bundle.
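The underlying mechanism is the plain java.util.ServiceLoader lookup: an implementation is discovered via a provider entry in META-INF/services on the classpath. In OSGi this scan does not work across bundle boundaries, which is exactly the gap the osgi-resource-locator bridges. A minimal plain-Java sketch of the lookup (Greeter is a made-up example interface, not part of JAXB):

```java
import java.util.ServiceLoader;

// Made-up service interface for illustration
interface Greeter {
    String greet();
}

class ServiceLoaderDemo {

    static int countProviders() {
        int count = 0;
        // ServiceLoader scans for META-INF/services entries named after the
        // fully qualified interface name, visible to the given classloader
        for (Greeter g : ServiceLoader.load(Greeter.class, ServiceLoaderDemo.class.getClassLoader())) {
            count++;
        }
        return count;
    }
}
```

Here no provider is registered, so the loader finds nothing. JAXB performs the same kind of lookup to find its implementation, and in OSGi the META-INF/services entry of the jaxb-impl bundle is not visible to the API bundle's classloader unless something like the osgi-resource-locator mediates.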
To use org.glassfish.hk2.osgi-resource-locator it first needs to be added to the target definition. This can be done either via the following Maven location:
<location includeDependencyDepth="none" includeSource="true" missingManifest="generate" type="Maven">
<dependencies>
<dependency>
<groupId>org.glassfish.hk2</groupId>
<artifactId>osgi-resource-locator</artifactId>
<version>1.0.3</version>
<type>jar</type>
</dependency>
</dependencies>
</location>
Or you consume it from the Eclipse Orbit with the following p2 location:
<location includeAllPlatforms="false" includeConfigurePhase="true" includeMode="planner" includeSource="true" type="InstallableUnit">
<repository location="https://download.eclipse.org/tools/orbit/downloads/drops/R20220830213456/repository"/>
<unit id="org.glassfish.hk2.osgi-resource-locator" version="1.0.3.v20200509-0149"/>
</location>
Once this is done you only need to ensure that the following bundles are part of your application:
jakarta.activation-api
jakarta.xml.bind-api
com.sun.xml.bind.jaxb-impl
org.glassfish.hk2.osgi-resource-locator
Note:
The usage of the org.glassfish.hk2.osgi-resource-locator ServiceLocator mechanism does not automatically work with every JAXB implementation. The Eclipse MOXy implementation for example did not contain the necessary service definition in the META-INF folder before 4.0.0. But this seems to be fixed in the newest 4.0.0 version of MOXy, so since that version the org.glassfish.hk2.osgi-resource-locator approach also works here.
For cases where org.glassfish.hk2.osgi-resource-locator does not resolve the classloading issues with JAXB implementations, the fragment approach described in Variant 2 still works.
The examples for verification of the GlassFish HK2 and the fragment approach are available at GitHub.
Thanks to Mark Hoffmann (Twitter: @him7791) for sharing his experience at the OSGi Summit 2022!
scr:list - lists all DS components
scr:info <id> - dumps detailed information for a selected DS component.
While the Gogo Shell is typically already part of an Eclipse application and can be activated by passing the -console parameter to the Program Arguments, the Webconsole is not available that easily. As Eclipse application projects are mostly still created using PDE, you have to use a target definition to configure the libraries to use for development and deployment. In the past a target platform could only consume p2 repositories. That was especially important for Tycho builds, as the Directory locations that are also supported in a target definition were not supported by Tycho. As the Felix Webconsole is not available via a p2 update site, the only way to include it in an Eclipse application was to include the necessary jars locally somehow.
Luckily there have been a lot of improvements in that area: since Tycho 2.0 other file-based locations are also supported, and with Tycho 2.2 even Maven dependencies can be included directly. At the time of writing this blog post, 2.2 is not yet released, but the support for Maven dependencies in a Target Definition is already available in m2e. With this enhancement the inclusion of the Felix Webconsole becomes a lot easier.
First you need to install the m2e PDE Integration into the Eclipse IDE.
After the installation it can be used in the PDE Target Editor.
IMHO the PDE Target Editor is the second worst editor in PDE, right after the Component Definition Editor. The latter luckily doesn't need to be used anymore, as PDE added support for the OSGi DS Component annotations. As a replacement for the Target Editor I used the Target Platform DSL. Unfortunately the DSL seems to be no longer actively maintained, and therefore the new Maven location support is missing. But I've found out that you can use the Generic Editor for the .target file and get similar features as with the DSL. For me the most important thing is to avoid the dialog for selecting artifacts from an update site, as this one really has its problems. The nice thing about the DSL is the code completion for unit id and version, which also works pretty well in the Generic Editor and could make the DSL obsolete.
So with the new Maven location support and the Generic Editor, I now suggest using the Target Editor for adding the Maven locations and switching to the Generic Editor for adding InstallableUnits from p2 repositories.
Open a Target Definition file with the Target Editor and add the following artifacts:
commons-fileupload (1.4)
commons-io (2.4)
org.apache.felix.http.jetty (4.1.4)
org.apache.felix.inventory (1.0.6)
org.apache.felix.http.servlet-api (1.1.2)
org.apache.felix.webconsole.plugins.ds (2.1.0)
org.apache.felix.webconsole.plugins.event (1.1.8)
org.apache.felix.webconsole (4.6.0)
Note:
If you include the dependencies with scope compile, you get the transitive dependencies added too. But the dependencies of org.apache.felix.webconsole are not configured well in the pom.xml:
- It references commons-fileupload in version 1.3.3, which does not satisfy the Import-Package statement in org.apache.felix.webconsole.
- It references commons-io in version 2.6, which does not satisfy the Import-Package statement in org.apache.felix.webconsole.
- org.apache.felix.inventory is missing.
Additionally, org.apache.felix.webconsole specifies the Require-Capability header osgi.contract=JavaServlet. While the javax.servlet-api bundle that is transitively included by Maven would satisfy the technical requirements (Import-Package), it is missing the capability header. To satisfy the capability you need to use org.apache.felix.http.servlet-api from Maven Central. Alternatively you can directly use the Eclipse Jetty bundles from an Eclipse Update Site and the javax.servlet bundle provided by Eclipse, as the Eclipse Jetty bundles do not specify the Require-Capability header.
To add the Maven locations you need to:
The m2e PDE Integration has a nice feature to insert the values: if you have the Maven dependency XML structure in the clipboard, the values in the dialog are inserted automatically. To make this easier to adopt, here are the dependencies. Note that every dependency needs to be added separately.
<dependency>
<groupId>commons-fileupload</groupId>
<artifactId>commons-fileupload</artifactId>
<version>1.4</version>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.4</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.inventory</artifactId>
<version>1.0.6</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.http.jetty</artifactId>
<version>4.1.4</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.http.servlet-api</artifactId>
<version>1.1.2</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.webconsole.plugins.ds</artifactId>
<version>2.1.0</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.webconsole.plugins.event</artifactId>
<version>1.1.8</version>
</dependency>
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.webconsole</artifactId>
<version>4.6.0</version>
<scope>provided</scope>
</dependency>
If you have a feature based product you can create a new feature that includes the necessary bundles. This feature should include the following bundles:
org.apache.felix.http.servlet-api
org.apache.commons.commons-fileupload
org.apache.commons.io (2.4.0)
org.apache.felix.http.jetty
org.apache.felix.inventory
org.apache.felix.webconsole
org.apache.felix.webconsole.plugins.ds
org.apache.felix.webconsole.plugins.event
If you have a product based on bundles, ensure that these bundles are part of the Contents. Note that org.apache.commons.io
needs to be included in version 2.4.0 to satisfy the dependencies of org.apache.felix.webconsole
.
As Equinox has the policy to NOT activate all bundles on startup, you need to configure that the necessary bundles are started automatically (Auto-Start = true):
org.apache.felix.scr
org.apache.felix.http.jetty
org.apache.felix.webconsole
org.apache.felix.webconsole.plugins.ds
org.apache.felix.webconsole.plugins.event
Now you can launch the Eclipse application from the Overview tab via Launch an Eclipse application. The Webconsole will be available via http://localhost:8080/system/console/. If you are asked for a login you can use the default admin/admin.
In the main bar of the Webconsole UI you can expand OSGi and find detailed information on Bundles, Configuration, Events, Components, Log Service and Services. In these sub-sections you can find details on the corresponding topics inside the current OSGi runtime. This way you can inspect and fix possible issues in a much more comfortable way.
Inspecting an OSGi runtime is much more comfortable using the Apache Felix Webconsole. With the new m2e PDE Integration, Maven artifacts can finally be added as part of the target platform. Using it, including the Apache Felix Webconsole is much easier than it was before. And I am sure there are a lot more use cases where this new feature makes the life of Eclipse developers easier. Thanks to Christoph Läubrich who added that feature lately.
Further information on the m2e PDE Integration can be found here:
That blog post was published before OSGi R7 was released, and at that time there was no simple alternative available. With R7 the JAX-RS Whiteboard Specification was added, which provides a way to achieve the same goal by using JAX-RS, which is way simpler than implementing Servlets. I gave a talk at the EclipseCon Europe 2018 with the title How to connect your OSGi application. In this talk I showed how to create a connection to your OSGi application using different specifications, namely
Unfortunately the recording of that talk failed, so I can only link to the slides and my GitHub repository that contains the code I used to show the different approaches in action.
In the Panorama project, in which I am currently involved, one of our goals is to provide cloud services for model processing and evaluation. As a first step we want to publish APP4MC services as cloud services (more information in the Eclipse Newsletter December 2020). There are services contained in APP4MC bundles that are free from dependencies to the Eclipse Runtime and do not require any Extension Points, and there are services in bundles that have dependencies to plug-ins that use Extension Points. But all the services we want to publish as cloud services are OSGi declarative services. While there are numerous ways and frameworks to create REST based web services (e.g. Spring Boot or Microprofile to just name two of them), I was searching for a way to do this in OSGi. Especially because I want to reduce the configuration and implementation efforts with regards to the runtime infrastructure for consuming the existing OSGi services of the project.
For the services that have dependencies to Extension Points and require a running Eclipse Runtime, I was forced to use the HTTP Service / HTTP Whiteboard approach. The main reason is that because of this dependency I needed to stick with a PDE project layout. Unfortunately there is no JAX-RS Whiteboard implementation available in Eclipse, and therefore none available via a p2 Update Site. Maybe it would be possible somehow, but actually the solution should be to get rid of Extension Points and the requirement for a running Eclipse runtime.
But this blog post is about JAX-RS Whiteboard and not about project layouts and Extension Points vs. Declarative Services. So I will focus on the services that have a clean dependency structure. The setup should be as comfortable as possible to be able to focus on the REST service implementation, and not struggle with the infrastructure too much.
To create the project structure we can follow the steps described in the enRoute Tutorial.
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=project \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = jaxrs
version = 1.0-SNAPSHOT
package = org.fipro.modifier.jaxrs
app-artifactId: app
app-target-java-version: 8
impl-artifactId: impl
Note:
IMHO app and impl are not good values for project names. Although they are sub projects inside a Maven project, imported to the IDE this leads to confusions if you have multiple such projects in one workspace. By entering ‘n’ the defaults are declined and you need to insert the values for all parameters again. Additionally you can specify the artifactId of the app and the impl project, and the target Java version you want to develop with.
If you forget to specify different values for app and impl at creation time and want to change it afterwards, you will have several things to consider. Even with the refactoring capabilities of the IDE, you need to ensure that you do not forget something, like the fact that the name of the .bndrun file needs to be reflected in the pom.xml file.
Now the projects can be imported to the IDE of your choice. As the projects are plain Maven based Java projects, you can use any IDE. But of course my choice is Eclipse with bndtools.
Once the import is done you should double check the dependencies of the created skeletons. Some of the dependencies and transitive dependencies in the generated pom.xml files are not up-to-date. For example Felix Jetty is included in version 4.0.6 (September 2018), while the most current version is 4.1.4 (November 2020). You can check this for example by opening the Repositories view in the Bndtools perspective and expanding the Maven Dependencies section. The libraries listed inside Maven Dependencies are added from the Maven configuration of the created project. To update the version of one of those libraries, you need to add the corresponding configuration to the dependencyManagement
section of the jaxrs/pom.xml, e.g.
<dependency>
<groupId>org.apache.felix</groupId>
<artifactId>org.apache.felix.http.jetty</artifactId>
<version>4.1.4</version>
</dependency>
You should also update the version of the bnd Maven plugins. The generated pom.xml files use version 4.1.0, which is pretty outdated. At the time of writing this blog post the most recent version is 5.2.0. To update, change the value of bnd.version in the properties section.
As the goal is to wrap an existing OSGi Declarative Service to make it accessible as a web service, we use the M.U.S.E (Most Useless Service Ever) introduced in my Getting Started with OSGi Declarative Services blog post. Unfortunately the combination of Bndtools workspace projects with Bndtools Maven projects does not work well, mainly because the Bndtools workspace projects are not automatically available as Maven modules. So we create the API and the service implementation projects also by using the OSGi enRoute archetypes.
Note:
If you have an OSGi service bundle already available via Maven, you can also use that one by adding the dependency to the pom.xml files and skip this section.
Change into the jaxrs directory and create an API module using the api archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=api \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = api
version = 1.0-SNAPSHOT
package = org.fipro.modifier.api
Then create the service implementation module using the ds-component archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
-DarchetypeGroupId=org.osgi.enroute.archetype \
-DarchetypeArtifactId=ds-component \
-DarchetypeVersion=7.0.0
groupId = org.fipro.modifier
artifactId = inverter
version = 1.0-SNAPSHOT
package = org.fipro.modifier.inverter
In the api project, create the StringModifier interface in the org.fipro.modifier.api package:
public interface StringModifier {
    String modify(String input);
}
Delete the ConsumerInterface and the ProviderInterface which were created by the archetype. Ensure that you do NOT delete the package-info.java file in the org.fipro.modifier.api package. It configures that the package is exported. If this file is missing, the package is a Private-Package and therefore not usable by other OSGi bundles.
The package-info.java file and its content are part of the Bundle Annotations introduced with R7. Here are some links if you are interested in more detailed information:
Add the api module to the dependencies section of the inverter pom.xml:
<dependency>
<groupId>org.fipro.modifier</groupId>
<artifactId>api</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
In the inverter project, create the StringInverter service in the org.fipro.modifier.inverter package:
@Component
public class StringInverter implements StringModifier {
@Override
public String modify(String input) {
return new StringBuilder(input).reverse().toString();
}
}
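Since the service logic itself has no OSGi dependency, it can also be exercised as plain Java without a running framework, which is handy for a quick unit test. A sketch without the DS annotation:

```java
// Plain-Java version of the service contract and implementation shown above,
// without the OSGi @Component annotation
interface StringModifier {
    String modify(String input);
}

class StringInverter implements StringModifier {
    @Override
    public String modify(String input) {
        // reverse the input string
        return new StringBuilder(input).reverse().toString();
    }
}
```

For example, new StringInverter().modify("OSGi") yields "iGSO".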
Delete the ComponentImpl class that was created by the archetype.
After the projects are imported to the IDE and the OSGi service to consume is available, we can start implementing the REST based service.
Add the api module to the dependencies section of the impl pom.xml:
<dependency>
<groupId>org.fipro.modifier</groupId>
<artifactId>api</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
In the impl project, create the class InverterRestService in the org.fipro.modifier.jaxrs package:
- Add the @Component annotation to the class definition and use the service parameter to register it as a service, not an immediate component.
- Add the @JaxrsResource annotation to the class definition to mark it as a JAX-RS whiteboard resource. This will add the service property osgi.jaxrs.resource=true, which means this service must be processed by the JAX-RS whiteboard.
- Get a StringModifier injected using the @Reference annotation.
- Implement a JAX-RS resource method that uses the StringModifier.
@Component(service=InverterRestService.class)
@JaxrsResource
public class InverterRestService {
@Reference
StringModifier modifier;
@GET
@Path("modify/{input}")
public String modify(@PathParam("input") String input) {
return modifier.modify(input);
}
}
When you read the specification, you will see that the example service is using the PROTOTYPE scope. The example services in the OSGi enRoute tutorials do not use the PROTOTYPE scope. So I was wondering when to use the PROTOTYPE scope for JAX-RS Whiteboard services. I checked the specification and asked on the OSGi mailing list. Thanks to Raymond Augé who helped me understand it better. In short: if your component implementation is stateless and you get all necessary information injected into the JAX-RS resource methods, you can avoid the PROTOTYPE scope. If you have a stateful implementation, that for example gets JAX-RS context objects for a request or session injected into a field, you have to use the PROTOTYPE scope to ensure that this information is only used by that single request. The example service in the specification therefore does not need to specify the PROTOTYPE scope, as it is a very simple example. But it is also not wrong to use the PROTOTYPE scope even for simpler services. This aligns the OSGi service design (where typically every component instance is a singleton) with the JAX-RS design, as JAX-RS natively expects to re-create resources on every request.
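The difference can be illustrated with plain Java: a component that stores request data in a field is only safe if every request gets its own instance, which is what the PROTOTYPE scope guarantees. A simplified sketch (StatefulResource and Scopes are made-up illustration classes, not OSGi API):

```java
import java.util.function.Supplier;

// Made-up resource that keeps per-request state in a field
class StatefulResource {
    private String input; // request state; unsafe if shared between requests

    String handle(String in) {
        this.input = in;
        return new StringBuilder(this.input).reverse().toString();
    }
}

class Scopes {
    // singleton scope: every lookup returns the same instance
    static Supplier<StatefulResource> singleton() {
        StatefulResource instance = new StatefulResource();
        return () -> instance;
    }

    // prototype scope: every lookup returns a fresh instance
    static Supplier<StatefulResource> prototype() {
        return StatefulResource::new;
    }
}
```

With Scopes.singleton() two concurrent requests would share the same input field; with Scopes.prototype() each request works on its own instance, which is the behavior stateful JAX-RS resources need.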
In the application project we need to ensure that our service is available. In case the StringInverter
from above was implemented, the inverter module needs to be added to the dependencies
section of the app/pom.xml file. If you want to use another service that can be consumed via Maven, you of course need to add that dependency.
Add the inverter module to the dependencies section of the app/pom.xml:
<dependency>
<groupId>org.fipro.modifier</groupId>
<artifactId>inverter</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
Add org.fipro.modifier.inverter to the Run Requirements.
As returning a plain String is quite uncommon for a web service, we now extend our setup to return the result as JSON. We will use Jackson for this, so we need to add it to the dependencies of the impl module. The simplest way is to use org.apache.aries.jax.rs.jackson.
Add org.apache.aries.jax.rs.jackson in the dependencies section:
<dependency>
<groupId>org.apache.aries.jax.rs</groupId>
<artifactId>org.apache.aries.jax.rs.jackson</artifactId>
<version>1.0.2</version>
</dependency>
Alternatively you can implement your own converter and register it as a JAX-RS Whiteboard Extension.
For this, add jackson-databind to the dependencies section:
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.12.0</version>
</dependency>
Then create the JacksonJsonConverter:
- Add the @Component annotation to the class definition and specify the PROTOTYPE scope parameter to ensure that multiple instances can be requested.
- Add the @JaxrsExtension annotation to the class definition to mark the service as a JAX-RS extension type that should be processed by the JAX-RS whiteboard.
- Add the @JaxrsMediaType(APPLICATION_JSON) annotation to the class definition to mark the component as providing a serializer capable of supporting the named media type, in this case the standard media type for JSON.
@Component(scope = PROTOTYPE)
@JaxrsExtension
@JaxrsMediaType(APPLICATION_JSON)
public class JacksonJsonConverter<T> implements MessageBodyReader<T>, MessageBodyWriter<T> {
@Reference(service=LoggerFactory.class)
private Logger logger;
private final Converter converter = Converters.newConverterBuilder()
.rule(String.class, this::toJson)
.rule(this::toObject)
.build();
private ObjectMapper mapper = new ObjectMapper();
private String toJson(Object value, Type targetType) {
try {
return mapper.writeValueAsString(value);
} catch (JsonProcessingException e) {
logger.error("error on JSON creation", e);
return e.getLocalizedMessage();
}
}
private Object toObject(Object o, Type t) {
try {
if (List.class.getName().equals(t.getTypeName())) {
return this.mapper.readValue((String) o, List.class);
}
return this.mapper.readValue((String) o, String.class);
} catch (IOException e) {
logger.error("error on JSON parsing", e);
}
return CANNOT_HANDLE;
}
@Override
public boolean isWriteable(
Class<?> c, Type t, Annotation[] a, MediaType mediaType) {
return APPLICATION_JSON_TYPE.isCompatible(mediaType)
|| mediaType.getSubtype().endsWith("+json");
}
@Override
public boolean isReadable(
Class<?> c, Type t, Annotation[] a, MediaType mediaType) {
return APPLICATION_JSON_TYPE.isCompatible(mediaType)
|| mediaType.getSubtype().endsWith("+json");
}
@Override
public void writeTo(
T o, Class<?> arg1, Type arg2, Annotation[] arg3, MediaType arg4,
MultivaluedMap<String, java.lang.Object> arg5, OutputStream out)
throws IOException, WebApplicationException {
String json = converter.convert(o).to(String.class);
out.write(json.getBytes());
}
@SuppressWarnings("unchecked")
@Override
public T readFrom(
Class<T> arg0, Type arg1, Annotation[] arg2, MediaType arg3,
MultivaluedMap<String, String> arg4, InputStream in)
throws IOException, WebApplicationException {
BufferedReader reader =
new BufferedReader(new InputStreamReader(in));
return (T) converter.convert(reader.readLine()).to(arg1);
}
}
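One subtlety in the converter above: readFrom only consumes the first line of the request body via reader.readLine(), so a JSON payload containing line breaks would be truncated. For such payloads, reading the complete stream is safer. A plain-JDK helper sketch (StreamUtil is a hypothetical name, not part of the tutorial code):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

class StreamUtil {

    // reads the complete stream content into a single String,
    // preserving any line breaks in the payload
    static String readFully(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        BufferedReader reader =
            new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
        int c;
        while ((c = reader.read()) != -1) {
            sb.append((char) c);
        }
        return sb.toString();
    }
}
```

In readFrom the single readLine() call could then be replaced by something like converter.convert(StreamUtil.readFully(in)).to(arg1).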
Now update the InverterRestService:
- Add the @Produces(MediaType.APPLICATION_JSON) annotation to the class definition to specify that JSON responses are created.
- Add the @JSONRequired annotation to the class definition to mark this class as requiring JSON media type support.
- Get all available StringModifier services injected and return a List of Strings as a result of the REST resource.
@Component(service=InverterRestService.class)
@JaxrsResource
@Produces(MediaType.APPLICATION_JSON)
@JSONRequired
public class InverterRestService {
@Reference
private volatile List<StringModifier> modifier;
@GET
@Path("modify/{input}")
public List<String> modify(@PathParam("input") String input) {
return modifier.stream()
.map(mod -> mod.modify(input))
.collect(Collectors.toList());
}
}
To see the effect of multiple injected services, create an additional StringModifier in the inverter module:
@Component
public class Upper implements StringModifier {
@Override
public String modify(String input) {
return input.toUpperCase();
}
}
To include org.apache.aries.jax.rs.jackson, add it to the Run Requirements.
In the Panorama project the REST based cloud services are designed as file processing services. So you upload a file, process it and download the result. This way you can for example migrate Amalthea Model files to a newer version, perform a static analysis of an Amalthea Model, and even transform an Amalthea Model to some executable format and execute the result for simulation scenarios.
When searching for file uploads with REST and Java, you only find information on how to do this with either Jersey or Apache CXF. But even though the Aries JAX-RS Whiteboard reference implementation is based on Apache CXF, none of the tutorials worked for me. The reason is that the Aries JAX-RS Whiteboard completely hides the underlying Apache CXF implementation. Thanks to Tim Ward who helped me on the OSGi mailing list, I was able to solve this. Therefore I want to share the solution here.
Multipart file upload requires support from the underlying servlet container. Using the OSGi enRoute Maven archetypes Apache Felix HTTP Jetty is included as implementation of the R7 OSGi HTTP Service and the R7 OSGi HTTP Whiteboard Specification. So a Jetty is included in the setup and multipart file uploads are supported.
According to the HTTP Whiteboard Specification, Multipart File Uploads need to be enabled via the corresponding component properties. This can be done for example by creating a custom JAX-RS Whiteboard Application and adding the @HttpWhiteboardServletMultipart Component Property Type annotation with the corresponding attributes.
Note: In this tutorial I will not use this approach, but for completeness I want to share how the creation and usage of a JAX-RS Whiteboard application can be done.
@Component(service=Application.class)
@JaxrsApplicationBase("app4mc")
@JaxrsName("app4mcMigration")
@HttpWhiteboardServletMultipart(enabled = true)
public class MigrationApplication extends Application {}
In this case the JAX-RS Whiteboard resource needs to be registered on the created application by using the @JaxrsApplicationSelect
Component Property Type annotation.
@Component(service=Migration.class)
@JaxrsResource
@JaxrsApplicationSelect("(osgi.jaxrs.name=app4mcMigration)")
public class Migration {
...
}
Creating custom JAX-RS Whiteboard Applications makes sense if you want to publish multiple applications in one installation/server. In a scenario where only one application is published in isolation, e.g. one REST based service in one container (e.g. Docker), the creation of a custom application is not necessary. Instead it is sufficient to configure the default application provided by the Aries JAX-RS Whiteboard implementation using the Configuration Admin. The PID and the available configuration properties are listed here.
Configuring an OSGi service programmatically via Configuration Admin is not very intuitive. While it is quite powerful to change configurations at runtime, it feels uncomfortable to provide a configuration to a component from the outside. Luckily with R7 the Configurator Specification was introduced to deal with this. Using the Configurator, the component configuration can be provided using a resource in JSON format.
- Create a package config in src/main/java.
- Create a package-info.java in that package and add the @RequireConfigurator annotation:
@RequireConfigurator
package config;

import org.osgi.service.configurator.annotations.RequireConfigurator;
- Create the configuration resource in src/main/resources/OSGI-INF/configurator, where
org.apache.aries.jax.rs.whiteboard.default is the PID of the default application and
osgi.http.whiteboard.servlet.multipart.enabled is the component property for enabling multipart file uploads:
{
":configurator:resource-version" : 1,
":configurator:symbolic-name" : "org.fipro.modifier.app.config",
":configurator:version" : "1.0-SNAPSHOT",
"org.apache.aries.jax.rs.whiteboard.default" : {
"osgi.http.whiteboard.servlet.multipart.enabled" : "true"
}
}
Add org.fipro.modifier.app to the Run Requirements.
Note: While writing this blog post and testing the tutorial I noticed that on Resolve the inverter module was sometimes not resolved, for whatever reason. To ensure that the application is started with all necessary bundles, add impl, app and inverter to the Run Requirements. Double check after Resolve that the following bundles are part of the Run Bundles:
org.fipro.modifier.api
org.fipro.modifier.app
org.fipro.modifier.impl
org.fipro.modifier.inverter
As the JAX-RS standards do not contain multipart support, we need to fall back to Servlet implementations. Fortunately we can get JAX-RS resources injected as method parameters or fields, for example by using the @Context JAX-RS annotation. For the multipart support we can get the HttpServletRequest injected and extract the information from there.
Add the following method to the InverterRestService:
@POST
@Path("modify/upload")
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Produces(MediaType.TEXT_PLAIN)
public Response upload(@Context HttpServletRequest request)
throws IOException, ServletException {
// get the part with name "file" received within
// a multipart/form-data POST request
Part part = request.getPart("file");
if (part != null
&& part.getSubmittedFileName() != null
&& part.getSubmittedFileName().length() > 0) {
StringBuilder inputBuilder = new StringBuilder();
try (InputStream is = part.getInputStream();
BufferedReader br =
new BufferedReader(new InputStreamReader(is))) {
String line;
while ((line = br.readLine()) != null) {
inputBuilder.append(line).append("\n");
}
}
// modify file content
String input = inputBuilder.toString();
List<String> modified = modifier.stream()
.map(mod -> mod.modify(input))
.collect(Collectors.toList());
return Response.ok(String.join("\n\n", modified)).build();
}
return Response.status(Status.PRECONDITION_FAILED).build();
}
@Consumes(MediaType.MULTIPART_FORM_DATA)
Specifies that this REST resource consumes multipart/form-data.
@Produces(MediaType.TEXT_PLAIN)
Specifies that the result is plain text, which is for this use case the easiest way of returning the modified file content.
@Context HttpServletRequest request
The HttpServletRequest is injected as a method parameter.
Part part = request.getPart("file")
Extracts the Part with the name file (which is actually the form parameter name) from the HttpServletRequest.
If you are using a tool like Postman, you can test whether the multipart upload is working by starting the app via app.bndrun and executing a POST request on http://localhost:8080/modify/upload
To also be able to test the upload without additional tools, we publish a simple form as a static resource in our application. We use the HTTP Whiteboard Specification to register an HTML form as a static resource with our REST service. For this, add the @HttpWhiteboardResource component property type annotation to the InverterRestService.
@HttpWhiteboardResource(pattern = "/files/*", prefix = "static")
With this configuration all requests to URLs with the /files
path are mapped to resources in the static
folder. The next step is therefore to add the static form to the project:
<html>
<body>
<h1>File Upload with JAX-RS</h1>
<form
action="http://localhost:8080/modify/upload"
method="post"
enctype="multipart/form-data">
<p>
Select a file : <input type="file" name="file" size="45"/>
</p>
<input type="submit" value="Upload It"/>
</form>
</body>
</html>
After starting the app via app.bndrun you can open a browser and navigate to http://localhost:8080/files/upload.html. Now you can select a file (don't use a binary file) and upload it to see the modification result of the REST service.
To debug your REST based service you can start the application by using Debug OSGi instead of Run OSGi in the app.bndrun. But in the OSGi context you often face issues even before you can debug code. For this the app archetype creates an additional debug run configuration. The debug.bndrun file is located next to the app.bndrun file in the app module.
With the debug run configuration the following additional features are enabled to inspect the runtime:
This allows you to interact with the Gogo Shell in the Console View, and even more comfortably via the Webconsole. For the latter, open a browser and navigate to http://localhost:8080/system/console. Login with the default username/password admin/admin. Using the Webconsole you can check which bundles are installed and in which state they are. You can also inspect the available OSGi DS Components and check the active configurations.
As the project setup is a plain Java/Maven project, the build is pretty easy:
From the IDE: run a Maven build with clean verify in the Goals field.
From the command line:
mvn clean verify
Note:
It can happen that an error occurs on building the app module if you followed the steps in this tutorial exactly. The reason is that the build detects a change in the Run Bundles of the app.bndrun file, but it is just a difference in the ordering of the bundles. To solve this, open the app.bndrun file, remove all entries from the Run Bundles and hit Resolve again. After that the order of the Run Bundles will be the same as the one in the build.
Note:
This build process works because we used the Eclipse IDE with Bndtools. If you are using another IDE or working only on the command line, have a look at the OSGi enRoute Microservices Tutorial that explains the separate steps for building from command line.
After the build succeeds you will find the resulting app.jar
in jaxrs/app/target. Execute the following line to start the self-executable jar from the command line if you are located in the jaxrs folder:
java -jar app/target/app.jar
If you also want to build the debug configuration, you need to enable this in the pom.xml file of the app module:
In the build/plugins section, update the bnd-export-maven-plugin and add the debug.bndrun to the bndruns:
<plugin>
<groupId>biz.aQute.bnd</groupId>
<artifactId>bnd-export-maven-plugin</artifactId>
<configuration>
<bndruns>
<bndrun>app.bndrun</bndrun>
<bndrun>debug.bndrun</bndrun>
</bndruns>
</configuration>
</plugin>
Executing the build again, you will now also find a debug.jar in the target folder of the app module, which you can use to inspect the OSGi runtime.
While setting up this tutorial I faced several issues that mainly came from missing information or misunderstandings. Luckily the OSGi community was really helpful in solving this. So my contribution back is to write this blog post to help others that struggle with similar issues. The key takeaways are:
Note:
The Maven project structure also causes quite some headaches if you want to wrap OSGi services from Eclipse projects like APP4MC. Usually Eclipse projects publish their results as p2 update sites and not via Maven, and for Maven projects it is not possible to consume p2 update sites. Luckily more and more projects publish their results on Maven Central, and the APP4MC project plans to also do this. We are currently cleaning up the dependencies to make it possible to at least consume the model implementation easily from any Java based project. As long as dependencies are not available via Maven Central, the only way to solve the build is to install the artifacts in the local repository. This can either be done by building and installing the resulting artifacts locally via mvn clean install
. Alternatively you can use the maven-install-plugin, which can even be integrated into your Maven build if you add the artifact to install to the source code repository. Thanks to Neil Bartlett who gave me the necessary pointer on this topic.
The sources of this tutorial are available on GitHub.
For an extended example have a look at the APP4MC Cloud Services.
Now I have a blog post about HTTP Service / HTTP Whiteboard and JAX-RS Whiteboard. The still missing blog post about Remote Services is not forgotten, but obviously I need more time to write about it, as it is the most complicated specification in OSGi. So stay tuned for that one. :)
A more detailed inspection reveals that the high memory consumption is not because of the data in memory itself. There are a lot of primitive wrapper objects and internal objects in the map implementation that consume a big portion of the memory, as you can see in the following image.
Note:
Primitive wrapper objects have a higher memory consumption than primitive values themselves. As there are already good articles about that topic available I will not repeat them here. If you are interested in more details on the topic Primitives vs. Objects, have a look at Baeldung for example.
So I started to check the NatTable implementation in search of the memory issue, and I found some causes. In several places there are internal caches for the index-position mapping to improve the rendering performance. Also the row heights and column widths are stored internally in a collection if a user resized them. Additionally, some scaling operations were incorrectly using Double objects instead of primitive values to avoid rounding issues on scaling.
From my experience in an Android project I remembered an article that described a similar issue. In short: Java has no collections for primitive types, therefore primitive values need to be stored via wrapper objects. In Android the SparseArray was introduced to deal with this issue. So I searched for primitive collections in Java and found Eclipse Collections. To be honest, I had heard about Eclipse Collections before, but I always thought the standard Java Collections are already good enough, so why check some third-party collections? Small spoiler: I was wrong!
Looking at the website of Eclipse Collections, they state that they have a better performance and better memory consumption than the standard Java Collections. But a good developer and architect does not simply trust statements like "take my library and all your problems are solved". So I started my evaluation of Eclipse Collections to see if the memory and performance issues in NatTable can be solved by using them. Additionally, I looked at the Primitive Type Streams introduced with Java 8 to see if some issues can even be solved with that API.
Right at the beginning of my evaluation I noticed the first issue: which way should be used to create a huge collection of test data to process? I read about some discussions on the good old for-loop vs. IntStream. So I started with some basic performance measurements to compare those two. The goal was to create test data with values from 0 to 1.000.000 where every 100.000th entry is missing.
The for-loop for creating an int[]
with the described values looks like this:
int[] values = new int[999_991];
int index = 0;
for (int i = 0; i < 1_000_000; i++) {
if (i == 0 || i % 100_000 != 0) {
values[index] = i;
index++;
}
}
Using the IntStream
API it looks like this:
int[] values = IntStream.range(0, 1_000_000)
.filter(i -> i == 0 || i % 100_000 != 0)
.toArray();
Additionally I wanted to compare the performance for creating an ArrayList<Integer>
via for-loop and IntStream
.
ArrayList<Integer> values = new ArrayList<>(999_991);
for (int i = 0; i < 1_000_000; i++) {
if (i == 0 || i % 100_000 != 0) {
values.add(i);
}
}
List<Integer> values = IntStream.range(0, 1_000_000)
.filter(i -> (i == 0 || i % 100_000 != 0))
.boxed()
.collect(Collectors.toList());
The result is interesting, although not surprising. Using the for-loop for creating an int[]
is the clear winner.
The usage of the IntStream
is not bad but definitely worse than the for-loop.
So for recurring tasks and huge ranges a refactoring from for-loop to IntStream
is not a good idea.
The creation of collections with wrapper objects is of course even worse, as wrapper objects need to be created via boxing.
collecting int[] via for-loop 1 ms
collecting int[] via IntStream 4 ms
collecting List<Integer> via for-loop 7 ms
collecting List<Integer> via IntStream 13 ms
I also tested the usage of HashSet and TreeSet for the wrapper objects, as in NatTable I typically need distinct values, often sorted for further processing. HashSet as well as TreeSet have a worse performance in the creation scenario, but TreeSet is the clear loser here.
collecting HashSet<Integer> via for-loop 16 ms
collecting TreeSet<Integer> via for-loop 189 ms
collecting Set<Integer> via IntStream 26 ms
Note:
Running the tests in a single execution, the numbers are worse, which is caused by the VM ramp-up and class loading. Executing the tests 10 times, the averages come closer to the numbers above, but are still worse because the first execution is that much slower. Increasing the number of executions to 1.000 hardly changes the averages, and sometimes they even get drastically better because of the VM optimizations for code that gets executed often. The numbers presented here are therefore the average out of 100 executions.
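The measurement approach can be sketched like this (a simplified, self-contained harness of my own; the actual test classes linked at the end of the post are more elaborate):

```java
import java.util.function.Supplier;

public class Main {

    // Run the given action several times and return the average
    // execution time in milliseconds. The first runs are dominated
    // by VM ramp-up, which is why averaging over many runs matters.
    public static long averageMillis(int executions, Supplier<?> action) {
        long total = 0;
        for (int i = 0; i < executions; i++) {
            long start = System.nanoTime();
            action.get();
            total += System.nanoTime() - start;
        }
        return (total / executions) / 1_000_000;
    }

    public static void main(String[] args) {
        long avg = averageMillis(100, () -> {
            int[] values = new int[999_991];
            int index = 0;
            for (int i = 0; i < 1_000_000; i++) {
                if (i == 0 || i % 100_000 != 0) {
                    values[index++] = i;
                }
            }
            return values;
        });
        System.out.println("collecting int[] via for-loop " + avg + " ms");
    }
}
```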
After evaluating the performance of standard Java API for creating test data, I looked at the Eclipse Collections - Primitive Collections. I compared MutableIntList
with MutableIntSet
and used the different factory methods for creating the test data:
MutableIntList
MutableIntList values = IntLists.mutable.withInitialCapacity(999_991);
for (int i = 0; i < 1_000_000; i++) {
if (i == 0 || i % 100_000 != 0) {
values.add(i);
}
}
Note:
The method withInitialCapacity(int) was introduced with Eclipse Collections 10.3. In previous versions it is not possible to specify an initial capacity using the primitive type factories; you can only create an empty MutableIntList or MutableIntSet using empty(). Without specifying the initial capacity, the iteration approach takes 3 ms for the MutableIntList and 32 ms for the MutableIntSet.
of(int...)
/ with(int...)
MutableIntList values = IntLists.mutable.of(inputArray);
ofAll(Iterable<Integer>)
/ withAll(Iterable<Integer>)
MutableIntList values = IntLists.mutable.ofAll(inputCollection);
ofAll(IntStream)
/ withAll(IntStream)
MutableIntList values = IntLists.mutable.ofAll(
IntStream
.range(0, 1_000_000)
.filter(i -> (i == 0 || i % 100_000 != 0)));
To create MutableIntSet
use the IntSets
utility class:
MutableIntSet values = IntSets.mutable.xxx
Note:
For the factory methods of course the generation of the input also needs to be taken into account. So for creating data from scratch the time for creating the array or the collection needs to be added on top.
The result shows that at creation time the MutableIntList
is much faster than the MutableIntSet
. And the usage of the factory method with an int[]
parameter is faster than using an Integer collection or IntStream
or the direct operation on the MutableIntList
. The reason for this is probably that with an int[] parameter the MutableIntList instance is actually a wrapper around the int[]. In this case you also need to be careful, as modifications done via the primitive collection are directly reflected outside of the collection.
creating MutableIntList via iteration 1 ms
creating MutableIntList of int[] 0 ms
creating MutableIntList via Integer collection 4 ms
creating MutableIntList via IntStream 6 ms
creating MutableIntSet via iteration 21 ms
creating MutableIntSet of int[] 32 ms
creating MutableIntSet of Integer collection 39 ms
creating MutableIntSet via IntStream 38 ms
In several use cases the usage of a Set
would be nicer to directly avoid duplicates in the collection. In NatTable a sorted order is also needed often, but there is no TreeSet
equivalent in the primitive collections. But the MutableIntList
comes with some nice API to deal with this. Via distinct()
we get a new MutableIntList
that only contains distinct values, via sortThis()
the MutableIntList
is directly sorted.
The following call returns a new MutableIntList
with distinct values in a sorted order, similar to a TreeSet
.
MutableIntList uniqueSorted = values.distinct().sortThis();
When changing this in the test, the time for creating a MutableIntList
with distinct values in a sorted order increases to about 27 ms. Still less than creating a MutableIntSet
. But as our input array is already sorted and only contains distinct values, this measurement is probably not really meaningful.
The key takeaways in this part are:
A classical for-loop for creating test data performs better than IntStream.range().
MutableIntList has a better performance at creation time compared to MutableIntSet. This is the same with the default Java List and Set implementations.
MutableIntList has some nice API for modifications compared to handling a primitive array, which makes it more comfortable to use.
As already mentioned, Eclipse Collections comes with a nice and comfortable API similar to the Java Stream API. But here I don't want to go into more detail on that API. Instead I want to see how Eclipse Collections performs when used via the standard Java Collections API and compare it with the performance of the Java Collections. By doing this I want to ensure that by using Eclipse Collections the performance gets better, or at least does not become worse than with the default Java collections.
The first use case is the check if a value is contained in a collection. This is done by the contains()
method.
boolean found = valuesCollection.contains(search);
For the array we compare the old-school for-loop
boolean found = false;
for (int i : valuesArray) {
if (i == search) {
found = true;
break;
}
}
with the primitive streams approach
boolean found = Arrays.stream(valuesArray).anyMatch(x -> x == search);
Additionally I added a test for using Arrays.binarySearch()
. But the result is not 100% comparable, as binarySearch()
requires the array to be sorted in advance. Since our array already contains the test data in sorted order, this test works.
boolean found = Arrays.binarySearch(valuesArray, search) >= 0;
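The sorted-input precondition is easy to overlook: on an unsorted array, Arrays.binarySearch() may report a present value as missing. A small illustration:

```java
import java.util.Arrays;

public class Main {

    public static void main(String[] args) {
        int[] unsorted = { 5, 1, 4, 2, 3 };

        // On unsorted input the result is undefined; here the
        // halving probes land on the wrong elements and the
        // present value 1 is not found.
        System.out.println(Arrays.binarySearch(unsorted, 1) >= 0); // false

        // After sorting, the contract holds and the value is found.
        int[] sorted = unsorted.clone();
        Arrays.sort(sorted);
        System.out.println(Arrays.binarySearch(sorted, 1) >= 0); // true
    }
}
```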
We use the collections/arrays that we created before and first check for the value 450.000 which exists in the middle of the collection. Below you find the execution times of the different approaches.
contains in List 1 ms
contains in Set 0 ms
contains in int[] stream 2 ms
contains in int[] iteration 1 ms
contains in int[] binary search 0 ms
contains in MutableIntList 0 ms
contains in MutableIntSet 0 ms
Then we execute the same setup and check for the value 2.000.000 which does not exist in the collection. This way the whole collection/array needs to be processed, while in the above case the search stops once the value is found.
contains in List 2 ms
contains in Set 0 ms
contains in int[] stream 2 ms
contains in int[] iteration 1 ms
contains in int[] binary search 0 ms
contains in MutableIntList 0 ms
contains in MutableIntSet 0 ms
What we can see here is that the Java Primitive Streams have the worst performance for the contains()
case and the Eclipse Collections perform best. But actually there is not much difference in the performance.
For people with a good knowledge of the Java Collections API the specific measurement of indexOf()
might look strange. This is because for example the ArrayList
internally uses indexOf()
in the contains()
implementation. And we have tested that before. But the Eclipse Primitive Collections are not using indexOf()
in contains()
. They operate on the internal array. Also indexOf()
is implemented differently without the use of the equals()
method. So a dedicated verification is useful. Below are the results for testing an existing value and a not existing value.
Check indexOf() 450_000
indexOf in collection 0 ms
indexOf in int[] iteration 0 ms
indexOf in MutableIntList 0 ms
Check indexOf() 2_000_000
indexOf in collection 1 ms
indexOf in int[] iteration 0 ms
indexOf in MutableIntList 0 ms
The results are actually not surprising. Also in this case there is not much difference in the performance.
Note:
There is no indexOf()
for Sets
and of course we can also not get an index when using Java Primitive Streams. So this test only compares ArrayList
, iteration on an int[]
and the MutableIntList
. I also skipped testing binarySearch()
here, as the results would be equal to the contains()
test with the same restrictions.
Removing multiple items from a List is a big performance issue. Before my investigation I was not aware of how serious this issue is. What I already knew from past optimizations is that removeAll() on an ArrayList is much worse than manually iterating over the items to remove and removing each item individually.
For the test I am creating the base collection with 1.000.000 entries and a collection with the values from 200.000 to 299.999 that should be removed. First I execute the iteration to remove every item individually:
for (Integer r : toRemoveList) {
valueCollection.remove(r);
}
then I execute the test with removeAll()
valueCollection.removeAll(toRemoveList);
The tests are executed on an ArrayList
, a HashSet
, a MutableIntList
and a MutableIntSet
.
Additionally I added a test that uses the Primitive Stream API to filter and create a new array from the result. As this is not a modification of the original collection, the result is not 100% comparable to the other executions, but it may still be interesting to see (even with a dependency on binarySearch()).
int[] result = Arrays.stream(values)
.filter(v -> (Arrays.binarySearch(toRemove, v) < 0))
.toArray();
Note:
The code for removing items from an array is not very comfortable. Of course we could also use a library like Apache Commons that offers utilities for primitive type arrays. But as this is about comparing plain Java Collections with Eclipse Collections, I decided to skip this.
Below are the execution results:
remove all by primitive stream 21 ms
remove all by iteration List 29045 ms
remove all List 64068 ms
remove all by iteration Set 1 ms
remove all Set 1 ms
remove all by iteration MutableIntList 13602 ms
remove all MutableIntList 21 ms
remove all by iteration MutableIntSet 2 ms
remove all MutableIntSet 2 ms
You can see that the iteration approach on an ArrayList
is almost twice as fast as using removeAll()
. But still the performance is very bad. The performance for removeAll()
as well as the iteration approach on a Set
and a MutableIntSet
are very good. Interestingly the call to removeAll()
on a MutableIntList
is also acceptable, while the iteration approach seems to have a performance issue.
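For completeness, a JDK-only mitigation for the slow ArrayList cases is removeIf() combined with a HashSet for the containment check, which replaces the quadratic behavior with a single linear pass (this is my own sketch, not part of the original benchmark):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Main {

    // Remove all values in toRemove from values in one linear pass.
    // The HashSet makes each containment check O(1) instead of O(m).
    public static void removeAllFast(List<Integer> values, List<Integer> toRemove) {
        Set<Integer> removeSet = new HashSet<>(toRemove);
        values.removeIf(removeSet::contains);
    }

    public static void main(String[] args) {
        List<Integer> values = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            values.add(i);
        }
        List<Integer> toRemove = new ArrayList<>();
        for (int i = 200_000; i < 300_000; i++) {
            toRemove.add(i);
        }
        removeAllFast(values, toRemove);
        System.out.println(values.size()); // 900000
    }
}
```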
The key takeaways in this part are:
Avoid removeAll() on an ArrayList for a large number of items; even removing the items one by one is faster, but both variants perform badly.
Removing multiple items from a HashSet or a MutableIntSet is very fast, and so is removeAll() on a MutableIntList.
On a MutableIntList, prefer removeAll() over removing the items one by one.
From the above measurements and observations I can say that in most cases there is a performance improvement when using Eclipse Collections compared to the standard Java Collections. And even for use cases where no big improvement can be seen, there is a small improvement or at least no performance decrease. So I decided to integrate Eclipse Collections in NatTable and use the Primitive Collections in every place where primitive values were stored in Java Collections. Additionally I fixed all places where wrapper objects were created unnecessarily. Then I executed the example from the beginning again to measure the memory consumption. And I was really impressed!
As you can see in the above graph, the heap usage stays below 250 MB even on scrolling. Remember, before using the Eclipse Primitive Collections, the heap usage grew up to 1,5 GB. Going into more detail we can see that a lot of objects that were created for internal management are not created anymore. So now the data model that should be visualized by NatTable really takes most of the memory, not the NatTable internals anymore.
One thing I noticed in the tests is that there is still quite some memory allocated if the MutableIntList
or MutableIntSet
are cleared via clear()
. Basically it is the same with the Java Collections: the collection allocates the space for the needed size. For the Eclipse Collections this means the internal array keeps its size on clear() and is only filled with 0. To really free this memory you need to assign a new empty collection instance.
Note:
The concrete IntArrayList
class contains a trimToSize()
method. But as you typically work against the interfaces when using the factories, that method is not accessible, and also not all implementations contain such a method.
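The same behavior can be observed with a plain ArrayList, where trimToSize() is accessible because you work against the concrete class:

```java
import java.util.ArrayList;

public class Main {

    public static void main(String[] args) {
        ArrayList<Integer> values = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            values.add(i);
        }

        // clear() empties the list, but the internal backing array
        // keeps its allocated capacity
        values.clear();

        // trimToSize() shrinks the backing array to the current
        // size (0 here), actually releasing the memory
        values.trimToSize();

        System.out.println(values.size()); // 0
    }
}
```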
The data to show in a NatTable is accessed by an IDataProvider
. This is an abstraction to the underlying data structure, so that users can choose the data structure they like. The most common data structure in use is a List
, and NatTable provides the ListDataProvider
to simplify the usage of a List
as underlying data structure. With the ListDataProvider
as an abstraction there is no iteration internally. Instead there is a point access per cell via a nested for loop:
for (int column = 0; column < dataProvider.getColumnCount(); column++) {
for (int row = 0; row < dataProvider.getRowCount(); row++) {
dataProvider.getDataValue(column, row);
}
}
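To illustrate the point access pattern, here is a minimal, hypothetical stand-in for the ListDataProvider (the real NatTable interfaces differ in detail; all names in this sketch are my own):

```java
import java.util.List;
import java.util.function.BiFunction;

public class Main {

    // Minimal stand-in for NatTable's ListDataProvider:
    // per-cell point access via row index plus a column accessor.
    static class SimpleListDataProvider<T> {
        private final List<T> rows;
        private final BiFunction<T, Integer, Object> columnAccessor;

        SimpleListDataProvider(List<T> rows, BiFunction<T, Integer, Object> accessor) {
            this.rows = rows;
            this.columnAccessor = accessor;
        }

        Object getDataValue(int column, int row) {
            // first retrieve the row object, then access its property
            return columnAccessor.apply(rows.get(row), column);
        }

        int getColumnCount() { return 2; } // fixed for this sketch
        int getRowCount() { return rows.size(); }
    }

    public static void main(String[] args) {
        SimpleListDataProvider<String[]> provider = new SimpleListDataProvider<>(
                List.<String[]>of(new String[] { "Homer", "Simpson" }),
                (rowObj, col) -> rowObj[col]);

        // row-first nested access, as in the benchmark
        for (int row = 0; row < provider.getRowCount(); row++) {
            for (int column = 0; column < provider.getColumnCount(); column++) {
                System.out.println(provider.getDataValue(column, row));
            }
        }
    }
}
```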
For the ListDataProvider
this means, for every cell first the row object is retrieved from the List
, then the property of the row object is accessed. As NatTable is a virtual table by design, it actually never happens that all values from the underlying data structure are accessed. Only the data that is currently visible is accessed at once. While an existing performance test in the NatTable performance test suite showed an impressive performance boost by switching from ArrayList
to MutableList
, a more detailed benchmark revealed that both List
implementations have a similar performance. I can’t tell why the existing test showed such a big difference, probably some side effects in the test setup, as the numbers swap if the test execution is swapped.
Executing the benchmark with Java 8 and Java 11 on the other hand shows a difference. Using Java 11 as runtime the tests execute about 50% faster for both ArrayList
and MutableList
. And it also shows that with Java 11 it makes a difference if the nested iteration iterates column or row first. While with Java 8 the execution time was similar, with Java 11 the row first approach shows a better performance.
I was sceptical at the beginning, but I have to admit that Eclipse Collections is really interesting and useful when it comes to performance and memory usage optimizations with collections in Java. The API is really handy and similar to the Java Streams API, which makes the usage quite comfortable.
My takeaways after the verification:
If no Set semantics are needed, prefer the MutableIntList, which has the better performance at creation compared to the MutableIntSet.
Store primitive values in a MutableIntSet or MutableIntList. This gives a similar memory consumption to using primitive type arrays, while granting a rich API for modifications at runtime.
Based on the observations above I decided that Eclipse Collections will become a major dependency for NatTable Core. With NatTable 2.0 it will be part of the NatTable Core Feature. I am sure that internally even more optimizations are possible by using Eclipse Collections, and I will investigate where and how this can be done. So you can expect even more improvements in that area in the future.
In case you think my tests are incorrect or need to be improved, or you simply want to verify my statements, here are the links to the classes I used for my verification:
In the example class I increased the number of data rows to about 2.000.000 via this code:
List<Person> personsWithAddress = PersonService.getFixedPersons();
for (int i = 1; i < 100_000; i++) {
personsWithAddress.addAll(PersonService.getFixedPersons());
}
and I increased the row groups via these two lines of code:
rowGroupHeaderLayer.addGroup("Flanders", 0, 8 * 100_000);
rowGroupHeaderLayer.addGroup("Simpsons", 8 * 100_000, 10 * 100_000);
If some of my observations are wrong or the code can be made even better, please let me know! I am always willing to learn!
Thanks to the Eclipse Collections team for this library!
If you are interested in learning more about Eclipse Collections, you might want to check out the Eclipse Collections Kata.
To enable the UI bindings for dynamic scaling / zooming, the newly introduced ScalingUiBindingConfiguration
needs to be added to the NatTable.
natTable.addConfiguration(new ScalingUiBindingConfiguration(natTable));
This will add a MouseWheelListener
and some key bindings to zoom in and out.
The dynamic scaling can be triggered programmatically by executing the ConfigureScalingCommand on the NatTable instance. This command has existed for quite a while, but it was mainly used internally to align the NatTable scaling with the display scaling. I have introduced new default IDpiConverter implementations to make it easier to trigger dynamic scaling:
DefaultHorizontalDpiConverter: Provides the horizontal dots per inch of the default display.
DefaultVerticalDpiConverter: Provides the vertical dots per inch of the default display.
FixedScalingDpiConverter: Can be created with a DPI value to set a custom scaling.
At initialization time, NatTable internally fires a ConfigureScalingCommand with the default IDpiConverter implementations to align the scaling with the display settings.
As long as only text is included in the table, registering the ScalingUiBindingConfiguration is all you have to do. Once ICellPainter implementations are used that render images, some additional work has to be done. The reason for this is that for performance and memory reasons the images are referenced in the painter and not requested for every rendering operation. As painters are not part of the event handling, they cannot simply be updated. Also, for several reasons, there are mechanisms that avoid applying the registered configurations multiple times.
There are three ways to style a NatTable, and as of now this requires three different ways to handle dynamic scaling updates for image painters.
AbstractRegistryConfiguration
This is the default way, which has existed for a long time. Most of the default configurations provide the styling configuration this way. As there is no way to identify which configuration registers an ICellPainter and how the instances are created, the ScalingUiBindingConfiguration needs to be initialized with an updater that knows which steps to perform.
natTable.addConfiguration(
new ScalingUiBindingConfiguration(natTable, configRegistry -> {
// we need to re-create the CheckBoxPainter
// to reflect the scaling factor on the checkboxes
configRegistry.registerConfigAttribute(
CellConfigAttributes.CELL_PAINTER,
new CheckBoxPainter(),
DisplayMode.NORMAL,
"MARRIED");
}));
ThemeConfiguration
With a ThemeConfiguration, the styling options for a NatTable are collected in one place. In the previous state the ICellPainter instance creation was done on member initialization, which was quite static. Therefore the ICellPainter instance creation was moved to a new method named createPainterInstances(), so the painter update on scaling can be performed without any additional effort. For custom painter configurations this means that they should be added to a theme via an IThemeExtension.
natTable.addConfiguration(
new ScalingUiBindingConfiguration(natTable));
// additional configurations
natTable.configure();
...
IThemeExtension customThemeExtension = new IThemeExtension() {
@Override
public void registerStyles(IConfigRegistry configRegistry) {
configRegistry.registerConfigAttribute(
CellConfigAttributes.CELL_PAINTER,
new CheckBoxPainter(),
DisplayMode.NORMAL,
"MARRIED");
}
@Override
public void unregisterStyles(IConfigRegistry configRegistry) {
configRegistry.unregisterConfigAttribute(
CellConfigAttributes.CELL_PAINTER,
DisplayMode.NORMAL,
"MARRIED");
}
};
ThemeConfiguration modernTheme =
new ModernNatTableThemeConfiguration();
modernTheme.addThemeExtension(customThemeExtension);
natTable.setTheme(modernTheme);
CSS styling
For a NatTable styled via CSS, register the CSSConfigureScalingCommandHandler on the NatTable instance:
natTable.registerCommandHandler(
new CSSConfigureScalingCommandHandler(natTable));
I have tested several scenarios, and the current state of development looks quite good. But of course I am not sure if I tested everything and found every possible edge case. Therefore it would be nice to get some feedback from early adopters if the new zoom feature is stable or not. The p2 update site with the current development snapshot can be found on the NatTable SNAPSHOTS page. From build number 900 on the feature is included. Any issues found can be reported on the corresponding Bugzilla ticket 560802.
Please also note that with the newly introduced zooming capability I have dropped the ZoomLayer. It only increased the cell dimensions but not the font or the images, therefore it was not functional (maybe never finished) IMHO. To avoid confusion in the future I have deleted it now.