Introduction

Since version 1.4, HiveMQ offers a free and open-source plugin SDK with service provider interfaces. This allows everyone to extend HiveMQ and add custom functionality via plugins.

With custom HiveMQ plugins, it’s easy to add functionality such as writing messages to databases, integrating with other service buses, collecting statistics, adding fine-grained security, and virtually anything else you can imagine.

Plugin development for HiveMQ is as easy as writing a Java main method once you grasp the core concepts. This documentation covers all relevant topics to get you started as quickly as possible. If you prefer to learn from real-world plugins, visit the HiveMQ GitHub page or use the Maven Archetype for plugins.

General Concepts

A HiveMQ plugin is essentially a Java JAR file which is dropped into the plugins folder of the HiveMQ installation. HiveMQ utilizes the standard Java Service Loader mechanism [1].

Using Dependency Injection

The recommended way of developing plugins is to use dependency injection. HiveMQ utilizes Google Guice as its dependency injection provider. Typically, plugins use the JSR 330 API, such as the javax.inject.Inject annotation.

HiveMQ plugins (and all dependent classes) can be written with constructor injection, field injection and method injection. Constructor injection tends to be the most convenient way as it allows maximum testability.

It is also possible to use annotations from JSR 250 aka Lifecycle Management Annotations in your plugin. You can annotate a method in your class with @javax.annotation.PostConstruct and @javax.annotation.PreDestroy to hook into the lifecycle.
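As a sketch, a plugin class using both lifecycle hooks might look like this (the class is hypothetical and assumes it is created by the plugin's injector):

```java
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;

// Hypothetical class managed by the plugin's injector.
public class StatisticsWriter {

    @PostConstruct
    public void start() {
        // Runs once, after all injections took place.
        System.out.println("statistics writer started");
    }

    @PreDestroy
    public void stop() {
        // Runs when HiveMQ's LifecycleManager shuts down,
        // i.e. when HiveMQ itself shuts down.
        System.out.println("statistics writer stopped");
    }
}
```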

Table 1. Commonly used annotations for HiveMQ plugins

  @javax.inject.Inject
    Target: Constructor, Field, Method (Setter)
    Injects the specified object(s).

  @javax.inject.Singleton
    Target: Class, Method (Provider)
    Defines that an object is a singleton.

  @javax.annotation.PostConstruct
    Target: Method
    Executes this method after all injection took place.

  @javax.annotation.PreDestroy
    Target: Method
    Executes shutdown logic when HiveMQ's LifecycleManager shuts down. This is only the case when HiveMQ itself shuts down.

  @com.google.inject.Provides
    Target: Method
    Used for provider methods defined in the HiveMQPluginModule.


The following shows an example of constructor injection in the PluginEntryPoint of a HiveMQ plugin. It also shows the use of the javax.annotation.PostConstruct annotation to execute a method after the injection (and the constructor call) has finished.

Constructor Injection and Lifecycle Example
import com.dcsquare.hivemq.spi.PluginEntryPoint;
import org.apache.commons.configuration.Configuration;
import javax.annotation.PostConstruct;
import javax.inject.Inject;

public class TestPlugin extends PluginEntryPoint {

    private final Configuration configuration;

    @Inject 1
    public TestPlugin(Configuration configuration) {
        this.configuration = configuration;
    }

    @PostConstruct 2
    public void postConstruct() {

        System.out.print(configuration.getString("myProperty"));
    }
}
1 Constructor Injection
2 JSR 250 Lifecycle method which runs after injections took place
Don’t like dependency injection?
Although not recommended, it is possible to write HiveMQ plugins the old-fashioned Java way without dependency injection. In that case, make sure your PluginEntryPoint subclass has a no-argument constructor.
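A minimal sketch of a plugin written without dependency injection (the class name and the ExecutorService dependency are illustrative); note the required no-argument constructor:

```java
import com.dcsquare.hivemq.spi.PluginEntryPoint;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PlainTestPlugin extends PluginEntryPoint {

    private final ExecutorService executorService;

    // A no-argument constructor is required when dependency
    // injection is not used.
    public PlainTestPlugin() {
        // Without injection, all dependencies have to be created
        // (and passed to collaborating classes) manually.
        this.executorService = Executors.newCachedThreadPool();
    }
}
```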

Plugin Project Structure

Minimal Plugin Project Structure
Figure 1. HiveMQ Plugin Structure

For a minimal HiveMQ plugin you need at least two classes and a text file for the service loader mechanism:

  1. A class which extends com.dcsquare.hivemq.spi.HiveMQPluginModule

  2. A class which extends com.dcsquare.hivemq.spi.PluginEntryPoint

  3. A text file named com.dcsquare.hivemq.spi.HiveMQPluginModule which resides in META-INF/services

Using the Maven Archetype
There is a Maven Archetype available which generates all classes and files needed for a plugin.
HiveMQPluginModule

Your class which extends com.dcsquare.hivemq.spi.HiveMQPluginModule is responsible for bootstrapping your plugin. That means it is responsible for the following things:

  • Defining where the configurations of your plugin come from (e.g. a properties file, XML file, database, …).

  • Defining the bindings for your dependency injection.

  • Declaring the class which is your plugin entry point. This class acts as the "main method" of your plugin's business logic.

Using Guice Provider methods
You can use standard Guice provider methods in your implementation of com.dcsquare.hivemq.spi.HiveMQPluginModule.

The following shows the most simple implementation of com.dcsquare.hivemq.spi.HiveMQPluginModule.

Example com.dcsquare.hivemq.spi.HiveMQPluginModule implementation
import com.dcsquare.hivemq.spi.HiveMQPluginModule;
import com.dcsquare.hivemq.spi.PluginEntryPoint;
import com.google.inject.Provider;
import org.apache.commons.configuration.AbstractConfiguration;

import static com.dcsquare.hivemq.spi.config.Configurations.noConfigurationNeeded;


public class TestPluginModule extends HiveMQPluginModule {

    @Override
    public Provider<Iterable<? extends AbstractConfiguration>> getConfigurations() {
        return noConfigurationNeeded(); 1
    }

    @Override
    protected void configurePlugin() {
	2
    }

    @Override
    protected Class<? extends PluginEntryPoint> entryPointClass() {
        return TestPlugin.class; 3
    }
}
1 This is a convenience method from the com.dcsquare.hivemq.spi.config.Configurations utility which defines that your plugin does not need any external configuration
2 You can wire up your dependencies here with standard Guice bindings.
3 This is your plugin's main entry class.
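Since standard Guice provider methods are supported, a module can also expose its dependencies through an @Provides method. The following sketch shows this; the class names and the provided ExecutorService are illustrative, not part of the SPI:

```java
import com.dcsquare.hivemq.spi.HiveMQPluginModule;
import com.dcsquare.hivemq.spi.PluginEntryPoint;
import com.google.inject.Provider;
import com.google.inject.Provides;
import org.apache.commons.configuration.AbstractConfiguration;

import javax.inject.Singleton;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import static com.dcsquare.hivemq.spi.config.Configurations.noConfigurationNeeded;

public class ProviderTestPluginModule extends HiveMQPluginModule {

    @Override
    public Provider<Iterable<? extends AbstractConfiguration>> getConfigurations() {
        return noConfigurationNeeded();
    }

    @Override
    protected void configurePlugin() {
        // No explicit bindings needed; the provider method below
        // makes an ExecutorService available for injection.
    }

    @Override
    protected Class<? extends PluginEntryPoint> entryPointClass() {
        return ProviderTestPlugin.class;
    }

    // Standard Guice provider method: the returned object can be
    // injected anywhere in this plugin. @Singleton ensures that
    // only a single instance is ever created.
    @Provides
    @Singleton
    ExecutorService provideExecutorService() {
        return Executors.newCachedThreadPool();
    }
}

// Minimal entry point, only here to make the example self-contained.
class ProviderTestPlugin extends PluginEntryPoint {
}
```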
PluginEntryPoint

Your class which extends com.dcsquare.hivemq.spi.PluginEntryPoint can be seen as the "main method" of your plugin. The constructor and the @PostConstruct annotated method are called when the HiveMQ server starts up.

You can use Dependency Injection and inject all the dependencies you wired up in your HiveMQPluginModule subclass. It’s even possible to inject some HiveMQ specific objects like the CallbackRegistry or Configurations.

Example com.dcsquare.hivemq.spi.PluginEntryPoint implementation
import com.dcsquare.hivemq.spi.PluginEntryPoint;

import javax.annotation.PostConstruct;
import javax.inject.Inject;

public class TestPlugin extends PluginEntryPoint {

    private final MyDependency dependency;

    @Inject
    public TestPlugin(MyDependency dependency) { 1
        this.dependency = dependency;
    }

    @PostConstruct
    public void postConstruct() { 2
        dependency.doSomething();
    }
}
1 You can inject any dependency with constructor injection.
2 This method gets executed after the injections are done and the constructor was executed.
META-INF/services/com.dcsquare.hivemq.spi.HiveMQPluginModule file

To enable HiveMQ to detect your plugin, a text file called com.dcsquare.hivemq.spi.HiveMQPluginModule has to be created in META-INF/services. The file consists of one line containing the fully qualified class name of your HiveMQPluginModule subclass.

Example of META-INF/services/com.dcsquare.hivemq.spi.HiveMQPluginModule contents
com.dcsquare.hivemq.testplugin.TestPluginModule

CallbackRegistry

Typically, HiveMQ plugins implement at least one callback. See the Callbacks chapter for more information on the concrete callbacks.

Registering your own callbacks is pretty easy from your PluginEntryPoint implementation. You can call getCallbackRegistry() to get the callback registry.

Example of registering a callback
import com.dcsquare.hivemq.spi.PluginEntryPoint;


public class TestPlugin extends PluginEntryPoint {

    public TestPlugin() {

        getCallbackRegistry().addCallback(new MyPublishReceivedCallback());
    }
}
Getting a reference to the callback registry in other classes
When you want to use the callback registry in classes other than your PluginEntryPoint class, you can simply inject it. If you prefer not to use dependency injection, you have to pass the CallbackRegistry around manually.
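A sketch of such an injection, assuming the CallbackRegistry type from the SPI can be injected directly (the class name and the callback logic are illustrative):

```java
import com.dcsquare.hivemq.spi.callback.events.broker.OnBrokerStart;
import com.dcsquare.hivemq.spi.callback.exception.BrokerUnableToStartException;
import com.dcsquare.hivemq.spi.callback.registry.CallbackRegistry;

import javax.annotation.PostConstruct;
import javax.inject.Inject;

import static com.dcsquare.hivemq.spi.callback.CallbackPriority.HIGH;

public class CallbackRegistrar {

    private final CallbackRegistry callbackRegistry;

    @Inject
    public CallbackRegistrar(final CallbackRegistry callbackRegistry) {
        // The same registry instance that getCallbackRegistry()
        // returns in the PluginEntryPoint.
        this.callbackRegistry = callbackRegistry;
    }

    @PostConstruct
    public void registerCallbacks() {
        callbackRegistry.addCallback(new OnBrokerStart() {
            @Override
            public void onBrokerStart() throws BrokerUnableToStartException {
                // callback logic goes here
            }

            @Override
            public int priority() {
                return HIGH;
            }
        });
    }
}
```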

Configuration Files

Many non-trivial HiveMQ plugins need some kind of configuration. The datasources for these configurations can differ from plugin to plugin: some plugins use configuration files, others read their configuration from a database, and some plugins don’t need any configuration at all.

For configuration purposes, HiveMQ utilizes the fantastic Apache Commons Configuration library which has many built-in Configuration Providers for different datasources like config files and databases. Using these configuration providers from Commons Configuration allows you — the plugin developer — to focus on writing plugins instead of fiddling with reading properties files.

When dealing with configurations, there are two useful utility classes available which offer convenient methods for configuration providers:

com.dcsquare.hivemq.spi.config.Configurations

Contains utility methods for creating new configuration providers which reduce the boilerplate code drastically.

com.dcsquare.hivemq.spi.util.PathUtils

Contains utility methods when dealing with file and folder locations relative to the HiveMQ directory.

Example of using configurations
import com.dcsquare.hivemq.spi.HiveMQPluginModule;
import com.dcsquare.hivemq.spi.PluginEntryPoint;
import com.google.inject.Provider;
import org.apache.commons.configuration.AbstractConfiguration;

import java.util.concurrent.TimeUnit;

import static com.dcsquare.hivemq.spi.config.Configurations.newConfigurationProvider;
import static com.dcsquare.hivemq.spi.config.Configurations.newReloadablePropertiesConfiguration;


public class TestPluginModule extends HiveMQPluginModule {

    @Override
    public Provider<Iterable<? extends AbstractConfiguration>> getConfigurations() {
        final AbstractConfiguration reloadableConfiguration =
                newReloadablePropertiesConfiguration("myConfig.properties", 60, TimeUnit.SECONDS); 1

        return newConfigurationProvider(reloadableConfiguration); 2
    }

    @Override
    protected void configurePlugin() {
    }

    @Override
    protected Class<? extends PluginEntryPoint> entryPointClass() {
        return TestPlugin.class;
    }
}

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -


import com.dcsquare.hivemq.spi.PluginEntryPoint;
import org.apache.commons.configuration.Configuration;

import javax.annotation.PostConstruct;
import javax.inject.Inject;


public class TestPlugin extends PluginEntryPoint {


    private final Configuration configuration;

    @Inject
    public TestPlugin(Configuration configuration) { 3
        this.configuration = configuration;
    }

    @PostConstruct
    public void postConstruct() {
        System.out.println(configuration.getString("stringKey", "defaultValue")); 4
        System.out.println(configuration.getInt("myIntValueProperty", 10));
    }
}
1 Here we are using a convenient utility method from the Configurations class to use a config file which is located in the plugin folder and reloaded every minute.
2 This method from the Configurations class saves us some boilerplate.
3 Here we inject our Configuration object which holds all properties from all configuration providers.
4 Here we just ask the Configuration object for a specific property value. If no value was found, a default value is used. See the Apache Commons Configuration documentation for more information.

Plugin Information

It is strongly recommended to annotate your HiveMQPluginModule with com.dcsquare.hivemq.spi.plugin.meta.Information to provide additional metadata about the plugin. HiveMQ uses this metadata for monitoring and information purposes. When starting HiveMQ with your plugin, HiveMQ will log your defined plugin name and version on startup.

If you forget to add the @Information annotation to your HiveMQPluginModule implementation, HiveMQ will log a warning on startup.
Example of using @Information
import com.dcsquare.hivemq.spi.HiveMQPluginModule;
import com.dcsquare.hivemq.spi.plugin.meta.Information;

@Information(
        name = "My test plugin",
        author = "John Doe",
        version = "1.0",
        description = "A test plugin to show the HiveMQ plugin capabilities")
public class TestPluginModule extends HiveMQPluginModule {
…
}

Logging

For logging purposes, HiveMQ plugins are encouraged to utilize the excellent SLF4J API. When using SLF4J, plugins don’t have to wrestle with logging configuration, as the HiveMQ server takes care of it. No surprises here: just use the standard SLF4J API.

Example of using a SLF4J Logger
import com.dcsquare.hivemq.spi.PluginEntryPoint;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestPlugin extends PluginEntryPoint {

    private static final Logger log = LoggerFactory.getLogger(TestPlugin.class); 1

    public TestPlugin() {

        log.info("Hello, I'm logging here");
    }
}
1 Standard SLF4J Logger creation

Don’t block!

The single most important rule for all plugins is: don’t block in callbacks. Never. If you can, use a java.util.concurrent.ExecutorService for everything which potentially blocks in callbacks. By the way: it’s also a great idea to use connection pools if you are dealing with databases.

The following shows an example code which uses an ExecutorService.

Example code for nonblocking actions
import com.dcsquare.hivemq.spi.HiveMQPluginModule;
import com.dcsquare.hivemq.spi.PluginEntryPoint;
import com.google.inject.Provider;
import org.apache.commons.configuration.AbstractConfiguration;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import static com.dcsquare.hivemq.spi.config.Configurations.noConfigurationNeeded;


public class TestPluginModule extends HiveMQPluginModule {

    @Override
    public Provider<Iterable<? extends AbstractConfiguration>> getConfigurations() {
        return noConfigurationNeeded();
    }

    @Override
    protected void configurePlugin() {
        bind(ExecutorService.class).toInstance(Executors.newCachedThreadPool()); 1
    }

    @Override
    protected Class<? extends PluginEntryPoint> entryPointClass() {
        return TestPlugin.class;
    }
}

- - - - - - - - - - - - - - - - - - - - -

import com.dcsquare.hivemq.spi.PluginEntryPoint;
import com.dcsquare.hivemq.spi.callback.events.broker.OnBrokerStart;
import com.dcsquare.hivemq.spi.callback.exception.BrokerUnableToStartException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import java.util.concurrent.ExecutorService;

import static com.dcsquare.hivemq.spi.callback.CallbackPriority.HIGH;


public class TestPlugin extends PluginEntryPoint {

    private static final Logger log = LoggerFactory.getLogger(TestPlugin.class);

    private final ExecutorService executorService;

    @Inject
    public TestPlugin(final ExecutorService executorService) { 2
        this.executorService = executorService;
    }

    @PostConstruct
    public void postConstruct() { 3
        getCallbackRegistry().addCallback(new OnBrokerStart() { 4
            @Override
            public void onBrokerStart() throws BrokerUnableToStartException {
                executorService.submit(new Runnable() { 5
                    @Override
                    public void run() {
                        log.info("Starting long operation");
                        try {
                            Thread.sleep(5000); 6
                        } catch (InterruptedException e) {
                            log.error("Error", e);
                        }
                        log.info("Stopping long operation");

                    }
                });
            }

            @Override
            public int priority() {
                return HIGH; 7
            }
        });
    }
}
1 Binding an ExecutorService to a cached ThreadPool
2 Injecting the ExecutorService we bound before
3 After the injections are done this method gets executed
4 We add an anonymous inner class which implements the OnBrokerStart Callback
5 We submit a Runnable — which gets executed asynchronously — to the ExecutorService
6 This is our long running method. We simulate it with a simple Thread.sleep.
7 This defines that our Callback has High Priority compared to other OnBrokerStart callbacks for HiveMQ.
Bring your own ExecutorService — with Qualifiers!
It’s a brilliant idea to use binding annotations for your plugin’s ExecutorService. When you have installed more than one plugin which uses dependency injection and each of them binds an ExecutorService, you’ll get errors: bindings are global per HiveMQ instance, not per plugin. That means when you don’t want to share a binding, use custom binding annotations to avoid nasty bugs.
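One possible sketch of such a binding annotation (the annotation name is illustrative):

```java
import com.google.inject.BindingAnnotation;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.PARAMETER;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

// Custom binding annotation, so this plugin's ExecutorService binding
// does not clash with ExecutorService bindings from other plugins.
@BindingAnnotation
@Target({FIELD, PARAMETER, METHOD})
@Retention(RUNTIME)
public @interface TestPluginExecutor {
}
```

In configurePlugin() the binding then becomes bind(ExecutorService.class).annotatedWith(TestPluginExecutor.class).toInstance(Executors.newCachedThreadPool()), and the injection point is annotated accordingly: @Inject public TestPlugin(@TestPluginExecutor ExecutorService executorService).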

Caching

The single most important rule in HiveMQ plugins is: don’t block. Under some circumstances you have to block, for example when you are waiting for responses from your database or a REST API, or when reading something from the filesystem. If you are doing this in a callback which is called very often (like the OnPublishReceivedCallback), you most likely want to implement caching.

In general, all plugins are responsible for proper caching by themselves.

HiveMQ offers the convenient annotation com.dcsquare.hivemq.spi.aop.cache.Cached which caches the return values of methods. This annotation also respects the parameters you pass to the method and will only cache return values for method executions with the same parameters. You can define how long the method return values are cached.

Here is an example how the method caching works in a plugin:

Example of a plugin which uses Caching
import com.dcsquare.hivemq.spi.PluginEntryPoint;
import com.dcsquare.hivemq.spi.aop.cache.Cached;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import java.util.Date;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;


public class TestPlugin extends PluginEntryPoint {

    private static final Logger log = LoggerFactory.getLogger(TestPlugin.class);
    private final ScheduledExecutorService scheduledExecutorService;

    @Inject
    public TestPlugin(ScheduledExecutorService scheduledExecutorService) { 1
        this.scheduledExecutorService = scheduledExecutorService;
    }

    @PostConstruct
    public void postConstruct() {
        scheduledExecutorService.scheduleAtFixedRate(new Runnable() { 2
            @Override
            public void run() {
                log.info(getDate().toString()); 3
            }
        }, 1, 1, TimeUnit.SECONDS); 4
    }


    @Cached(timeToLive = 10, timeUnit = TimeUnit.SECONDS) 5
    public Date getDate() {
        return new Date();
    }
}
1 Let’s assume the ScheduledExecutorService was bound in the HiveMQPluginModule
2 We are scheduling a new Runnable Task which gets executed at a fixed rate
3 We just output the value from the getDate() method to verify it was actually cached
4 We are scheduling this task every second
5 Here we define that the result of the method has to be cached for 10 seconds.
The caching with the com.dcsquare.hivemq.spi.aop.cache.Cached annotation only works on objects created by the dependency injection container. That means if you instantiate a class with that annotation yourself, the caching won’t work.

Get started quickly with the Plugin Maven Archetype

Whether you have just read the theoretical parts of the documentation or you want to dive into plugin development right away, you should read this chapter carefully. It explains what a Maven Archetype is, how you can benefit from it and how to use it.

What is a Maven Archetype?

A Maven archetype is a template tooling which allows you to create a template for a Maven project. This template can then be used to create new Maven projects. In short, the HiveMQ Plugin Archetype provides you with a fully functional HelloWorld plugin to get started with developing your own plugins. This is by far the simplest way to get started with plugin development. If you want to gain more insight into Maven and/or Maven archetypes, please refer to the official Maven guides and the archetype guide.

Creating your first Plugin Project from the command line

  • Open a terminal and switch to the folder in which you want to create a new project folder with the plugin source files.

Project Structure
The generation process will create a folder with all the necessary files inside the directory in which the following command is executed.
  • Make sure you have Apache Maven available (for instructions look here)

  • Execute Archetype command

    Generate a new Maven project with the archetype
    	mvn archetype:generate -DarchetypeGroupId=com.dcsquare -DarchetypeArtifactId=hivemq-plugin-archetype -DarchetypeVersion=1.0 -DarchetypeRepository=http://repository.hivemq.com:8081/nexus/content/groups/public/
  • Specify the common Maven identifiers for the new project in the prompt: GroupId, ArtifactId, Version

  • Congrats, you have just created your first HiveMQ plugin :)

Next Steps
If you want to learn more about the generated HelloWorld example, please read the provided JavaDoc. For information on how to run and debug your new plugin see Development with Maven Plugin.

Creating your first Plugin Project using IntelliJ IDEA

  • Go to File > New Project and select Maven Module

Create new Maven Module
Add Archetype
  • Click OK and select the Archetype in the list

Select Archetype and enter own GroupId
  • Done! IntelliJ is now creating the project and resolving the Maven dependencies.

Finished project
Want to run your newly created plugin on HiveMQ?
The next chapter explains how to run and debug plugins within your IDE. Go directly to the IntelliJ section.

Creating your first Plugin Project using Eclipse

  • Make sure you have the Eclipse IDE for Java Developers, otherwise Maven is not included by default

  • Go to File > New > Project and select Maven Project

  • Ensure that the Create a simple project (skip archetype selection) checkbox is not selected

  • In the next step click Add Archetype and fill in the details

Fill in Archetype Details
  • Click OK and select the archetype in the list

Select Archetype in list
  • Enter the desired Group Id, Artifact Id, Version for your plugin and click Finish

Enter desired ArtifactId
  • Done! Eclipse will create the project and resolve all dependencies!

Finished project
Want to run your newly created plugin on HiveMQ?
The next chapter explains how to run and debug plugins within your IDE. Go directly to the Eclipse section.

Development with the Maven Plugin

This chapter introduces the HiveMQ Maven plugin, which is a great help for any developer throughout the development lifecycle of a HiveMQ plugin.

Introduction

The plugin must be packaged as a JAR file (see Packaging & Deployment) and copied manually to the HiveMQ plugin folder (<HiveMQHome>/plugins) in order to be initialized on HiveMQ startup. This process is very inconvenient and not suitable for fast switching between developing and manual testing. Another setback is that a plugin cannot be debugged the way a normal Java application can, because a plugin cannot run without a running HiveMQ instance. All of these scenarios are essential for effective plugin development, which is why we created the HiveMQ Maven Plugin to solve these problems.

The Maven plugin helps developers to

  • easily run their developed plugin on HiveMQ for testing purposes

  • successfully debug their plugin

Functionality

The Maven plugin automates the steps stated above, which are necessary to run a plugin on a real HiveMQ instance. Therefore you need a HiveMQ instance on your development machine to which the plugin under development can be deployed. When the Maven plugin is invoked, it creates an ad-hoc plugin directory in the Maven build directory (this can be customized, see the configuration options) and moves the plugin jar file to the newly created folder. If you want to test configuration files for your plugin, put them in src/main/resources, so they are copied to the ad-hoc plugins folder as well.

Plugin folder of HiveMQ when working with the Maven Plugin
The regular plugin folder of the HiveMQ instance used by the Maven Plugin is ignored. Plugins or configuration files located there won’t be loaded when HiveMQ is started from within the Maven Plugin.

Afterwards, the HiveMQ located in the specified directory (which needs to be declared using the configuration options) is started with the ad-hoc plugin folder. The Maven plugin then shows the HiveMQ console output. This gives plugin developers an easy way to test newly created plugins. Additionally, it provides the capability to debug the plugin at runtime using the Java remote debugging mechanism. Java remote debugging is a client/server concept in which both parties, the application (HiveMQ with plugin) and the IDE, can act as the server or the client. Each scenario has its own advantages and disadvantages:

  • HiveMQ with Plugin: Server; IDE: Client

    • Advantage

      • Easier handling, because HiveMQ can be started without IDE debugger being started first.

    • Disadvantage

      • If the code you want to debug is executed at startup, you have to be quick in starting the IDE client debugger, otherwise the code you wish to debug has already been passed through.

  • HiveMQ with Plugin: Client; IDE: Server

    • Advantage

      • It is possible to debug code at startup

    • Disadvantage

      • Before HiveMQ can be started, your IDE’s debugging process needs to be up and running.

Both of these scenarios have their place and can be switched through the configuration options of the Maven plugin. It is also possible to specify the port on which the two parties connect, and the host name of the server if HiveMQ is running in client mode.

Best Practice
Use CLIENT mode when you want to debug code in your callbacks, and only use SERVER mode when debugging the Plugin Module or Plugin Entry Point is necessary.

This was just a quick overview of Java remote debugging in the context of HiveMQ plugin development; for more insights see this link.

Usage

An effective way to use the plugin is a dedicated Maven profile attached to the Maven package goal, which has proven to be a great workflow. This triggers the plugin to be run on HiveMQ every time the Maven package goal is executed and the profile RunWithHiveMQ is active. In order to make it work as described, the following snippet has to be added to the pom.xml in the root folder of the plugin project. This is already part of the project if the provided Maven Archetype was used. The only change which needs to be made is to provide the correct HiveMQ home folder.

IDE Support
For more information on how to do this with your favorite IDE look here.
HiveMQ Maven Plugin Usage
<project....>
...

 <profiles>
        <profile>
            <id>RunWithHiveMQ</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>com.dcsquare</groupId>
                        <artifactId>hivemq-maven-plugin</artifactId>
                        <version>1.0</version>
                        <executions>
                            <execution>
                                <id>hivemq</id>
                                <phase>package</phase> 3
                                <goals>
                                    <goal>hivemq</goal>
                                </goals>

                                <configuration>1
                                    <hiveMQDir>
                                        /Applications/hivemq 2
                                    </hiveMQDir>
                                </configuration>

                            </execution>
                        </executions>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>
...
</project>
1 All configuration properties for the Maven plugin have to be placed in here.
2 Please specify the HiveMQ home directory. If you have not downloaded HiveMQ yet, please download the latest HiveMQ.
3 Here you can change the Maven phase during which the plugin is executed. It only makes sense to use package or a later phase, because the plugin jar file is generated during package.

Configuration Options

Do I need additional configuration for the Maven plugin?
In general, the Maven plugin has reasonable defaults, but sometimes it is necessary to adjust some settings or just to get an overview of the default settings. The HiveMQ directory configuration property is the only mandatory one.
Table 2. Configuration Options

  hivemqDir
    Default: (none)
    Required: true
    Needs to be set to your local HiveMQ directory. If you have not downloaded HiveMQ yet, please download the latest HiveMQ.

  pluginJarName
    Default: {artifactId}-{version}.jar
    Required: false
    The name of the plugin jar file.

  pluginDir
    Default: ${project.build.directory}
    Required: false
    The directory in which your plugin jar file is located.

  hivemqJar
    Default: hivemq.jar
    Required: false
    The name of the HiveMQ jar file in the bin directory.

  verbose
    Default: true
    Required: false
    Specifies whether the messages logged from HiveMQ to the console should be shown or not.

  noPlugins
    Default: false
    Required: false
    When set to true, HiveMQ starts without the plugin which is currently under development. A possible use case is comparing the behaviour of HiveMQ with and without the plugin.

  debugMode
    Default: SERVER
    Required: false
    Specifies the debug mode of the plugin. Valid values are NONE, SERVER and CLIENT. Use NONE to start HiveMQ plus plugin without debugging. When you want to debug the bootstrapping part of your plugin use CLIENT, otherwise use SERVER. For more insights about debug modes see the functionality section, and make sure to configure your IDE according to your debug mode.

  debugPort
    Default: 5005
    Required: false
    The debug port on which the debug server starts when in SERVER mode, or to which the client connects when in CLIENT mode.

  debugServerHostName
    Default: localhost
    Required: false
    If the debugMode is CLIENT, this property specifies the host name of the debug server (for example the machine your IDE is running on). This is only needed if you want to remotely debug your plugin on another machine; see also the functionality section.
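For illustration, several of these options combined inside the plugin's <configuration> element (the values shown are examples, not recommendations):

```xml
<configuration>
    <hiveMQDir>/Applications/hivemq</hiveMQDir>
    <pluginJarName>my-plugin-1.0.jar</pluginJarName>
    <verbose>true</verbose>
    <debugMode>CLIENT</debugMode>
    <debugPort>5005</debugPort>
    <debugServerHostName>localhost</debugServerHostName>
</configuration>
```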

Workflow with common IDEs

Prerequisites
You can use any plugin where the HiveMQ Maven plugin is configured, which means that the pom.xml should contain the snippet shown here. The simplest method is to create your own HelloWorld plugin project with the provided Maven Archetype. Remember to change the HiveMQ home directory!

IntelliJ IDEA

Run HiveMQ with Plugin
  • Open the pom.xml and create a new configuration element debugMode in the Maven Plugin and set value to NONE

    Run-Plugin-In-IntelliJ-Step-1
  • Run Maven Goal package and make sure Profile RunWithHiveMQ is selected

    Run-Plugin-In-IntelliJ-Step-2
Debug Plugin in Server Mode
  • Create new configuration element debugMode and set value to SERVER, Run Maven Goal package and make sure Profile RunWithHiveMQ is selected

    Debug-Plugin-In-IntelliJ-Server-Mode-Step-1
  • Create a new Run Configuration

    Debug-Plugin-In-IntelliJ-Server-Mode-Step-2
  • Select Remote Configuration

    Debug-Plugin-In-IntelliJ-Server-Mode-Step-3
  • Make sure the Transport is set to Socket, Mode is set to Attach and the port is 5005.

    Debug-Plugin-In-IntelliJ-Server-Mode-Step-4
  • Run the newly created Configuration

    Debug-Plugin-In-IntelliJ-Server-Mode-Step-5
  • Wait until the Debugger Console opens and shows Connected

    Debug-Plugin-In-IntelliJ-Server-Mode-Step-6
Debug Plugin in Client Mode
  • Create a new configuration element debugMode, set its value to CLIENT and create a new Run Configuration

    Debug-Plugin-In-IntelliJ-Client-Mode-Step-1
  • Select Remote Configuration

    Debug-Plugin-In-IntelliJ-Client-Mode-Step-2
  • Make sure the Transport is set to Socket, Mode is set to Listen and the port is 5005.

    Debug-Plugin-In-IntelliJ-Client-Mode-Step-3
  • Run the newly created Configuration

    Debug-Plugin-In-IntelliJ-Client-Mode-Step-4
  • Wait until the Debugger Console opens and shows Connected, then run Maven package

    Debug-Plugin-In-IntelliJ-Client-Mode-Step-5
  • Wait until HiveMQ is started

    Debug-Plugin-In-IntelliJ-Client-Mode-Step-6
  • Switch to the Debug window and check if it shows Connected

    Debug-Plugin-In-IntelliJ-Client-Mode-Step-7

Eclipse

Maven Goal Package
  • Create new Run Configuration for Maven Goal

    Run-Maven-Package-In-Eclipse-Step-1
  • Insert Maven Goal package and Profile RunWithHiveMQ

    Run-Maven-Package-In-Eclipse-Step-2
  • Click apply and you are done!

Run HiveMQ with Plugin
  • Open the pom.xml, create a new configuration element debugMode in the Maven Plugin and set its value to NONE

    Run-Plugin-In-Eclipse-Step-1
  • Click on the Run icon to run the above created Run Configuration.

  • Done! HiveMQ is starting in the Maven console.

Debug Plugin in Server Mode
  • Create a new configuration element debugMode and set its value to SERVER; run the Maven Goal package

    Debug-Plugin-In-Eclipse-Server-Mode-Step-1
  • Create a new Debug Configuration: Remote Java Application

    Debug-Plugin-In-Eclipse-Server-Mode-Step-2
  • Make sure Connection Type is Socket Attach, the host is localhost, the port is 5005 and then click Debug

    Debug-Plugin-In-Eclipse-Server-Mode-Step-3
  • Change to the Debug Perspective and check if the debugger is running

    Debug-Plugin-In-Eclipse-Server-Mode-Step-4
Debug Plugin in Client Mode
  • Create a new Debug Configuration: Remote Java Application

    Debug-Plugin-In-Eclipse-Client-Mode-Step-1
  • Make sure Connection Type is Socket Listen, the port is 5005 and then click Debug

    Debug-Plugin-In-Eclipse-Client-Mode-Step-2
  • Change to the Debug Perspective and check if the debugger is listening

    Debug-Plugin-In-Eclipse-Client-Mode-Step-3
  • Create a new configuration element debugMode and set its value to CLIENT; run the Maven Goal package

    Debug-Plugin-In-Eclipse-Client-Mode-Step-4
  • Wait until HiveMQ is started

    Debug-Plugin-In-Eclipse-Client-Mode-Step-5
  • Change to the Debug Perspective and check if the debugger is connected

    Debug-Plugin-In-Eclipse-Client-Mode-Step-6

Callbacks

A core concept of HiveMQ plugin development is using the callbacks HiveMQ provides to execute custom business logic on events when they occur.

To hook your callback implementations into HiveMQ, you can use the Callback Registry.

Example of Registering a Plugin Callback
import com.dcsquare.hivemq.spi.PluginEntryPoint;
import com.dcsquare.hivemq.spi.callback.lowlevel.OnPingCallback;
import com.dcsquare.hivemq.spi.security.ClientData;

import javax.annotation.PostConstruct;

public class TestPlugin extends PluginEntryPoint {

    @PostConstruct
    public void postConstruct() {
        getCallbackRegistry().addCallback(new OnPingCallback() { // (1)
            @Override
            public void onPingReceived(ClientData clientData) {
                System.out.println("Ping received");
            }
        });
    }
}
(1) Registering the anonymous inner callback class
Callback Hierarchy
Figure 2. The Callback Hierarchy

Every callback interface which can be implemented is either an Asynchronous or a Synchronous callback.

The only difference that matters for plugin implementors is that Synchronous Callbacks have priorities. Priorities define the order of callback execution when there is more than one implementation of a callback. Use the constants from the com.dcsquare.hivemq.spi.callback.CallbackPriority class or return your own numerical value. Lower values mean higher priority.
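Conceptually, the priority mechanism boils down to sorting callback instances by their numeric priority() value in ascending order. Here is a minimal self-contained sketch of that ordering; the SynchronousCallback interface below is a stand-in for illustration, not the actual SPI type:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class PriorityOrdering {

    // Stand-in for a synchronous callback; the real SPI interfaces
    // additionally declare the callback method itself.
    interface SynchronousCallback {
        int priority();
        String name();
    }

    static SynchronousCallback callback(String name, int priority) {
        return new SynchronousCallback() {
            public int priority() { return priority; }
            public String name() { return name; }
        };
    }

    public static void main(String[] args) {
        List<SynchronousCallback> callbacks = new ArrayList<>(Arrays.asList(
                callback("logging", 100),
                callback("authentication", 10),
                callback("validation", 50)));

        // Lower numerical values mean higher priority, so sort ascending.
        callbacks.sort(Comparator.comparingInt(SynchronousCallback::priority));

        for (SynchronousCallback cb : callbacks) {
            System.out.println(cb.name());
        }
        // prints: authentication, validation, logging
    }
}
```

With the values above, the "authentication" callback (priority 10) executes first and "logging" (priority 100) last.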

Synchronous callbacks are executed in order because they typically allow interfering with HiveMQ's message processing, authentication and authorization mechanisms. Naturally, it is very important to avoid blocking in Synchronous Callbacks.

Synchronous Callback Execution
Figure 3. Synchronous Callback Execution

Asynchronous Callbacks are executed in parallel, and HiveMQ synchronizes after the parallel execution. That means a single slow callback can block HiveMQ even though all other callbacks have already finished.

Asynchronous Callback Execution
Figure 4. Asynchronous Callback Execution
Same priorities on callbacks
Several implementations of the same callback interface must not have the same priority.

Overview of all Callbacks

Table 3. Callbacks
Name Type Description

Broker Event Callbacks

OnBrokerStart

Synchronous

Called when the broker starts up. This happens before bootstrapping functionality like binding network interfaces.

OnBrokerStop

Synchronous

Called when the broker stops.

OnStatisticsUpdate

Asynchronous

Called when the broker updates the statistics.

MQTT Message Callbacks

OnConnectCallback

Synchronous

Called when a CONNECT message arrives.

OnDisconnectCallback

Asynchronous

Called when a DISCONNECT message arrives or a TCP connection loss occurs.

OnPublishCallback

Synchronous

Called when a PUBLISH MQTT message arrives.

OnSubscribeCallback

Synchronous

Called when an MQTT SUBSCRIBE message arrives.

OnUnsubscribeCallback

Synchronous

Called when an MQTT UNSUBSCRIBE message arrives.

Security Callbacks

AfterLoginCallback

Asynchronous

Called when a client made a successful or unsuccessful login attempt.

OnAuthenticationCallback

Synchronous

Called after a CONNECT message arrives to check the client's credentials.

OnAuthorizationCallback

Synchronous

Returns MqttTopicPermissions when a client publishes or subscribes.

OnBrokerStart

Type

Synchronous

Purpose

The com.dcsquare.hivemq.spi.callback.events.broker.OnBrokerStart callback is useful for implementing custom startup logic and verifications. A common use case is for example verifying that a database connection can be obtained or that expected files are available and valid.

Interfering with HiveMQ

It is possible to throw a com.dcsquare.hivemq.spi.callback.exception.BrokerUnableToStartException. When the onBrokerStart() method of the OnBrokerStart callback implementation throws this exception, HiveMQ refuses to start.
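The kind of startup verification an OnBrokerStart implementation might perform can be sketched as follows. The exception class here is a local stand-in for the SPI's BrokerUnableToStartException (so the snippet is self-contained), and the checked file path is purely hypothetical:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class StartupCheck {

    // Local stand-in for com.dcsquare.hivemq.spi.callback.exception.BrokerUnableToStartException
    static class BrokerUnableToStartException extends Exception {
        BrokerUnableToStartException(String message) { super(message); }
    }

    // Example verification logic an onBrokerStart() method could run:
    // refuse to start unless an expected configuration file is present.
    static void onBrokerStart(Path requiredConfig) throws BrokerUnableToStartException {
        if (!Files.exists(requiredConfig)) {
            throw new BrokerUnableToStartException(
                    "Required configuration file missing: " + requiredConfig);
        }
    }

    public static void main(String[] args) {
        try {
            // hypothetical path, used only for demonstration
            onBrokerStart(Paths.get("/nonexistent/credentials.properties"));
            System.out.println("broker starts");
        } catch (BrokerUnableToStartException e) {
            System.out.println("HiveMQ refuses to start: " + e.getMessage());
        }
    }
}
```

In a real plugin the same check would live inside the onBrokerStart() method of an OnBrokerStart implementation, throwing the SPI's own exception class.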


OnBrokerStop

Type

Synchronous

Purpose

The com.dcsquare.hivemq.spi.callback.events.broker.OnBrokerStop callback is useful for implementing custom shutdown logic. A common use case is for example shutting down a database connection.

Interfering with HiveMQ

It’s not possible to interfere with HiveMQ directly with this callback.


OnStatisticsUpdate

Type

Asynchronous

Purpose

The com.dcsquare.hivemq.spi.callback.events.broker.OnStatisticsUpdate callback is useful for working with the HiveMQ statistics. A common use case is for example publishing the statistics to a persistent store like a database.

Interfering with HiveMQ

It’s not possible to interfere with HiveMQ directly with this callback. You can update some statistics on the com.dcsquare.hivemq.spi.statistics.HiveMQStatistics object which gets passed to the onStatisticsUpdate method. In general, this callback should be used as a read-only callback, though.

This callback is only called once in a statistics update interval and only if statistics are enabled in HiveMQ.

OnConnectCallback

Type

Synchronous

Purpose

The com.dcsquare.hivemq.spi.callback.events.OnConnectCallback is useful for performing custom logic when an MQTT CONNECT message arrives. A common use case is, for example, logging when a client connects. Useful edge cases for this callback are when you need to verify parts of the CONNECT message, such as the LWT (Last Will and Testament) part.

Interfering with HiveMQ

It is possible to throw a com.dcsquare.hivemq.spi.callback.exception.RefusedConnectionException to disconnect a client with a specific return code for the CONNACK message. A CONNECT message object and a ClientData object (which contains information about the credentials and the optional client authentication certificate) are passed as parameters to the onConnect method.

Although this is possible for scenarios with only one authentication resource, this callback is not designed to perform authentication; please use the OnAuthenticationCallback for that purpose.

OnDisconnectCallback

Type

Asynchronous

Purpose

The com.dcsquare.hivemq.spi.callback.events.OnDisconnectCallback is useful for performing custom logic when a client disconnects due to a MQTT DISCONNECT message or a TCP connection loss.

Interfering with HiveMQ

It’s not possible to interfere with HiveMQ directly with this callback. The abruptAbort parameter indicates whether there was a TCP connection loss (abruptAbort = true) or a graceful disconnect with an MQTT DISCONNECT message.


OnPublishCallback

Type

Synchronous

Purpose

The com.dcsquare.hivemq.spi.callback.events.OnPublishCallback gets called when an MQTT PUBLISH message arrives. This callback is useful for interfering with PUBLISH messages, e.g. verifying that the message payload has specific semantics. Don’t use this callback for topic-based permissions.

This callback gets called very often. Please make sure you use proper caching and that you don’t block.
Interfering with HiveMQ

It’s possible to throw a com.dcsquare.hivemq.spi.callback.exception.OnPublishReceivedException when the published message is not valid from a business logic point of view. The publishing client can be disconnected by passing true as a parameter to the OnPublishReceivedException.
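A sketch of such a business-logic validation is shown below. The exception class is a local stand-in for the SPI's OnPublishReceivedException (so the snippet compiles on its own), and the "payload must look like a JSON object" rule is a deliberately simplified, hypothetical example:

```java
import java.nio.charset.StandardCharsets;

public class PublishValidation {

    // Local stand-in for com.dcsquare.hivemq.spi.callback.exception.OnPublishReceivedException;
    // the boolean indicates whether the publishing client should be disconnected.
    static class OnPublishReceivedException extends Exception {
        final boolean disconnectClient;
        OnPublishReceivedException(String message, boolean disconnectClient) {
            super(message);
            this.disconnectClient = disconnectClient;
        }
    }

    // Example business rule: accept only payloads that look like JSON objects
    // (approximated here by a leading '{' for brevity).
    static void onPublishReceived(byte[] payload) throws OnPublishReceivedException {
        String text = new String(payload, StandardCharsets.UTF_8).trim();
        if (!text.startsWith("{")) {
            throw new OnPublishReceivedException("Payload is not a JSON object", true);
        }
    }

    public static void main(String[] args) throws OnPublishReceivedException {
        onPublishReceived("{\"temp\": 21}".getBytes(StandardCharsets.UTF_8)); // accepted
        try {
            onPublishReceived("plain text".getBytes(StandardCharsets.UTF_8));
        } catch (OnPublishReceivedException e) {
            System.out.println("rejected, disconnect=" + e.disconnectClient);
        }
    }
}
```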

This callback is not meant to be used for topic restrictions. Please see the Client Authorization Chapter for more details.

OnPublishSend

Type

Asynchronous

Purpose

The com.dcsquare.hivemq.spi.callback.events.OnPublishSend callback gets called when an outgoing MQTT PUBLISH message is going to be sent to a subscribing client.

This callback gets called very often. Please make sure you use proper caching and that you don’t block.
Interfering with HiveMQ

It’s not possible to interfere with HiveMQ directly with this callback.


OnSubscribeCallback

Type

Synchronous

Purpose

The com.dcsquare.hivemq.spi.callback.events.OnSubscribeCallback callback gets called when an MQTT SUBSCRIBE message arrives. It is useful for validating SUBSCRIBE messages or logging subscriptions. This callback is not designed to be used for authorization. Please see the Authorization chapter for more details.

Interfering with HiveMQ

It’s possible to throw a com.dcsquare.hivemq.spi.callback.exception.InvalidSubscriptionException when the SUBSCRIBE message is invalid. The client gets disconnected when the exception is thrown.


OnUnsubscribeCallback

Type

Asynchronous

Purpose

The com.dcsquare.hivemq.spi.callback.events.OnUnsubscribeCallback callback gets called when an MQTT UNSUBSCRIBE message arrives.

Interfering with HiveMQ

It’s not possible to interfere with HiveMQ directly with this callback.


AfterLoginCallback

Type

Asynchronous

Purpose

The com.dcsquare.hivemq.spi.callback.security.AfterLoginCallback gets called when a client has made a successful or unsuccessful login attempt, i.e. after the client sent a CONNECT message. The callback offers a method afterSuccessfulLogin for successful logins and a method afterFailedLogin for unsuccessful login attempts.

Interfering with HiveMQ

It’s not possible to interfere with HiveMQ directly with this callback.


OnAuthenticationCallback

Type

Synchronous

Purpose

The com.dcsquare.hivemq.spi.callback.security.OnAuthenticationCallback gets called after a CONNECT message arrives and handles the authentication of the client. The username and password from the MQTT CONNECT message can be used to authenticate the client. When client certificate authentication is used on the transport layer, the client certificate can also be used to authenticate on the application layer.

If your scenario allows this, you should use Caching.
Interfering with HiveMQ

The checkCredentials method must return either true or false, depending on whether the authentication was successful. When the authentication wasn’t successful and you want to control the return code of the CONNACK MQTT message, you can throw an AuthenticationException. This has the side effect that all other plugins which implement authentication are ignored once the exception is thrown.


OnAuthorizationCallback

Type

Synchronous

Purpose

The com.dcsquare.hivemq.spi.callback.security.OnAuthorizationCallback is responsible for returning com.dcsquare.hivemq.spi.topic.MqttTopicPermissions. Every time a client performs a specific action on a topic, like publishing or subscribing, the callback is executed to check if the client is allowed to do this.

This callback gets called very often, so make sure you use proper Caching.
Interfering with HiveMQ

HiveMQ uses the returned MqttTopicPermissions to check if a client is allowed to do a specific action.


OnInsufficientPermissionsCallback

Type

Asynchronous

Purpose

The com.dcsquare.hivemq.spi.callback.security.OnInsufficientPermissionsCallback gets called when a client was disconnected due to insufficient permissions when publishing or subscribing. At the time the callback gets executed, the client has already been disconnected, so this is a purely informational callback.

Interfering with HiveMQ

It’s not possible to interfere with HiveMQ directly with this callback.


RestrictionsAfterLoginCallback

Type

Synchronous

Purpose

The com.dcsquare.hivemq.spi.callback.security.RestrictionsAfterLoginCallback gets executed after a CONNECT message arrives and the client has been authenticated successfully. This callback provides restrictions which affect only a specific client, e.g. throttling or maximum allowed MQTT message sizes.

Interfering with HiveMQ

The callback returns a set of restrictions which are applied to the specific client.

Authentication and Authorization

One of the many use cases for writing a HiveMQ plugin is the implementation of client authentication and authorization. The callbacks enable plugin developers, among other things, to completely customize the authentication and authorization behavior.

Client Authentication

The following sequence diagram shows an overview of what happens in the HiveMQ core, when a new client is trying to connect and how a plugin can interfere with it.

Client Authentication Callbacks
Figure 5. Execution flow of callbacks during and after the client authentication
Default Behavior
When no plugin is present, all clients are authenticated successfully.

Implement username/password authentication

Example of a username/password authentication callback
public class UserAuthentication implements OnAuthenticationCallback {

    Logger log = LoggerFactory.getLogger(UserAuthentication.class);

    @Override
    public Boolean checkCredentials(ClientCredentialsData clientData) throws AuthenticationException { // (1)


        String username;
        if (!clientData.getUsername().isPresent()) { // (4)
            throw new AuthenticationException("No Username provided", ReturnCode.REFUSED_NOT_AUTHORIZED); // (2)
        }
        username = clientData.getUsername().get();


        if (Strings.isNullOrEmpty(username)) {
            throw new AuthenticationException("No Username provided", ReturnCode.REFUSED_NOT_AUTHORIZED); // (2)
        }

        Optional<String> password = Optional.fromNullable(retrievePasswordFromDatabase(username));

        if (!password.isPresent()) {
            throw new AuthenticationException("No Account with the credentials was found!", ReturnCode.REFUSED_NOT_AUTHORIZED); // (2)
        } else {
            if (clientData.getPassword().get().equals(password.get())) {
                return true;
            }
            return false;
        }

    }

    @Cached(timeToLive = 10, timeUnit = TimeUnit.MINUTES) // (3)
    private String retrievePasswordFromDatabase(String username) {

        String password = null; // call to any database to ask for the password of the user

        return password;
    }

    @Override
    public int priority() {
        return CallbackPriority.MEDIUM;
    }
}
(1) ClientData holds all data provided by the client and can be used to identify it.
(2) If an AuthenticationException is thrown, the client will be disconnected independently of possible other authentication plugins; see the guidelines for multiple plugins.
(3) The @Cached annotation is used to cache the request for a particular username for 10 minutes, therefore the plugin only blocks when it fetches the value from the database.
(4) ClientData uses the Optional class from Google Guava, which allows better handling of possible null values.
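Conceptually, @Cached gives the effect of a time-to-live cache wrapped around the annotated method. The following hand-rolled sketch illustrates that behavior; it is illustrative only and not HiveMQ's actual implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TtlCache {

    private static final long TTL_MILLIS = 10 * 60 * 1000; // 10 minutes

    private static class Entry {
        final String value;
        final long storedAt;
        Entry(String value, long storedAt) { this.value = value; this.storedAt = storedAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    int databaseCalls = 0; // counts hits on the (simulated) backing store

    // Wraps the expensive lookup the way @Cached conceptually does:
    // only hit the backing store when no fresh entry exists.
    String passwordFor(String username) {
        Entry e = cache.get(username);
        long now = System.currentTimeMillis();
        if (e == null || now - e.storedAt > TTL_MILLIS) {
            String fresh = retrievePasswordFromDatabase(username); // the slow call
            cache.put(username, new Entry(fresh, now));
            return fresh;
        }
        return e.value;
    }

    private String retrievePasswordFromDatabase(String username) {
        databaseCalls++; // stands in for a real database query
        return "secret-" + username;
    }

    public static void main(String[] args) {
        TtlCache c = new TtlCache();
        c.passwordFor("alice");
        c.passwordFor("alice"); // served from the cache, no second database call
        System.out.println(c.databaseCalls); // 1
    }
}
```

Within the TTL window, repeated calls for the same username never block on the database, which is exactly why the authentication callback above stays fast.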

Client Authorization

Authorization is verified every time a client tries to publish to a certain topic or subscribes to topics. The following diagram shows all possible flows of a PUBLISH message (the same flow applies for a SUBSCRIBE message):

Client Authorization Callback

The most interesting callback here is the OnAuthorizationCallback, which returns a list of MqttTopicPermissions for the client.

The list of MqttTopicPermissions should contain all permissions the client has. HiveMQ itself does the matching to determine whether a certain action is allowed by the client’s permissions.

The following snippet shows the available constructors for the MqttTopicPermission class.

Available constructors for MqttTopicPermission
    public MqttTopicPermission(String topic) // (1)

    public MqttTopicPermission(String topic, ALLOWED_ACTIVITY activity) // (2)

    public MqttTopicPermission(String topic, ALLOWED_QOS qos) // (3)

    public MqttTopicPermission(String topic, ALLOWED_QOS qos, ALLOWED_ACTIVITY activity) // (4)
(1) Only limits the topic a client can publish/subscribe to, but allows publish and subscribe with all Quality of Service (QoS) levels.
(2) Limits the topic and the client’s ability to publish, subscribe or do both; allows all QoS levels.
(3) Limits the topic and restricts the client to some or all of the QoS levels; allows publish and subscribe.
(4) Limits the topic, the allowed QoS levels and the client’s ability to subscribe/publish.
Example implementation of a permission which allows a client to publish/subscribe only to topics prefixed with its own client id.
public class ClientIdTopic implements OnAuthorizationCallback {

    @Override
    @Cached(timeToLive = 10, timeUnit = TimeUnit.MINUTES) // (2)
    public List<MqttTopicPermission> getPermissionsForClient(ClientData clientData) {

        List<MqttTopicPermission> mqttTopicPermissions = new ArrayList<MqttTopicPermission>();
        mqttTopicPermissions.add(new MqttTopicPermission(clientData.getClientId() + "/#")); // (1)

        return mqttTopicPermissions;

    }
}
(1) The permission allows a client to publish/subscribe only to topics which begin with its client id.
(2) The method is another ideal candidate for the @Cached annotation.
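The matching HiveMQ performs for such a "<clientId>/#" permission can be sketched as a simple prefix check. This is a simplified illustration only: it handles the trailing '#' wildcard and ignores the '+' wildcard and the QoS/activity options of MqttTopicPermission:

```java
public class ClientIdTopicMatch {

    // Sketch of the wildcard matching for a permission like "<clientId>/#":
    // '#' matches all remaining topic levels under the prefix.
    static boolean matches(String permissionTopic, String actualTopic) {
        if (permissionTopic.endsWith("/#")) {
            String prefix = permissionTopic.substring(0, permissionTopic.length() - 2);
            return actualTopic.equals(prefix) || actualTopic.startsWith(prefix + "/");
        }
        return permissionTopic.equals(actualTopic);
    }

    public static void main(String[] args) {
        String permission = "client-42/#"; // hypothetical client id "client-42"
        System.out.println(matches(permission, "client-42/status"));    // true
        System.out.println(matches(permission, "client-42/a/b"));       // true
        System.out.println(matches(permission, "other-client/status")); // false
    }
}
```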

Guidelines for multiple plugins

In case more than one authentication or authorization plugin is running simultaneously, additional aspects have to be taken into consideration.

AuthenticationException or false

There are two ways in an OnAuthenticationCallback to tell HiveMQ that the client is not allowed to connect:

  • return false

  • throw an AuthenticationException

If there is only a single plugin implementing the OnAuthenticationCallback, the only reason to throw an AuthenticationException is to customize the return code. In a multi-plugin scenario there is another important difference: When an AuthenticationException is caught by HiveMQ, the results of the other plugins have no meaning, because the client gets disconnected instantly. By contrast, when false is returned, HiveMQ synchronizes all callbacks and then computes the overall result. More on that in the next chapter.

How do you decide whether an AuthenticationException or return false should be used?
This is up to the developer, because it highly depends on the authentication logic of the particular use case. See the following example for advice.

An example use case that shows the difference: a client's user account can reside either in a MySQL or in a PostgreSQL database, so there are two plugins, one checking for the existence of the user in each database. If one of the plugins encounters that the client has not provided a username and password, an AuthenticationException should be thrown, because without this information the other plugin is most likely also unable to authenticate the client.

Difference between returning false and throwing an AuthenticationException
...
    @Override
    @Cached(timeToLive = 1, timeUnit = TimeUnit.MINUTES)
    public Boolean checkCredentials(ClientCredentialsData clientData) throws AuthenticationException {


        String clientUsername;
        String clientPassword;
        if (clientData.getUsername().isPresent() && clientData.getPassword().isPresent()) {
            clientUsername = clientData.getUsername().get();
            clientPassword = clientData.getPassword().get();
        }
        else
        {
            throw new AuthenticationException(ReturnCode.REFUSED_BAD_USERNAME_OR_PASSWORD); // (1)
        }

        String savedPassword = retrievePassword(clientUsername);

        if(clientPassword.equals(savedPassword))
        {
            return true;
        }
        else
        {
            return false; // (2)
        }
    }
...
(1) Username and password are not present and an AuthenticationException is thrown, because in the use case stated above no plugin can grant authentication without them.
(2) If the user is not present or the password does not match, returning false is the correct thing to do, preserving the chance for the other plugin to successfully authenticate the client.

Successful authentication/authorization

If none of the plugins has thrown an AuthenticationException, the return value of each plugin is taken into consideration to determine the overall result.

Criterion to authenticate or authorize a client
If one of the plugins returned true or a matching permission, HiveMQ authenticates the client or accepts the authorization request.
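The combination rule for authentication plugins can be sketched in plain Java as follows. The interface and exception below are local stand-ins for the SPI types, and the logic is a simplified model: an AuthenticationException refuses the client instantly, otherwise a single true from any plugin is enough:

```java
import java.util.Arrays;
import java.util.List;

public class AuthCombination {

    // Local stand-ins for the SPI types, for illustration only.
    static class AuthenticationException extends Exception {}

    interface AuthPlugin {
        Boolean checkCredentials() throws AuthenticationException;
    }

    // Sketch of how results from multiple authentication plugins combine:
    // an AuthenticationException short-circuits to "refused"; otherwise the
    // client is authenticated if at least one plugin returned true.
    static boolean authenticate(List<AuthPlugin> plugins) {
        boolean anyTrue = false;
        for (AuthPlugin plugin : plugins) {
            try {
                anyTrue |= plugin.checkCredentials();
            } catch (AuthenticationException e) {
                return false; // client disconnected instantly, other results ignored
            }
        }
        return anyTrue;
    }

    public static void main(String[] args) {
        AuthPlugin mysql = () -> false;   // user not found in MySQL
        AuthPlugin postgres = () -> true; // user found in PostgreSQL
        System.out.println(authenticate(Arrays.asList(mysql, postgres))); // true
    }
}
```

This mirrors the MySQL/PostgreSQL example above: one plugin returning false does not refuse the client as long as another plugin authenticates it.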

Packaging and Deployment

If you want to package the plugin and deploy it to your production environment, just follow these steps:
  • Execute the Maven goal package without the RunWithHiveMQ Profile (see Maven Plugin usage)

  • Find the jar file, following the naming convention <artifactId>-<version>.jar, in your project build folder, usually target

  • Copy the jar file and all configuration files (if needed) to the HiveMQ plugin folder (<HiveMQHome>/plugins) in your production environment

  • Start HiveMQ … Done!


1. Actually, HiveMQ utilizes the META-INF/services/ file approach with an enhanced service loader mechanism which allows dependency injection in plugins