Introduction

HiveMQ is an MQTT broker tailored to enterprises entering the emerging age of Machine-to-Machine communication (M2M) and the Internet of Things. It was built from the ground up with maximum scalability and enterprise-ready security concepts in mind. HiveMQ implements the Message Queue Telemetry Transport protocol, the de-facto M2M messaging standard, and its 100% compliance with the specification makes it a leading choice for companies that want to professionally adopt the full possibilities of the Internet of Things.

Pre-Installation Requirements

HiveMQ is a high performance MQTT broker and is designed to run on server hardware. While HiveMQ also runs on embedded devices, its full potential is unleashed on server hardware.

Hardware
  • At least 4GB of RAM

  • 4 or more CPUs

  • 10GB or more of free disk space

Operating Systems
  • Windows, Mac OS X or Linux. It is recommended to use a Linux distribution in production.

Environment
  • Oracle JRE 1.7 or newer must be installed.

System resources
HiveMQ scales with your system resources. If you scale up to more CPUs and RAM, HiveMQ delivers higher throughput and lower latencies. The performance of the persistence features is bound by the I/O performance of the underlying system.

Installation

This guide explains the default installation. If you want to try out and evaluate HiveMQ without installing it, you can follow the Getting Started Guide instead.

General installation information

HiveMQ comes as a zip file which contains the executables, init scripts and sample configurations.

The zip contains the following directories:

Folder name  Description

bin          Start scripts and binary files
conf         Configuration files
data         Persistent client data and cluster data
plugins      Plugin binaries
license      HiveMQ license file(s)
log          All log files

Example configurations
HiveMQ comes with many example configuration files in the conf/examples directory.

Installation for specific Operating Systems

This procedure shows how to install the most recent HiveMQ version.

Installation instructions for Unix based systems (Linux, BSD, Mac OS X, Unix)

The default installation directory is /opt/hivemq and the default user to run HiveMQ is named hivemq. If you need to install HiveMQ to a custom directory or run it under a custom user, make sure to change the HIVEMQ_DIRECTORY and/or HIVEMQ_USER variables in the $HIVEMQ_DIRECTORY/bin/start.sh script.

  1. Log in as root

    Some of the following commands need root privileges, so log in as root or use sudo to execute them.

  2. Change to the directory where you want to download and install HiveMQ. We recommend /opt.

    cd /opt
  3. Get your evaluation version from our website.

  4. Copy the provided download link and download HiveMQ

    wget --content-disposition <your download link>

    or

    curl -O -L <your download link>
  5. Extract the files

    unzip hivemq-<version>.zip
  6. Create hivemq symlink

    ln -s /opt/hivemq-<version> /opt/hivemq
  7. Create HiveMQ user

    useradd -d /opt/hivemq hivemq
  8. Make scripts executable and change owner to hivemq user

    chown -R hivemq:hivemq /opt/hivemq-<version>
    chown -R hivemq:hivemq /opt/hivemq
    cd /opt/hivemq
    chmod +x ./bin/run.sh
  9. Adjust the configuration properties to your needs.

    See the chapter Configuration for detailed instructions on how to configure HiveMQ.

    If you just want to try HiveMQ you can skip this part now and proceed with Starting HiveMQ.

  10. Install the init script (optional)

    For Debian-based Linux distributions (e.g. Debian, Ubuntu, Raspbian) using init.d scripts

    cp /opt/hivemq/bin/init-script/hivemq-debian /etc/init.d/hivemq
    chmod +x /etc/init.d/hivemq

    For Debian-based Linux distributions (e.g. Debian, Ubuntu, Raspbian) using systemd

    cp /opt/hivemq/bin/init-script/hivemq.service /etc/systemd/system/hivemq.service

    For all other Unix systems

    cp /opt/hivemq/bin/init-script/hivemq /etc/init.d/hivemq
    chmod +x /etc/init.d/hivemq
  11. Modify /etc/init.d/hivemq (optional)

    Set the HIVEMQ_HOME and the HIVEMQ_USER variable to the correct values for your system.

    By default this would be:

    HIVEMQ_HOME=/opt/hivemq

    HIVEMQ_USER=hivemq

    If you installed HiveMQ to a different directory than /opt/hivemq please point the HIVEMQ_HOME in your init script to the correct directory. Otherwise the daemon will not start correctly.

  12. Start HiveMQ on boot (optional)

    For Debian-based Linux distributions (e.g. Debian, Ubuntu, Raspbian)

    update-rc.d hivemq defaults

    For Debian-based linux like Debian, Ubuntu, Raspbian using systemd

    systemctl enable hivemq

    Debian > 6.0

    insserv hivemq

    CentOS or RHEL

    chkconfig hivemq on

Installation instructions for Windows based systems

  1. Download the latest HiveMQ version from our website: http://www.hivemq.com/downloads/

  2. Extract the file hivemq.zip to C:\hivemq using your favorite Zip unpack utility.

Installation as Windows Service

The steps to install HiveMQ as a Windows Service are:

  1. Download the hivemq-windows-service.zip file from here

  2. Unzip the hivemq-windows-service.zip file.

  3. Copy the windows-service folder to your HiveMQ home folder.

  4. Open the windows-service folder.

  5. Double click the installService.bat file.

  6. Reboot

Make sure you have permission to install a service. It might therefore be necessary to right-click installService.bat and select Run as administrator.

Starting HiveMQ

The following instructions show how to start HiveMQ after installation.

Starting HiveMQ on Unix based systems (Linux, BSD, Mac OS X, Unix) manually

  1. Change directory to HiveMQ directory

    cd /opt/hivemq
  2. Execute startup script

    ./bin/run.sh

Starting HiveMQ on Unix based systems (Linux, BSD, Mac OS X, Unix) as a daemon

  1. Start the daemon

    /etc/init.d/hivemq start

Starting HiveMQ on Windows based systems

Double click on the run.bat file.
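Alternatively you can start HiveMQ from a command prompt. The sketch below assumes the default extraction path from the installation chapter and that run.bat resides in the bin folder, mirroring the Unix layout:

cd C:\hivemq
bin\run.bat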

Testing the installation

The following instructions show how to verify that HiveMQ is up and running.

Verifying HiveMQ is running on Unix based systems (Linux, BSD, Mac OS X, Unix)

Check if HiveMQ is listening on the default MQTT port:

netstat -an|grep 1883

If you’re running HiveMQ as a daemon:

/etc/init.d/hivemq status
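For an end-to-end test you can also connect a real MQTT client. The following sketch uses the Mosquitto command line client, which is not part of HiveMQ and has to be installed separately; it subscribes to a hypothetical test topic and prints every message it receives:

mosquitto_sub -h localhost -p 1883 -t test -v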

Verifying HiveMQ is running on Windows based systems

Check if HiveMQ is listening on the default MQTT port. Open cmd.exe and run:

netstat -an|find "1883"

Using ports between 1 and 1024 on Linux machines

HiveMQ uses port 1883 (for standard MQTT) and 8883 (for MQTT + TLS) by default. Sometimes it may be desirable to use ports below 1024, which are reserved for the root user.

A common solution for running HiveMQ on ports below 1024 is authbind.

Installing authbind

Install authbind on Debian based systems
sudo apt-get install authbind
Install authbind on RHEL/CentOS
# Unfortunately authbind is not available in the standard RHEL repositories, so we have to install it manually
yum install -y gcc-c++
wget http://ftp.debian.org/debian/pool/main/a/authbind/authbind_2.1.1.tar.gz -O authbind.tar.gz
tar zxf authbind.tar.gz
cd authbind-2.1.1
make
make install

Configuring authbind

In this example we’re assuming we want to give HiveMQ the privilege to run on port 80.

touch /etc/authbind/byport/80
chmod 500 /etc/authbind/byport/80
chown hivemq:hivemq /etc/authbind/byport/80

Modifying the init script

Now we have to modify the init script to use authbind.

CentOS based Systems
sed -i 's|su $HIVEMQ_USER -c "$HIVEMQ_HOME/bin/run.sh >/dev/null 2>\&1 \&"|su $HIVEMQ_USER -c "exec /usr/local/bin/authbind --deep $HIVEMQ_HOME/bin/run.sh >/dev/null 2>\&1 \&"|g' /etc/init.d/hivemq
Debian based Systems
sed -i 's|su $HIVEMQ_USER -c "$HIVEMQ_HOME/bin/run.sh >/dev/null 2>\&1 \&"|su $HIVEMQ_USER -c "exec /usr/bin/authbind --deep $HIVEMQ_HOME/bin/run.sh >/dev/null 2>\&1 \&"|g' /etc/init.d/hivemq

Modifying the systemd script

Now we have to modify the systemd script to use authbind.

CentOS based Systems
sed -i 's|ExecStart=/usr/bin/java|ExecStart=/usr/local/bin/authbind --deep /usr/bin/java|g' /etc/systemd/system/hivemq.service
Debian based Systems
sed -i 's|ExecStart=/usr/bin/java|ExecStart=/usr/bin/authbind --deep /usr/bin/java|g' /etc/systemd/system/hivemq.service

After changing the file you need to reload the systemd daemon to pick up the changes.

systemctl daemon-reload
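Afterwards, restart HiveMQ through systemd so the modified ExecStart line takes effect (standard systemd usage):

systemctl restart hivemq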

Manually set specific HiveMQ folders

HiveMQ allows you to set specific folders manually for easier maintenance.

To do so, add one or several of the following options to your bin/run.sh file.

Each option is appended to the JAVA_OPTS variable, i.e. JAVA_OPTS="$JAVA_OPTS ...".

Table 1. Folder Configuration Options
Java Option                                Affected folder

-Dhivemq.home=/your/folder/here            Base folder (bin needs to be a subfolder of this folder)
-Dhivemq.license.folder=/your/folder/here  License files folder
-Dhivemq.log.folder=/your/folder/here      Log folder
-Dhivemq.config.folder=/your/folder/here   Configuration files folder
-Dhivemq.plugin.folder=/your/folder/here   Plugin binaries folder
-Dhivemq.data.folder=/your/folder/here     HiveMQ data folder

Example:

JAVA_OPTS="$JAVA_OPTS -Dhivemq.home=/mqtt/broker/hivemq"

Sets the HiveMQ home folder to /mqtt/broker/hivemq
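Several options can be combined in a single JAVA_OPTS line. The following sketch uses hypothetical paths and moves the log and data folders to a separate volume:

JAVA_OPTS="$JAVA_OPTS -Dhivemq.log.folder=/var/log/hivemq -Dhivemq.data.folder=/var/lib/hivemq"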

Troubleshooting

Port is already in use

If you see a message like the one below in your logs, another application is probably already running on the port HiveMQ wants to use.

2015-09-05 20:34:05,252 ERROR - Could not start TCP Listener on port 1883 and address 0.0.0.0. Is it already in use?

To find out which application is running on this port, use the following command on Linux:

lsof -iTCP:1883
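On Windows, a similar check is possible with netstat in cmd.exe; the last column of the output is the PID of the process occupying the port:

netstat -ano | find "1883"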

To solve the issue, you have basically two options:

  1. Stop the application already using that port.

  2. Start the HiveMQ listener(s) on another port. You can learn more about starting HiveMQ on another port in the Configuration Chapter.

Installing a HiveMQ license

Installing a HiveMQ license is very easy: just drop the hivemq.lic file you received into the license folder.

You can even add a license while HiveMQ is running; HiveMQ will pick up the license automatically.
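On a default Linux installation this boils down to copying the file into the license folder; the source path below is just a placeholder for wherever you stored the file you received:

cp /path/to/your/hivemq.lic /opt/hivemq/license/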

When a valid license file is found, HiveMQ logs a message similar to the following:

2015-09-05 20:49:44,322 INFO  - Found valid site license (hivemq.lic) issued to XXX for max XXX connections, valid until XXX.

Multiple license files

Since HiveMQ is often deployed in mission critical 24/7 systems, it is of course possible to add new license files on the fly, without restarting HiveMQ, if you obtain a new license (e.g. if you scale up your maximum concurrent connections).

HiveMQ recognizes multiple license files and automatically uses the license file with the highest number of concurrent connections or the license which is valid the longest. So you don’t need to restart HiveMQ just because your license changed.

Obtaining a HiveMQ license
If you want to obtain a HiveMQ license, please contact sales@hivemq.com.

Linux Configuration

If HiveMQ is running on a Linux OS, please make sure that the maximum number of files the HiveMQ process may open is sufficient. An easy way to do this is to add the following lines to the /etc/security/limits.conf file:

hivemq  hard    nofile  1000000
hivemq  soft    nofile  1000000
root    hard    nofile  1000000
root    soft    nofile  1000000
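The limits are applied at login time (via PAM), so they take effect for new sessions. To verify the limit for the hivemq user you can, for example, run:

su -s /bin/sh hivemq -c "ulimit -n"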

On systems with many connections it may also be necessary to enable the system to open more sockets and to tweak some TCP settings. To do this, add the following lines to the /etc/sysctl.conf file:

# Reduce the time sockets stay in the FIN-WAIT-2 state after the connection is closed.
net.ipv4.tcp_fin_timeout = 30

# The maximum file handles that can be allocated.
fs.file-max = 5097152

# Enable fast recycling of waiting sockets.
net.ipv4.tcp_tw_recycle = 1

# Allow to reuse waiting sockets for new connections when it is safe from protocol viewpoint.
net.ipv4.tcp_tw_reuse = 1

# The default size of receive buffers used by sockets.
net.core.rmem_default = 524288

# The default size of send buffers used by sockets.
net.core.wmem_default = 524288

# The maximum size of receive buffers used by sockets.
net.core.rmem_max = 67108864

# The maximum size of send buffers used by sockets.
net.core.wmem_max = 67108864

# The size of the receive buffer for each TCP connection. (min, default, max)
net.ipv4.tcp_rmem = 4096 87380 16777216

# The size of the send buffer for each TCP connection. (min, default, max)
net.ipv4.tcp_wmem = 4096 65536 16777216

For the changes to take effect, run sysctl -p or restart the system.
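You can also spot-check a single value, for example:

sysctl net.core.rmem_max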

Configuration

Configuration Files

HiveMQ is configured with sensible default settings. Therefore most users will find it sufficient to use the default values to get started. All configuration files are located in the conf folder of the HiveMQ directory.

HiveMQ uses a simple but powerful XML based configuration.

There is an XML Schema Definition file (XSD) available in the conf folder. Good text editors give you autocompletion and validation for your config.xml file based on the XSD file.

The config.xml file is read only once, during HiveMQ startup, so you have to restart HiveMQ before any changes to the file take effect. It is possible to change many settings at runtime with a custom plugin, though.

Changing settings with the Plugin System

Static configuration files are not sufficient for every use case. Sometimes settings need to be read from a database, a webservice needs to be called for configuration details (in case of centralized configuration storage), or settings need to be changed at runtime.

HiveMQ's powerful plugin system makes it easy to implement these kinds of requirements and exposes many services for reconfiguring HiveMQ at runtime.

The plugin development guide shows in detail how to use the plugin configuration services.

Using environment variables for configuration

In many cases (Docker, for example) you want to configure ports, bind addresses, etc. by setting environment variables on the system HiveMQ runs on. HiveMQ supports this with placeholders that are replaced with the content of environment variables at the time the configuration file is read.

You can use ${YOUR_ENVVAR_NAME} anywhere in the config.xml file and it will be replaced with the value of the specified environment variable.

Set environment variable
export HIVEMQ_PORT=18830
Use the environment variable in the configuration file
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    <listeners>
        <tcp-listener>
            <port>${HIVEMQ_PORT}</port>
        </tcp-listener>
    </listeners>
For HiveMQ this will result in
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    <listeners>
        <tcp-listener>
            <port>18830</port>
        </tcp-listener>
    </listeners>
Make sure that HiveMQ is started in the same context as your environment variables are set, otherwise HiveMQ will not be able to access them.
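For example, when running HiveMQ in a Docker container, an environment variable can be passed into the container's context with the -e flag. The image name below is a placeholder, since packaging HiveMQ into an image is up to you:

docker run -e HIVEMQ_PORT=18830 -p 18830:18830 your-hivemq-image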

Default Configuration

HiveMQ comes with sensible defaults. By default it will bind to all interfaces and port 1883.

HiveMQ is configured by default with the following settings:

  • The maximum allowed client identifier length is 65535

  • Maximum queued (in-flight) messages are set to 1000. After that limit is reached, HiveMQ will drop messages for that client

  • No maximum concurrent connection limit is applied (except if you have a license which is only valid for a specific amount of concurrent connections)

  • No throttling will take place

  • Clients will get disconnected if they don’t send a CONNECT message within 10 seconds of opening the TCP connection

  • HiveMQ will check for updates. See the Update Check chapter for more details

HiveMQ Default Config
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
    </listeners>
    <mqtt>
        <max-client-id-length>65535</max-client-id-length>
        <retry-interval>0</retry-interval>
        <no-connect-packet-idle-timeout-millis>10000</no-connect-packet-idle-timeout-millis>
        <max-queued-messages>1000</max-queued-messages>
    </mqtt>
    <throttling>
        <max-connections>-1</max-connections>
        <max-message-size>268435456</max-message-size>
        <outgoing-limit>0</outgoing-limit>
        <incoming-limit>0</incoming-limit>
    </throttling>
    <general>
        <update-check-enabled>true</update-check-enabled>
    </general>

</hivemq>
Example Configurations
HiveMQ comes with many example configurations to get you started quickly. All example configurations reside in the conf/examples/configuration folder. If you want to use one of the example configurations, copy it to the conf folder and name it config.xml.

Adding multiple listeners

By default HiveMQ binds to port 1883, which is the default MQTT port.

HiveMQ can be configured to use multiple listeners for different protocols. These listeners can be bound to specific network interfaces.

The following listener types are available:

Table 2. Available listener types
Listener                 Description

tcp-listener             A listener for MQTT which uses TCP
tls-tcp-listener         A listener for MQTT which uses TLS
websocket-listener       A listener for MQTT over websockets
tls-websocket-listener   A listener for MQTT over secure websockets (TLS)

The following configuration shows how to use multiple MQTT TCP Listeners.

Multiple Listeners
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    <listeners>
        <!-- Open to the outside world -->
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
        <!-- Only reachable for clients on the same machine -->
         <tcp-listener>
             <port>1884</port>
             <bind-address>127.0.0.1</bind-address>
         </tcp-listener>
    </listeners>
   ...

</hivemq>

The configuration of websocket listeners is discussed in the Websocket chapter and the configuration of MQTT over TLS is discussed in the SSL/TLS chapter.

You can bind different listeners to different network interfaces. If, for example, you want to offer a plain MQTT listener only on network interfaces for internal networks and a TLS listener on an internet-facing interface, this is easy to configure.

Update Check

HiveMQ has an automatic update check which writes to the log file when a new version is available.

The update check will send anonymized data about the HiveMQ installation.

The following data is included:

  • The HiveMQ version

  • The HiveMQ id

  • Information about the system (VM Information, System Architecture (e.g. x86_64), OS Information (e.g. Windows, Linux))

  • Information of installed plugins (name, version)

You can always disable the automatic update check in the config.xml. To disable the update check apply the following configuration:

Disable the update check
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <general>
        <update-check-enabled>false</update-check-enabled>
    </general>
    ...

</hivemq>

MQTT Configuration

HiveMQ implements the MQTT 3.1 and MQTT 3.1.1 specifications to 100%. For the parts the specifications leave open to the broker implementer, HiveMQ comes with sensible default values for all MQTT-related settings.

Table 3. MQTT Configuration Options
Option                                  Default Value  Description

max-client-id-length                    65535          The maximum allowed length of an MQTT client identifier.
retry-interval                          0              The retry interval, in seconds, for re-sending MQTT messages (like QoS 1 and QoS 2 messages) to a client when the previous message was not acknowledged.
no-connect-packet-idle-timeout-millis   10000          The time in milliseconds HiveMQ waits for a CONNECT packet before disconnecting the TCP connection.

Maximum Client Identifier length

The MQTT 3.1.1 specification allows client identifier lengths up to 65535 bytes.

Most applications don’t need to have such long client identifiers, so it may be useful to restrict client identifier lengths to your use case.

The following example shows how to change the maximum client identifier length:

Change the maximum client identifier length
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <mqtt>
        <!-- Restrict to 23 bytes to strictly conform to the MQTT 3.1 specification -->
        <max-client-id-length>23</max-client-id-length>
        ...
    </mqtt>
    ...

</hivemq>

If a client uses a longer client identifier in the CONNECT message, HiveMQ will reject the connection with a Connection Refused, identifier rejected error code.

MQTT 3.1 client identifier length
MQTT 3.1 defined an artificial length restriction of 23 bytes. HiveMQ omits this restriction and also allows up to 65535 bytes for MQTT 3.1 client identifiers. You can of course set the max-client-id-length value to 23 to enforce that limit.

Retry Interval

MQTT systems implement the Quality of Service 1 guarantees with a two-way message flow and the Quality of Service 2 guarantees with a four-way message flow. We recommend reading this blog post if you’re interested in the details of how Quality of Service flows work. The broker (as well as the client) is required to re-send a message if it never received an acknowledgement for that message in the quality of service flow.

By default, HiveMQ does not re-send messages.

If you are regularly dealing with very unreliable networks and very high latency, it may be useful to re-send messages in case an acknowledgement takes a long time to arrive.

Setting a retry interval
We strongly recommend not setting a retry interval for QoS 2 use cases. If you decide to do so, make sure your consumers can handle the duplicate message flag appropriately.

The following example shows how to set a retry interval

Change the retry interval
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <mqtt>
        <!-- Resend messages only once a minute-->
        <retry-interval>60</retry-interval>
        ...
    </mqtt>
    ...

</hivemq>
Setting the retry interval too low
If you set the retry interval too low, you can put too much pressure on MQTT clients which don’t answer fast enough (because of a slow connection or insufficient processing power). Make sure the interval is high enough that your clients have time to respond; otherwise you create unneeded backpressure and get lower performance instead of better performance.

Connection Timeouts

MQTT, as a layer 7 protocol in the OSI model, relies on TCP, so clients are required to open a TCP connection before they can send an MQTT CONNECT message to initiate the MQTT connection.

Because MQTT operates at the application layer, a client that initiates a TCP connection has not necessarily initiated an MQTT connection yet. Malicious MQTT clients could therefore drain server resources by opening TCP connections and never sending an MQTT CONNECT message. Such clients can attack your MQTT broker by draining system resources (like memory) quickly.

To avoid these kinds of attacks, it’s important to disconnect clients which don’t initiate an MQTT connection as soon as possible.

By default, HiveMQ waits 10 seconds for the CONNECT message of a client before it closes the open TCP socket. You can tune this behaviour to your application's needs.

Change the idle timeout
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <mqtt>
        ...
        <!-- Disconnect idle clients after 10 seconds -->
        <no-connect-packet-idle-timeout-millis>10000</no-connect-packet-idle-timeout-millis>
        ...
    </mqtt>
    ...

</hivemq>
Clients with a slow connection
If you have clients on a network with very high latency, a few seconds might not be enough and these clients could get disconnected even though they are trying to send a CONNECT message. Make sure the timeout fits your use case.

Maximum Queued In-Flight Messages

The MQTT specification states that topics must be treated as Ordered Topics. Ordered topics guarantee that each QoS 1 or 2 message flow for a specific topic finishes before the next QoS message flow starts. That means it is guaranteed that all QoS 1 and 2 messages are delivered in order.

HiveMQ treats all topics which are subscribed by a specific client as Ordered Topics.

This also means that HiveMQ queues QoS 1 and 2 messages which can’t be delivered to a client immediately because another message flow for the topic is in progress. If the client consumes messages more slowly than HiveMQ receives new matching messages, these new messages queue up for that specific client. Because this can drain system resources (memory) quickly, HiveMQ discards messages once the configured max-queued-messages limit is exceeded for a specific client.

The queued message limit is the limit for all queued messages of an individual client, independent of the number of concrete topics the client subscribed to.

The following example sets a lower queued in-flight message limit:

Configure a lower queued messages limit
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <mqtt>
        <!-- Queue fewer in-flight messages per client -->
        <max-queued-messages>500</max-queued-messages>
    </mqtt>
    ...

</hivemq>
This setting configures queued in-flight messages. There is also a similar setting in the persistence configuration which is used for queued messages of an offline client with a persistent session. Don’t mix up these two concepts!

Monitoring dropped messages

In a healthy MQTT environment, messages should never be dropped, especially if your MQTT clients rely on receiving important messages.

It’s highly recommended to monitor whether messages are dropped and whether the average number of queued messages reaches a critical level.

The following HiveMQ metrics can be used to monitor the drop rate (and total count) of messages and the average queue utilization in the system.

Table 4. Monitoring Metrics for Dropped Messages
Metric                                     Description

com.hivemq.messages.dropped.rate           The rate of dropped messages per second. This metric also exposes a total counter of dropped messages.
com.hivemq.clients.half-full-queue.count   The number of clients which have at least 50% message queue utilization.

To prevent message drops, we recommend that the average number of queued messages does not exceed 50% of the maximum number of queued messages for extended periods.

Unlimited queuing

Use with caution
It’s not recommended to allow unlimited message queuing. Its use should be restricted to very specific cases where heap size is not an issue.

It’s possible to configure HiveMQ to allow unlimited message queuing for in-flight QoS 1 and 2 messages.

If you really want to allow unlimited in-flight message queuing for your MQTT clients, you can do it with the following configuration:

Unlimited in-flight message queuing
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <mqtt>
        <!-- Allow unlimited message queuing (NOT RECOMMENDED!) -->
        <max-queued-messages>-1</max-queued-messages>
    </mqtt>

</hivemq>

QoS 0 message ordering

In conformance with the MQTT specification, all QoS 0 messages are delivered immediately, so they can be delivered out of order relative to QoS 1 and 2 messages. QoS 0 messages are delivered to clients in the same order as HiveMQ received them, due to TCP ordering guarantees, but this does not necessarily mean that a QoS 2 message is received by a client before a QoS 0 message, even if HiveMQ received the QoS 2 message first. This difference is subtle but very important.

Unordered Topics

Unordered Topics are not supported by HiveMQ at the moment. If you feel this feature would be important to you, please contact support@hivemq.com.

Persistence Configuration

In order to guarantee consistency of data between broker restarts, HiveMQ uses disk persistence by default. That means that even if the broker stops or crashes, all data will be preserved and after a restart the broker can continue its operation as if nothing happened.

It’s also possible to configure in-memory persistence, which can significantly improve performance at the cost of losing all state when the broker stops.

All persistence data is stored in the data folder of HiveMQ. If you want to reset HiveMQ, just delete the data folder when HiveMQ is stopped.

The HiveMQ Persistence Subsystem consists of the following components which can be configured separately:

Table 5. Persistence Components
Name                                         Description

Client Session Persistence                   The persistence store for persistent session information
Client Session Queued Messages Persistence   The persistence store for queued messages of offline clients
Client Session Subscriptions Persistence     The persistence store for subscriptions of a persistent session
Client Incoming Message Flow Persistence     The persistence store for incoming QoS 1 and 2 message flows
Client Outgoing Message Flow Persistence     The persistence store for outgoing QoS 1 and 2 message flows
Client Retained Message Persistence          The persistence store for retained messages

A common requirement is to read and write the persistence files on disk with other tools. HiveMQ's persistence is designed for high throughput and lowest latency; it is not a general purpose store, let alone a database. To implement such requirements, use the plugin system.

File persistence

By default, all persistence stores are backed by the file system of the machine HiveMQ is running on. So even if HiveMQ stops or restarts, no state is lost: after restarting the broker, all clients keep their persistent sessions, all queued messages for offline persistent clients are preserved, MQTT message flows can resume, and retained messages are available.

HiveMQ has various mechanisms to achieve maximum write and read performance, but in the end the performance of persisting data is I/O bound. So if you find that disk reads and writes are too slow, better hardware (like SSDs) can help boost performance.

While file persistence is the default mode, you can explicitly enable file persistence by using

<mode>file</mode>

for each individual persistence. The following configuration file shows how to manually set file persistence to each persistence store.

File Persistence Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <!-- Persistence with explicit file persistence set -->
    <persistence>
        <client-session>
            <general>
                <mode>file</mode>
            </general>
            <queued-messages>
                <mode>file</mode>
            </queued-messages>
            <subscriptions>
                <mode>file</mode>
            </subscriptions>
        </client-session>
        <message-flow>
            <incoming>
                <mode>file</mode>
            </incoming>
            <outgoing>
                <mode>file</mode>
            </outgoing>
        </message-flow>
        <retained-messages>
            <mode>file</mode>
        </retained-messages>
    </persistence>
    ...
</hivemq>

If you’re in doubt whether the disk performance is good enough for you, we suggest benchmarking HiveMQ with QoS 1 and 2 messages and persistent clients. You can also look at the benchmarks on our website.

File Persistence Behaviour Configuration

If you are using file persistence, there are additional configuration parameters for the behaviour of HiveMQ's persistence implementation.

Experts only
HiveMQ provides sane default values for these settings; only change them if you know what the consequences are.
Table 6. File persistence configurations
Configuration                       Default Value  Description

jmx-enabled                         true           Enables/disables JMX metrics for the internal file persistences.
garbage-collection-type             delete         Possible values are delete or rename. If set to rename, files are renamed before they are deleted.
garbage-collection-deletion-delay   60000          Amount of time in milliseconds to wait until a file is deleted permanently.
garbage-collection-run-period       30000          Interval in milliseconds in which the background garbage collection is triggered.
garbage-collection-files-interval   1              Garbage collection is triggered after this many new files are created.
garbage-collection-min-file-age     2              Minimum number of newer file versions before a file is deleted.
sync-period                         1000           Time in milliseconds to wait between flushes to disk.
durable-writes                      false          Whether each write should flush to disk immediately.

File Persistence Behaviour Configuration Example
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <persistence>
       <client-session>
           <general>
               <mode>file</mode>
               <file-persistence-configuration>
                   <jmx-enabled>true</jmx-enabled>
                   <garbage-collection-type>delete</garbage-collection-type>
                   <garbage-collection-deletion-delay>60000</garbage-collection-deletion-delay>
                   <garbage-collection-run-period>30000</garbage-collection-run-period>
                   <garbage-collection-files-interval>1</garbage-collection-files-interval>
                   <garbage-collection-min-file-age>2</garbage-collection-min-file-age>
                   <sync-period>1000</sync-period>
                   <durable-writes>false</durable-writes>
               </file-persistence-configuration>
           </general>
       </client-session>
    </persistence>
</hivemq>

In-Memory persistence

If it’s not important for you that persistent clients retain their subscriptions, or if you don’t need HiveMQ to hold state between broker restarts but are looking for extreme performance, you should consider using in-memory persistence.

In-Memory persistence offers stellar performance and latencies at the cost of losing all state after restarting HiveMQ. This option is often used with clustering, since new cluster nodes receive state from other cluster nodes on startup and cluster nodes are often ephemeral.

Each persistence store can be configured individually to run in file persistence or in-memory mode. The following configuration shows an example with all persistence stores configured to run in-memory:

Manually configuring in-memory persistence
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <!-- Persistence with in-memory persistence set -->
    <persistence>
        <client-session>
            <general>
                <mode>in-memory</mode>
            </general>
            <queued-messages>
                <mode>in-memory</mode>
            </queued-messages>
            <subscriptions>
                <mode>in-memory</mode>
            </subscriptions>
        </client-session>
        <message-flow>
            <incoming>
                <mode>in-memory</mode>
            </incoming>
            <outgoing>
                <mode>in-memory</mode>
            </outgoing>
        </message-flow>
        <retained-messages>
            <mode>in-memory</mode>
        </retained-messages>
        <publish-payloads>
            <mode>in-memory</mode>
        </publish-payloads>
    </persistence>
    ...
</hivemq>
Client Session Persistence
You should strongly consider running all the Client Session Persistence stores either in-memory or with disk persistence, unless you know exactly what you’re doing.

Client Session Persistence Configuration

The Client Session Persistence is responsible for storing all data about a persistent session.

Don’t set this store to in-memory mode if you want to set the other Client Session Stores to use a disk-based persistence!

The Client Session Persistence has the following configuration options:

Table 7. Configuration options
Name  Default  Description

mode  file     in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Session Persistence Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <persistence>
        <client-session>
            <!-- Changing the Client Session Persistence Configuration -->
            <general>
                <mode>in-memory</mode>
            </general>
            ...
        </client-session>
        ...
    </persistence>
    ...
</hivemq>

Client Session Queued Messages Persistence Configuration

The Client Session Queued Messages Persistence is responsible for storing the queued messages for offline clients.

When a client with a persistent session subscribes to a topic with QoS 1 or 2, HiveMQ will save all missed messages for these topics if the client goes offline.

Queuing unlimited messages for offline clients can drain system resources (disk space), so HiveMQ will limit the saved messages for each client to a specific number.

The Client Session Queued Messages Persistence has the following configuration options:

Table 8. Configuration options
Name                      Default  Description

mode                      file     in-memory for memory based persistence, file for disk based persistence
max-queued-messages       1000     The maximum number of queued messages for a specific client. When that limit is reached, HiveMQ will drop messages for that client.
queued-messages-strategy  discard  The strategy when the maximum number of queued messages is reached: discard drops new messages, discard-oldest drops the oldest queued message when a new message arrives.

The following example shows how to set these configuration options:

Configure the Client Session Queued Messages Persistence
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <persistence>
         <queued-messages>
             <!-- Limit the maximum queued messages per client to 100 -->
             <max-queued-messages>100</max-queued-messages>
             <!-- Discard the oldest message if a new message arrives -->
             <queued-messages-strategy>discard-oldest</queued-messages-strategy>
             <!-- Use in-memory persistence -->
             <mode>in-memory</mode>
         </queued-messages>
        ...
    </persistence>
    ...
</hivemq>
Message Queuing for offline clients
Only QoS 1 and 2 messages for persistent MQTT sessions are queued. If no messages are queued for a client, a classic mistake is forgetting to use a persistent session or forgetting to subscribe with QoS 1 or 2.
Disabling message queuing

It’s possible to disable message queuing for offline clients if you don’t need this MQTT functionality.

To disable message queuing, set the max-queued-messages setting to 0.

It’s also recommended to use in-memory persistence for queued messages in this case. This saves some disk space.

Disabling queued messages
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <persistence>
         <queued-messages>
             <!-- No queued messages -->
             <max-queued-messages>0</max-queued-messages>
             <!-- Save some disk space -->
             <mode>in-memory</mode>
         </queued-messages>
        ...
    </persistence>
    ...
</hivemq>

Client Session Subscriptions Persistence Configuration

Persistent clients don’t lose their granted subscriptions, even if they are offline and reconnect.

HiveMQ persists these subscriptions by default to disk, so they are not lost even if HiveMQ restarts. The Client Session Subscriptions Persistence is responsible for storing these subscriptions.

The Client Session Subscription Persistence has the following configuration options:

Table 9. Configuration options
Name  Default  Description

mode  file     in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Session Persistence Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <persistence>
        <client-session>
            ...
            <!-- Changing the Client Session Subscription Persistence Configuration -->
            <subscriptions>
                <mode>in-memory</mode>
            </subscriptions>
            ...
        </client-session>
        ...
    </persistence>
    ...
</hivemq>

Client Incoming Message Flow Persistence Configuration

Quality of Service 2 messages need a four-way communication in order to guarantee exactly-once semantics. MQTT clients and brokers must resume the message flows on reconnect, in case the client disconnected or the broker stopped.

The Incoming Message Flow Persistence stores the progress of the QoS 2 message flow for incoming MQTT PUBLISH messages. By default it uses disk persistence.

If you’re using in-memory mode, you could weaken the exactly-once semantics after a broker restart, so it’s strongly recommended to use the default persistence, unless you’re in a HiveMQ HA cluster.

The Client Incoming Message Flow Persistence has the following configuration options:

Table 10. Configuration options
Name  Default  Description

mode  file     in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Incoming Message Flow Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <message-flow>
        <!-- Changing the incoming message flow -->
        <incoming>
            <mode>in-memory</mode>
        </incoming>
    </message-flow>
    ...
</hivemq>

Client Outgoing Message Flow Persistence Configuration

Quality of Service 1 and 2 messages need a two-way or four-way communication, respectively, in order to guarantee the QoS semantics. MQTT clients and brokers must resume the message flows on reconnect, in case the client disconnected or the broker stopped.

The Outgoing Message Flow Persistence stores the progress of the QoS 1 and 2 message flows for outgoing MQTT PUBLISH messages. By default it uses disk persistence.

If you’re using in-memory mode, you could weaken the exactly-once semantics after a broker restart, so it’s strongly recommended to use the default persistence, unless you’re in a HiveMQ HA cluster.

The Client Outgoing Message Flow Persistence has the following configuration options:

Table 11. Configuration options
Name  Default  Description

mode  file     in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Outgoing Message Flow Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <message-flow>
        <!-- Changing the outgoing message flow -->
        <outgoing>
            <mode>in-memory</mode>
        </outgoing>
    </message-flow>
    ...
</hivemq>

Client Retained Message Persistence Configuration

HiveMQ stores the retained messages by default on disk, so even after broker restarts the retained messages are available to new subscribers.

If it’s not important for you to have retained messages available after broker restarts, it’s possible to use in-memory persistence.

The Retained Message Persistence has the following configuration options:

Table 12. Configuration options
Name  Default  Description

mode  file     in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Retained Message Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <!-- Changing the retained message to be in-memory -->
    <retained-messages>
        <mode>in-memory</mode>
    </retained-messages>
    ...
</hivemq>

Security

From the ground up, HiveMQ was designed with maximum security in mind. For many IoT & M2M scenarios it is mission critical to enable secure, encrypted end-to-end communication and advanced authentication and authorization features. HiveMQ gives you the flexibility to enable the specific security features for your concrete use case. The following sections show how to enable and configure these security features.

If you’re new to MQTT security concepts, we recommend reading our MQTT Security Fundamentals Series on our blog.

SSL/TLS

Transport Layer Security (TLS) is a cryptographic protocol which allows secure, encrypted communication at the transport layer between a client application and a server. If a TLS listener is enabled in HiveMQ, each client connection for that listener is encrypted and secured by TLS. It is also possible to use X.509 client certificate authentication; see the chapter X.509 Certificate Authentication for more details.

Multiple listeners
You can configure HiveMQ with multiple listeners so HiveMQ can handle secure and insecure connections simultaneously. See the Hybrid Mode chapter for more details.

For usage scenarios where sensitive information is published via MQTT, it is strongly recommended to enable TLS. When configured correctly, it is very, very hard [1] for an attacker to break the encryption and read the packets on the wire. Since TLS is a proven technology and the whole transport is encrypted, TLS can be a better choice than hand-rolled payload encryption when security is more important for your scenario than packet overhead. See the infobox for more details.

As in most cases, added security comes with some disadvantages. The most important one is that SSL/TLS brings a significant increase in bandwidth usage. While we are talking about tens of bytes here, it can make a huge difference in scenarios where low bandwidth usage is key. Please note that the SSL handshake (which takes place when a connection is established) comes with additional overhead in terms of bandwidth and CPU. This is very important to consider when you have to deal with many unreliable connections which could easily drop.

Encryption at Transport Layer vs Encryption at Application Layer

Encryption at the transport layer [2] has the advantage that the whole connection is encrypted, including all MQTT messages sent from the client to the server and from the server to the client. This ensures that nobody but the client which is connected to HiveMQ can read any message of the communication. Since the payloads of the MQTT messages remain unencrypted raw bytes in this case, full interoperability with other MQTT clients (even if they do not use TLS) is ensured. All MQTT messages (not only PUBLISHes) are secured with this technique.

Encryption at the application layer means that the payload of an MQTT PUBLISH message is encrypted with an application specific encryption and only clients who know how to decrypt the payload can read the original message. When not used together with TLS, the transport is unencrypted and attackers could read the raw message on the wire; as long as the attacker does not know how to decrypt the payload, the payload of the MQTT PUBLISH message remains secure. It is important to understand that only the payload of an MQTT PUBLISH can be encrypted; all other information, like the topic of the message, is unencrypted. Only PUBLISH payloads can be encrypted; all other MQTT messages like CONNECT cannot be secured with this technique.

Of course both encryption techniques can be used together. If it is important for your scenario that only a few trusted clients can decrypt the contents of specific MQTT publishes and you also want to secure your complete communication, this can be a great fit.

Configuration

To enable TLS (over TCP), you need to add a tls-tcp-listener to the listeners in the config.xml file. You can add an arbitrary number of tls-tcp-listener elements to your config file, each with different network interface bindings.

TLS TCP listeners need a proper configuration for the tls XML element. The tls element has the following properties:

Table 13. TLS element options
Name                            Default                        Mandatory  Description

protocols                       All JVM enabled protocols      no         The enabled protocols
cipher-suites                   All JVM enabled cipher suites  no         The enabled cipher suites
client-authentication-mode      NONE                           no         The client authentication mode: NONE, OPTIONAL (client certificate is used if presented), REQUIRED (client certificate is required)
handshake-timeout               10000                          no         The SSL handshake timeout in milliseconds
keystore.path                   none                           yes        The path to the keystore which contains your certificate and private key
keystore.password               none                           yes        The password to open the keystore
keystore.private-key-password   none                           no         The password for the private key (if any)
truststore.path                 none                           no         The path to the truststore which contains trusted client certificates
truststore.password             none                           no         The password to open the truststore

Adding a tls-tcp-listener
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <listeners>
        ...
        <tls-tcp-listener>
            <port>8883</port>
            <bind-address>0.0.0.0</bind-address>
            <tls>
                <keystore>
                    <!-- Configuring the path to the keystore -->
                    <path>/path/to/the/key/store.jks</path>
                    <!-- The password of the keystore -->
                    <password>password-keystore</password>
                    <!-- The password of the private key -->
                    <private-key-password>password-key</private-key-password>
                </keystore>
            </tls>
        </tls-tcp-listener>
    </listeners>
    ...
</hivemq>
Standard port
The IANA standard port for MQTT over TLS is 8883

Keystores in the JKS format can be used.

Java Key Stores & Trust Stores

Java Keystores and Java Truststores are containers which hold information needed for SSL, like X.509 certificates and keys. Typically each truststore and each keystore is persisted in one single file, protected by a master password.

Keystores and Truststores are conceptually similar, but there is a difference in duty. In an SSL context, Keystores provide credentials and Truststores verify credentials. That means a Keystore contains a public key certificate and the corresponding private key. [3] Servers (like HiveMQ) typically use keystores to protect the private key for their SSL connections.

Truststores contain trusted certificates or certificates signed by a CA in order to identify the partner of the SSL connection. Typically clients which want to connect to a server have to store the certificate of the server (or the trusted CA when the server certificate was signed by a CA) to identify the server as a trusted server. When using client certificate authentication on the server side, the trusted certificates of the client also have to be in the truststore.

It is possible to use the same file as Keystore and Truststore. However, we strongly recommend separating the Keystore and the Truststore when using HiveMQ.

If you are not sure how to create a Keystore, you can find some useful information in the Appendix of this document.
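As a quick sketch, a self-signed keystore for testing (not for production use) can be generated with the keytool utility that ships with the JDK. The alias, passwords and validity below are placeholders matching the example configuration above:

keytool -genkey -keyalg RSA -alias hivemq -keystore /path/to/the/key/store.jks -storepass password-keystore -keypass password-key -validity 360 -keysize 2048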

Communication Protocol

If no explicit SSL/TLS version is set, TLS (which is the same as TLSv1) is used to secure the communication between HiveMQ and the clients. If possible, it is recommended to use TLSv1.1 or TLSv1.2, as these protocols tend to be more secure.

By default all protocols which are enabled by the JVM are used.

To enable only specific protocols (e.g. if you know all your clients can use TLS 1.2), you can use a configuration like the following:

Configuring TLS versions
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <listeners>
        ...
        <tls-tcp-listener>
            <tls>
                ...
                <!-- Enable specific TLS versions manually -->
                <protocols>
                    <protocol>SSLv3</protocol>
                    <protocol>TLSv1</protocol>
                    <protocol>TLSv1.1</protocol>
                    <protocol>TLSv1.2</protocol>
                </protocols>
                ...
            </tls>
        </tls-tcp-listener>
    </listeners>
    ...
</hivemq>

Cipher Suites

TLS can only be as secure as the cipher suites used. While your JVM vendor probably makes sure that only secure ciphers are activated by default, you may want to limit HiveMQ to specific cipher suites you are comfortable with.

By default, all cipher suites which are enabled by your JVM are used.

List of cipher suites
You can see a list of available cipher suites for the Oracle JVM here: Oracle JCA documentation.

To configure cipher suites explicitly, you can use a configuration similar to the following:

Configuring cipher suites for listeners explicitly
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <tls>
        ...
        <!-- Only allow specific cipher suites -->
        <cipher-suites>
            <cipher-suite>TLS_RSA_WITH_AES_128_CBC_SHA</cipher-suite>
            <cipher-suite>TLS_RSA_WITH_AES_256_CBC_SHA256</cipher-suite>
            <cipher-suite>SSL_RSA_WITH_3DES_EDE_CBC_SHA</cipher-suite>
        </cipher-suites>
    </tls>
    ...
</hivemq>

Each TLS listener can be configured to have its own list of enabled cipher suites.

Secure and Insecure Listeners

HiveMQ can be configured to run with multiple listeners. For example, it is possible to handle standard TCP connections on one port and SSL/TLS connections on another port. This is completely transparent and all clients can communicate among themselves via publish/subscribe, regardless of how they are connected to the broker. Clients can decide if they want to use SSL/TLS or a "standard", non-SSL TCP connection. This is extremely useful when some clients have only unreliable network connectivity and/or very limited bandwidth, where every additional byte of overhead matters and bandwidth efficiency is more important than a secure connection. In this case, these clients can connect to HiveMQ via the unsecured port while other clients use a secure connection via the SSL enabled port.

Allowing secure and insecure connections
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    <listeners>
           <!-- Insecure connection -->
           <tcp-listener>
               <port>1883</port>
               <bind-address>0.0.0.0</bind-address>
           </tcp-listener>
           <!-- Secure connection -->
           <tls-tcp-listener>
               <port>8883</port>
               <bind-address>0.0.0.0</bind-address>
               <tls>
                   <keystore>
                       <path>/path/to/the/key/store.jks</path>
                       <password>password-keystore</password>
                       <private-key-password>password-key</private-key-password>
                   </keystore>
               </tls>
           </tls-tcp-listener>
       </listeners>
    ...
</hivemq>

Of course it is also possible to allow different kinds of authentication in scenarios where multiple listeners are used. TLS clients that provide a certificate for X.509 Client Certificate Authentication can be authenticated via the certificate, while all other clients that do not provide a certificate are authenticated by their username and password.

The same works for authorization: TLS clients providing a certificate could gain more permissions than non-TLS clients. This comes in very handy when you provide MQTT services to your customers and have to differentiate between different kinds of clients.

The standard TCP connection port and the TLS port must be different. It is important to understand that it is not possible to connect to a standard TCP port with a TLS-enabled client, and clients that do not initiate a SSL handshake (= non-TLS clients) cannot connect to a TLS port.

Secure Websockets

The configuration of secure websockets is similar to the TCP TLS listener configuration. A secure websocket listener is essentially a websocket listener which contains an additional tls element.

The following example shows a websocket listener with TLS enabled:

Configuration of a TLS websocket listener
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    <listeners>
        <tls-websocket-listener>
            <port>8001</port>
            <bind-address>0.0.0.0</bind-address>
            <path>/mqtt</path>
            <allow-extensions>true</allow-extensions>
            <subprotocols>
                <subprotocol>mqttv3.1</subprotocol>
                <subprotocol>mqtt</subprotocol>
            </subprotocols>
            <tls>
                <keystore>
                    <path>/path/to/the/key/store.jks</path>
                    <password>password-keystore</password>
                    <private-key-password>password-key</private-key-password>
                </keystore>
            </tls>
        </tls-websocket-listener>
    </listeners>
    ...
</hivemq>

Authentication, Authorization & Permissions

HiveMQ offers several ways to implement authentication and authorization for your concrete use case. By default, HiveMQ allows everything for every client. That means, if you don’t need any authentication or authorization for your scenario, you can skip this chapter. Even when a client provides a username and password in the MQTT CONNECT message, HiveMQ ignores the credentials and gives that client the same permissions as a client which does not provide any credentials.

You can use HiveMQ's powerful plugin system to add the authentication and authorization behaviour you need.

Authentication

It is important to understand that there are two levels of authentication which can occur. The first type is authentication with X.509 client certificates on the transport layer. The second type is username/password authentication, which takes place when a MQTT CONNECT message is sent by the client and is handled on the application layer.

Authorization & Permissions

All authorization and permission logic has to be added to HiveMQ by plugins. For typical use cases, many off-the-shelf plugins already exist, so it is often sufficient to select your authorization plugin of choice and plug it into HiveMQ. A list of plugins can be found here.

Since X.509 client certificate authentication takes place during the TLS handshake, there is no application context available and no application authorization logic can be performed at this stage, because everything happens on the transport layer. Fortunately HiveMQ offers authorization based on TLS client certificates via its powerful plugin system. This enables you to authenticate and authorize clients with X.509 client certificates.

When authentication via the MQTT CONNECT message is enabled, you can implement an authorization based on the credentials provided by the client. For many use cases this is sufficient and easier to implement than TLS client certificate authentication. The credentials provided in the CONNECT message are username/password combinations which you can validate. There are some off-the-shelf plugins, which enable HiveMQ to read credentials from a file or database. Also check out the Enterprise Integrations which may be useful for more complex scenarios.

Never deploy an internet-facing MQTT broker without authentication or at least transport encryption. Malicious clients can listen to any topics if no authentication or authorization mechanism is in place.

X.509 Client Certificate Authentication

To activate X.509 client certificate authentication at the transport layer in HiveMQ, you need to provide a Java truststore. It is possible to add additional application level authentication and authorization with client certificates to implement more advanced scenarios where authentication on the transport level alone is not sufficient.

Prerequisites
  1. A keystore and a truststore file must be configured for a tls-tcp-listener or tls-websocket-listener. The keystore contains the server certificate and the server certificate's private key. The truststore must contain all the client certificates or valid root certificates for the client certificates. See the Configuration Chapter for more details and the configuration sketch after this list.

  2. (optional) If you need application level authentication & authorization based on the X.509 client certificates, you need a HiveMQ plugin which implements this behaviour.
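A minimal configuration sketch for such a listener could look like the following. The element names used here (in particular <truststore> and <client-authentication-mode>) are assumptions based on this guide's configuration conventions; the Configuration Chapter remains the authoritative reference:

X.509 client certificate authentication listener (sketch)
<tls-tcp-listener>
    <port>8883</port>
    <bind-address>0.0.0.0</bind-address>
    <tls>
        <keystore>
            <path>/path/to/the/key/store.jks</path>
            <password>password-keystore</password>
            <private-key-password>password-key</private-key-password>
        </keystore>
        <!-- Truststore with the client certificates or their root CA certificates -->
        <truststore>
            <path>/path/to/the/trust/store.jks</path>
            <password>password-truststore</password>
        </truststore>
        <!-- Require clients to present a certificate -->
        <client-authentication-mode>REQUIRED</client-authentication-mode>
    </tls>
</tls-tcp-listener>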

Distinguish application and transport layer authentication
It is very important to understand the difference between application layer authentication and transport layer authentication. Transport layer authentication is not a concept introduced by HiveMQ but a concept from the TLS-Handshake. Application layer authentication is a security implementation of HiveMQ and is — as you could have guessed — application specific. It is absolutely possible to reuse your keystore and truststore for your transport layer client authentication in other server applications which support this concept, too.

The following diagram shows the schematic X.509 client certificate authentication flow. The SSL part of the diagram in particular is technically not 100% correct, but it is sufficient for grasping the concept.

Figure 1. A schematic overview of the X.509 certificate authentication flow
If you want to dive deeper into X.509 client certificate authentication concepts with MQTT, we recommend reading our blog post: MQTT Security Fundamentals

Username and Password Authentication and Authorization

By far the most common practice for authentication when using MQTT is classic username/password credential matching. The official MQTT specification defines username and password sections in the payload of the MQTT CONNECT message. HiveMQ utilizes these credential fields to enable application level authentication & authorization.

Prerequisites
  1. Install a plugin which handles credential matching.

In general, all clients whose credentials are invalid (the plugin decides if they match or not) are disconnected with the corresponding MQTT CONNACK message, which returns code 4 (bad username or password).

You have to install a plugin which enables the functionality to handle usernames and passwords. By default, all username/password combinations (even anonymous) are allowed and all permissions are granted automatically.
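For a quick manual test of your credential handling, any MQTT client will do; for example, with the mosquitto_pub command line client (assuming it is installed and HiveMQ listens on the default port):

Testing credentials with mosquitto_pub
# Publishes with username/password; if the plugin rejects the credentials,
# the client is disconnected with CONNACK return code 4
mosquitto_pub -h broker.example.com -p 1883 -u myuser -P mypassword -t test/topic -m "hello"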

The following diagram shows a schematic overview how the authentication is performed.

Figure 2. A schematic overview of the Username/Password authentication flow

A good starting point for username/password authentication handling is the File Authentication Plugin. It enables HiveMQ to use a list of usernames and (hashed) passwords to limit the access to the broker.

There are many plugins for authentication available and it is easy to create one on your own if you need something more specific.

Read our blog posts about authentication and authorization if you want to learn more about MQTT authentication and authorization.

Client ID Authentication and Authorization

In addition to classic username/password authentication it is possible to add authentication and authorization logic based on MQTT client identifiers. In principle this is exactly the same as username/password authentication, with the difference that only the client identifier field of the MQTT CONNECT message is used.

Prerequisites
  1. Install a plugin which handles authentication & authorization.

For a list of available off-the-shelf plugins see the plugin directory.

The following diagram shows a schematic overview how the authentication is performed.

Figure 3. A schematic overview of the client identifier authentication flow
Use cases

There are several use cases for client identifier authentication and authorization logic. Some popular ones are:

  • You want to make sure that only client identifiers with a specific pattern can connect, e.g. myapp-client1, where only clients whose identifier starts with myapp may connect.

  • You want to make sure that only a subset of clients connect to this HiveMQ cluster node based on the client identifier.

  • The client identifiers are MAC addresses of your devices and you cannot use username/password authentication because it is not possible to deploy a different username/password combination on every device.

IP-based Authentication and Authorization

If your devices have static IP addresses or you can make sure that all clients connect from specific IP ranges, you can also authenticate clients based on IP addresses.

The HiveMQ plugin system allows you to validate the IP address of a client, and you can implement authentication and authorization logic based on these IP addresses. While it’s not recommended to rely solely on the IP address, whitelisting specific IP addresses is a good second line of defence.

This option is also useful if you want to blacklist specific malicious clients, for example when you have automated heuristics in place that identify malicious IP addresses but cannot reconfigure your firewall regularly to block them.

If you are using a load balancer in front of HiveMQ, you probably won’t get the original IP address of a MQTT client but the load balancer's IP address. So use this authentication option with care.

Logging

By default, HiveMQ writes all log data to the log subfolder. For every day an archived log file is created with the name hivemq.$DATE.log.

The most current logfile is always named hivemq.log. The standard log configuration has a log rolling policy, which means the current file is archived at midnight. An archived file is kept for 30 days before it gets deleted. It is recommended to manually back up the log files if you want to archive them longer.

The following example shows typical entries of the log subfolder.

Example 1. log folder contents
hivemq.log
hivemq.2013-05-11.log
hivemq.2013-05-12.log
hivemq.2013-05-13.log
By default HiveMQ is not very chatty and won’t log any malicious client behaviour, because otherwise DoS attacks by spamming the log files would be possible. If you want HiveMQ to log these entries, set the log level to DEBUG. Be aware that HiveMQ can get very chatty in this case and you should monitor your log file sizes.

Changing the log behaviour

The HiveMQ Logging Subsystem uses the standard Java logging framework Logback. While the default log configuration is suitable for most use cases, you might want to change the logging behaviour so it suits your use case.

There is an example logback.xml file in the conf/examples/logging folder. Edit the file so it suits your needs and drop it into the conf folder of HiveMQ. HiveMQ will pick up the new log configuration after a restart.
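For example, to raise the log level to DEBUG (as mentioned above), a minimal sketch of the relevant part of a logback.xml could look like this. This is standard Logback configuration; the FILE appender name is an assumption and must match an appender defined in your configuration:

Setting the log level to DEBUG (sketch)
<configuration>
    ...
    <!-- DEBUG also makes HiveMQ log malicious client behaviour -->
    <root level="DEBUG">
        <appender-ref ref="FILE"/>
    </root>
</configuration>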

If you want to change the log level of HiveMQ at runtime, there is a LogService available in the plugin system which allows you to adjust the log level to suit your needs.

If you want to limit the file size of your log files you can add the following policy to the rolling policy in your file appender configuration.

File size based rollover example
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        ...
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${hivemq.home}/log/hivemq.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy
                    class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <!-- maximum size a single log file can have -->
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            ...
        </rollingPolicy>
        ...
    </appender>
The %i tag in <fileNamePattern> is mandatory.

If you additionally have a <maxHistory> configured, you are effectively limiting the total log size to <maxHistory> * <maxFileSize>. For example, a <maxHistory> of 30 with a <maxFileSize> of 100MB caps the archived logs at roughly 3GB.

Monitoring

The ability to monitor server applications is very important for operating them. HiveMQ is no exception; in fact, HiveMQ was designed to enable different kinds of monitoring easily. When using HiveMQ in critical infrastructure, it is strongly recommended to enable monitoring and use a decent application for displaying the relevant information you need for your operations.

Gathering metrics is enabled by default. The HiveMQ metrics subsystem is designed to be very performant and no performance penalties are expected for monitoring relevant metrics, even in low-latency and high-throughput environments.

JMX

HiveMQ has extensive support for Java Management Extensions (JMX) to monitor internals of HiveMQ and the JVM. JMX is a proven industry standard for Java Monitoring and many external tools support JMX natively or via extensions.

Enabling JMX

To enable JMX monitoring, open the bin/run.sh script and uncomment the line which begins with:

#JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote

Be sure to set all JMX options properly according to Oracle's official JMX documentation.
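A typical set of options could look like the following. These are standard JVM flags; authentication and SSL are disabled here for brevity, so a configuration like this should only be used in trusted networks:

Example JMX options
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.port=9010"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.ssl=false"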

HiveMQ comes with a pre-installed JMX plugin. If you don’t want to use JMX for monitoring, you can simply delete the JMX plugin from the plugin directory.

If your HiveMQ runs behind some kind of NAT you have to set some additional options:

JAVA_OPTS="$JAVA_OPTS -Djava.rmi.server.hostname=<PUBLIC_IP>"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.rmi.port=9010"

This allows you to connect via JConsole using PUBLIC_IP:9010.

MBeans

When JMX is activated, the following MBeans can be used for monitoring:

MBean Name Description

metrics

The HiveMQ metrics and statistics. A list of all available metrics is available here.

java.nio

Statistics and metrics about used native memory

java.lang

All information about the Java Virtual Machine can be monitored here.

Adding custom JMX metrics
Your custom plugins can also add custom JMX Metrics (see the plugin development guide for more information).

Graphite

Graphite is a graphing system which is awesome for monitoring and displaying statistics of different data sources. It’s highly scalable and a perfect fit when monitoring many HiveMQ cluster nodes. Even for single HiveMQ instances it’s worth taking a look when the built-in JMX Monitoring is not sufficient for your use case or when you want to preserve statistics history.

Graphite Server
It is strongly recommended that Graphite is installed on a different server than HiveMQ.

Graphite Monitoring is not part of the HiveMQ installation. There is a free plugin available which allows HiveMQ to report to Graphite. You can download the plugin at the HiveMQ website.

$SYS Topic

With a free and open source plugin, which you can get from the HiveMQ website, HiveMQ supports a special MQTT topic tree called the $SYS topic. MQTT clients can subscribe to this topic and receive system information and statistics of the HiveMQ MQTT broker. Once the plugin is installed, the following $SYS subtopics are exposed:

Table 14. $SYS subtopics
Topic Description

$SYS/broker/clients/connected

The currently connected clients.

$SYS/broker/clients/disconnected

The clients which are not connected and have a persistent session on the broker.

$SYS/clients/maximum

The maximum number of active clients which were connected simultaneously.

$SYS/clients/total

The total count of connected and disconnected (with persistent session) clients.

$SYS/load/bytes/received

The total bytes received.

$SYS/load/bytes/sent

The total bytes sent.

$SYS/broker/load/connections/1min

The moving average of the number of CONNECT packets received by the broker during the last minute.

$SYS/broker/load/connections/5min

The moving average of the number of CONNECT packets received by the broker during the last 5 minutes.

$SYS/broker/load/connections/15min

The moving average of the number of CONNECT packets received by the broker during the last 15 minutes.

$SYS/broker/load/messages/received/1min

The moving average of the number of all types of MQTT messages received by the broker during the last minute.

$SYS/broker/load/messages/received/5min

The moving average of the number of all types of MQTT messages received by the broker during the last 5 minutes.

$SYS/broker/load/messages/received/15min

The moving average of the number of all types of MQTT messages received by the broker during the last 15 minutes.

$SYS/broker/load/messages/sent/1min

The moving average of the number of all types of MQTT messages sent by the broker during the last minute.

$SYS/broker/load/messages/sent/5min

The moving average of the number of all types of MQTT messages sent by the broker during the last 5 minutes.

$SYS/broker/load/messages/sent/15min

The moving average of the number of all types of MQTT messages sent by the broker during the last 15 minutes.

$SYS/broker/load/publish/dropped/1min

The moving average of the number of MQTT PUBLISH messages dropped by the broker during the last minute.

$SYS/broker/load/publish/dropped/5min

The moving average of the number of MQTT PUBLISH messages dropped by the broker during the last 5 minutes.

$SYS/broker/load/publish/dropped/15min

The moving average of the number of MQTT PUBLISH messages dropped by the broker during the last 15 minutes.

$SYS/broker/load/publish/received/1min

The moving average of the number of MQTT PUBLISH messages received by the broker during the last minute.

$SYS/broker/load/publish/received/5min

The moving average of the number of MQTT PUBLISH messages received by the broker during the last 5 minutes.

$SYS/broker/load/publish/received/15min

The moving average of the number of MQTT PUBLISH messages received by the broker during the last 15 minutes.

$SYS/broker/load/publish/sent/1min

The moving average of the number of MQTT PUBLISH messages sent by the broker during the last minute.

$SYS/broker/load/publish/sent/5min

The moving average of the number of MQTT PUBLISH messages sent by the broker during the last 5 minutes.

$SYS/broker/load/publish/sent/15min

The moving average of the number of MQTT PUBLISH messages sent by the broker during the last 15 minutes.

$SYS/broker/messages/publish/dropped

The total number of MQTT PUBLISH messages that have been dropped due to inflight/queuing limits.

$SYS/broker/messages/publish/received

The total MQTT PUBLISH messages received.

$SYS/broker/messages/publish/sent

The total MQTT PUBLISH messages sent.

$SYS/broker/messages/received

The total MQTT messages received.

$SYS/broker/messages/retained/count

The amount of all retained messages.

$SYS/broker/messages/sent

The total MQTT messages sent.

$SYS/broker/subscriptions/count

The total count of subscriptions

$SYS/broker/time

The current time on the broker. Only published on subscription.

$SYS/broker/uptime

The uptime of the broker in seconds. Only published on subscription.

$SYS/broker/version

The HiveMQ version. Only published on subscription.

$SYS Topic standard
There is no official standardization of which $SYS topics should exist, but there is a consensus between broker vendors on available $SYS topics, available here. These special topics are not always fully interoperable between MQTT brokers and clients should not rely on them.

It is not possible for any client to publish to the $SYS topic or one of its subtopics. These values are published exclusively by HiveMQ.

While $SYS topics are a good fit for broker monitoring in a trusted environment, we recommend not using $SYS topics in production and relying on a more sophisticated monitoring solution instead.
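For manually inspecting these values in a trusted environment, any MQTT client that is allowed to subscribe works; for example, with the mosquitto_sub command line client (assuming it is installed and no authentication restrictions apply):

Subscribing to all $SYS topics with mosquitto_sub
# The quotes prevent the shell from interpreting the $ sign;
# -v prints the topic name along with each payload
mosquitto_sub -h broker.example.com -p 1883 -v -t '$SYS/#'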

Monitoring of Plugins

With the powerful HiveMQ Plugin System it is possible to add integration plugins for virtually everything you can imagine. A common pitfall when writing plugins is that these plugins block HiveMQ threads in some way, so that the overall performance of the installation can decrease dramatically. Fortunately HiveMQ offers a way to monitor the execution time of specific plugin callbacks.

The following metrics can be monitored (e.g. with JMX) for plugin callback execution times:

Table 15. Available plugin callback execution time metrics
Metric Name Description

com.hivemq.plugin.callbacks.after-login.failed.time

Metrics about the AfterLoginCallback

com.hivemq.plugin.callbacks.after-login.success.time

Metrics about the AfterLoginCallback

com.hivemq.plugin.callbacks.authentication.time

Metrics about the OnAuthenticationCallback

com.hivemq.plugin.callbacks.authorization.time

Metrics about the OnAuthorizationCallback

com.hivemq.plugin.callbacks.connack-send.time

Metrics about the OnConnackSend Callback

com.hivemq.plugin.callbacks.connect.time

Metrics about the OnConnectCallback

com.hivemq.plugin.callbacks.disconnect.time

Metrics about the OnDisconnectCallback

com.hivemq.plugin.callbacks.permissions-disconnect.publish.time

Metrics about the OnInsufficientPermissionDisconnect Callback

com.hivemq.plugin.callbacks.permissions-disconnect.subscribe.time

Metrics about the OnInsufficientPermissionDisconnect Callback

com.hivemq.plugin.callbacks.puback-received.time

Metrics about the OnPubackReceived Callback

com.hivemq.plugin.callbacks.puback-send.time

Metrics about the OnPubackSend Callback

com.hivemq.plugin.callbacks.pubcomp-received.time

Metrics about the OnPubcompReceived Callback

com.hivemq.plugin.callbacks.pubcomp-send.time

Metrics about the OnPubcompSend Callback

com.hivemq.plugin.callbacks.publish-received.time

Metrics about the OnPublishReceivedCallback

com.hivemq.plugin.callbacks.publish-send.time

Metrics about the OnPublishSend Callback

com.hivemq.plugin.callbacks.pubrec-received.time

Metrics about the OnPubrecReceived Callback

com.hivemq.plugin.callbacks.pubrec-send.time

Metrics about the OnPubrecSend Callback

com.hivemq.plugin.callbacks.pubrel-received.time

Metrics about the OnPubrelReceived Callback

com.hivemq.plugin.callbacks.pubrel-send.time

Metrics about the OnPubrelSend Callback

com.hivemq.plugin.callbacks.restrictions.time

Metrics about the RestrictionsAfterLoginCallback

com.hivemq.plugin.callbacks.suback-send.time

Metrics about the OnSubackSend Callback

com.hivemq.plugin.callbacks.subscribe.time

Metrics about the OnSubscribeCallback

com.hivemq.plugin.callbacks.unsuback-send.time

Metrics about the OnUnsubackSend Callback

com.hivemq.plugin.callbacks.unsubscribe.time

Metrics about the OnUnsubscribeReceivedCallback

For all of these metrics you can get the following details:

Table 16. Available metric details
Attribute Name Description

50thPercentile

The 50th percentile for callback execution times

75thPercentile

The 75th percentile for callback execution times

95thPercentile

The 95th percentile for callback execution times

98thPercentile

The 98th percentile for callback execution times

99thPercentile

The 99th percentile for callback execution times

999thPercentile

The 99.9th percentile for callback execution times

Mean

The mean for callback execution times

StdDev

The standard deviation of callback execution times

Count

The total count of callback executions

FifteenMinuteRate

The average rate (events/s) of callback executions in the last 15 minutes

FiveMinuteRate

The average rate (events/s) of callback executions in the last 5 minutes

OneMinuteRate

The average rate (events/s) of callback executions in the last minute

MeanRate

The mean rate (events/s) of callback executions

Max

The maximum of callback execution times

Min

The minimum of callback execution times

Available Metrics

Metric Types

There are five different types of Metrics available. The following table shows all available metric types:

Table 17. Metric Types
Metric Type Description

Gauge

A gauge returns a simple value at the point in time the metric was requested.

Counter

A counter is a simple incrementing and decrementing number.

Histogram

A histogram measures the distribution of values in a stream of data. It allows you to measure the min, mean, max, and standard deviation of values, as well as quantiles.

Meter

A meter measures the rate at which a set of events occurs. Meters measure the mean rate as well as 1-, 5-, and 15-minute moving averages of events.

Timer

A timer is basically a histogram of the duration of a type of event and a meter of the rate of its occurrence. It captures rate and duration information.

The following table lists metrics that are available for monitoring HiveMQ regardless of whether the HiveMQ server instance runs in single mode or as part of a cluster:

Table 18. Available standard metric details
Metric Type Description

com.hivemq.cache.payload-persistence.averageLoadPenalty

Gauge

Cache statistic capturing the average load penalty of the payload persistence cache

com.hivemq.cache.payload-persistence.evictionCount

Gauge

Cache statistic capturing the eviction count of the payload persistence cache

com.hivemq.cache.payload-persistence.hitCount

Gauge

Cache statistic capturing the hit count of the payload persistence cache

com.hivemq.cache.payload-persistence.hitRate

Gauge

Cache statistic capturing the hit rate of the payload persistence cache

com.hivemq.cache.payload-persistence.loadCount

Gauge

Cache statistic capturing the load count of the payload persistence cache

com.hivemq.cache.payload-persistence.loadExceptionCount

Gauge

Cache statistic capturing the load exception count of the payload persistence cache

com.hivemq.cache.payload-persistence.loadExceptionRate

Gauge

Cache statistic capturing the load exception rate of the payload persistence cache

com.hivemq.cache.payload-persistence.loadSuccessCount

Gauge

Cache statistic capturing the load success count of the payload persistence cache

com.hivemq.cache.payload-persistence.missCount

Gauge

Cache statistic capturing the miss count of the payload persistence cache

com.hivemq.cache.payload-persistence.missRate

Gauge

Cache statistic capturing the miss rate of the payload persistence cache

com.hivemq.cache.payload-persistence.requestCount

Gauge

Cache statistic capturing the request count of the payload persistence cache

com.hivemq.cache.payload-persistence.totalLoadTime

Gauge

Cache statistic capturing the total load time of the payload persistence cache

com.hivemq.cache.shared-subscription.averageLoadPenalty

Gauge

Cache statistic capturing the average load penalty of the shared subscription cache

com.hivemq.cache.shared-subscription.evictionCount

Gauge

Cache statistic capturing the eviction count of the shared subscription cache

com.hivemq.cache.shared-subscription.hitCount

Gauge

Cache statistic capturing the hit count of the shared subscription cache

com.hivemq.cache.shared-subscription.hitRate

Gauge

Cache statistic capturing the hit rate of the shared subscription cache

com.hivemq.cache.shared-subscription.loadCount

Gauge

Cache statistic capturing the load count of the shared subscription cache

com.hivemq.cache.shared-subscription.loadExceptionCount

Gauge

Cache statistic capturing the load exception count of the shared subscription cache

com.hivemq.cache.shared-subscription.loadExceptionRate

Gauge

Cache statistic capturing the load exception rate of the shared subscription cache

com.hivemq.cache.shared-subscription.loadSuccessCount

Gauge

Cache statistic capturing the load success count of the shared subscription cache

com.hivemq.cache.shared-subscription.missCount

Gauge

Cache statistic capturing the miss count of the shared subscription cache

com.hivemq.cache.shared-subscription.missRate

Gauge

Cache statistic capturing the miss rate of the shared subscription cache

com.hivemq.cache.shared-subscription.requestCount

Gauge

Cache statistic capturing the request count of the shared subscription cache

com.hivemq.cache.shared-subscription.totalLoadTime

Gauge

Cache statistic capturing the total load time of the shared subscription cache

com.hivemq.callback.executor.completed

Meter

Measures the current rate of completed CallbackExecutor jobs

com.hivemq.callback.executor.duration

Timer

Captures metrics about the job durations for jobs submitted to the CallbackExecutor

com.hivemq.callback.executor.running

Counter

Measures how many CallbackExecutor jobs are running at the moment

com.hivemq.callback.executor.submitted

Meter

Measures the current rate of submitted jobs to the CallbackExecutor

com.hivemq.clients.half-full-queue.count

Counter

Counts the number of offline clients with a message queue that is at least half full

com.hivemq.cluster.name-request.retry.count

Counter

Counts the number of retries until a node's name is resolved via its address

com.hivemq.cluster.topology-change.time

Timer

Measures the time spent waiting for cluster topology changes

com.hivemq.direct-memory.used

Gauge

Currently used direct memory in bytes

com.hivemq.exceptions.total

Meter

Measures the rate of inconsequential exceptions thrown during the socket life cycle

com.hivemq.keep-alive.disconnect.count

Counter

Counts every connection that was closed because the client failed to send a PINGREQ message within the keep-alive interval

com.hivemq.logging.all

Meter

Measures the rate of logging statements of all levels

com.hivemq.logging.debug

Meter

Measures the rate of logging statements in DEBUG level

com.hivemq.logging.error

Meter

Measures the rate of logging statements in ERROR level

com.hivemq.logging.info

Meter

Measures the rate of logging statements in INFO level

com.hivemq.logging.trace

Meter

Measures the rate of logging statements in TRACE level

com.hivemq.logging.warn

Meter

Measures the rate of logging statements in WARN level

com.hivemq.messages.dropped.count

Counter

Counts every dropped message.

com.hivemq.messages.dropped.in-flight-window.count

Counter

Counts the messages that have been dropped because the in flight window was full

com.hivemq.messages.dropped.internal-error.count

Counter

Counts the messages that have been dropped due to internal errors

com.hivemq.messages.dropped.not-connected.count

Counter

Counts the messages that have been dropped because the client disconnected and has no persistent session

com.hivemq.messages.dropped.not-writable.count

Counter

Counts the messages with QoS 0 that have been dropped because the client socket was not writeable

com.hivemq.messages.dropped.qos-0-queue-not-empty.count

Counter

Counts the messages with QoS 0 that have been dropped because the queue for the client wasn’t empty

com.hivemq.messages.dropped.queue-full.count

Counter

Counts the messages that have been dropped because the client session message queue was full

com.hivemq.messages.dropped.rate

Meter

Measures the current rate of dropped messages.

com.hivemq.messages.incoming.connect.count

Counter

Counts every incoming MQTT CONNECT message

com.hivemq.messages.incoming.connect.rate

Meter

Measures the current rate of incoming MQTT CONNECT messages

com.hivemq.messages.incoming.disconnect.count

Counter

Counts every incoming MQTT DISCONNECT message

com.hivemq.messages.incoming.disconnect.rate

Meter

Measures the current rate of incoming MQTT DISCONNECT messages

com.hivemq.messages.incoming.pingreq.count

Counter

Counts every incoming MQTT PINGREQ message

com.hivemq.messages.incoming.pingreq.rate

Meter

Measures the current rate of incoming MQTT PINGREQ messages

com.hivemq.messages.incoming.puback.count

Counter

Counts every incoming MQTT PUBACK message

com.hivemq.messages.incoming.puback.rate

Meter

Measures the current rate of incoming MQTT PUBACK messages

com.hivemq.messages.incoming.pubcomp.count

Counter

Counts every incoming MQTT PUBCOMP message

com.hivemq.messages.incoming.pubcomp.rate

Meter

Measures the current rate of incoming MQTT PUBCOMP messages

com.hivemq.messages.incoming.publish.bytes

Histogram

Measures the size distribution of incoming MQTT PUBLISH messages (including MQTT packet headers)

com.hivemq.messages.incoming.publish.count

Counter

Counts every incoming MQTT PUBLISH message

com.hivemq.messages.incoming.publish.rate

Meter

Measures the current rate of incoming MQTT PUBLISH messages

com.hivemq.messages.incoming.pubrec.count

Counter

Counts every incoming MQTT PUBREC message

com.hivemq.messages.incoming.pubrec.rate

Meter

Measures the current rate of incoming MQTT PUBREC messages

com.hivemq.messages.incoming.pubrel.count

Counter

Counts every incoming MQTT PUBREL message

com.hivemq.messages.incoming.pubrel.rate

Meter

Measures the current rate of incoming MQTT PUBREL messages

com.hivemq.messages.incoming.subscribe.count

Counter

Counts every incoming MQTT SUBSCRIBE message

com.hivemq.messages.incoming.subscribe.rate

Meter

Measures the current rate of incoming MQTT SUBSCRIBE messages

com.hivemq.messages.incoming.total.bytes

Histogram

Measures the size distribution of incoming MQTT messages (including MQTT packet headers)

com.hivemq.messages.incoming.total.count

Counter

Counts every incoming MQTT message

com.hivemq.messages.incoming.total.rate

Meter

Measures the current rate of incoming MQTT messages

com.hivemq.messages.incoming.unsubscribe.count

Counter

Counts every incoming MQTT UNSUBSCRIBE message

com.hivemq.messages.incoming.unsubscribe.rate

Meter

Measures the current rate of incoming MQTT UNSUBSCRIBE messages

com.hivemq.messages.outgoing.connack.count

Counter

Counts every outgoing MQTT CONNACK message

com.hivemq.messages.outgoing.connack.rate

Meter

Measures the current rate of outgoing MQTT CONNACK messages

com.hivemq.messages.outgoing.pingresp.count

Counter

Counts every outgoing MQTT PINGRESP message

com.hivemq.messages.outgoing.pingresp.rate

Meter

Measures the current rate of outgoing MQTT PINGRESP messages

com.hivemq.messages.outgoing.puback.count

Counter

Counts every outgoing MQTT PUBACK message

com.hivemq.messages.outgoing.puback.rate

Meter

Measures the current rate of outgoing MQTT PUBACK messages

com.hivemq.messages.outgoing.pubcomp.count

Counter

Counts every outgoing MQTT PUBCOMP message

com.hivemq.messages.outgoing.pubcomp.rate

Meter

Measures the current rate of outgoing MQTT PUBCOMP messages

com.hivemq.messages.outgoing.publish.bytes

Histogram

Measures the size distribution of outgoing MQTT PUBLISH messages (including MQTT packet headers)

com.hivemq.messages.outgoing.publish.count

Counter

Counts every outgoing MQTT PUBLISH message

com.hivemq.messages.outgoing.publish.rate

Meter

Measures the current rate of outgoing MQTT PUBLISH messages

com.hivemq.messages.outgoing.pubrec.count

Counter

Counts every outgoing MQTT PUBREC message

com.hivemq.messages.outgoing.pubrec.rate

Meter

Measures the current rate of outgoing MQTT PUBREC messages

com.hivemq.messages.outgoing.pubrel.count

Counter

Counts every outgoing MQTT PUBREL message

com.hivemq.messages.outgoing.pubrel.rate

Meter

Measures the current rate of outgoing MQTT PUBREL messages

com.hivemq.messages.outgoing.suback.count

Counter

Counts every outgoing MQTT SUBACK message

com.hivemq.messages.outgoing.suback.rate

Meter

Measures the current rate of outgoing MQTT SUBACK messages

com.hivemq.messages.outgoing.total.bytes

Histogram

Measures the size distribution of outgoing MQTT messages (including MQTT packet headers)

com.hivemq.messages.outgoing.total.count

Counter

Counts every outgoing MQTT message

com.hivemq.messages.outgoing.total.rate

Meter

Measures the current rate of outgoing MQTT messages

com.hivemq.messages.outgoing.unsuback.count

Counter

Counts every outgoing MQTT UNSUBACK message

com.hivemq.messages.outgoing.unsuback.rate

Meter

Measures the current rate of outgoing MQTT UNSUBACK messages

com.hivemq.messages.publish-resent

Meter

Measures the current rate of resent PUBLISH messages (QoS > 0)

com.hivemq.messages.pubrel-resent

Meter

Measures the current rate of resent PUBREL messages (QoS = 2)

com.hivemq.messages.retained.current

Gauge

The current amount of retained messages

com.hivemq.messages.retained.mean

Histogram

Metrics about the mean payload-size of retained messages in bytes

com.hivemq.messages.retained.rate

Meter

The current rate of newly retained messages

com.hivemq.networking.bytes.read.current

Gauge

The current (last 5 seconds) amount of read bytes

com.hivemq.networking.bytes.read.total

Gauge

The total amount of read bytes

com.hivemq.networking.bytes.write.current

Gauge

The current (last 5 seconds) amount of written bytes

com.hivemq.networking.bytes.write.total

Gauge

Total amount of written bytes

com.hivemq.networking.connections.current

Gauge

The current total number of active MQTT connections

com.hivemq.networking.connections.mean

Histogram

The mean total number of active MQTT connections

com.hivemq.payload-persistence.cleanup-executor.completed

Meter

Measures the rate of completed tasks submitted to the scheduler in charge of the cleanup of the persistence payload

com.hivemq.payload-persistence.cleanup-executor.duration

Timer

Captures metrics about the job durations for jobs submitted to the scheduler in charge of the cleanup of the persistence payload

com.hivemq.payload-persistence.cleanup-executor.running

Counter

Counts tasks that are currently running in the scheduler in charge of the cleanup of the persistence payload

com.hivemq.payload-persistence.cleanup-executor.scheduled.once

Meter

Measures the tasks that have been scheduled to run only once in the scheduler in charge of the cleanup of the persistence payload

com.hivemq.payload-persistence.cleanup-executor.scheduled.overrun

Counter

Counts the periodic tasks which ran longer than their time frame allowed in the scheduler in charge of the cleanup of the persistence payload

com.hivemq.payload-persistence.cleanup-executor.scheduled.percent-of-period

Histogram

Metrics about what percentage of their allowed time frame periodic tasks used while running the cleanup of the persistence payload

com.hivemq.payload-persistence.cleanup-executor.scheduled.repetitively

Meter

Measures the tasks that have been scheduled to run repetitively in the scheduler in charge of the cleanup of the persistence payload

com.hivemq.payload-persistence.cleanup-executor.submitted

Meter

Measures the tasks that have been submitted to the scheduler in charge of the cleanup of the persistence payload

com.hivemq.persistence-executor.completed

Meter

Measures the rate of completed tasks submitted to the persistence executor

com.hivemq.persistence-executor.duration

Timer

Captures metrics about the job durations for jobs submitted to the persistence executor

com.hivemq.persistence-executor.running

Counter

Counts tasks that are currently running in the persistence executor

com.hivemq.persistence-executor.submitted

Meter

Measures the tasks that have been submitted to the scheduler responsible for persistence

com.hivemq.persistence-scheduled-executor.completed

Meter

Measures the rate of completed tasks submitted to the scheduler responsible for persistence

com.hivemq.persistence-scheduled-executor.duration

Timer

Captures metrics about the job durations for jobs submitted to the scheduler responsible for persistence

com.hivemq.persistence-scheduled-executor.running

Counter

Counts tasks that are currently running in the scheduler responsible for persistence

com.hivemq.persistence-scheduled-executor.scheduled.once

Meter

Measures the tasks that have been scheduled to run once in the scheduler responsible for persistence

com.hivemq.persistence-scheduled-executor.scheduled.overrun

Counter

Counts the periodic tasks which ran longer than their time frame allowed in the scheduler responsible for persistence

com.hivemq.persistence-scheduled-executor.scheduled.percent-of-period

Histogram

Metrics about what percentage of their allowed time frame periodic tasks used in the scheduler responsible for persistence

com.hivemq.persistence-scheduled-executor.scheduled.repetitively

Meter

Measures the tasks that have been scheduled to run repetitively in the scheduler responsible for persistence

com.hivemq.persistence-scheduled-executor.submitted

Meter

Measures the tasks that have been submitted to the scheduler responsible for persistence

com.hivemq.persistence.executor.client-session.tasks

Gauge

Current amount of disk I/O tasks that are enqueued by the client session persistence

com.hivemq.persistence.executor.client-session.time

Timer

Measures the mean execution time (in nanoseconds) of client session disk I/O tasks

com.hivemq.persistence.executor.noempty-queues

Gauge

Current amount of single writer task queues that are not empty

com.hivemq.persistence.executor.outgoing-message-flow.tasks

Gauge

Current amount of disk I/O tasks that are enqueued by the outgoing message flow persistence

com.hivemq.persistence.executor.outgoing-message-flow.time

Timer

Measures the mean execution time (in nanoseconds) of outgoing message flow disk I/O tasks

com.hivemq.persistence.executor.queue-misses

Counter

Current count of loops that all single writer threads have done without executing a task

com.hivemq.persistence.executor.queued-messages.tasks

Gauge

Current amount of disk I/O tasks that are enqueued by the queued messages persistence

com.hivemq.persistence.executor.queued-messages.time

Timer

Measures the mean execution time (in nanoseconds) of queued messages disk I/O tasks

com.hivemq.persistence.executor.request-event-bus.tasks

Gauge

Current amount of tasks that are enqueued by the request event bus

com.hivemq.persistence.executor.request-event-bus.time

Timer

Measures the mean execution time (in nanoseconds) of request event bus tasks

com.hivemq.persistence.executor.retained-messages.tasks

Gauge

Current amount of disk I/O tasks that are enqueued by the retained message persistence

com.hivemq.persistence.executor.retained-messages.time

Timer

Measures the mean execution time (in nanoseconds) of retained message disk I/O tasks

com.hivemq.persistence.executor.running.threads

Gauge

Current amount of threads that are executing disk I/O tasks

com.hivemq.persistence.executor.subscription.tasks

Gauge

Current amount of disk I/O tasks that are enqueued by the subscription persistence

com.hivemq.persistence.executor.subscription.time

Timer

Measures the mean execution time (in nanoseconds) of subscription disk I/O tasks

com.hivemq.persistence.executor.total.tasks

Gauge

Current amount of disk I/O tasks that are enqueued by all persistence executors

com.hivemq.persistence.payload-entries.count

Gauge

Holds the current amount of payloads stored in the payload persistence

com.hivemq.persistence.removable-entries.count

Gauge

Holds the current amount of payloads stored in the payload persistence that can be removed by the cleanup

com.hivemq.plugin.callbacks.after-login.failed.time

Timer

Metrics about the AfterLoginCallback

com.hivemq.plugin.callbacks.after-login.success.time

Timer

Metrics about the AfterLoginCallback

com.hivemq.plugin.callbacks.authentication.time

Timer

Metrics about the OnAuthenticationCallback

com.hivemq.plugin.callbacks.authorization.time

Timer

Metrics about the OnAuthorizationCallback

com.hivemq.plugin.callbacks.connack-send.time

Timer

Metrics about the OnConnackSend Callback

com.hivemq.plugin.callbacks.connect.time

Timer

Metrics about the OnConnectCallback

com.hivemq.plugin.callbacks.disconnect.time

Timer

Metrics about the OnDisconnectCallback

com.hivemq.plugin.callbacks.permissions-disconnect.publish.time

Timer

Metrics about the OnInsufficientPermissionDisconnectCallback

com.hivemq.plugin.callbacks.permissions-disconnect.subscribe.time

Timer

Metrics about the OnInsufficientPermissionDisconnectCallback

com.hivemq.plugin.callbacks.ping.time

Timer

Metrics about the OnPingCallback

com.hivemq.plugin.callbacks.puback-received.time

Timer

Metrics about the OnPubackReceived Callback

com.hivemq.plugin.callbacks.puback-send.time

Timer

Metrics about the OnPubackSend Callback

com.hivemq.plugin.callbacks.pubcomp-received.time

Timer

Metrics about the OnPubcompReceived Callback

com.hivemq.plugin.callbacks.pubcomp-send.time

Timer

Metrics about the OnPubcompSend Callback

com.hivemq.plugin.callbacks.publish-received.time

Timer

Metrics about the OnPublishReceivedCallback

com.hivemq.plugin.callbacks.publish-send.time

Timer

Metrics about the OnPublishSend Callback

com.hivemq.plugin.callbacks.pubrec-received.time

Timer

Metrics about the OnPubrecReceived Callback

com.hivemq.plugin.callbacks.pubrec-send.time

Timer

Metrics about the OnPubrecSend Callback

com.hivemq.plugin.callbacks.pubrel-received.time

Timer

Metrics about the OnPubrelReceived Callback

com.hivemq.plugin.callbacks.pubrel-send.time

Timer

Metrics about the OnPubrelSend Callback

com.hivemq.plugin.callbacks.restrictions.time

Timer

Metrics about the RestrictionsAfterLoginCallback

com.hivemq.plugin.callbacks.suback-send.time

Timer

Metrics about the OnSubackSend Callback

com.hivemq.plugin.callbacks.subscribe.time

Timer

Metrics about the OnSubscribeCallback

com.hivemq.plugin.callbacks.topic-subscription.time

Timer

Metrics about the OnTopicSubscriptionCallback

com.hivemq.plugin.callbacks.unsuback-send.time

Timer

Metrics about the OnUnsubackSend Callback

com.hivemq.plugin.callbacks.unsubscribe.time

Timer

Metrics about the OnUnsubscribeReceivedCallback

com.hivemq.plugin.executor.completed

Meter

Measures the rate of completed tasks submitted to the plugin executor

com.hivemq.plugin.executor.duration

Timer

Captures metrics about the job durations for jobs submitted to the scheduler responsible for plugins

com.hivemq.plugin.executor.running

Counter

Counts tasks that are currently running in the scheduler responsible for plugins

com.hivemq.plugin.executor.scheduled.once

Meter

Measures the tasks that have been scheduled to run once in the scheduler responsible for plugins

com.hivemq.plugin.executor.scheduled.overrun

Counter

Counts the periodic tasks which ran longer than their time frame allowed in the scheduler responsible for plugins

com.hivemq.plugin.executor.scheduled.percent-of-period

Histogram

Metrics about what percentage of their allowed time frame periodic tasks used in the scheduler responsible for plugins

com.hivemq.plugin.executor.scheduled.repetitively

Meter

Measures the tasks that have been scheduled to run repetitively in the scheduler responsible for plugins

com.hivemq.plugin.executor.submitted

Meter

Measures the tasks that have been submitted to the scheduler responsible for plugins

com.hivemq.queues.publish.rate

Meter

Measures the rate of messages put into the publish queue

com.hivemq.queues.publish.size

Counter

Measures the current count of messages in the publish queue

com.hivemq.sessions.overall.current

Gauge

Measures the current count of stored sessions. This includes the sessions of both online and offline clients

com.hivemq.sessions.persistent.active

Counter

Measures the current count of active persistent sessions (= Online MQTT clients which are connected with cleanSession=false).

com.hivemq.single-writer-executor.completed

Meter

Measures the rate of completed tasks submitted to the single-writer executor

com.hivemq.single-writer-executor.duration

Timer

Captures metrics about the job durations for jobs submitted to the scheduler responsible for single-writer

com.hivemq.single-writer-executor.running

Counter

Counts tasks that are currently running in the scheduler responsible for single-writer

com.hivemq.single-writer-executor.submitted

Meter

Measures the tasks that have been submitted to the scheduler responsible for single-writer

com.hivemq.subscriptions.overall.current

Counter

Measures the current count of subscriptions on the broker

com.hivemq.system.max-file-descriptor

Gauge

Maximum allowed amount of file descriptors

com.hivemq.system.open-file-descriptor

Gauge

Amount of open file descriptors

com.hivemq.system.physical-memory.free

Gauge

Current amount of free physical memory in bytes

com.hivemq.system.physical-memory.total

Gauge

Total amount of physical memory (bytes) available

com.hivemq.system.process-cpu.load

Gauge

Current CPU usage for the JVM process (0.0 idle – 1.0 full CPU usage)

com.hivemq.system.process-cpu.time

Gauge

Total amount of CPU time the JVM process has used to this point (in nanoseconds)

com.hivemq.system.swap-space.free

Gauge

Current amount of free swap space in bytes

com.hivemq.system.swap-space.total

Gauge

Total amount of swap space available in bytes

com.hivemq.system.system-cpu.load

Gauge

Current CPU usage for the whole system (0.0 idle – 1.0 full CPU usage)

The following table lists metrics that are only available for monitoring if the HiveMQ server instance is part of a cluster.

Table 19. Additional metrics in a cluster environment
Metric Type Description

com.hivemq.cluster.sent.*

Meter

Provides measures for every class that has made a SEND request at least once (every class gets its own metric)

Clustering

The cluster feature has changed with the HiveMQ 3.1 release and the cluster configurations are not backwards compatible with older HiveMQ versions. If you are upgrading from an older HiveMQ version with cluster enabled, please read the next paragraphs carefully.

One of the outstanding features of HiveMQ is the ability to form a MQTT Broker cluster. It is easy to implement High Availability Services with HiveMQ as well as high performance clusters with massive message throughput. The elastic cluster functionality makes it easy to add cluster nodes at runtime. It is possible to implement even very complex scenarios.

A MQTT broker cluster is a distributed system that represents one logical MQTT broker. It consists of many different MQTT broker nodes that are typically installed on different physical machines and are connected over a network. From a MQTT client’s perspective, a cluster of brokers behaves like a single MQTT broker.

Using MQTT broker clusters has advantages over broker bridging, which is essentially just a broker acting as a client. Bridging is not an official part of the MQTT spec and there are many disadvantages compared to clustering.

Cluster vs Bridging
Bridging has some disadvantages when it comes to reliability and performance. Although MQTT is a very lightweight protocol, you carry the whole protocol overhead with every message when using bridging. To guarantee that a message arrives at the bridged broker only once, bridging has to be configured with MQTT QoS 2, which is a massive overhead compared to the cluster communication mechanisms. Bridges only allow point-to-point broker connections, which makes it harder to scale with many brokers. Also, hot standby scenarios with load balancers in front are trickier with bridges due to the lack of replicated client sessions across the brokers.

Prerequisites

Clustering is disabled by default. To enable clustering you need to change the config.xml file and add additional files to your conf folder. The concrete steps are listed below.

Enabling cluster support

By default clustering is disabled and HiveMQ needs to be started in cluster mode in order to form a cluster with other HiveMQ instances. This has the advantage that HiveMQ has a significantly faster startup time when running in standalone mode than in cluster mode.

Changes in the config.xml file

Table 20. Relevant properties for enabling the cluster in the config.xml file
Property Name Description

cluster.enabled

Enables or disables the cluster. By default the cluster is disabled. Set to true if you want to enable cluster support.

No additional settings need to be configured in the config.xml file.

The following example shows what the config.xml file looks like when the cluster is enabled:

Enabling the cluster feature
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
    </cluster>
    ...
</hivemq>
Default Cluster Configuration
If no other configuration is specified, HiveMQ will start its cluster with UDP as transport and MULTICAST as discovery.

Cluster Node Names

Each HiveMQ node generates a random ClusterID at startup. This ID cannot be changed and is used by HiveMQ to identify the nodes in a cluster.

Example ClusterID
0QMpE

Cluster Configuration

To build a cluster you have to choose a type of transport (TCP/UDP) and a type of discovery which fits your needs.
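As a sketch of how these two choices fit together in one configuration; the <static> discovery block shown here is an assumption, see the discovery documentation for the authoritative element names:

Combining transport and discovery (sketch)
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <!-- Transport: how the nodes talk to each other -->
        <transport>
            <tcp>
                <bind-address>192.168.1.2</bind-address>
                <bind-port>7800</bind-port>
            </tcp>
        </transport>
        <!-- Discovery: how the nodes find each other -->
        <discovery>
            <static>
                <node>
                    <host>192.168.1.3</host>
                    <port>7800</port>
                </node>
            </static>
        </discovery>
    </cluster>
    ...
</hivemq>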

For a HiveMQ cluster to be fully functional it is vital that each of the cluster nodes shares an identical setup. This means that the hardware specs, operating system (and distribution), and JDK are exactly the same on each node. Running a HiveMQ cluster with different setups on different nodes can lead to unpredictable behaviour and is therefore not supported by HiveMQ.

Cluster Transport

The transport describes the network protocol which is used to transfer information between nodes. In the case of HiveMQ you can choose between TCP or UDP as your transport.

Using UDP
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <transport>
            <udp></udp>
        </transport>
    </cluster>
    ...
</hivemq>
Using TCP
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <transport>
            <tcp></tcp>
        </transport>
    </cluster>
    ...
</hivemq>
Using TLS
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:noNamespaceSchemaLocation="hivemq-config.xsd">
   ...
   <cluster>
       <enabled>true</enabled>
       <transport>
           <tcp>
               <tls>
                    <enabled>true</enabled>
                    ...
               </tls>
           </tcp>
       </transport>
   </cluster>
   ...
</hivemq>
UDP transport is recommended since it creates less network traffic than TCP does. However, there are use cases for which you need to choose TCP as transport; in most cases this depends on your firewall/routing setup.

UDP Transport

The UDP transport can be configured with the following parameters:

Table 21. Configuring UDP transport
Property Name Default value Description

bind-address

null

The network address to bind to. Example: 192.168.28.12

bind-port

8000

The network port to listen on.

external-address

null

The external address to bind to if the node is behind some kind of NAT.

external-port

0

The external port to bind to if the node is behind some kind of NAT.

multicast-enabled

true

If UDP multicast should be used. This is required for MULTICAST discovery.

multicast-address

228.8.8.8

The multicast network address to bind to.

multicast-port

45588

The multicast port to listen on.

Example UDP transport configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <transport>
            <udp>
                <bind-address>192.168.1.2</bind-address>
                <bind-port>8000</bind-port>
                <multicast-enabled>true</multicast-enabled>
                <multicast-address>228.8.8.8</multicast-address>
                <multicast-port>45588</multicast-port>
            </udp>
        </transport>
    </cluster>
    ...
</hivemq>
UDP Multicast
It is strongly recommended to enable multicast if it is available, since it reduces the amount of network traffic between all nodes.

TCP Transport

The TCP transport can be configured with the following parameters:

Table 22. Configuring TCP transport
Property Name Default value Description

bind-address

null

The network address to bind to. Example: 192.168.28.12

bind-port

8000

The network port to listen on.

external-address

null

The external address to use if the node is behind some kind of NAT.

external-port

0

The external port to use if the node is behind some kind of NAT.

client-bind-address

null

The network address to bind to for outgoing cluster connections.

Deprecated Configuration

The following option is deprecated because it leads to bind problems in multi-node cluster deployments. HiveMQ will ignore the value of this setting.

Table 23. Deprecated Configuration
Property Name Default value Description

client-bind-port

0

The port to bind to for outgoing cluster connections. 0 uses an ephemeral port.

Example TCP transport configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <transport>
            <tcp>
                <bind-address>192.168.2.31</bind-address>
                <bind-port>7800</bind-port>
                <client-bind-address>192.168.2.31</client-bind-address>
            </tcp>
        </transport>
    </cluster>
    ...
</hivemq>

TLS Transport

In order to use TLS in the cluster transport, you need to add a <tls> config block to your TCP transport configuration and set the <enabled> flag to true. TCP transport is required in order to use TLS.

The TLS transport can be configured using the following parameters:

Table 24. Configuring TLS transport
Property Name Default value required Description

enabled

false

true

Enables TLS in the cluster transport

protocols

All JVM enabled protocols

false

The enabled protocols

cipher-suites

All JVM enabled cipher suites

false

The enabled cipher-suites

server-keystore

null

true

The JKS keystore configuration for the server certificate

server-certificate-truststore

null

false

The JKS truststore configuration, for trusting server certificates

client-authentication-mode

NONE

false

The client authentication mode, possibilities are NONE, OPTIONAL (client certificate is used if presented), REQUIRED (client certificate is required)

client-authentication-keystore

null

false

The JKS keystore configuration for the client authentication certificate

client-certificate-truststore

null

false

The JKS truststore configuration, for trusting client certificates

Table 25. Configuring TLS keystore
Property Name Default value required Description

path

null

true

The path to the JKS keystore where your certificate and private key are stored

password

null

true

The password for the keystore

private-key-password

null

true

The password for the private key

Table 26. Configuring TLS truststore
Property Name Default value required Description

path

null

true

The path for the JKS truststore which includes trusted certificates

password

null

true

The password for the truststore

Example TLS transport configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <transport>
           <tcp>
               <bind-address>192.168.2.31</bind-address>
               <bind-port>7800</bind-port>
               <client-bind-address>192.168.2.31</client-bind-address>
               <client-bind-port>15000</client-bind-port>
               <tls>
                 <enabled>true</enabled>
                 <server-keystore>
                    <path>/path/to/the/key/server.jks</path>
                    <password>password-keystore</password>
                    <private-key-password>password-private-key</private-key-password>
                 </server-keystore>
                 <server-certificate-truststore>
                    <path>/path/to/the/trust/server.jks</path>
                    <password>password-truststore</password>
                 </server-certificate-truststore>
               </tls>
           </tcp>
        </transport>
    </cluster>
    ...
</hivemq>
Only the transport itself is encrypted with TLS; discovery traffic remains unencrypted.

Cluster Discovery

Discovery describes the mechanism which allows cluster nodes to find each other and form a cluster together. There are several built-in mechanisms available as well as the possibility to add your own custom discovery mechanism via HiveMQ’s plugin system.

There are static and dynamic mechanisms. Dynamic mechanisms discover nodes at runtime and allow you to build a cluster without having to know the number of nodes up front, while static mechanisms require you to specify all available nodes when first setting up the cluster.

The following mechanisms are available. Not all mechanisms are compatible with all transports.

Table 27. Discovery mechanisms
Name Description

static

Static list of nodes and their TCP bind ports.

multicast

Finds all nodes using the same multicast address and port.

broadcast

Finds all nodes in the same subnet by using the IP broadcast address.

plugin

Uses a dynamic list of nodes provided by a HiveMQ plugin.

Static Discovery

Uses a static list of nodes and their TCP bind-ports to discover other nodes. HiveMQ regularly checks if it can connect to each of those nodes and forms a cluster with them. Not all of the specified nodes must be available for HiveMQ to successfully form a cluster. You can also list the broker’s own IP and port.

The port you specify must be the same as the bind-port in the node’s TCP transport.

Static discovery example configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <transport>
            <tcp>
                <bind-address>192.168.1.1</bind-address>
                <bind-port>7800</bind-port>
            </tcp>
        </transport>
        <discovery>
            <static>
                <node>
                    <host>192.168.1.1</host>
                    <port>7800</port>
                </node>
                <node>
                    <host>192.168.1.2</host>
                    <port>7800</port>
                </node>
                <node>
                    <host>192.168.1.3</host>
                    <port>7801</port>
                </node>
            </static>
        </discovery>
    </cluster>
    ...
</hivemq>

Multicast Discovery

Multicast discovery is a dynamic discovery mechanism which works by utilizing an IP multicast address configured in your UDP transport. It regularly checks if any nodes are available which are listening on the same IP multicast address.

multicast-enabled must be true in your UDP transport for multicast discovery to work
Multicast discovery example configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <transport>
            <udp></udp>
        </transport>
        <discovery>
            <multicast/>
        </discovery>
    </cluster>
    ...
</hivemq>
Check that multicast is enabled and configured correctly before setting up a HiveMQ cluster with multicast discovery. IP multicast might not be available on some cloud providers.
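
On Linux you can do a quick sanity check of the interface flags and current multicast group memberships before starting the cluster. This is only a sketch; the interface name eth0 is an assumption and may differ on your system.

# check that the network interface has the MULTICAST flag set (eth0 is an assumption)
ip link show eth0
# list the multicast groups the host has currently joined
netstat -g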

Broadcast Discovery

Broadcast discovery is a dynamic discovery mechanism which looks for other nodes on the same IP subnet by sending out discovery messages over the IP broadcast address.

It can be configured with the following parameters:

Property Name Default value Description

broadcast-address

255.255.255.255

The broadcast address to use. This should be configured to your subnet’s broadcast address. Example: 192.168.1.255.

port

8555

The port on which the nodes exchange discovery messages. Must be the same port or in the same port-range on all nodes.

port-range

0

Number of additional ports to check for other nodes. The checked range goes from port to port+port-range.

Broadcast discovery example configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <discovery>
            <broadcast>
                <broadcast-address>192.168.1.255</broadcast-address>
                <port>8555</port>
                <port-range>10</port-range>
            </broadcast>
        </discovery>
    </cluster>
    ...
</hivemq>
Broadcast discovery only works if all your nodes are in the same IP subnet.
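
To find the correct value for broadcast-address on Linux, you can inspect the interface configuration; the address shown after brd is your subnet's broadcast address. The interface name eth0 is an assumption.

# show the IPv4 configuration including the broadcast address (listed after "brd")
ip -4 addr show eth0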

Plugin Discovery

Since HiveMQ 3.1.0 it is possible to create your own custom cluster discovery mechanism using the plugin system.

We also provide off-the-shelf discovery plugins, such as the S3 discovery plugin. All our discovery plugins are available on the HiveMQ plugin page. For more information on how to create your own discovery plugin see the Plugin Developer Guide.

You can have more than one discovery plugin installed at once. In this case HiveMQ will try to form a cluster with all nodes provided by all plugins.

To use the installed discovery plugin(s) you need to configure HiveMQ to use the plugin discovery.

You can also specify an optional reload-interval in seconds. Each plugin will be called every reload-interval seconds to get the currently available nodes for your cluster.

Plugin discovery example configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <discovery>
            <plugin>
                <reload-interval>60</reload-interval>
            </plugin>
        </discovery>
    </cluster>
    ...
</hivemq>

S3 Discovery

S3 discovery has been improved significantly in HiveMQ 3.1.0 and is now available as a plugin. For more information see the S3 Plugin Page.

Cluster Failure Detection

HiveMQ’s cluster provides several means of failure detection for added stability and fault tolerance.

Heartbeat

To check if all currently connected cluster nodes are available and responding, a continuous heartbeat is sent between the nodes. You can configure this heartbeat with the following parameters:

Property Name Default value Description

enabled

true

If the heartbeat is enabled.

interval

3000 (TCP) / 8000 (UDP)

The interval in which a heartbeat message is sent to other nodes.

timeout

9000 (TCP) / 40000 (UDP)

Amount of time with no response to a heartbeat message until the node is temporarily removed from the cluster.

Heartbeat Port
The port used for the heartbeat cannot be configured. The transport port is used for this mechanism. (Default: 8000)
Heartbeat example configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <failure-detection>
            <heartbeat>
                <enabled>true</enabled>
                <interval>5000</interval>
                <timeout>15000</timeout>
            </heartbeat>
        </failure-detection>
    </cluster>
    ...
</hivemq>

TCP Health Check

In addition to the heartbeat there is also a TCP health check. The TCP health check holds an open TCP connection to other nodes for the purpose of recognizing a disconnecting node much faster than the heartbeat could. The TCP health check also allows nodes to disconnect gracefully from a cluster.

You can configure the TCP health check with the following parameters:

Property Name Default value Description

enabled

true

If the health check is enabled.

bind-address

null

The network address to bind to

bind-port

0

The port to bind to. 0 uses an ephemeral port.

port-range

50

Port range to check on other nodes.

TCP health check example configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <failure-detection>
            <tcp-health-check>
                <enabled>true</enabled>
                <bind-address>192.168.1.32</bind-address>
                <bind-port>9191</bind-port>
                <port-range>10</port-range>
            </tcp-health-check>
        </failure-detection>
    </cluster>
    ...
</hivemq>

Cluster Replicas

HiveMQ’s cluster replicates stored data across nodes dynamically. This means that each piece of persistent data gets stored on more than one node. The minimum number of replicas can be configured separately for different types of data.

These types are:

Data Description

client-session

Client Sessions

outgoing-message-flow

Each client’s outgoing message flow

queued-messages

Queued messages for each client

retained-messages

Retained messages for each topic

subscriptions

Subscriptions for each client

topic-tree

Topic Tree for all topics

The configuration parameter replicate-count is available for each of these types.

For the type outgoing-message-flow you can additionally specify a replication-interval in milliseconds in which the data is replicated. Since this data changes quickly, it is recommended to keep this value at or above 1000 milliseconds (1 second).

Replica example configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <cluster>
        <enabled>true</enabled>
        <replicates>
            <client-session>
                <replicate-count>1</replicate-count>
            </client-session>
            <outgoing-message-flow>
                <replicate-count>1</replicate-count>
                <replication-interval>1000</replication-interval>
            </outgoing-message-flow>
            <queued-messages>
                <replicate-count>1</replicate-count>
            </queued-messages>
            <retained-messages>
                <replicate-count>1</replicate-count>
            </retained-messages>
            <subscriptions>
                <replicate-count>1</replicate-count>
            </subscriptions>
            <topic-tree>
                <replicate-count>1</replicate-count>
            </topic-tree>
        </replicates>
    </cluster>
    ...
</hivemq>
It is strongly recommended that the types client-session, queued-messages and subscriptions are configured with the same replicate-count. Other configurations might result in a performance decrease.

Elastic Cluster with UDP Multicast

When you have control over your network infrastructure or you need true auto discovery which detects cluster nodes when they start up in the same network, UDP multicast is a perfect fit. This is a master-master scenario, so there is no single point of failure on the HiveMQ node side, and you can scale out quickly by just starting up a new HiveMQ node. It’s also a good starting point for playing around with HiveMQ’s cluster functionality if you are new to these topics.

Please follow these steps to get your UDP cluster up and running. These steps must be performed on every HiveMQ cluster node.

Example configuration for elastic UDP cluster
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
    </listeners>

    <cluster>
        <enabled>true</enabled>
        <transport>
            <udp>
                <bind-address>192.168.1.10</bind-address>
                <bind-port>8000</bind-port>
                <multicast-enabled>true</multicast-enabled>
                <multicast-address>228.8.8.8</multicast-address>
                <multicast-port>45588</multicast-port>
            </udp>
        </transport>
        <discovery>
            <multicast/>
        </discovery>
    </cluster>

</hivemq>

HiveMQ instances will automatically form a cluster once the nodes discover each other.

It doesn’t work!!!

Although we do our best to give you the smoothest cluster setup experience, cluster configuration is a complex subject and there are many sources of error. Here are a few things you can check if something doesn’t work:

  • Is your firewall enabled but not configured accordingly? (Tip: disable the firewall for testing; see the commands after this list)

  • Are your cluster nodes in the same network?

  • Is multicast enabled on your machine? Is multicast supported by your switch/router?
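
If you suspect the firewall, a quick way to rule it out on a Linux test system is sketched below. Whether firewalld or plain iptables is in use depends on your distribution; remember to re-enable the firewall after testing.

    # temporarily stop firewalld for testing (systemd-based distributions)
    sudo systemctl stop firewalld
    # or inspect the active iptables rules for blocked cluster ports
    sudo iptables -L -n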

Fixed size cluster with TCP

There are some scenarios where you do not want the HiveMQ cluster to be elastic, where you can’t or don’t want to use multicast, or where you don’t need auto discovery because you already know all your cluster node IP addresses. This is often the case when you want to build something like a High Availability MQTT Cluster [4].

Please follow these steps to get your TCP cluster up and running. You will get a cluster which uses TCP as transport and has a hardcoded list of server IP addresses. Execute these steps on all HiveMQ cluster nodes.

Static TCP cluster example configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
    </listeners>

    <cluster>
        <enabled>true</enabled>
        <transport>
            <tcp>
                <bind-address>192.168.1.10</bind-address>
                <bind-port>7800</bind-port>
            </tcp>
        </transport>
        <discovery>
            <static>
                <node>
                    <host>192.168.1.10</host>
                    <port>7800</port>
                </node>
                <node>
                    <host>192.168.1.20</host>
                    <port>7800</port>
                </node>
            </static>
        </discovery>
    </cluster>

</hivemq>

Elastic Cluster with AWS S3 auto discovery

HiveMQ works perfectly on AWS; nevertheless, there are some things that are a little different from running HiveMQ on your own servers. The fixed size TCP cluster described above works similarly on AWS; in that case an Elastic IP address is recommended. The UDP multicast example doesn’t work on EC2, because Amazon doesn’t allow multicast. In order to build an elastic HiveMQ cluster on AWS, the Amazon Simple Storage Service (S3) is leveraged for storing all data necessary for discovering other nodes. The following is a step-by-step guide on how to get a HiveMQ cluster up and running on AWS with auto discovery via S3 buckets.

Deploy 2 or more EC2 instances with AMI

In this tutorial EC2 instances without an AWS Virtual Private Cloud are used, because this makes the general cluster setup easier to understand. We recommend running all HiveMQ cluster nodes in a VPC, so only the relevant MQTT ports are open to the public. If you are familiar with the necessary steps to configure a VPC on AWS, it is very straightforward to apply the described steps to a VPC.

  1. Open EC2 Console

  2. Go to Instances in the menu

  3. Click on Launch Instance

  4. Select Amazon Linux AMI 2014.03.1 with 64bit (or newer) or your favorite Linux distribution

AMI - Amazon Linux
The advantage of the Amazon Linux AMI is that Java and other useful tools are already installed, so you can get started right away.
  5. Choose the appropriate Tier for your use case

AWS Tiers
HiveMQ runs on all tiers; which tier you need depends on how many client connections and messages per second you expect.
  6. Click on Next:Configure Instance Details

    1. Number of instances: 2 (or more)

    2. Request Spot Instances: false

    3. Network: EC2-Classic

    4. IAM role: none

    5. Shutdown behavior: Stop

Instance Details and Storage
The provided details are ideal for a simple default setup. If you are familiar with AWS you can configure your preferred settings.
  7. Click on Next:Add Storage

  8. Click on Next:Tag Instance

  9. Click on Next:Configure Security Groups

    • Create a new Security Group with the following rules

      Type Protocol Port Range Source Comment

      SSH

      TCP

      22

      Anywhere

      SSH Access

      Custom TCP

      TCP

      1883

      Anywhere

      MQTT

      Custom TCP

      TCP

      8000

      Anywhere

      MQTT over Websockets

      Custom TCP

      TCP

      7800-7805

      Anywhere

      Cluster (TCP)

      Custom TCP

      TCP

      9750-9800

      Anywhere

      Cluster (FD_Sock)

  10. Click on Review and Launch

  11. Click on Launch

  12. Create or use an existing Key Pair

Key Pair

The key pair is your key to access all the instances. At creation time Amazon will show you a secret access key and an access key. They won’t be displayed again, so make sure to store them in a safe place.

  13. Wait until all newly created instances have a green status

Configure your S3 Bucket

  1. Open the S3 Console

  2. Click Create Bucket

  3. Choose a name and a region near to your location
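
If you prefer the command line over the console, the bucket can also be created with the AWS CLI; the bucket name hivemq-discovery and the region are placeholder values, pick your own.

    # create the discovery bucket (bucket name and region are example values)
    aws s3 mb s3://hivemq-discovery --region eu-west-1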

Install and Configure HiveMQ

  1. SSH to the machines. You need two things: the key file and the IP address or the Amazon DNS name (something like ec2-255-255-255-255.eu-west-1.compute.amazonaws.com) of the machines

    ssh -l ec2-user -i keypair.pem <IPaddress>
  2. Transfer HiveMQ to each instance

    1. Get your evaluation version from our website.

    2. Copy the provided download link and download HiveMQ

      wget --content-disposition <your download link>
    3. Or copy HiveMQ with scp from your computer

      scp -i keypair.pem ~/Downloads/hivemq.zip ec2-user@<IPaddress>:/home/ec2-user
    4. Unzip HiveMQ

      sudo su
      cp /home/ec2-user/hivemq.zip /opt/
      cd /opt
      unzip hivemq.zip
      mv hivemq-<version> hivemq
Run HiveMQ as a service?
The complete install instructions can be found here and these include all details on how to install HiveMQ as a service on different operating systems.

Prepare HiveMQ for cluster usage

Install and configure the S3 discovery plugin. See the S3 plugin page for more information.

Change your conf/config.xml to something like:

<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
        <websocket-listener>
            <port>8000</port>
            <bind-address>0.0.0.0</bind-address>
            <path>/mqtt</path>
        </websocket-listener>
    </listeners>

    <cluster>
        <enabled>true</enabled>
        <transport>
            <tcp>
                <bind-port>7800</bind-port>
            </tcp>
        </transport>
        <discovery>
            <plugin>
                <reload-interval>30</reload-interval>
            </plugin>
        </discovery>
        <failure-detection>
            <tcp-health-check>
                <bind-port>9750</bind-port>
                <port-range>50</port-range>
            </tcp-health-check>
        </failure-detection>
    </cluster>

</hivemq>

Test the Cluster

  1. SSH to each instance

    ssh -l ec2-user -i keypair.pem <IP Address>
  2. Start HiveMQ on each instance

    sudo su
    /opt/hivemq/bin/run.sh
  3. Wait for the following lines

    Cluster size = 2, members : [0QMpE, jw8wu].

    You have successfully formed a HiveMQ Cluster!

  4. Test the Cluster functionality

    • Start a Subscriber to one of the instances (for example mosquitto_sub)

      mosquitto_sub -h <IPaddress-instance1> -t test
    • Publish to another instance

      mosquitto_pub -h <IPaddress-instance2> -t test -m "ClusterTest"
    • Everything works perfectly if the subscribing client gets the test message.

Need help?
If you need professional support for your HiveMQ cluster, dc-square GmbH offers consulting for configuring, optimizing and managing HiveMQ clusters for you. Contact us to find out how we can help you.

Restart the Cluster with Persistent Data

Since HiveMQ 3.2.0, the persistent data is moved to a backup folder hivemq/data/cluster-backup automatically, as soon as you start HiveMQ with cluster mode enabled. This does not cause the cluster to lose any data when a node is restarted, since all the data is replicated to the remaining nodes. If you want to shut down and restart an entire cluster without losing the persistent data, there are a few things to consider.

If the replicate-count is configured to be zero, restarting a node will cause persistent data to be lost. Therefore it is highly recommended to have at least one replica (the default) if the cluster holds any persistent data.

The persistent data consists of the following:

  • retained messages

  • subscriptions of persistent session clients

  • queued messages of persistent session clients

  • the current state of each individual message transmission that is not yet completed

Shutting down the Cluster

In order to make sure that the data is not scattered across multiple nodes, it is important to shut down one node at a time. Make sure that the last running node has enough disk space to store the data of the entire cluster. The last node that is shut down must be the first to restart.

Restarting the Cluster

Go to the hivemq/bin folder of the last instance that was shut down and execute the stateful-cluster.sh file. This starts HiveMQ as usual but without moving the data to a backup folder. As soon as the first instance is running, you can start the other instances with the run.sh file, as usual. In general you should avoid starting more than one HiveMQ instance with the stateful-cluster.sh file, as this could lead to inconsistencies in the cluster. However, in case all remaining instances of a cluster (more than one) are shut down unexpectedly at roughly the same time, it is necessary to start all those instances with the stateful-cluster.sh file, to prevent the loss of data. A sketch of the restart sequence follows.
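
A minimal sketch, assuming the default /opt/hivemq installation directory:

# on the node that was shut down last: start with the existing persistent data
cd /opt/hivemq/bin
./stateful-cluster.sh

# on every other node, start normally once the first node is up
cd /opt/hivemq/bin
./run.sh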

PROXY Protocol

HiveMQ supports the PROXY protocol for all listener types. The PROXY protocol is a TCP-based protocol that allows transporting client details like IP address and port across multiple proxies. This is very useful if you are running your HiveMQ brokers behind a load balancer that proxies the TCP connection: useful client information like IP address, port and SSL information would otherwise be lost, since the broker only "sees" the TCP connection of the load balancer and not that of the original client.

HiveMQ is compatible with all load balancers and proxies that support the PROXY protocol in version 1 or 2. Both standard and custom TLVs are supported (TLVs are used to carry additional information like X509 client certificate details).

How does the PROXY protocol work?

The PROXY protocol adds meta information about the proxied TCP client (which happens to be an MQTT client in HiveMQ scenarios) at the beginning of the TCP stream. This means the PROXY protocol information is the first information that is sent over the wire, even before the MQTT CONNECT message.

(Figure: PROXY protocol header preceding the MQTT session in the TCP stream)
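
For illustration, this is what the start of a connection looks like with PROXY protocol version 1 (the text-based variant); the addresses and ports are example values, and the header line is terminated with CRLF as defined by the PROXY protocol specification:

PROXY TCP4 198.51.100.22 203.0.113.7 56324 1883
<the raw MQTT CONNECT packet follows immediately>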

HiveMQ is protocol-agnostic and allows you to connect to the same listener with and without the PROXY header present.

PROXY protocol and TLS
If you’re using TLS in conjunction with the PROXY protocol, the TLS handshake needs to be finished before PROXY protocol information is sent.

When to use the PROXY protocol?

MQTT 3.1 and 3.1.1 do not provide any headers or metadata that would allow carrying the original client’s IP address to the MQTT broker (like HTTP’s X-Forwarded-For header does) in case the original TCP connection is proxied by a load balancer.

It’s beneficial to use the PROXY protocol whenever the MQTT clients don’t connect directly to the broker but to a load balancer or proxy. The load balancer or proxy must be configured to send the PROXY protocol information to HiveMQ, though.
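
As an example, a load balancer like HAProxy can be instructed to prepend the PROXY protocol header with its send-proxy-v2 (or send-proxy, for version 1) server option. This is a minimal sketch, assuming a single HiveMQ node at 10.0.0.10:

# minimal HAProxy TCP proxying with the PROXY protocol v2 towards HiveMQ
frontend mqtt_in
    bind *:1883
    mode tcp
    default_backend mqtt_brokers

backend mqtt_brokers
    mode tcp
    server hivemq1 10.0.0.10:1883 send-proxy-v2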

A benefit of using the PROXY protocol, even if you don’t want to utilize the PROXY protocol information in your HiveMQ plugins, is that debug logging also uses the original client’s IP address forwarded by the proxy. There is no downside to enabling the PROXY protocol, so it may be useful to enable it. Keep in mind that you should only do this on listeners that aren’t public facing but are only available to trusted resources, though.

Plugin System

With the HiveMQ plugin system you have complete access to all PROXY protocol information of MQTT clients. So it’s possible to add custom business logic to your plugins based on the PROXY information. You can even use custom TLVs if your proxy supports it.

You can learn more about PROXY information and the plugin system in the HiveMQ plugin documentation.

Configuration

Both versions, 1 (which is text based) and 2 (which is binary), are supported by HiveMQ transparently. This means you only need to specify that you want to use the PROXY protocol; it is not necessary to configure a specific version, as HiveMQ figures out the correct version automatically.

The PROXY protocol is not enabled by default for listeners, so you need to enable it manually for all listeners that should use the PROXY protocol. The following snippet shows how you can enable the proxy protocol for specific listeners:

Enabling the PROXY protocol for a listener
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

     <listeners>
         <tcp-listener>
             <port>1883</port>
             <bind-address>0.0.0.0</bind-address>
             <proxy-protocol>true</proxy-protocol>
         </tcp-listener>
     </listeners>


</hivemq>

All listeners support the PROXY protocol, so you can also use the PROXY protocol in conjunction with websockets or TLS. Using plain TCP is the most common way, though.
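
For example, enabling the PROXY protocol on a TLS listener is a matter of adding the same <proxy-protocol> element to the listener in question. A sketch (keystore path and passwords are placeholders):

<listeners>
    <tls-tcp-listener>
        <port>8883</port>
        <bind-address>0.0.0.0</bind-address>
        <proxy-protocol>true</proxy-protocol>
        <tls>
            <keystore>
                <path>/path/to/the/key/store.jks</path>
                <password>password-keystore</password>
                <private-key-password>password-key</private-key-password>
            </keystore>
        </tls>
    </tls-tcp-listener>
</listeners>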

Custom TLVs

HiveMQ supports custom Type-Length-Values. Not all load balancers support TLVs, though. These TLVs are available since PROXY protocol version 2.

TLVs are (potentially cascaded) key-value pairs that carry arbitrary data like SSL X509 client certificate Common Names or the TLS version used by a client that connected to the proxy. There are pre-defined TLVs in the PROXY protocol spec but there is nothing that prevents you from bringing your own TLVs. With the HiveMQ plugin system you can use all TLVs that are sent from the PROXY or load balancer in your own plugins.

Plugins

One of the outstanding features of HiveMQ is the ability to extend it with virtually any functionality you can imagine. HiveMQ’s plugin system is highly event and callback oriented, which allows a deep integration of HiveMQ into your existing application and IT infrastructure.

A list of officially supported off-the-shelf plugins can be found here.

There are also Enterprise Integrations available if you need a more complex integration of HiveMQ. Learn more about Enterprise Integrations on the website.

Developing custom plugins
To get started with writing plugins, you should take a look at our official Plugin Developer Guide.

Installing Plugins

The installation of plugins is very simple: download the zip archive of your desired plugin and drop the jar file into the HiveMQ plugins folder. Plugins which expect configuration files also come with one or more config files. Drop these files into the conf folder. The steps below walk through the process; a shell sketch follows the list.

Instructions for installing plugins
  1. Download a plugin from the HiveMQ plugin store

  2. Unzip the contents of your download.

  3. Copy the jar file to $HIVEMQ_DIRECTORY/plugins.

  4. Copy the configuration file to the $HIVEMQ_DIRECTORY/conf folder

  5. Modify the properties of the plugin according to the plugin’s installation guide or README file.
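
Sketched as shell commands, with a hypothetical plugin archive name and the default /opt/hivemq installation directory:

    # hypothetical archive and file names; check your plugin's README for the real ones
    unzip mqtt-message-log-plugin.zip
    cp mqtt-message-log-plugin.jar /opt/hivemq/plugins/
    cp mqtt-message-log-plugin.properties /opt/hivemq/conf/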

Validating if the plugin works

HiveMQ logs every plugin it loads at startup. A typical log entry with a plugin enabled looks like the following listing:

Startup log with an enabled plugin
2015-09-06 17:34:23,936 INFO  - Loaded Plugin HiveMQ MQTT Message Log Plugin - v1.0.0

Configuring Plugins

Every plugin comes with its own configuration file (if it needs any). Please read the installation guide or the README file which comes with every officially supported plugin.

Developing Plugins

HiveMQ can be deeply integrated into your existing application and IT infrastructure. For complex or unique scenarios there is often no one-size-fits-all plugin. HiveMQ gives you the ability to write your own plugins for your very own integration scenarios.

The public SDK is available for free and open source on Github. For more information about developing HiveMQ plugins, please refer to our extensive Plugin Documentation.

Professional plugin development

For individually developed and fully supported Enterprise Integration plugins which are guaranteed to scale with the HiveMQ MQTT broker, don’t hesitate to contact us. Plugin development trainings and workshops are also available. Please contact sales@hivemq.com for more details.

REST Service

HiveMQ has an inbuilt server for HTTP content.

It is possible to use servlets as well as JAX-RS resources.

The following configuration shows how to bind multiple HTTP listeners:

REST Service example configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <rest-service>
        <servlet-path>/servlet</servlet-path>
        <jax-rs-path>/*</jax-rs-path>
        <listeners>
            <http-listener>
                <name>monitoring</name>
                <bind-address>192.168.1.2</bind-address>
                <port>8080</port>
            </http-listener>
            <http-listener>
                <name>reporting</name>
                <bind-address>192.168.1.2</bind-address>
                <port>8081</port>
            </http-listener>
        </listeners>
    </rest-service>
    ...
</hivemq>

An in-depth explanation and further examples are supplied in the HiveMQ - Plugin Developer Guide.

Throttling & Limits

CPU, memory and bandwidth are limited resources. It can be crucial to limit the bandwidth or the maximum TCP connections HiveMQ can accept to save resources and avoid abuse by malicious clients.

All global throttling and limit settings are configured in the throttling element of the config.xml file.

All throttling parameters can be changed through the plugin system at runtime, which means you don’t have to restart HiveMQ to apply new throttling parameters. You can implement your own heuristics for throttling and change the throttling behaviour at runtime.

Limiting connections

You can apply a global connection limit to HiveMQ. That means that once the defined number of open MQTT connections is reached, HiveMQ automatically rejects new client connections. Limiting the connections can be pretty handy if your server hardware resources are limited.

By default, the max-connections property is set to -1, which means HiveMQ can handle unlimited connections. [5].

The following example shows how to configure HiveMQ to allow a maximum of 100 concurrent connections.

Limiting concurrent connections
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
   <throttling>
       <max-connections>100</max-connections>
       ...
   </throttling>
    ...
</hivemq>

If there are multiple listeners configured, be aware that the concurrent connections are global for this HiveMQ instance.

Connection Limits
If you set a connection limit which is higher than the connection limit defined by your license, the higher limit will have no effect and the lower of the two limits will be used.

Throttling bandwidth

You can configure HiveMQ to globally throttle the incoming and outgoing network traffic for MQTT clients if you don’t want to reserve all your bandwidth for HiveMQ or if you want to artificially slow down all clients.

Throttling message throughput
A convenient way of limiting the message throughput of HiveMQ when you have many subscribers is to throttle the incoming traffic. If your system resources such as CPU and RAM are at a premium, this is an efficient and quick way to limit the maximum number of incoming messages. A sane global throttling default can prevent resource exhaustion.

The following example limits the incoming and outgoing traffic to 1 kB/s (1024 bytes per second).

Limiting bandwidth
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
   <throttling>
       <outgoing-limit>1024</outgoing-limit>
       <incoming-limit>1024</incoming-limit>
       ...
   </throttling>
    ...
</hivemq>
Throttling incoming traffic

To throttle incoming traffic, you can set the property incoming-limit to a value higher than 0. A value of 0 means that throttling is disabled. All values are interpreted as bytes per second.

Throttling outgoing traffic

To throttle outgoing traffic, you can set the property outgoing-limit to a value higher than 0. A value of 0 means that throttling is disabled. All values are interpreted as bytes per second.

Throttling individual clients
With HiveMQ’s powerful Plugin System it is possible to throttle individual clients based on credentials, quotas or other characteristics.

Limiting MQTT message sizes

The MQTT protocol itself defines a MQTT PUBLISH payload size limit of 256MB. For a majority of the M2M and IoT scenarios this is a very high limit and most clients will probably never send a message with a payload which is that huge.

A convenient way to set an upper message size limit is to use HiveMQ’s global message size limit, which restricts not only the MQTT PUBLISH payload but any message sent to the server.

To limit the size of incoming messages, set the property max-message-size in the config.xml file. The value of the property is the maximum message size in bytes. 256MB is the default value, as specified in the MQTT specification.

If a client sends a message which is bigger than the defined value, the server discards the message once the threshold is exceeded. HiveMQ cannot detect that a message exceeds the defined value before that many bytes have actually been sent by the client.

The following example shows how to configure a global maximum message size of 1 MB.

Limiting message size
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
   <throttling>
       <!-- Throttling to 1 MB -->
       <max-message-size>1048576</max-message-size>
       ...
   </throttling>
    ...
</hivemq>
Malicious Clients
When you don’t limit the MQTT message size, it is very easy for attackers or malicious clients to steal your bandwidth and exhaust your server’s memory by sending tens or hundreds of such huge messages at once over a longer period. You should strongly consider using an upper limit which best suits your use case.

Websockets

HiveMQ offers native support for all common websocket versions. All major browsers are supported. Here is an exhaustive list of all supported browsers.

When using websockets with HiveMQ, there is no need to use a separate webserver for handling the HTTP requests and the websocket upgrade, HiveMQ handles this transparently.

There is also support for secure websockets out of the box. Secure Websockets allow secure communication over websockets and are a great way to increase security for your application. See the secure websockets configuration chapter for more information.

Usage

Essentially, websockets enable every web application to be a full-fledged MQTT client. With a supported MQTT Javascript library you can deliver MQTT messages to your web application with real push behaviour. Every websocket connection gets treated like a standard TCP connection and HiveMQ does not differentiate between websocket and standard TCP connections.

Supported MQTT Javascript Libraries

In general, HiveMQ supports any MQTT 3.1 and MQTT 3.1.1 compliant Javascript library which utilizes websockets. We recommend using the Eclipse Paho Javascript library or the MQTT.js library.
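
As a minimal sketch of how such a library connects through a websocket listener, using the Eclipse Paho Javascript client; the host broker.example.com is a placeholder, and port 8000 with path /mqtt must match your websocket listener configuration:

// minimal Paho Javascript client connecting to a HiveMQ websocket listener
var client = new Paho.MQTT.Client("broker.example.com", 8000, "/mqtt", "web-client-1");
client.onMessageArrived = function (message) {
    console.log(message.destinationName + ": " + message.payloadString);
};
client.connect({
    onSuccess: function () {
        client.subscribe("test/topic");
    }
});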

Enabling Websockets

To enable websockets, you need to specify a websocket listener. The following properties can be configured for a websocket listener:

Table 28. Properties of a websocket listener
Property Name Description

bind-address

The IP address on which websockets will be bound

port

The port used for websockets.

subprotocols

The subprotocols used for websockets. When using Paho.js this should be mqttv3.1 or mqtt

path

The path for the websocket

allow-extensions

Defines if websocket extensions are allowed

The default subprotocols are mqttv3.1 and mqtt. If you really want to use a websocket subprotocol other than mqttv3.1 and mqtt, set subprotocols to any value you like. Please bear in mind that HiveMQ only implements MQTT 3.1 and MQTT 3.1.1, so things could break.

The following example shows a configuration of the websocket listener

Websocket Listener
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

   <listeners>
      <websocket-listener>
          <port>8000</port>
          <bind-address>0.0.0.0</bind-address>
          <path>/mqtt</path>
          <subprotocols>
              <subprotocol>mqttv3.1</subprotocol>
              <subprotocol>mqtt</subprotocol>
          </subprotocols>
          <allow-extensions>true</allow-extensions>
      </websocket-listener>
       ...
   </listeners>
    ...
</hivemq>

Enabling secure Websockets

To enable secure websockets, use a tls-websocket-listener. Most properties of the Enabling Websockets chapter also apply for secure websockets.

The following example shows the configuration of a secure websocket listener:

Secure Websocket Listener
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

   <listeners>
      <tls-websocket-listener>
          <port>8000</port>
          <bind-address>0.0.0.0</bind-address>
          <path>/mqtt</path>
          <subprotocols>
              <subprotocol>mqttv3.1</subprotocol>
              <subprotocol>mqtt</subprotocol>
          </subprotocols>
          <allow-extensions>true</allow-extensions>
           <tls>
               <keystore>
                   <path>/path/to/the/key/store.jks</path>
                   <password>password-keystore</password>
                   <private-key-password>password-key</private-key-password>
               </keystore>
               <client-authentication-mode>NONE</client-authentication-mode>
           </tls>
      </tls-websocket-listener>
       ...
   </listeners>
    ...
</hivemq>
You can use standard websockets and secure websockets simultaneously when configuring different listeners.

Gotchas

Although a connection over websockets is not much different from a standard TCP connection for HiveMQ, there are some gotchas you should be aware of. Most of them occur because of incomplete or incorrect websocket implementations in browsers:

  • Only binary websocket frames are supported as MQTT is a binary protocol. All MQTT Javascript libraries should support Binary Websockets [6].

  • If you are using Secure Websockets, make sure that you don’t use a self signed certificate or that your browser accepts the certificate. In most cases you can browse directly to the Endpoint (use https!) and accept the certificate. Unfortunately most browsers do not support accepting untrusted certificates for websockets out of the box.

  • Client Certificate Authentication *should* work in theory but most browsers have severe bugs in this area at the time of writing. Although client certificate authentication is a great way to increase security, it is recommended to test thoroughly if your targeted browsers work for you.

  • If you are using secure websockets, many Chrome versions have issues with establishing the connection if you did not 'visit' the resource via classic HTTP GET first. A suitable workaround for the issue is putting the following ajax call in your Javascript before calling 'connect':

    $.ajax({
        url: 'https://' + host + ':' + port + '/mqtt',
        success: function(result) {
            console.log(result);
        },
        error: function(jqXHR, textStatus, errorThrown) {
            console.log('Error: ' + textStatus + ' ' + errorThrown, jqXHR);
        }
    });

Shared Subscriptions

Shared Subscriptions are a unique feature of HiveMQ which allows MQTT clients to share the same subscription on the broker. When using "standard" MQTT subscriptions, each client receives a copy of the message. If shared subscriptions are used, all clients which share the same subscription will receive messages in an alternating fashion. This mechanism is sometimes called "client load balancing", we’re sticking to the Shared Subscription terminology in this user guide, though.

Clients can subscribe to a shared subscription with standard MQTT mechanisms. The topic structure for shared subscriptions is the following:

$share:GROUPID:TOPIC

or

$share/GROUPID/TOPIC

The shared subscription consists of 3 parts:

  • A static shared subscription identifier (“$share”)

  • A group identifier

  • The concrete topic subscriptions (may include wildcards)

A concrete example for such a subscriber would be $share:my-shared-subscribers:myhome/groundfloor/+/temperature.

Another example for such a subscriber, using the alternative syntax, would be $share/my-shared-subscribers/myhome/groundfloor/+/temperature.

It’s important to understand that only one subscriber per group identifier will receive the message. So if multiple MQTT clients subscribe with the same group identifier to a topic, HiveMQ will distribute the message among them in an alternating fashion.

The group-id must not contain the ":" character. All other UTF-8 characters are allowed.

HiveMQ supports a shared subscription syntax that separates the shared subscriptions parts with either a colon ( : ) or with a slash ( / ).
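
A quick way to see the alternating delivery in action is with the mosquitto command line clients; the broker host is a placeholder, and the topic is quoted so the shell does not expand the $:

# terminals 1 and 2: two subscribers in the same shared subscription group
mosquitto_sub -h broker.example.com -t '$share/my-shared-subscribers/test/topic'

# terminal 3: publish a few messages; each message arrives at only one subscriber
mosquitto_pub -h broker.example.com -t 'test/topic' -m "message 1"
mosquitto_pub -h broker.example.com -t 'test/topic' -m "message 2"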

Use Cases

There are many use cases for shared subscriptions which excel most in high-scalability scenarios. Among the most popular use cases are:

  • Client Load Balancing for MQTT clients which can’t handle the load on subscribed topics on their own.

  • Worker (backend) applications which ingest MQTT streams and need to be scaled horizontally.

  • Intra-cluster node traffic should be relieved by optimizing subscriber node-locality for incoming publishes.

  • QoS 1 and 2 are used for their delivery semantics but Ordered Topic guarantees are not needed.

  • There are hot topics with higher message rate than other topics in the system and these topics become the scalability bottleneck.

Shared Subscriptions Explained

With standard publish / subscribe mechanisms every subscriber gets its own copy of every message which matches the subscribed topic. When using shared subscriptions, each subscription group, which can be conceptually imagined as a virtual client, acts as proxy for multiple real subscribers at once. HiveMQ then selects one subscriber of the group and delivers the message. The following picture demonstrates the principle:

(Figure: shared subscriptions)

There can be an arbitrary number of Shared Subscription Groups in a HiveMQ deployment. So for example the following scenario would be possible:

(Figure: shared subscriptions with multiple groups)

In this example there are two different groups with 2 subscribing clients in each shared subscription group. Both groups have the same subscription but have different group identifiers. When a publisher sends a message with a matching topic, one (and only one) client of each group receives the message.

It’s possible that different clients have different subscriptions for the same group identifier. In this case HiveMQ filters by matching subscribers per group and then distributes the message to one of the found clients. While technically possible, this can cause lots of confusion in understanding the message flow in your system. Our recommendation is to go with identical client subscriptions per shared subscription group.

Shared Subscriptions in Single Node Deployments

In HiveMQ Single Node Deployments, the distribution mode of messages for a shared subscription group is round-robin. This guarantees that the load is distributed evenly across all active subscribers in the same shared subscription group.

Shared Subscriptions in Cluster Deployments

Shared Subscriptions are designed to relieve cluster traffic and latency dramatically for high scalability deployments. In fact, Shared Subscriptions are the recommended way to connect horizontally scaling backend systems with HiveMQ if the backend systems need to ingest data via MQTT.

While Single Node deployments guarantee a round-robin behaviour for messages, these guarantees are not in place for cluster deployments. In HiveMQ cluster deployments, messages are distributed via a probabilistic algorithm. If a PUBLISH is received on a specific node and one or more shared subscribers are available on the same node, these local shared subscribers have a higher probability of receiving the message. A small percentage of messages will still hit other cluster nodes, and these cluster nodes distribute the message among their shared subscribers.

It’s worth noting that in cluster deployments no round-robin algorithm is used for distribution (not even among subscribers on the same node); the messages are distributed randomly.

Shared subscriptions + Offline clients

Offline clients with persistent sessions are not considered by the shared subscription distribution algorithm as long as online subscribers are available. If no subscribers of a shared subscription group are connected (anymore), the messages are distributed across the offline clients, which means the messages are queued for QoS > 0. QoS 0 messages won’t be queued for offline clients.

When a client’s offline queue is full, the message for that client won’t be dropped but queued for the next offline client in the shared subscription group.

Shared subscriptions + QoS Levels

It is currently infeasible to guarantee QoS 2 for shared subscriptions, as assumptions about client state would be required. Shared subscriptions with QoS 2 are downgraded to QoS 1.

It’s highly recommended that all shared subscribers for a group subscribe with the same Quality of Service level to avoid complex situations which are hard to debug.

Members in a shared subscription group can subscribe with different QoS levels, though. When a client is selected by the Shared Subscription algorithm, the QoS level will be evaluated and the message will be sent with the correct QoS level.

If possible, always use the same QoS level for a shared subscription group.

Migrating from older HiveMQ Versions

HiveMQ 3.0 introduced the concept of Shared Subscriptions. These shared subscriptions were global rules on topics and didn’t have a group identifier. Clients also didn’t have any mechanism to subscribe to specific groups on their own.

All shared subscription topics needed to be defined either via the configuration file or the plugin system.

The new Shared Subscriptions concept introduced in HiveMQ 3.1 is more powerful and extremely flexible. You can still use the old semantics, though:

  • All subscriptions which match the configured Shared Subscriptions in the config.xml are now added automatically to the Shared Subscription $share:default:<subscription>.

  • The same applies for the plugin system.

Instead of using this deprecated mechanism, please use the mechanisms described in this chapter. If you’re looking for the old way to implement shared subscriptions, take a look at the old documentation.

Running HiveMQ in the Cloud

Infrastructure as a Service (IaaS) providers like Amazon Webservices (AWS) and Microsoft Windows Azure are excellent choices for building reliable and scalable infrastructures for your services or platforms. HiveMQ runs perfectly on any cloud infrastructure provider. This chapter covers step-by-step example configurations for popular IaaS providers. The prerequisite for every IaaS provider listed here is that you have a valid account for the desired service.

Running HiveMQ on Amazon Webservices (AWS)

This section explains how to install HiveMQ on an Amazon EC2 instance. The steps may vary when using Windows as the operating system.

  1. Navigate to Instances on your EC2 Dashboard in the AWS Management Console.

  2. Click Launch Instance and start the wizard to launch a new instance. Select the Ubuntu 12.04 LTS AMI. [7]

  3. Follow the wizard and configure your EC2 instance.

  4. Create a new Security Group called MQTT with a custom TCP Rule with port 1883.

  5. Enable SSH Access: Create a Security Rule with a custom TCP Rule with port 22.

  6. Finish the wizard to start the EC2 instance creation.

  7. Now connect to your server instance via SSH.

  8. Execute the following command to install all required dependencies:

    sudo apt-get install unzip openjdk-7-jdk
  9. Get your evaluation version from our website.

  10. Copy the provided download link and download HiveMQ

    wget --content-disposition <your download link>
  11. Unzip HiveMQ:

    unzip hivemq-3.x.x.zip
  12. Start HiveMQ:

    cd hivemq-3.x.x
    ./bin/run.sh

Running HiveMQ on Microsoft Windows Azure

This section explains how to install HiveMQ on an Azure Linux VM. The steps may vary when using Windows as the operating system.

  1. Create a new Virtual Machine via the Azure Web Interface. Select Ubuntu 12.04 LTS as Image. [7]

  2. After the Virtual Machine was created, we have to open the standard MQTT port which happens to be 1883. Go to the Endpoint Settings of your VM in the Azure Web Interface and create a new Endpoint. Use TCP as protocol and set the public and private port to 1883.

  3. Now connect to your server instance via SSH.

  4. Execute the following command to install all required dependencies:

    sudo apt-get install unzip openjdk-7-jdk
  5. Get your evaluation version from our website.

  6. Copy the provided download link and download HiveMQ

    wget --content-disposition <your download link>
  7. Unzip HiveMQ:

    unzip hivemq-3.x.x.zip
  8. Start HiveMQ:

    cd hivemq-3.x.x
    ./bin/run.sh

Diagnostics Mode

HiveMQ comes with a so-called Diagnostics Mode which collects data about the system HiveMQ is installed on. This provides valuable information for our support team when resolving issues on a concrete HiveMQ installation.

The diagnostic mode is disabled by default and should only be enabled when you are facing an issue with your installation. Performance decreases and HiveMQ writes lots of information to disk, so this mode is not meant to be used in production.

Enabling Diagnostics Mode

In order to enable the diagnostics mode, modify the run.sh file for Linux systems or the run.bat file for Windows systems and uncomment the following line(s):

Enabling diagnostics mode for Linux
# Uncomment for enabling Diagnostic Mode
JAVA_OPTS="$JAVA_OPTS -DdiagnosticMode=true"
Enabling diagnostics mode for Windows
rem Uncomment for enabling diagnostic mode
set "JAVA_OPTS=-DdiagnosticMode=true %JAVA_OPTS%"

When configured correctly, HiveMQ will log a statement similar to this:

2015-09-09 12:59:25,669 INFO  - Starting with Diagnostic mode

Sending diagnostics file to the HiveMQ support team

After starting with diagnostics mode, HiveMQ will create a folder called diagnostics. All diagnostic information is located here. The following files are created:

File Name Description

diagnostics.txt

Diagnostic information about HiveMQ and the system HiveMQ is running on

tracelog.log

A trace log of HiveMQ

Run the diagnostic mode as long as you need to reproduce the issue you want to have solved. After that, stop HiveMQ. Review the created files to make sure you are comfortable with the information they include, and edit them if you feel something is too sensitive to share with our support team. Then send all files in the diagnostics folder to support@hivemq.com and describe your problem as specifically as possible.
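
A convenient way to do this is to compress the whole diagnostics folder into a single archive first, for example with the zip utility:

    zip -r hivemq-diagnostics.zip diagnostics/

The archive name is only a suggestion; attach the resulting file to your e-mail to support.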

When running in diagnostic mode for some time, the log file gets very large, so be sure to run in diagnostic mode only as long as you need to demonstrate the issue you are facing.

Appendix

The appendix covers additional topics relevant to HiveMQ.

Appendix A: HowTo configure server-side SSL/TLS with HiveMQ and Keytool (self-signed)

The next sections show you step by step how to configure server-side SSL/TLS with HiveMQ. In this scenario only the client validates the server certificate; the server does not authenticate clients.

Generate a server side certificate for HiveMQ

  • Execute the following command

    keytool -genkey -keyalg RSA -alias hivemq -keystore hivemq.jks -storepass changeme -validity 360 -keysize 4096

    It is highly recommended to use a strong password instead of changeme!

  • Enter all prompted information for the certificate.

    The first question asks about your first and last name. This is the common name of your certificate. Please enter the hostname under which your MQTT clients will connect, e.g. broker.yourdomain.com (for production) or localhost (for development).
  • Confirm the correct entries with yes.

  • Determine the password for the newly generated key

    • (HiveMQ 2.x) Press Enter to use the same password as for the keystore.

    • (HiveMQ 3.x) It is highly recommended to use a different password for the key than for the key store itself

  • Place the hivemq.jks file in the HiveMQ directory and add a TLS listener to config.xml

    <listeners>
    ...
        <tls-tcp-listener>
            <port>8883</port>
            <bind-address>0.0.0.0</bind-address>
            <tls>
                <keystore>
                    <path>hivemq.jks</path>
                    <password>your-keystore-password</password>
                    <private-key-password>your-key-password</private-key-password>
                </keystore>
                <client-authentication-mode>NONE</client-authentication-mode>
            </tls>
        </tls-tcp-listener>
    ...
    </listeners>
    The two passwords your-keystore-password and your-key-password depend on the passwords you entered in the previous steps.

Now we have successfully created a server-side key store and enabled HiveMQ to speak SSL/TLS.
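
If you want to verify the listener without an MQTT client, you can inspect the TLS handshake with OpenSSL (assuming HiveMQ runs on localhost):

    openssl s_client -connect localhost:8883

The output should contain the certificate details you entered above. Because the certificate is self-signed, OpenSSL reports a verification error at this point; that is expected.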

Generate a PEM client certificate (e.g. mosquitto_pub/_sub)

When connecting with the mosquitto_pub/_sub utilities, a PEM file of the certificate is required so that the mosquitto clients can validate the self-signed certificate. You need to have access to the server key store generated in the above steps.

  • Export a PEM file from the server key store

    keytool -exportcert -keystore hivemq.jks -alias hivemq -keypass your-key-password -storepass your-keystore-password -rfc -file hivemq-server-cert.pem
    Please use your chosen key store password instead of your-keystore-password and your chosen key password instead of your-key-password.
  • Use it with mosquitto_pub

    mosquitto_pub -t "test/topic" -m "TLS works with client PEM" -p 8883 --cafile hivemq-server-cert.pem
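
To see the message arrive, you can subscribe with mosquitto_sub in a second terminal before publishing:

    mosquitto_sub -t "test/topic" -p 8883 --cafile hivemq-server-cert.pem -v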

Generate a client JKS trust store (e.g. Paho Java)

When connecting with any Java MQTT client, a JKS client trust store is required in order to validate the self-signed certificate. You need to have access to the server key store generated in the above steps.

  • Export the server certificate from the server key store

    keytool -export -keystore hivemq.jks -alias hivemq -storepass your-keystore-password -file hivemq-server.crt
    Please use your chosen key store password instead of your-keystore-password.
  • Generate a client truststore

    • Execute the following command

      keytool -import -file hivemq-server.crt -alias HiveMQ -keystore mqtt-client-trust-store.jks -storepass changeme

      It is highly recommended to use a strong password instead of changeme!

    • Confirm the certificate with yes, before the truststore is successfully created.

  • Connect with Paho

    This is only a simple example of how to test whether TLS is working. For production use, a more sophisticated implementation is necessary.
    Connect to HiveMQ with SSL using Eclipse Paho
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.security.KeyStore;

    import javax.net.SocketFactory;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManagerFactory;

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class PublisherSSL {

        private static Logger log = LoggerFactory.getLogger(PublisherSSL.class);

        public static void main(String... args) {
            try {
                String clientId = "sslTestClient";
                MqttClient client = new MqttClient("ssl://localhost:8883", clientId, new MemoryPersistence());

                MqttConnectOptions mqttConnectOptions = new MqttConnectOptions();

                try {
                    // Use the custom socket factory so the client trusts the self-signed server certificate
                    mqttConnectOptions.setSocketFactory(getTruststoreFactory());
                } catch (Exception e) {
                    log.error("Error while setting up TLS", e);
                }

                client.connect(mqttConnectOptions);
                client.publish("test/topic", "TLS works with client JKS!".getBytes(), 0, false);
                client.disconnect();

            } catch (MqttException e) {
                log.error("MQTT Exception:", e);
            }
        }

        public static SocketFactory getTruststoreFactory() throws Exception {

            // Load the client trust store created in the previous step
            KeyStore trustStore = KeyStore.getInstance("JKS");
            try (InputStream in = new FileInputStream("mqtt-client-trust-store.jks")) {
                trustStore.load(in, "your-client-trust-store-password".toCharArray());
            }

            TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trustStore);

            // The trust managers validate the server certificate; no client certificate is sent
            SSLContext sslCtx = SSLContext.getInstance("TLSv1.2");
            sslCtx.init(null, tmf.getTrustManagers(), null);
            return sslCtx.getSocketFactory();
        }
    }
    Don’t forget to replace the passwords in the snippet with your chosen passwords!

Appendix B: HowTo configure SSL/TLS with client certificates(self-signed) for HiveMQ using Keytool

Using PEM files (for mosquitto_pub/_sub and other clients)

  1. Follow these sections from Appendix A: Generate a server side certificate for HiveMQ and Generate a PEM client certificate

  2. Make sure the server side TLS works with mosquitto_pub/_sub and HiveMQ

  3. Generate client certificates

    This step needs to be done for each client that connects to HiveMQ. This is also not the only way to create certificates; there are different options depending on your use case and capabilities.
    1. Generate client certificate for PEM-based clients like mosquitto_sub/_pub

      1. Execute the following command for each client

        openssl req -x509 -newkey rsa:4096 -keyout mqtt-client-key-2.pem -out mqtt-client-cert-2.pem -days 360
      2. Enter all prompted information for the certificate. You end up with two PEM files: mqtt-client-key-2.pem and mqtt-client-cert-2.pem

  4. Create a trust store for HiveMQ

    When a client connects with a certificate, HiveMQ needs a trust store in order to validate the self-signed certificates. You need to have access to all certificate PEM files generated in the previous step.

    1. Convert the client certificate from the PEM file into DER format

      openssl x509 -outform der -in mqtt-client-cert-2.pem -out mqtt-client-cert-2.crt
    2. Generate one common server truststore

      1. Execute the following command for each client certificate

        keytool -import -file mqtt-client-cert-2.crt -alias client2 -keystore hivemq-trust-store.jks -storepass changeme

        It is highly recommended to use a strong password instead of changeme!

      2. Confirm the certificate with yes, before the truststore is successfully created.

  5. Change the config.xml in the HiveMQ directory

    <listeners>
    ...
        <tls-tcp-listener>
            ...
            <tls>
                <keystore> ... </keystore>
                <client-authentication-mode>REQUIRED</client-authentication-mode>
                <truststore>
                        <path>hivemq-trust-store.jks</path>
                        <password>your-hivemq-trust-store-password</password>
                </truststore>
            </tls>
        </tls-tcp-listener>
    ...
    </listeners>
    • Connect with mosquitto_pub/_sub to HiveMQ

      In the following command we use the server certificate (hivemq-server-cert.pem) from Appendix A as CA file and the just generated PEM files to set up full TLS client authentication with HiveMQ

      mosquitto_pub -t "test" -m "test" -p 8883 --cert mqtt-client-cert-2.pem --key mqtt-client-key-2.pem --cafile hivemq-server-cert.pem
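
If the connection fails, you can debug the mutual TLS handshake directly with OpenSSL, using the same client key pair (assuming HiveMQ runs on localhost):

    openssl s_client -connect localhost:8883 -cert mqtt-client-cert-2.pem -key mqtt-client-key-2.pem -CAfile hivemq-server-cert.pem

A successful handshake prints the session details; if HiveMQ rejects the client certificate, the connection is closed during the handshake.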

Using Java Key stores (Eclipse Paho Java clients)

  1. Follow these sections from Appendix A: Generate a server side certificate for HiveMQ and Generate a client JKS trust store

  2. Make sure the server side TLS works with Eclipse Paho and HiveMQ

  3. Generate client certificates

    This step needs to be done for each client that connects to HiveMQ. This is also not the only way to create certificates; there are different options depending on your use case and capabilities.
    1. Generate client certificate for each Eclipse Paho Java client

      1. Execute the following command for each client

        keytool -genkey -keyalg RSA -alias mqtt-paho-client-1 -keystore mqtt-paho-client-1.jks -storepass changeme -validity 360 -keysize 4096

        It is highly recommended to use a strong password instead of changeme!

      2. Enter all prompted information for the certificate.

  4. Generate one common server truststore for HiveMQ

    When a client connects with a certificate, HiveMQ needs a trust store in order to validate the self-signed certificates. You need to have access to all client key stores generated in the previous step.

    1. Export the client certificate from each of the client key stores

      keytool -export -keystore mqtt-paho-client-1.jks -alias mqtt-paho-client-1 -storepass your-client-keystore-password -file mqtt-paho-client-1.crt
      Please use your chosen key store password instead of your-client-keystore-password.
    2. Generate a server truststore

      1. Execute the following command for each client certificate

        keytool -import -file mqtt-paho-client-1.crt -alias client1 -keystore hivemq-trust-store.jks -storepass changeme

        It is highly recommended to use a strong password instead of changeme!

      2. Confirm the certificate with yes, before the truststore is successfully created.

  5. Change the config.xml in the HiveMQ directory

    <listeners>
    ...
        <tls-tcp-listener>
            ...
            <tls>
                <keystore> ... </keystore>
                <client-authentication-mode>REQUIRED</client-authentication-mode>
                <truststore>
                        <path>hivemq-trust-store.jks</path>
                        <password>your-hivemq-trust-store-password</password>
                </truststore>
            </tls>
        </tls-tcp-listener>
    ...
    </listeners>
    • Connect with Paho

      In the code sample we use the client trust store from Appendix A and the just generated client key store to set up full TLS client authentication with Eclipse Paho for Java

      This is only a simple example of how to test whether TLS is working. For production use, a more sophisticated implementation is necessary.
      Connect to HiveMQ with SSL using Eclipse Paho
      import java.io.FileInputStream;
      import java.io.InputStream;
      import java.security.KeyStore;

      import javax.net.SocketFactory;
      import javax.net.ssl.KeyManagerFactory;
      import javax.net.ssl.SSLContext;
      import javax.net.ssl.TrustManagerFactory;

      import org.eclipse.paho.client.mqttv3.MqttClient;
      import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
      import org.eclipse.paho.client.mqttv3.MqttException;
      import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
      import org.slf4j.Logger;
      import org.slf4j.LoggerFactory;

      public class PublisherClientCertSSL {

          private static Logger log = LoggerFactory.getLogger(PublisherClientCertSSL.class);

          public static void main(String... args) {
              try {
                  String clientId = "sslTestWithCert";
                  MqttClient client = new MqttClient("ssl://localhost:8883", clientId, new MemoryPersistence());

                  MqttConnectOptions mqttConnectOptions = new MqttConnectOptions();

                  try {
                      mqttConnectOptions.setSocketFactory(getTruststoreFactory());
                  } catch (Exception e) {
                      log.error("Error while setting up TLS", e);
                  }

                  client.connect(mqttConnectOptions);
                  client.publish("test", "test".getBytes(), 1, true);
                  client.disconnect();

              } catch (MqttException e) {
                  log.error("MQTT Exception:", e);
              }
          }

          public static SocketFactory getTruststoreFactory() throws Exception {

              // Create key store: holds the client certificate and private key for client authentication
              KeyStore keyStore = KeyStore.getInstance("JKS");
              try (InputStream inKey = new FileInputStream("mqtt-paho-client-1.jks")) {
                  keyStore.load(inKey, "your-client-key-store-password".toCharArray());
              }

              KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
              kmf.init(keyStore, "your-client-key-password".toCharArray());

              // Create trust store: holds the self-signed server certificate for server validation
              KeyStore trustStore = KeyStore.getInstance("JKS");
              try (InputStream in = new FileInputStream("mqtt-client-trust-store.jks")) {
                  trustStore.load(in, "your-client-trust-store-password".toCharArray());
              }

              TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
              tmf.init(trustStore);

              // Build SSL context with key managers (client authentication) and trust managers (server validation)
              SSLContext sslCtx = SSLContext.getInstance("TLSv1.2");
              sslCtx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
              return sslCtx.getSocketFactory();
          }
      }
      Don’t forget to replace the passwords in the snippet with your chosen passwords!
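
To double-check that every client certificate was imported, you can list the entries of the server trust store (use the store password you chose when creating it):

    keytool -list -keystore hivemq-trust-store.jks -storepass changeme

Each imported client certificate appears as a trustedCertEntry with the alias you assigned.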

Appendix C: HowTo configure SSL with HiveMQ and Portecle (self-signed)

  • Download the newest version of Portecle from here and unzip it

  • Start Portecle with a double-click on the jar or java -jar portecle.jar in the console

    Portecle after start
    Figure 4. Portecle after start
  • Firstly create the server Keystore

    • Choose File → New Keystore from the menu and select JKS as Keystore type in the popup window

      Choose Java Keystore
      Figure 5. Choose Java Keystore
    • Create a new Key Pair: Tools → Generate Key Pair

    • Choose a Key Algorithm and Key Size

      Common Key Algorithm and Size
      Figure 6. Common Key Algorithm and Size
    • Enter the Signature Algorithm (recommended is SHA512withRSA) and the Certificate Details

      Certificate Signature Algorithm and Details
      Figure 7. Certificate Signature Algorithm and Details
    • Choose a Key Pair Entry Alias.

      An alias for the key pair
      Figure 8. An alias for the key pair
    • Set a password for the Key Pair Entry

      A password to protect the private key
      Figure 9. A password to protect the private key
      Successful Generation of Key Pair
      Figure 10. Successful Generation of Key Pair
    • Save the Keystore as server.jks: File → Save Keystore As…​

      Save the Keystore
      Figure 11. Save the Keystore
    • Export the certificate for the client: Right Click on the certificate key pair and click Export

    • Select Head Certificate as Export Type and choose an export format (recommended is PEM)

      Export Details
      Figure 12. Export Details
    • Choose directory to save the certificate

      Export Successful
      Figure 13. Export Successful
  • Secondly create the client Keystore

    • Choose File → New Keystore from the menu and also select JKS as Keystore Type

    • Import the just saved certificate via Tools → Import Trusted Certificate

    • Select the previously exported certificate

      Import server head certificate
      Figure 14. Import server head certificate
    • Confirm the message that the trust path could not be established. This is because the certificate is self-signed and no certificate chain can verify it.

      Warning because of the self-signed certificate
      Figure 15. Warning because of the self-signed certificate
    • Confirm the shown certificate details

      Certificate Details
      Figure 16. Certificate Details
    • Trust the certificate by clicking Yes.

      Accept to trust our created self-signed certificate for the server
      Figure 17. Accept to trust our created self-signed certificate for the server
    • Enter an alias

      Alias for the server certificate in the client keystore
      Figure 18. Alias for the server certificate in the client keystore
      Successful import of the server certificate in the client keystore
      Figure 19. Successful import of the server certificate in the client keystore
    • Save the Keystore as client.jks with File → Save Keystore As…​

  • Place the server.jks in the HiveMQ directory, create the following configuration, and save it as configuration.properties

    ssl.enabled=true
    keystore.location={HIVEMQ_HOME}/server.jks
    keystore.password=yourSpecifiedPassword
  • Use client.jks to connect with a client
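
As a quick sanity check, you can list the contents of both key stores with keytool; replace yourSpecifiedPassword with the passwords you set in Portecle:

    keytool -list -keystore server.jks -storepass yourSpecifiedPassword
    keytool -list -keystore client.jks -storepass yourSpecifiedPassword

The server key store should contain a PrivateKeyEntry, the client key store a trustedCertEntry for the imported server certificate.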


1. at the time of writing, most attacks on TLS are of a theoretical nature
2. Technically, TLS works at OSI Layers 5 and 6. For the sake of simplicity we call it the transport layer throughout the document
3. We strongly recommend refreshing your knowledge of private and public key cryptography if you are not sure what this means. This link provides useful information.
4. see this blog post
5. This is only true if your license allows unlimited connections. Otherwise you are restricted to the maximum number of connections defined by your license
6. It would hardly make sense to use Text Websocket Frames for a binary protocol like MQTT.
7. You can of course use other Linux distributions or Windows. In this case you will most likely have to use tools other than apt-get to install dependencies such as Java.