Configuration

Configuration Files

HiveMQ is configured with sensible default settings, so most users will find the default values sufficient to get started. All configuration files are located in the conf folder of the HiveMQ directory.

HiveMQ uses a simple but powerful XML-based configuration.

Autocompletion
An XML Schema Definition (XSD) file is available in the conf folder. Sophisticated text editors provide autocompletion and validation for your config.xml file based on this XSD file.

The config.xml file is read only once, during HiveMQ startup. A HiveMQ restart is required for changes made during runtime to take effect. Many settings can be changed at runtime with a custom plugin, though.

Default Configuration

HiveMQ is designed to use sensible default values. By default, the standard TCP listener binds to all interfaces on port 1883.

In addition to the TCP listener configuration, which is visible in the default configuration file, the following restrictions are configured by default:

  • The maximum allowed client identifier length is 65535 bytes

  • The maximum number of queued (in-flight) messages is set to 1000. When that limit is reached, HiveMQ will drop messages for that client

  • No maximum concurrent connection limit is applied (unless you have a license which is only valid for a specific number of concurrent connections)

  • No throttling will take place

  • Clients get disconnected if they don’t send a CONNECT message within 10 seconds of opening the TCP connection

  • HiveMQ will check for updates. See the Update Check chapter for more details

HiveMQ Default Config
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
    </listeners>
    <mqtt>
        <max-client-id-length>65535</max-client-id-length>
        <retry-interval>0</retry-interval>
        <no-connect-packet-idle-timeout-millis>10000</no-connect-packet-idle-timeout-millis>
        <max-queued-messages>1000</max-queued-messages>
    </mqtt>
    <throttling>
        <max-connections>-1</max-connections>
        <max-message-size>268435456</max-message-size>
        <outgoing-limit>0</outgoing-limit>
        <incoming-limit>0</incoming-limit>
    </throttling>
    <general>
        <update-check-enabled>true</update-check-enabled>
    </general>

</hivemq>
Example Configurations
HiveMQ comes with many example configurations to get you started quickly. All example configurations reside in the conf/examples/configuration folder. If you want to use one of the example configurations, copy it to the conf folder and name it config.xml.

Changing settings with the Plugin System

Static configuration files are insufficient for some use cases. Sometimes settings need to be read from a database, a web service needs to be called for configuration details in the case of centralized configuration storage, or settings need to be changed at runtime.

HiveMQ’s powerful plugin system allows easy implementation of these kinds of requirements and exposes many services for reconfiguring HiveMQ at runtime.

The plugin development guide shows in detail how to use the plugin configuration services.

Environment variables

In many cases, such as containerized environments, it can be beneficial or even necessary to configure ports, bind addresses, and similar settings via environment variables on the system HiveMQ runs on.
HiveMQ supports this by providing placeholders, which are replaced with the content of environment variables at the time the configuration file is read.

You can use ${YOUR_ENVVAR_NAME} anywhere in the config.xml file and it will be replaced with the value of the specified environment variable.

Set environment variable
export HIVEMQ_PORT=1883
Use the environment variable in the configuration file
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    <listeners>
        <tcp-listener>
            <port>${HIVEMQ_PORT}</port>
        </tcp-listener>
    </listeners>
</hivemq>
For HiveMQ this will result in
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    <listeners>
        <tcp-listener>
            <port>1883</port>
        </tcp-listener>
    </listeners>
</hivemq>
Make sure that HiveMQ is started in the same context in which your environment variables are set; otherwise HiveMQ will not be able to access them.
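The placeholder substitution described above can be sketched in a few lines (a simplified illustration of the behaviour, not HiveMQ’s actual implementation; the function name substitute_env is hypothetical):

```python
import os
import re

def substitute_env(config_text):
    # Replace each ${NAME} placeholder with the value of the
    # environment variable NAME; leave unknown placeholders untouched.
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        config_text,
    )

os.environ["HIVEMQ_PORT"] = "1883"
print(substitute_env("<port>${HIVEMQ_PORT}</port>"))  # <port>1883</port>
```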

Update Check

HiveMQ has an automatic update check which writes to the log file when a new version is available.

The update check will send anonymized data about the HiveMQ installation.

The following data is included:

  • The HiveMQ version

  • The HiveMQ id

  • Information about the system (VM Information, System Architecture (e.g. x86_64), OS Information (e.g. Windows, Linux))

  • Information about installed plugins (name, version)

You can disable the automatic update check in the config.xml at any time. To disable the update check, apply the following configuration:

Disable the update check
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <general>
        <update-check-enabled>false</update-check-enabled>
    </general>
    ...

</hivemq>

Manually set specific HiveMQ folders

HiveMQ allows you to set specific folders manually for easier maintenance.

To do so, add one or more of the following options to your bin/run.sh file.

JAVA_OPTS="$JAVA_OPTS -D...=/your/folder/here"

Alternatively you can define environment variables.

export ...=/your/folder/here

If both a Java option and environment variable are set for the same folder, the value of the Java option is used.

Table 1. Folder Configuration Options

  Java Option             Environment Variable      Affected folder
  hivemq.home             HIVEMQ_HOME               Base folder (bin needs to be a sub folder of this folder)
  hivemq.license.folder   HIVEMQ_LICENSE_FOLDER     License files folder
  hivemq.log.folder       HIVEMQ_LOG_FOLDER         Log files folder
  hivemq.config.folder    HIVEMQ_CONFIG_FOLDER      Configuration files folder
  hivemq.plugin.folder    HIVEMQ_PLUGIN_FOLDER      Plugin binaries folder
  hivemq.data.folder      HIVEMQ_DATA_FOLDER        HiveMQ data folder

Example for Java option:

JAVA_OPTS="$JAVA_OPTS -Dhivemq.home=/mqtt/broker/hivemq"

Example for environment variable:

export HIVEMQ_HOME=/mqtt/broker/hivemq

Both examples set the HiveMQ home folder to /mqtt/broker/hivemq.
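The precedence rule can be sketched as follows (a hypothetical illustration; resolve_home and the dictionary of Java system properties are not part of HiveMQ):

```python
import os

def resolve_home(java_system_properties, default="/opt/hivemq"):
    # A -Dhivemq.home Java option wins over the HIVEMQ_HOME
    # environment variable, which wins over the default.
    if "hivemq.home" in java_system_properties:
        return java_system_properties["hivemq.home"]
    return os.environ.get("HIVEMQ_HOME", default)

os.environ["HIVEMQ_HOME"] = "/mqtt/broker/hivemq"
print(resolve_home({"hivemq.home": "/opt/custom"}))  # /opt/custom
print(resolve_home({}))                              # /mqtt/broker/hivemq
```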

IPv6

IPv6 is an internet protocol standard and the successor of the established IPv4. Since its standardization in 1998, its usage has been continuously increasing.

With a few small adjustments, HiveMQ is able to operate using IPv6. Here is a guide to using IPv6 for our more experimental users.

Running HiveMQ with IPv6 in a production environment is currently not supported.

Necessary changes

HiveMQ uses IPv4 addresses by default. This setting can be changed in the run-script:

#Stop preferring IPv4 addresses
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=false"

#Prefer IPv6 addresses
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv6Addresses=true"

The bind-address in your config needs to be configured with an IPv6 address:

config.xml
<?xml version="1.0"?>
<hivemq>

    <listeners>
        <tcp-listener>
            <port>1883</port>
            <!-- '::' instead of 0.0.0.0 -->
            <bind-address>::</bind-address>
        </tcp-listener>
    </listeners>

</hivemq>

Available listeners with IPv6

All listener types can be used with IPv6:

  • TCP

  • TLS [1]

  • WebSocket

  • Secure WebSocket [2]

Cluster discovery

When using IPv6, some restrictions apply in terms of cluster discovery. The following list shows the availability of each possible HiveMQ cluster discovery option:

  • Static discovery with IPv6 works as usual.

  • Broadcast discovery with IPv6 is not available.

  • Multicast discovery can’t be used for IPv6 in the current version of HiveMQ.

Special-use addresses

Table 2. IPv6 special-use addresses

  Special-use address   IPv4        IPv6 counterpart
  Any                   0.0.0.0     ::
  Loopback              127.0.0.1   ::1
  Default multicast     228.8.8.8   ff0e::8:8:8

MQTT Specific Configuration Options

HiveMQ is 100% compliant with the MQTT 3.1 and MQTT 3.1.1 specifications. For those parts of the specifications that are left open to the broker implementation, HiveMQ uses sensible default values for all MQTT related settings.

Table 3. MQTT Configuration Options

  Configuration                           Default   Description
  max-client-id-length                    65535     The maximum allowed length of an MQTT client identifier.
  retry-interval                          0         The retry interval (in seconds) for re-sending MQTT messages (such as QoS 1 and QoS 2 messages) to a client when the previous message was not acknowledged.
  no-connect-packet-idle-timeout-millis   10000     The time (in milliseconds) HiveMQ waits before disconnecting a TCP connection when no CONNECT packet arrives.
  max-queued-messages                     1000      The maximum allowed size of the in-flight message queue.
  client-session-ttl                      -1        The maximum allowed time to live value for client sessions.
  publish-ttl                             -1        The maximum allowed time to live value for publishes.
  retained-publish-ttl                    -1        The maximum allowed time to live value for retained publishes.

Maximum Client Identifier length

The MQTT specification 3.1.1 allows client identifier lengths up to 65535 bytes.

Most applications don’t need such long client identifiers, so it may be useful to restrict the client identifier length to your use case.

The following example shows how to change the maximum client identifier length limit:

Change the maximum client identifier length
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <mqtt>
        <!-- Restrict to 23 bytes to strictly conform to the MQTT 3.1 specification -->
        <max-client-id-length>23</max-client-id-length>
    </mqtt>
    ...

</hivemq>

If a client uses a client identifier with more bytes in the CONNECT message, HiveMQ rejects the connection with a Connection Refused, identifier rejected error code.

MQTT 3.1 client identifier length
MQTT 3.1 defined an artificial length restriction of 23 bytes. HiveMQ omits this restriction and also allows up to 65535 bytes for MQTT 3.1 client identifiers. You can of course set the max-client-id-length value to 23 to enforce that limit.
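A minimal sketch of such a length check, assuming the limit applies to the UTF-8 encoded length of the identifier (client_id_accepted is a hypothetical helper, not HiveMQ code):

```python
def client_id_accepted(client_id, max_client_id_length=65535):
    # Compare the UTF-8 encoded length of the identifier
    # against the configured max-client-id-length limit.
    return len(client_id.encode("utf-8")) <= max_client_id_length

print(client_id_accepted("sensor-42", max_client_id_length=23))  # True
print(client_id_accepted("a" * 24, max_client_id_length=23))     # False
```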

Retry Interval

MQTT systems implement the Quality of Service 1 guarantees with a two-way and the Quality of Service 2 guarantees with a four-way message flow. We recommend reading this blog post if you’re interested in the details of how the Quality of Service flows work. The broker (as well as the client) is required to re-send a message if it never received an acknowledgement for that message in the quality of service flow.

By default, HiveMQ does not retry sending a message.

If you regularly deal with very unreliable networks and very high latency, it may be useful to resend messages in case an acknowledgement takes a long time to arrive.

Setting a retry interval
We strongly recommend not setting a retry interval value for QoS 2 use cases. If you decide to do so, you have to make sure the consumers can appropriately handle the duplicate message flag.

The following example shows how to set a retry interval:

Change the retry interval
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <mqtt>
        <!-- Resend messages only once a minute-->
        <retry-interval>60</retry-interval>
    </mqtt>
    ...

</hivemq>
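The resend decision described above can be sketched like this (an illustration only; should_resend is a hypothetical helper, not HiveMQ’s internal logic):

```python
def should_resend(retry_interval, seconds_since_send, acknowledged):
    # A retry-interval of 0 (the default) disables re-sending entirely;
    # otherwise resend once the interval has elapsed without an ack.
    if acknowledged or retry_interval == 0:
        return False
    return seconds_since_send >= retry_interval

print(should_resend(60, 75, acknowledged=False))  # True
print(should_resend(0, 75, acknowledged=False))   # False
```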

Connection Timeouts

MQTT, as a layer 7 protocol in the OSI model, relies on TCP, so clients are required to open a TCP connection before they can send an MQTT CONNECT message to initiate the MQTT connection.

Because MQTT operates at the application layer, a client that initiates a TCP connection does not necessarily initiate an MQTT connection. Malicious MQTT clients could therefore drain server resources by opening a TCP connection and never sending an MQTT CONNECT message. These kinds of clients can attack your MQTT broker by quickly draining system resources (like memory).

To avoid these kinds of attacks, it’s important to disconnect clients that don’t initiate an MQTT connection as soon as possible.

By default, HiveMQ waits 10 seconds for the CONNECT message of a client before it closes the open TCP socket. You can tune this behaviour to your application’s needs.

Change the idle timeout
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <mqtt>
        <!-- Disconnect idle clients after 10 seconds -->
        <no-connect-packet-idle-timeout-millis>10000</no-connect-packet-idle-timeout-millis>
    </mqtt>
    ...

</hivemq>
Clients with a slow connection
If you have clients that use a network with very high latency, a few seconds might not be enough, and these clients could get disconnected even though they try to send a CONNECT message. Make sure the timeout fits your use case.

Maximum Queued In-Flight Messages

The MQTT specification states that topics must be treated as Ordered Topics. Ordered topics guarantee that each QoS 1 or 2 message flow for a specific topic finishes before the next QoS message flow starts. This guarantees that all QoS 1 and 2 messages are delivered in order.

HiveMQ treats all topics which are subscribed by a specific client as Ordered Topics.

This also means that HiveMQ queues QoS 1 and 2 messages which can’t be delivered to a client immediately because another message flow for the topic is in progress. If the client consumes messages more slowly than HiveMQ receives new matching messages, these new messages queue up for that specific client. Because this can quickly drain system resources (memory), HiveMQ discards messages once the configured max-queued-messages limit is exceeded for a specific client.

The queued message limit applies to all queued messages of an individual client, independent of the number of topics the client has subscribed to.

The following example sets a lower queued in-flight message limit:

Configure a lower queued messages limit
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <mqtt>
        <!-- Queue fewer in-flight messages per client -->
        <max-queued-messages>500</max-queued-messages>
    </mqtt>
    ...

</hivemq>
This setting configures queued in-flight messages. There is also a similar setting in the persistence configuration which is used for queued messages of an offline client with a persistent session. Don’t mix up these two concepts!

Topic and Client Id UTF-8-Validation

The utf8-validation value defines whether or not the broker checks UTF-8 validity for topic names and client identifiers. By default, UTF-8 validation is enabled.

Topic and Client Id UTF-8-Validation configuration example
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    ...
    <mqtt>
        <!-- Enables the UTF-8 validation of topic names and client ids -->
        <utf8-validation>true</utf8-validation>
    </mqtt>
    ...

</hivemq>
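A minimal sketch of what such a validation might check, assuming well-formed UTF-8 without NUL characters (is_valid_utf8_string is a hypothetical helper, not HiveMQ code):

```python
def is_valid_utf8_string(raw_bytes):
    # The bytes must decode as UTF-8 and must not contain the
    # NUL character, which MQTT forbids in UTF-8 encoded strings.
    try:
        text = raw_bytes.decode("utf-8")
    except UnicodeDecodeError:
        return False
    return "\x00" not in text

print(is_valid_utf8_string(b"sensors/temperature"))  # True
print(is_valid_utf8_string(b"\xff\xfe"))             # False
```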

Time to Live

HiveMQ allows the configuration of an amount of time after which a certain type of data expires. A general time to live for each type can be set in the config.xml; it may be overridden by plugins for a more specific configuration. These configurations can help free up resources like disk space or RAM. In scenarios where a type of data (e.g. retained messages) is no longer useful after an extended period of time, it is recommended to configure an appropriate time to live.

Setting a time to live may also protect the broker from unexpected client behaviour. For example, if data is stored for a client ID or a topic that is never reused or removed by a client, it is stored indefinitely unless a time to live is configured.

Resources are not freed immediately on expiration. The data is marked as expired and ignored by the broker until it is eventually cleaned up.

Client Session Time to Live

The client session time to live only applies to clients that connected with clean session = false. It describes the amount of time (in seconds) that has to pass after the client disconnects before its session expires. If the client (same client ID) reconnects with clean session = false before the time to live ends, the timer is reset. All subscriptions, queued messages and unfinished message transmissions associated with the session are removed on expiration. By default, the client session time to live is set to -1, which is treated as unlimited, so the session never expires. The maximum value for the client session time to live is 2147483647.

Changing the Client Session Time to Live Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <!-- Changing client session time to live to one hour -->
    <mqtt>
        <client-session-ttl>3600</client-session-ttl>
    </mqtt>
    ...
</hivemq>
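The expiry rule can be sketched as follows (an illustration under the assumptions above; session_expired is a hypothetical helper, not HiveMQ internals):

```python
def session_expired(disconnected_at, ttl_seconds, now):
    # A TTL of -1 means unlimited: the session never expires.
    # Otherwise the session expires once ttl_seconds have passed
    # since the client disconnected.
    if ttl_seconds == -1:
        return False
    return now - disconnected_at >= ttl_seconds

print(session_expired(disconnected_at=0, ttl_seconds=3600, now=4000))  # True
print(session_expired(disconnected_at=0, ttl_seconds=-1, now=10**9))   # False
```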

Publish Message Time to Live

The publish message time to live describes the amount of time (in seconds) that has to pass after a publish message arrives at the broker before it expires. An expired publish message is never published. Not publishing due to expiry can happen in the following cases:

  • a client consumes the message too slowly.

  • the message has been queued for an offline client and the client consumes the message too slowly.

By default, the publish message time to live is set to -1, which is treated as unlimited, so the publish message never expires. The maximum value for the publish message time to live is 2147483647.

Changing the Publish Message Time to Live Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <!-- Changing publish message time to live to one hour -->
    <mqtt>
        <publish-ttl>3600</publish-ttl>
    </mqtt>
    ...
</hivemq>

Retained Message Time to Live

The retained message time to live describes the amount of time (in seconds) that has to pass after the retained message arrives at the broker before it expires. An expired retained message is never published. Not publishing due to expiry can happen in the following cases:

  • a client subscribes to the topic of the message after it expired.

  • a client consumes the message too slowly.

By default, the retained message time to live is set to -1, which is treated as unlimited, so the retained message never expires. The maximum value for the retained message time to live is 2147483647.

Changing the Retained Message Time to Live Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <!-- Changing retained message time to live to one hour -->
    <mqtt>
        <retained-publish-ttl>3600</retained-publish-ttl>
    </mqtt>
    ...
</hivemq>

Persistence Configuration Options

In order to guarantee consistency of data between broker restarts, HiveMQ uses disk persistence by default. That means that even if the broker stops or crashes, all data is preserved, and after a restart the broker can continue its operation as if nothing happened.

It’s also possible to configure in-memory persistence, which can significantly improve performance at the cost of losing all state when the broker stops.

All persistence data is stored in the data folder of HiveMQ. If you want to reset HiveMQ, just delete the data folder when HiveMQ is stopped.

A common requirement is to read and write the persistent files on disk with other tools. However, HiveMQ’s persistence is designed for high throughput and lowest latency and is not a general-purpose store, let alone a database. To implement such requirements, the plugin system should be used.

Persistence components

The HiveMQ Persistence Subsystem consists of the following components which can be configured separately:

Table 4. Persistence Components

  Name                                          Description
  Client Session Persistence                    The persistence store for persistent session information.
  Client Session Queued Messages Persistence    The persistence store for queued messages of offline clients.
  Client Session Subscriptions Persistence      The persistence store for subscriptions of a persistent session.
  Client Incoming Message Flow Persistence      The persistence store for incoming QoS 1 and 2 message flows.
  Client Outgoing Message Flow Persistence      The persistence store for outgoing QoS 1 and 2 message flows.
  Client Retained Message Persistence           The persistence store for retained messages.
  Publish Payload Persistence                   The persistence store for publish payloads.
  Session Attribute Persistence                 The persistence store for session attributes.
  Client Group Persistence                      The persistence store for client groups.

Client Session Persistence Configuration

The Client Session Persistence is responsible for storing all data about a persistent session.

Don’t set this store to in-memory mode if you want the other client session stores to use disk-based persistence!

The Client Session Persistence has the following configuration options:

Table 5. Configuration options

  Name   Default   Description
  mode   file      in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Session Persistence Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <persistence>
        <client-session>
            <!-- Changing the Client Session Persistence Configuration -->
            <general>
                <mode>in-memory</mode>
            </general>
        </client-session>
    </persistence>
    ...
</hivemq>

Client Session Queued Messages Persistence Configuration

The Client Session Queued Messages Persistence is responsible for storing the queued messages for offline clients.

When a client with a persistent session subscribes to a topic with QoS 1 or 2, HiveMQ will save all missed messages for these topics if the client goes offline.

Queuing unlimited messages for offline clients can drain system resources (disk space), so HiveMQ limits the saved messages for each client to a specific number.

The Client Session Queued Messages Persistence has the following configuration options:

Table 6. Configuration options

  Name                       Default   Description
  mode                       file      in-memory for memory based persistence, file for disk based persistence
  max-queued-messages        1000      The maximum number of queued messages for a specific client. When that limit is reached, HiveMQ will drop messages for that client.
  queued-messages-strategy   discard   The strategy when the maximum number of queued messages is reached: discard drops new messages, discard-oldest drops the oldest queued message when a new message arrives.

The following example shows how to set these configuration options:

Configure the Client Session Queued Messages Persistence
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <persistence>
        <client-session>
            <queued-messages>
                <!-- Limit the maximum queued messages per client to 100 -->
                <max-queued-messages>100</max-queued-messages>
                <!-- Discard the oldest message if a new message arrives -->
                <queued-messages-strategy>discard-oldest</queued-messages-strategy>
                <!-- Use in-memory persistence -->
                <mode>in-memory</mode>
            </queued-messages>
        </client-session>
    </persistence>
    ...
</hivemq>
Performance impact
When using file persistence, the discard-oldest strategy has a higher performance impact than the discard strategy.
Message Queuing for offline clients
Only QoS 1 and 2 messages for persistent MQTT sessions are queued. When no messages are queued for a client, a classic mistake is to have forgotten to use a persistent session or to subscribe with QoS 1 or 2.
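The two queued-messages strategies can be sketched like this (a simplified illustration; the enqueue helper is hypothetical, not HiveMQ’s implementation):

```python
from collections import deque

def enqueue(queue, message, limit, strategy):
    # Returns the dropped message, or None if nothing was dropped.
    if len(queue) < limit:
        queue.append(message)
        return None
    if strategy == "discard":
        return message             # drop the new message
    dropped = queue.popleft()      # discard-oldest: drop the oldest queued message
    queue.append(message)
    return dropped

q = deque()
for i in range(5):
    enqueue(q, i, limit=3, strategy="discard-oldest")
print(list(q))  # [2, 3, 4]
```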

Client Session Subscriptions Persistence Configuration

Persistent clients don’t lose their granted subscriptions, even if they go offline and later reconnect.

HiveMQ persists these subscriptions by default to disk, so they are not lost even if HiveMQ restarts. The Client Session Subscriptions Persistence is responsible for storing these subscriptions.

The Client Session Subscription Persistence has the following configuration options:

Table 7. Configuration options

  Name   Default   Description
  mode   file      in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Session Persistence Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <persistence>
        <client-session>
            <!-- Changing the Client Session Subscription Persistence Configuration -->
            <subscriptions>
                <mode>in-memory</mode>
            </subscriptions>
        </client-session>
        ...
    </persistence>
    ...
</hivemq>

Client Incoming Message Flow Persistence Configuration

Quality of Service 2 messages need a four-way communication in order to guarantee the exactly-once semantics. MQTT clients and brokers must resume the message flows on reconnect, in case the client disconnected or the broker stopped.

The Incoming Message Flow Persistence stores the progress of the QoS 2 message flow for incoming MQTT PUBLISH messages. By default it uses disk persistence.

If you use in-memory mode, you could weaken the exactly-once semantics after a broker restart, so it’s strongly recommended to use the default persistence unless you’re running a HiveMQ HA cluster.

The Client Incoming Message Flow Persistence has the following configuration options:

Table 8. Configuration options

  Name   Default   Description
  mode   file      in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Incoming Message Flow Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <message-flow>
        <!-- Changing the incoming message flow -->
        <incoming>
            <mode>in-memory</mode>
        </incoming>
    </message-flow>
    ...
</hivemq>

Client Outgoing Message Flow Persistence Configuration

Quality of Service 1 and 2 messages need a two-way or four-way communication, respectively, in order to guarantee the QoS semantics. MQTT clients and brokers must resume the message flows on reconnect, in case the client disconnected or the broker stopped.

The Outgoing Message Flow Persistence stores the progress of the QoS 1 and 2 message flows for outgoing MQTT PUBLISH messages. By default it uses disk persistence.

If you use in-memory mode, you could weaken the QoS semantics after a broker restart, so it’s strongly recommended to use the default persistence unless you’re running a HiveMQ HA cluster.

The Client Outgoing Message Flow Persistence has the following configuration options:

Table 9. Configuration options

  Name   Default   Description
  mode   file      in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Outgoing Message Flow Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <message-flow>
        <!-- Changing the outgoing message flow -->
        <outgoing>
            <mode>in-memory</mode>
        </outgoing>
    </message-flow>
    ...
</hivemq>

Client Retained Message Persistence Configuration

HiveMQ stores the retained messages by default on disk, so even after broker restarts the retained messages are available to new subscribers.

If it’s not important for you to have retained messages available after broker restarts, it’s possible to use in-memory persistence.

The Retained Message Persistence has the following configuration options:

Table 10. Configuration options

  Name   Default   Description
  mode   file      in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Retained Message Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <!-- Changing the retained message to be in-memory -->
    <retained-messages>
        <mode>in-memory</mode>
    </retained-messages>
    ...
</hivemq>

Publish Payload Persistence Configuration

HiveMQ stores the publish payloads on disk by default, so even after broker restarts the publish payloads are still available.

The Publish Payload Persistence has the following configuration options:

Table 11. Configuration options

  Name   Default   Description
  mode   file      in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Publish Payload Persistence Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <!-- Changing the publish payload persistence to be in-memory -->
    <publish-payloads>
        <mode>in-memory</mode>
    </publish-payloads>
    ...
</hivemq>

Session Attribute Persistence Configuration

HiveMQ stores the session attributes on disk by default, so even after broker restarts the session attributes are still available.

The Session Attribute Persistence has the following configuration options:

Table 12. Configuration options

  Name   Default   Description
  mode   file      in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Session Attribute Persistence Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <!-- Changing the session attribute persistence to be in-memory -->
    <attribute>
        <mode>in-memory</mode>
    </attribute>
    ...
</hivemq>

Client Group Persistence Configuration

HiveMQ stores the client groups on disk by default, so even after broker restarts the client groups are still available.

The Client Group Persistence has the following configuration options:

Table 13. Configuration options

  Name   Default   Description
  mode   file      in-memory for memory based persistence, file for disk based persistence

The following example shows how to set these configuration options:

Changing the Client Group Persistence Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <!-- Changing the client group persistence to be in-memory -->
    <client-group>
        <mode>in-memory</mode>
    </client-group>
    ...
</hivemq>

File persistence

By default, all persistence stores are backed by the file system of the machine HiveMQ is running on. Even if HiveMQ stops or restarts, no state is lost: after a restart, all clients keep their persistent sessions, all queued messages for offline persistent clients are preserved, MQTT message flows can resume, and retained messages remain available.

HiveMQ employs various mechanisms to maximize read and write performance, but persisting data is ultimately I/O bound. If you find that disk reads and writes are too slow, better hardware (such as SSDs) can help boost performance.

While file persistence is the default mode, you can enable it explicitly by setting

<mode>file</mode>

for each individual persistence store. The following configuration file shows how to set file persistence explicitly for each persistence store.

File Persistence Configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <!-- Persistence with explicit file persistence set -->
    <persistence>
        <client-session>
            <general>
                <mode>file</mode>
            </general>
            <queued-messages>
                <mode>file</mode>
            </queued-messages>
            <subscriptions>
                <mode>file</mode>
            </subscriptions>
        </client-session>
        <message-flow>
            <incoming>
                <mode>file</mode>
            </incoming>
            <outgoing>
                <mode>file</mode>
            </outgoing>
        </message-flow>
        <retained-messages>
            <mode>file</mode>
        </retained-messages>
        <publish-payloads>
            <mode>file</mode>
        </publish-payloads>
        <attribute>
            <mode>file</mode>
        </attribute>
        <client-group>
            <mode>file</mode>
        </client-group>
    </persistence>
    ...
</hivemq>

If you are in doubt whether your disk performance is sufficient, we suggest benchmarking HiveMQ with QoS 1 and 2 messages and persistent clients. You can also look at the benchmarks on our website.
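As a rough first check before a full broker benchmark, you can probe the raw synchronous write latency of the disk that backs HiveMQ's data folder. The following Python sketch is purely illustrative (the write size and iteration count are assumptions, not HiveMQ internals); each iteration performs a write followed by an fsync, which approximates the cost of a durable disk write.

```python
import os
import tempfile
import time

def fsync_latency(path: str, writes: int = 100, size: int = 4096) -> float:
    """Measure the average time for one write + fsync, in milliseconds."""
    payload = b"x" * size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        start = time.perf_counter()
        for _ in range(writes):
            os.write(fd, payload)
            os.fsync(fd)  # force the write to stable storage
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.remove(path)
    return elapsed / writes * 1000.0

# Point the path at the disk that holds HiveMQ's data folder;
# a temporary directory is used here only as an example.
with tempfile.TemporaryDirectory() as tmp:
    avg_ms = fsync_latency(os.path.join(tmp, "probe.bin"))
    print(f"average write+fsync latency: {avg_ms:.2f} ms")
```

Latencies in the low milliseconds (or below, on SSDs) usually indicate that the disk will not be the first bottleneck; a full benchmark with real MQTT traffic remains the authoritative test.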

File Persistence Behaviour Configuration

If you are using file persistence, there are additional configuration parameters for the behavior of HiveMQ’s persistence implementation.

Experts only
HiveMQ provides sane default values for these settings; only change them if you understand the consequences.
Table 14. File persistence configurations

jmx-enabled (default: true)
Enables/disables JMX metrics for the internal file persistences.

garbage-collection-type (default: delete)
Possible values are delete and rename. With rename, files are renamed before they are eventually deleted.

garbage-collection-deletion-delay (default: 60000)
Time in milliseconds to wait before a file is deleted permanently.

garbage-collection-run-period (default: 30000)
Interval in milliseconds at which the background garbage collection is triggered.

garbage-collection-files-interval (default: 1)
Garbage collection is triggered after this many new files have been created.

garbage-collection-min-file-age (default: 2)
Minimum number of newer versions that must exist before a file is deleted.

sync-period (default: 1000)
Time in milliseconds to wait between flushes to disk.

durable-writes (default: false)
Whether each individual write is flushed to disk immediately.

File Persistence Behaviour Configuration Example
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="hivemq-config.xsd">

    ...
    <persistence>
       <client-session>
           <general>
               <mode>file</mode>
               <file-persistence-configuration>
                   <jmx-enabled>true</jmx-enabled>
                   <garbage-collection-type>delete</garbage-collection-type>
                   <garbage-collection-deletion-delay>60000</garbage-collection-deletion-delay>
                   <garbage-collection-run-period>30000</garbage-collection-run-period>
                   <garbage-collection-files-interval>1</garbage-collection-files-interval>
                   <garbage-collection-min-file-age>2</garbage-collection-min-file-age>
                   <sync-period>1000</sync-period>
                   <durable-writes>false</durable-writes>
               </file-persistence-configuration>
           </general>
       </client-session>
    </persistence>
    ...
</hivemq>
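The sync-period and durable-writes settings trade durability against throughput. The toy append-only log below is purely illustrative of that tradeoff, under simplified assumptions; it is not HiveMQ's actual persistence implementation.

```python
import os
import time

class WriteLog:
    """Toy append-only log illustrating durable-writes vs. sync-period.
    NOT HiveMQ's implementation; just a sketch of the tradeoff."""

    def __init__(self, path: str, durable_writes: bool = False,
                 sync_period_ms: int = 1000):
        self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
        self.durable_writes = durable_writes
        self.sync_period = sync_period_ms / 1000.0
        self.last_sync = time.monotonic()

    def append(self, data: bytes) -> None:
        os.write(self.fd, data)
        if self.durable_writes:
            # Every write is forced to stable storage: safest, but slowest.
            os.fsync(self.fd)
        elif time.monotonic() - self.last_sync >= self.sync_period:
            # Periodic flush: fast, but a crash can lose up to
            # sync-period worth of buffered writes.
            os.fsync(self.fd)
            self.last_sync = time.monotonic()

    def close(self) -> None:
        os.fsync(self.fd)
        os.close(self.fd)
```

With durable_writes enabled every append pays the full fsync cost; with periodic syncing, throughput is much higher but a hard crash can lose the writes made since the last flush, which is why the defaults favor periodic syncing.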

In-Memory persistence

If persistent clients do not need to retain their subscriptions, or you do not need HiveMQ to hold state between broker restarts, and you are looking for maximum performance, you should consider using in-memory persistence.

In-memory persistence offers excellent performance and low latencies at the cost of losing all state when HiveMQ restarts. This option is often used in a cluster, since new cluster nodes receive state from the other nodes on startup and cluster nodes are often ephemeral.

Each persistence store can be configured individually to run in file persistence or in-memory mode. The following configuration shows an example with all persistence stores configured to run in-memory:

Manually configuring in-memory persistence
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <!-- Persistence with in-memory persistence set -->
    <persistence>
        <client-session>
            <general>
                <mode>in-memory</mode>
            </general>
            <queued-messages>
                <mode>in-memory</mode>
            </queued-messages>
            <subscriptions>
                <mode>in-memory</mode>
            </subscriptions>
        </client-session>
        <message-flow>
            <incoming>
                <mode>in-memory</mode>
            </incoming>
            <outgoing>
                <mode>in-memory</mode>
            </outgoing>
        </message-flow>
        <retained-messages>
            <mode>in-memory</mode>
        </retained-messages>
        <publish-payloads>
            <mode>in-memory</mode>
        </publish-payloads>
        <attribute>
            <mode>in-memory</mode>
        </attribute>
        <client-group>
            <mode>in-memory</mode>
        </client-group>
    </persistence>
    ...
</hivemq>
Client Session Persistence
You should strongly consider running all client session persistence stores either entirely in-memory or entirely with file persistence, unless you know exactly what you are doing.

Disabling message queuing

It’s possible to disable message queuing for offline clients if you don’t need this MQTT functionality.

To disable message queuing, set the max-queued-messages setting to 0.

It is also recommended to use in-memory persistence for queued messages, as this saves some disk space.

Disabling queued messages
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../hivemq-config.xsd">

    ...
    <persistence>
         <client-session>
            <queued-messages>
                <!-- No queued messages -->
                <max-queued-messages>0</max-queued-messages>
                <!-- Save some disk space -->
                <mode>in-memory</mode>
            </queued-messages>
         </client-session>
    </persistence>
    ...
</hivemq>
