4.15.x to 4.16.x Migration Guide

This is a minor HiveMQ upgrade. HiveMQ 4.16 is a drop-in replacement for HiveMQ 4.15.x.

You can learn more about all the new features HiveMQ 4.16 introduces in our release blogpost.

HiveMQ is prepackaged with multiple HiveMQ Enterprise Extensions (disabled), the open-source MQTT CLI tool, and the HiveMQ Swarm load-testing tool (both located in the tools folder of your HiveMQ installation).

Starting with the HiveMQ 4.9 LTS release, HiveMQ provides enhanced version compatibility for all HiveMQ releases.
For more information, see HiveMQ Rolling Upgrade Policy and our Introducing Flexible MQTT Platform Upgrades with HiveMQ blog post.

When you migrate from one HiveMQ version to another, review the upgrade information for each version between your current HiveMQ version and the target HiveMQ version.
Note changes that are relevant to your use case and adjust your configuration as needed.

Upgrade a HiveMQ Cluster

Rolling upgrades are supported, and it is possible to run HiveMQ version 4.15 and version 4.16 simultaneously in the same cluster. By default, the HiveMQ cluster enables all new cluster features when all nodes are upgraded to the new version. No manual intervention is required.

Please follow the instructions in our user guide to ensure a seamless and successful rolling upgrade.

For more information, see HiveMQ Clustering Documentation.

Upgrade a Single-node HiveMQ Instance

  • Create a backup of the entire HiveMQ 4.15.x installation folder from which you want to migrate

  • Install HiveMQ 4.16 as described in the HiveMQ Installation Guide

  • Migrate the contents of the configuration file from your old HiveMQ 4.15.x installation

  • To migrate your persistent data, copy everything from the data folder of your backup to the data folder of the new HiveMQ 4.16 installation.
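The steps above can be sketched in shell. The paths and file names below are illustrative only; the mkdir/touch lines merely create a mock layout so the sketch runs end to end, and in a real upgrade these directories are your existing installations.

```shell
#!/bin/sh
# Mock layout so the sketch is runnable end to end (illustrative paths only)
OLD=./hivemq-4.15.0
NEW=./hivemq-4.16.0
mkdir -p "$OLD/conf" "$OLD/data" "$NEW/conf" "$NEW/data"
touch "$OLD/conf/config.xml"

# 1. Back up the entire old installation folder
cp -r "$OLD" "$OLD.backup"

# 2. Install HiveMQ 4.16 into $NEW (see the HiveMQ Installation Guide)

# 3. Carry over the configuration file from the old installation
cp "$OLD/conf/config.xml" "$NEW/conf/config.xml"

# 4. Copy the persistent data into the new data folder
cp -r "$OLD/data/." "$NEW/data/"
```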

Configuration File Changes

You can upgrade from HiveMQ 4.15.x to HiveMQ 4.16 without making changes to your configuration file.

Since HiveMQ 4.10.0, HiveMQ does not start if your configuration file contains invalid values. For more information, see New Validation Behavior for HiveMQ Configuration File.

Persistent Data Migration

When you migrate, HiveMQ 4.16 automatically updates the file storage formats of all the data that you copied into your new data folder.

To migrate the persistent data, you must copy everything in the data folder of the previous HiveMQ 4.15.x installation to the data folder of your new HiveMQ 4.16 installation.

Linux example
cp -r /opt/hivemq-4.15.0/data/* /opt/hivemq-4.16.0/data/

The first time you start HiveMQ 4.16, the file storage formats of the persistent data from your previous installation are automatically updated in the new persistent storage.

Native SSL Default Communication Protocols

Due to security concerns, the OpenJDK Java platform no longer enables TLSv1 and TLSv1.1 by default. As a result, Java applications such as HiveMQ that use TLS to communicate now require TLS 1.2 or above to establish a connection.

To align with the OpenJDK Java platform, from HiveMQ 4.7 onwards, HiveMQ only enables the following TLS protocols by default for native SSL:

  • TLSv1.3

  • TLSv1.2

If you still need to support legacy TLS versions such as TLSv1 or TLSv1.1 for your Native SSL implementation, you must explicitly enable the versions in your tls-tcp-listener configuration. For more information, see Communication Protocol and Native SSL.
Example configuration for native SSL with explicit legacy TLS configuration
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    ...
    <listeners>
        ...
        <tls-tcp-listener>
            <tls>
                ...
                <!-- Enable specific TLS versions manually -->
                <protocols>
                    <protocol>TLSv1.1</protocol>
                </protocols>
                <native-ssl>true</native-ssl>
                ...
            </tls>
        </tls-tcp-listener>
    </listeners>
    ...
</hivemq>

New Variable Notation Handling in the HiveMQ Data Hub Interpolation Engine

To align the behavior with other parts of the platform, the HiveMQ Data Hub now supports only the dollar sign with curly braces notation for policy variables: ${}.

Starting with HiveMQ 4.16, all variables that were previously denoted as $variable must be denoted as ${variable}. The HiveMQ Data Hub interpolation engine no longer interpolates variables that use the old $ prefixed notation.

The HiveMQ Data Hub currently offers four predefined policy variables:

Table 1. Data validation policy predefined variables
| Variable         | Type   | Old notation example | New notation example |
|------------------|--------|----------------------|----------------------|
| clientId         | String | $clientId            | ${clientId}          |
| topic            | String | $topic               | ${topic}             |
| policyId         | String | $policyId            | ${policyId}          |
| validationResult | String | $validationResult    | ${validationResult}  |

Example policy configuration that uses interpolated variables in the logged message
{
  "id": "policy1",
  "matching": {
    "topicFilter": "topic/+"
  },
  "validation": {
    "validators": [
      {
        "type": "schema",
        "arguments": {
          "strategy": "ALL_OF",
          "schemas": [
            {
              "schemaId": "schema1",
              "version": "latest"
            }
          ]
        }
      }
    ]
  },
  "onSuccess": {
    "pipeline": [
      {
        "id": "logOperationSuccess",
        "functionId": "System.log",
        "arguments": {
          "level": "DEBUG",
          "message": "${clientId} sent a publish on topic '${topic}' with result '${validationResult}'"
        }
      }
    ]
  },
  "onFailure": {
    "pipeline": [
      {
        "id": "logOperationFailure",
        "functionId": "System.log",
        "arguments": {
          "level": "WARN",
          "message": "${clientId} sent an invalid publish on topic '${topic}' with result '${validationResult}'"
        }
      }
    ]
  }
}
To include an uninterpolated ${...} variable in a policy, escape it with a backslash: \${topic}.

Policy Migration

Use the following procedure to migrate existing policies that contain $ prefixed variables to the new notation:

  1. To get all existing policies in the broker, enter:
    curl -X GET http://localhost:8888/api/v1/data-validation/policies

  2. Change all variables in affected policies to use the new notation as described above.

  3. Delete each affected policy one by one with the following command:
    curl -X DELETE -H "Content-Type: application/json" http://localhost:8888/api/v1/data-validation/policies/{policyId}

  4. Re-upload your newly migrated policies one by one with the following command:
    curl -X POST --data @policy.json -H "Content-Type: application/json" http://localhost:8888/api/v1/data-validation/policies
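Step 2 can be scripted for the four predefined variables. The following sketch rewrites the old $ notation to ${} with sed; the file names are placeholders. Because the new notation places a brace directly after the dollar sign, already-migrated variables are not matched again, but note that plain sed substitution does not check word boundaries, so verify the result before re-uploading.

```shell
#!/bin/sh
# Sample policy fragment in the old notation (placeholder file name)
cat > policy.json <<'EOF'
{ "message": "$clientId sent a publish on topic '$topic' with result '$validationResult'" }
EOF

# Rewrite each predefined variable from $var to ${var}
sed -e 's/\$clientId/${clientId}/g' \
    -e 's/\$topic/${topic}/g' \
    -e 's/\$policyId/${policyId}/g' \
    -e 's/\$validationResult/${validationResult}/g' \
    policy.json > policy-migrated.json

cat policy-migrated.json
```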

New Unknown Variable Behavior in the HiveMQ Data Hub Interpolation Engine

HiveMQ now automatically checks for the presence of unknown variables.

Previously, unknown variables in a data validation policy were ignored and not interpolated.

Currently, the HiveMQ Data Hub recognizes only the following four predefined variables as known variables:

  • clientId

  • topic

  • policyId

  • validationResult

Starting with HiveMQ 4.16, if an unknown variable is present in a data validation policy, the policy is not created and an error is returned.
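Because policy creation now fails on unknown variables, it can be useful to pre-check a policy file locally before uploading it. The following helper is a hypothetical sketch, not part of HiveMQ: it extracts every ${...} reference from a policy file (placeholder file and variable names) and flags any name outside the four predefined variables.

```shell
#!/bin/sh
# Sample policy with one unknown variable (placeholder file and variable names)
cat > policy-check.json <<'EOF'
{ "message": "${clientId} published from device type '${deviceType}'" }
EOF

# The four predefined variables recognized by the Data Hub
known=" clientId topic policyId validationResult "

# Extract each ${...} reference and flag names outside the known set
for var in $(grep -oE '\$\{[A-Za-z]+\}' policy-check.json | tr -d '${}' | sort -u); do
  case "$known" in
    *" $var "*) ;;                        # predefined variable: OK
    *) echo "unknown variable: $var" ;;   # would cause policy creation to fail
  esac
done > check-report.txt

cat check-report.txt
```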

Renamed Policy Functions with Namespace

HiveMQ 4.16 renames two existing policy functions with the namespace as a prefix:

  • log changes to System.log

  • to changes to Delivery.redirectTo

The new naming helps organize functions by their purpose for easier policy management. As new functions are added, each function is prefixed with the appropriate namespace. For example, HiveMQ 4.16 introduces the new Metrics.Counter.increment function. The new naming convention can also accommodate multi-level namespaces.

To see how renamed functions are used in a policy, check our example policy.

Policy Migration

If you have existing policies that contain outdated log and to functions, use the following procedure to migrate the affected policies to the new System.log and Delivery.redirectTo function names:

  1. To get all existing policies in the broker, enter:
    curl -X GET http://localhost:8888/api/v1/data-validation/policies

  2. Change all functionId fields in affected policies to use the new function names (System.log and Delivery.redirectTo).

  3. Delete each outdated policy one by one with the following command:
    curl -X DELETE -H "Content-Type: application/json" http://localhost:8888/api/v1/data-validation/policies/{policyId}

  4. Re-upload your newly revised policies one by one with the following command:
    curl -X POST --data @policy.json -H "Content-Type: application/json" http://localhost:8888/api/v1/data-validation/policies
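As with the variable migration, step 2 can be scripted. This sketch assumes the functionId fields use the exact `"functionId": "log"` spacing shown in this guide's examples; the file names are placeholders, and you should adjust the patterns if your policy files are formatted differently.

```shell
#!/bin/sh
# Sample policy fragment with the outdated function names (placeholder file name)
cat > policy-old.json <<'EOF'
{ "pipeline": [ { "id": "logOperation", "functionId": "log" },
                { "id": "redirectOperation", "functionId": "to" } ] }
EOF

# Rename the two functions to their namespaced forms
sed -e 's/"functionId": "log"/"functionId": "System.log"/g' \
    -e 's/"functionId": "to"/"functionId": "Delivery.redirectTo"/g' \
    policy-old.json > policy-renamed.json

cat policy-renamed.json
```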

Schema Version Support

HiveMQ version 4.16 introduces the ability to associate different versions of a schema with the same schemaId.

Your HiveMQ system now automatically assigns a version number when you create or update a schema.

As a result, you must specify a version for all schemas in your policy configurations:

  • To specify a particular version of a schema, enter the associated version number. For example, version:"1" or version:"2".

  • To specify that your HiveMQ system automatically uses the most recent version of the schema that is available, enter version:"latest".

For policies created prior to version 4.16, HiveMQ automatically assigns the schema version "latest".
After you upgrade to HiveMQ 4.16, we highly recommend that you update all your policy files, including policies stored in your GitHub repository, and use the HiveMQ REST API to update all your policies in the HiveMQ Data Hub.
Schema versioning is a new feature of HiveMQ 4.16. If you are currently testing HiveMQ Data Hub version 4.15 and want to do a rolling upgrade from HiveMQ 4.15 to 4.16, do not attempt to add versions to your existing schemas until the upgrade to version 4.16 is complete.
Example policy configuration with different selected schema versions
{
  "id": "policy1",
  "matching": {
    "topicFilter": "topic/+"
  },
  "validation": {
    "validators": [
      {
        "type": "schema",
        "arguments": {
          "strategy": "ALL_OF",
          "schemas": [
            {
              "schemaId": "schema1",
              "version": "1"
            },
            {
              "schemaId": "schema2",
              "version": "latest"
            }
          ]
        }
      }
    ]
  }
}