HiveMQ on DC/OS

Written by Simon Baier

Category: HiveMQ Cloud, Cloud-Native Operations, Third Party

Published: September 10, 2019


In today’s world, we as customers have been conditioned to expect flawless availability and an outstanding user experience from the great companies delivering digital services that cater to our every need, such as convenience, entertainment, and security. These services, and the quality of delivery customers now expect, are only possible through sophisticated distributed systems and the seamless connection of many services, which places an enormous burden on system architects and especially on operators. Technologies like DC/OS, a distributed operating system based on the Apache Mesos distributed systems kernel, are therefore becoming increasingly popular and important. DC/OS turns multiple machines into a single logical computer, significantly simplifying the management of the system, as well as the management and installation of distributed services across it.

For user convenience, DC/OS provides a great ecosystem in its service catalog, from which you can easily spawn a huge variety of services. Typically, these services are developed either by D2IQ directly or by the developers of the application that will run as a service. It is good practice to include most or all of the required operational knowledge within these DC/OS services. This pattern is also used in other orchestration and automation technologies, such as Kubernetes’ Operator pattern.

With this in mind, the HiveMQ team has decided to develop our own official HiveMQ service for DC/OS. This means that if you are running DC/OS, adding a HiveMQ service is now as easy as running a single command or clicking “Install” while browsing the service catalog. The purpose of this post is to illustrate how to deploy HiveMQ on DC/OS and to put it into context with existing services.

Installing HiveMQ on DC/OS

To install HiveMQ on DC/OS, navigate to the Catalog on your DC/OS dashboard and type HiveMQ. Click “Review & Run” and configure the service as needed. Confirm with “Run Service” and DC/OS will take care of launching the HiveMQ nodes.

Hint: You can also install and customize the service using the DC/OS CLI; see the DC/OS example.
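As a sketch of the CLI route: the options file below only sets the service name, a field that also appears in the configuration example later in this post; any other options must follow the package’s config schema.

```shell
# Write a minimal options file. "service.name" is the only field set here;
# it matches the <service-name> used throughout this post.
cat > options.json <<'EOF'
{
  "service": {
    "name": "hivemq"
  }
}
EOF
# Against a cluster with the DC/OS CLI attached, install the service with:
# dcos package install hivemq --options=options.json --yes
```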

Note: For the following chapters, keep in mind that <service-name> is the name of your installation of the service. The default value for this parameter is hivemq.

Connecting Clients to HiveMQ on DC/OS

The Mesosphere DC/OS HiveMQ service provides several endpoints for connecting MQTT clients. If you are running DC/OS Enterprise, the easiest way to connect clients is to use Edge-LB.

If you want to roll your own load-balancing solution or are not running DC/OS Enterprise, there are other ways to connect to your HiveMQ cluster from the edge. For example, you can use the virtual IP address provided by the service. Another possibility is to use the SRV records provided by Mesos-DNS (e.g. _mqtt._<service-name>-<node-index>._tcp.<service-name>.mesos.).
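A small sketch of how those per-node SRV record names are derived, assuming the default service name hivemq and three nodes:

```shell
# Build the Mesos-DNS SRV record name for each HiveMQ node, following the
# _mqtt._<service-name>-<node-index>._tcp.<service-name>.mesos. pattern above.
service_name="hivemq"
for node_index in 0 1 2; do
  printf '_mqtt._%s-%s._tcp.%s.mesos.\n' "$service_name" "$node_index" "$service_name"
done
# From inside the cluster, such a record can be resolved with, e.g.:
# dig +short SRV _mqtt._hivemq-0._tcp.hivemq.mesos
```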

After setting up Edge-LB, create a pool using the following hivemq-pool.json:

{
  "apiVersion": "V2",
  "name": "hivemq",
  "count": 1,
  "haproxy": {
    "stats": {
      "bindPort": 9090
    },
    "frontends": [{
      "bindPort": 1883,
      "protocol": "TCP",
      "linkBackend": {
        "defaultBackend": "mqtt"
      }
    }],
    "backends": [{
      "name": "mqtt",
      "protocol": "TCP",
      "services": [{
        "mesos": {
          "frameworkName": "hivemq",
          "taskNamePattern": ".*-node"
        },
        "endpoint": {
          "portName": "mqtt"
        }
      }]
    }]
  }
}

This pool will spawn a single load balancer on each of your public DC/OS nodes. Clients can then connect to one of the load balancers, which will distribute the load across all HiveMQ nodes in your cluster. This example uses the default MQTT port. You can also configure the pool to forward connections to the WebSocket or TLS listeners, if they are enabled.

Hint: By default, Edge-LB will allow 10,000 concurrent connections per instance. To change this, you will need to use the Edge-LB template commands to dump and update the maxconn parameters in the template.
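As a sketch of that template edit, using a stand-in for the real dumped HAProxy template (the actual file comes from the Edge-LB template commands):

```shell
# Stand-in for a template dumped via the Edge-LB CLI's template commands.
printf 'global\n  maxconn 10000\n' > haproxy.cfg.ctmpl
# Raise the connection limit before uploading the template back to the pool.
sed -i 's/maxconn 10000/maxconn 50000/' haproxy.cfg.ctmpl
grep maxconn haproxy.cfg.ctmpl
```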

Hint: To allow more than 50,000 connections, you should run multiple instances (increase the count of the pool):

{
  "apiVersion": "V2",
  "name": "hivemq",
  "count": 3,
  "haproxy": {
    ...
  }
}

Enabling TLS

Hint: This type of TLS support is only available on DC/OS Enterprise.

Caution: If you want to use TLS on your cluster transport, you must deploy the service with cluster TLS enabled. The service will prohibit changes to this configuration after initial deployment to avoid data loss.

To enable TLS, you must first create a service account using your private and public keys:

dcos package install --cli dcos-enterprise-cli
dcos security org service-accounts keypair private-key.pem public-key.pem
dcos security org service-accounts create -p public-key.pem -d "HiveMQ service account" <service-name>-principal
dcos security secrets create-sa-secret --strict private-key.pem <service-name>-principal <service-name>/account-secret
dcos security org groups add_user superusers <service-name>-principal

Configure your service to use the service account <service-name>-principal and the secret <service-name>/account-secret. Use the DC/OS dashboard to update your configuration, or use an options.json.
For example:

{
    "service": {
        "name": "hivemq",
        "service_account": "hivemq-principal",
        "service_account_secret": "hivemq/account-secret"
    },
    "hivemq": {
        "listener_configuration": {
            "mqtt_tls_enabled": true
        }
    }
}

See Authenticating DC/OS Services for more information.

Integrating HiveMQ in your DC/OS Monitoring

Monitoring is a crucial part of operating any kind of service. For typical IoT use cases with tens of thousands of clients and intense message throughput, this is especially true. DC/OS provides a quick and uncomplicated way of using Prometheus and Grafana to monitor your HiveMQ cluster. First, follow the steps at Export DC/OS Metrics to Prometheus to set up Prometheus and Grafana on DC/OS. After setting up your monitoring deployment, log into your Grafana instance and add the Prometheus data source. Finally, you can import the HiveMQ Dashboard to Grafana.
Contact us for recommendations and guidance on the best monitoring approach for your specific use case.

Custom HiveMQ extensions

As mentioned in the opening paragraph of this post, a modern service architecture almost always consists of numerous important systems that need to be integrated with one another. DC/OS delivers improved operational and managerial integration of multiple services. HiveMQ, with its Java-based Extension SDK, provides an open-source way of integrating services at the application level. With the HiveMQ DC/OS service, you can install, delete, and manage the configuration of any HiveMQ extension.

Note: To ensure consistency across all nodes in your cluster, make sure to re-apply plans to any node you add to the service when scaling in the future.

Adding HiveMQ extensions

To install an extension, you can use the add-extension plan. This plan requires a single parameter, URL, which must point to a ZIP-compressed extension folder.

For example, to manually install the File RBAC extension on each current cluster node, run:

$ dcos hivemq --name=<service_name> plan start add-extension -p URL=https://www.hivemq.com/releases/extensions/hivemq-file-rbac-extension-4.0.0.zip

Deleting HiveMQ extensions

You can also delete extensions using the delete-extension plan. This plan requires a sole parameter, EXTENSION, which corresponds to the extension’s folder name.

$ dcos hivemq --name=<service_name> plan start delete-extension -p EXTENSION=hivemq-file-rbac-extension

Adding or updating HiveMQ extension configuration

To configure an extension, you can update or add configuration files using the add-config plan.

For example, to manually configure the RBAC extension file credentials.xml on each currently active cluster node, run:

$ dcos hivemq --name=<service_name> plan start add-config -p PATH=file-rbac-extension/credentials.xml -p FILE_CONTENT=$(cat local-file.xml | base64)
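A sketch of how the FILE_CONTENT parameter above is produced, using a placeholder XML body (the real credentials.xml follows the extension’s own format):

```shell
# Placeholder stand-in for the real credentials.xml of the File RBAC extension.
printf '<credentials/>' > local-file.xml
# The plan expects the file base64-encoded; this mirrors $(cat local-file.xml | base64).
FILE_CONTENT=$(base64 < local-file.xml)
# Round-trip check: decoding yields the original file content again.
echo "$FILE_CONTENT" | base64 -d
```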

Enable / disable HiveMQ extensions

Extensions can also be enabled or disabled on any cluster node at runtime. To do so, use the enable-extension or disable-extension plans. Both plans require the EXTENSION parameter, which corresponds to the extension’s folder name, e.g.

$ dcos hivemq --name=<service_name> plan start disable-extension -p EXTENSION=hivemq-file-rbac-extension

These HiveMQ extension plans enable you to integrate your HiveMQ Enterprise cluster with other services, such as Kafka, by installing the Kafka extension.

Installing new licenses

Note: If you deploy an updated license this way and later plan to update the service, make sure to define the new license as the default before you perform the update.

Similar to extension management, you can also add new licenses to an already deployed cluster using the add-license plan. This plan requires the parameters LICENSE and LICENSE_NAME, where LICENSE is your base64-encoded license file and LICENSE_NAME is the name of the license file that is created.

$ dcos hivemq --name=<service_name> plan start add-license -p LICENSE=$(cat license.lic | base64) -p LICENSE_NAME=new_license

For additional, in-depth information regarding the DC/OS HiveMQ service, check out the HiveMQ DC/OS service on GitHub.

Conclusion

With the HiveMQ DC/OS service, we provide a native solution for introducing HiveMQ as the central messaging broker in any DC/OS-based system or platform. With HiveMQ extensions such as the commercial HiveMQ Enterprise Security Extension or the HiveMQ Kafka Extension, which can be installed and managed through DC/OS, you can seamlessly integrate the MQTT broker cluster with other applications and services.
Contact us if you have any additional questions or are looking for guidance on integrating HiveMQ with other container or orchestration technologies.

Have a great day,

Simon from the HiveMQ Team

About Simon Baier

Simon is a software developer at HiveMQ. He is passionate about everything "DevOps" like continuous integration, cloud nativeness, and containerization.
Contact Simon
