
IoT Security in the cloud - How to integrate IoT Device Authentication and Authorization with HiveMQ and AWS

by Florian Raschbichler
19 min read

IoT is a major digital force that empowers companies to create and deliver innovative services for their products. Connected cars, smart homes, and personalized entertainment viewing across multiple devices are just a few popular use cases. As a result, IoT security is becoming an increasingly important topic. Nowadays, many devices that were not connected in the past are capable of sending sensitive data. Not surprisingly, the business cases that work with this sensitive data are often of great strategic value. Many of our customers have made good use of the HiveMQ Extension SDK to implement tailor-made authentication and authorization solutions for their MQTT deployments. Now, in response to high customer demand, we have introduced the HiveMQ Enterprise Security Extension (ESE). The ESE is a commercial extension that seamlessly integrates with third-party systems, enabling you to create centralized, role-based access control for MQTT clients and HiveMQ Control Center users. In this post, we show you how to use AWS, HiveMQ, and the HiveMQ Enterprise Security Extension to build a cloud-based, high-availability MQTT broker cluster with enterprise-ready security mechanisms.

Access Control

A vital cornerstone of IT security is access control. Through the combination of authentication and authorization, access to sensitive or valuable resources and information can be controlled. While authentication verifies the identity of all parties trying to gain access, authorization ensures that each identified party only has access to a specific set of resources.

In MQTT, access control means that each MQTT client that wants to connect to the broker must first provide proper credentials to pass the authentication. Based on the authorization information, the client gains access to a specific set of MQTT topics and actions (publish/subscribe).

Typical IoT production use cases need to accommodate large numbers of concurrently connected devices and clients that provide services directly to end customers. This means that both availability and scalability need to be considered when choosing and building the message broker solution. The MQTT broker cluster in this example utilizes HiveMQ. Since every HiveMQ cluster is a distributed system, we are using the HiveMQ Enterprise Security Extension as a centralized IoT-security solution for authentication and authorization.

High availability MQTT broker cluster

First, we build a high-availability MQTT broker cluster that has two HiveMQ broker nodes hosted on AWS EC2.

HiveMQ on AWS

To install 2 HiveMQ broker nodes on 2 EC2 instances on AWS, we utilize the HiveMQ AMI:

  1. Launch the AMI in your region of choice

  2. Select an instance type. We recommend using c5.xlarge for testing purposes.

  3. Configure the instance details

  4. Create 2 instances

  5. Create an S3 full access role for the instances (this will be needed later on)

  6. Go to “Configure Security Group”

  7. Configure the security group

  8. Launch the instances
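For readers who prefer the AWS CLI, the console steps above can be sketched roughly as follows. All identifiers (AMI ID, key pair, subnet, security group, and instance profile names) are placeholders that you must replace with values from your own account and region; this is an illustrative sketch, not a complete launch script.

```shell
# Launch two EC2 instances from the HiveMQ AMI with an attached S3 access role.
# Every ID below is a placeholder — substitute your own values.
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type c5.xlarge \
  --count 2 \
  --key-name <your-key-pair> \
  --subnet-id subnet-xxxxxxxx \
  --security-group-ids sg-xxxxxxxx \
  --iam-instance-profile Name=<your-s3-access-role>
```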

This action will automatically spawn two separate EC2 instances that run HiveMQ as a service.

Cluster Discovery

Next, we want to enable the cluster mode on both of our HiveMQ instances and provide a way for the instances to discover each other. For this purpose, install the HiveMQ S3 Cluster Discovery Extension.

  • Create an S3 bucket that the HiveMQ instances can use.
    You can keep the default configuration; make sure to note the bucket name.
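Creating the bucket can also be done from the command line; the bucket name below is a placeholder, and bucket names must be globally unique.

```shell
# Create the S3 bucket the discovery extension will use.
aws s3 mb s3://<your-bucket-name> --region <your-region>
```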

The following steps need to be done on each individual HiveMQ instance:

  • Connect to the instance via SSH

  • Switch to the root user

  • Download the HiveMQ S3 Cluster Discovery Extension

  • Unzip the distribution

  • This will create a folder hivemq-s3-cluster-discovery-extension

  • Open the HiveMQ S3 Cluster Discovery Extension configuration file (any text editor works)

  • Configure the S3 Bucket region and name

############################################################
# S3 Bucket                                                #
############################################################

#
# Region for the S3 bucket used by hivemq
# see http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region for a list of regions for S3
# example: us-west-2
#
s3-bucket-region:<your-region>

#
# Name of the bucket used by HiveMQ
#
s3-bucket-name:<your-bucket-name>
  • Change ownership of the extension folder to the hivemq user

  • Move the folder into the HiveMQ extension folder

mv hivemq-s3-cluster-discovery-extension/ /opt/hivemq/extensions/
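Taken together, the per-instance steps above can be sketched as one shell session. The download URL, version number, and configuration file name are assumptions that may differ for your extension version; use the current link and file names from the HiveMQ website.

```shell
# Per-node installation of the S3 discovery extension (run as root).
# URL, version, and config file name are illustrative — check the HiveMQ site.
sudo su -
cd /tmp
wget https://www.hivemq.com/releases/extensions/hivemq-s3-cluster-discovery-extension-4.0.0.zip
unzip hivemq-s3-cluster-discovery-extension-4.0.0.zip
# Edit the extension configuration: set s3-bucket-region and s3-bucket-name.
vi hivemq-s3-cluster-discovery-extension/s3discovery.properties
chown -R hivemq:hivemq hivemq-s3-cluster-discovery-extension
mv hivemq-s3-cluster-discovery-extension/ /opt/hivemq/extensions/
```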

Now that we have the HiveMQ S3 Cluster Discovery Extension successfully installed, let’s adjust the HiveMQ config. Change the /opt/hivemq/conf/config.xml file to look like the following:

<?xml version="1.0"?>
<hivemq>

    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
    </listeners>

    <cluster>
        <enabled>true</enabled>
        <transport>
            <tcp>               
                <bind-address>IP_ADDRESS</bind-address>
                <bind-port>7800</bind-port>
            </tcp>
        </transport>

        <discovery>
            <extension/>
        </discovery>
    </cluster>

    <anonymous-usage-statistics>
        <enabled>true</enabled>
    </anonymous-usage-statistics>

    <control-center>
        <listeners>
            <http>
                <port>8080</port>
                <bind-address>0.0.0.0</bind-address>
            </http>
        </listeners>
    </control-center>
</hivemq>

In the <bind-address> element of the cluster transport, replace IP_ADDRESS with your EC2 instance’s internal IP address.

All that is left to do is to restart the HiveMQ Service on both EC2 instances.

/etc/init.d/hivemq restart

The following log statement in the /opt/hivemq/log/hivemq.log file shows successful cluster establishment:

INFO - Cluster size = 2, members : [8Jojp, WlF1S].

Hint: You can apply this process to any number of HiveMQ nodes to create clusters larger than 2 if necessary.

Testing the cluster

Once we see the log statement, we know that the cluster nodes have found each other. To test the functionality of our cluster, we can use the mosquitto_pub/sub command line tools. Connect a subscriber to one of the nodes and publish a message to the other node. If the subscribing client receives the message, the cluster works as expected.

  • Subscribe on topic “cluster/test/topic” on broker node 1

mosquitto_sub -t 'cluster/test/topic' -q 1 -h <ip-node-1> -V mqttv311 -i subscriber -d
Client subscriber sending CONNECT
Client subscriber received CONNACK
Client subscriber sending SUBSCRIBE (Mid: 1, Topic: cluster/test/topic, QoS: 1)
Client subscriber received SUBACK
Subscribed (mid: 1): 1

  • Publish message on the same topic on broker node 2

mosquitto_pub -t cluster/test/topic -h <ip-node-2> -i publisher -m 'Cluster Test' -q 1 -d
Client publisher sending CONNECT
Client publisher received CONNACK
Client publisher sending PUBLISH (d0, q1, r0, m1, 'cluster/test/topic', ... (12 bytes))
Client publisher received PUBACK (Mid: 1)

  • Receive message at subscriber

Client subscriber received PUBLISH (d0, q1, r0, m151, 'cluster/test/topic', ... (12 bytes))
Client subscriber sending PUBACK (Mid: 151)
Cluster Test

This concludes our test successfully.

Hint: Add an AWS Load Balancer to your cluster. This will provide a single transparent endpoint, turning the HiveMQ cluster into a single logical broker for your MQTT clients. You can find a detailed guide on adding a network load balancer in this post.

SQL Database as a source of Authentication and Authorization

Our HiveMQ MQTT broker cluster is now open to the internet and can theoretically be accessed and used by anyone. Therefore, we introduce a centralized authentication and authorization mechanism to immediately secure it. To add authentication and authorization to our broker cluster, we choose the HiveMQ Enterprise Security Extension and use an SQL database as our centralized source of AUTH information.

AWS RDS

For this example, we use a Postgres DB on AWS RDS as the SQL database. The HiveMQ Enterprise Security Extension supports many other SQL databases; a complete list is available in the HiveMQ documentation.

Installation

Using the AWS RDS service as your SQL database is very simple, fast, and convenient. This service integrates perfectly with your HiveMQ cluster running on EC2 instances in AWS.

  1. Go to Amazon Relational Database Service

  2. Select “Create Database”

  3. Select “Easy Create”, “PostgreSQL” and “Free tier”

  4. Name your database, select a password for your master user and “Create database”

That’s it. We now have a viable SQL database running in the cloud. We can use this database as the source of information for our role-based access control of the MQTT clients that connect to our HiveMQ cluster.
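The “Easy Create” console flow has a rough CLI equivalent, sketched below. The instance identifier and password are placeholders, and the instance class and storage size are assumptions that mirror the free-tier defaults.

```shell
# Approximate CLI equivalent of "Easy Create" with PostgreSQL on the free tier.
# Identifier and password are placeholders.
aws rds create-db-instance \
  --db-instance-identifier hivemq-ese-db \
  --engine postgres \
  --db-instance-class db.t2.micro \
  --master-username postgres \
  --master-user-password <your-password> \
  --allocated-storage 20
```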

Configuration

Next, we need to input the necessary information into our newly created database. This information consists of the user credentials for MQTT client authentication as well as the roles and permissions that are assigned to each user for authorization.

  • Connect to one of the two EC2 instances running your HiveMQ nodes

  • Install PSQL on your EC2 instance (as root)

[root@ip-xxxx ec2-user]# yum install -y postgresql10
[root@ip-xxxx ec2-user]# yum install -y  https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-redhat10-10-2.noarch.rpm
[root@ip-xxxx ec2-user]# yum -y install postgresql postgresql-server postgresql-devel postgresql-contrib postgresql-docs
[root@ip-xxxx ec2-user]# service postgresql initdb

  • Connect to your newly created Postgres instance

[root@ip-xxxx ec2-user]# psql -h <your-db-endpoint> -U postgres -W
Password for user postgres:
postgres=>
  • Create database

postgres=> create database hivemqdatabase;
CREATE DATABASE
  • Connect to the created database with \c hivemqdatabase

  • Run the Postgres Initial Table Creation Script postgresql_create.sql

    • That is, copy and paste the contents of the script into the psql console

This script creates all tables that are needed to achieve the role-based access control. Our database is now ready to be used as the source of authentication and authorization information.
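Once the script has run, you can confirm that the tables exist without an interactive session; the database name below assumes the hivemqdatabase created earlier.

```shell
# List the tables created by postgresql_create.sql.
psql -h <your-db-endpoint> -U postgres -d hivemqdatabase -c '\dt'
```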

Authentication and Authorization mechanism on the MQTT broker

The HiveMQ Enterprise MQTT broker allows you to implement both authentication and authorization mechanisms with the HiveMQ Extension SDK. Additionally, HiveMQ offers a variety of pre-built and enterprise extensions that are ready to use without any custom development.

HiveMQ Enterprise Security Extension

In this example, we use the HiveMQ Enterprise Security Extension and configure the extension to use our newly-installed Postgres Database on AWS.

Installation

The following steps need to be done for both of your HiveMQ instances:

  • Connect to the instance via SSH

ssh -i <your-deployment-key> ec2-user@<instance-ip-address>
  • Switch to the root user

  • Download the HiveMQ Enterprise Security Extension

wget <your-download-link>
  • Unzip the distribution

unzip hivemq-enterprise-security-extension-1.2.0.zip
  • This will create a folder hivemq-enterprise-security-extension

  • Change ownership of this folder to the HiveMQ user

  • Open the HiveMQ Enterprise Security configuration file

  • Configure the extension to use the AWS RDS Postgres DB we just created

<?xml version="1.0" encoding="UTF-8" ?>
<enterprise-security-extension
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="enterprise-security-extension.xsd"
        version="1">
    <realms>
        <!-- a postgresql db-->
        <sql-realm>
            <name>postgres-backend</name>
            <enabled>true</enabled>
            <configuration>
                <db-type>POSTGRES</db-type>
                <db-name><your-db-name></db-name>
                <db-host><your-db-endpoint></db-host>
                <db-port>5432</db-port>
                <db-username>postgres</db-username>
                <db-password><your-password></db-password>
            </configuration>
        </sql-realm>
    </realms>
    <pipelines>
        <!-- secure access to the mqtt broker -->
        <listener-pipeline listener="ALL">
            <!-- authenticate over a sql db -->
            <sql-authentication-manager>
                <realm>postgres-backend</realm>
            </sql-authentication-manager>
            <!-- authorize over a sql db -->
            <sql-authorization-manager>
                <realm>postgres-backend</realm>
                <use-authorization-key>false</use-authorization-key>
                <use-authorization-role-key>true</use-authorization-role-key>
            </sql-authorization-manager>
        </listener-pipeline>
    </pipelines>
</enterprise-security-extension>

You can copy this file and adjust the <db-name>, <db-host>, and <db-password> values accordingly.

  • Download the necessary SQL driver for your Postgres DB

  • Move the configured and ready-to-use extension to the HiveMQ Extension folder

HiveMQ now starts the extension at runtime. To verify that the extension started successfully, check your hivemq.log for the following entries:

2019-08-25 16:16:05,296 INFO  - Starting HiveMQ Enterprise Security Extension.
2019-08-25 16:16:05,421 INFO  - com.hivemq.extensions.ese.postgres.postgres-backend - Starting...
2019-08-25 16:16:05,505 INFO  - com.hivemq.extensions.ese.postgres.postgres-backend - Start completed.
2019-08-25 16:16:05,522 INFO  - Access log is written to /opt/hivemq/log/access/access.log.
2019-08-25 16:16:05,529 INFO  - Started HiveMQ Enterprise Security Extension successfully in 234ms.
2019-08-25 16:16:05,529 INFO  - Extension "HiveMQ Enterprise Security Extension" version 1.2.0 started successfully.

Configuration

Now that our HiveMQ cluster, database, and HiveMQ Enterprise Security Extension are running, all we need to do is configure the extension to our individual needs. The ESE comes with a helper tool that creates properly encrypted passwords and provides the necessary insert statements for our database. In this guide, we work with the following three users, roles, and permissions:

  • Superuser: Username ‘superuser’, password ‘supersecurepassword’. Allowed to publish and subscribe to all topics (‘#’).

  • Frontendclient: Username ‘frontendclient’, password ‘clientpassword’. Allowed to publish on the topic ‘topic/{clientID}/status’, where {clientID} is substituted with the client’s own MQTT client ID. This user group is not allowed to subscribe to any topics.

  • Backendservice: Username ‘backendservice’, password ‘backendpassword’. Allowed to subscribe to the topic ‘topic/+/status’, so it receives the status messages of all front-end clients. This user group is not allowed to publish any data.

To configure the HiveMQ Enterprise Security Extension accordingly, we need to do the following:
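In practice, the ESE helper tool generates the exact INSERT statements, including the encrypted password values. The sketch below only illustrates the shape of that data load; the table and column names are hypothetical (the real schema comes from postgresql_create.sql), and the password hashes must be produced by the helper tool, not typed in literally.

```shell
# Illustrative sketch only — table/column names are hypothetical, and the
# password values must come from the ESE helper tool.
psql -h <your-db-endpoint> -U postgres -d hivemqdatabase <<'SQL'
-- one user, one role, one permission, sketched for the superuser
INSERT INTO users (name, password) VALUES ('superuser', '<hash-from-helper-tool>');
INSERT INTO roles (name) VALUES ('superuser-role');
INSERT INTO permissions (topic, publish_allowed, subscribe_allowed)
    VALUES ('#', true, true);
-- link users to roles and roles to permissions analogously for the
-- frontendclient and backendservice users
SQL
```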

That’s it. We have successfully created a high-availability HiveMQ MQTT broker cluster that runs on AWS with centralized authentication and authorization for any MQTT clients that try to connect. Now, we use the mosquitto_pub/sub command line tools to see if everything works as expected.

  • Let’s connect as a superuser and subscribe to the topic ‘#’
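This subscription can be made with mosquitto_sub, passing the credentials defined above; the broker address is a placeholder, and the client ID matches the one that appears in the access log below.

```shell
# Subscribe to all topics as the superuser.
mosquitto_sub -h <broker-ip> -t '#' -q 1 -i superuser-client \
  -u superuser -P supersecurepassword -d
```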

  • And see what the /opt/hivemq/log/access/access.log file tells us

2019-08-25 17:07:20,384 UTC - authentication-succeeded - Client succeeded authentication: ID superuser-client, IP 80.147.158.33.
2019-08-25 17:07:20,394 UTC - authorization-succeeded - Client succeeded authorization: ID superuser-client, IP 80.147.158.33, permissions [Permission{topicFilter='#', qos=[0, 1, 2], activity=[publish, subscribe], retainedPublishAllowed=true, sharedSubscribeAllowed=true, sharedGroup='', from='superuser'}].

  • Now let’s publish a message as a front-end client to a topic, matching our clientID
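A publish that matches the front-end permission looks like this; because the MQTT client ID is ‘frontendclient’, the permitted topic is ‘topic/frontendclient/status’ (the broker address and message payload are placeholders).

```shell
# Publish on the topic that contains this client's own client ID.
mosquitto_pub -h <broker-ip> -t 'topic/frontendclient/status' -q 1 \
  -i frontendclient -u frontendclient -P clientpassword -m 'online' -d
```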

  • Again a success

  • Finally, let’s see if our permissions are really set correctly and try to publish to a slightly different topic with the same front-end client
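For the negative test, we keep the same credentials but publish to a topic that does not contain our own client ID, which the permission set forbids:

```shell
# Publish to a topic outside this client's permission — the broker rejects it.
mosquitto_pub -h <broker-ip> -t 'topic/otherclient/status' -q 1 \
  -i frontendclient -u frontendclient -P clientpassword -m 'online' -d
```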

  • We can see that the PUBLISH was not successful: the broker terminates our connection.

Conclusion and resource list

Authentication and authorization are crucial puzzle pieces for IoT security. This blog post demonstrated how to leverage AWS and HiveMQ to build a high-availability MQTT broker cluster with centralized AUTH mechanisms. IoT security is a complex and multi-layered topic. Adding authentication and authorization via a centralized SQL database is a great start to achieving a secure deployment that fulfills your IT-security requirements. Contact us for guidance on additional aspects of IoT security tailored to your specific needs.

Finally, here’s a list of the resources you need to build a high-availability MQTT broker cluster with centralized IoT security:

Florian Raschbichler

Florian serves as the head of the HiveMQ support team with years of first hand experience overcoming challenges in achieving reliable, scalable, and secure IoT messaging for enterprise customers.
