IoT Cloud Security with HiveMQ and AWS

IoT Security in the cloud - How to integrate IoT Device Authentication and Authorization with HiveMQ and AWS

Written by Florian Raschbichler

Category: HiveMQ, HiveMQ ESE, Third Party, AWS, IoT Security

Published: August 21, 2019


IoT is a major digital force that is empowering companies to create and deliver innovative services for their products. Connected cars, smart homes, and personalized entertainment viewing across multiple devices are just a few popular use cases. As a result, IoT security is becoming an increasingly important topic. Nowadays, many devices that were not connected in the past are capable of sending sensitive data. Not surprisingly, the business cases that work with this sensitive data are often of great strategic value. Many of our customers have made good use of the HiveMQ Extension SDK to implement tailor-made authentication and authorization solutions for their MQTT deployments. Now, in response to high customer demand, we have introduced the HiveMQ Enterprise Security Extension (ESE). The ESE is a commercial extension that seamlessly integrates with third-party systems, enabling you to create centralized, role-based access control for MQTT clients and HiveMQ Control Center users. In this post, we show you how to use AWS, HiveMQ, and the HiveMQ Enterprise Security Extension to build a cloud-based, high-availability MQTT broker cluster with enterprise-ready security mechanisms.

Access Control

A vital cornerstone of IT security is access control. Through the combination of authentication and authorization, access to sensitive or valuable resources and information can be controlled. While authentication verifies the identity of each party that tries to gain access, authorization ensures that each identified party only has access to a specific set of resources.

In MQTT, access control means that each MQTT client that wants to connect to the broker must first provide proper credentials to pass the authentication. Based on the authorization information, the client gains access to a specific set of MQTT topics and actions (publish/subscribe).
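
For example, with the mosquitto command line clients that we use for testing later in this post, a client presents its credentials when it connects. This is only an illustrative sketch; the host, username, password, and topic below are placeholders:

    mosquitto_sub -h <broker-ip> -p 1883 -u <username> -P <password> -t 'some/example/topic' -q 1 -d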

Typical IoT production use cases need to accommodate large numbers of concurrently connected devices and clients that provide services directly to end customers. This means that both availability and scalability need to be considered when choosing and building the message broker solution. The MQTT broker cluster in this example utilizes HiveMQ. Because every HiveMQ cluster is a distributed system, we use the HiveMQ Enterprise Security Extension as a centralized IoT-security solution for authentication and authorization.

High availability MQTT broker cluster

First, we build a high-availability MQTT broker cluster that has two HiveMQ broker nodes hosted on AWS EC2.

HiveMQ on AWS

To install two HiveMQ broker nodes on two EC2 instances on AWS, we utilize the HiveMQ AMI:

  1. Launch the AMI in your region of choice

  2. Select an instance type. We recommend using c5.xlarge for testing purposes.

  3. Configure the instance details

  4. Create 2 instances

  5. Create an S3 full access role for the instances (this will be needed later on)

  6. Go to “Configure Security Group”

  7. Configure the security group

  8. Launch the instances

This action will automatically spawn two separate EC2 instances that run HiveMQ as a service.
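
If you prefer scripting over the console wizard, a roughly equivalent launch with the AWS CLI could look like the following sketch; the AMI ID, key pair, security group, and instance profile are placeholders for the values you created above:

    aws ec2 run-instances \
      --image-id <hivemq-ami-id> \
      --count 2 \
      --instance-type c5.xlarge \
      --key-name <your-deployment-key> \
      --security-group-ids <your-security-group-id> \
      --iam-instance-profile Name=<your-s3-full-access-profile> \
      --region <your-region>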

Cluster Discovery

Next, we want to enable cluster mode on both of our HiveMQ instances and provide a way for the instances to discover each other. For this purpose, we install the HiveMQ S3 Cluster Discovery Extension:

  • Create an S3 bucket that the HiveMQ instances can use (a CLI alternative is shown below).
    Make sure to remember the bucket name. You can use the default configuration.
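
If you would rather create the bucket from the command line, the AWS CLI equivalent is a one-liner (bucket name and region are placeholders):

    aws s3 mb s3://<your-bucket-name> --region <your-region>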

The following steps need to be done on each individual HiveMQ instance:

  • Connect to the instance via SSH

    ssh -i <your-deployment-key> ec2-user@<instance-ip-address>

  • Switch to the root user

    sudo su

  • Download the HiveMQ S3 Cluster Discovery Extension

    wget https://releases.hivemq.com/extensions/hivemq-s3-cluster-discovery-extension-4.0.1.zip

  • Unzip the distribution

    unzip hivemq-s3-cluster-discovery-extension-4.0.1.zip

  • This will create a folder hivemq-s3-cluster-discovery-extension

  • Open the HiveMQ S3 Cluster Discovery Extension configuration file (you may use a different text editor of course)

    vi hivemq-s3-cluster-discovery-extension/s3discovery.properties

  • Configure the S3 Bucket region and name

############################################################
# S3 Bucket                                                #
############################################################

#
# Region for the S3 bucket used by hivemq
# see http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region for a list of regions for S3
# example: us-west-2
#
s3-bucket-region:<your-region>

#
# Name of the bucket used by HiveMQ
#
s3-bucket-name:<your-bucket-name>

  • Change ownership of the extension folder to the hivemq user

    chown -R hivemq:hivemq hivemq-s3-cluster-discovery-extension

  • Move the folder into the HiveMQ extensions folder

    mv hivemq-s3-cluster-discovery-extension/ /opt/hivemq/extensions/
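
A quick sanity check that the extension folder is now in place and owned by the hivemq user:

    ls -l /opt/hivemq/extensions/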

Now that we have the HiveMQ S3 Cluster Discovery Extension successfully installed, let’s adjust the HiveMQ config. Change the /opt/hivemq/conf/config.xml file to look like the following:

<?xml version="1.0"?>
<hivemq>

    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
    </listeners>

    <cluster>
        <enabled>true</enabled>
        <transport>
            <tcp>               
                <bind-address>IP_ADDRESS</bind-address>
                <bind-port>7800</bind-port>
            </tcp>
        </transport>

        <discovery>
            <extension/>
        </discovery>
    </cluster>

    <anonymous-usage-statistics>
        <enabled>true</enabled>
    </anonymous-usage-statistics>

    <control-center>
        <listeners>
            <http>
                <port>8080</port>
                <bind-address>0.0.0.0</bind-address>
            </http>
        </listeners>
    </control-center>
</hivemq>

Replace IP_ADDRESS in the <bind-address> element of the cluster <transport> section with your EC2 instance’s internal IP address.
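
If you are unsure of the internal IP address, you can query it from the instance itself via the EC2 instance metadata service:

    curl http://169.254.169.254/latest/meta-data/local-ipv4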

All that is left to do is restart the HiveMQ service on both EC2 instances.

/etc/init.d/hivemq restart

The following log statement in the /opt/hivemq/log/hivemq.log file shows successful cluster establishment:

INFO - Cluster size = 2, members : [8Jojp, WlF1S].
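
To watch for this entry while the nodes start up, you can follow the log on either instance:

    tail -f /opt/hivemq/log/hivemq.log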

Hint: You can apply this process to any number of HiveMQ nodes to create clusters larger than two if necessary.

Testing the cluster

Once we see the log statement, we know that the cluster nodes have found each other. To test the functionality of our cluster, we can use the mosquitto_pub and mosquitto_sub command line tools. Connect a subscriber to one of the nodes and publish a message to the other node. If the subscribing client receives the message, the cluster works as expected.

  • Subscribe on topic “cluster/test/topic” on broker node 1

    mosquitto_sub -t 'cluster/test/topic' -q 1 -h <ip-node-1> -V mqttv311 -i subscriber -d
    Client subscriber sending CONNECT
    Client subscriber received CONNACK
    Client subscriber sending SUBSCRIBE (Mid: 1, Topic: cluster/test/topic, QoS: 1)
    Client subscriber received SUBACK
    Subscribed (mid: 1): 1

  • Publish message on the same topic on broker node 2

    mosquitto_pub -t cluster/test/topic -h <ip-node-2> -i publisher -m 'Cluster Test' -q 1 -d
    Client publisher sending CONNECT
    Client publisher received CONNACK
    Client publisher sending PUBLISH (d0, q1, r0, m1, 'cluster/test/topic', ... (12 bytes))
    Client publisher received PUBACK (Mid: 1)

  • Receive message at subscriber

    Client subscriber received PUBLISH (d0, q1, r0, m151, 'cluster/test/topic', ... (12 bytes))
    Client subscriber sending PUBACK (Mid: 151)
    Cluster Test

This concludes our test successfully.

Hint: Add an AWS Load Balancer to your cluster. This will provide a single transparent endpoint, turning the HiveMQ cluster into a single logical broker for your MQTT clients. You can find a detailed guide on adding a network load balancer in this post.
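
As a rough AWS CLI sketch of that setup (resource names and IDs are placeholders; the linked post covers the full configuration, including health checks):

    # network load balancer in the subnets that hold your EC2 instances
    aws elbv2 create-load-balancer --name hivemq-nlb --type network --subnets <subnet-id-1> <subnet-id-2>

    # TCP target group for MQTT, with both broker instances registered as targets
    aws elbv2 create-target-group --name hivemq-mqtt --protocol TCP --port 1883 --vpc-id <your-vpc-id>
    aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=<instance-id-1> Id=<instance-id-2>

    # listener that forwards MQTT traffic to the target group
    aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> --protocol TCP --port 1883 \
      --default-actions Type=forward,TargetGroupArn=<target-group-arn>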

SQL Database as a source of Authentication and Authorization

Because our HiveMQ MQTT broker cluster is now open to the internet and can theoretically be accessed and used by anyone, we introduce a centralized authentication and authorization mechanism to ensure our IoT security. To add authentication and authorization to our broker cluster, we choose the HiveMQ Enterprise Security Extension and use an SQL database as our centralized source of AUTH information.

AWS RDS

For this example we use a Postgres DB on AWS RDS as the SQL database. The HiveMQ Enterprise Security Extension supports many other SQL databases. A complete list can be found here.

Installation

Using the AWS RDS service as your SQL database is very simple, fast, and convenient. This service integrates perfectly with your HiveMQ cluster running on EC2 instances in AWS.

  1. Go to Amazon Relational Database Service
  2. Select “Create Database”
  3. Select “Easy Create”, “PostgreSQL” and “Free tier”
  4. Name your database, select a password for your master user, and click “Create database”

That’s it. We now have a viable SQL database running in the cloud. We can use this database as the source of information for our role-based access control of the MQTT clients that connect to our HiveMQ cluster.
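
For reference, a minimal AWS CLI equivalent of the “Easy Create” flow above; the instance identifier, instance class, and storage size are assumptions you can adjust:

    aws rds create-db-instance \
      --db-instance-identifier <your-db-name> \
      --engine postgres \
      --db-instance-class db.t2.micro \
      --master-username postgres \
      --master-user-password <your-password> \
      --allocated-storage 20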

Configuration

Next, we need to input the necessary information into our newly created database. This information consists of the user credentials for MQTT client authentication as well as the roles and permissions that are assigned to each user for authorization.

  • Connect to one of the two EC2 instances running your HiveMQ nodes

  • Install the PostgreSQL client tools on your EC2 instance (as root)

    [root@ip-xxxx ec2-user]# yum install -y postgresql10
    [root@ip-xxxx ec2-user]# yum install -y  https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-redhat10-10-2.noarch.rpm
    [root@ip-xxxx ec2-user]# yum -y install postgresql postgresql-server postgresql-devel postgresql-contrib postgresql-docs
    [root@ip-xxxx ec2-user]# service postgresql initdb

  • Connect to your newly created postgres instance

    [root@ip-xxxx ec2-user]# psql -h <your-db-endpoint> -U postgres -W
    Password for user postgres:
    postgres=>

  • Create database

    postgres=> create database hivemqdatabase;
    CREATE DATABASE

  • Connect to the created database

    postgres=> \c hivemqdatabase;

  • Run the Postgres initial table creation script postgresql_create.sql

    • That is, copy and paste the contents of the script into the psql console

This script creates all tables needed for role-based access control. Our database is now ready to serve as the source of authentication and authorization information.
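
Alternatively, instead of pasting the script into the console, you can run a downloaded copy of postgresql_create.sql directly with psql:

    psql -h <your-db-endpoint> -U postgres -d hivemqdatabase -f postgresql_create.sql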

Authentication and Authorization mechanism on the MQTT broker

The HiveMQ Enterprise MQTT broker allows you to implement both authentication and authorization mechanisms with the HiveMQ Extension SDK. Additionally, HiveMQ offers a variety of pre-built and enterprise extensions that are ready to use without any custom development.

HiveMQ Enterprise Security Extension

In this example, we use the HiveMQ Enterprise Security Extension and configure it to use our newly installed Postgres database on AWS.

Installation

The following steps need to be done on both of your HiveMQ instances:

  • Connect to the instance via SSH

    ssh -i <your-deployment-key> ec2-user@<instance-ip-address>

  • Switch to the root user

    sudo su

  • Download the HiveMQ Enterprise Security Extension

    wget <your-download-link>

  • Unzip the distribution

    unzip hivemq-enterprise-security-extension-1.2.0.zip

  • This will create a folder hivemq-enterprise-security-extension

  • Change ownership of this folder to the HiveMQ user

    [root@ip-xxxx extensions]# chown -R hivemq:hivemq hivemq-enterprise-security-extension

  • Open the HiveMQ Enterprise Security Extension configuration file (you may use a different text editor, of course)

    vi hivemq-enterprise-security-extension/enterprise-security-extension.xml

  • Configure the extension to use the AWS RDS Postgres DB we just created

    <?xml version="1.0" encoding="UTF-8" ?>
    <enterprise-security-extension
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:noNamespaceSchemaLocation="enterprise-security-extension.xsd"
            version="1">
        <realms>
            <!-- a postgresql db-->
            <sql-realm>
                <name>postgres-backend</name>
                <enabled>true</enabled>
                <configuration>
                    <db-type>POSTGRES</db-type>
                    <db-name><your-db-name></db-name>
                    <db-host><your-db-endpoint></db-host>
                    <db-port>5432</db-port>
                    <db-username>postgres</db-username>
                    <db-password><your-password></db-password>
                </configuration>
            </sql-realm>
        </realms>
        <pipelines>
            <!-- secure access to the mqtt broker -->
            <listener-pipeline listener="ALL">
                <!-- authenticate over a sql db -->
                <sql-authentication-manager>
                    <realm>postgres-backend</realm>
                </sql-authentication-manager>
                <!-- authorize over a sql db -->
                <sql-authorization-manager>
                    <realm>postgres-backend</realm>
                    <use-authorization-key>false</use-authorization-key>
                    <use-authorization-role-key>true</use-authorization-role-key>
                </sql-authorization-manager>
            </listener-pipeline>
        </pipelines>
    </enterprise-security-extension>

You can copy this file and adjust the <db-name>, <db-host>, and <db-password> values accordingly.

  • Download the necessary SQL driver for your Postgres DB

    [root@ip-xxxx ]# cd hivemq-enterprise-security-extension/drivers/jdbc/
    [root@ip-xxxx ]# wget https://jdbc.postgresql.org/download/postgresql-42.2.6.jar
    [root@ip-xxxx ]# chown hivemq:hivemq postgresql-42.2.6.jar

  • Move the configured and ready-to-use extension to the HiveMQ Extension folder

[root@ip-xxxx extensions]# mv hivemq-enterprise-security-extension /opt/hivemq/extensions/

HiveMQ now starts the extension at runtime. To verify that the extension started successfully, check your hivemq.log for the following entries:

2019-08-25 16:16:05,296 INFO  - Starting HiveMQ Enterprise Security Extension.
2019-08-25 16:16:05,421 INFO  - com.hivemq.extensions.ese.postgres.postgres-backend - Starting...
2019-08-25 16:16:05,505 INFO  - com.hivemq.extensions.ese.postgres.postgres-backend - Start completed.
2019-08-25 16:16:05,522 INFO  - Access log is written to /opt/hivemq/log/access/access.log.
2019-08-25 16:16:05,529 INFO  - Started HiveMQ Enterprise Security Extension successfully in 234ms.
2019-08-25 16:16:05,529 INFO  - Extension "HiveMQ Enterprise Security Extension" version 1.2.0 started successfully.
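
On a busy broker, you can filter the log for these entries instead of scanning it manually:

    grep "Security Extension" /opt/hivemq/log/hivemq.log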

Configuration

Now that our HiveMQ cluster, database, and HiveMQ Enterprise Security Extension are running, all we need to do is configure the extension for our individual needs. The ESE comes with a helper tool that creates properly hashed passwords and can provide the necessary insert statements for our database. In this guide, we work with the following three users, roles, and permissions:

  • Superuser Role: Username ‘superuser’, password ‘supersecurepassword’. Allowed to publish and subscribe to all topics (‘#’).
  • Frontendclient: Username ‘frontendclient’, password ‘clientpassword’. Allowed to publish on the topic ‘topic/{clientID}/status’, where {clientID} is substituted with its own MQTT client ID. This user group is not allowed to subscribe to any topics.
  • Backendservice: Username ‘backendservice’, password ‘backendpassword’. Allowed to subscribe to the topic ‘topic/+/status’, which receives the status messages of all front-end clients. This user group is not allowed to publish any data.

To configure the HiveMQ Enterprise Security Extension accordingly, we need to do the following:

  • Connect to our Postgres database again, via one of our EC2 instances
  • Run the insert statements from the ese-example-users-and-permissions.sql file
    hivemqdatabase=> insert into public.users
    hivemqdatabase->   (id, username, password, password_iterations, password_salt, algorithm)
    hivemqdatabase->   values
    hivemqdatabase->     (1, 'backendservice', 'wtUo2dri+ttHGHRpngg9uG21piWLiKSX7IaNSnU/BfN9pt+ZOLQByG/3JlPPQ7t/pl8S3tjR2+Um/DPBdAQULg==', 100, 'Nv6NU9XY7tvHdSGaKmNTOw==', 'SHA512'),
    hivemqdatabase->     (2, 'frontendclient', 'ZHg/rNJel1BHOYMEvc40ekCRUE5vVLcsPF6mk9GPDcdEmX3stm50MplaqjGb8Lxhy6rNFQZSQRSbOxmFZ8ps1Q==', 100, 'JhpW27QU9WfIaG6FJT5MkQ==', 'SHA512'),
    hivemqdatabase->     (3, 'superuser', 'nOgr9xVnkt51Lr68KS/rAKm/LqxAt8oEki7vCerRod3qDbyMFfDBGT8obnkw+AGygxCQDWdaA2sQnXXoAbVK6Q==', 100, 'wxw+3diCV4bWXQHb6LLniA==', 'SHA512');
    INSERT 0 3
    hivemqdatabase=>
    hivemqdatabase=> insert into public.permissions
    hivemqdatabase->   (id, topic, publish_allowed, subscribe_allowed, qos_0_allowed, qos_1_allowed, qos_2_allowed, retained_msgs_allowed, shared_sub_allowed, shared_group)
    hivemqdatabase->   values
    hivemqdatabase->     (1, 'topic/+/status', false, true, true, true, true, false, false, ''),
    hivemqdatabase->     (2, 'topic/${mqtt-clientid}/status', true, false, true, true, true, true, false, ''),
    hivemqdatabase->     (3, '#', true, true, true, true, true, true, true, '');
    INSERT 0 3
    hivemqdatabase=>
    hivemqdatabase=> insert into public.roles
    hivemqdatabase->   (id, name, description)
    hivemqdatabase->   values
    hivemqdatabase->     (1, 'backendservice', 'only allowed to subscribe to topics'),
    hivemqdatabase->     (2, 'frontendclients', 'only allowed to publish to topics'),
    hivemqdatabase->     (3, 'superuser', 'is allowed to do everything');
    INSERT 0 3
    hivemqdatabase=>
    hivemqdatabase=> insert into public.user_roles
    hivemqdatabase->   (user_id, role_id)
    hivemqdatabase->   values
    hivemqdatabase->     (1, 1),
    hivemqdatabase->     (2, 2),
    hivemqdatabase->     (3, 3);
    INSERT 0 3
    hivemqdatabase=>
    hivemqdatabase=> insert into public.role_permissions
    hivemqdatabase->   (role, permission)
    hivemqdatabase->   values
    hivemqdatabase->     (1, 1),
    hivemqdatabase->     (2, 2),
    hivemqdatabase->     (3, 3);
    INSERT 0 3
    hivemqdatabase=>
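
Before testing with an MQTT client, you can sanity-check the user-to-permission mapping with a join across the tables we just filled (column names as used in the inserts above):

    psql -h <your-db-endpoint> -U postgres -d hivemqdatabase -c "
      select u.username, r.name as role, p.topic, p.publish_allowed, p.subscribe_allowed
        from users u
        join user_roles ur on ur.user_id = u.id
        join roles r on r.id = ur.role_id
        join role_permissions rp on rp.role = r.id
        join permissions p on p.id = rp.permission
       order by u.username;"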

That’s it. We have successfully created a high-availability HiveMQ MQTT broker cluster that runs on AWS with centralized authentication and authorization for any MQTT clients that try to connect. Now, we use the mosquitto_pub and mosquitto_sub command line tools to see if everything works as expected.

  • Let’s connect as a superuser and subscribe to the topic ‘#’

    mosquitto_sub -t '#' -q 1 -u superuser -P supersecurepassword -h <broker-ip> -i superuser-client -V mqttv311 -d

  • And see what the /opt/hivemq/log/access/access.log file tells us

    2019-08-25 17:07:20,384 UTC - authentication-succeeded - Client succeeded authentication: ID superuser-client, IP 80.147.158.33.
    2019-08-25 17:07:20,394 UTC - authorization-succeeded - Client succeeded authorization: ID superuser-client, IP 80.147.158.33, permissions [Permission{topicFilter='#', qos=[0, 1, 2], activity=[publish, subscribe], retainedPublishAllowed=true, sharedSubscribeAllowed=true, sharedGroup='', from='superuser'}].

  • Now let’s publish a message as a front-end client to a topic matching our client ID

    mosquitto_pub -V mqttv311 -u frontendclient -P clientpassword -h 18.185.2.106 -i front-end-client-1 -t topic/front-end-client-1/status -q 1 -m 'This is test' -d

  • Again a success

    2019-08-25 17:07:25,573 UTC - authentication-succeeded - Client succeeded authentication: ID front-end-client-1, IP 80.147.158.33.
    2019-08-25 17:07:25,576 UTC - authorization-succeeded - Client succeeded authorization: ID front-end-client-1, IP 80.147.158.33, permissions [Permission{topicFilter='topic/front-end-client-1/status', qos=[0, 1, 2], activity=[publish], retainedPublishAllowed=true, sharedSubscribeAllowed=false, sharedGroup='', from='frontendclients'}].

  • Finally, let’s check that our permissions are really enforced and try to publish to a slightly different topic with the same front-end client

  • We can see that the PUBLISH is not successful, as the broker terminates our connection.

     mosquitto_pub -V mqttv311 -u frontendclient -P clientpassword -h 18.185.2.106 -i front-end-client-1 -t topic/front-end-client-2/status -q 1 -m 'This is test' -d
     Client front-end-client-1 sending CONNECT
     Client front-end-client-1 received CONNACK
     Client front-end-client-1 sending PUBLISH (d0, q1, r0, m1, 'topic/front-end-client-2/status', ... (12 bytes))
     Error: The connection was lost.
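
To round out the test, you can verify the backendservice role in the same way; with the permissions inserted above, it should receive the status messages published by front-end clients (broker address is a placeholder):

    mosquitto_sub -V mqttv311 -t 'topic/+/status' -q 1 -u backendservice -P backendpassword -h <broker-ip> -i backend-service-1 -d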

Conclusion and resource list

Authentication and authorization are crucial puzzle pieces of IoT security. This blog post demonstrated how to leverage AWS and HiveMQ to build a high-availability MQTT broker cluster with centralized AUTH mechanisms. IoT security is a complex and multi-layered topic. Adding authentication and authorization via a centralized SQL database is a great start toward a secure deployment that fulfills your IT-security requirements. Contact us for guidance on additional aspects of IoT security tailored to your specific needs.

Finally, here’s a list of the resources you need to build a high-availability MQTT broker cluster with centralized IoT security:

  • HiveMQ AMI
  • HiveMQ S3 Cluster Discovery Extension
  • HiveMQ Enterprise Security Extension
  • AWS EC2, S3, and RDS (PostgreSQL)
  • PostgreSQL JDBC driver
  • postgresql_create.sql and ese-example-users-and-permissions.sql scripts
  • mosquitto_pub and mosquitto_sub command line clients


About Florian Raschbichler

Florian serves as the head of the HiveMQ support team with years of first hand experience overcoming challenges in achieving reliable, scalable, and secure IoT messaging for enterprise customers.
