
How to install HiveMQ with Terraform

by Matthias Hofschen
12 min read

HiveMQ is the perfect solution to provide highly reliable and scalable MQTT communications for millions of devices. Let’s take a look at how to install a HiveMQ Cluster on AWS using Terraform.

It is really simple to manually install HiveMQ as a service on one machine. You can do a single-machine installation within a few minutes. However, if you want to deploy a cluster of machines for high availability and reliability, change the standard HiveMQ installation, and perhaps configure MQTT communication to use TLS encryption, manual installation becomes less attractive. On top of that, you might want to deploy and configure some of the great extensions that are available from the HiveMQ Marketplace. Doing this sort of installation manually would take a lot of time and doesn’t provide an automatic restart of the service if a machine dies. The solution: we need to automate the installation process.

Terraform is one option to write infrastructure code that allows for the reproducible installation of machines and services such as HiveMQ. Infrastructure as code together with continuous integration/continuous deployment (CI/CD) pipelines and version control systems such as Git has many benefits:

  • Full history of changes

  • Automatic installation procedures

  • Reproducible environments for development, staging, and production

The practice of working with these components across teams of developers and operations engineers is often referred to as GitOps.

Terraform encourages a declarative coding style. A plan of the current infrastructure state is rendered and stored on a configurable backend, for example AWS S3 or, as in our example, the local filesystem. Subsequent runs of Terraform compare the existing state with the desired state and apply the necessary changes to the infrastructure.
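As a side note, here is a minimal sketch of what an S3 state backend could look like (the bucket name is a hypothetical placeholder; this post simply uses the default local backend):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"       # hypothetical bucket for the state file
    key    = "hivemq/terraform.tfstate" # path of the state file within the bucket
    region = "eu-west-1"
  }
}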

Let’s take a closer look at how to install a two-node HiveMQ cluster on AWS. One of the core elements we use is the AWS Autoscaling Group, which allows the HiveMQ cluster to be elastically scaled according to load. The Autoscaling Group can also be configured to automatically restart crashed nodes, improving the reliability of the cluster. As an Amazon Machine Image (AMI), we use the freely available HiveMQ AMI from the AWS Marketplace. This AMI, based on CentOS, provides a solid foundation for the cluster and comes preinstalled with HiveMQ and a Java 11 runtime. Keep in mind that without a HiveMQ license the number of concurrent connections is limited.
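If you want to check which HiveMQ AMIs are currently published before writing any Terraform code, one option is the AWS CLI (a sketch; the owner ID is the marketplace account that is referenced again in the Terraform code below):

# List the names of available HiveMQ AMIs from the marketplace owner account
aws ec2 describe-images \
  --owners 474125479812 \
  --filters "Name=name,Values=HiveMQ*" \
  --query "Images[].Name" \
  --region eu-west-1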

Overview:
What does the high-level overview of the installation look like? Here is a simple graphic to show you what I have in mind:

[Diagram: Terraform setup for the HiveMQ cluster installation]

Configuration files and installation artifacts are uploaded from Git (for now manually) to an AWS S3 bucket. Each HiveMQ cluster node downloads the configuration files and artifacts during its installation. The AWS S3 bucket acts as a point of transfer between version control and service installation.

Once the configuration files are in place, we are ready to invoke the Terraform code and start the installation. After a few minutes, the HiveMQ cluster nodes start and we can obtain the public address for one of the HiveMQ nodes from the AWS EC2 web console. With the address and port 8080, direct your browser to the HiveMQ Control Center.

Let’s dive into the nitty-gritty details:

Prerequisites:

  • If you have not yet installed Terraform on your computer, head over to the Hashicorp website and follow the installation guide.

  • Have your AWS account ready, and make sure to create an access token that authenticates the code against your AWS account. Here is the guide from AWS.

  • On the AWS S3 web console, create the S3 bucket “hivemq-install” with a folder named “hivemq-artifacts”.

  • Download the HiveMQ S3-Cluster-Discovery extension here and upload the zip file to the bucket’s “hivemq-artifacts” folder.

  • Create two configuration files to customize HiveMQ and upload them to the same “hivemq-artifacts” folder (an example upload command follows the two file listings below).

  • Upload the first file with the name “config.xml”.

<?xml version="1.0"?>
<hivemq>
    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>${PRIVATE_IP}</bind-address>
            <proxy-protocol>true</proxy-protocol>
        </tcp-listener>
    </listeners>
    <control-center>
        <listeners>
            <http>
                <port>8080</port>
                <bind-address>${PRIVATE_IP}</bind-address>
            </http>
        </listeners>
    </control-center>
    <cluster>
        <enabled>true</enabled>
        <transport>
            <tcp>
                <bind-address>${PRIVATE_IP}</bind-address>
                <bind-port>7800</bind-port>
            </tcp>
        </transport>
        <discovery>
            <extension/>
        </discovery>
    </cluster>
</hivemq>

  • Upload the second file with the name “s3discovery.properties”.

# S3-discovery
credentials-type: instance_profile_credentials

s3-bucket-name: ${BUCKET}
s3-bucket-region: ${REGION}
file-prefix: hivemq/cluster/nodes/

file-expiration: 360   # Expiration in seconds
update-interval: 180   # Interval in seconds
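With both files created locally, you can upload them either through the S3 web console as described above, or with the AWS CLI, roughly like this (a sketch, assuming the bucket and folder names from the previous steps):

# Upload the two configuration files to the artifacts folder
aws s3 cp config.xml s3://hivemq-install/hivemq-artifacts/config.xml
aws s3 cp s3discovery.properties s3://hivemq-install/hivemq-artifacts/s3discovery.properties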

Infrastructure Code:

  • Now, go ahead and create a local “terraform” folder on your computer for the infrastructure code files.

  • In the “terraform” folder, create a “variables.tf” file and make sure to replace the two placeholders with your access and secret key from AWS. Find out more about authentication with Terraform here.

variable "access_key" {
  default = "<PROVIDE YOUR ACCESS KEY>"
}
variable "secret_key" {
  default = "<PROVIDE YOUR SECRET KEY>"
}
variable "region" {
  default = "eu-west-1"
}
variable "s3_bucket" {
  default = "hivemq-install"
}
variable "instance_type" {
  default = "t2.medium" 
}
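Keep in mind that hardcoded credentials easily end up in version control. As an alternative, Terraform also picks up variable values from TF_VAR_-prefixed environment variables, so you could leave the defaults empty and export the keys in your shell instead (a sketch with placeholder values):

# Provide the credentials via environment variables instead of file defaults
export TF_VAR_access_key="<YOUR ACCESS KEY>"
export TF_VAR_secret_key="<YOUR SECRET KEY>"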
  • In the “terraform” folder, create a “main.tf” file.

provider "aws" {
  region     = var.region
  access_key = var.access_key
  secret_key = var.secret_key
}

resource "aws_autoscaling_group" "asg" {
  name_prefix        = "hivemq-asg-"
  availability_zones = ["eu-west-1a"]
  desired_capacity   = 2
  max_size           = 2
  min_size           = 1

  launch_template {
    id      = aws_launch_template.template.id
    version = "$Latest"
  }
}

resource "aws_launch_template" "template" {
  name_prefix           = "hivemq-template-"
  instance_type         = var.instance_type
  image_id              = data.aws_ami.hivemq-ami.id
  user_data             = data.template_cloudinit_config.config.rendered

  vpc_security_group_ids = [ aws_security_group.sec-group.id ]
  iam_instance_profile { arn = aws_iam_instance_profile.profile.arn }
}

data "template_cloudinit_config" "config" {
  base64_encode = true
  gzip = true
  part {
    content_type  = "text/x-shellscript"
    content       = templatefile("hivemq-install.sh", {
      s3_bucket = var.s3_bucket
      region    = var.region
    })
  }
}

data "aws_ami" "hivemq-ami" {
  most_recent = true
  owners = ["474125479812"]
  filter {
    name = "name"
    values = ["HiveMQ 4.3.3"]
  }
}

resource "aws_security_group" "sec-group" {
  name_prefix   = "hivemq-sec-group-"

  ingress {
    description = "ssh access"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "mqtt access"
    from_port   = 1883
    to_port     = 1883
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "internal"
    from_port   = 7800
    to_port     = 7800
    protocol    = "tcp"
    cidr_blocks = ["172.31.0.0/16"] #make sure this is the correct cidr for your AWS VPC
  }

  ingress {
    description = "Control Center"
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_iam_instance_profile" "profile" {
  name_prefix   = "hivemq-iam-profile-"
  role          = aws_iam_role.s3_role.name
}

resource "aws_iam_role_policy_attachment" "instance_policy_attach" {
  role          = aws_iam_role.s3_role.name
  policy_arn    = aws_iam_policy.s3_policy.arn
}

resource "aws_iam_role" "s3_role" {
  name_prefix        = "hivemq-role-"
  path               = "/"
  assume_role_policy = <<ROLE
{
  "Version": "2012-10-17",
  "Statement": [ {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Principal": { "Service": "ec2.amazonaws.com" }, "Sid": "" } ]
}
ROLE
}

resource "aws_iam_policy" "s3_policy" {
  name_prefix           = "hivemq-s3-policy-"
  policy                = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [ {
    "Effect": "Allow",
    "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket" ],
    "Resource": [ "arn:aws:s3:::hivemq-install", "arn:aws:s3:::hivemq-install/*" ]}]
}
POLICY
}

  • Finally, create a “hivemq-install.sh” file in the “terraform” folder.

#!/bin/bash
set -ex

systemctl stop hivemq

yum -y update
# The AWS CLI is needed for the S3 download, unzip for unpacking the extension
yum -y install awscli unzip

mkdir /opt/hivemq-artifacts
aws s3 sync s3://${s3_bucket}/hivemq-artifacts/ /opt/hivemq-artifacts --region ${region}

# Render config.xml with this node's private IP from the EC2 instance metadata
PRIVATE_IP=$(curl -sS http://169.254.169.254/latest/meta-data/local-ipv4) \
envsubst < /opt/hivemq-artifacts/config.xml \
         > /opt/hivemq/conf/config.xml

unzip /opt/hivemq-artifacts/hivemq-s3-cluster-discovery-extension-*.zip -d /opt/hivemq/extensions

# Render the extension configuration with the bucket and region values
BUCKET="${s3_bucket}" \
REGION="${region}" \
envsubst < /opt/hivemq-artifacts/s3discovery.properties \
         > /opt/hivemq/extensions/hivemq-s3-cluster-discovery-extension/s3discovery.properties

chown -R hivemq:hivemq /opt/hivemq/extensions
chmod -R 770 /opt/hivemq/extensions
systemctl start hivemq

Ready to go:

Now that all the required pieces are in place, start the installation:

  • Initialize the local “terraform” folder. From inside the folder execute the following command:

terraform init
  • Use the following command to verify that the code is valid and to create the Terraform plan:

terraform plan
  • The final step is to apply the code to AWS (keep in mind that from now on AWS will charge your account for the resources in use):

terraform apply -auto-approve

Test your cluster:

  • Now, you can verify that your two-node cluster is actually running. From the AWS EC2 web console, get the public address of one of the HiveMQ nodes. This address will look something like the following: ec2-54-194-18-56.eu-west-1.compute.amazonaws.com

  • Direct your browser to “http://<address>:8080” and provide the HiveMQ Control Center login information. If everything went according to plan, you should see 2 nodes running in the upper right-hand corner of the display. (Find the login prompt here in the documentation).
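If you do not see both nodes, it can help to SSH into one of the instances and inspect the installation output and the service logs (a sketch; exact log locations may vary between HiveMQ versions):

# Check the output of the cloud-init installation script
less /var/log/cloud-init-output.log

# Follow the HiveMQ service and broker logs
journalctl -u hivemq -f
tail -f /opt/hivemq/log/hivemq.log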

  • Use the open-source HiveMQ MQTT CLI (command line client) tool to test MQTT connectivity to your cluster. For more information on the CLI client, see the documentation.
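For example, a quick publish/subscribe round trip with the MQTT CLI could look like this (a sketch; replace <address> with the public address of one of your nodes):

# Terminal 1: subscribe to a test topic on the cluster
mqtt sub -h <address> -p 1883 -t test/topic

# Terminal 2: publish a message to the same topic
mqtt pub -h <address> -p 1883 -t test/topic -m "hello hivemq"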

  • If the client connects successfully to the cluster, the connection appears on the dashboard.

  • Once you are done testing, do not forget to destroy the cluster. It’s easy to destroy your test cluster; simply use the following Terraform command:

terraform destroy -auto-approve

Final notes:

I’ve kept the sample HiveMQ installation in this blog post as simple as possible. To create a secure production-ready installation for a real-world scenario you would add additional components. Here are some examples:

  • An AWS load balancer to front the autoscaling group and provide a unified access point for clients (a rough sketch follows this list).

  • Cluster security for authentication, authorization and transport layer security (TLS). Take a look at these blog posts for more information.

  • Operational visibility with a monitoring extension such as the InfluxDB extension. Find out more in this blog post.

  • A CI/CD pipeline to provide automation.

  • A Terraform configuration that provides a persistent state backend.

  • A larger AWS EC2 instance type such as “m5.xlarge” for production use cases.
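To illustrate the first point, a network load balancer in front of the autoscaling group could be sketched roughly like this (all names, the subnet list, and the VPC ID are hypothetical placeholders; the target group would additionally be attached to the autoscaling group, for example via its target_group_arns argument):

resource "aws_lb" "mqtt" {
  name_prefix        = "hivemq"
  load_balancer_type = "network"
  subnets            = ["<YOUR SUBNET IDS>"]   # placeholder: the subnets of your VPC
}

resource "aws_lb_target_group" "mqtt" {
  name_prefix       = "hivemq"
  port              = 1883
  protocol          = "TCP"
  vpc_id            = "<YOUR VPC ID>"          # placeholder: your VPC
  proxy_protocol_v2 = true                     # matches the proxy-protocol listener in config.xml
}

resource "aws_lb_listener" "mqtt" {
  load_balancer_arn = aws_lb.mqtt.arn
  port              = 1883
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.mqtt.arn
  }
}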

How to make this all a lot simpler?

In fact, there is a really simple way to install HiveMQ on AWS, one that is much less involved than what I have described in this post. Just hop over to HiveMQ Cloud and create a dedicated, scalable, and reliable HiveMQ cluster, fully configured and ready to go in a few minutes.


Matthias Hofschen

Matthias Hofschen is Engineering Manager at HiveMQ.
