Automate your HiveMQ installation with Concourse and Terraform
In my previous blog post, I wrote about automating your HiveMQ installation with Terraform. However, manually copying resources to an AWS S3 bucket and executing Terraform commands on the command line does not feel very automated. In this blog post, we will up our level of automation with a Concourse CI pipeline.
Concourse CI was initially released in 2014 and has gained some traction since. The CI relies on Docker containers to define resources for the different stages of a pipeline. Every stage runs inside a Docker container, receives input from the previous stage, and provides output to the next stages. Many standard resources such as a Git resource are available. For a listing of resources, see Concourse CI resources.
Because every stage and task runs in a Docker container, extending pipelines with custom functionality is straightforward. There is no need to install plugins to the CI server, and you can easily migrate pipelines between Concourse installations. Concourse also provides a command-line tool (CLI) called “fly” that makes it easy to script pipelines for further automation.
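To get a feel for the model, here is a minimal hello-world pipeline, adapted from the Concourse documentation, registered and started with fly (the target alias "local" is an assumption):

```sh
# Write a minimal pipeline definition: a single job whose task runs inside
# a busybox container image (adapted from the Concourse hello-world example).
cat > hello-pipeline.yml <<'EOF'
jobs:
- name: hello
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      run: {path: echo, args: ["Hello, Concourse!"]}
EOF

# Register the pipeline and unpause it so the job can run.
fly -t local set-pipeline -p hello -c hello-pipeline.yml
fly -t local unpause-pipeline -p hello
```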
Let’s get back to talking about GitOps and Infrastructure as Code. The GitOps approach promotes keeping all infrastructure code and configuration under source control. GitOps is really "Operations by Pull Request". The Git repository is the single source of truth when it comes to describing the infrastructure. All changes are made via pull requests and generate a complete history. Environments can be installed in a consistent and reproducible manner. Defining the deployment pipeline itself in code makes installing an environment even easier to automate.
For this blog post, I provide a GitHub repository that contains the Terraform code from the previous post and two added improvements:
A load balancer that allows clients to connect to a single public DNS address.
Automatic transfer of HiveMQ artifacts and configuration files to the configured S3 bucket.
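To illustrate the second improvement: conceptually, the upload boils down to something like the following (the repository performs it with Terraform rather than the AWS CLI, and the file names here are placeholders):

```sh
# Illustration only: copy the HiveMQ configuration and artifacts into the
# S3 bucket that the EC2 instances read from during provisioning.
aws s3 cp config.xml "s3://${S3_BUCKET}/config.xml"
aws s3 sync artifacts/ "s3://${S3_BUCKET}/artifacts/"
```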
The repository also contains the Concourse pipeline definition file. This pipeline definition can be registered with any Concourse CI installation for execution. The file defines how to check out the infrastructure code and configuration files from Git and execute the code inside a Terraform Docker container. This eliminates the need to install Terraform.
For convenience, I have included a makefile that uses the Concourse CLI tool fly to easily start the pipeline. For more useful fly commands, see the Concourse fly documentation.
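Under the hood, such make targets boil down to a handful of fly commands, roughly like these (the target alias, pipeline name, and file paths are assumptions; check the makefile for the exact invocations):

```sh
# Log in to the local Concourse instance (quickstart default credentials).
fly -t local login -c http://localhost:8080 -u test -p test

# Register the pipeline definition together with the variables file.
fly -t local set-pipeline -p hivemq -c pipeline.yml -l pipeline-vars.yml

# Newly set pipelines start paused; unpause to let jobs run.
fly -t local unpause-pipeline -p hivemq
```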
Our goal remains to install a two-node HiveMQ cluster that provides the perfect solution for highly reliable and scalable MQTT communications. The addition of a Concourse pipeline further automates our installation.
Overview
Here’s a high-level overview of what the solution looks like:

We reuse the Terraform setup from the previous blog post and add a load balancer. In addition, artifacts and configuration files from Git are automatically uploaded to S3.
The Concourse pipeline sits at the center of this solution. The pipeline checks out the code from a Git repository using the Git resource. The provided Terraform resource then applies the Terraform code from the Git repository and triggers the installation.
Let’s dive into the nitty-gritty details:
Prerequisites
Have your AWS account ready and make sure to create an access key that authenticates the code against your AWS account. Here is the guide from AWS.
Check out the Git repository.
Edit the “pipeline-vars.yml” file to add the AWS access and secret keys (see the sketch after this list).
Create an AWS S3 bucket and add the name to the “pipeline-vars.yml” file.
Make sure you have “docker” and “docker-compose” installed.
Download the Docker compose files for Concourse and start Concourse:
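```sh
# Official Concourse quickstart: fetch the docker-compose file and start
# Concourse locally (web UI on port 8080).
curl -O https://concourse-ci.org/docker-compose.yml
docker-compose up -d
```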
Install the Concourse CLI tool fly. For step-by-step instructions, see The fly CLI.
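For reference, the variables file might look something like this (the key names here are assumptions; use whatever variable names the pipeline definition actually references):

```sh
# Sketch of pipeline-vars.yml; replace the placeholders with your values.
cat > pipeline-vars.yml <<'EOF'
aws-access-key-id: <your-access-key-id>
aws-secret-access-key: <your-secret-access-key>
s3-bucket: <your-bucket-name>
EOF
```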
Ready to go
Now that all the required pieces are in place, start the installation:
Use your browser to open the Concourse CI dashboard at http://localhost:8080/ and log in with the default credentials (test:test) to observe the installation progress.
Use the provided make commands to log in and start the installation (the target names below are illustrative; check the repository’s makefile for the exact commands):
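```sh
# Illustrative target names; check the repository's makefile for the real ones.
make login      # wraps "fly login" against the local Concourse instance
make install    # registers the pipeline and kicks off the installation job
```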
Test your cluster
Now, you can verify that your two-node cluster is actually running. From the AWS EC2 web console, get the public address of the load balancer. This address will look something like the following:
“hmq-elb-429927720.eu-west-1.elb.amazonaws.com”

Direct your browser to “http://<address>:8080” and provide the HiveMQ Control Center login information. If everything went according to plan, you should see 2 nodes running in the upper right-hand corner of the display. (The default login credentials can be found in the HiveMQ Control Center documentation.)
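If you prefer the command line over the web console, the AWS CLI can list the DNS names as well (this assumes a classic load balancer, matching the address format above):

```sh
# Print the DNS names of all classic load balancers in the current region.
aws elb describe-load-balancers \
  --query 'LoadBalancerDescriptions[].DNSName' --output text
```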
Use the open-source MQTT CLI tool to test the connectivity to your cluster. Take a look at this blog post or the documentation for more information.
If the client connects successfully to the cluster, the connection appears on the dashboard.
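For example, a quick publish/subscribe round trip against the cluster could look like this (port 1883 assumes the default MQTT listener configuration):

```sh
# Subscribe in one terminal ...
mqtt sub -h <address> -p 1883 -t test/topic

# ... and publish from another; the message should arrive at the subscriber.
mqtt pub -h <address> -p 1883 -t test/topic -m "hello from the new cluster"
```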
Once you are done testing, remember to destroy the cluster using the Concourse pipeline to avoid additional usage charges. For convenience, use a command like the following (the exact target depends on the repository’s makefile):
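```sh
# Illustrative target name; check the repository's makefile. The target
# triggers the pipeline job that runs "terraform destroy".
make destroy
```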
Final notes
To create a secure, production-ready installation for a real-world scenario, you need to add several components. Here are some examples:
Cluster security for authentication, authorization, and transport layer security (TLS). Take a look at these blog posts for more information.
Operational visibility with a monitoring extension such as the InfluxDB extension. Find out more in this blog post.
A larger AWS EC2 instance type such as “m5.xlarge” for production use cases.
How to make this all a lot simpler?
Don’t forget that a fully configured and ready-to-use HiveMQ cluster can be installed in just a few minutes with HiveMQ Cloud.