Building a Reliable and Scalable IoT Platform



Written by Christian Kurze

Category: HiveMQ, MongoDB, IoT, MQTT

Published: June 8, 2020

Today, we welcome Christian Kurze, Principal Solution Architect with MongoDB, who has written a guest blog post about creating a reliable and scalable IoT platform with HiveMQ and MongoDB. This topic is also covered in a recent joint webinar with Christian and Dominik Obermaier, CTO of HiveMQ.

Starting with the Why: The Value of IoT

According to McKinsey, leaders in the Internet of Things (IoT) space see a positive impact of more than 15% on costs and revenue. But how can businesses maintain these numbers if margins erode?

First off, with a strong commitment from senior management to pursue a business strategy, usually built on the following three pillars:

  • Drive sales and service efficiency through device monitoring, maintenance apps, field services, and proper staffing
  • Develop new IoT products and services, such as new apps or the fleet management scenario that we covered with a demo in the webinar
  • Optimize business operations in areas such as manufacturing and R&D or across companies through optimized supply chains

These business initiatives are drilled down into IT initiatives that are usually grouped into four central themes:

  1. Connecting devices such as cars, buildings, equipment, or wearables via sensors and allowing them to act as actuators
  2. Creating business applications such as customer and/or device-facing apps, dashboards, and mobile applications
  3. Building device enablement platforms that can obtain, import, and process data using standard protocols
  4. Adopting cloud and edge computing for new kinds of workloads and to optimize costs

Since IoT (and digital twins) applies to virtually every industry, it is important to find a common model as a baseline and a high-level definition of IoT and digital twins: for example, the combination of all data across the whole physical lifecycle of a product (from R&D, production, operation, and maintenance to decommissioning) into an information lifecycle that allows us to describe physical assets, predict their behavior, and derive recommendations based on analytics.

Frequently, one of the most important aspects is overlooked: information lives longer than the physical product itself. This enables a continuous improvement cycle in which collected data is used to improve both existing and future products, processes, and services.

Core of IoT and Digital Twins

Combination of the physical and information lifecycle of products to form an IoT System / Digital Twin

Why We Are Not There Yet: Typical IoT Challenges

With all the value that IoT promises, one may ask why IoT and digital twins have not been adopted more broadly so far. In fact, 30% of our webinar participants indicated that they had not started an IoT project at all yet, and another 30% were still in the planning and/or design phase. There may be different reasons for this, but we want to highlight the most common challenges that we see our customers facing before they reach out to us.

The volume of data as well as the integration across departments and sometimes even verticals can be challenging. According to McKinsey, only 30% of relevant IoT projects make it into production and a company-wide roll-out. They sum up the primary technical challenges and requirements as follows:

Delivering IoT at scale requires the ability to extract, interpret, and harmonize data from disparate systems that were not designed to work together (McKinsey).

In addition, web technologies today are built for the Internet of Humans, not the Internet of Things. Today, about 4 billion people - nearly half of the world’s population - have access to the Internet. In contrast, more than 30 billion devices are currently connected to the Internet of Things (with an expectation of continued exponential growth). New protocols such as MQTT (rather than HTTP/REST) are essential to handle this transformation, as are highly available and massively scaling technologies.

Humans and Devices on the Internet

Comparison of humans and devices on the Internet

We hear the following challenges from our customers every day:

  1. Many different types of data need to be integrated. Device data arrives in various formats (JSON, Avro, Protobuf, custom binary formats), and in most cases it is time series data. Data-agnostic message brokers are used to distribute the data into the backend of IoT solutions, where it often ends up in relational databases that are not well suited for IoT data.
  2. Systems have to be responsive. Low latency is critical for many, if not all, IoT use cases. End users simply expect responsive systems: they do not want to wait 30 seconds or even minutes for an IoT system to respond when their messaging services can send text, voice, and images around the globe within seconds. Unreliable cellular networks can have a significant impact on responsiveness. A very good example is the HiveMQ case study with BMW / ShareNow on how to gather data and execute remote commands for a carsharing fleet.
  3. IoT solutions need to scale to accommodate growth from hundreds to millions of devices, and to scale up and down to absorb spikes. But it is not just about large scale: there also have to be offerings for small-scale solutions.
  4. IoT data needs to integrate into enterprise systems such as ERP and CRM and allow stable device-to-cloud and cloud-to-device integration. In addition to system integration, management tools such as monitoring, alerting, backup, and the necessary security must be in place.
  5. IoT solutions from the edge to the cloud need deployment-agnostic technologies: on-premises gateways close to the origin of the data (for example in manufacturing plants or inside connected cars), deployments in private and/or public clouds, and fully managed services in the cloud all benefit from non-proprietary technology.

Addressing these Challenges: Built-in Reliability and Flexibility

So how can these challenges now be tackled concretely? Let’s take a look at a high-level IoT architecture (see diagram below). Sensors and actuators are the heart of any IoT application. Usually an edge gateway is used to gather the data from the actual devices and send it to the backend. Edge computing also makes it possible to process data locally and avoid the additional latencies in transferring data to a central data center or the cloud. A streaming and routing layer is responsible for secure and guaranteed bi-directional data transfer across potentially unstable networks to the backend. From the backend, data is streamed into a hot data storage layer for real-time data processing. To support batch analysis and machine learning across a large amount of data, a cold data tier is used. The data can be used for visualization in dashboards and end-user applications as well as for advanced analytics and machine learning.

High-level architecture of an IoT platform

High-level architecture of an IoT platform

Let’s take a look at what makes MQTT the leading IoT and IIoT messaging protocol and why it is such a good fit for the streaming and routing layer. MQTT has a publish/subscribe architecture with minimal overhead, it is extremely easy to use, and its binary format makes it fast and data agnostic. The MQTT protocol is designed from the ground up for reliable communication across unreliable channels such as mobile connections. This design makes MQTT the ideal communication protocol between devices and IoT backends. The use cases for MQTT range from industrial applications in Industry 4.0 environments and connected vehicles to logistics and home automation. The MQTT broker is the heart of the communication. It is the integration point of frontend systems and backend systems, which is why reliability, security and an extension system are key.
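To make the reliability features mentioned above concrete, here is a minimal sketch using the Python paho-mqtt client (1.x API): a device connects with a persistent session, registers a Last Will message, and publishes a retained status message with QoS 1. The broker hostname, client ID, and topic names are placeholders, not values from the demo.

```python
import paho.mqtt.client as mqtt

# Persistent session: the broker keeps subscriptions and queued QoS 1/2
# messages while the client is offline (paho-mqtt 1.x API).
client = mqtt.Client(client_id="vehicle-001", clean_session=False)

# Last Will: the broker publishes this on our behalf if the connection drops unexpectedly.
client.will_set("fleet/vehicle-001/status", payload="offline", qos=1, retain=True)

client.connect("broker.example.com", 1883, keepalive=30)  # placeholder broker host
client.loop_start()

# Retained QoS 1 message: new subscribers immediately receive the last known status.
client.publish("fleet/vehicle-001/status", payload="online", qos=1, retain=True)
```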

The HiveMQ MQTT broker meets the challenges of reliability and flexibility with world-class MQTT scalability to 200 million connections and more. HiveMQ’s full support of MQTT 5 is important for IoT use cases and simplifies implementation. Elastic clustering capabilities and a resilient software design make the broker a perfect fit for cloud infrastructures. The HiveMQ extension framework provides an open API that allows developers to create custom extensions for their specific infrastructure. With the help of the HiveMQ Control Center, administrators can monitor a fleet of connected vehicles. A dashboard provides the operations team with a complete real-time overview of the broker cluster and general system status. Administrators can use the Control Center to monitor real-time data between the vehicle and the cloud platform. In addition, HiveMQ is built to be ultra-flexible so it can be integrated with virtually every existing enterprise system, such as message processing, databases, security, monitoring or complex business logic.


HiveMQ Architecture Diagram

HiveMQ Enterprise Platform Overview

MongoDB, and especially the fully managed MongoDB Atlas service in the cloud, is a good fit for the data storage layer because it offers a complete platform. Highly available replica sets that scale horizontally via sharding are the basis for ingesting streamed data. MongoDB also provides workload isolation, so potentially load-intensive tasks can run on dedicated physical machines. This works ETL-free because these nodes are simply additional members of the replica sets. Atlas Data Lake offers an offline, queryable archive on object storage, including auto-archiving capabilities for your cold data. Atlas Search lets you run Lucene-based full-text searches on your data, with search indexes that you define flexibly according to your application needs. All of these capabilities are accessible through one single database endpoint. The advantage is that you only need to learn one query language to provide access to all the different forms of data to all the different consumers, no matter whether you develop mobile applications, microservices, or visualizations, perform advanced analytics, or do reporting on your data.
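As a small sketch of the "single endpoint, single query language" idea, the following Python snippet uses pymongo against a hypothetical Atlas cluster: the same connection serves an operational lookup and an analytical aggregation. The connection string, database, and field names are placeholders.

```python
from datetime import datetime, timedelta
from pymongo import MongoClient

# Placeholder Atlas connection string; one endpoint for all workloads.
client = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net/")
telemetry = client["iot"]["telemetry"]

# Operational query: the ten most recent readings of one device.
latest = telemetry.find({"deviceId": "truck-42"}).sort("ts", -1).limit(10)

# Analytical query: average speed per device over the last hour,
# expressed in the same query language via the aggregation pipeline.
one_hour_ago = datetime.utcnow() - timedelta(hours=1)
pipeline = [
    {"$match": {"ts": {"$gte": one_hour_ago}}},
    {"$group": {"_id": "$deviceId", "avgSpeed": {"$avg": "$speedKmh"}}},
]
for row in telemetry.aggregate(pipeline):
    print(row)
```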

MongoDB Atlas data platform overview

MongoDB Atlas data platform overview

Let’s take a look at MongoDB’s flexible JSON-based document model. In contrast to a relational representation, it is simple to define and group together attributes and events from your things. Below we see a direct comparison between a relational schema and a JSON representation of the same data. While the data needs to be spread across multiple tables in a relational database, all the information for a particular asset is represented in a single JSON document. Expensive joins are not needed, and documents for different types of devices can look different without any additional data modeling. In terms of physical data storage, the relational tables could be spread across different areas of the disk, causing multiple I/Os to obtain the data necessary to answer a query. With the MongoDB document model, a single disk I/O retrieves all the information needed to satisfy the query.
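To make the comparison concrete, here is a sketch, with purely illustrative field names and values, of how a single asset might be modeled as one document using the Python driver. Everything that a relational schema would split across several tables lives in one document that can be read with a single query.

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017/")["iot"]  # placeholder connection

# One document holds the asset, its sensors, and its latest state -
# data that a relational schema would split across several tables.
truck = {
    "_id": "truck-42",
    "model": "Heavy Hauler X",                       # illustrative values
    "registration": {"plate": "M-AB 1234", "country": "DE"},
    "sensors": [
        {"type": "gps", "unit": "deg"},
        {"type": "speed", "unit": "km/h"},
    ],
    "lastPosition": {"lat": 48.137, "lon": 11.575, "ts": "2020-06-08T10:15:00Z"},
}
db.trucks.insert_one(truck)

# A single lookup returns everything about the asset - no joins required.
db.trucks.find_one({"_id": "truck-42"})
```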

Comparison between relational data model and JSON-based document model

Comparison between relational data model and JSON-based document model

JSON is not just optimized for the storage of static data. As your needs change and you add more sensors with different attributes, or security policies and information about the actions of assets, data can easily be added without changes to your application. Being able to adapt to change quickly is even more important when handling large data volumes.
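Continuing the hypothetical truck document from the sketch above, adding a new sensor type or a security policy is just another update; no schema migration or application change is required:

```python
# Add a new sensor and a security policy to an existing document on the fly;
# documents for other trucks are unaffected and may keep their old shape.
db.trucks.update_one(
    {"_id": "truck-42"},
    {
        "$push": {"sensors": {"type": "tirePressure", "unit": "bar"}},
        "$set": {"securityPolicy": {"firmwareSigned": True}},
    },
)
```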

We’ve talked about the power and flexibility of JSON and there is a clear trend towards leveraging JSON and using JSON-based standards such as JSON-LD to represent linked data. This standard is also used for the Web of Things standard by the World Wide Web Consortium to describe digital representations. Since MongoDB is based on JSON, it is straightforward to adapt such a specification to your existing applications.

JSON-LD is an ideal way to represent data and metadata because it provides context to the stored data, provides globally unique identifiers, and maps the data to well-defined vocabularies. On a side note, JSON-LD is the representation behind the Google Knowledge Graph.
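As a rough sketch of what such a linked-data document could look like, here is a minimal JSON-LD-style document using the public schema.org vocabulary, stored as-is in MongoDB. The identifiers and the choice of vocabulary are illustrative only and not taken from the demo.

```python
# Minimal JSON-LD-style document: @context maps the fields to a public
# vocabulary (schema.org), and @id provides a globally unique identifier.
vehicle_jsonld = {
    "@context": "https://schema.org",
    "@type": "Vehicle",
    "@id": "urn:example:truck-42",
    "name": "Truck 42",
}
db.things.insert_one(vehicle_jsonld)  # reuses the connection from the sketch above
```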

Extensibility on the fly in MongoDB’s document model

Extensibility on the fly in MongoDB’s document model

What about time series data? IoT is all about time series. It is important to note that there are a number of time series databases on the market that cover every potential edge case, such as rolling up data on the fly from microseconds to years. While this is an amazing feature, it is unnecessary for most project requirements. MongoDB is a general-purpose database that handles time series data as well as many other use cases for you. What makes MongoDB a good choice for time series data? The flexible schema, scalability, and a modern query language that can not only handle simple queries but also perform complex analytics and integrate with leading machine learning and AI platforms such as Apache Spark or TensorFlow.

MongoDB has published a whitepaper with an in-depth analysis of how to design time series schemas in MongoDB, leveraging the bucketing schema design pattern. With bucketing, we optimize the storage of time series data by storing a whole slice of a time series per document rather than one document per event.
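A minimal sketch of the bucketing pattern, assuming hourly buckets per truck and hypothetical field names: each measurement is pushed into the current bucket document, and the upsert creates a fresh bucket once the previous one is full.

```python
from datetime import datetime

now = datetime.utcnow()
bucket_start = now.replace(minute=0, second=0, microsecond=0)  # hourly buckets

# Append the measurement to the current bucket; the upsert creates a new
# bucket document if none exists (or if the current one is already full).
db.telemetry_buckets.update_one(
    {
        "truckId": "truck-42",
        "bucketStart": bucket_start,
        "count": {"$lt": 3600},          # at most one measurement per second
    },
    {
        "$push": {"measurements": {"ts": now, "speedKmh": 72.5, "lat": 48.137, "lon": 11.575}},
        "$inc": {"count": 1},
    },
    upsert=True,
)
```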

Net net: both tools can be combined perfectly, on-premises as well as in the cloud, to build a scalable and reliable IoT platform. The two architectures below show the options via HiveMQ’s flexible extension mechanism. This is the recommended way to integrate directly with other tools, and it includes the necessary monitoring and scaling capabilities.

HiveMQ & MongoDB integration on-premises

HiveMQ & MongoDB integration on-premises

HiveMQ & MongoDB integration in the cloud

HiveMQ & MongoDB integration in the cloud

Seeing is Believing: Fleet Management Demo

We selected fleet management as the demo scenario because it is a very broad use case. Although our demo focuses on trucks, the concept applies to vehicles, forklifts, trains, goods, or anything else that forms a fleet and needs to be monitored and managed. Once the major challenges in this scenario, namely the distribution of the devices, unstable connections, and large scale, are overcome, companies see numerous benefits such as reduced shipping costs, fewer outages, less damage, and faster reaction times to changes, as well as carbon dioxide optimization and regulatory compliance for their fleet.

Overview Fleet Management Demo

Overview Fleet Management Demo

The engineers at HiveMQ have developed a truck simulator that extracts about 9,000 warehouses from OpenStreetMap and sends trucks on random routes between them. Each truck sends data every second about its current location, its speed, the current speed limit, and a flag indicating whether the driver is taking a rest.
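The actual simulator is part of the demo repository; the following is only a simplified sketch, with made-up broker host, topic, and field names, of what a truck publishing its state once per second over MQTT could look like in Python:

```python
import json
import random
import time

import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="truck-42")
client.connect("broker.example.com", 1883)           # placeholder broker host
client.loop_start()

while True:
    payload = {
        "truckId": "truck-42",
        "ts": int(time.time() * 1000),
        "lat": 48.137 + random.uniform(-0.5, 0.5),   # made-up position
        "lon": 11.575 + random.uniform(-0.5, 0.5),
        "speedKmh": round(random.uniform(0, 90), 1),
        "speedLimitKmh": 80,
        "driverResting": False,
    }
    client.publish("fleet/truck-42/telemetry", json.dumps(payload), qos=1)
    time.sleep(1)                                     # one message per second
```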

There are two subscribers to the data. The first subscriber displays the latest information on a map; the second subscriber transfers the data to MongoDB for analysis of historical data. We chose Python-based subscribers to give you quick insight into how to build the toolchain. For production deployments, we strongly recommend using HiveMQ extensions instead: they are much easier to maintain, scale, and monitor, as well as simple to set up for high availability.
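Here is a simplified sketch of the second, MongoDB-bound subscriber, assuming the topic layout from the publisher sketch above and a placeholder connection string; the real subscribers live in the demo repository.

```python
import json

import paho.mqtt.client as mqtt
from pymongo import MongoClient

# Placeholder MongoDB Atlas connection string.
mongo = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net/")
telemetry = mongo["fleet"]["telemetry"]

def on_connect(client, userdata, flags, rc):
    client.subscribe("fleet/+/telemetry", qos=1)     # all trucks

def on_message(client, userdata, msg):
    doc = json.loads(msg.payload)                    # payload is JSON
    doc["topic"] = msg.topic
    telemetry.insert_one(doc)                        # store for historical analysis

mqtt_client = mqtt.Client(client_id="mongodb-writer")
mqtt_client.on_connect = on_connect
mqtt_client.on_message = on_message
mqtt_client.connect("broker.example.com", 1883)     # placeholder broker host
mqtt_client.loop_forever()
```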

We built a web application that works as the fleet management dashboard combining real-time and historical data.

You can find the sources of the demo on GitHub. We appreciate your feedback!

Further Reading and Resources

We recently presented a webinar on this topic and have made the slides and the recording available. Please also check out the websites of HiveMQ and MongoDB.

I appreciate any comments, questions, and feedback about your IoT projects - whether they are still in planning, in development, or already in production.


About Christian Kurze

Christian has spent the last 10+ years working on data management and data integration in order to generate value out of data. At MongoDB, he works as a Principal Solutions Architect. Prior to joining MongoDB, he worked on data virtualization, data warehousing, and active metadata management. He holds a PhD in data warehouse automation. When not working, he loves to play the trumpet in a big band as well as traditional Bavarian wind music.
