MQTT Sparkplug Essentials Part 2 - Architecture
Written by Dominik Obermaier
Category: MQTT Sparkplug Essentials
Published: December 3, 2020
Welcome back to the Sparkplug Essentials Series. In the first part of the series we saw that Sparkplug is one of the most important protocols in the Industrial Internet of Things (IIoT) space. This part of the series will cover the fundamental architecture and the main principles it is built upon.
The old world — Industrial Spaghetti Architecture
The typical Industrial Internet of Things architecture works by connecting components with a poll / response approach. Applications poll data directly from PLCs, gateways or servers with protocols such as Modbus, Siemens S7 protocol or OPC-UA. While this works pretty well when there are only a few systems to integrate, with a larger number of components this will result in a huge spaghetti architecture that is very hard to maintain.
Systems are connected to each other point-to-point and thus the systems and data are hardwired to each other. Modern architectures require flexibility and a clear separation of concerns in an IIoT system. Many companies are looking for the adaptiveness, flexibility and ease of implementation found in IT landscapes but with the reliability, security and predictability required for OT landscapes. A new architecture is needed for this paradigm shift.
A new architecture for IIoT — Sparkplug & MQTT
This new IIoT architecture blueprint (depicted in Picture 2) adds several benefits compared to the traditional IIoT architecture:
- Decoupling of producers and consumers of data.
- Report by Exception (RBE), which saves bandwidth, memory and computational power on the producer and the consumers of data.
- One-to-many communication. Data only needs to be sent once and multiple receivers can receive the data.
- Flexibility: Devices and applications can be added and removed anytime without affecting the system as a whole.
- Data governance by having centralized permission and policy handling.
- Shopfloor-to-cloud connectivity by distributing data from edge to cloud and back.
Many companies were using MQTT in the past for creating decoupled architectures for their factories. This is no surprise as MQTT was originally invented with SCADA systems in mind. For IIoT use cases there are still pieces of the puzzle missing, though: An MQTT topic structure definition, MQTT state management and payload data definitions. Sparkplug adds these capabilities to MQTT and a Sparkplug architecture usually looks similar to Picture 3.
Principles and mechanisms
The Sparkplug architecture is elegant compared to legacy solutions because it builds upon the following principles and mechanisms:
- Pub / Sub: It uses MQTT as a publish / subscribe architecture for the underlying application transport layer and decouples producers and consumers of data. MQTT is based on push communication, which means data is distributed instantly to all interested parties.
- Report by Exception: Data and device state is only updated if it changes. This dramatically saves bandwidth and computing power for all components as only new and fresh data is sent over the network.
- Continuous Session Awareness: Sparkplug and MQTT have the concept of continuous session awareness. All clients that care about a device's online/offline state are informed in real time when that state changes. This concept also ensures that data queued while a device is offline is delivered once the device comes back online. With Sparkplug you get a correct real-time view of all devices, gateways and applications in the deployment.
- Death and Birth Certificates: Sparkplug introduces Death and Birth Certificates that are used for the management and discovery of device state. Birth certificates encapsulate information about the device and the data it can and will send. Death certificates use the MQTT Last Will and Testament mechanism to push device offline information to all interested applications.
- Persistent Connections: All devices, gateways and applications are by default always on and use persistent TCP connections.
- Auto discovery: Applications and devices can auto-discover what data (and the corresponding topics) will be sent by all participants in the Sparkplug deployment, as well as which of the connected devices are online or offline.
- Standardized payload definition: The Sparkplug data format for all messages is standardized and can be decoded and encoded by all communication participants.
- Standardized Topic Namespace: All Sparkplug participants use a common topic namespace. The topic namespace allows for fine-grained subscriptions of specific data and allows for dynamic addition and removal of participants.
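To make the topic namespace concrete: Sparkplug B topics follow the pattern `spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]`, with message types such as NBIRTH, NDEATH, DDATA and so on defined by the specification. The following minimal Python sketch (the function names `build_topic` and `parse_topic` are illustrative, not part of the spec) shows how such topics are built and taken apart:

```python
# Sparkplug B topic namespace sketch:
#   spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]

NAMESPACE = "spBv1.0"
# Node- and device-level message types defined by Sparkplug B
MESSAGE_TYPES = {"NBIRTH", "NDEATH", "DBIRTH", "DDEATH",
                 "NDATA", "DDATA", "NCMD", "DCMD"}

def build_topic(group_id, message_type, edge_node_id, device_id=None):
    """Assemble a Sparkplug B topic string; device_id is optional."""
    if message_type not in MESSAGE_TYPES:
        raise ValueError(f"unknown message type: {message_type}")
    parts = [NAMESPACE, group_id, message_type, edge_node_id]
    if device_id is not None:
        parts.append(device_id)
    return "/".join(parts)

def parse_topic(topic):
    """Split a Sparkplug B topic back into its components."""
    parts = topic.split("/")
    if len(parts) not in (4, 5) or parts[0] != NAMESPACE:
        raise ValueError(f"not a Sparkplug B topic: {topic}")
    return {"group_id": parts[1], "message_type": parts[2],
            "edge_node_id": parts[3],
            "device_id": parts[4] if len(parts) == 5 else None}
```

Because every participant uses this fixed structure, a consumer can subscribe, for example, to `spBv1.0/PlantA/DDATA/#` to receive all device data of one group without knowing the individual devices upfront.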
Sparkplug recognizes that there are different types of devices/sensors, gateways, applications and other software (as well as hardware) involved in any non-trivial IIoT scenario. Sparkplug defines the behavior and semantics for the different kinds of participants in the architecture.
A typical Sparkplug architecture consists of the following components:
- SCADA / IIoT Host
- Edge of Network (EoN) Nodes
- Devices / Sensors
- MQTT Application Nodes
- MQTT Broker
We will look at these components now in detail.
SCADA / IIoT Host
The SCADA / IIoT Host, sometimes also referred to as the Primary Application, is the supervising application responsible for monitoring and controlling the MQTT EoN nodes and their respective devices and sensors. Continuous session state awareness is key in an IIoT system: the current state of all participants (machines, devices, PLCs, sensors, gateways and applications) needs to be known at a central place at any given time. This central application managing the state (and acting upon state changes) is the SCADA / IIoT Host application. It is the central application that operators of the system use to manage and supervise the health of the overall system.
In contrast to most traditional SCADA system architectures, the SCADA / IIoT Host is NOT responsible for establishing and maintaining connections to the devices directly. In a Sparkplug architecture, devices, EoN nodes and the SCADA / IIoT Host connect to a central MQTT broker and publish and subscribe to data, which allows a report by exception (RBE) functionality to only update data when changed.
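As an illustration of how the host's own state is made visible to everyone else: per the Sparkplug B v2.2 specification, the primary host publishes a retained "ONLINE" message on a dedicated STATE topic right after connecting, and registers an MQTT Last Will that publishes a retained "OFFLINE" on the same topic, so the broker announces the host's death automatically. A minimal sketch, assuming a hypothetical helper naming (`birth_message`, `last_will`); the tuples stand in for the arguments a real MQTT client would take:

```python
# Primary host state announcement per Sparkplug B v2.2:
# retained ONLINE on connect, retained OFFLINE as Last Will.

def host_state_topic(host_id: str) -> str:
    """STATE topic for a primary host (SCADA / IIoT Host)."""
    return f"STATE/{host_id}"

def birth_message(host_id: str):
    # (topic, payload, qos, retain) - published right after CONNECT
    return (host_state_topic(host_id), "ONLINE", 1, True)

def last_will(host_id: str):
    # (topic, payload, qos, retain) - registered in the MQTT CONNECT packet,
    # so the broker publishes it if the host disconnects ungracefully
    return (host_state_topic(host_id), "OFFLINE", 1, True)
```

With a client library such as Eclipse Paho, the `last_will(...)` tuple would be handed to the client's will-registration call before connecting; because both messages are retained, any EoN node subscribing later immediately learns the current host state.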
Edge of Network (EoN) nodes
An Edge of Network (EoN) node plays one of the key roles in any Sparkplug system. EoN nodes usually provide physical or logical gateway functions for sensors and devices that don't implement Sparkplug themselves, letting them participate in the MQTT topic namespace. An EoN node manages its own state and session as well as those of the sensors and devices connected to it via protocols like OPC-UA, Modbus, proprietary PLC vendor protocols, HTTP, MQTT or local discrete I/O. The EoN node is responsible for managing the lifecycle and state of these connected devices and sensors, and for receiving and sending data on their behalf to the Sparkplug infrastructure. EoN nodes are a critical part of any Sparkplug infrastructure and are very often used to bridge legacy infrastructure to Sparkplug.
Devices / Sensors
Devices and sensors are the backbone of any industrial automation. A device is usually a physical or logical thing that sends and/or receives data over one or multiple industrial communication protocols. Usually, these industrial protocols are based on poll/response protocols. In the Sparkplug context, devices are connected to the Sparkplug infrastructure via EoN nodes. The EoN nodes bridge the publish / subscribe nature of MQTT Sparkplug to these poll / response protocols.
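This bridging can be sketched as a polling loop that reads device registers over a poll/response protocol and republishes only values that changed (report by exception). Everything here is illustrative: `read_register` and `publish` stand in for a real protocol stack (e.g. a Modbus client) and a real MQTT client, and the topic layout is simplified:

```python
# Hypothetical EoN-node bridge: poll a poll/response device, push
# changed values into the pub/sub world. Not from the Sparkplug spec.

def bridge_cycle(read_register, publish, registers, last_values):
    """One polling cycle: read each register, publish only changed values.

    read_register(reg) -> value   # poll/response read (e.g. Modbus)
    publish(topic, value)         # MQTT-style publish
    last_values                   # dict caching the last published values
    """
    for reg in registers:
        value = read_register(reg)            # poll the legacy device
        if last_values.get(reg) != value:     # report by exception
            last_values[reg] = value
            publish(f"register/{reg}", value) # push into pub/sub world
    return last_values
```

A real EoN node would run such a cycle continuously and map the changed values into Sparkplug DDATA messages, but the core idea is the same: the poll/response traffic stays local to the edge, and only changes cross the network.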
MQTT enabled sensors and devices
While most devices and sensors use protocols like Modbus, OPC-UA, Beckhoff ADS and other standardized or proprietary protocols, many vendors offer native MQTT functionality in their devices and sensors. If an MQTT enabled device already implements Sparkplug, i.e. provides the appropriate data format and topic structure, it can participate directly in the Sparkplug infrastructure. In this case, it identifies itself as an EoN node. If the MQTT enabled device supports only standard MQTT without Sparkplug awareness, it still needs to connect through an EoN node.
MQTT Application Node
MQTT Application Nodes are nodes participating in the Sparkplug communication that can produce and consume messages but are not the SCADA / IIoT Host. These are sometimes called secondary applications. Usually, these are software systems that provide dedicated functionality like MES (Manufacturing Execution Systems), historians and analytics. Many deployments also use customized software dedicated to specific use cases that needs to consume data produced by the other Sparkplug participants.
MQTT Broker
The MQTT broker is the central data distribution component. All Sparkplug enabled devices, EoN nodes, SCADA / IIoT Hosts and MQTT applications connect to the broker via MQTT. The broker is responsible for authentication, authorization, state management of the participants and data distribution between Sparkplug enabled systems. The MQTT broker needs to be 100% compliant with MQTT 3.1.1, as features like retained messages, Last Will and Testament and QoS are required.
Incomplete, non-compliant cloud brokers like AWS IoT and Azure IoT Hub don't work with Sparkplug, as they only support a subset of MQTT features and are technically not MQTT compliant. If the Sparkplug MQTT broker should reside in the cloud, AWS and Azure can still be used by hosting a fully compliant broker implementation like HiveMQ in the cloud.
It’s important to understand that in MQTT architectures the MQTT broker is a single point of failure: when the broker is offline, all communication fails and the whole Sparkplug system is down. While Sparkplug defines a rather complex and limited approach to high availability with multiple, separated brokers (more on this in a later blog post in this series), there is a better way to achieve high availability without any modification on the application and EoN side: brokers like HiveMQ allow for elastic clustering that provides high availability and resiliency with a cluster architecture. Even if one or more instances of the broker cluster fail (e.g. due to hardware problems), the system as a whole remains fully operable, resulting in zero downtime. The same holds for broker version updates, as rolling upgrades allow for zero-downtime upgrades. This is especially important for mission-critical 24/7 operations.
In this blog post we have discussed the basic components and building blocks of Sparkplug. In the next post of this series we are going to discuss the basic data flow in Sparkplug. Stay tuned!