Guarantee Message Integrity between Kafka and MQTT
Written by Margaretha Erber
Published: April 16, 2020
For many companies, Apache Kafka (open source, or via Confluent Platform and Confluent Cloud) is the central messaging and real-time streaming platform of choice. Kafka’s distributed architecture and publish/subscribe model make it ideal for moving real-time data between enterprise systems and applications. Because message content is opaque to Kafka, messages of any format can be stored in a Kafka cluster. When a service writes a new message to a topic in a Kafka cluster, the correctness of the message is not verified. This lack of verification is usually not a problem when you begin using Kafka: initially, your systems are designed to be compatible with each other. Over time, however, systems evolve and message formats change. A changed message format can interrupt data flows and, in turn, leave downstream services unable to process messages at all. If your use case relies on real-time data processing, any change in message formats becomes a major challenge.
A schema registry makes it possible to guarantee correct messaging between the clients of a Kafka cluster. Before a message is sent to Kafka, it is checked against a schema stored in the schema registry. If the message is not compatible with the schema, it is not sent to the Kafka cluster. Consumers can therefore assume that every message they read conforms to a schema they know and can be processed without problems.
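The producer-side check described above can be sketched as follows. This is an illustrative toy, not the schema registry's actual API: the schema format, field names, and the `send_if_valid` helper are all assumptions made for the example.

```python
# Hypothetical sketch of the check a schema registry enables: a message is
# validated against the registered schema before it is sent, and an
# incompatible message never reaches the cluster.

SENSOR_SCHEMA = {
    # required field name -> expected type (a toy schema format)
    "required": {"device_id": str, "temperature": float},
}

def validate(message: dict, schema: dict) -> bool:
    """True only if every required field is present with the right type."""
    for field, field_type in schema["required"].items():
        if field not in message or not isinstance(message[field], field_type):
            return False
    return True

def send_if_valid(message: dict, sent: list) -> bool:
    """Stand-in for a Kafka producer: append to `sent` only when valid."""
    if not validate(message, SENSOR_SCHEMA):
        return False  # rejected before it ever reaches the cluster
    sent.append(message)
    return True

sent = []
assert send_if_valid({"device_id": "s1", "temperature": 21.5}, sent) is True
assert send_if_valid({"device_id": "s1"}, sent) is False  # missing field
assert len(sent) == 1  # only the valid message was "sent"
```

Because invalid messages are rejected at the producer, consumers never have to defend against malformed payloads, which is the guarantee the paragraph above describes.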
HiveMQ’s Enterprise Extension for Kafka 1.1 already supports a local schema registry for reading messages from a Kafka cluster. But what if a schema needs to be changed? What if a new service joins the system and wants to add a completely new schema to a topic that already uses an existing schema? What if multiple versions of the same schema need to be supported simultaneously?
Confluent’s Schema Registry provides this kind of flexibility through schema evolution with multiple compatibility levels. Before a new message is sent to the Kafka cluster, a new schema can be registered with the Confluent Schema Registry; for example, a schema that adds new fields with default values to remain backward compatible. Services that read messages from the Kafka cluster can fetch the newly registered schema from the Confluent Schema Registry and work with the new version of the schema from then on.
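The backward-compatibility idea can be illustrated with a small sketch. This is not the Confluent API or the Avro wire format; the toy schema structure and `read_with_schema` helper are assumptions made purely to show why defaults make old messages readable under a new schema.

```python
# Illustrative sketch of backward-compatible schema evolution: v2 adds a
# "unit" field with a default, so a message written under v1 can still be
# decoded by a reader that uses v2.

SCHEMA_V1 = {"fields": {"device_id": None, "temperature": None}}
SCHEMA_V2 = {"fields": {"device_id": None, "temperature": None,
                        "unit": "celsius"}}  # new field with a default

def read_with_schema(message: dict, schema: dict) -> dict:
    """Decode a message, filling absent fields from the schema defaults."""
    decoded = {}
    for field, default in schema["fields"].items():
        if field in message:
            decoded[field] = message[field]
        elif default is not None:
            decoded[field] = default  # default bridges old and new schema
        else:
            raise ValueError(f"missing required field: {field}")
    return decoded

old_message = {"device_id": "s1", "temperature": 21.5}  # written under v1
decoded = read_with_schema(old_message, SCHEMA_V2)      # read under v2
assert decoded["unit"] == "celsius"  # the default fills the gap
```

A field without a default, by contrast, would make the new schema incompatible with already-stored messages, which is exactly what the compatibility levels in a schema registry are designed to catch at registration time.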
Message Transformation with Confluent Schema Registry
The new 1.2 release of the HiveMQ Enterprise Extension for Kafka supports the Confluent Schema Registry in Kafka-to-MQTT topic mappings. This allows HiveMQ to take advantage of the Confluent Schema Registry to support schema evolution as part of the process for reliably converting Kafka messages to MQTT. Relevant parts of the Kafka message can be extracted using the corresponding schema and used, for example, as the payload or topic of the MQTT message.
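Conceptually, such a mapping takes an already schema-decoded Kafka record and builds an MQTT topic and payload from selected fields. The sketch below is only an illustration of that idea; the field names, topic layout, and `kafka_record_to_mqtt` function are assumptions, not the extension's actual configuration or API.

```python
# Hedged sketch of a Kafka-to-MQTT mapping: fields extracted from a decoded
# Kafka record become the MQTT topic and payload.

import json

def kafka_record_to_mqtt(record: dict) -> tuple[str, bytes]:
    """Build an MQTT (topic, payload) pair from selected record fields."""
    # One field is routed into the topic, another into the payload.
    topic = f"sensors/{record['device_id']}/temperature"
    payload = json.dumps({"value": record["temperature"]}).encode()
    return topic, payload

topic, payload = kafka_record_to_mqtt({"device_id": "s1", "temperature": 21.5})
assert topic == "sensors/s1/temperature"
assert json.loads(payload) == {"value": 21.5}
```

The schema is what makes this extraction safe: because the record is known to conform to it, the mapping can rely on the fields it references actually being present.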
Message Transformation with JSON Schema
The HiveMQ Enterprise Extension for Kafka 1.2 now supports message transformation for both Avro and JSON messages by accepting JSON schemas in the local schema registry. The schema tells the extension exactly which fields are available in the processed message and allows it to use these fields to generate a corresponding MQTT message.
RBAC for the Kafka Page in HiveMQ Control Center
The new version of the HiveMQ Enterprise Extension for Kafka adds HiveMQ’s role-based access control to the Kafka page of the HiveMQ Control Center. Many of our customers must restrict access to certain data for privacy and legal reasons. The access control is registered automatically when the extension starts and can be used from then on to fine-tune access management for your HiveMQ Control Center users.
The new HiveMQ Enterprise Extension for Kafka 1.2 release is now available for download from the HiveMQ Marketplace. To see more details about the new features and to get deeper insights into how to use them, visit our documentation. As always, if you have questions, don’t hesitate to contact us. We are happy to help.