
HiveMQ is the Missing Piece between MQTT and a SQL Database

by HiveMQ Team
15 min read

The MQTT protocol is awesome when it comes to Machine-to-Machine (M2M) communication. Thanks to its publish/subscribe pattern, it offers great scalability, even with thousands of connected devices.

[Image: HiveMQ MQTT Broker Diagram]

The picture above shows a classic M2M landscape with a few publishers and a few subscribers.

From the perspective of a provider of M2M services (which you are as soon as you host your own broker, e.g. for home automation or for your own applications), you typically have additional requirements that create added value for you or your customers. So let's say you want to store all MQTT messages published to the broker in a SQL database for later analysis.

The concrete use case

In our concrete use case we want to store every message in a SQL database, say MySQL/MariaDB. The following simple database schema will be used:

[Image: Database Schema]
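The original schema diagram is not reproduced here. Based on the INSERT statement used later in this post, a minimal sketch of the table could look like the following; the id column and the column types are assumptions:

    CREATE TABLE `Messages` (
      `id` INT NOT NULL AUTO_INCREMENT,
      `message` BLOB,
      `topic` VARCHAR(255),
      `quality_of_service` INT,
      PRIMARY KEY (`id`)
    );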

Implementation with a wildcard subscriber

The easiest way to achieve this is to add an additional client which subscribes to the wildcard topic (# in MQTT). This ensures that the client receives all messages distributed by the broker. The client can then persist each message to the MySQL database as it arrives.

This would look like this:

[Image: HiveMQ MQTT Broker Diagram with Database]

We chose to implement the client with the Eclipse Paho library. For brevity, only the relevant callback invoked on message arrival is shown here. The full source code can be found here.

    private static final String SQL_INSERT = "INSERT INTO `Messages` (`message`,`topic`,`quality_of_service`) VALUES (?,?,?)";

    public void messageArrived(MqttTopic topic, MqttMessage message) throws Exception {

        //Let's assume we have a prepared statement created from SQL_INSERT.
        try {
            statement.setBytes(1, message.getPayload());
            statement.setString(2, topic.getName());
            statement.setInt(3, message.getQos());

            //Ok, let's persist to the database
            statement.executeUpdate();
        } catch (SQLException e) {
            log.error("Error while inserting", e);
        }
    }


So we essentially just implemented the messageArrived method, which is called every time a new message arrives. Then we just persist it with a plain ol’ JDBC Prepared Statement. That’s all.
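For context, here is a minimal sketch of how such a wildcard subscriber could be wired up with the Eclipse Paho Java client. The broker URL, client identifier, and the callback class name PersistingCallback are assumptions, and newer Paho versions pass the topic to messageArrived as a String rather than an MqttTopic.

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;

    public class WildcardSubscriber {

        public static void main(String[] args) throws MqttException {
            //Broker URL and client id are placeholders
            MqttClient client = new MqttClient("tcp://localhost:1883", "wildcard-subscriber");

            //Assumed name for the MqttCallback implementation containing the messageArrived method above
            client.setCallback(new PersistingCallback());

            client.connect();

            //The wildcard topic # matches every topic, so the client receives all messages
            client.subscribe("#");
        }
    }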

Gotchas and Limitations

This approach works well in some scenarios but has some downsides. Among the challenges we face with it are:

  • What happens if the wildcard subscriber disconnects? What happens if it reconnects?

  • Isn’t the wildcard subscriber some kind of bottleneck?

  • Do we need different wildcard subscribers when we want to integrate e.g. a second database?

  • Is there a way to ensure that each message will be sent only once?

Let’s look into these questions in more detail.

What happens on subscriber disconnect or reconnect?

A tough problem is how to handle disconnects of the wildcard subscriber. In a nutshell: any message the broker distributes while the wildcard subscriber is disconnected will never reach it. In our case that means we cannot persist these messages to the database.

Another challenge is retained messages. A retained message is stored on the broker and delivered whenever a client subscribes to the matching topic. In our case these messages should not be written to the database, because we most likely already received them with a “normal” publish. To avoid this shortcoming, the wildcard subscriber can connect with clean session = false, so the broker remembers its subscriptions; after a reconnect the client does not need to resubscribe and therefore does not receive the retained messages again.
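A minimal Paho sketch of connecting with such a persistent session could look like this (broker URL and client identifier are again placeholders):

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
    import org.eclipse.paho.client.mqttv3.MqttException;

    public class PersistentSessionSubscriber {

        public static void main(String[] args) throws MqttException {
            MqttClient client = new MqttClient("tcp://localhost:1883", "wildcard-subscriber");

            MqttConnectOptions options = new MqttConnectOptions();
            //clean session = false: the broker keeps the session and its subscriptions
            options.setCleanSession(false);

            client.connect(options);

            //Only needed on the very first connect; after a reconnect the subscription
            //still exists on the broker, so retained messages are not delivered again
            client.subscribe("#");
        }
    }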

Isn’t the wildcard subscriber some kind of bottleneck?

Short answer: Yes, most likely.

Slightly longer answer: It depends. In scenarios with very low message throughput, a wildcard subscriber poses no problem from a performance perspective. When you are dealing with thousands, tens of thousands, or even hundreds of thousands of publishing clients, there is a chance that the client library cannot handle the load or will degrade the overall throughput. Another key factor is that all messages from the broker to the wildcard subscriber have to go over the network, which can result in unnecessary traffic. It is of course possible to run the subscribing client on the same machine as the broker. This solves the traffic problem, but then the broker and the subscriber share the same system resources and both applications carry the messaging overhead, which is not optimal. This is even more serious in a clustered broker environment.

Do we need different wildcard subscribers when we want to integrate a second database?

It depends on your use case and your expected message throughput. If, for example, all your writes to the different databases are blocking, you will probably hit the bottleneck problem earlier than with just one database. To distribute the database load, it can be a smart idea to use separate subscribers for different databases. If your writes are non-blocking, one wildcard subscriber can handle this.

Is there a way to ensure that each message will be only sent once?

This can only be achieved when all publishers publish with MQTT Quality of Service 2, which guarantees that each message is delivered to the broker exactly once. If the subscriber also subscribes with Quality of Service 2, every message is guaranteed to arrive at the subscriber exactly once. This approach has two problems: it is unlikely that you can ensure that all publishers send with Quality of Service 2, and Quality of Service 2 makes it much harder to scale.
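To illustrate, a minimal Paho sketch with QoS 2 on both sides; broker URL, client identifiers, topic, and payload are example values:

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;

    public class QoS2Example {

        public static void main(String[] args) throws MqttException {
            MqttClient subscriber = new MqttClient("tcp://localhost:1883", "qos2-subscriber");
            subscriber.connect();
            //Subscribe with QoS 2 so messages are received exactly once
            subscriber.subscribe("sensors/#", 2);

            MqttClient publisher = new MqttClient("tcp://localhost:1883", "qos2-publisher");
            publisher.connect();
            //Publish with QoS 2 so the message is delivered to the broker exactly once
            publisher.publish("sensors/temperature", "21.5".getBytes(), 2, false);
        }
    }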

Implementation with HiveMQ's plugin system

To overcome these problems, we designed the HiveMQ MQTT broker with a powerful plugin system. This plugin system lets you hook custom code into HiveMQ to extend the broker with additional functionality, enabling deep integration into existing systems and elegant, simple implementations of individual use cases. Let us see how the SQL integration can be solved with the HiveMQ plugin system.

[Image: HiveMQ MQTT Broker Diagram with Plugin]

In this scenario, the HiveMQ plugin system takes care of persisting the messages. No subscriber (and no publisher) is aware of the persistence mechanism, which essentially solves all the problems we identified. But let us first look at how this is implemented:


public class MessageStoreCallback implements OnPublishReceivedCallback {

    private static final String SQL_INSERT = "INSERT INTO `Messages` (`message`,`topic`,`quality_of_service`) VALUES (?,?,?)";

    private final BoneCP connectionPool;

    public MessageStoreCallback(BoneCP connectionPool) {
        this.connectionPool = connectionPool;
    }

    public void onPublishReceived(PUBLISH publish, String clientId) throws OnPublishReceivedException {

        try {
            final Connection connection = connectionPool.getConnection();

            final PreparedStatement preparedStatement = connection.prepareStatement(SQL_INSERT);
            preparedStatement.setBytes(1, publish.getPayload());
            preparedStatement.setString(2, publish.getTopic());
            preparedStatement.setInt(3, publish.getQoS().getQosNumber());

            //Persist the message and return the connection to the pool
            preparedStatement.executeUpdate();
            preparedStatement.close();
            connection.close();

        } catch (SQLException e) {
            throw new OnPublishReceivedException(e, false); //We do not disconnect the publishing client here
        }
    }
}

Looking at the code, we can see that it is almost identical to the wildcard subscriber implementation. The difference is that we get much more information about the publish message than before. We can access all the attributes a publish message consists of (such as retained, duplicate, etc.) and we get information about the client which published the message. This enables finer control over what we want to persist (what about only persisting messages from a specific client?). Additionally, it is possible to disconnect a client when something wrong or illegal was published. This can be achieved with the OnPublishReceivedException.
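As a sketch of this finer control, reusing the callback from the listing above: the client identifier and the topic rule are purely hypothetical, and we assume (as the comment in the listing suggests) that the boolean passed to OnPublishReceivedException controls whether the publishing client is disconnected.

    public void onPublishReceived(PUBLISH publish, String clientId) throws OnPublishReceivedException {

        //Hypothetical rule: only persist messages from one specific client
        if (!"sensor-gateway-1".equals(clientId)) {
            return;
        }

        //Hypothetical rule: disconnect clients that publish to a forbidden topic
        if (publish.getTopic().startsWith("forbidden/")) {
            throw new OnPublishReceivedException(new Exception("Illegal topic"), true);
        }

        //... persist the message as shown in the listing above ...
    }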

For better performance we inject a BoneCP connection pool to obtain the database connections. Since all plugins can hook into HiveMQ and reuse its components via dependency injection, optimal testability of plugins is ensured. It is of course possible to write plugins without dependency injection, but it is not recommended.
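The plugin relies on HiveMQ's dependency injection to supply the pool; purely as an illustration, wiring a BoneCP pool by hand might look like the following sketch, where the JDBC URL, credentials, and schema name are placeholders.

    import java.sql.SQLException;

    import com.jolbox.bonecp.BoneCP;
    import com.jolbox.bonecp.BoneCPConfig;

    public class ConnectionPoolSetup {

        public static MessageStoreCallback createCallback() throws SQLException {
            BoneCPConfig config = new BoneCPConfig();
            //Placeholder connection details for a local MySQL/MariaDB instance
            config.setJdbcUrl("jdbc:mysql://localhost:3306/mqtt");
            config.setUsername("hivemq");
            config.setPassword("secret");

            BoneCP connectionPool = new BoneCP(config);
            return new MessageStoreCallback(connectionPool);
        }
    }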

Key benefits

All the problems we identified with the wildcard subscriber are solved by the plugin system:

  • No messages are lost since the broker takes care of the message handling.

  • There is no bottleneck. All plugin executions are completely asynchronous and do not slow down the broker.

  • We can choose if we write different plugins for different use cases (e.g. a second database) but we do not need to.

  • Every plugin execution for a message will only occur once, so we do not have to care about duplicate handling.

These benefits also hold in a clustered HiveMQ environment. With the HiveMQ plugin system we are not only able to write MQTT messages to a MySQL database in an efficient way, we can also use the same mechanism to integrate HiveMQ into an existing software landscape. It is easy to integrate an Enterprise Service Bus (ESB), call REST APIs, integrate your billing system, or even publish new MQTT messages when specific messages occur.


We discussed two ways to store MQTT messages in an existing SQL database. We looked at the downsides of using a wildcard subscriber MQTT client and why this approach does not scale well. We learned that the HiveMQ plugin system solves these problems and allows you to deeply integrate the HiveMQ broker with existing systems (which happens to be a SQL database in our example).

More information about the plugin system will follow soon! Don't hesitate to contact us if you want to learn more about how HiveMQ and its plugin system can help you.

As a final note, it is worth mentioning that a SQL database can quickly become a bottleneck under high message throughput. We recommend using a NoSQL store for such tasks, but this will be discussed in a follow-up blog post.

