HiveMQ Extension SDK Services

HiveMQ services provide a convenient way for extensions to interact with the HiveMQ core. You can access HiveMQ Extension SDK services through the Services class.

The HiveMQ Community Extension SDK provides the following services:

Table 1. Available Community Extension SDK Services
Service Description

Client Service

Allows extensions to get client session information, disconnect clients, and remove client sessions.

Subscription Store

Allows extensions to get, remove, and add subscriptions for specific clients.

Retained Message Store

Allows extensions to get, remove, and add retained messages for specific topics, or delete all retained messages at once.

Publish Service

Allows extensions to send PUBLISH messages.

Managed Extension Executor

Allows extensions to use a HiveMQ-managed executor service for non-blocking operations.

Admin Service

Allows extensions to get information about the broker instance.

Cluster Service

Allows extensions to dynamically discover HiveMQ cluster nodes.

The HiveMQ Enterprise Extension SDK offers these additional HiveMQ services:

Table 2. Additional Enterprise Extension SDK Services
Service Description

Consumer Service

Allows extensions to consume messages from a set of specified topics and map the messages to other topics.

Session Attribute Store

Allows extensions to get, remove, and add session attributes.

Extension Messaging Service

Allows extensions to send and receive non-MQTT messages for internal cluster communication.

Control Center Service

Allows extensions to add custom views with different layouts to the HiveMQ Control Center.

REST Service

Allows extensions to register a custom REST API application with the HiveMQ REST API.

Client Event Service

Allows extensions to interact with the events of specific clients over a defined timeframe.

HiveMQ Community Extension SDK Services

Client Service

The Client Service allows extensions to gather information about clients:

  • Online status

  • Client identifier

  • Session expiry interval

Extensions can use the Client Service to do the following tasks:

  • Query the connection status of a client

  • Get session information for a client

  • Forcibly disconnect a client

  • Invalidate a client session

  • Iterate the session information of all clients

For more information, see the Client Service JavaDoc.

Access the Client Service

final ClientService clientService = Services.clientService();

Query Client Connection

This example shows how to get information about the online status of a client:

The isClientConnected method returns true for online clients and false for offline clients.

CompletableFuture<Boolean> connectedFuture = clientService.isClientConnected("my-client-id");
Full Example Code
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final String clientId = "client-123";
    final ClientService clientService = Services.clientService();
    CompletableFuture<Boolean> connectedFuture = clientService.isClientConnected(clientId);

    connectedFuture.whenComplete(new BiConsumer<Boolean, Throwable>() {
        @Override
        public void accept(Boolean connected, Throwable throwable) {
            if(throwable == null) {
                System.out.println("Client with id {" + clientId + "} is connected: " + connected);
            } else {
                //please use more sophisticated logging
                throwable.printStackTrace();
            }
        }
    });
}

...

Get Session Information

This example shows how to get all session information for a client with a specific client ID.

The getSession method returns an Optional of a SessionInformation object.

If no session is found, the object is empty. Otherwise, it contains the following information:

  • The online connection status of the client

  • The session expiry interval of the client

  • The client identifier that owns the session

CompletableFuture<Optional<SessionInformation>> sessionFuture = clientService.getSession("my-client-id");
Full Example Code
...
@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final String clientId = "my-client-id";
    final ClientService clientService = Services.clientService();
    CompletableFuture<Optional<SessionInformation>> sessionFuture = clientService.getSession(clientId);

    sessionFuture.whenComplete(new BiConsumer<Optional<SessionInformation>, Throwable>() {
        @Override
        public void accept(Optional<SessionInformation> sessionInformationOptional, Throwable throwable) {
            if(throwable == null) {

                if(sessionInformationOptional.isPresent()){
                    SessionInformation information = sessionInformationOptional.get();
                    System.out.println("Session Found");
                    System.out.println("ID: " + information.getClientIdentifier());
                    System.out.println("Connected: " + information.isConnected());
                    System.out.println("Session Expiry Interval: " + information.getSessionExpiryInterval());
                } else {
                    System.out.println("No session found for client id: " + clientId);
                }

            } else {
                //please use more sophisticated logging
                throwable.printStackTrace();
            }
        }
    });

}

...

Disconnect Client

This example shows how to forcibly disconnect a client with a specific client ID. You can also choose not to send the client's optional last-will message on this disconnect.

The disconnectClient method returns true when an online client is disconnected. Otherwise, the method returns false.

The following examples show you how to disconnect the client with and without sending the last-will message and with disconnect reason information:

Disconnect client and send Will message
clientService.disconnectClient("my-client-id");
Disconnect client and prevent Will message
clientService.disconnectClient("my-client-id", true);
Disconnect client and provide ReasonCode and ReasonString
clientService.disconnectClient("my-client-id", true, DisconnectReasonCode.NORMAL_DISCONNECTION, "my-reason-string");
Full Example Code
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final String clientId = "client-123";
    final ClientService clientService = Services.clientService();
    CompletableFuture<Boolean> disconnectFuture = clientService.disconnectClient(clientId, true);

    disconnectFuture.whenComplete(new BiConsumer<Boolean, Throwable>() {
        @Override
        public void accept(Boolean disconnected, Throwable throwable) {
            if(throwable == null) {
                if(disconnected){
                    System.out.println("Client was successfully disconnected and no Will message was sent");
                } else {
                    System.out.println("Client not found");
                }
            } else {
                //please use more sophisticated logging
                throwable.printStackTrace();
            }
        }
    });

}

...
Use of a deprecated DisconnectReasonCode throws an exception. Deprecated codes include: CLIENT_IDENTIFIER_NOT_VALID, DISCONNECT_WITH_WILL_MESSAGE, and BAD_AUTHENTICATION_METHOD.
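To avoid the exception, you can screen out the deprecated codes before the call. The following standalone sketch is an illustration only: the DisconnectReasonGuard helper is hypothetical, and it keeps the deprecated names in a plain Set of strings instead of modeling the SDK's DisconnectReasonCode enum.

```java
import java.util.Set;

public class DisconnectReasonGuard {

    // Names of the deprecated reason codes listed in the note above.
    // Hypothetical helper for illustration; not part of the HiveMQ Extension SDK.
    private static final Set<String> DEPRECATED_CODES = Set.of(
            "CLIENT_IDENTIFIER_NOT_VALID",
            "DISCONNECT_WITH_WILL_MESSAGE",
            "BAD_AUTHENTICATION_METHOD");

    // Returns true if the reason code name is safe to pass to disconnectClient.
    public static boolean isAllowed(final String reasonCodeName) {
        return !DEPRECATED_CODES.contains(reasonCodeName);
    }
}
```

In an extension, a check such as `isAllowed(reasonCode.name())` before calling disconnectClient would prevent the exception at the cost of silently skipping deprecated codes.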

Invalidate Client Session

This example shows how to invalidate a client session.

Invalidation of a client session forcibly disconnects an online client and sends the optional last-will message of the client. The session information of the client is removed and cannot be restored.

The invalidateSession method returns true when an online client is disconnected. Otherwise, the method returns false.

clientService.invalidateSession("my-client-id");
Full Example Code
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final String clientId = "client-123";
    final ClientService clientService = Services.clientService();
    CompletableFuture<Boolean> invalidateSessionFuture = clientService.invalidateSession(clientId);

    invalidateSessionFuture.whenComplete(new BiConsumer<Boolean, Throwable>() {
        @Override
        public void accept(Boolean disconnected, Throwable throwable) {
            if(throwable == null) {
                if(disconnected){
                    System.out.println("Client was disconnected");
                    System.out.println("Will message was sent");
                    System.out.println("Client session was removed");
                } else {
                    System.out.println("Client was offline");
                    System.out.println("Client session was removed");
                }
            } else {
                if(throwable instanceof NoSuchClientIdException){
                    System.out.println("Client not found");
                }
                //please use more sophisticated logging
                throwable.printStackTrace();
            }
        }
    });

}

...

Iterate All Clients

You can use the Client Service to iterate the session information of all clients. This iteration includes all currently connected clients and all disconnected clients with sessions that are not yet expired.

To use iteration over all clients, every node in the HiveMQ cluster must run HiveMQ version 4.2.0 or higher.

The callback passed to the iterateAllClients method is called once for each client. By default, the Managed Extension Executor Service executes the callback. However, you can also pass your own executor. Session information is not provided to the callback in any particular order.

clientService.iterateAllClients(new IterationCallback<SessionInformation>() {
    @Override
    public void iterate(IterationContext context, SessionInformation sessionInformation) {
        // this callback is called for every client with its session information
    }
});
In large-scale deployments, iteration over all clients can be a very expensive operation. Do not call the method in short time intervals.
Example of searching a client with a pattern
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final ClientService clientService = Services.clientService();

    // this is the default executor but used as executor argument for demonstration purposes
    final Executor executor = Services.extensionExecutorService();

    final Pattern pattern = Pattern.compile("client-[1-9]+");

    CompletableFuture<Void> iterationFuture = clientService.iterateAllClients(
            new IterationCallback<SessionInformation>() {
                @Override
                public void iterate(IterationContext context, SessionInformation sessionInformation) {
                    final String clientIdentifier = sessionInformation.getClientIdentifier();
                    if (pattern.matcher(clientIdentifier).matches()) {
                        System.out.println("Found client for pattern " + clientIdentifier);
                        // abort the iteration if you are not interested in the remaining information as this saves resources
                        context.abortIteration();
                    }
                }
            }, executor);

    iterationFuture.whenComplete((ignored, throwable) -> {
        if (throwable == null) {
            System.out.println("Iterated all clients");
        } else {
            throwable.printStackTrace(); // please use more sophisticated logging
        }
    });
}

...
Example for counting all connected clients
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final ClientService clientService = Services.clientService();

    final AtomicInteger counter = new AtomicInteger();

    CompletableFuture<Void> iterationFuture = clientService.iterateAllClients(
            new IterationCallback<SessionInformation>() {
                @Override
                public void iterate(IterationContext context, SessionInformation sessionInformation) {
                    if (sessionInformation.isConnected()) {
                        counter.incrementAndGet();
                    }
                }
            });

    iterationFuture.whenComplete((ignored, throwable) -> {
        if (throwable == null) {
            System.out.println("Connected clients: " + counter.get());
        } else {
            throwable.printStackTrace(); // please use more sophisticated logging
        }
    });
}

...
If the topology of the cluster changes during the iteration, the iteration is canceled. Topology changes occur, for example, when the network splits or a node leaves or joins the cluster.

The following example shows how topology changes can be handled:

Example handling of cluster topology changes during iteration
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    iterate(0);
}

public void iterate(final int attempts) {

    final AtomicInteger counter = new AtomicInteger();

    CompletableFuture<Void> iterationFuture = Services.clientService().iterateAllClients(
            new IterationCallback<SessionInformation>() {
                @Override
                public void iterate(IterationContext context, SessionInformation sessionInformation) {
                    if (sessionInformation.isConnected()) {
                        counter.incrementAndGet();
                    }
                }
            });

    iterationFuture.whenComplete((ignored, throwable) -> {
        if (throwable == null) {
            System.out.println("Connected clients: " + counter.get());

        // in case the cluster topology changes during iteration, an IterationFailedException is thrown
        } else if (throwable instanceof IterationFailedException) {
            // only retry 3 times
            if (attempts < 3) {
                final int newAttemptCount = attempts + 1;
                Services.extensionExecutorService().schedule(() ->
                        iterate(newAttemptCount), newAttemptCount * 10, TimeUnit.SECONDS); // schedule retry with delay in case topology change is not over, else we would get another IterationFailedException
            } else {
                System.out.println("Could not fully iterate all clients.");
            }
        } else {
            throwable.printStackTrace(); // please use more sophisticated logging
        }
    });
}

...

Subscription Store

The Subscription Store allows extensions to do the following:

  • Add single or multiple subscriptions for a specific client

  • Remove single or multiple subscriptions from a specific client

  • Get all subscriptions of a specific client

  • Iterate all subscriptions, all subscribers with a specified topic filter, or all subscribers with subscriptions that match a specified topic

For more information, see the Subscription Store JavaDoc.

Access the Subscription Store

final SubscriptionStore store = Services.subscriptionStore();

Add Subscription

This example shows how to add a subscription to a specific client with the Subscription Store.

TopicSubscription subscription = Builders.topicSubscription()
            .topicFilter(topic)
            .qos(Qos.AT_MOST_ONCE)
            .build();

Services.subscriptionStore().addSubscription("test-client", subscription);
Full Example Code
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final String topic = "topic";
    final String clientId = "test-client";

    TopicSubscriptionBuilder subscriptionBuilder = Builders.topicSubscription()
            .topicFilter(topic)
            .noLocal(false)
            .retainAsPublished(true)
            .qos(Qos.AT_MOST_ONCE)
            .subscriptionIdentifier(1);


    CompletableFuture<Void> addFuture = Services.subscriptionStore().addSubscription(clientId, subscriptionBuilder.build());

    addFuture.whenComplete(new BiConsumer<Void, Throwable>() {
        @Override
        public void accept(Void aVoid, Throwable throwable) {

            if(throwable != null){
                throwable.printStackTrace();
                return;
            }

            System.out.println("Successfully added subscription for topic: " + topic + " | client: " + clientId);
        }
    });
}

...
When you use the HiveMQ extension system or the HiveMQ Control Center to add a subscription for a client, the retained messages on the topics in the subscription are not published to the client.

Add Multiple Subscriptions

This example shows how to add multiple subscriptions to a specific client with the Subscription Store.

final Set<TopicSubscription> topicSet = new HashSet<>();

topicSet.add(Builders.topicSubscription().topicFilter("$share/group/topic1").build());
topicSet.add(Builders.topicSubscription().topicFilter("topic2").build());
topicSet.add(Builders.topicSubscription().topicFilter("topic3").build());

Services.subscriptionStore().addSubscriptions("test-client", topicSet);
Full Example Code
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final SubscriptionStore subscriptionStore = Services.subscriptionStore();

    final String clientID = "test-client";

    final Set<TopicSubscription> topicSet = new HashSet<>();

    topicSet.add(Builders.topicSubscription().topicFilter("$share/group/topic1").build());
    topicSet.add(Builders.topicSubscription().topicFilter("topic2").build());
    topicSet.add(Builders.topicSubscription().topicFilter("topic3").build());

    final CompletableFuture<Void> addFuture = subscriptionStore.addSubscriptions(clientID, topicSet);

    addFuture.whenComplete(new BiConsumer<Void, Throwable>() {
        @Override
        public void accept(final Void result, final Throwable throwable) {

            if(throwable != null){
                throwable.printStackTrace();
                return;
            }

            System.out.println("Successfully added subscriptions to client: " + clientID);

        }
    });
}

...

Remove Subscription

This example shows how to remove a subscription from a specific client with the Subscription Store.

Services.subscriptionStore().removeSubscription("test-client", "topic/to/remove");
Full Example Code
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final String topic = "topic";
    final String clientId = "test-client";

    CompletableFuture<Void> removeFuture = Services.subscriptionStore().removeSubscription(clientId, topic);

    removeFuture.whenComplete(new BiConsumer<Void, Throwable>() {
        @Override
        public void accept(Void aVoid, Throwable throwable) {

            if(throwable != null){
                throwable.printStackTrace();
                return;
            }

            System.out.println("Successfully removed subscription for topic: " + topic + " | client: " + clientId);
        }
    });
}

...

Remove Multiple Subscriptions

This example shows how to remove multiple subscriptions from a specific client with the Subscription Store.

final Set<String> topicSet = new HashSet<>();
topicSet.add("topic1");
topicSet.add("topic2");

Services.subscriptionStore().removeSubscriptions("test-client", topicSet);
Full Example Code
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final SubscriptionStore subscriptionStore = Services.subscriptionStore();

    final String clientID = "test-client";

    final Set<String> topicSet = new HashSet<>();
    topicSet.add("$share/group/topic1");
    topicSet.add("topic2");
    topicSet.add("topic3");

    final CompletableFuture<Void> removeFuture = subscriptionStore.removeSubscriptions(clientID, topicSet);

    removeFuture.whenComplete(new BiConsumer<Void, Throwable>() {
        @Override
        public void accept(final Void result, final Throwable throwable) {

            if(throwable != null){
                throwable.printStackTrace();
                return;
            }

            System.out.println("Successfully removed subscriptions for topics: " + topicSet + " | from client: " + clientID);

        }
    });
}

...

Get Subscriptions

This example shows how to get the subscriptions from a specific client with the Subscription Store.

CompletableFuture<Set<TopicSubscription>> future = Services.subscriptionStore().getSubscriptions("test-client");
Full Example Code
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final String clientId = "test-client";

    CompletableFuture<Set<TopicSubscription>> getFuture = Services.subscriptionStore().getSubscriptions(clientId);

    getFuture.whenComplete(new BiConsumer<Set<TopicSubscription>, Throwable>() {
        @Override
        public void accept(Set<TopicSubscription> topicSubscriptions, Throwable throwable) {

            if(throwable != null){
                throwable.printStackTrace();
                return;
            }

            if(topicSubscriptions.isEmpty()){
                System.out.println("Found no subscriptions for client: " + clientId);
                return;
            }

            System.out.println("Found subscriptions for client: " + clientId);

            for (TopicSubscription topicSubscription : topicSubscriptions) {
                System.out.println("---------------------");
                System.out.println("Topic: " + topicSubscription.getTopicFilter());
                System.out.println("QoS: " + topicSubscription.getQos().getQosNumber());
                System.out.println("No local: " + topicSubscription.getNoLocal());
                System.out.println("Retain as published: " + topicSubscription.getRetainAsPublished());
                System.out.println("Subscription identifier: " + topicSubscription.getSubscriptionIdentifier());
            }

        }
    });
}

...

Iterate All Subscriptions

You can use the Subscription Store to iterate all subscriptions of all clients. This iteration includes the subscriptions of all currently connected clients and all disconnected clients with sessions that are not yet expired.

The callback passed to the iterateAllSubscriptions method is called once for each client. All subscriptions of the respective client are provided per method call.
By default, the Managed Extension Executor Service executes the callback. However, you can also pass your own executor.
Subscriptions are not provided to the callback in any particular order.

To use iteration over all clients, every node in the HiveMQ cluster must run HiveMQ version 4.2.0 or higher.
subscriptionStore.iterateAllSubscriptions(new IterationCallback<SubscriptionsForClientResult>() {
    @Override
    public void iterate(IterationContext context, SubscriptionsForClientResult subscriptionsForClient) {
        // this callback is called for every client with its subscriptions
        final String clientId = subscriptionsForClient.getClientId();
        final Set<TopicSubscription> subscriptions = subscriptionsForClient.getSubscriptions();
    }
});
In large-scale deployments, iteration over all clients can be a very expensive operation. Do not call the method in short time intervals.
Example for searching the subscriptions of a client matching a pattern
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final SubscriptionStore subscriptionStore = Services.subscriptionStore();

    // this is the default executor but used as executor argument for demonstration purposes
    final Executor executor = Services.extensionExecutorService();

    final Pattern pattern = Pattern.compile("client-[1-9]+");

    CompletableFuture<Void> iterationFuture = subscriptionStore.iterateAllSubscriptions(
            new IterationCallback<SubscriptionsForClientResult>() {
                @Override
                public void iterate(IterationContext context, SubscriptionsForClientResult subscriptionsForClient) {
                    final String clientIdentifier = subscriptionsForClient.getClientId();
                    final Set<TopicSubscription> subscriptions = subscriptionsForClient.getSubscriptions();
                    if (pattern.matcher(clientIdentifier).matches()) {
                        System.out.println("Found client with subscriptions " + subscriptions);
                        // abort the iteration if you are not interested in the remaining information as this saves resources
                        context.abortIteration();
                    }
                }
            },
            executor);

    iterationFuture.whenComplete((ignored, throwable) -> {
        if (throwable == null) {
            System.out.println("Iterated all subscriptions");
        } else {
            throwable.printStackTrace(); // please use more sophisticated logging
        }
    });
}

...
Example for counting all subscriptions
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final SubscriptionStore subscriptionStore = Services.subscriptionStore();

    final AtomicInteger counter = new AtomicInteger();

    CompletableFuture<Void> iterationFuture = subscriptionStore.iterateAllSubscriptions(
            new IterationCallback<SubscriptionsForClientResult>() {
                @Override
                public void iterate(IterationContext context, SubscriptionsForClientResult subscriptionsForClient) {
                    counter.addAndGet(subscriptionsForClient.getSubscriptions().size());
                }
            });

    iterationFuture.whenComplete((ignored, throwable) -> {
        if (throwable == null) {
            System.out.println("Number of subscriptions: " + counter.get());
        } else {
            throwable.printStackTrace(); // please use more sophisticated logging
        }
    });
}

...
If the topology of the cluster changes during the iteration, the iteration is canceled. Topology changes occur, for example, when the network splits or a node leaves or joins the cluster.

The following example shows how topology changes during the iteration can be handled:

Example handling of cluster topology changes during iteration
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    iterate(0);
}

public void iterate(final int attempts) {

    final AtomicInteger counter = new AtomicInteger();

    CompletableFuture<Void> iterationFuture = Services.subscriptionStore().iterateAllSubscriptions(
            new IterationCallback<SubscriptionsForClientResult>() {
                @Override
                public void iterate(IterationContext context, SubscriptionsForClientResult subscriptionsForClient) {
                    counter.addAndGet(subscriptionsForClient.getSubscriptions().size());
                }
            });

    iterationFuture.whenComplete((ignored, throwable) -> {
        if (throwable == null) {
            System.out.println("Number of subscriptions: " + counter.get());

        // in case the cluster topology changes during iteration, an IterationFailedException is thrown
        } else if (throwable instanceof IterationFailedException) {
            // only retry 3 times
            if (attempts < 3) {
                final int newAttemptCount = attempts + 1;
                Services.extensionExecutorService().schedule(() ->
                        iterate(newAttemptCount), newAttemptCount * 10, TimeUnit.SECONDS); // schedule retry with delay in case topology change is not over, else we would get another IterationFailedException
            } else {
                System.out.println("Could not fully iterate all clients.");
            }
        } else {
            throwable.printStackTrace(); // please use more sophisticated logging
        }
    });
}

...

Iterate All Subscribers with Subscriptions to a Specified Topic Filter

You can use the Subscription Store to iterate all subscribers that have subscriptions with a specified topic filter. Filtered iteration is best practice when you are only interested in subscribers that have subscriptions with a specific topic filter. This method is more resource-efficient than iterating all subscriptions.
To filter subscriptions even more precisely, you can limit the iteration to shared or non-shared (individual) subscriptions.

The callback passed to the iterateAllSubscribersWithTopicFilter method is called one time for each client that has a subscription with the specified topic filter.
By default, the Managed Extension Executor Service executes the callback. However, you can also pass your own executor.
Subscribers are not provided to the callback in any particular order.

To use iteration over all clients, every node in the HiveMQ cluster must run HiveMQ version 4.2.0 or higher.

Example: For the topic filter example/#, the iteration covers all clients that have a subscription with the exact same topic filter:

example/#

The specified iteration does not cover subscriptions with the following topic filters:

  • example/topic

  • example/+

  • +/#

  • and other wildcard matches

The method iterateAllSubscribersWithTopicFilter only provides subscribers that have subscribed with the exact same topic filter as specified.
To query subscriptions with the usual topic filter matching algorithm, use iterateAllSubscribersForTopic.
subscriptionStore.iterateAllSubscribersWithTopicFilter("example/topic",
        new IterationCallback<SubscriberWithFilterResult>() {
            @Override
            public void iterate(IterationContext context, SubscriberWithFilterResult subscriberWithFilter) {
                // this callback is called for every client that has a subscription with the specified topic filter
                final String clientId = subscriberWithFilter.getClientId();
            }
        });
In large-scale deployments, iteration over all clients can be a very expensive operation. Do not call the method in short time intervals.
Example for counting all subscribers that have a subscription with a specified topic filter
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final SubscriptionStore subscriptionStore = Services.subscriptionStore();

    // this is the default executor but used as executor argument for demonstration purposes
    final Executor executor = Services.extensionExecutorService();

    final AtomicInteger counter = new AtomicInteger();

    CompletableFuture<Void> iterationFuture = subscriptionStore.iterateAllSubscribersWithTopicFilter(
            "example/topic",
            SubscriptionType.INDIVIDUAL,
            new IterationCallback<SubscriberWithFilterResult>() {
                @Override
                public void iterate(IterationContext context, SubscriberWithFilterResult subscriberWithFilter) {
                    counter.incrementAndGet();
                }
            },
            executor);

    iterationFuture.whenComplete((ignored, throwable) -> {
        if (throwable == null) {
            System.out.println("Number of subscribers with specified topic filter: " + counter.get());
        } else {
            throwable.printStackTrace(); // please use more sophisticated logging
        }
    });
}

...

Iterate All Subscribers with Subscriptions that Match a Specified Topic

You can use the Subscription Store to iterate all subscribers with subscriptions that match a specified topic. When you are only interested in subscribers with subscriptions that match a specific topic, filtered iteration is best practice. This method is more resource-efficient than iterating all subscriptions.
To filter subscriptions even more precisely, you can limit the iteration to shared or non-shared (individual) subscriptions.

The callback passed to the iterateAllSubscribersForTopic method is called one time for each client that has a subscription that matches the specified topic. By default, the Managed Extension Executor Service executes the callback. However, you can also pass your own executor.
Subscribers are not provided to the callback in any particular order.

To use iteration over all clients, every node in the HiveMQ cluster must run HiveMQ version 4.2.0 or higher.

Example: For the topic example/topic, the iteration covers all clients that have a subscription with the following topic filters:

  • example/topic

  • example/#

  • example/+

  • +/#

  • and other wildcard matches
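The wildcard semantics above can be illustrated with a simplified topic-filter matcher. This is a hypothetical helper for demonstration only; it is not part of the HiveMQ SDK and omits edge cases such as parent-level `#` matches and `$`-prefixed system topics:

```java
import java.util.Arrays;

public class TopicFilterMatcher {

    // Simplified MQTT topic-filter matching (hypothetical helper, not part of the SDK).
    // Supports the single-level wildcard '+' and the multi-level wildcard '#'.
    public static boolean matches(final String filter, final String topic) {
        final String[] filterLevels = filter.split("/", -1);
        final String[] topicLevels = topic.split("/", -1);

        for (int i = 0; i < filterLevels.length; i++) {
            final String filterLevel = filterLevels[i];
            if (filterLevel.equals("#")) {
                // '#' matches this level and all remaining levels
                return true;
            }
            if (i >= topicLevels.length) {
                // the filter has more levels than the topic
                return false;
            }
            if (!filterLevel.equals("+") && !filterLevel.equals(topicLevels[i])) {
                return false;
            }
        }
        // without a trailing '#', the filter must cover exactly all topic levels
        return filterLevels.length == topicLevels.length;
    }

    public static void main(final String[] args) {
        for (final String filter : Arrays.asList("example/topic", "example/#", "example/+", "+/#", "other/+")) {
            System.out.println(filter + " matches example/topic: " + matches(filter, "example/topic"));
        }
    }
}
```

All four filters listed above match the topic `example/topic`, while a filter such as `other/+` does not.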

subscriptionStore.iterateAllSubscribersForTopic("example/topic",
        new IterationCallback<SubscriberWithFilterResult>() {
            @Override
            public void iterate(IterationContext context, SubscriberWithFilterResult subscriberWithFilter) {
                // this callback is called for every client that has a subscription matching the specified topic
                final String clientId = subscriberWithFilter.getClientId();
            }
        });
In large-scale deployments, iteration over all clients can be a very expensive operation. Avoid calling this method in short time intervals.
Example for counting all subscribers that have a subscription with a specified topic filter
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final SubscriptionStore subscriptionStore = Services.subscriptionStore();

    // this is the default executor; it is passed explicitly here for demonstration purposes
    final Executor executor = Services.extensionExecutorService();

    final AtomicInteger counter = new AtomicInteger();

    CompletableFuture<Void> iterationFuture = subscriptionStore.iterateAllSubscribersForTopic(
            "example/topic",
            SubscriptionType.INDIVIDUAL,
            new IterationCallback<SubscriberWithFilterResult>() {
                @Override
                public void iterate(IterationContext context, SubscriberWithFilterResult subscriberWithFilter) {
                    counter.incrementAndGet();
                }
            },
            executor);

    iterationFuture.whenComplete((ignored, throwable) -> {
        if (throwable == null) {
            System.out.println("Number of subscribers for specified topic: " + counter.get());
        } else {
            throwable.printStackTrace(); // please use more sophisticated logging
        }
    });
}

...

Retained Message Store

The Retained Message Store enables extensions to interact with retained messages in the following ways:

  • Get the retained message for a specific topic

  • Add or replace the retained message for a topic

  • Remove the retained message for a topic

  • Clear all retained messages from the HiveMQ cluster

  • Iterate over all retained messages that are stored in the HiveMQ cluster

The RetainedMessageStore can be accessed through the Services class.

The retained messages that the Retained Message Store adds are processed differently than retained messages that clients send. Clients that are currently subscribed to the topic where the retained message is added do not receive the retained message from the Retained Message Store as a publish. The newly added retained message is only available to clients that subscribe or resubscribe to the topic after the Retained Message Store added the message.

For more information, see Retained Message Store JavaDoc.

Access Retained Message Store

final RetainedMessageStore store = Services.retainedMessageStore();
To avoid errors such as an IterationFailedException, verify that your HiveMQ instance has started successfully before you call methods in your extension start. For more information, see Admin Service.

Add Retained Message

This example shows how to add a retained message to a specific topic with the Retained Message Store.

When you add a retained message to a specific topic with the Retained Message Store, the newly added message overwrites any existing retained message on the selected topic.
RetainedPublish retainedMessage = Builders.retainedPublish()
        .topic("add/message")
        .payload(ByteBuffer.wrap("test".getBytes()))
        .qos(Qos.AT_LEAST_ONCE)
        .build();

Services.retainedMessageStore().addOrReplace(retainedMessage);
Full Example Code
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    // 1. build the retained message via the RetainedPublishBuilder
    final RetainedPublishBuilder retainedPublishBuilder = Builders.retainedPublish();

    final RetainedPublish retainedMessage = retainedPublishBuilder
        .topic("add/message")
        .payload(ByteBuffer.wrap("test".getBytes()))
        .userProperty("reason","message-update")
        .qos(Qos.AT_LEAST_ONCE)
        .build();

    // 2. add the retained message (if a retained message already exists for the topic, it will be overwritten)
    final CompletableFuture<Void> addFuture = Services.retainedMessageStore().addOrReplace(retainedMessage);

    addFuture.whenComplete(new BiConsumer<Void, Throwable>() {
        @Override
        public void accept(Void aVoid, Throwable throwable) {

            if(throwable != null){
                throwable.printStackTrace();
                return;
            }

            // 3. log when the message was successfully added/replaced
            System.out.println("Successfully added retained message for topic: " + retainedMessage.getTopic());
        }
    });
}

...

Remove Retained Message

This example shows how to remove a retained message from a specific topic with the Retained Message Store.

Services.retainedMessageStore().remove("topic/to/remove");
Full Example Code
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final String topic = "topic";

    // 1. remove the retained message from the given topic
    final CompletableFuture<Void> removeFuture = Services.retainedMessageStore().remove(topic);

    removeFuture.whenComplete(new BiConsumer<Void, Throwable>() {
        @Override
        public void accept(Void aVoid, Throwable throwable) {

            if(throwable != null){
                throwable.printStackTrace();
                return;
            }

            // 2. log when the message was successfully removed (also happens when no retained message exists for that topic)
            System.out.println("Successfully removed retained message for topic: " + topic);
        }
    });
}

...

Get Retained Message

This example shows how to get the retained message from a specific topic with the Retained Message Store.

CompletableFuture<Optional<RetainedPublish>> future = Services.retainedMessageStore().getRetainedMessage("topic/to/get");
Full Example Code
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final String topic = "topic";

    // 1. request retained message for topic
    final CompletableFuture<Optional<RetainedPublish>> getFuture = Services.retainedMessageStore().getRetainedMessage(topic);

    getFuture.whenComplete(new BiConsumer<Optional<RetainedPublish>, Throwable>() {
        @Override
        public void accept(Optional<RetainedPublish> retainedPublishOptional, Throwable throwable) {

            if (throwable != null) {
                throwable.printStackTrace();
                return;
            }

            // 2. check if a retained message exists for that topic
            if (!retainedPublishOptional.isPresent()) {
                System.out.println("Found no retained message for topic: " + topic);
                return;
            }

            // 3. log some information about the retained message
            final RetainedPublish retainedPublish = retainedPublishOptional.get();

            System.out.println("Found retained message for topic: " + topic);
            System.out.println("---------------------");
            System.out.println("Topic: " + retainedPublish.getTopic());
            System.out.println("Qos: " + retainedPublish.getQos().getQosNumber());

        }
    });
}

...

Clear All Retained Messages

This example shows how to remove all retained messages from a HiveMQ cluster with the Retained Message Store.

Services.retainedMessageStore().clear();
Full Example Code
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    // 1. request to delete all retained messages from the HiveMQ cluster
    final CompletableFuture<Void> clearFuture = Services.retainedMessageStore().clear();

    clearFuture.whenComplete(new BiConsumer<Void, Throwable>() {
        @Override
        public void accept(Void aVoid, Throwable throwable) {

            if(throwable != null){
                throwable.printStackTrace();
                return;
            }

            // 2. log when all retained messages were removed
            System.out.println("Successfully removed all retained messages");
        }
    });
}

...

Iterate All Retained Messages

You can use the Retained Message Store to iterate over all stored retained messages in HiveMQ.

The callback passed to the iterateAllRetainedMessages method is called one time for each retained message. Each call provides the complete retained message as a RetainedPublish and an IterationContext that can be used to cancel the iteration prematurely if desired.

By default, the Managed Extension Executor Service executes the callback. However, you can also pass your own executor.
Retained message information is not provided to the callback in any particular order.

To iterate over all retained messages, all nodes in the HiveMQ cluster must run HiveMQ version 4.4.0 or higher.
Services.retainedMessageStore().iterateAllRetainedMessages(new IterationCallback<RetainedPublish>() {
    @Override
    public void iterate(final @NotNull IterationContext context, final @NotNull RetainedPublish retainedPublish) {
        // this callback is called for every stored retained message
    }
});
If you have large numbers of retained messages, iteration over all retained messages can be resource intensive. Avoid calling this method in short time intervals.
Example of searching retained messages with a pattern
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final RetainedMessageStore retainedMessageStore = Services.retainedMessageStore();

    // this is the default executor but used as executor argument for demonstration purposes
    final Executor executor = Services.extensionExecutorService();

    final Pattern pattern = Pattern.compile("sensor/id-[1-9]+");

    CompletableFuture<Void> iterationFuture = retainedMessageStore.iterateAllRetainedMessages(
            new IterationCallback<RetainedPublish>() {
                @Override
                public void iterate(final @NotNull IterationContext context, final @NotNull RetainedPublish retainedPublish) {
                    final String retainedPublishTopic = retainedPublish.getTopic();
                    if (pattern.matcher(retainedPublishTopic).matches()) {
                        System.out.println("Found retained messages with topic matching pattern " + retainedPublishTopic);
                        // abort the iteration if you are not interested in the remaining retained messages as this saves resources
                        context.abortIteration();
                    }
                }
            }, executor);

    iterationFuture.whenComplete((ignored, throwable) -> {
        if (throwable == null) {
            System.out.println("Iteration over retained messages complete"); // this will also be called if iteration is aborted manually
        } else {
            throwable.printStackTrace(); // please use more sophisticated logging
        }
    });
}
...
Example of counting all retained messages with Quality of Service level 2
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final RetainedMessageStore retainedMessageStore = Services.retainedMessageStore();

    final AtomicInteger counter = new AtomicInteger();

    CompletableFuture<Void> iterationFuture = retainedMessageStore.iterateAllRetainedMessages(
            new IterationCallback<RetainedPublish>() {
                @Override
                public void iterate(final @NotNull IterationContext context, final @NotNull RetainedPublish retainedPublish) {
                    if (retainedPublish.getQos() == Qos.EXACTLY_ONCE) {
                        counter.incrementAndGet();
                    }
                }
            });

    iterationFuture.whenComplete((ignored, throwable) -> {
        if (throwable == null) {
            System.out.println("Retained messages with QoS level 2: " + counter.get());
        } else {
            throwable.printStackTrace(); // please use more sophisticated logging
        }
    });
}

...
If the topology of the cluster changes during the iteration, the iteration is canceled. For example, when the network splits or a node leaves or joins the cluster.

The following example shows how topology changes can be handled:

Example handling of cluster topology changes during iteration
...

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    iterate(0);
}

public void iterate(final int attempts) {

    final AtomicInteger counter = new AtomicInteger();

    CompletableFuture<Void> iterationFuture = Services.retainedMessageStore().iterateAllRetainedMessages(
            new IterationCallback<RetainedPublish>() {
                @Override
                public void iterate(final @NotNull IterationContext context, final @NotNull RetainedPublish retainedPublish) {
                    if (retainedPublish.getQos() == Qos.EXACTLY_ONCE) {
                        counter.incrementAndGet();
                    }
                }
            });

    iterationFuture.whenComplete((ignored, throwable) -> {
        if (throwable == null) {
            System.out.println("Retained messages with QoS level 2: " + counter.get());

            // in case the cluster topology changes during iteration, an IterationFailedException is thrown
        } else if (throwable instanceof IterationFailedException) {
            // only retry 3 times
            if (attempts < 3) {
                final int newAttemptCount = attempts + 1;
                Services.extensionExecutorService().schedule(() ->
                        iterate(newAttemptCount), newAttemptCount * 10, TimeUnit.SECONDS); // schedule retry with delay in case topology change is not over, else we would get another IterationFailedException
            } else {
                System.out.println("Could not fully iterate all retained messages.");
            }
        } else {
            throwable.printStackTrace(); // please use more sophisticated logging
        }
    });
}

...
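The retry scheduling above uses at most three attempts with a linearly increasing delay (10, 20, 30 seconds). That logic can be factored into a small policy class; the following is a hypothetical sketch for illustration, not part of the HiveMQ SDK:

```java
public final class LinearRetryPolicy {

    private final int maxAttempts;
    private final long baseDelaySeconds;

    public LinearRetryPolicy(final int maxAttempts, final long baseDelaySeconds) {
        this.maxAttempts = maxAttempts;
        this.baseDelaySeconds = baseDelaySeconds;
    }

    // true if another retry should be scheduled after the given number of failed attempts
    public boolean shouldRetry(final int attempts) {
        return attempts < maxAttempts;
    }

    // delay in seconds before the given (1-based) retry attempt
    public long delaySeconds(final int attempt) {
        return attempt * baseDelaySeconds;
    }

    public static void main(final String[] args) {
        final LinearRetryPolicy policy = new LinearRetryPolicy(3, 10L);
        for (int attempt = 1; policy.shouldRetry(attempt - 1); attempt++) {
            System.out.println("Retry " + attempt + " scheduled after " + policy.delaySeconds(attempt) + " seconds");
        }
    }
}
```

In the example above, the equivalent values are maxAttempts = 3 and baseDelaySeconds = 10.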

Publish Service

The Publish Service enables extensions to send PUBLISH messages. These messages can also be sent to a specific client only.

PUBLISH messages that are sent through the Publish Service are processed in the same way as the PUBLISH messages that a client sends. All MQTT 3 and MQTT 5 features for PUBLISH messages are supported. The limits that are configured in the config.xml as part of the MQTT entity are also validated for the PUBLISH messages sent through the Publish Service.
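For illustration, these limits are configured in the <mqtt> section of config.xml. The following fragment is a sketch based on the HiveMQ 4 configuration documentation; verify the element names against the documentation for your HiveMQ version:

```xml
<hivemq>
    <mqtt>
        <!-- maximum QoS the broker accepts for PUBLISH messages -->
        <quality-of-service>
            <max-qos>1</max-qos>
        </quality-of-service>
        <!-- upper limit for the message expiry interval, in seconds -->
        <message-expiry>
            <max-interval>3600</max-interval>
        </message-expiry>
    </mqtt>
</hivemq>
```

With these settings, PUBLISH messages sent through the Publish Service are validated against the same limits as client publishes.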

PUBLISH messages that are sent to a specific client have some unique behavior and requirements.

  • The topic of the PUBLISH must match at least one subscription of the client, or the PUBLISH is not forwarded to the client

  • If the specified client has a shared subscription that matches the topic of the PUBLISH, the message is sent to the client but not sent to other clients with the same shared subscription

For more information, see Publish Service JavaDoc.

Access Publish Service

final PublishService publishService = Services.publishService();

Publish

This example shows how to send a regular PUBLISH message.

Example Code
Publish message = Builders.publish()
    .topic("topic")
    .qos(Qos.AT_LEAST_ONCE)
    .payload(payload)
    .build();

Services.publishService().publish(message);
Full Example Code
...
@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    // Create a publish builder
    final PublishBuilder publishBuilder = Builders.publish();
    final ByteBuffer payload = ByteBuffer.wrap("message".getBytes());
    // Build the publish
    publishBuilder.topic("topic").qos(Qos.AT_LEAST_ONCE).payload(payload);
    // Access the Publish Service
    final PublishService publishService = Services.publishService();
    // Asynchronously send the PUBLISH
    final CompletableFuture<Void> future = publishService.publish(publishBuilder.build());

    future.whenComplete((aVoid, throwable) -> {
        if(throwable == null) {
            System.out.println("Publish sent successfully");
        } else {
            //please use more sophisticated logging
            throwable.printStackTrace();
        }
    });
}
...

Publish to Client

This example shows how to send a PUBLISH to a client with a specific client ID.

Example Code
Publish message = Builders.publish()
    .topic("topic")
    .qos(Qos.AT_LEAST_ONCE)
    .payload(payload)
    .build();

Services.publishService().publishToClient(message, "test-client");
Full Example Code
...
@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    // Create a publish builder
    final PublishBuilder publishBuilder = Builders.publish();
    final ByteBuffer payload = ByteBuffer.wrap("message".getBytes());
    // Build the publish
    publishBuilder.topic("topic").qos(Qos.AT_LEAST_ONCE).payload(payload);
    // Access the Publish Service
    final PublishService publishService = Services.publishService();
    // Asynchronously send the PUBLISH to a specific client
    final String clientId = "client";
    final CompletableFuture<PublishToClientResult> future = publishService.publishToClient(publishBuilder.build(), clientId);

    future.whenComplete((result, throwable) -> {

        if (throwable == null) {
            if (result == PublishToClientResult.NOT_SUBSCRIBED) {
                System.out.println("Publish was not sent to client ("+clientId+
                        ") because it is not subscribed to a matching topic");
            } else {
                System.out.println("Publish sent successfully");
            }
        } else {
            //please use more sophisticated logging
            throwable.printStackTrace();
        }
    });
}

...

Managed Extension Executor Service

Many MQTT integrations depend on operations that are potentially expensive in terms of CPU time:

  • Calling webservices

  • Persisting or querying data from a database

  • Writing data to disk

  • Other blocking operations

A central paradigm of HiveMQ extension development is to never block.
If your business model requires blocking operations, you can use the ManagedExtensionExecutorService to enable asynchronous calls to these operations. The HiveMQ-managed executor service is shared between all HiveMQ extensions and can be monitored with the standard HiveMQ monitoring system.

The ManagedExtensionExecutorService can be used as a ScheduledExecutorService. This capability allows callback-based future handling for true non-blocking behavior, as well as the periodic scheduling of tasks.

Never create your own thread pools. Thread pools can significantly decrease the performance of HiveMQ. This is especially true for Java cached thread pools, because these pools create threads without any limit and can make your system unresponsive. If you use a library with thread pools, such as the Jersey Client library, limit the number of threads.

The thread pool of this executor service is sized based on the number of cores available to the JVM in which HiveMQ runs.

HiveMQ cancels all schedules when extensionStop() executes, and no new tasks can be submitted afterward. HiveMQ continues to execute previously submitted tasks for a three-minute grace period. After three minutes, the executor service shuts down ungracefully and any remaining tasks are not executed.

Access the Managed Extension Executor Service

final ManagedExtensionExecutorService executorService = Services.extensionExecutorService();

Log Incoming Publishes

This example shows how to log incoming publishes per minute with the ManagedExtensionExecutorService.

@Override
public void extensionStart(final ExtensionStartInput extensionStartInput, final ExtensionStartOutput extensionStartOutput) {

    final ManagedExtensionExecutorService executorService = Services.extensionExecutorService();
    final MetricRegistry metricRegistry = Services.metricRegistry();

    executorService.scheduleAtFixedRate(new Runnable() {
        @Override
        public void run() {
            // retrieve the HiveMQ incoming publish rate meter by its metric name
            final Meter incomingPublishRate = metricRegistry.meter("com.hivemq.messages.incoming.publish.rate");
            logger.info("Incoming publishes last minute = {}", incomingPublishRate.getOneMinuteRate());
        }
    }, 1, 1, TimeUnit.MINUTES);

}

Add a Callback

This example shows how to add a callback to the submitted task to receive the return value when the future completes.

private void methodWithCompletableFuture() {

    final ManagedExtensionExecutorService extensionExecutorService = Services.extensionExecutorService();

    final CompletableFuture<String> result = extensionExecutorService.submit(new Callable<String>() {
        @Override
        public String call() throws Exception {
            return "Test";
        }
    });

    result.whenComplete(new BiConsumer<String, Throwable>() {
        @Override
        public void accept(final String resultString, final Throwable throwable) {
            if(throwable != null){
                //please use more sophisticated logging
                throwable.printStackTrace();
                return;
            }
            if(resultString != null){
                System.out.println(resultString);
            }
        }
    });
}

Admin Service

At runtime, you often need to get information about the broker instance without a triggering event such as a client connect. For this purpose, the extension SDK offers an Admin Service that provides information such as the current lifecycle stage of the broker and general server information.

Access the Admin Service

Extensions can access the AdminService object through Services.adminService().

final @NotNull AdminService adminService = Services.adminService();

Lifecycle Stage

The Admin Service provides information on the current status of the broker lifecycle. Your HiveMQ broker can be in one of two states:

  • STARTING: The broker is in this state from the moment the JVM starts until the start procedure of the broker concludes.

  • STARTED_SUCCESSFULLY: The broker is in this state after the start procedure concludes. This means that the extension system is started, the persistence is running, the listeners are up, a cluster is joined, and the HiveMQ Control Center is accessible.

Before you announce that a HiveMQ instance is ready, we recommend that you verify the lifecycle state of the broker to ensure that HiveMQ has successfully completed startup.
Example to verify that the broker is ready
private final @NotNull AdminService adminService;
private final @NotNull ManagedExtensionExecutorService executorService;

public void schedulePublishing() {
    executorService.schedule(() -> {
        // check if broker is ready
        if (adminService.getCurrentStage() == LifecycleStage.STARTED_SUCCESSFULLY) {
            // do action
            publishListenerInformation();
        } else {
            // schedule next check
            schedulePublishing();
        }
    }, 10, TimeUnit.SECONDS);
}

Server Information

In the Admin Service and other parts of the Extension SDK, HiveMQ provides a ServerInformation object that contains runtime information about the broker node to which the extension is attached.

The ServerInformation object provides the following information:

  • The HiveMQ version

  • The folder structure:

    • The home folder. For example, set by the HIVEMQ_HOME environment variable.

    • The data folder. For example, set by the HIVEMQ_DATA_FOLDER environment variable.

    • The log folder. For example, set by the HIVEMQ_LOG_FOLDER environment variable.

    • The extensions folder. For example, set by the HIVEMQ_EXTENSION_FOLDER environment variable.

  • The active MQTT listeners

Example to publish the available listeners to an external directory service

private void publishListenerInformation() {
    final ServerInformation serverInformation = adminService.getServerInformation();

    for (final Listener listener : serverInformation.getListener()) {
        // publishes listeners to a registry
        publishListener(listener);
    }
}

Cluster Service

The Cluster Service enables extensions to dynamically discover HiveMQ cluster nodes.

Extensions can access the ClusterService object through Services.clusterService().

Example usage of the ClusterService
public class MyExtensionMain implements ExtensionMain {

    private final MyClusterDiscoveryCallback myCallback;

    public MyExtensionMain() {
        myCallback = new MyClusterDiscoveryCallback();
    }

    @Override
    public void extensionStart(
            final @NotNull ExtensionStartInput input, final @NotNull ExtensionStartOutput output) {

        Services.clusterService().addDiscoveryCallback(myCallback);
    }

    @Override
    public void extensionStop(
            final @NotNull ExtensionStopInput input, final @NotNull ExtensionStopOutput output) {

        Services.clusterService().removeDiscoveryCallback(myCallback);
    }
}

Cluster Discovery

To realize discovery of HiveMQ cluster nodes, an extension can implement a ClusterDiscoveryCallback and add it through the ClusterService.

To use cluster discovery in your HiveMQ extension, <discovery> must be set to <extension> in the <cluster> section of your HiveMQ configuration file.
HiveMQ configuration for extension cluster discovery
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    ...
    <cluster>
        ...
        <discovery>
            <extension></extension>
        </discovery>
        ...
    </cluster>
    ...
</hivemq>

The lifecycle of a ClusterDiscoveryCallback consists of three methods:

  • init: HiveMQ calls the init method one time when the callback is added. You can use this method to register the HiveMQ instance with a central registry. For example, to save address information of the HiveMQ instance to a database or server.

  • reload: HiveMQ calls the reload method regularly to discover all currently available HiveMQ cluster nodes. The default interval between calls to this method is 60 seconds and can be overridden by an individual ClusterDiscoveryCallback.

  • destroy: HiveMQ calls the destroy method one time in the following cases:

    • The callback is removed

    • The extension that added the callback is stopped

    • The HiveMQ instance is shut down

The destroy method can be used to unregister the HiveMQ instance on which the extension runs from a central registry.

If an exception is thrown inside one of these methods, HiveMQ ignores the provided output.

Example implementation of a ClusterDiscoveryCallback
public class MyClusterDiscoveryCallback implements ClusterDiscoveryCallback {

    private final MyClusterNodesService myService = ...

    @Override
    public void init(
            final @NotNull ClusterDiscoveryInput clusterDiscoveryInput,
            final @NotNull ClusterDiscoveryOutput clusterDiscoveryOutput) {

        myService.registerHiveMQNode(clusterDiscoveryInput.getOwnAddress());
        final List<ClusterNodeAddress> hiveMQNodes = myService.getHiveMQNodes();
        clusterDiscoveryOutput.provideCurrentNodes(hiveMQNodes);
        final int nextReloadInterval = myService.getNextReloadInterval();
        clusterDiscoveryOutput.setReloadInterval(nextReloadInterval);
    }

    @Override
    public void reload(
            final @NotNull ClusterDiscoveryInput clusterDiscoveryInput,
            final @NotNull ClusterDiscoveryOutput clusterDiscoveryOutput) {

        final List<ClusterNodeAddress> hiveMQNodes = myService.getHiveMQNodes();
        clusterDiscoveryOutput.provideCurrentNodes(hiveMQNodes);
        final int nextReloadInterval = myService.getNextReloadInterval();
        clusterDiscoveryOutput.setReloadInterval(nextReloadInterval);
    }

    @Override
    public void destroy(final @NotNull ClusterDiscoveryInput clusterDiscoveryInput) {
        myService.unregisterHiveMQNode(clusterDiscoveryInput.getOwnAddress());
    }
}

Next Steps