FAQ

Does HiveMQ Support Network Splits?
Yes, when used in a cluster, HiveMQ handles network splits very well. Its self-healing mechanisms rebuild the cluster over time and no data is lost.
What about new subscriptions happening during a network split?
Each sub-cluster persists new subscriptions during the network split. After the self-healing mechanisms have rebuilt the cluster, the subscriptions persisted by all the
sub-clusters are merged as a union. The worst-case scenario is therefore that an unsubscribe issued during the split could be reverted.
How does HiveMQ behave in terms of the CAP theorem?
When a network partition occurs, HiveMQ’s truly distributed, masterless cluster combined with its self-healing mechanisms ensures that the cluster stays available; consistency is restored once the disconnected nodes can reconnect.
Can an MQTT session be resumed on any node?
Yes, an MQTT client can connect to any cluster node and resume its session.
What happens when a client reconnects to another node?
The client will resume its session seamlessly.
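As an illustration, here is a minimal sketch using the Eclipse Paho Java client; the node addresses, client ID, and topic are hypothetical. Connecting with a persistent session (cleanSession=false) is what allows the broker to resume the client’s subscriptions and queued messages on whichever node the client reaches:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class ResumeSessionExample {
    public static void main(String[] args) throws MqttException {
        MqttConnectOptions options = new MqttConnectOptions();
        // A persistent session is required for the broker to keep
        // subscriptions and queued messages for this client ID.
        options.setCleanSession(false);
        // Hypothetical node addresses; Paho tries them in order, so the
        // client can fail over to another cluster node and resume there.
        options.setServerURIs(new String[]{
                "tcp://node1.example.com:1883",
                "tcp://node2.example.com:1883"});

        MqttClient client = new MqttClient("tcp://node1.example.com:1883",
                "example-client-id", new MemoryPersistence());
        client.connect(options);
        client.subscribe("some/topic", 1);
        // Because the session is persistent, the subscription and any queued
        // messages survive this disconnect and can be resumed on another node.
        client.disconnect();
    }
}
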
Does HiveMQ Support MQTT Load Balancers?
Yes, all prevalent load balancing solutions, both hardware and software, are fully supported by HiveMQ.
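As one illustration, any TCP load balancer can pass MQTT connections straight through to the cluster nodes. The following haproxy.cfg excerpt is only a sketch with hypothetical addresses, not a HiveMQ-provided configuration:

# Minimal TCP pass-through for MQTT (hypothetical node addresses)
frontend mqtt_frontend
    bind *:1883
    mode tcp
    default_backend hivemq_nodes

backend hivemq_nodes
    mode tcp
    balance leastconn
    server hivemq1 10.0.0.1:1883 check
    server hivemq2 10.0.0.2:1883 check
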
What happens if one node goes down?
The rest of the cluster will take over that node’s duties. By default, at least one other node replicates all the relevant information.
Is it possible to upgrade all cluster nodes without downtime?
Yes, HiveMQ supports Rolling Upgrades, which means that zero-downtime upgrades are possible when using a HiveMQ cluster. All nodes can be upgraded one by one.
How is the load distributed between the cluster nodes?
HiveMQ is designed with a truly distributed and masterless cluster architecture. All persistent data and session information is distributed between the cluster nodes. Messages are handled by the node a client is connected to and are distributed to the other nodes when needed.
Does HiveMQ lose messages when one or more nodes go down?
HiveMQ implements a sophisticated replication mechanism. By default, each node has one replica within the cluster, but that value can be configured. As long as a node’s replica(s) do not go down at the same time as the node itself, no messages are lost.
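For reference, the replica count is set in HiveMQ’s config.xml. The snippet below is only a sketch; the exact element names and defaults can differ between HiveMQ versions, so check the documentation for your version:

<cluster>
    <enabled>true</enabled>
    <!-- Number of replicas of each piece of persistent data kept on nodes
         other than the one that owns it (this FAQ assumes a default of 1). -->
    <replication>
        <replica-count>2</replica-count>
    </replication>
</cluster>
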
How does HiveMQ distribute messages and queued messages in the cluster?
Sessions, including queued messages, are stored by multiple nodes. The number of replicas is configurable. Every node in the cluster can access that information.
Are there any performance issues when clients reconnect and queued messages are delivered, especially when an entire node is down and a lot of clients need to reconnect?
Significant performance issues are not to be expected in this case. HiveMQ delivers the queued messages one by one and makes sure that both consumer and broker can handle the speed of delivery.
We recommend anticipating an overall additional load of roughly 10-15% for such cases when setting up your system.
How often will a new version of HiveMQ be released and how long will each version be supported?
  • All major versions of HiveMQ will be supported for 2 years.
  • All minor versions of HiveMQ will be supported for 1 year.
    • For the first four months there will be a maintenance release each month.
    • For the next four months there will be security and bugfix releases as needed.
    • For the last four months there will be critical bugfix and security releases.
  • Hotfixes will always be released on top of the latest bugfix release of the version a customer is using. For example, if you’re using 3.2.1 while 3.2.5 is the newest bugfix release, the hotfix will be 3.2.5-hotfix.
HiveMQ has extremely slow startup times. What can I do?
How do you know whether the startup of your HiveMQ machine can be considered slow? Before that question can be answered, we have to consider the two possible scenarios for the HiveMQ startup process:

  1. The HiveMQ machine has no persistent data from previous sessions.
  2. The HiveMQ machine has persistent data from previous sessions, which has to be loaded beforehand.

We can only give a definite answer for the first scenario, because it always has the same preconditions (no persistent data to load), whereas in the second scenario the amount of persistent data varies.

For the first scenario we consider a startup process of over 60 seconds to be slow. You can see how long HiveMQ took to boot in the terminal you started HiveMQ from or in the log file.
The line should look similar to:

2017-05-12 13:58:01,980 INFO  - Started HiveMQ in 7454ms 

For the second scenario we cannot give a fixed threshold, but you can run the test and apply the fix below anyway, because they work independently of the two scenarios.

One possible cause of a slow HiveMQ startup is the method:
java.net.InetAddress.getLocalHost()
This method is called in order to bind the HiveMQ server to a local address. Normally it finishes within a few milliseconds, but sometimes an event (e.g. an OS update) can degrade it: the method still works, but instead of a few milliseconds it can take several seconds per call.

How can you test whether this method is responsible for the slow startup process?

Antonio Troina has written a little program that you can find here. First, download inetTester.jar from the bin folder. Then run the jar file on the computer that runs the HiveMQ server with the command:

java -jar Path/To/inetTester.jar 

The output shows how much time a single call to getLocalHost() took. An elapsed time in the range of 0ms – 30ms is perfect. If you see a result of 500ms or more, we recommend fixing the issue.
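If you prefer not to download the jar, the following minimal sketch measures the same call (the class name is ours and not part of HiveMQ or inetTester):

import java.net.InetAddress;
import java.net.UnknownHostException;

public class LocalHostTimer {
    public static void main(String[] args) throws UnknownHostException {
        long start = System.currentTimeMillis();
        // The same call HiveMQ performs to determine the local address at startup.
        InetAddress address = InetAddress.getLocalHost();
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("getLocalHost() resolved " + address.getHostName()
                + " in " + elapsed + "ms");
    }
}

Compile it with javac LocalHostTimer.java and run it with java LocalHostTimer on the machine in question.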

How can I fix this problem?

The issue can be fixed by adding your hostname to the local addresses in the ‘hosts’ file (see here to locate the hosts file for your OS). The following are the two entries you should add:

127.0.0.1   localhost hostname
::1          localhost hostname

To find your hostname, type ‘hostname’ on the command line:

pc01:~ marysue$ hostname
pc01.hivemq.example

For our example the fixed hosts file looks like:

127.0.0.1   localhost pc01.hivemq.example
::1         localhost pc01.hivemq.example