Configure Your HiveMQ Cluster

The following configuration descriptions assume that you are using Helm to manage your deployment of the HiveMQ Operator and HiveMQ Cluster custom resource on Kubernetes. You can also use a different method to create and manage your manifests manually.

When you use the default configuration of the HiveMQ Helm Chart to deploy the HiveMQ Kubernetes Operator, the operator automatically creates a HiveMQ Cluster custom resource on your Kubernetes cluster.
For most use cases, you need to adjust some configuration settings. For more information, see our recommended settings.

Basic Configuration

Set HiveMQ License File

  1. Create a ConfigMap that contains your license file:

    kubectl create configmap hivemq-license --from-file=/path/to/my-license.lic
  2. Configure the mapping in your HiveMQ Cluster custom resource:

apiVersion: hivemq.com/v1
kind: HiveMQCluster
metadata:
  name: hivemq-cluster1
spec:
  configMaps:
  - name: hivemq-license
    path: /opt/hivemq/license

HiveMQ Extensions

Add and Remove Extensions

By default, your HiveMQ deployment includes all HiveMQ Enterprise Extensions, Prometheus, DNS discovery, and the HiveMQ allow-all extension. Use the extensions field in your custom values.yaml file to enable the HiveMQ Enterprise Extensions or to install your custom extensions.

extensions:
  - name: hivemq-enterprise-security-extension
    extensionUri: preinstalled
    enabled: false
    configMap: ese-configuration
    updateStrategy: serial
Table 1. Extension configuration options
Property Name Description

name

The name of the extension.

extensionUri

The URL where the extension is stored. For example, the HiveMQ Marketplace or a publicly available URL.

configMap

The name of the ConfigMap that stores the configuration files for your extension. For more information, see Configuration of Extensions.

enabled

Sets the desired state of the selected extension.

static

Defines whether the extension is restarted when the linked ConfigMap changes. The default setting is false.

initialization

An idempotent initialization script that runs when the extension is installed or updated. By default, no script is defined. If you edit the script, the script automatically re-executes.

updateStrategy

Defines whether updates to the extension are processed in series or in parallel. The default setting is serial.

If you want to add a custom extension, consider using a Continuous Deployment pipeline to release the extension to a cluster-internal object storage such as MinIO. You can link to publicly available objects or point the extension URI to your artifact storage.

extensions:
  - name: your-custom-extension
    extensionUri: https://your-server/path/to/your-custom-extension.zip
    enabled: true
    configMap: your-custom-configuration
    updateStrategy: serial

To add multiple extensions to your HiveMQ cluster, specify a list of extensions. Each extension must have a name, extension URI, and an enabled flag.
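For example, an extensions list that enables one preinstalled Enterprise Extension and one custom extension (reusing the names and URIs from the examples above) looks as follows:

```yaml
extensions:
  - name: hivemq-enterprise-security-extension
    extensionUri: preinstalled
    enabled: true
    configMap: ese-configuration
    updateStrategy: serial
  - name: your-custom-extension
    extensionUri: https://your-server/path/to/your-custom-extension.zip
    enabled: true
    configMap: your-custom-configuration
    updateStrategy: serial
```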

To remove an extension from your HiveMQ deployment, remove the extension declaration from your custom values.yaml file. For more information, see Revise HiveMQ Cluster Configuration with Helm.

Enable / Disable Extensions at Runtime

Removing an extension usually triggers a rolling upgrade of your HiveMQ deployment. Sometimes, it makes sense to disable an extension instead of removing it from the cluster. To disable or enable HiveMQ extensions at runtime, change the enabled flag of the extension in your custom values.yaml file. For more information, see Revise HiveMQ Cluster Configuration with Helm.

Extension Configuration with a ConfigMap

HiveMQ extensions are configured with configuration files. To allow the HiveMQ Kubernetes Operator to manage the extension configuration files, you provide the extension configuration in a ConfigMap.

A ConfigMap is a Kubernetes API object that lets you store and share non-sensitive, unencrypted configuration information. ConfigMaps allow you to decouple your configurations from your Pods and components, which helps keep your workloads portable.

Plain text values in your ConfigMaps are not encrypted. Do not use ConfigMaps for confidential information such as passwords, OAuth tokens, or SSH keys.

ConfigMaps provide a data section where you can store items (keys) and their values.

ConfigMaps cannot be added at runtime. Adding, removing, or editing the configMap field initiates a rolling upgrade of the custom resource.

Create a ConfigMap

The following procedure shows you how to place the open-source message log extension into a ConfigMap that a HiveMQ Cluster configuration references.
1. Save the example ConfigMap yaml file to your local file system as myConfig.yaml.

Example ConfigMap yaml file
apiVersion: v1
kind: ConfigMap
data:
  mqttMessageLog.properties: |-
    verbose=true
    client-connect=false
metadata:
  labels:
    app: hivemq
  name: config-extension
2. To create the ConfigMap in Kubernetes, enter:
kubectl apply -f myConfig.yaml
3. Copy the HiveMQ Cluster configuration to the HiveMQ extensions section of your values.yaml file.

The following example HiveMQ Cluster extension configuration references the ConfigMap that contains your extension configuration.

Example HiveMQ Cluster extension configuration
    - name: hivemq-mqtt-message-log-extension
      configMap: config-extension
      enabled: true
      extensionUri: https://www.hivemq.com/releases/extensions/hivemq-mqtt-message-log-extension-1.1.0.zip
      static: true
      updateStrategy: serial
Each time you change the ConfigMap, the HiveMQ operator automatically initiates a rolling update of the extension configuration.

Enable Monitoring

The HiveMQ Kubernetes Operator provides seamless integration with the Prometheus Operator. Use the monitoring field to enable Prometheus and an associated Grafana dashboard:

monitoring:
  enabled: true
  dedicated: false
Field Type Default Description

enabled

Boolean

true

Specifies whether the operator enables integration with your existing Prometheus monitoring solution.

dedicated

Boolean

false

Specifies whether the operator deploys a dedicated Prometheus instance for the HiveMQ cluster instead of integrating with an existing one.

The default login credentials for the Grafana dashboard that is created are username: admin, password: prom-operator.

You must configure the serviceMonitorSelector of your Prometheus manifests to pick up the HiveMQ ServiceMonitor. Otherwise, Prometheus does not scrape the target.
Currently, when you deploy a Prometheus operator with the HiveMQ Helm Chart, multiple "skipping unknown hook: crd-install" warnings are logged. You can ignore these warnings.
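As an illustration, a Prometheus custom resource that selects ServiceMonitors by label could be configured as follows. The label key and value shown are assumptions, not values this guide defines; check the labels on the HiveMQ ServiceMonitor in your cluster before you rely on them:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  # Assumption: the HiveMQ ServiceMonitor carries the label app: hivemq.
  # Verify with: kubectl get servicemonitors --all-namespaces --show-labels
  serviceMonitorSelector:
    matchLabels:
      app: hivemq
```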

Define Initialization Routines (deprecated)

You can pre-provision your HiveMQ container with init containers.

Use the initialization field in your custom resource to append init containers to your HiveMQ deployment:

We recommend the use of initContainers instead of initialization routines, as init containers offer more functionality.
hivemq:
  initialization:
  - name: init-plugin
    args:
      - wget https://www.hivemq.com/releases/extensions/hivemq-file-rbac-extension-4.0.0.zip;
        unzip hivemq-file-rbac-extension-4.0.0.zip -d /hivemq-data/extensions

All init containers provide a volume mount on /hivemq-data. The volume mount allows you to add files and folder structures that are recursively copied to the HiveMQ directory when the container starts.

To facilitate specification of basic script steps, the initialization field uses the default image busybox:latest and command ["/bin/sh", "-c"].
Ownership and file mode changes to the files in /hivemq-data can be overwritten upon startup.

Add Init Containers

If desired, you can add one or more specialized init containers that run before the containers in your HiveMQ pod. The init containers can contain utilities or setup scripts that are not present in an app image. For more information, see Init Containers.

To specify an init container for a HiveMQ pod, add the initContainers field into the pod specification as an array of container items.

hivemq:
  initContainers:
    - name: init-cfg
      image: busybox
      command:
        - /bin/sh
        - "-c"
      args:
        - |
          echo

The array in initContainers specifies a list of initialization containers that belong to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container must be unique among all containers. For more information, see Containers.

Each init container must exit successfully before the next container starts. If an init container fails to start, it is retried according to the pod restartPolicy. However, if your pod restartPolicy is set to Always, the init containers use restartPolicy OnFailure.

A pod cannot be Ready until all init containers have succeeded. If the pod restarts, or is restarted, all init containers must execute again.

Set Resource Limits and Requests

By default, your HiveMQ deployment sets sensible resource limits. To override the default resource limits, use the following fields:

Resource Default Description

cpuLimitRatio

1

The ratio of the CPU limit to the CPU request. For example, with cpu: 2 and a cpuLimitRatio of 2, the resulting CPU limit is 4.

cpu

4

Amount of CPU requested

memoryLimitRatio

1

The ratio of the memory limit. This ratio is usually 1.

memory

4096M

Amount of memory requested

ephemeralStorageLimitRatio

1

The ratio of the ephemeral storage limit

ephemeralStorage

15Gi

The amount of ephemeral disk space requested. By default, the HiveMQ data folder uses this amount.
This space is used for the whole container. The usable storage size for HiveMQ is slightly smaller than the configured value.
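For example, assuming the resource fields are set under the hivemq key like the other settings in this guide, a deployment that requests less memory and allows the CPU limit to double the request could be configured as follows (the values are illustrative):

```yaml
hivemq:
  cpu: "2"
  cpuLimitRatio: 2              # request cpu: 2, resulting limit: 4
  memory: 2048M
  memoryLimitRatio: 1           # memory limit equals the request
  ephemeralStorage: 15Gi
  ephemeralStorageLimitRatio: 1
```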

Configure HiveMQ Ports

In the root of the specification, you can use the ports field to configure which ports are mapped to your pods. To map a port, use the following fields:

Field Description

name

The name of the port

port

The port number that is exposed

expose

Creates a service that points to the selected port. The naming schema is: hivemq-<cluster-name>-<port-name>.

patch

A list of strings with JSON patches that are applied to the resulting service when expose is set to true.

The default values for the ports field are as follows:

hivemq:
  ports:
    - name: "mqtt"
      port: 1883
      expose: true
      patch:
      - '[{"op":"add","path":"/spec/selector/hivemq.com~1node-offline","value":"false"},{"op":"add","path":"/metadata/annotations","value":{"service.spec.externalTrafficPolicy":"Local"}}]'
      # If you want Kubernetes to expose the MQTT port
      # - '[{"op":"add","path":"/spec/type","value":"LoadBalancer"}]'
    - name: "cc"
      port: 8080
      expose: true
      patch:
      - '[{"op":"add","path":"/spec/sessionAffinity","value":"ClientIP"}]'
      # If you want Kubernetes to expose the HiveMQ control center via a load balancer.
      # Warning: You should consider configuring proper security and TLS beforehand. Ingress may be a better option here.
      # - '[{"op":"add","path":"/spec/type","value":"LoadBalancer"}]'

Configure Your HiveMQ Cluster

Field Default Description

clusterReplicaCount

2

The number of copies the cluster maintains for each piece of persistent data. A replica count of 2 = one original and one copy.

clusterOverloadProtection

true

Automatically reduces the rate of incoming messages from message-producing MQTT clients that significantly contribute to the overload of the cluster.

nodeCount

3

The number of cluster nodes in the HiveMQ cluster.

For more information on high availability clustering with HiveMQ, see HiveMQ Clusters.
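Assuming these fields sit under the hivemq key like the other settings in this guide, a five-node cluster that keeps two additional copies of each piece of persistent data could be sketched as:

```yaml
hivemq:
  nodeCount: 5
  clusterReplicaCount: 3        # one original and two copies per piece of data
  clusterOverloadProtection: true
```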

TLS Listener

This procedure shows you how to configure and verify a TLS listener. You can create the necessary server and client certificates and the corresponding keystores with the keytool and OpenSSL command-line tools.

This sample procedure is not intended for production use. For more information, see TLS for your cloud-based MQTT broker.
1. To generate a keystore, enter:
keytool -genkey -keyalg RSA -alias hivemq -keystore hivemq.jks -storepass changeme -validity 360 -keysize 2048
2. To generate the secret, in the same namespace as the cluster, enter:
kubectl create secret generic hivemq-jks --from-file=hivemq.jks
3. Either edit your custom values.yaml, or edit your HiveMQ cluster CR directly:
kubectl edit hivemq-cluster <my-cluster>
4. In the cluster specification, add a mapping in the secrets area to mount the keystore into the configuration directory:
hivemq:
  secrets:
    - name: hivemq-jks
      path: /opt/hivemq/conf
5. In the cluster specification, add the new listener in the listenerConfiguration field:
   <tls-tcp-listener>
        <port>8883</port>
        <bind-address>0.0.0.0</bind-address>
        <proxy-protocol>true</proxy-protocol>
        <tls>
            <keystore>
                <path>/opt/hivemq/conf/hivemq.jks</path>
                <password>changeme</password>
                <private-key-password>changeme</private-key-password>
            </keystore>
        </tls>
    </tls-tcp-listener>
6. In the cluster specification, edit the mqtt port so that it corresponds to the new listener.
  - expose: true
    name: mqtt
    patch:
    - '[{"op":"add","path":"/spec/selector/hivemq.com~1node-offline","value":"false"},{"op":"add","path":"/metadata/annotations","value":{"service.spec.externalTrafficPolicy":"Local"}}]'
    port: 8883
You must always specify a port named mqtt because this port is used for the liveness check of the resulting pods.
7. Apply or save your changes. To verify your changes, wait until the status of your cluster returns to RUNNING and enter:
kubectl port-forward svc/hivemq-hivemq-mqtt-tls 8883:8883
8. To connect, enter:
mqtt sub -p 8883 -t test --cafile server.pem -d
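The final step assumes a PEM-encoded CA certificate file named server.pem. For the self-signed keystore generated in step 1, you can export a matching certificate with keytool, for example:

```
keytool -exportcert -keystore hivemq.jks -alias hivemq -storepass changeme -rfc -file server.pem
```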

Define DNS Suffix

If desired, you can specify a cluster domain suffix to use for DNS discovery.

When no dnsSuffix is set, the default is svc.cluster.local.

hivemq:
  dnsSuffix: svc.cluster.local

Add Pod Labels

You can specify labels for your HiveMQ pod templates as desired. Pod labels help you identify and organize your pods. Labels can be attached to objects at creation time and subsequently added and modified at any time.

hivemq:
  podLabels:
    test: "myTestLabel"

Add Pod Annotations

Pod annotations allow you to add non-identifying metadata to your HiveMQ pod templates. You can use annotations to provide useful information and context for yourself or your DevOps team.

hivemq:
  podAnnotations:
    my-informative-annotation: my-useful-value-1

Set Priority Class Name

If desired, you can specify a priority class name to set the priority to a HiveMQ pod template.

Kubernetes ships with two common priority classes that you can use to ensure that critical components are always scheduled first:

  • system-cluster-critical is the highest possible priority.

  • system-node-critical is the next highest priority.

hivemq:
  priorityClassName: system-node-critical

To use other priority class names, you must create a PriorityClass with the associated name. For more information, see PriorityClass.

If you do not specify a priority class name, the HiveMQ Kubernetes Operator automatically sets the pod priority to your defined default priority. If no default priority is present, the operator sets the pod priority to zero.
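For example, a custom PriorityClass with a hypothetical name and priority value can be created with a manifest such as the following:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: hivemq-high-priority    # hypothetical name
value: 1000000
globalDefault: false
description: "High priority for HiveMQ broker pods."
```

You can then reference the class with priorityClassName: hivemq-high-priority in your HiveMQ pod template.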

Set Runtime Class Name

If desired, you can specify a runtime class name to reference a particular RuntimeClass object in the underlying controller to run your HiveMQ pod templates. For more information, see RuntimeClass.

The Kubernetes RuntimeClass feature is used to select the container runtime configuration. Kubernetes uses the container runtime configuration to run the containers of a pod. You can set different RuntimeClass objects for your pods to provide a balance of performance versus security. You can also use RuntimeClass objects to run different pods with the same container runtime and different settings.

hivemq:
  runtimeClassName: myclass

If no RuntimeClass resource object matches the specified runtimeClassName, the pod is not run.
If you do not set a runtimeClassName or the value is empty, the HiveMQ Kubernetes Operator uses the default RuntimeHandler. The default handler is equivalent to the behavior when the RuntimeClass feature is disabled.

Define Tolerations

If desired, you can apply tolerations to your HiveMQ pods that allow the pods to schedule onto nodes with matching taints.

In Kubernetes, taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. You can apply one or more taints to a node. For more information, see Taints and Tolerations.

Taints are applied to nodes and allow a node to repel specific pods. You can put multiple taints on the same node. Each taint has a key, value, and effect.
Tolerations are applied to pods and allow (but do not require) the pod to schedule onto nodes that have matching taints. You can put multiple tolerations on the same pod.

A toleration matches a taint if the keys and effects are the same and one of the following conditions applies:

  • The operator field is set to Exists (in which case no value is specified).

  • The operator field is set to Equal and the specified values match.

The way Kubernetes processes multiple taints and tolerations is similar to a filter. Kubernetes starts with all taints on the node, then ignores the taints for which the pod has a matching toleration. The remaining un-ignored taints have the indicated effects on the pod.

hivemq:
  tolerations:
    - key: "key1"
      operator: "Equal"
      value: "value1"
      effect: "NoSchedule"
Field Type Description

effect

String

Specifies the taint effect to match. An empty field matches all taint effects. The following values are possible:

  • NoSchedule: Specifies that Kubernetes does not schedule pods onto a node that has un-ignored NoSchedule taints.

  • PreferNoSchedule: Specifies a preference for Kubernetes to avoid scheduling pods onto a node that has un-ignored taints.

  • NoExecute: Specifies that Kubernetes evicts pods that do not tolerate a taint from the node (if the pod is already running on the node), and does not schedule the pod onto the node (if it is not yet running on the node).

key

String

Specifies the taint key to which the toleration applies. An empty key field matches all taint keys.
If the key field is empty, the operator field must be set to Exists.

operator

String

Represents the relationship of the key to the value. The following operations are possible:

  • Exists: The Exists operation is equivalent to a value wildcard that allows a pod to tolerate all taints of a particular category.

  • Equal: Equal is the default setting for the operator field.

tolerationSeconds

Integer

When the effect is NoExecute, the tolerationSeconds field specifies the period of time the toleration of a taint lasts (otherwise, this field is ignored). By default, no duration is set and taints are tolerated for an unlimited amount of time (no eviction). Zero and negative values are handled as 0 (evict immediately).

value

String

Specifies the taint value to which the toleration matches. If the operator field is set to Exists, the value field is empty. If the operator field is set to Equal, the value is a string. When the key field is empty and the operator field is set to Exists, the combination acts as a wildcard that matches all values and all keys.
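For example, the following sketch combines an Equal match with an Exists match that uses tolerationSeconds (the keys and values are illustrative):

```yaml
hivemq:
  tolerations:
    - key: "key1"
      operator: "Equal"
      value: "value1"
      effect: "NoSchedule"
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"        # matches any value for this key
      effect: "NoExecute"
      tolerationSeconds: 300    # evict after 5 minutes instead of immediately
```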

Additional Volumes

If desired, you can add further Kubernetes volumes to your HiveMQ pods. The named volumes that you add to a pod can be accessed by all containers in the pod.
Kubernetes supports several types of volumes. For more information, see Types of Volumes.

hivemq:
  additionalVolumes:
    - name: test-data1
      emptyDir: {}
Make sure that a volume directory is already created in your container.

Additional Volume Mounts

When you add further Kubernetes volumes to your HiveMQ pods, you must also define how you want Kubernetes to mount the volume within the container.

hivemq:
  additionalVolumeMounts:
    - name: test-data1
      mountPath: /cache
Field Type Description

mountPath

String

The path within the container at which the volume is mounted. The path must not contain colons :.

mountPropagation

String

Defines how mounts are propagated from the host to the container and from the container to the host. The default setting is MountPropagationNone.

name

String

The name of the mount. This name must match the name of a volume.

readOnly

Boolean

Defines whether the volume is mounted in the container as read-only. true mounts the volume as read-only. The default setting is false.

subPath

String

Defines the path within the volume from which Kubernetes mounts the volume of the container. The default setting is "" (the root of the volume).

subPathExpr

String

Defines an expanded path within the volume from which Kubernetes mounts the volume of the container. This path behaves similarly to subPath but environment variable references $(VAR_NAME) are expanded using the environment of the container. The default setting is "" (the root of the volume). subPathExpr and subPath are mutually exclusive.
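Putting both settings together, the following sketch declares an emptyDir volume and mounts it read-write at /cache:

```yaml
hivemq:
  additionalVolumes:
    - name: test-data1
      emptyDir: {}
  additionalVolumeMounts:
    - name: test-data1          # must match the volume name
      mountPath: /cache
      readOnly: false
```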

Add Topology Spread Constraints

If desired, you can define one or more pod topology spread constraints to control how Kubernetes schedules matching pods across the given nodes, zones, regions, or other user-defined topology domains of your Kubernetes cluster. For more information, see Pod Topology Spread Constraints.

hivemq:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: node
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          foo: hivemq
When you define multiple topology spread constraints for a pod, the constraints are combined with AND statements. The Kubernetes scheduler seeks a node for the incoming pod that satisfies all the constraints.
Field Type Description

maxSkew

Integer

Defines the degree to which pods can be unevenly distributed. maxSkew is a required field. The maxSkew value must be greater than zero. The default setting is 1.
The semantics of the maxSkew setting differ according to the whenUnsatisfiable value:

  • When whenUnsatisfiable = DoNotSchedule, maxSkew is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of pods that match the label selector in a topology domain. For example, if you have 3 zones with 0, 2, and 3 matching pods respectively, the global minimum is 0.

  • When whenUnsatisfiable = ScheduleAnyway, the scheduler gives higher precedence to topologies that help reduce the skew.

topologyKey

String

Defines the key for the node labels. topologyKey is a required field. Nodes that have a label with this key and identical values are considered to be in the same topology. Each key/value pair functions as a bucket. Kubernetes attempts to put a balanced number of pods into each bucket.

labelSelector

String

A label that Kubernetes uses to find matching pods. Pods that match this label selector are counted to determine the number of pods in the corresponding topology domain.

whenUnsatisfiable

String

Specifies how Kubernetes handles pods that do not satisfy the spread constraint. whenUnsatisfiable is a required field. The default setting is DoNotSchedule:

  • DoNotSchedule: Specifies that the scheduler does not schedule any pod that does not satisfy the spread constraint.

  • ScheduleAnyway: Specifies that the scheduler can schedule pods that do not satisfy the spread constraint, but gives higher precedence to topologies that help reduce the skew.

A constraint is considered unsatisfiable for an incoming pod if and only if every possible node assignment for the pod would violate the maxSkew on some topology.

Set Pod Security Context

If desired, you can provide a custom security context for your HiveMQ pods. The PodSecurityContext holds pod-level security attributes and common container settings. The security settings that you specify for a pod apply to all containers in the pod. For more information, see Configure a Security Context for a Pod or Container.

Some fields in PodSecurityContext are also present in SecurityContext. Field values of SecurityContext take precedence over field values of PodSecurityContext.
hivemq:
  podSecurityContext:
    runAsUser: 8000
Field Type Description

fsGroup

Integer

A special supplemental group that applies to all containers in a pod. The format is int64. Some volume types allow Kubernetes to change the ownership of the volume to be owned by the pod:

  • The owning group ID (GID) is the FSGroup.

  • The setgid bit is set (new files created in the volume are owned by the FSGroup).

  • The permission bits are OR'd with rw-rw----.

If the fsGroup is unset, Kubernetes does not change the ownership or permissions of any volumes.

fsGroupChangePolicy

String

Defines the behavior for changing ownership and permission of the volume before the volume is exposed inside a pod. This field only applies to volume types that support fsGroup controlled ownership and permissions. fsGroupChangePolicy has no effect on ephemeral volume types such as secret, configMap, and emptyDir. This field has two possible values. If no value is set, Always is used.

  • OnRootMismatch: Only changes permission and ownership of the volume if the permission and ownership of the root directory do not match the expected permissions of the volume. This setting can help shorten the time it takes to change ownership and permission of a volume.

  • Always: Always changes permission and ownership of the volume when the volume mounts.

runAsGroup

Integer

Specifies the primary group ID for all processes that run in any containers of the pod. The format is int64. If the runAsGroup field is omitted, the primary group ID of the containers is root (0).
If runAsGroup is also set in SecurityContext, the value specified in SecurityContext takes precedence for the container.

runAsNonRoot

Boolean

Specifies whether the container must run as a non-root user. When true, Kubernetes validates the image at runtime to ensure that it does not run as UID 0 (root) and does not start the container if it does. When unset or false, no such validation is performed. runAsNonRoot can also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

runAsUser

Integer

The user ID (UID) to run the entrypoint of the container process. If unspecified, runAsUser defaults to the user specified in the image metadata. The format is int64. runAsUser can also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container.

seLinuxOptions

String

The SELinux context that is applied to all containers. If unspecified, the container runtime allocates a random SELinux context for each container. seLinuxOptions can also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. The following seLinuxOptions labels can be applied to the container:

  • level: The SELinux level label that applies to the container.

  • role: The SELinux role label that applies to the container.

  • type: The SELinux type label that applies to the container.

  • user: The SELinux user label that applies to the container.

seccompProfile

String

Defines the seccomp profile settings of a pod or container. Only one profile source can be set.

  • localhostProfile: Specifies that a profile that is defined in a file on the node is used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. localhostProfile must only be set for the profile type Localhost.

  • type: Required field that specifies the kind of seccomp profile that is applied. Valid options are:

    • Localhost: A profile defined in a file on the node is used.

    • RuntimeDefault: The container runtime default profile is used.

    • Unconfined: No profile is applied.

windowsOptions

String

The Windows-specific settings that are applied to all containers. If unspecified, the options defined in the SecurityContext are used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

  • gmsaCredentialSpec: Specifies where the GMSA admission webhook inlines the contents of the GMSA credential specification that is named in the GMSACredentialSpecName field.

  • gmsaCredentialSpecName: Specifies the name of the GMSA credential specification to use.

  • hostProcess: Determines whether a container runs as a 'Host Process' container. This field is alpha-level and is only honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag results in errors during pod validation. All of the containers in a pod must have the same effective HostProcess value. A mix of HostProcess containers and non-HostProcess containers is not allowed. In addition, if hostProcess is set to true, hostNetwork must also be set to true.

  • runAsUserName: Specifies the username in Windows to run the entrypoint of the container process. If unspecified, runAsUserName defaults to the user who is specified in the image metadata. runAsUserName can also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
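As a sketch, a pod security context that combines several of the fields above could look as follows (the IDs and policy values are illustrative):

```yaml
hivemq:
  podSecurityContext:
    runAsUser: 8000
    runAsGroup: 8000
    runAsNonRoot: true
    fsGroup: 8000
    fsGroupChangePolicy: "OnRootMismatch"
    seccompProfile:
      type: RuntimeDefault
```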

Set Container Security Context

If desired, you can provide a custom security context for your HiveMQ containers. A security context defines privilege and access control settings for a pod or container. Security settings that you specify for a container apply only to the individual container. Some security configuration fields are present in both SecurityContext and PodSecurityContext. When there is overlap, the settings you define in containerSecurityContext override settings made at the pod level.

Container settings do not affect pod volumes.
hivemq:
  containerSecurityContext:
    runAsUser: 1000
Field Type Description

allowPrivilegeEscalation

Boolean

Controls whether a process can gain more privileges than its parent process. This Boolean directly controls whether the no_new_privs flag is set on the container process. allowPrivilegeEscalation is always true when the container runs as privileged or has the CAP_SYS_ADMIN capability.

capabilities

String

Specifies POSIX capabilities to add or remove when the container runs.

  • add: Specifies an array of one or more capabilities that are added to the running container.

  • drop: Specifies an array of one or more capabilities that are removed from the running container.

privileged

Boolean

Specifies whether the container runs in privileged mode. When set to true, the container is essentially equivalent to root on the host. The default setting is false.

procMount

String

Specifies the type of proc mount to use for the containers. The default is DefaultProcMount, which uses the container runtime defaults for read-only paths and masked paths. This property requires the ProcMountType feature flag to be enabled.

readOnlyRootFilesystem

Boolean

Specifies whether the container has a read-only root file system. The default setting is false.

runAsGroup

Integer

Specifies the group ID (GID) to run the entry point of the container process. The format is int64. If unset, the runtime default is used. If runAsGroup is set in both SecurityContext and PodSecurityContext, the value specified in containerSecurityContext takes precedence for the container.

runAsNonRoot

Boolean

Specifies that the container must run as a non-root user. When set to true, Kubernetes validates the image at runtime to ensure that it does not run as UID 0 (root) and fails to start the container if it does. When unset or set to false, no such validation is performed. If runAsNonRoot is set in both SecurityContext and PodSecurityContext, the value specified in containerSecurityContext takes precedence for the container.

runAsUser

Integer

Specifies the user ID (UID) to run the entry point of the container process. The format is int64. If unset, defaults to the user specified in the image metadata. If runAsUser is set in both SecurityContext and PodSecurityContext, the value specified in containerSecurityContext takes precedence for the container.

seLinuxOptions

String

Specifies the SELinux context that is applied to the container. If unspecified, the container runtime allocates a random SELinux context for each container. If SELinux options are provided at both the pod and container level, the container options override the pod options. The following seLinuxOptions labels can be applied to the container:

  • level: The SELinux level label that applies to the container.

  • role: The SELinux role label that applies to the container.

  • type: The SELinux type label that applies to the container.

  • user: The SELinux user label that applies to the container.

seccompProfile

String

Defines the seccomp profile settings of a pod or container. Only one profile source can be set.

  • localhostProfile: Specifies that a profile that is defined in a file on the node is used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. localhostProfile must only be set for the profile type Localhost.

  • type: Required field that specifies the kind of seccomp profile that is applied. Valid options are:

    • Localhost: A profile defined in a file on the node is used.

    • RuntimeDefault: The container runtime default profile is used.

    • Unconfined: No profile is applied.

If seccomp options are provided at both the pod and container level, the container options override the pod options.

windowsOptions

String

The Windows-specific settings that are applied to all containers. If unspecified, the options defined in the PodSecurityContext are used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

  • gmsaCredentialSpec: Specifies where the GMSA admission webhook inlines the contents of the GMSA credential specification that is named in the GMSACredentialSpecName field.

  • gmsaCredentialSpecName: Specifies the name of the GMSA credential specification to use.

  • hostProcess: Determines whether a container runs as a 'Host Process' container. This field is alpha-level and is only honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag results in errors during pod validation. All of the containers in a pod must have the same effective HostProcess value. A mix of HostProcess containers and non-HostProcess containers is not allowed. In addition, if hostProcess is set to true, hostNetwork must also be set to true.

  • runAsUserName: Specifies the username in Windows to run the entry point of the container process. If unspecified, runAsUserName defaults to the user who is specified in the image metadata. runAsUserName can also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
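To illustrate how these fields combine, the following sketch shows a hardened container security context for a HiveMQ pod. The specific values are illustrative assumptions; adjust them to your cluster policies:

```yaml
hivemq:
  containerSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    allowPrivilegeEscalation: false
    # HiveMQ writes persistence and log data, so the root file system stays writable here
    readOnlyRootFilesystem: false
    capabilities:
      drop:
        - ALL
    seccompProfile:
      type: RuntimeDefault
```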

Volume Claim Templates

When you use StatefulSets, you can add volumeClaimTemplates to provide stable storage for your HiveMQ pods with persistent volumes.

The volumeClaimTemplates field is a list of claims that pods are allowed to reference. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template. For more information, see Stable Storage and Persistent Volume Claims.

Claims listed in the volumeClaimTemplates list take precedence over any volumes in the template that have the same name.
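For example, a minimal volume claim template for HiveMQ persistence data could look like the following sketch. The claim name, storage class, and size are assumptions; the claim name must match a volumeMount in one of the containers in the template:

```yaml
hivemq:
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        # Must match a volumeMount name in a container of the template
        name: hivemq-data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi
```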

Add Operator Hints

The operatorHints section provides options for configuring operations logic such as surge node orchestration and persistent volume claim (PVC) clean-up.

To set operator hints, your HiveMQ cluster must use the cluster-stateful-set.yaml controller template.
hivemq:
  operatorHints:
    statefulSet:
      surgeNode: true
      surgeNodeCleanupPvc: true
Field Value Description

statefulSet

Object

Specifies properties that are relevant for deploying a StatefulSet:

  • surgeNode: In compliance with the recommended HiveMQ update strategy, surgeNode specifies that the operator automatically starts an additional node in response to each new configuration, before running a rolling upgrade. The default setting is true. If you are unable to schedule an additional HiveMQ node, you can use this flag to disable the update strategy at your own risk.

  • surgeNodeCleanupPvc: Specifies whether the operator automatically deletes the PersistentVolumeClaim for the added node after the rolling upgrade is finished. The default value is true. Automatic clean up is useful for availability zone-bound volume providers such as EBS.

Add Sidecar Containers

In some cases, it can be useful to add a container that runs alongside the HiveMQ container in a pod to enhance or extend its functionality without changing the HiveMQ container itself.

A sidecar container is a second container that you add to the pod definition. Sidecars run in the same pod as the main application container and share the resources of that pod.
For more information, see Using Pods.
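As a sketch, a sidecar entry could look like the following. The sidecars field name, the busybox image, and the log path are assumptions for illustration; check your CRD schema for the exact field name and mount paths:

```yaml
hivemq:
  sidecars:
    - name: log-tailer
      image: busybox:1.36
      # Illustrative only: stream the HiveMQ log from a shared volume
      command: ["sh", "-c", "tail -F /hivemq-data/log/hivemq.log"]
```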

Configure HiveMQ

This section lists specific sections of the config.xml that are represented in the HiveMQ Custom Resource Definition.

You must specify these parameters in the restrictions, mqtt, and security sections of your manifest:

hivemq:
  mqtt:
    maxQos: 1

If you need to edit the config.xml of your deployment at a more granular level, use the configOverride field.

To ensure HiveMQ can still interact with the operator correctly, start with the default value of the configOverride field when you make low-level changes to the configuration.

Restrictions

Field Default Description

maxClientIdLength

65535

The maximum number of characters HiveMQ accepts in an MQTT-client ID

maxTopicLength

65535

The maximum number of characters HiveMQ accepts in a topic string

maxConnections

-1

The maximum number of MQTT connections HiveMQ allows. A setting of -1 allows an unlimited number of connections.

incomingBandwidthThrottling

0

The maximum incoming traffic as bytes per second (b/s)

noConnectIdleTimeout

10000

The time in milliseconds that HiveMQ waits for the CONNECT message of a client before closing an open TCP socket

For more information, see Restrictions.
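Based on the table above, a restrictions block in your manifest could look like this (the values shown are the defaults):

```yaml
hivemq:
  restrictions:
    maxClientIdLength: 65535
    maxTopicLength: 65535
    maxConnections: -1
    incomingBandwidthThrottling: 0
    noConnectIdleTimeout: 10000
```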

MQTT Options

Field Default Description

sessionExpiryInterval

4294967295

The length of time in seconds that can pass after the client disconnects before the session expires

messageExpiryMaxInterval

4294967296

The length of time in seconds that can pass after a message arrives at the broker until the message expires

maxPacketSize

268435460

The maximum size, in bytes, of MQTT packets that the HiveMQ broker accepts

serverReceiveMaximum

10

The maximum number of unacknowledged PUBLISH messages that each client can send to the HiveMQ broker

keepaliveMax

65535

The maximum value that the HiveMQ broker accepts in the keepAlive field in the CONNECT packet of an MQTT client

keepaliveAllowUnlimited

true

Allows connections from clients that send a CONNECT packet with a keepAlive=0 setting

topicAliasEnabled

true

To reduce the packet size of PUBLISH messages, an alias can replace the topic. A topic alias must be a number between 1 and 65535; 0 is not allowed.

topicAliasMaxPerClient

5

Limits the number of topic aliases per client

subscriptionIdentifierEnabled

true

Associates an identifier with every topic filter in a SUBSCRIBE message

wildcardSubscriptionEnabled

true

Defines whether the HiveMQ broker accepts subscriptions with a topic filter that use wildcard characters

sharedSubscriptionEnabled

true

Defines whether the HiveMQ broker supports shared subscriptions

retainedMessagesEnabled

true

Defines whether the retained messages feature is enabled on the HiveMQ broker

maxQos

2

Defines the maximum Quality of Service (QoS) level that can be used in MQTT PUBLISH messages

queuedMessagesMaxQueueSize

1000

Limits the number of messages the HiveMQ broker queues per client

queuedMessageStrategy

discard

Defines how HiveMQ handles new messages for a client when the queue of the client is full

For more information, see MQTT Specific Configuration Options.
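For example, to tighten session handling and queueing for a constrained deployment, an mqtt block could combine several of these options. The non-default values here are illustrative:

```yaml
hivemq:
  mqtt:
    # Expire sessions after one hour instead of the default maximum
    sessionExpiryInterval: 3600
    maxQos: 1
    queuedMessagesMaxQueueSize: 1000
    queuedMessageStrategy: discard
```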

Security

Field Default Description

allowEmptyClientId

true

Allows the use of empty client IDs. If this is set to true, HiveMQ automatically generates a random client ID when the clientId of a CONNECT packet is empty.

payloadFormatValidation

false

Enables UTF-8 validation of PUBLISH payloads

topicFormatValidation

true

Enables UTF-8 validation of topic names and client IDs

allowRequestProblemInformation

true

Allows the client to request problem information. If this is set to false, no reason string and user property values are sent to clients.

controlCenterAuditLogEnabled

true

Enables audit logging for the HiveMQ control center

For more information, see the Security Configuration section of MQTT Configuration.
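A security block that rejects empty client IDs while keeping the other defaults could look like this:

```yaml
hivemq:
  security:
    # Reject CONNECT packets with an empty clientId instead of generating a random one
    allowEmptyClientId: false
    payloadFormatValidation: false
    topicFormatValidation: true
    allowRequestProblemInformation: true
    controlCenterAuditLogEnabled: true
```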

Allow-all Extension

By default, the HiveMQ Docker image comes with the allow-all extension that permits all MQTT connections without requiring authentication. Before you use HiveMQ in production, add an appropriate security extension and remove the HiveMQ allow-all extension.
To disable the extension, set the HIVEMQ_ALLOW_ALL_CLIENTS environment variable to false:

hivemq:
  env:
    - name: HIVEMQ_ALLOW_ALL_CLIENTS
      value: "false"

For more information, see Default Authentication Behaviour.

Use a HiveMQ Custom Image

Currently, the HiveMQ Operator renders the hivemqVersion as the image tag.

hivemq:
  hivemqVersion: latest
  image: my-repo/hivemq-k8s-image
If necessary, you can also define an imagePullPolicy.

Specify Log Level

Use the logLevel field to specify the log level for the root logger:

hivemq:
  logLevel: INFO

Specify Custom Java Options

To specify Java flags such as GC options or network properties, use the javaOptions field.

The default value of the javaOptions field works well in most environments:
-XX:+UnlockExperimentalVMOptions -XX:InitialRAMPercentage=40 -XX:MaxRAMPercentage=50 -XX:MinRAMPercentage=30.
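For example, to keep the default memory settings and add a network property, you could set javaOptions as follows. The added flag is illustrative:

```yaml
hivemq:
  javaOptions: >-
    -XX:+UnlockExperimentalVMOptions
    -XX:InitialRAMPercentage=40
    -XX:MaxRAMPercentage=50
    -XX:MinRAMPercentage=30
    -Djava.net.preferIPv4Stack=true
```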

Specify Custom Environment Variables

To append custom variables to the existing environment of the HiveMQ container, use the env field:

hivemq:
  env:
    - name: TEST_VAR
      value: FOO

It is also possible to specify environment variables sourced directly from Secret objects:

env:
  - name: CLUSTER_KEYSTORE_KEY_PASS
    valueFrom:
      secretKeyRef:
        key: keystore-key-pass
        name: hivemq-cluster-tls-secrets
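The referenced Secret must exist before the pods start. As a sketch, a matching Secret could be defined like this (the password value is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hivemq-cluster-tls-secrets
type: Opaque
stringData:
  # Placeholder value; replace with your actual keystore key password
  keystore-key-pass: change-me
```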

Use a Custom Controller Template

The HiveMQ Operator supports the use of custom controller templates to deploy HiveMQ. Custom templates make it possible to use controllers such as StatefulSet and DaemonSet for your cluster deployments.

The method is similar to how Helm templates are written, but instead of Go templates the HiveMQ Operator uses a Jinja-like language (Jinjava).

The context provided to the template consists of the HiveMQCluster object (variable name spec) as well as some built-in templating functions.

hivemq:
  controllerTemplate: "my-deployment.yaml"

Deployments can be YAML based (.yaml, .yml) or JSON based (.json).

Template Functions

The template context provides built-in functions for some common tasks:

  • util:escapeJson(String): Escapes a given input string to be JSON compliant

  • util:indent(Integer, String): Indents the given multi-line input string for YAML templates

  • util:getPort(ClusterSpec, String): Returns the port object for the given port name

  • util:stringReplace(String, String, String): Runs replaceAll on the first argument, replacing all occurrences of the second argument with the third argument.

  • util:render(ClusterSpec, String): Renders a given string with the same templating context as the template itself. For example, renders a custom property from the cluster specification.
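As an illustration, a custom controller template could use the spec variable and these functions roughly as follows. The attribute names on the returned port object are assumptions; consult the rendered default template for the precise context:

```yaml
metadata:
  # spec refers to the HiveMQCluster object provided to the template context
  name: "hivemq-{{ spec.name }}"
spec:
  template:
    spec:
      containers:
        - name: hivemq
          ports:
            # util:getPort returns the port object for the named port (attribute name assumed)
            - containerPort: {{ util:getPort(spec, "mqtt").port }}
```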

Custom Variables

You can also specify additional properties on the HiveMQCluster specification:

hivemq:
  customProperties:
    myCustomProperty: "customValue"

This value can be used in custom templates. For example, {{ spec.customProperties.myCustomProperty }}.
In this example, the value evaluates to customValue.

HiveMQ Custom Resource Patches

Use these files as a basis for your own custom resource file structures. The sample files include patches that you can use to update your HiveMQ Cluster deployment in various ways:

  • Install and configure your extensions

  • Configure your HiveMQ licenses

  • Configure how ports are mapped and exposed

For more information, see Patch Kubernetes objects.

Configuration Override Patch

The example config-override.yaml patch shows how you can override the default config.xml template of your HiveMQ cluster custom resource. The override is useful when you need to configure detailed parameters that are not included in the hivemqCluster.json schema.

To demonstrate how block scalar strings are formatted for this kind of structure, the patch file applies the default template that is configured in the hivemqCluster.json schema.

Example config-override.yaml patch
hivemq:
  configOverride: |-
    <?xml version="1.0"?>
    <hivemq>
      <listeners>
        --LISTENER-CONFIGURATION--
      </listeners>
      <control-center>
        <listeners>
          <http>
            <port>${HIVEMQ_CONTROL_CENTER_PORT}</port>
            <bind-address>0.0.0.0</bind-address>
          </http>
        </listeners>
        <users>
          <user>
            <name>${HIVEMQ_CONTROL_CENTER_USER}</name>
            <password>${HIVEMQ_CONTROL_CENTER_PASSWORD}</password>
          </user>
        </users>
      </control-center>
      <cluster>
        <transport>
          --TRANSPORT_TYPE--
        </transport>
        <enabled>true</enabled>
        <discovery>
          <extension>
            <reload-interval>${HIVEMQ_DNS_DISCOVERY_INTERVAL}</reload-interval>
          </extension>
        </discovery>
        <replication>
          <replica-count>${HIVEMQ_CLUSTER_REPLICA_COUNT}</replica-count>
        </replication>
      </cluster>
      <overload-protection>
        <enabled>${HIVEMQ_CLUSTER_OVERLOAD_PROTECTION}</enabled>
      </overload-protection>
      <restrictions>
        <max-client-id-length>${HIVEMQ_MAX_CLIENT_ID_LENGTH}</max-client-id-length>
        <max-topic-length>${HIVEMQ_MAX_TOPIC_LENGTH}</max-topic-length>
    <max-connections>${HIVEMQ_MAX_CONNECTIONS}</max-connections>
        <incoming-bandwidth-throttling>${HIVEMQ_INCOMING_BANDWIDTH_THROTTLING}</incoming-bandwidth-throttling>
        <no-connect-idle-timeout>${HIVEMQ_NO_CONNECT_IDLE_TIMEOUT}</no-connect-idle-timeout>
      </restrictions>
      <mqtt>
        <session-expiry>
          <max-interval>${HIVEMQ_SESSION_EXPIRY_INTERVAL}</max-interval>
        </session-expiry>
        <packets>
          <max-packet-size>${HIVEMQ_MAX_PACKET_SIZE}</max-packet-size>
        </packets>
        <receive-maximum>
          <server-receive-maximum>${HIVEMQ_SERVER_RECEIVE_MAXIMUM}</server-receive-maximum>
        </receive-maximum>
        <keep-alive>
          <max-keep-alive>${HIVEMQ_KEEPALIVE_MAX}</max-keep-alive>
          <allow-unlimited>${HIVEMQ_KEEPALIVE_ALLOW_UNLIMITED}</allow-unlimited>
        </keep-alive>
        <topic-alias>
          <enabled>${HIVEMQ_TOPIC_ALIAS_ENABLED}</enabled>
          <max-per-client>${HIVEMQ_TOPIC_ALIAS_MAX_PER_CLIENT}</max-per-client>
        </topic-alias>
        <subscription-identifier>
          <enabled>${HIVEMQ_SUBSCRIPTION_IDENTIFIER_ENABLED}</enabled>
        </subscription-identifier>
        <wildcard-subscriptions>
          <enabled>${HIVEMQ_WILDCARD_SUBSCRIPTION_ENABLED}</enabled>
        </wildcard-subscriptions>
        <shared-subscriptions>
          <enabled>${HIVEMQ_SHARED_SUBSCRIPTION_ENABLED}</enabled>
        </shared-subscriptions>
        <quality-of-service>
          <max-qos>${HIVEMQ_MAX_QOS}</max-qos>
        </quality-of-service>
        <retained-messages>
          <enabled>${HIVEMQ_RETAINED_MESSAGES_ENABLED}</enabled>
        </retained-messages>
        <queued-messages>
          <max-queue-size>${HIVEMQ_QUEUED_MESSAGE_MAX_QUEUE_SIZE}</max-queue-size>
          <strategy>${HIVEMQ_QUEUED_MESSAGE_STRATEGY}</strategy>
        </queued-messages>
      </mqtt>
      <security>
        <!-- Allows the use of empty client ids -->
        <allow-empty-client-id>
          <enabled>${HIVEMQ_ALLOW_EMPTY_CLIENT_ID}</enabled>
        </allow-empty-client-id>
        <!-- Configures validation for UTF-8 PUBLISH payloads -->
        <payload-format-validation>
          <enabled>${HIVEMQ_PAYLOAD_FORMAT_VALIDATION}</enabled>
        </payload-format-validation>
        <!-- Configures UTF-8 validation for topic names and client IDs -->
        <utf8-validation>
          <enabled>${HIVEMQ_TOPIC_FORMAT_VALIDATION}</enabled>
        </utf8-validation>
        <!-- Allows clients to request problem information -->
        <allow-request-problem-information>
          <enabled>${HIVEMQ_ALLOW_REQUEST_PROBLEM_INFORMATION}</enabled>
        </allow-request-problem-information>
      </security>
    </hivemq>
To eliminate the need for any special formatting, you can also use a JSON patch.
For more information, see JSON Patch.

Initialization Patch

The example initialization.yaml patch shows how to use initialization routines, in this case to install an extension. However, you usually use the extensions field for this type of task.

Example initialization.yaml patch
hivemq:
  initialization:
    - name: init-kafka-plugin
      args:
        - |
          # Setup extension
          wget https://www.hivemq.com/releases/extensions/hivemq-kafka-extension-1.0.0.zip
          unzip hivemq-kafka-extension-1.0.0.zip -d /hivemq-data/extensions
          rm /hivemq-data/extensions/hivemq-kafka-extension/kafka-configuration.example.xml
          chmod -R 777 /hivemq-data/extensions/hivemq-kafka-extension

HiveMQ Enterprise Extension for Kafka Patch

The example kafka.yaml patch shows how to manage extensions.
For more information, see Kafka Extension Configuration.

Example kafka.yaml patch
hivemq:
  extensions:
    - name: hivemq-kafka-extension
      extensionUri: https://www.hivemq.com/releases/extensions/hivemq-kafka-extension-1.1.0.zip
      configMap: kafka-configuration
      enabled: true
Before you apply the Kafka extension patch, you must create ConfigMaps for the configuration of the extension and your enterprise extension license.
Example ConfigMap for the Kafka extension configuration
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: hivemq
  name: kafka-configuration
data:
  kafka-configuration.xml: |-
    <kafka-configuration
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:noNamespaceSchemaLocation="kafka-extension.xsd">
        <kafka-clusters>
            <kafka-cluster>
                <id>cluster01</id>
                <bootstrap-servers>kafka.operator.svc.cluster.local:9071</bootstrap-servers>
                <authentication>
                    <plain>
                        <username>test</username>
                        <password>test123</password>
                    </plain>
                </authentication>
            </kafka-cluster>
        </kafka-clusters>
        <topic-mappings>
            <topic-mapping>
                <id>sensor-data</id>
                <cluster-id>cluster01</cluster-id>
                <mqtt-topic-filters>
                    <mqtt-topic-filter>vehicles/sensor/data/#</mqtt-topic-filter>
                </mqtt-topic-filters>
                <kafka-topic>sensor-data</kafka-topic>
            </topic-mapping>
        </topic-mappings>
    </kafka-configuration>
Example ConfigMap for a Kafka extension license
apiVersion: v1
data:
  hivemq.lic: |-
    my-license-file
  kafka-license.elic: |-
    my-extension-license-file
kind: ConfigMap
metadata:
  labels:
    app: hivemq
  name: hivemq-license

To apply the Kafka extension patch, after you create the necessary ConfigMaps, enter:

kubectl patch hmqc <cluster-name> --type=merge --patch "$(cat kafka.yaml)"

License Patch

The example license.yaml shows how to install a license when you use the HiveMQ operator.

Example license.yaml patch

hivemq:
  configMaps:
    - name: hivemq-license
      path: /opt/hivemq/license
Before you apply the license patch, you must create a ConfigMap for the associated license.
For more information, see the Example ConfigMap for a Kafka extension license.

Listener Patch

The example listener-config.yaml shows how to configure additional listeners.

This example uses the default listener with its templated environment variable, as well as an additional hard-coded listener on port 1884.

You can use this method to configure other types of listeners. For more information, see Listeners.

To directly reference a service on Kubernetes and use the correct port even if the loadbalancer port changes, you can use service port environment variables in this definition. For more information, see Kubernetes Environment Variables.
hivemq:
  listenerConfiguration: >
    <tcp-listener>
      <port>${HIVEMQ_MQTT_PORT}</port>
      <bind-address>0.0.0.0</bind-address>
    </tcp-listener>
    <tcp-listener>
      <port>1884</port>
      <bind-address>0.0.0.0</bind-address>
    </tcp-listener>
To configure a TLS listener, you must provide the associated keystore and truststore in the configurations field.
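As a hedged sketch, a TLS listener definition could look like the following. The keystore path and environment variable names are assumptions; the keystore itself must be mounted into the pod, for example through the configurations field:

```yaml
hivemq:
  listenerConfiguration: >
    <tls-listener>
      <port>8883</port>
      <bind-address>0.0.0.0</bind-address>
      <tls>
        <keystore>
          <!-- Illustrative path and variables; mount the keystore into the pod first -->
          <path>/opt/hivemq/certs/keystore.jks</path>
          <password>${CLUSTER_KEYSTORE_PASS}</password>
          <private-key-password>${CLUSTER_KEYSTORE_KEY_PASS}</private-key-password>
        </keystore>
      </tls>
    </tls-listener>
```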

Ports Patch

The example ports.yaml shows how to configure additional ports. When you apply this patch to a HiveMQ cluster that uses the default configuration, the example adds an additional API port and exposes it as a service.

Example ports.yaml patch
hivemq:
  ports:
    # These are the default ports that get exposed if you don't override this field.
    - name: mqtt
      port: 1883
      patch:
        - '[{"op":"add","path":"/spec/selector/hivemq.com~1node-offline","value":"false"},{"op":"add","path":"/metadata/annotations","value":{"service.spec.externalTrafficPolicy":"Local"}}]'
    - name: "cc"
      port: 8080
      expose: true
      patch:
        - '[{"op":"add","path":"/spec/sessionAffinity","value":"ClientIP"}]'
    # If you want Kubernetes to expose the HiveMQ control center via load balancer.
    # End of default ports
    # If your extension exposes a custom REST API, you can expose the port to a service like this:
    # The service will be called "hivemq-<cluster-name>-<port-name>"
    - name: my-api
      port: 8082
      expose: true