Watch the Webinar
- 00:00:00 - Introduction
- 00:04:05 - Sparkplug: for a non-primary application, what is the expected startup procedure?
- 00:09:25 - One of our big concerns is scalability. We are looking to develop infrastructure that will need to cope with potentially hundreds of thousands if not millions of devices. Is MQTT suitable in this instance, and if so what are the bottlenecks to look out for?
- 00:14:40 - What's the best way to connect to multiple remote brokers?
- 00:19:38 - Does anyone consider the need to certify the implementation of MQTT / Sparkplug Clients on devices / applications for industry?
- 00:21:58 - We are using AWS for cloud. Is it possible to use HiveMQ together with AWS IoT, or does HiveMQ replace AWS IoT? What are the advantages and disadvantages of using HiveMQ over AWS IoT?
- 00:28:22 - Why do I need to implement the Sparkplug specification instead of MQTT?
- 00:36:03 - Does HiveMQ support communication from/to IoT devices based on microPython?
- 00:38:05 - Is your product mainly intended to bring one or more production lines to the cloud, to combine them and make them reachable worldwide in an IIoT architecture?
- 00:42:58 - Is it possible for customers to organize their own certificate system?
- 00:49:14 - OPC UA vs. Sparkplug. What separates MQTT from other similar standards-based protocols when it comes to enabling IIoT data connectivity?
- 00:54:19 - What would be the expected behavior of a subscriber when an out of order message comes in (sequence number not as expected)?
- 00:58:58 - What is the most common form of broker cluster deployment in Industrial/Manufacturing projects? In the cloud or on-prem? For the cloud architectures, do IIoT devices connect to the broker directly over the public internet or are some additional broker instances running in the DMZ?
Drawing upon the success of our previous 'Ask Me Anything About MQTT' webinar, and by popular demand, we are continuing a webinar series under the same name. In this August 2022 edition, our MQTT experts answered questions around MQTT topics, MQTT security, MQTT Sparkplug, and OPC UA.
Jens Deters, Head of Professional Services at HiveMQ, and Matthias Hofschen, Senior Consultant Professional Services at HiveMQ, personally answered questions moderated live by Ravi Subramanyan, Director of Industry Solutions Manufacturing during the webinar.
Feel free to ask questions on the HiveMQ Community Forum.
MQTT is a general-purpose messaging protocol, while Sparkplug is a specification built on top of MQTT specifically for industrial automation. Sparkplug defines additional features such as topic structure, payload format, and state management to address the needs of industrial applications.
HiveMQ MQTT Broker is designed to handle millions of connections and scales elastically to accommodate growing needs.
The Bridge Extension in HiveMQ allows connecting remote brokers and forwarding messages between them, creating a hierarchical system for data collection and distribution.
HiveMQ can be used alongside AWS IoT or replace it entirely. HiveMQ offers full MQTT compliance, flexibility in backend integrations, and avoids vendor lock-in.
HiveMQ supports self-signed and CA-signed certificates, mutual TLS, and integration with existing IT security policies.
Sparkplug's Pub/Sub pattern and unified namespace (UNS) make it better suited for large-scale industrial data sharing than OPC UA's request-response approach.
Broker cluster deployment can be on-prem, in the cloud, or a hybrid of both, depending on the customer's use case and requirements.
Welcome and poll
Maryna Plashenko: 00:00:08.430 Hello, everyone. Good morning, good afternoon, good evening. Welcome to our Ask Me Anything About MQTT session. Thank you for taking your time to attend this event today. A very warm welcome to you. I'm Maryna Plashenko. I will be moderating this session today. With me are Jens Deters, Head of Professional Services, and Matthias Hofschen, Senior Consultant Professional Services at HiveMQ, who will be helping us tonight answer the questions, joined by Ravi Subramanyan, Director of Industry Solutions Manufacturing at HiveMQ. Ravi will be moderating the Q&A session today. A very warm welcome to you, Jens, Matthias, Ravi.
Jens Deters: 00:00:58.897 Hello.
Ravi Subramanyan: 00:01:00.316 Hello.
Maryna Plashenko: 00:01:02.206 Yeah.
Jens Deters: 00:01:02.731 Hi.
Ravi Subramanyan: 00:01:03.724 Thank you. Go ahead, Maryna.
Maryna Plashenko: 00:01:06.306 Before we kick off, I'd like to say that we are recording this webinar and we will be sharing the recording in our follow-up email. And also, please feel free to submit your questions. We have a Q&A panel at the Zoom control panel below, so you can do it there. We have already received some of your questions and we will be answering them shortly. And lastly, we will be having two polls running today. So one will be at the very beginning in a minute, and the second one closer to the end. And I encourage you to participate in these polls because we would be thankful for your feedback. So without further ado, I will now start our first poll. Okay. So I have launched the poll now. I encourage you to participate. It's a couple of questions that you can see on the screen now, so please take your time and answer those. So I'll give you a minute, and then we'll start with the main session.
Maryna Plashenko: 00:02:39.144 I can see your answers coming in. Thank you for your active participation. Yeah. Let's leave it for another 30 seconds.
Maryna Plashenko: 00:03:18.430 Okay? Okay. I'm ending the poll now. Thank you very much for your participation. And yeah, so now, I'm ready to hand it over to you, Ravi, and let's get it started.
With regard to Sparkplug, for a non-primary application, what is the expected startup procedure?
Ravi Subramanyan: 00:03:34.677 All right. Thank you, Maryna. Thank you so much for the opportunity. Good morning, good afternoon, good evening, folks. Happy to be here coordinating the session with Jens and Matthias. And like Maryna said, we already have some questions that were submitted, and I see some of the participants that submitted the questions here as well. But please feel free to submit your other questions. We'll definitely try to answer. Let's get started with the first question here, and that's related to Sparkplug, right? The question is: For a non-primary application, what is the expected startup procedure? From the specification, they do not have immediate access to birth certificates or current state. And this was submitted by Leigh, who's also there in this meeting. Gentlemen, who would like to take up this question?
Jens Deters: 00:04:24.924 I can.
Maryna Plashenko: 00:04:26.613 Okay.
Jens Deters: 00:04:29.583 Please. So I assume Leigh is pointing here in the direction that, for now, birth and death messages or certificates are not sent retained, right? So could you get Leigh on the call so that he's able to give more details on that?
Ravi Subramanyan: 00:04:57.926 All right. Leigh, if you can hear us, could you answer?
Leigh van der Merwe: 00:05:01.173 Hi. Hello.
Ravi Subramanyan: 00:05:02.828 Hi.
Jens Deters: 00:05:03.622 Hi.
Leigh van der Merwe: 00:05:04.485 Yeah. Yeah, essentially, yeah, they're not published as retained. And also, you can't connect with a persistent session because any of the current state that you'd receive from that birth certificate is well and truly out of date. So the primary application probably wants to know everything as soon as it connects. We've got a number of consumers that we're writing that also want to know everything when they connect.
Jens Deters: 00:05:31.832 Yeah, you're right. So with the current Sparkplug specification, 2.2, it's not specified that you are forced, according to the specification, to send these messages as retained. You can do that, actually. So it's not forbidden, but it's not according to the specification, right? But the new specification, 3.0, is almost ready to be released in the next one or two weeks, and there is a new topic structure (optional, actually) recommended to solve this. So it starts with $sparkplug, and then you have certificates, the namespace, the group ID, then NBIRTH or DBIRTH and so on (the birth and death types), and the edge node ID. And these messages should be, or shall be, sent retained to solve this. This is —
Leigh van der Merwe: 00:06:48.839 I guess that doesn't help with the current state. So if we're using the Sparkplug as a SCADA system, you'd want to load it up. And because that is only sent on change, you might have a set point or something that's set in the PLC that's only going to change once a day or week, even. And so those will be unknown values until you get a birth certificate or you get it changed because that retained message has an old state.
Jens Deters: 00:07:20.692 Yeah. So basically, you also have to have a look at the messages, so which one of the birth or death messages is the latest one, right?
Leigh van der Merwe: 00:07:33.616 Yeah.
Jens Deters: 00:07:35.017 So in terms of the death message, which is obviously sent via Last Will and Testament, the timestamp in this message also has to be set to the moment the message is actually sent out, right? So when you're storing this message during your connect, it contains the wrong timestamp, as you can't know when this message is going to be sent. We have created an open-source extension for HiveMQ, which is called the Sparkplug Aware Extension, to handle this: to handle the optional [inaudible] Sparkplug topic structure with all the birth and death certificates, and also to change the timestamp in the payload of the death message to the correct value.
Leigh van der Merwe: 00:08:39.257 Yeah, that's where the magic is.
Jens Deters: 00:08:44.243 I think I can post the link to that repo.
Leigh van der Merwe: 00:08:52.451 Yeah, that'll be useful. Yeah, that'll be good. And the —
Jens Deters: 00:08:56.124 That's one.
Leigh van der Merwe: 00:08:56.455 — version 3.0.
Jens Deters: 00:08:58.250 It's not yet production ready, but it works. So you can use it already if you like.
Leigh van der Merwe: 00:09:07.100 Awesome, thank you very much.
Ravi Subramanyan: 00:09:08.888 All right.
Jens Deters: 00:09:09.709 You're welcome.
Ravi Subramanyan: 00:09:09.925 Thank you, Leigh, for the clarification. Jens, anything else you want to add about this?
Jens Deters: 00:09:16.225 No, I think that's —
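As a reference for readers, the retained-birth topic convention Jens describes for a Sparkplug 3.0 "aware" broker can be sketched in a few lines. The layout ($sparkplug/certificates/namespace/group/message type/edge node) follows his description; the group and node names in the example are made up, and this is an editor's illustration, not code from the webinar.

```python
# Sketch of the Sparkplug 3.0 aware-broker topic structure Jens describes,
# on which birth/death messages are republished as retained messages.
from typing import Optional

SPARKPLUG_NAMESPACE = "spBv1.0"
BIRTH_DEATH_TYPES = {"NBIRTH", "NDEATH", "DBIRTH", "DDEATH"}

def certificate_topic(group_id: str, message_type: str, edge_node_id: str,
                      device_id: Optional[str] = None) -> str:
    """Topic on which an aware broker keeps the last birth/death retained."""
    if message_type not in BIRTH_DEATH_TYPES:
        raise ValueError(f"not a birth/death message type: {message_type}")
    parts = ["$sparkplug", "certificates", SPARKPLUG_NAMESPACE,
             group_id, message_type, edge_node_id]
    if device_id is not None:
        parts.append(device_id)
    return "/".join(parts)

# A late-joining (non-primary) application can use MQTT wildcards to fetch
# the retained copy of every edge node's last birth message:
subscription = certificate_topic("+", "NBIRTH", "+")
```

Subscribing to `subscription` on an aware broker delivers the retained NBIRTH of every edge node immediately on connect, which is exactly the startup problem Leigh raises.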
Is MQTT suitable for an infrastructure that needs to scale to hundreds of thousands, if not millions, of connections?
Ravi Subramanyan: 00:09:17.978 Okay, great. Thank you. Thank you again, Jens, and thank you, Leigh, for the clarification. All right, the second question that we received is regarding scalability. That's always the biggest thing on people's minds. We are looking to develop infrastructure that will help us cope with potentially hundreds of thousands, if not millions, of device connections. The question is: Is MQTT suitable in this instance? And if so, what are the bottlenecks to look out for? And this was submitted by David from MyEnergy. Who would like to take this question?
Matthias Hofschen: 00:09:54.937 Yeah, I can add a couple of thoughts to this one. So the simple answer is yes. The more complicated answer is yes — and choose HiveMQ. So essentially, scalability is, of course, one of the big questions. There's nothing in the MQTT protocol that talks about scalability. So it's really critical to choose an MQTT broker that places a lot of value on scalability and ideally, elastic scalability. And ideally creates a distributed system, a cluster, with no single point of failure. And that's why I'm pointing out HiveMQ. Of course, I'm working at HiveMQ so you have to make up your own opinion about this, but here's what we offer in this area. First of all, millions of clients is not a problem. We have many production environments with customers where the number of client connections go into the millions. That's doable. The only thing that — so practical advice here, the thing that we need to keep in mind here is that each HiveMQ process that is — so let's assume we have a cluster of five HiveMQ nodes, and each one of those nodes runs on a system. Either it runs on bare metal, it runs on a VM in AWS or Azure, or it's even a part on a Kubernetes cluster.
Matthias Hofschen: 00:11:48.274 That all doesn't matter, but there is a limit in terms of the number of connections a single machine can take. That's determined essentially by the Linux subsystem, which — let's say — it should be easy to take about 200 to 300 thousand client connections. With a bit of tuning of the TCP stack, you can go higher, but essentially, there is going to be a limit on a single machine. So you need to have a system that clusters. If you wanted to have 3 million connections, then you need about 10 machines that work together in a cluster. HiveMQ has that capability, and what's really important here is that it scales elastically. So that means that if you're running, let's say, an edge node cluster and you observe that the number of clients is increasing and increasing and you need to add capacity, then you can start new nodes that join the cluster elastically. You don't have to restart the entire system. You just start nodes that elastically join the existing cluster and extend the scalability. Another thing that I wanted to point out here is that there's no single point of failure for HiveMQ since it's a system that does not rely on a master process to make sure that nodes are up and running. So that's also good.
Ravi Subramanyan: 00:13:15.770 Great.
Matthias Hofschen: 00:13:15.832 David is on the call, right?
Ravi Subramanyan: 00:13:18.586 Yes, he's on the call. I was going to suggest maybe we can bring in David to see if we answered his question or if he has any further questions. Could you bring David in?
Maryna Plashenko: 00:13:30.140 Yeah. Sure.
Matthias Hofschen: 00:13:30.932 David is [inaudible]?
Ravi Subramanyan: 00:13:32.412 David [inaudible].
Maryna Plashenko: 00:13:33.745 Yeah, sure.
Ravi Subramanyan: 00:13:35.435 So while you're bringing him in, right, I just wanted to also add the point that, obviously, Matthias mentioned the technical capabilities. HiveMQ started in connected cars, and one of the first requirements we got was that we had to connect up to millions of devices. So that's kind of in the bedrock of our solution, if you will, right? So I just wanted to add that as well; that's kind of what we do day in and day out. Go ahead, David. Do you have any further clarifications or questions? Did we answer your question?
Matthias Hofschen: 00:14:13.573 Yeah, you're still muted, just in case you want to —
Ravi Subramanyan: 00:14:16.085 Yeah.
Maryna Plashenko: 00:14:18.244 Yeah. David, so you have — you can talk now.
Matthias Hofschen: 00:14:20.681 Yes, go ahead.
Ian: 00:14:22.052 Yes, so it's actually Ian here. So I think David has been unable to make it, but —
Matthias Hofschen: 00:14:27.733 It's okay.
Ian: 00:14:27.818 — yes, thank you. I did get the answer, so thank you very much for that.
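Matthias's sizing rule of thumb can be written down as a quick calculation. The 300,000-connections-per-node default below is his figure for an untuned Linux host from the talk; the function itself is just an editor's sketch, not a HiveMQ sizing tool.

```python
import math

def nodes_needed(connections: int, per_node_limit: int = 300_000) -> int:
    """Broker cluster nodes needed for a target connection count.

    The default limit reflects Matthias's rule of thumb of roughly 200-300
    thousand client connections per untuned Linux host; tuning the TCP stack
    (file descriptors, ephemeral ports) can push a single node higher.
    """
    if connections <= 0:
        raise ValueError("connections must be positive")
    return max(1, math.ceil(connections / per_node_limit))

# 3 million connections at ~300k per node -> about 10 machines, as in the talk.
```

In practice you would also add headroom so a node failure does not leave the remaining nodes over their limit, which the elastic-scaling behavior Matthias describes makes straightforward.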
What's the best way to connect multiple remote brokers?
Ravi Subramanyan: 00:14:32.092 Okay, thank you so much. Thank you. Appreciate that. All right, so we can move on to the next question here, which is, yeah, so what's the — this is another good one that a lot of our customers ask. What's the best way to connect multiple remote brokers? Can I run one broker and connect to other brokers using multiple credentials simultaneously? And this was submitted by Taichi Hirio from Docomo USA. Who would like to take this question?
Matthias Hofschen: 00:14:59.439 Yeah, I can address that question. So obviously, if you take one client, like a sensor or some device, or something like that, theoretically it could connect to multiple brokers. Of course, then that client would have to manage these multiple standing TCP connections, and that's usually not a good idea. Something that we see a lot in IIoT, in industrial use cases, is that there is a need to set up a hierarchical system. So to say, brokers that talk to brokers. And as far as HiveMQ goes, our solution for that is called the Bridge Extension. So what it would look like is that within a factory, you would operate a broker cluster, a HiveMQ cluster, that connects the sensors and the devices and the PLCs from the factory. Then on that broker, there is a Bridge Extension installed that knows which cloud broker to connect to and forward MQTT messages to. In a sense, then, from multiple factory locations, we can collect messages and send them further up the hierarchical chain into the central cloud broker installation.
Matthias Hofschen: 00:16:31.238 Now, the question is, how far do you want to take this? Should there be another level on top? So probably, in terms of the number of hierarchies, I would put a cap. Otherwise, the architecture gets too complicated. But that would be the solution. So if you check our documentation, it's the HiveMQ Enterprise extension, the Bridge Extension that would address that. There's also nice things like a loop prevention encoded there because one of the things that you need to make sure of is that you don't ping-pong messages around like crazy. So yeah, that's my answer there.
Ravi Subramanyan: 00:17:09.458 Thank you. Thank you, Matthias. Jens, would you like to add anything to that or?
Jens Deters: 00:17:14.984 No, I think it's fine.
Ravi Subramanyan: 00:17:16.335 I think we're good. All right. Okay. So we do have Taichi on the line. Should we maybe bring him in to see if we answered his question? And while we do that, what I would also like to add to Matthias's point: a lot of our clients use our Bridge Extension, especially on the manufacturing side of things, where they want local brokers at individual factories to consolidate all of the data, and that consolidated data then needs to be aggregated at an enterprise level. So we have local brokers that do the consolidation at individual sites, and this then gets consolidated into the enterprise broker through the Bridge Extension. A lot of our customers use that heavily. Will they be able to bring in Taichi or —?
Maryna Plashenko: 00:18:11.514 Yeah, just a second.
Ravi Subramanyan: 00:18:13.084 Okay.
Jens Deters: 00:18:13.869 Yeah, just to add to that. We have worldwide operating manufacturing clients using our Bridge Extension really to share MQTT messages completely around the world, so.
Ravi Subramanyan: 00:18:30.370 Awesome. Awesome. Yeah, it makes sense. Taichi, I see that — okay, I think you're still muted. Yes. Can you hear us Taichi? I think he went back on mute. If you can hear us, or if you can speak, can you unmute and then let us know if we answered your question? Or would you like to ask any follow-up questions? Okay, looks like we might be having some issues here. In the interest of time, is it — okay, one more time, Taichi, are you able to hear us? Okay. Looks like they're having some technical issues. If we can bring him maybe later, that's fine. We can move on to the next question.
Matthias Hofschen: 00:19:19.675 He doesn't have the working mic, he just writes, so that's fine.
Ravi Subramanyan: 00:19:23.448 All right. We'll just move on.
Matthias Hofschen: 00:19:23.988 Thanks for your question.
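The loop-prevention idea Matthias mentions for bridged brokers can be sketched conceptually: before forwarding, check whether the message has already passed through this broker, and cap the hierarchy depth. HiveMQ's Bridge Extension handles this internally; the "hops" list below is purely an editor's illustration of the principle, not the extension's mechanism.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Message:
    topic: str
    payload: bytes
    hops: List[str] = field(default_factory=list)  # broker IDs already visited

def forward(msg: Message, local_broker_id: str,
            max_hops: int = 3) -> Optional[Message]:
    """Stamp the message for forwarding, or return None to drop it."""
    if local_broker_id in msg.hops:   # already been here: would ping-pong
        return None
    if len(msg.hops) >= max_hops:     # cap the number of hierarchy levels
        return None
    msg.hops.append(local_broker_id)
    return msg
```

A factory-to-cloud hop succeeds once per broker; if the cloud broker's bridge ever tried to send the message back to the originating factory, the first check would drop it, which is the "don't ping-pong messages around" guarantee Matthias describes.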
Does anyone consider the need to certify the implementation of MQTT Sparkplug clients on devices/applications for industries?
Ravi Subramanyan: 00:19:25.463 Okay, I think you did a great job answering the question. No further questions, it looks like. All right. Okay, The next one is: Does anyone consider the need to certify the implementation of MQTT Sparkplug clients on devices and applications for industries? For example, confirmation of compliance with the specification by an accredited test unit. This was submitted by Jakub Wesołowski from PSI Polska. Who would like to take this question?
Jens Deters: 00:19:57.293 I'm happy to pick this one. [laughter]
Ravi Subramanyan: 00:19:58.977 Yes, please. Yes. Yep.
Jens Deters: 00:20:00.830 So yeah. I think, basically, to test compliance and compatibility makes total sense here. As far as I know, for now there's no official certification authority for Sparkplug devices and applications, but there is a Sparkplug Technology Compatibility Kit (TCK), which is also contributed to by us at HiveMQ. This is also on its way to be released according to the Sparkplug 3.0 specification. It's basically a web application that contains a test suite to cover several topics in terms of client, application, and also primary application compatibility. I think I can also paste the link to that repository because, yeah, it's already there. It's available at the official Eclipse Tahu Sparkplug repo. And as I said, if you start and run this, it will start a local web console you can easily access with your browser and, yeah, start testing.
Ravi Subramanyan: 00:21:36.569 Awesome. Great. Cool. Thank you for that answer. Oh, you wanted to add something, Matthias?
Matthias Hofschen: 00:21:44.348 No, no. I was just going to point out the follow-up question from David's account.
Is it possible to use HiveMQ together with AWS IoT, or does HiveMQ replace AWS IoT? What are the advantages and disadvantages of using HiveMQ over AWS IoT?
Ravi Subramanyan: 00:21:50.827 Yes, yes. I was just going to bring it up. So yeah, let's address that. Actually, what he's asking is, "We're using AWS for cloud. Is it possible to use HiveMQ together with AWS IoT or does HiveMQ replace AWS IoT? What are the advantages of using HiveMQ over AWS IoT?" Matthias, would you like to take that?
Matthias Hofschen: 00:22:10.785 Well, the simple answer is yes, of course you should replace AWS IoT with HiveMQ. What are the reasons? So with a grain of salt, essentially, the question is: "Do you need AWS IoT? Can you replace it? Are you using features that you're reliant here on?" So for example, AWS IoT is not a 100% MQTT-compliant implementation of the protocol. And if you wanted to use — for example — the Sparkplug specification with AWS IoT, you will find that difficult to do. So at HiveMQ, of course, we would prefer you to use HiveMQ instead of AWS IoT. And that is not a problem on AWS Cloud. So to run a HiveMQ cluster on AWS Cloud, everything that you need can be provided. And if you do that, then the next question will be: "How can you forward messages, MQTT messages, for backend processing, for example?" On AWS, the simplest way to do that, I found, is using for example, the MSK service from AWS, which is a managed Kafka service. And from there, you could, for example, use Kinesis Data Analytics and process your data further.
Matthias Hofschen: 00:23:51.624 So a lot of times using HiveMQ instead of AWS IoT, the question is: "What other backend services do you want to use?" And a lot of times, it's about having a queue that you can let your messages flow into from which all other backend systems can retrieve and process messages for predictive analytics or whatever your use case would be.
Ravi Subramanyan: 00:24:22.867 Yeah, yeah. And I know that we had — go ahead, go ahead, Jens.
Jens Deters: 00:24:26.820 No, the same combination of HiveMQ with the Kafka Extension you can also use on Azure, for instance, to connect to Azure Event Hubs, right? And if you are — in terms of AWS IoT, you are depending on the AWS IoT device management stuff and you're using Greengrass. It's also possible to do a mixture here. So use AWS IoT resources for your device management and use a different way into the cloud using HiveMQ just for the data streams on the data messages. You can use both in parallel if there's a need for that.
Matthias Hofschen: 00:25:13.975 I want to add one thing here. So it shouldn't come across that we're just wanting to sell HiveMQ. Of course, it's always nice, right? But there's a larger point here. And the point is there's a specification. There's a specification, MQTT 3.1.1. And then there is the follow-up specification, MQTT 5. And if vendors don't follow in their implementation of the specification, if they don't implement the full specification, we have compatibility problems. I would really strongly suggest to whatever MQTT broker you want to use to check, do you have the 100% compatibility? Because as time goes on in the next 10 years, year after year, the capabilities of devices and other actors in the MQTT space, they're going to increase and they're going to rely on using these MQTT features. And if I have a broker that doesn't fully support that, then that is a problem that will limit your use cases. And I wouldn't do that.
Ravi Subramanyan: 00:26:23.842 Absolutely. That's a great response. Actually, this addressed one of the other questions that we had regarding vendor lock-in, right? And I think you've made all the points regarding that, but didn't quite mention that. So the only thing I would like to add is that if you go with say, AWS IoT or Azure IoT Hub, if you will, right? So you're pretty much locked into that ecosystem, right? So apart from the benefits that we provide from a general broker, implementing all of the feature functionality of MQTT and the rich experience that we provide, we also avoid vendor lock-in. We provide the variety of being able to connect to multiple different applications apart from just Azure or AWS. Anything else you wanted to add on that point, Matthias or Jens around the vendor lock-in that you may not have said already?
Matthias Hofschen: 00:27:13.132 No.
Ravi Subramanyan: 00:27:14.413 Okay. I think pretty much we [crosstalk].
Jens Deters: 00:27:16.773 I just want to emphasize here that it's not even the MQTT specification the vendors are implementing here — it's just a really basic subset. So it's more or less about — you are able to deliver MQTT messages via the IoT hubs of the different vendors. So there's almost no quality of service (QoS). You're not free to define your own topic structure. There's no support of retained messages. Basically, it's just to connect and to deliver MQTT messages, just to publish in a very limited way.
Ravi Subramanyan: 00:28:02.503 Yes. Okay, great.
Jens Deters: 00:28:04.471 And you are also able to subscribe to topics, for sure.
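One practical wrinkle in the MQTT-to-Kafka forwarding Matthias describes (e.g. into AWS MSK) is that Kafka topic names do not allow `/`, so MQTT topics cannot be used as Kafka topic names verbatim. The sanitizer below is an editor's illustration of that constraint, not the actual mapping used by HiveMQ's Kafka extension.

```python
import re

# Kafka topic names may only contain [a-zA-Z0-9._-] and are capped at
# 249 characters; MQTT topics routinely contain '/' and arbitrary UTF-8.
_ILLEGAL = re.compile(r"[^a-zA-Z0-9._-]")

def kafka_topic_for(mqtt_topic: str) -> str:
    """Map an MQTT topic to a legal Kafka topic name."""
    name = mqtt_topic.replace("/", ".")  # keep the hierarchy readable
    name = _ILLEGAL.sub("_", name)       # replace anything Kafka rejects
    return name[:249]                    # Kafka's topic-name length limit
```

In real deployments, bridges typically route a whole MQTT wildcard filter into a single Kafka topic and carry the original MQTT topic in the record key or a header, rather than creating one Kafka topic per MQTT topic.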
Why do I need to implement the Sparkplug specification instead of MQTT?
Ravi Subramanyan: 00:28:09.053 Yes, of course, of course. In the meantime, we did get another question on the chat or the Q&A. Gürkan Cekic had a question about “why I need to implement Sparkplug specification instead of basic specification”. I believe he means MQTT. So maybe if Gürkan is available, if we can maybe bring him in to clarify his question, that would be great. Could we bring him in, Maryna?
Jens Deters: 00:28:39.659 Yeah, I think he's on the way.
Maryna Plashenko: 00:28:41.728 Yeah, he's here. Gürkan, if you can unmute yourself, please, you can talk.
Ravi Subramanyan: 00:28:53.665 Can you hear us, Gürkan? Hello? Yeah, it looks like we might have the same technical issues. David, for example, came back — yeah, sorry, the other person that wasn't able to talk came back and said that his mic wasn't working. So I'm not sure if — yeah, yeah, he clarified actually that what he means is MQTT. So why implement Sparkplug over basic MQTT is what he's asking.
Matthias Hofschen: 00:29:23.089 I'll start with a short answer — the clarification. So MQTT is a wire protocol. So it basically describes how to publish and subscribe messages. And it doesn't say anything about the payload of the message or any other — it doesn't contain anything in its specification that limits, basically, the usage of this protocol. And then that's where Sparkplug comes in. And Jens, you probably should then continue.
Jens Deters: 00:30:03.637 Yeah, sure. As Matthias already said, MQTT is the protocol, which is connecting clients, delivering messages, receiving messages; it makes this happen. And Sparkplug is a specification on top of MQTT, which defines a topic namespace, which defines the payload format, and which also defines state management. So in the end, it covers the disadvantages you get from plain MQTT, because MQTT is basically so versatile that you can use it for a lot of use cases, but without a defined topic structure, and without defined concepts like Report by Exception, for instance, and so on.
Matthias Hofschen: 00:31:13.334 Of course. Of course, it's a positive thing that MQTT doesn't do that because all verticals will have their own requirements in terms of what might have to be specified. I'm just, for example, pointing to SFERA — that would be a specification for train systems, for driving trains, as an example.
Jens Deters: 00:31:41.550 Yeah, there are more and more specifications coming up for industry to make use of MQTT, but specified for a specific industry. Matthias already mentioned the SFERA specification for international train connections. There's a new specification from the German public transportation association, the VDV (Verband Deutscher Verkehrsunternehmen). They are creating a specification on how to use MQTT to drive the digital transformation of public transportation.
Ravi Subramanyan: 00:32:30.888 Absolutely.
Jens Deters: 00:32:31.412 There's another standard, which is called VDA 5050, which defines how logistics robots should coordinate themselves, so as to make them interoperable if you have logistics robots from different vendors, for instance. And Sparkplug is the specification that mainly covers, or focuses on, the manufacturing industry.
Ravi Subramanyan: 00:33:05.872 Yes, yeah.
Jens Deters: 00:33:06.654 But I'm also using Sparkplug for my smart home here at home. So there's a defined payload. There's a defined topic structure. I have concepts like Report by Exception. I have built-in state management and command messages, and so on. So it defines what I would otherwise have to agree on myself, or within my enterprise, when it comes to how to use MQTT. That brings a lot of freedom and solves a lot of problems. Especially with the state management, you have support for plug-and-play devices. You just switch on your Sparkplug-compatible device, and it immediately sends out, as we discussed already, the birth message to a certain topic to state, "Here I am. It's a new device." There's a bunch of meta information you can send, like the device manufacturer and firmware version, and also the set of commands the device understands and how you can work with that device. So that's really great, I think.
Ravi Subramanyan: 00:34:35.600 All right, good, good. And if I could just add, yeah, I mean, obviously, like you said, Sparkplug is heavily used in manufacturing. From my perspective, what I see is that in manufacturing, it's not individual devices, right? You have a subsystem of devices that need to all send messages together. And that's where the data model that Sparkplug provides, where you can define a complex structure of machine subsystems, really comes into play. And there is actually another comment by Matthew Paris on one of the previous responses that you gave, Jens; it's some clarification on Leigh's question that you answered. In regard to Leigh's question for Sparkplug: yes, today's Sparkplug requires a client to request a rebirth to obtain all current values. Data is not retained on the broker. People in the Sparkplug Working Group are considering improvements to the specification to decouple the two purposes of birth. One is the metadata, including data types, templates, etc., and the data types most likely to be stale, where the metadata is kept as retained. And there is some command to request all current values. I think that's a great clarification, Matthew. Thank you so much for that.
Jens Deters: 00:35:55.838 Yeah, that's good.
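The defined topic namespace Jens contrasts with free-form MQTT topics can be made concrete with a small parser. The topic shape `spBv1.0/{group_id}/{message_type}/{edge_node_id}[/{device_id}]` is from the public Sparkplug B specification; STATE messages use a different shape and are ignored in this editor's sketch.

```python
# Illustration only: parsing the Sparkplug B topic namespace.
from typing import NamedTuple, Optional

MESSAGE_TYPES = {"NBIRTH", "NDEATH", "NDATA", "NCMD",
                 "DBIRTH", "DDEATH", "DDATA", "DCMD"}

class SparkplugTopic(NamedTuple):
    namespace: str
    group_id: str
    message_type: str
    edge_node_id: str
    device_id: Optional[str]

def parse(topic: str) -> SparkplugTopic:
    parts = topic.split("/")
    if len(parts) not in (4, 5) or parts[0] != "spBv1.0":
        raise ValueError(f"not a Sparkplug B topic: {topic!r}")
    if parts[2] not in MESSAGE_TYPES:
        raise ValueError(f"unknown message type: {parts[2]}")
    # D* (device) messages carry a device id; N* (node) messages do not.
    if parts[2].startswith("D") != (len(parts) == 5):
        raise ValueError(f"device id mismatch for {parts[2]}: {topic!r}")
    return SparkplugTopic(parts[0], parts[1], parts[2], parts[3],
                          parts[4] if len(parts) == 5 else None)
```

With plain MQTT, every team would have to invent (and enforce) a convention like this themselves; with Sparkplug, the namespace, payload encoding, and state semantics come predefined.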
Does HiveMQ support communication from/to IoT devices based on MicroPython?
Ravi Subramanyan: 00:35:57.742 All right. So we have another question from an anonymous attendee: Does HiveMQ support communication from/to IoT devices based on MicroPython, like ESP32 IoT devices, for example? Anybody would like to take that up?
Jens Deters: 00:36:17.591 So HiveMQ supports MQTT communications. And as long as you have an MQTT library available on these devices, you can clearly use it. I also have a bunch of ESPs here at home. They have a Wi-Fi stack on board, so you're able to establish TCP connections, and MQTT is TCP-based. So yeah, it works. It works really great.
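To make the "MQTT is TCP-based and lightweight" point concrete, here is a minimal sketch — plain Python, no libraries — of what an MQTT 3.1.1 CONNECT packet looks like on the wire. On a real ESP32 you would instead use a ready-made client such as MicroPython's `umqtt.simple`; the byte layout below just shows why the protocol fits on such small devices. The client ID is made up for the example.

```python
def encode_remaining_length(n: int) -> bytes:
    """MQTT variable-length encoding (1-4 bytes, 7 bits per byte)."""
    out = bytearray()
    while True:
        byte, n = n % 128, n // 128
        if n > 0:
            byte |= 0x80  # continuation bit: more length bytes follow
        out.append(byte)
        if n == 0:
            return bytes(out)

def connect_packet(client_id: str, keepalive: int = 60) -> bytes:
    """Minimal MQTT 3.1.1 CONNECT packet (clean session, no auth, no will)."""
    cid = client_id.encode()
    variable_header = (
        b"\x00\x04MQTT"                  # length-prefixed protocol name
        + b"\x04"                        # protocol level 4 = MQTT 3.1.1
        + b"\x02"                        # connect flags: clean session only
        + keepalive.to_bytes(2, "big")   # keepalive in seconds
    )
    payload = len(cid).to_bytes(2, "big") + cid
    body = variable_header + payload
    return b"\x10" + encode_remaining_length(len(body)) + body  # 0x10 = CONNECT

pkt = connect_packet("esp32-sensor-01")
print(len(pkt))  # the whole connect handshake fits in a few dozen bytes
```

A constrained device only has to write these bytes to a plain TCP socket, which is exactly why the Wi-Fi stack on an ESP32 is enough.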
Ravi Subramanyan: 00:36:51.626 Great. Awesome. Awesome.
Jens Deters: 00:36:52.842 It has some limitations when it comes to dealing with certificates or with encoding and decoding payloads. But I think the ESP32 is capable of handling this. With an Arduino, you have limitations on capacity, CPU power, and so on.
Ravi Subramanyan: 00:37:18.328 Yes, absolutely. Great. Anything that you would like to add?
Jens Deters: 00:37:21.274 But even an Arduino device is able to send MQTT messages and basic values like temperature or humidity or something.
Ravi Subramanyan: 00:37:28.656 Got it. Okay. Perfect. Anything that you would like to add, Matthias, to that?
Matthias Hofschen: 00:37:32.713 Absolutely nothing to this question.
Is your product mainly located to bring one or more production lines to cloud — to combine and be reachable on cloud worldwide in IIoT architecture?
Ravi Subramanyan: 00:37:34.364 Okay. Thank you. All right. So we have another — so Leigh basically came back on our conversation and said: awesome, so the expectation is that the consumer will send an edge node command and request the rebirth. So back to the conversation around Sparkplug — just a clarifying statement. And thank you, Leigh, for that. All right. The next question is, again, from Gürkan. And I know that you probably won't be able to join live, but here is the question: Is your product mainly located to bring one or more production lines to cloud, to combine and be reachable on the cloud worldwide in IIoT architecture? He says he will replay the recording and take notes, as he doesn't have a stable internet connection. So basically, that's what he's asking. Yeah. Who would like to take that?
Matthias Hofschen: 00:38:24.762 Let me try to give an answer to this one. So yes, combining production lines and sending telemetry data from devices via, for example, a local HiveMQ Broker onwards to a cloud broker. Yes, absolutely possible. And now the question is gone.
Ravi Subramanyan: 00:38:55.020 Oh, I'm sorry, I'm sorry, I'm sorry. I think it's in the answer section.
Jens Deters: 00:38:58.709 Yeah. Answer section.
Ravi Subramanyan: 00:39:00.982 My apologies.
Matthias Hofschen: 00:39:01.988 There it is. And well, what does "reachable" mean? Usually, in a manufacturing architecture, the central HiveMQ installation — the central MQTT broker installation — collects all the messages from all the production lines. And backend systems subscribe either directly as MQTT clients, or — and I favor this in architectures — you're using a queue, for example, the Kafka queue that we mentioned earlier. So basically, you have your HiveMQ Broker, install the HiveMQ Kafka Extension on it, connect to a Kafka queue, and have it forward all the relevant MQTT messages to Kafka. From Kafka, you can then run as many backend consumers as you want to consume these messages and act on them — for predictive analytics, for storing in a data lake or in S3 buckets, or whatever you need to do, depending on your use case. That's a good pattern because the backend systems will not add load to the actual running MQTT broker. And so you can, at any point, do exploratory data analytics on the Kafka queue, or on the other queues that you're using, and not worry about impacting any production setups.
Ravi Subramanyan: 00:40:43.654 Awesome. Thank you, Matthias.
Jens Deters: 00:40:44.686 Yeah. That's how fast data architectures are built. For instance, if you have millions of cars, each car mostly has 10 or 15 MQTT connections — it's not one MQTT connection per car. So there are many devices establishing MQTT connections. We are talking about millions of MQTT connections to a broker cluster, sending very high-frequency traffic — 10K, 20K messages. So you have to deal with all of that and build up a resilient data pipeline for it. And having a HiveMQ cluster with the Kafka connection — that's really a dream team here.
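The broker-plus-Kafka pattern Matthias and Jens describe — backend consumers reading from an append-only log at their own pace instead of subscribing to the broker directly — can be illustrated with a toy stand-in for a Kafka topic. All names here are illustrative; a real deployment would use the HiveMQ Kafka Extension and actual Kafka consumer groups.

```python
class ToyLog:
    """Stand-in for a Kafka topic: an append-only log that many
    independent consumers read from at their own offset."""
    def __init__(self):
        self._log = []

    def append(self, msg):
        self._log.append(msg)

    def read_from(self, offset):
        """Return all messages at/after offset, plus the new offset."""
        return self._log[offset:], len(self._log)

# "Broker side": the extension forwards each MQTT publish into the log.
log = ToyLog()
for payload in [b"temp=21.5", b"temp=21.7", b"temp=21.6"]:
    log.append({"topic": "plant1/line3/temp", "payload": payload})

# "Backend side": two consumers, each tracking its own offset — neither
# one adds any load or state to the broker itself.
analytics_msgs, analytics_offset = log.read_from(0)
datalake_msgs, datalake_offset = log.read_from(0)
print(len(analytics_msgs), analytics_offset)
```

The key design property is the one Matthias names: consumers can be added, removed, or replayed against the log at any time without touching the production broker.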
Is it possible for customers to organize their own security certificate system?
Ravi Subramanyan: 00:41:37.562 All right. Thank you, Jens. Thank you, Matthias, for answering that question. So if I could just add from our customer perspective, right? I mean, just looking at this, we have a lot of customers, again, that are using HiveMQ to connect their production lines. They have multiple locations, like we mentioned, across the world, and they want to ensure that all the devices within the individual locations can talk to each other, bring the data together using our broker. And then all of that can be combined on an enterprise location. Now, the enterprise location can be on the cloud, of course. That's always great because cloud provides a lot of great capabilities. But if cloud is not possible, it could also be on-premises, but it could be in centralized enterprise location defined by the client. We have also had that implementation as well. And the Bridge Extension that we talked about is the one that bridges from the local individual location to the enterprise location. So a lot of our customers have used that. Great. Thank you. Okay. Any other? Yeah. So I see one other question. Okay. So he just says, "Thank you." All right. At this point, I know we talked a lot about communication. Let's fork over to maybe security. I think that's a question that everybody is always interested in. And this question is posted by Andreas. I know Andreas is not on the line, but specifically: Is it possible for customers to organize their own security certificate system? This is the question. Who would like to take this question up?
Matthias Hofschen: 00:43:09.031 I can give it a shortish answer. Essentially, what to keep in mind here is that the MQTT protocol itself doesn't specify security. So it's up to the actual MQTT broker implementation what is offered in terms of security. I'll take HiveMQ as the reference, of course, and what we would offer here. Certificate management is basically two things. The first is the server certificate. And maybe as a contrast to some of the cloud providers: you can configure HiveMQ server certificates any way you want. It can be a self-signed certificate, or it can be a CA-signed certificate — that is flexible. And essentially it is configured onto the actual HiveMQ node. What's important to remember here is that self-signed certificates usually require clients to have access to the public key. Whereas with CA-signed certificates — for example, let's say you have a Java client — if the CA is part of the cacerts file from Java, so it is inside the certificate store of the Java installation that you're running, then you would not necessarily have to distribute a public key here. The second part is, of course, that you can also do mutual TLS, which means that there is a client certificate, and the client essentially authenticates by presenting its client certificate.
Matthias Hofschen: 00:45:07.531 Here we offer, from HiveMQ, the Enterprise Security Extension. And the Enterprise Security Extension can access the incoming certificate, parse it, and for example, retrieve a value from the certificate that then is further used for authorization. So basically to figure out what is this client allowed to publish to or subscribe to. So in terms of a certificate system, it's flexible. You can set up PKI infrastructure. You build up your keystores and truststores and then incorporate them into HiveMQ and use them there.
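As a rough illustration of the two certificate roles Matthias describes, here is how a mutual-TLS context could be configured with Python's standard `ssl` module. The file paths are placeholders for your own PKI material, so the load calls are left commented out; this is a sketch of the configuration surface, not HiveMQ's actual configuration format (which lives in the broker's config files).

```python
import ssl

# Server-side context for a listener that requires client certificates
# (mutual TLS). Paths below are placeholders for your own PKI material.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.verify_mode = ssl.CERT_REQUIRED  # the client MUST present a certificate
# ctx.load_cert_chain("server-cert.pem", "server-key.pem")  # server identity
# ctx.load_verify_locations("client-ca.pem")   # CA that signed client certs

# Client-side context trusting a *self-signed* server certificate would
# instead pin the server's public certificate directly:
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.check_hostname = True
# client_ctx.load_verify_locations("server-cert.pem")  # the "public key"
#                                                      # Matthias mentions

print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

This mirrors the split in the discussion: the server certificate establishes the broker's identity, while `CERT_REQUIRED` is what turns ordinary TLS into mutual TLS.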
Ravi Subramanyan: 00:45:57.134 Thanks, Matthias. Jens, would you like to add something?
Jens Deters: 00:45:59.611 Yeah. And with the Enterprise Security Extension, you are also able to deal with JWT tokens and OAuth 2.0 implementations, right, so?
Ravi Subramanyan: 00:46:16.335 Yeah, yeah, absolutely. Absolutely.
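For readers unfamiliar with the JWT tokens Jens mentions: a JWT is just three base64url-encoded segments, and the claims inside (subject, scopes, expiry) are what an authorization layer maps to publish/subscribe permissions. Here is a stdlib-only sketch — note that it decodes without verifying; any real broker-side check must first verify the signature against the issuer's keys. The claim names and token are made up for the demo.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode (NOT verify) the claims section of a JWT.
    A broker would additionally verify the signature against the
    issuer's public key before trusting any claim in here."""
    _header, payload, _signature = token.split(".")
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a throwaway token just for the demo (unsigned — illustrative only).
claims = {"sub": "device-42", "scope": "publish:plant1/#"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"eyJhbGciOiJub25lIn0.{body}."

print(decode_jwt_payload(token)["sub"])  # device-42
```

A claim like `scope` is the kind of value an authorization layer would use to decide which topics this client may publish or subscribe to.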
Matthias Hofschen: 00:46:18.681 Maybe just to mention this here very briefly — we have talked about all these different extensions: the Kafka Extension, the Bridge Extension, the Security Extension, and so on. So obviously, within HiveMQ, we have an extension system, which comes with a programmable SDK. And essentially, it's like a set of LEGO bricks, so you can extend and implement custom functionality with it. And we, of course, offer lots of services around that — just to round out the discussion of extensions.
Ravi Subramanyan: 00:46:58.463 Yeah, yeah, absolutely. Absolutely. A lot of our customers actually use the SDKs to build their own extensions, to their own custom applications that they may have. And I understand that we actually help develop some of those custom extensions as well on a client basis. Yeah.
Jens Deters: 00:47:15.629 Yeah, yeah. We are not only supporting the implementation of custom extensions, we are also reviewing custom extensions, to make sure they are properly programmed and will not harm an existing cluster. And we also grant —
Ravi Subramanyan: 00:47:45.951 I want to add, that is — I think, Matthias, you mentioned that we can actually use some of the security postures that the IT team of the customer already has in place with our security extension, right? Meaning a lot of our customers come and say, "Hey, we already have IT policies. These are the policies from a security perspective. We don't want yet another software to have its own security policy that we have to manage. We want to reuse whatever we have." And the security extension helps with that, which a lot of our customers really like.
Matthias Hofschen: 00:48:17.551 So I want to be careful here because there are a lot of security systems out there, a lot of ways on how to implement security. The Enterprise Security Extension tries to be as flexible as possible. But of course, we would have to look in detail on what is entailed here. If there's an LDAP system, yes, that can be incorporated. An SQL database that contains security setups can be incorporated. Jens already mentioned JWT token parsing. So OAuth workflows can be implemented. So there's a whole lot of things possible. But of course, in each individual instance, we'll have to take a detailed look at how to do it.
OPC UA vs. Sparkplug: What separates MQTT from other similar standards-based protocols when it comes to enabling IIoT data connectivity?
Ravi Subramanyan: 00:49:04.030 Absolutely. And thanks for the clarification. Great. So the other question that I have is, again, another favorite of our customers, OPC UA versus Sparkplug, right? What separates MQTT and other similar standards-based protocols when it comes to enabling IoT data connectivity? Now, obviously, we provided some of these things, but specifically, OPC UA vs. Sparkplug is the question.
Jens Deters: 00:49:28.644 Yeah. I already mentioned some features of the Sparkplug specification — what is defined within the Sparkplug specification. And basically, OPC UA is not a protocol; Sparkplug is also not a protocol. The protocol we are talking about here is MQTT in the case of Sparkplug, and HTTP underneath in the case of OPC UA. And basically, OPC UA and MQTT follow different communication patterns. MQTT is Pub/Sub, and OPC UA is HTTP-based — it's request/response. With Pub/Sub, you have a very decoupled connection for all the clients and consumers. With OPC UA and HTTP, you have a direct client-server connection, which, in large enterprises, leads to a kind of spaghetti architecture. So anyone who is interested in data has to connect to the right server — the one providing the data they might be interested in.
Jens Deters: 00:51:01.681 This might not fulfill the requirements of the Industry 4.0 movement. So if you want to turn your enterprise or your business into a data-driven company, you have to implement a different information model for your enterprise. For industrial IoT, this means you have to implement a Unified Namespace (UNS) as the center of your data connectivity and your data exchange. What you would like to achieve here is that anyone in the business who is interested in any kind of data or information can be provided with that data and information in almost real time. When dealing with an OPC UA infrastructure, it's really complicated to share all the data in this Unified Namespace way. You have to deal with several steps, using middleware, to make sure that the different layers of your enterprise are able to share data. With a Unified Namespace, anyone in the enterprise is able to access and consume data. And I think with OPC UA, it's really hard to create a Unified Namespace.
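A Unified Namespace is, at heart, a shared topic hierarchy that any consumer can tap into with wildcards. Here is a small sketch of an ISA-95-style path plus minimal MQTT wildcard matching — the enterprise/site/area names are invented for illustration.

```python
def uns_topic(enterprise, site, area, line, asset, signal) -> str:
    """Build an ISA-95-style Unified Namespace topic path
    (enterprise/site/area/line/asset/signal — names illustrative)."""
    return "/".join([enterprise, site, area, line, asset, signal])

def matches(sub_filter: str, topic: str) -> bool:
    """Minimal MQTT wildcard matching: '+' = one level, '#' = remainder."""
    f, t = sub_filter.split("/"), topic.split("/")
    for i, part in enumerate(f):
        if part == "#":
            return True
        if i >= len(t) or (part != "+" and part != t[i]):
            return False
    return len(f) == len(t)

topic = uns_topic("acme", "hamburg", "packaging", "line4", "filler1", "temperature")
print(topic)                                       # acme/hamburg/packaging/line4/filler1/temperature
print(matches("acme/hamburg/#", topic))            # True — a site-level consumer sees it
print(matches("acme/+/+/+/+/temperature", topic))  # True — all temperatures, any line
```

This is exactly the "anyone interested in the data can subscribe" property Jens describes: consumers pick a slice of the hierarchy rather than connecting to a specific server.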
Jens Deters: 00:52:46.419 And the fun fact here is that recently, the OPC Foundation added a new part to the already existing 1,200-plus pages of the OPC UA specification. It's Part 14, and it's about Pub/Sub. They recommend using MQTT or AMQP — but they basically recommend MQTT — when it comes to Pub/Sub to create a Unified Namespace. In that case, I would really recommend considering changing the complete information model of your enterprise. You can still use OPC UA for different things, but you need to implement gateways to provide MQTT traffic.
Ravi Subramanyan: 00:53:55.129 Got it. In other words, we could use the Pub/Sub capabilities of OPC UA to bridge to an MQTT cluster.
Jens Deters: 00:54:01.498 Right. That's right.
What would be the expected behavior of a subscriber when an out-of-order message comes in (sequence number not as expected)?
Ravi Subramanyan: 00:54:02.482 So that would be an effort, but it's doable. Yeah. All right. Okay, so I think we're getting to almost the end. We can probably take a couple more questions. One is back to Leigh. So the question he asked is: What would be the expected behavior of a subscriber when an out-of-order message comes in? Sequence number not as expected. Would you like to address that, Jens, since you were already answering Leigh's question or?
Jens Deters: 00:54:31.372 Yeah, so it really depends on what Leigh is focusing on here, so.
Ravi Subramanyan: 00:54:39.705 Yeah, maybe Leigh. I don't know —
Jens Deters: 00:54:40.348 It would be great to get more context here. So basically, a primary application would behave differently than a data consumer such as a historian database or another MQTT application.
Leigh van der Merwe: 00:54:53.766 Yeah. As you can see, I'm very interested in getting into the nitty-gritty of Sparkplug, where we're implementing — [crosstalk] that's going to, yeah, essentially read — sorry, and write into a historian. So we've got specific cases, which are a bit outside of the primary application. We're trying to figure out, so 0 to 255, what do we do when we do get an out-of-order message? Do we just continue on as if nothing's happened? The sequence numbers are there for a reason. What we can't quite figure out is what the expected behavior is meant to be when we get an out-of-sequence payload.
Jens Deters: 00:55:37.202 How do you detect that a sequence number is not as expected? Is it a duplicated sequence number of an existing message, or?
Leigh van der Merwe: 00:55:59.004 Yeah, so we keep a state on the consumer side that knows what its last sequence number was and attempts to process them in order. So if we just happen to not find number 128 for whatever reason, what do we do when we get to 129?
Jens Deters: 00:56:17.027 And don't you also have a look at the timestamp of the message or the metric you would like to store to this?
Leigh van der Merwe: 00:56:28.268 No, we don't, actually. We write into the database using the metric timestamp, but we don't process them in time order. We'd be processing them within a sequence order.
Jens Deters: 00:56:42.914 Ah, okay, okay.
Leigh van der Merwe: 00:56:44.469 So our timestamps are driven by the PLCs, the devices that we're reading from. So if they've got some time drift, then we're going to have out-of-time-order messages. So much for the atomic clock.
Jens Deters: 00:57:00.698 Yeah. Also, some PLCs don't have a clock on them, right? So you can't always rely on the timestamp.
Leigh van der Merwe: 00:57:09.104 Yeah. Yeah.
Jens Deters: 00:57:10.480 All right. Yeah.
Leigh van der Merwe: 00:57:15.156 I'm happy to take these sorts of things offline if you want to, so [crosstalk].
Jens Deters: 00:57:18.191 Yeah. So I don't actually have a quick answer on that, but I'm happy to keep in touch with you to discuss this and to find a solution for you.
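For readers following Leigh's question: one common pattern — a sketch, not the only valid reading of the Sparkplug specification — is for a consumer to treat any sequence gap as a signal that the node's data is stale and to request a rebirth, rather than silently continuing. The class and method names here are made up for illustration.

```python
SEQ_MOD = 256  # Sparkplug sequence numbers run 0..255 and wrap around

class SeqTracker:
    """Track a node's sequence numbers; on a gap, flag that a rebirth
    should be requested and the node's data treated as stale until
    the new birth arrives."""
    def __init__(self):
        self.expected = None  # unknown until a birth message resets it
        self.stale = True

    def on_birth(self, seq: int):
        """A birth (re)establishes the sequence baseline."""
        self.expected = (seq + 1) % SEQ_MOD
        self.stale = False

    def on_data(self, seq: int) -> bool:
        """Return True if the message is in order; otherwise mark the
        node stale — the caller should then request a rebirth."""
        if self.stale or seq != self.expected:
            self.stale = True
            return False
        self.expected = (seq + 1) % SEQ_MOD
        return True

t = SeqTracker()
t.on_birth(0)
print(t.on_data(1), t.on_data(2))  # True True
print(t.on_data(4))                # False — 3 went missing: request rebirth
```

For Leigh's historian case, "129 after a missing 128" would fall into the `False` branch: stop trusting values, request a rebirth, and backfill from the fresh birth rather than guessing.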
Ravi Subramanyan: 00:57:29.184 Yeah, and I know Leigh, while you're on the line, you did have another question about persistence, persisting the data on a historian. So maybe that could be something that we could also take offline.
Leigh van der Merwe: 00:57:40.285 Yeah, I've hogged it enough with Sparkplug questions.
Ravi Subramanyan: 00:57:43.373 No, no, no, no, that's fine. It's all good questions.
Jens Deters: 00:57:44.691 No, no, that's great. So basically, we see a lot of attention to Sparkplug in the industry. That's really great. It's really increasing very fast.
Leigh van der Merwe: 00:58:01.177 I don't know if the other person's still on, but we implemented our own Protobuf-based protocol, or messaging format. And we found that there were a bunch of gaps that we hadn't considered — interoperability, and also there's a bunch of people who are thinking about problems that you're probably not going to think of. So if you have the opportunity, instead of using basic MQTT, as I think it was put, Sparkplug solves a bunch of problems that you don't even know about.
Jens Deters: 00:58:27.406 Exactly. Yeah.
Ravi Subramanyan: 00:58:29.581 Yeah. We do have Matthew still on the line. I know we are up against the time. I don't know if we can chime in or maybe we can have an offline conversation. But thank you so much for your feedback. Leigh, much appreciated.
Leigh van der Merwe: 00:58:42.933 Thanks so much.
Ravi Subramanyan: 00:58:44.521 Thank you. So we have a couple minutes. Maryna, are we okay to just take one last question or?
Maryna Plashenko: 00:58:51.255 Yeah, sure, sure. Let's go.
What is the most common form of broker cluster deployment in Industrial/Manufacturing projects? In the cloud or on-prem?
Ravi Subramanyan: 00:58:53.563 Yeah, so just one last question, and this is about the common form of broker cluster deployment in industrial manufacturing projects — is it cloud or on-prem? I think we kind of answered some of these questions, but the question, from a cloud architecture perspective, is: Do IIoT devices connect to the broker directly over the public internet, or are some additional broker instances running in the DMZ? Submitted by Jakub. You want to —?
Matthias Hofschen: 00:59:21.379 So just a very quick answer here. It's probably hard to cut it down to one single answer — it will depend on the individual use case and situation of the customer. One of the building blocks would be the Bridge Extension, to connect brokers with each other. So yes, the public internet is, most of the time, part of the solution. Sometimes we have customers that have internal company networks, so that would be okay too. If it's public, then one of the requirements would be TLS, so that the connection is secured. Yeah. And so having several clusters connected with the bridge is, I think, the main answer to this question.
Ravi Subramanyan: 01:00:18.036 Awesome. Yeah, thank you. It's much appreciated. Jens, anything to add from your side or good?
Jens Deters: 01:00:24.942 No, it's really, as always, it depends. So it really depends on your use case, on your requirements. Do you have a distributed environment with several plants all around the planet? Or is this a local shop floor only? So do you want to connect some kind of your data to cloud services for analytics? It really depends. But you can do anything you can imagine in combination.
Ravi Subramanyan: 01:01:00.859 Absolutely. And it also depends on the customer use case, whether they want to stay on-prem or cloud or the region of the world they're in. Sometimes public clouds don't work for them. They have to be on private. So they might choose not to go public over the internet. So that also. All right. With that, I would like to hand it back to Maryna because I think we have exhausted all the questions. And thank you so much for the time. Maryna, go ahead. It's all yours.
Maryna Plashenko: 01:01:27.036 Yeah, thank you. Thank you so much, Jens, Matthias, and Ravi. Yeah. And thank you, everyone, for attending and for interacting and submitting all of these questions. So I will run the second poll now and we'll leave it for a minute so that you could provide your feedback. Yeah, so let me do this. Okay. The poll is up and running. Yeah. So I'd like to say that we will be sharing the recording of this webinar together with the links to some useful resources in the follow-up emails. You will be getting those in the next couple of days. Also, soon we will announce our next Ask Me Anything About MQTT session on our website. So keep an eye on it. And always feel free to reach out to us in case you have any questions. We're here to help. Yeah. So I'll give you another 30 seconds to provide your feedback. And thank you so much. I can see that you're participating in the poll. It's wonderful. Yeah, while we are finishing the poll, maybe you have any additional closing thoughts, Ravi, Jens, or Matthias on our session today?
Ravi Subramanyan: 01:02:57.933 Yeah, I would like to defer to Jens and Matthias to provide their closing thoughts.
Matthias Hofschen: 01:03:03.873 I can only say 100% MQTT.
Ravi Subramanyan: 01:03:08.003 Awesome. Jens, second that? [laughter]
Jens Deters: 01:03:10.336 Yes. Yeah, as I already mentioned, I see a lot of traction in the direction of Sparkplug. And yeah, I really appreciate that. I really like that. So I think Sparkplug is just starting and it's getting more and more important for the industry.
Ravi Subramanyan: 01:03:30.865 Absolutely. Absolutely. And I can just add that the customers I talk to from a manufacturing perspective are very excited about MQTT, right? Even without being technically oriented, just the feature functionality that we provide — the ability to basically use a quarter of the bandwidth to deliver a lot more output — is enough for them. I mean, the amount of money they can save on cellular and other costs is the clincher, not to mention the efficiency that it brings as well.
Maryna Plashenko: 01:04:02.994 Yeah, thank you so much. I have ended the poll now. Thank you very much for participation. Thanks again, Jens, Matthias, and Ravi. Thank you, dear audience, once again. And let us call it a day. It was great to see you all. And goodbye. Have a great day, everyone.
Matthias Hofschen: 01:04:22.768 Thanks, everyone. Bye-bye.
Jens Deters: 01:04:24.036 All right. Excellent. Bye-bye.
Maryna Plashenko: 01:04:26.164 Bye-bye.
Ravi Subramanyan, Director of Industry Solutions, Manufacturing at HiveMQ, has extensive experience delivering high-quality products and services that have generated revenues and cost savings of over $10B for companies such as Motorola, GE, Bosch, and Weir. Ravi has successfully launched products, established branding, and created product advertisements and marketing campaigns for global and regional business teams.
Jens Deters has held various roles in IT and telecommunications over the past 22 years: software developer, IT trainer, project manager, product manager, consultant, and branch manager. Today Jens leads the Professional Services Team at HiveMQ. As a long-time expert in MQTT and IIoT and the developer of the popular GUI tool MQTT.fx, he and his team support HiveMQ customers every day in implementing the world's most exciting (I)IoT use cases at leading brands and enterprises.