
Smart Factory Platforms, Machine Connectivity, and the Unified Namespace

by HiveMQ Team
51 min read

In the dynamic landscape of industrial technology, the push towards fully integrated smart factory platforms is revolutionizing how manufacturers operate. The transformative power of machine connectivity and the Unified Namespace within the manufacturing sector is now evident as many enterprises implement them. Embracing these technologies not only enhances operational efficiency but also prepares enterprises for future scalability and sustainability. 

Touching upon this topic, the HiveMQ Community team hosted an event titled CONNACK, where Marc Jäckle, Technical Head of IoT at Maibornwolff, gave a talk on ‘Smart Factory Platforms, Machine Connectivity, and the Unified Namespace.’ In this talk, Marc highlighted the importance of strategic digitalization and how companies can leverage IoT and industrial IT to navigate the complexities of modern manufacturing challenges.

The idea of the Unified Namespace is to really publish everything in real-time in the Unified Namespace and not just the process data coming from the machine. This includes data from ERPs, the MES calculating KPIs, new material information, and even the positions of AGVs. By making this data instantly available, we can answer specific questions much more quickly and dynamically, facilitating a transformative shift from traditional data access methods.

Marc Jäckle, Technical Head of IoT at Maibornwolff

Developing Modern, Cloud-native Smart Factory Platforms

Watch Marc Jäckle, Technical Head of IoT at Maibornwolff, discuss the Unified Namespace and the importance of connectivity solutions that allow users to connect to machines using industrial protocols, define the data points they need, and effectively retrieve them.

Transcript of the Video

[00:00] Hi everybody, my name is Marc. I'm Technical Head of IoT at Maibornwolff. Maybe just a quick note on Maibornwolff: we do custom software development, custom solution development, and as a company we're quite focused on product digitalization and digitalization of the factory. So about half of our 930 employees are currently working on such projects, meaning on IoT or industrial IT projects, from building web apps and mobile apps, doing data analytics and machine learning use cases, connecting devices and plants, building IoT platforms, and so on. So the whole end-to-end journey, sometimes everything, sometimes just part of it. And I will build a bit on what Walker presented. So we've been introduced to the Unified Namespace concept. Some of you probably already knew about it. And I'm going to present on how we at Maibornwolff typically build smart factory platforms, how we build the Unified Namespace, and how we do machine connectivity. So I'm going to talk a lot about technology and to some extent also about more conceptual changes like the Unified Namespace. The picture here shows a CIO or plant manager who realizes a few years too late that he should have built a Unified Namespace and a smart factory platform and transformed his enterprise digitally. And unluckily, this was too late. Not sure if it's Ford that Walker keeps talking about, but yeah, so let's dive in.

Why Smart Factory Initiatives Fail

[01:55] So actually there are a lot of reasons why smart factory or industrial IoT initiatives fail. Sometimes it's that the company doesn't have a digital strategy, or that it's just this one person who's responsible for the smart factory and doesn't have any C-level support. One of my favorites: we had two companies now that were busy with the integration of a new ERP system, which kept them from transforming their enterprise. Companies get stuck in a PoC purgatory, moving from PoC to PoC to PoC to prove that you can actually get data from the shop floor into the cloud. Yeah, nobody knew that before. And the good news is I'm going to help you with at least one of these reasons. And the reason is that a lot of companies lack a unified end-to-end infrastructure and architecture on which they can build their smart factory use cases. So let me start with the five key success factors that, from my perspective, you need to successfully build such a platform, such a solution. The first one is to use Kubernetes from the cloud to the edge to build a really unified platform. The second one is to use multicluster management tools. The third one is to get the machine connectivity right; I'm going to talk more about this. Obviously, the fourth one is to build a UNS. And the fifth one, just as important, is to build a data catalog. So let's dive in. Maybe to start, for those not familiar with the terms on the left: those signify programmable logic controllers (PLCs). Those are essentially industrial PCs that control and automate industrial machines. And usually we also have some sensors on the shop floor, actuators, and so on.

Different Types of Smart Factory

[04:00] So why do we even have different types of environments in a smart factory platform? The reason is quite simple: we have different workloads that require different environments. A lot of our customers, for example, follow a cloud-first approach. That means they want to do as much as possible in the cloud. So we often do data analytics in the cloud. We build services that implement some smart use cases, or we do machine learning development, experiments, model training, etc., everything in the cloud. We do CI/CD there and so on. So we try to deploy as much as possible in the cloud, probably about 80% of the stuff that we deploy. Then we also have things that we can't do on the cloud side. The first one obviously is machine connectivity. Very often we're not allowed to connect to industrial machines on the shop floor from the cloud or even from corporate networks. So we need a second environment that sits somewhere in the IT/OT DMZ or some part of the production network from where you're allowed to connect to industrial machines, so you can connect to machines and get data from them. Then often we also want to run production-critical services on-prem as well. Production-critical means that these are services without which the production would stop. So for example, in case we have a cloud connectivity failure, which happens from time to time, these services should keep running, and they should be able to run in a highly available manner. So these are workloads that we usually put on-premises in a highly available environment. Then you also might want to do some data reduction to move less data to the cloud, or maybe you have some requirements to do low-latency machine learning, for example for visual inspection or something like this. So we need some central environment in the plant.
The third environment, which I call Edge, is essentially an environment that runs on a single-instance edge device, which we actively try to avoid because they're expensive. You need a lot of them, they need electricity, you need to install them, you need to operate them. So you actually don't want them. Instead you want maybe just three 19-inch pizza boxes in a rack somewhere that can be managed very easily and of which you don't have too many. But in some cases we're not able to connect from a central environment in a plant to PLCs. For example, we often have the rule that this is only allowed if we use encrypted protocols. So it's not allowed if you use, for example, unencrypted S7. If they have some S7-300/400 systems that only speak unencrypted S7, we have to put an edge device on the network segment on which the machine runs, because that is allowed to connect to this machine. Then sometimes we also have to put an edge device on these network segments, for example to do visual inspection, if it's not allowed to connect the camera directly to the central environment. Instead you have to connect it to an edge device and then the edge device to the network of the machine. So in these few cases we can't avoid using edge devices, but we actually try to avoid them.

Importance of Having a Unified Technology Stack From Cloud to the Edge

[07:35] Now, the first success factor: in order to simplify development and operations of these environments, you really need to have a unified technology stack. You can't handle having three technology stacks, one for each of these environments. So what we do is use Kubernetes from the cloud through the central cluster in the plant to the edge device, in order to unify the environments as much as possible from a development and an operations perspective. And since we often have different Kubernetes distributions, for example maybe AKS or EKS in the cloud and then K3s on-prem, we also have to unify these environments by adding add-ons for things like monitoring, logging, deployments, etc., so these different distributions are as identical as possible, again from a development and operations perspective. Now we already have three different types of environments: the cloud, the central cluster in the plant, and edge devices. This means that there can already be multiple environments per plant. So you have the cloud on the left, and you have maybe some edge devices, and you have a central cluster. And as many companies don't have just one plant, this gets even worse, because each additional plant adds additional environments. So if you have ten plants and ten environments per plant, plus the cloud, you're already at 101 environments that you have to operate and deploy to. These are a lot of environments to manage. And this is where the second success factor comes in: you need some form of multicluster management to be able to manage all these environments in an efficient way, with a small platform team that can take care of all of them.

Why Have a Central Management System in Smart Manufacturing?

[09:40] So there are different types of solutions that you can use, like D2iQ DKP, Rancher, etc. One has some more features than the others, and you have to do more or less depending on which solution you choose. But in the end, you need a solution that helps you centrally manage the lifecycle of all these environments, meaning that you can create new clusters, update clusters, and shut down clusters, and it should be as easy as creating a cluster YAML, pushing that to Git, and then everything will be done automatically. So for example, if you create a new cluster, you create such a cluster YAML defining how the cluster should look. You push it to Git, then it will be picked up by a central management cluster that will maybe use vCenter APIs to first instantiate VMs on the on-prem infrastructure, install the operating system and Kubernetes, and then install all the add-ons, or it will use lights-out management to provision the operating system directly on the bare-metal hardware. There are a few more things that are important about this central management. You also need centralized monitoring and alerting. The idea is that as soon as developers deploy something to one of these workload environments, they instantly have all the metrics from these applications available in a central location. So they can see all the metrics in their dashboards, they can deploy their dashboards and alert rules in the central location and monitor all of the clusters. The same goes, of course, for a platform team that manages all these clusters. What we do here, for example, is have Prometheus instances running in all the environments that use remote-write to write all the metrics to a central management cluster, and then we have all the data coming together there. For logging, we usually don't do the same because of the amount of data.
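The remote-write pattern described here can be sketched as a Prometheus configuration fragment; the central endpoint, credentials, and label values below are placeholders, not details from the talk:

```yaml
# Prometheus remote_write on a plant or edge cluster (illustrative; the
# central endpoint, credentials, and label values are placeholders)
remote_write:
  - url: https://metrics.central-mgmt.example.com/api/v1/write
    basic_auth:
      username: plant-01
      password_file: /etc/prometheus/remote-write-password
    write_relabel_configs:
      # tag every series with its origin so clusters stay distinguishable
      # once all metrics land in the central management cluster
      - target_label: cluster
        replacement: plant-01-central
```

With this in place, dashboards and alert rules only ever need to be deployed against the central Prometheus-compatible store.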
What we do there is collect the log files from the edge devices and push them to the central cluster or the cloud, and then have something like OpenSearch or Elasticsearch to be able to look at the log data. Another thing that's important is team management. You have 100 of these environments, so if you want to set up a new team, you just want to create it in a central location, and then a Kubernetes namespace will be created for this team in all of these environments automatically. They will get all the access rights, single sign-on will work for them, and it's all automated. You just have to maybe define the new team as a YAML file and deploy this to your central management cluster. Then another aspect is, again, you don't want to write deployment pipelines for all these environments. Instead, what we do here is use a GitOps approach to do multicluster deployments using, for example, FluxCD or ArgoCD, where you then just define in your deployment definition to which of these environments your deployment should go. It could be all of them, could be a group of them, could be just one. And what these tools also bring, or what we add later on, are things like security scanning, policy management, and so on.
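As a sketch of what such a GitOps deployment definition might look like with FluxCD (the repository name, path layout, and environment grouping are assumptions for illustration, not the setup described in the talk):

```yaml
# Flux Kustomization reconciled on a workload cluster (illustrative;
# repository name, path layout, and group names are assumptions)
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: shopfloor-services
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: platform-deployments
  # each cluster points at the overlay for its environment group, so one
  # repo can target all plants, a group of plants, or a single cluster
  path: ./apps/shopfloor-services/overlays/plant-central
  prune: true
```

Pushing a change to the repository is then all it takes to roll a deployment out to every environment in the targeted group.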

Common Challenge in Smart Manufacturing 

[13:20] As many of you might know, a further challenge in many factories or many companies is that you have an awful lot of different PLCs per plant, different sensors, and they all come from different vendors, maybe from different time periods. And of course each plant is going to be different, because they've been built at different times. So they may just have different hardware generations or devices from different vendors altogether. What makes it worse is that all these devices often even speak different industrial protocols. Just to give you an idea, this is the list of protocols that Litmus Edge, one of these machine connectivity solutions, supports. You don't have to read them all. It's just that there are a lot of industrial protocols. Some are more common than others, but at least in brownfield projects, you're going to meet quite a few of them. In greenfield projects, when newer factories are built, you usually only have to live with maybe OPC UA client/server and maybe something like S7. So how do we deal with this?

Machine Connectivity and IIoT Data Interoperability

[14:40] Well, the first step is obviously to get data from these machines, meaning you need a solution that speaks all of these protocols. It could be OPC UA, for example, with S7 here, and maybe a sensor that you connect to via HTTP. So you need a solution that lets you deploy connectors either to the central cluster or to the edge devices that you might need in order to connect through unencrypted protocols. Now the problem is that this is only the first step, because all of these machines, if they come from different vendors, even if they're the same type of machine, say a robot from Kuka and so on, all have different data points and different data structures. And if you just get the data and send it to the Unified Namespace, you will have a problem, because whatever you're building needs to deal with these different data structures and different data points. So it's extremely hard, if not impossible, to scale a use case across production lines or even across plants, because each time you have to implement logic that deals with these differences. The same goes for data analytics in the data preparation phase: you don't want to have to handle all these differences. Instead, what you want is a standardized model for a certain type of machine. We have an example here: a type of machine that comes from Vendor A and from Vendor B. They have different data points and different data structures internally. So what these connectivity solutions must let you do is, obviously, connect through these industrial protocols to the machines, define which data points you want, get the data points, and then, and that's the really important part, let you define a standardized model for the machines and transform and normalize the data into that standardized model. So you're able to treat these machines in the same way from here on, as soon as you publish them to an MQTT broker.
This is not always as easy as it sounds, because sometimes a machine might not have certain data points, and you might have to add an additional sensor and get that data point from the sensor. So it can get a bit complicated, but you can get quite far with just doing transformation and normalization of the data. By doing this, you're then easily able to scale a use case that builds on the standardized model to new production lines or even new plants. So it reduces the cost and effort considerably.
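A minimal sketch of the transform-and-normalize step described above. The vendor payloads, field names, and target model are all invented for illustration; real connectivity tools express this mapping in their own configuration rather than hand-written code:

```python
# Normalize vendor-specific payloads into one standardized model so that
# downstream consumers never see vendor differences. All field names and
# vendor formats here are invented for illustration.

def normalize_vendor_a(raw: dict) -> dict:
    # Vendor A reports temperature in tenths of a degree under "tmp10"
    # and encodes the machine state as a small integer.
    return {
        "temperature_c": raw["tmp10"] / 10.0,
        "spindle_rpm": raw["spd"],
        "state": {0: "idle", 1: "running", 2: "fault"}[raw["st"]],
    }

def normalize_vendor_b(raw: dict) -> dict:
    # Vendor B nests everything under "process" and uses uppercase strings.
    p = raw["process"]
    return {
        "temperature_c": p["temperature"],
        "spindle_rpm": p["speed_rpm"],
        "state": p["status"].lower(),
    }

NORMALIZERS = {"vendor_a": normalize_vendor_a, "vendor_b": normalize_vendor_b}

def to_standard_model(vendor: str, raw: dict) -> dict:
    """Map a raw machine payload onto the standardized model for its type."""
    return NORMALIZERS[vendor](raw)
```

Once both vendors' machines publish this shape, a use case written against the standardized model runs unchanged on either machine.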

Role of MQTT and MQTT Broker in Building a UNS

[17:27] So now we've got the data and transformed it into a standardized model. The next thing we want to do is make this data available in a scalable manner. This is where MQTT and an MQTT broker come in. We want to use a publish-subscribe mechanism and not point-to-point connections, because point-to-point connections don't really scale. If you've tried to connect, let's say, more than five services to an OPC UA server from Siemens, for example, you know that this is not going to work. Even an S7-1500 will not scale beyond a couple of connections. So instead we use MQTT as a publish-subscribe mechanism. You publish the data that you're getting from the machines here on the left, through those connectors, once, and then multiple services and applications can subscribe to that one published stream and use the information to do something else. And then again, the idea is that these services also publish the information they create, the results they create, to this central MQTT broker. And we bridge the broker in the plant to a broker cluster in the cloud, so all of the data from all of your plants comes together in the cloud. Then you can build services that compare the data from the different plants, for example to figure out why one KPI is better in one plant than in another. What are they doing differently? So you can compare the data between these two plants and build additional services there.
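The plant-to-cloud bridging can be pictured as a topic remapping: each plant publishes under its local namespace, and the bridge prefixes messages with the plant's identity so data from all plants coexists in the cloud broker. A tiny sketch, with all names invented for illustration:

```python
# Sketch of the topic remapping a plant-to-cloud MQTT bridge applies:
# a message published locally reappears in the cloud broker under a
# prefix identifying the enterprise and plant. Naming is illustrative.

def bridge_topic(local_topic: str, enterprise: str, site: str) -> str:
    """Return the topic this message appears under in the cloud broker."""
    return f"{enterprise}/{site}/{local_topic}"

# Two plants publishing the same local topic stay distinguishable centrally:
t1 = bridge_topic("area1/line3/filler/temperature_c", "acme", "plant-01")
t2 = bridge_topic("area1/line3/filler/temperature_c", "acme", "plant-02")
```

A cloud-side service comparing plants then simply subscribes to `acme/+/area1/line3/filler/temperature_c`.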

Efficient Data Analytics with MQTT and UNS 

[22:35] Now we already publish the data on the MQTT broker, and this is where the Unified Namespace comes in that Walker was talking about. And that's not just a technological challenge; it's more of a mindset and cultural change in your company, because the idea of the Unified Namespace is to really publish everything in real-time in the Unified Namespace, and not just the process data coming from the machine, but also the data from the MES and from the ERP. So if there's a new work order being created by the ERP, it's published in the Unified Namespace. If the MES calculates some KPI, like availability, it's published in the UNS. If you get some new material, that information is published. If something is stored in a storage location, it's published in the Unified Namespace. If you have AGVs, their position and status are published in the Unified Namespace. If you have forklift drivers, their position and what they're currently doing are published in the Unified Namespace. Everything is published in the Unified Namespace. So other services, tools, and applications can consume this information, do something else with it, and then again publish their result in the Unified Namespace. One example might be: the ERP issues a new work order for Filling Line 1 here, and then there might be some service that loads the respective configuration for this work order and reconfigures the machine automatically based on the new work order. So you can completely automate the reconfiguration of machines, for example. Or the MES might subscribe to all the process data coming in from the filling lines, calculate some KPI, and again publish this KPI to the Unified Namespace. The structure is also based on the ISA-95 equipment model levels here. But you should always adapt this a bit to your specific needs.
For example, you might also combine all the KPIs under a KPI subtopic, or you might not have separate areas and instead just lines, whatever really fits for you. So that's the idea of the Unified Namespace. And this is really essential. And it's more of a cultural change, because so far, if you wanted data from the ERP, you had to go to the SAP team, and then it was going to take weeks or months until you got data from the ERP system, and then they'd tell you that you can only query it every 5 seconds because it cannot handle the load, and so on. So this is a transformation from doing this in weeks or months to having it instantly available when you want to answer a specific question that comes up, because you have all the data available. Now we already have the data in the cloud, so how do we get it into data analytics? Very easy, and Kai is probably going to be happy: we add Kafka to the mix, or at least some Kafka-compatible managed service. That means we typically move all the data that comes in from the MQTT broker into Kafka as well, and use Kafka as the starting point to do stream processing via Apache Flink, or Spark Structured Streaming if we can't avoid it, and also to store all of the data in a time series database and in some form of object storage like Azure OneLake or S3, depending on which cloud provider you're on. From there you can then use some query engine or notebooks to query the data and do any form of analytics, or build machine learning models on the cloud provider's platform and train your models, et cetera. What's also helpful is to have a machine learning platform like Azure ML that lets you deploy models to any on-prem Kubernetes cluster, so you can serve and monitor your models running on-prem as well as in the cloud, from a central location.
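One way to picture an ISA-95-derived UNS topic structure, including the per-company adaptations Marc mentions, such as dropping the area level. All names here are invented for illustration:

```python
# Build a UNS topic from an ISA-95-style equipment path. Which levels you
# keep or collapse (e.g. dropping "area") is a per-company decision; all
# names here are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EquipmentPath:
    enterprise: str
    site: str
    area: Optional[str]
    line: str
    cell: str

    def topic(self, *suffix: str) -> str:
        levels = [self.enterprise, self.site]
        if self.area is not None:   # some companies skip the area level
            levels.append(self.area)
        levels += [self.line, self.cell, *suffix]
        return "/".join(levels)

filler = EquipmentPath("acme", "plant-01", "packaging", "filling-line-1", "filler")
kpi_topic = filler.topic("kpi", "availability")
```

A KPI calculated by the MES would then be published under something like `kpi_topic`, right next to the process data for the same equipment.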

Making Data Discoverable and Understandable Through a Data Catalog

[24:00] Now let's come to the last success factor, and that's one I think people are not aware of enough yet: integrating a data catalog into your platform, because you have a lot of data sources. You have maybe some OPC UA data sources on the left, and you have MQTT topics, Kafka topics, data in object storage, a time series DB, and other data in the rest of your enterprise. And how do you discover this data? You need something where people can search for data sources to discover them and understand them. A place where everything is documented, where the schema is documented, where you can, for certain data sources, even query the data right away. For example, if you use something like Azure Purview, you can also directly query data that's stored in about 15 or 20 database types, I think. But you also need to document your Unified Namespace: what data structures are published there, and what does it all mean? So people building systems and tools really understand the data and don't have to start researching what certain data points mean or where they can find them. So this is really another important thing in such a platform: to have a data catalog to really let people discover data and be able to work with it. And maybe, if you look ahead, you can even use this to feed all of your historic data and all of your real-time data into ChatGPT or OpenAI-based models, for example, and enable your regular employees to become data scientists, because then they can use ChatGPT to ask questions of the data. And you don't need some data analyst implementing a query for the user; instead they can really interact with the data and ask new questions that nobody wrote source code for. So this is another reason why data catalogs are important: you really make all of the data sources that you have discoverable and usable. All right, so let me summarize this again.
The five key success factors, so I can wake you up tomorrow morning at three and ask you about them: use Kubernetes from the cloud to the edge, use multicluster management tools and things like centralized monitoring, get the machine connectivity right, build a Unified Namespace, and, also important, build a data catalog. Thank you.


Attendee: We're quite familiar with Sparkplug. Operationally do you use Sparkplug? 

[27:30] More often not than yes, because of the limitations that Sparkplug currently has. It starts with it requiring you to use Quality of Service Level 0 for all of your data, which is a no-no for many customers. They don't accept this. They want to use Quality of Service Level 1 to make sure they receive all the data, especially if you're in pharma or in the chemical industry, where you have GxP data that may not get lost. So you have, maybe, legal reasons why you may not lose the data. In these cases, Sparkplug is out of the picture right away. And then you have those strong limitations on how you structure your topic namespace. What's also a problem is that it's very device-centric, so all the data for a device is published under the same topic, and you can't subscribe to individual data tags. That's also an issue. There are some workarounds; you can introduce virtual devices and things like that. But it means that if I only care about a certain data tag, I still have to process all of the data coming from the device and then throw away 99%. And this just adds to cost. So these are examples of reasons why we try to avoid Sparkplug. Sparkplug makes sense if you use a lot of off-the-shelf software, where the standardization helps. So, I tend to say, smaller enterprises that use off-the-shelf tools. For them, Sparkplug is more fitting than for larger companies. Yeah, essentially we put the ISA-95 levels into the group ID, as most people probably do, but you just don't have the flexibility that you really want. And unluckily, it doesn't seem to move forward in that direction. But I'm still hopeful. It has so much potential. It would be sad if they didn't live up to it.
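For reference, Sparkplug B fixes the topic layout to `spBv1.0/{group_id}/{message_type}/{edge_node_id}[/{device_id}]`, which is why the ISA-95 path ends up packed into the group ID. A small sketch; the identifiers are invented for illustration:

```python
# Sparkplug B prescribes the topic layout, so the only place an ISA-95
# path fits is the group ID. Identifiers below are invented.

def sparkplug_topic(group_id: str, message_type: str,
                    edge_node_id: str, device_id: str = "") -> str:
    parts = ["spBv1.0", group_id, message_type, edge_node_id]
    if device_id:
        parts.append(device_id)
    return "/".join(parts)

# ISA-95 levels flattened into the group ID, e.g. enterprise:site:area.
# Note that all tags of the device arrive on this one topic; you cannot
# subscribe to a single data tag.
topic = sparkplug_topic("acme:plant-01:packaging", "DDATA",
                        "line3-gateway", "filler")
```

Compare this with the free-form UNS topics above, where each data tag can live on its own subscribable topic.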

[30:29] There are two parts to my answer. The first is that it helps if you use a machine connectivity tool that lets you implement some form of governance process, ideally one that uses Git and YAML files where you define everything, so you can review merge requests and maybe deploy to a staging environment first before you put it into production. Something like Cybus Connectware or similar edge tools that are YAML-based make this much easier than tools that use more low-code visual front ends, like Litmus Edge, because those have to implement it in their tool. So this is one thing I would do. Of course, we only use these GitOps-based approaches in more mature enterprises that are not overwhelmed by the approach. And the second part is that you can talk to the HiveMQ colleagues. They have a solution to check the schemas of the messages that are coming in, so nobody is sending any rubbish to your Unified Namespace. And even, yeah, since yesterday, we were able to fix that as well.
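A minimal stand-in for that kind of schema enforcement: reject payloads that don't match the expected shape before they reach the UNS. The field names and types are invented; a real deployment would use JSON Schema or Protobuf together with the broker's policy features rather than hand-rolled checks:

```python
# Minimal stand-in for broker-side schema enforcement: reject UNS payloads
# that don't match the expected shape. Field names and types are invented;
# a real setup would use JSON Schema or Protobuf plus broker policies.

import json

EXPECTED = {"temperature_c": float, "spindle_rpm": int, "state": str}

def validate(payload_bytes: bytes) -> bool:
    """Return True only if the payload is JSON matching the expected model."""
    try:
        payload = json.loads(payload_bytes)
    except ValueError:
        return False
    if not isinstance(payload, dict):
        return False
    return all(
        field in payload and isinstance(payload[field], ftype)
        for field, ftype in EXPECTED.items()
    )
```

Messages failing the check would be dropped or routed to a quarantine topic instead of polluting the Unified Namespace.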

Attendee: Do you use a repo per customer? A git repo per customer, or do you have a central one? 

[32:04] No, whatever we do is always customer-specific. We don't have any products; we build solutions specific to our customers. So it's always their own Git repos, and always their Azure subscriptions or AWS accounts that these things run in.

Attendee: I think this is off now. So, great presentation, thank you very much. Glenn Frey from Australia. I'm interested in your thoughts on how some of these standard models might fit around the work going on from, firstly, the OPC Foundation with their standardized models and the companion specifications, but also the bigger picture with the Asset Administration Shell and digital passports that are now coming very quickly to this industry.

[33:00] It would be great if the machine manufacturers implemented these standardized information models. The general idea is really good; it's just not implemented enough. What our larger customers, like automotive OEMs, do, or what we do with them, is define their own custom models that they put into the contracts that the machine manufacturers have to fulfill. But of course, not every enterprise can do that. They put it into the procurement process as a requirement, and then the machine manufacturers just have to implement these models. So in the ideal case, those machines that you saw on the left would, of course, already publish these standardized models, so you don't have to build them yourself. Thank you. So actually, Marc is going to be back on the UNS panel, and I'm also going to be here all day.

We use Kubernetes from the cloud to the edge to build a really unified platform. This unification is essential not just for simplifying development and operations but also for ensuring that these environments are as identical as possible from a development and operations perspective. It's about creating a seamless technology stack across all environments to maximize efficiency and scalability.

Marc Jäckle, Technical Head of IoT at Maibornwolff


The integration of smart factory platforms and the Unified Namespace has illuminated the path forward for digital transformation in manufacturing. The key takeaways from Marc's talk emphasize the necessity of adopting a unified technology stack and strategically managing multicluster environments to streamline operations and facilitate seamless connectivity. By understanding and implementing these principles, manufacturers can enhance their operational efficiency, improve data accessibility, and ultimately drive innovation. As the industry continues to evolve, these practices will become increasingly crucial in maintaining competitive advantage and achieving long-term success in the rapidly changing digital landscape.

Join a vibrant community of people using MQTT across industries, including Smart Manufacturing, Connected Cars, Energy, and more. If you are on a digital transformation journey within your organization, read our eBook Architecting a Unified Namespace for IIoT with MQTT authored by Kudzai Manditereza.

Download the eBook

HiveMQ Team

The HiveMQ team loves writing about MQTT, Sparkplug, Industrial IoT, protocols, how to deploy our platform, and more. We focus on industries ranging from energy, to transportation and logistics, to automotive manufacturing. Our experts are here to help, contact us with any questions.
