Conversation with Kudzai: What is the Business Value of the Unified Namespace?

60 Minutes

Webinar Overview

In this session, we explore the essential requirements for industrial businesses to succeed in digital transformation. We discuss the challenges businesses encounter with traditional digital transformation approaches and highlight the crucial elements that make the UNS model particularly effective in facilitating successful digital transformations.

Transcript

Introduction

[00:00:00] Scott Baldwin: Hi everybody, my name is Scott Baldwin. I'm the Community Leader here at HiveMQ and really happy to have everybody joining us today.

[00:00:12] And let's get things started with a quick introduction of Kudzai who of course, probably needs no introduction.

[00:00:17] He's a longtime influencer. Many of you have seen him talk about everything from MQTT to IoT. He's been recognized as one of the top 100 global influencers talking about Industry 4.0 and can often be found working with folks across the industry advocating for better approaches to IT/OT work and digital transformation.

[00:00:36] His work at HiveMQ has been transformational, from his writing and thought leadership on various topics. He recently started a new role as a Senior Industrial Solutions Advocate, and I'm really lucky to work alongside him every day here as part of the community advocacy team at HiveMQ. Outside work, Kudzai also runs a popular YouTube channel called Industry 4.0 TV, which is focused on IoT and smart manufacturing technologies. And he's a busy father, husband, and a million other things, I'm sure, on the side. Welcome Kudzai, really glad you could join us today.

[00:01:10] Kudzai Manditereza: Thank you. Thank you so much, Scott. Yeah. That's the first time someone has ever introduced me this comprehensively. Thank you so much.

The Business Value of the Unified Namespace

[00:01:20] Okay. So welcome everyone. So today's session is what is called Conversations with Kudzai, or maybe just to give you some context around what we're doing here: we're going to be holding these as monthly events, having me and having you come together and discuss hot topics that are currently in the industry. It is a way to really guide you into thinking about them, and also taking your feedback and answering your questions in that regard.

[00:01:48] So in today's session, we're going to be specifically talking about what is the business value of the unified namespace. So if you have been following some of my content, and also some of the webinars that we do here at HiveMQ around the unified namespace, for the most part we focus on the technical aspects of how to build the unified namespace and how you put all these pieces together.

[00:02:17] We are going to get into that in later editions of this event format. But for today, we're going to try and focus on the business aspect of it. Exactly, what is the value proposition when it comes to the unified namespace and more importantly, why are we seeing a lot of customers and a lot of firms starting to adopt the unified namespace?

[00:02:38] The assumption is that a lot of you on this call are familiar with MQTT to some extent, and maybe have heard about the unified namespace. But if not, this conversation is also designed to provide you with that background of exactly where we are coming from as an industry, where we are going, what challenges we are facing along the way, and how exactly the unified namespace makes that a reality. What is the valuable aspect of the unified namespace as far as the business is concerned?

[00:03:08] And also what I'll do is show you a demo. Because that's always a big part of showing the value of the unified namespace. Because for the most part, you're talking about this concept, this abstract concept, and it's not really easy to see the value of it without seeing a tangible demo of exactly what it is or the kind of information that it makes available to the business for you to be able to make all those informed decisions.

The Current State of Digitalization

[00:03:34] So as Scott mentioned, the hope here is that we make this conversational. So at any point during the session, if you feel you want me to expand on a point, or you've got a question, a comment, an addition, please feel free to drop it in the chat. And we're also able to unmute you if you want to come on. So I think the best place really to start, as far as the value of the unified namespace is concerned, is to look at the current state of digitalization or digital transformation in industry.

[00:04:05] As you may know, there isn't a clear date for exactly when digital transformation or digitalization started; around 2012, 2013, 2014, some companies before that, some after, but that's the general timeline. And throughout that whole period there have been a lot of manufacturing companies, industrial companies, that have adopted some sort of digital transformation initiative or strategy or project, mainly in the form of pilots.

[00:04:32] And a recent study by McKinsey, in which they conducted interviews with executives in the manufacturing space, states that 74 percent of all digital transformation projects are still stuck in that pilot phase. And this is a staggering figure, and it's something that we also see when we interact with a lot of customers; they're stuck in that phase and can't really move out of it.

[00:05:00] And one of the big reasons why that is the case is, first of all, it becomes difficult to scale. A lot of manufacturing companies can't scale out of the pilot phase to really realize the full value of their digital transformation. And we'll get into the reasons exactly why it is that they are failing to scale their solutions. But that inability to scale a solution out into your entire enterprise is one of the biggest reasons why customers, or whoever is implementing digital transformation, are stuck and failing to realize that value.

[00:05:35] Secondly, it is an expensive exercise. Again, we're going to get into the actual details of why it is super expensive to implement a digital transformation project the way that it is currently being done by a lot of industrial companies.

[00:05:53] And then the other issue is this idea that when manufacturers or industrial companies invest in a digital transformation strategy, they have expectations that it will address all of their data aspirations, or as many of their data aspirations as possible. But they get stuck in the pilot phase because they can't keep up with those data use case demands based on the current implementations that they have. So it really keeps them in that phase where they're stuck; they can't break out of it because there isn't really a case to make for the return on investment. And we're also going to get into how that is the case.

[00:06:33] And finally, there is this idea of time: the time to value is just too long with the traditional approach to implementing digital transformation strategies, where some companies are only starting to realize value after three years, four years, and so on. So the time to value is exceptionally long. This is really what keeps companies stuck in that pilot phase.

Why Digital Transformation Fails

[00:06:57] So I like this graphic a lot. This is a graphic by CESMII which really shows the current lay of the land as far as digital transformation is concerned, or the industrial automation space in general, right?

[00:07:10] We have gone through these two or three or four decades where companies and vendors have been building and selling solutions that are built for a specific function. This could be a logistics function, such as shipping, or a quality function, such as defect tracking, and all these different things have resulted in information existing in different silos. And then there's also this idea that between OT and IT, to a large extent, there is still a paper-based information exchange or transfer or integration of data, where you get a production order that is printed out, and then someone walks physically onto the shop floor and sticks it on the notice board. And then those are the production instructions; this is what needs to be produced. And when stuff is produced, a printout is also done, or it is filled in manually with pen and paper.

[00:08:11] But now, the idea with Industry 4.0, or how companies are traditionally approaching it, which is causing a lot of the problems that I spoke about earlier, is that they are propagating that same mentality of keeping information in silos. This is unintentional, because when you are integrating your OT and your IT using this traditional approach, you are basically bringing in an application and connecting it directly to, say, a piece of equipment. So if it's a quality monitoring application, traditionally you connect it directly to, say, a SCADA system or a specific piece of equipment. Now, to address the Industry 4.0 use cases for digital transformation, a lot of customers are making the mistake of adopting or propagating that same use-case-specific approach to digital transformation.

[00:09:10] For example, you can see here, under Industry 4.0 use cases, we've got digital twins, predictive analytics, predictive maintenance. The way they approach this is that if you are addressing one use case, let's say predictive analytics or predictive maintenance, you acquire this application or system, connect it directly to all these different machines and systems on your shop floor, and then start to get value out of it. Sure enough, you might get some value by connecting and getting this data and turning it into information. But that's basically where it ends, right? Because this is also, in a way, a silo that you're creating, even though it's using what is called Industry 4.0. And I'm putting that in quotes, because it's ironic that it's still propagating that same mentality of locking data away and not making sure that data is available across the entire enterprise, which is really what is required for you to become a truly data-driven industrial company or manufacturing enterprise.

[00:10:11] So when you move on to use case number two, you again need to create those direct connections from those same applications or different applications. Here you're repeating that same process again to be able to set up for use case two. So as you can already tell, there's time involved here, there's a lot of effort involved here, and then there's a lot of cost involved in setting up all these different use cases by simply plugging these applications into all the systems and devices that are generating all of this information.

[00:10:45] And we like to speak about this as being a spaghetti architecture: a system that you can't really scale out of, because it's use case specific from the get-go. So that is the biggest reason why manufacturers are failing to realize value out of this approach to OT/IT data integration, or digital transformation as it were.

[00:11:07] So again, here I'm going to go into the specific points on exactly why this is making companies unable to realize value from their digital transformation efforts. But what I also want to talk about is this approach, which companies have been following for the past decade or so, of putting together a digital transformation team or data science team, where you've got the Chief Data Officer or Chief AI Officer who is responsible for gathering or consolidating all of this data from many different parts of the enterprise into one data lake or data warehouse or data store.

[00:11:49] And again, I spoke about this idea that the intention here, or the plan here, is to make sure that manufacturers get maximum return on investment on their data infrastructure. What that means is that they expect this data is going to enable them to fulfill all of their data aspirations. But this has become a problem, because this central team, this data science team, in a way becomes a bottleneck very quickly, because all parts of your organization, all parts of your enterprise, depend on this specific team to make sense of the data. And this also involves a long lead time: if you want certain information, like data turned into information for you to be able to do the work that you want to do, to be informed by that data, you need to make a request to the data science team. Sometimes it will take three weeks just to get the piece of information that you need to do whatever job it is that you want to do. So there isn't that real-time aspect to data-driven decisions, because we know that for you to truly realize the full benefit of all your data investments, you need to be able to empower everyone across your entire organization to make instant decisions. That way it will help you cover all the data use cases, all the data aspirations that you have, because everyone is essentially being turned into an information worker. Everyone can access information whenever they need to. Everyone can create information and contribute it back into the ecosystem whenever they've got something to contribute. So if you've got a central data science team that is a bottleneck to that, you are certainly going to be stuck in that pilot phase, because you can't keep up with the demands of the data use cases that are exponentially growing.

[00:13:44] Bear in mind that when you go on a digital transformation journey, which is what it is, a journey, the goal really is for you to address as many data use cases as possible. And for the most part, you don't really know what data use cases you want to address. You may have a target, but the idea is that you want to be able to experiment with what could be the answer to your solution. And if that experimentation has friction associated with it, it discourages everyone in your organization from being innovative, because they don't have access to the data. So this is the other big issue that has contributed to the lack of value realization in the digital transformation sphere, specifically using traditional approaches.

Improving Infrastructure Redundancy

[00:14:26] So I've got an example of an actual customer at HiveMQ who had identified 139 use cases that they wanted to address. If it takes you six years to address those 139 use cases, certainly you're not going to realize value from your data investments, because there are just so many more use cases that you want to be able to address.

[00:14:48] But as soon as they started looking at the unified namespace and implementing it, they brought that projection way down, to below half the figure they had originally projected. So that's something to take into consideration, to put into perspective exactly what it means when we say the time to value is long whenever you are using this traditional approach to data integration.

[00:15:11] And so again, just referring back to that architecture where you are connecting applications: the way that you're addressing data use cases for digital transformation is use case specific. You're connecting your applications to equipment and data sources. What that does is that you're going to repeat most of the connectivity infrastructure, up to 80 percent of it. Again, this is research that was done by CESMII to calculate how much repetition you would need to set up or address your digital transformation data use cases; for a typical industrial company, you find that you have to repeat 80 percent of the connectivity work, and that's a huge cost to bear if you want to get value out of your data. And the reason why is, as I explained, that getting data from systems requires setting up those connections each time. Let's say I've got a compressor or a set of compressors, and I want to implement predictive maintenance for those specific compressors. Maybe they're exporting that information using OPC UA, and maybe I'm using MQTT as my infrastructure for OT/IT integration and things like that, or proprietary technologies with direct point-to-point connections. The fact is that you need to set up that OPC UA connection, or you need to set up those API connections, directly from the predictive maintenance application to those systems.

[00:16:40] You move on two months later, and you want to address a digital twin use case. You need to bring that same team back again. So again, bear in mind (and the assumption is that a lot of you on the call understand this idea) that in an industrial company you have got your automation stack, your automation pyramid, where you've got your PLCs, SCADA system, MES system, ERP, and so on, from level zero to four going up. And all of those segments have got dedicated specialists or system integrators who know exactly where to get a specific data point to fit a use case. You need a SCADA system integrator to point you to that data source. So each time you repeat that for a use case, it's not only taking you a long time, it is also costing you more, just by the fact that you need to bring in all these dedicated experts to set up that specific connectivity infrastructure for each and every use case that you want to address.

[00:17:43] I hope this is painting the picture of why you're stuck in that pilot phase: because to move from one use case to the next, there's such huge friction associated with it. And that's why the unified namespace, again, is getting a lot of traction, because it really allows you to move from use case one to use case two with ease, simply and cost-effectively. So again, application integration is repeated, and there's a need for multiple services; if you're getting data from one service, you need to repeat those connections over and over again. So this is one big problem that is associated with that approach.

AI Use Cases are Hampered by Inadequate Data Quality

[00:18:17] The second big point when it comes to that approach is this idea that, to be able to make sense of the data, for the most part, if you are using AI or advanced analytics, the data needs to be of high quality.

[00:18:30] So again, here I'm referring to research by MIT Technology Review that was done recently, early this year, where a lot of manufacturing executives were asked what things are really hampering their AI use cases. And 50 percent of them pointed out that the lack of data quality is really hampering their AI use cases, which again limits that ability to turn data into information.

[00:18:57] And the reason why manufacturers are in that space where they're not able to realize value due to the lack of quality is because quality is, for the most part, ignored. The quality of the data is ignored. We went through a period where manufacturers were being told to get data and put it in the cloud: connect your PLCs to the cloud, send everything to the cloud, and we're going to have AI models at some point that will go through your data and make sense of it. But there is still that ETL process involved. There's still an amount of work that needs to be done to turn that data into high-quality data. And the bad part about it is that once the data has left the shop floor, it's in the IT domain, in the hands of IT people who may not be the best people to corral that information. Because when it comes straight out of a PLC, you need to know exactly what it means. Some of it, even the naming convention itself, is not something that an IT person would immediately recognize: what does that data point mean, what is the context here? And there's just the idea that information, or data, means different things in different domains, right? In OT, or in a PLC system, a piece of information might mean something totally different to what it means in an ERP system or in a data lake. So there's that lack of context, that lack of unification of the information. So the idea that you just get data, check it into a data warehouse or a data lake, and then use AI to make sense of it and generate value from it is, based on this research, proving to be not really true. Which is sad, because it's very easy nowadays to sell an AI platform, or say generative AI. If you talk to a CEO or any C-level executive about the capabilities of generative AI, it's so easy to convince them, because the value of what it can provide you is very obvious.
But when you talk about data infrastructure, it's not so obvious what that value is. And to a large extent, the quality aspect goes ignored until they realize that they're now stuck and can't really scale or turn the data into information.

[00:21:05] So all of this, the lack of data contextualization, normalization, and transformation, and the lack of data integrity, is addressed within the unified namespace itself, and that is what makes the unified namespace such an attractive architectural approach to digital transformation. We have seen a lot of companies starting to adopt it, and a lot of analyst firms, such as Gartner, starting to call it the new OT/IT architectural approach for digital transformation, because it has been proven that the old way of approaching digital transformation doesn't work. Enough time has passed for that to be proven a huge problem.

Delayed Time to Market (Internal and External)

[00:21:45] So this is a summarization of these problems, which essentially amount to delayed time to market: the delayed time to market of ideas, whether internally as a team or as an innovation team. Again, in my interactions with companies, most of the time they have an innovation department, whether it's in production, engineering, or IT; they've got this innovation team. So you want to be able to empower that innovation team to put ideas out and test them rapidly, with agility, and not be made to wait like a year just to prove whether an idea works or not. That really goes against innovation in the true sense of the word. So this is a huge problem, because delayed time to market is a reality for a lot of companies that have gone down the path of this point-to-point integration without a central data management strategy, as it were. I've already spoken about this idea of having to bring in all these different specialists, and there's also the idea of vendor lock-in: some companies, for digital transformation, go with one vendor, so you need to stick to that solution stack. You're forced to use a specific tool, even if it's not the best tool for the job, because that's the tool that works with the platform you're using. So the idea that you could break that apart and move to an open ecosystem, where you're able to select the best-in-class tool for a specific job, is also one of the things that makes the unified namespace an attractive option for really accelerating innovation within companies. And not just internally; there's also the time to market of ideas to keep up with competition. You want to be able to meet customer demands fast. You want to be able to adjust or adapt to changes, as change is now a permanent state of existence for manufacturers and industrial companies.
So you want to have that agility to try things out without this huge time that you have to wait.

Unified Namespace Approach

[00:23:45] So again, here, this is basically a new way of looking at integrating OT and IT data, whereby instead of having all these direct connections and all this point-to-point integration, you are establishing a common data infrastructure, whereby all your business domains are now plugging into infrastructure. They're not connecting to applications; they are plugged into infrastructure, and then they're using it as a pool of data and information to power their innovation agendas. Because now you've got a pool of information and data that you can consistently reuse. So you set it up once. Of course, you still need to go through that first process of bringing in the system integrators, the OT system integrators, the ERP integrators. You define your events, you define your namespace, you define your data structures, you define your information models, you set up your governance and your policies, and you only do it once, and you bring your data together. And then you use that same pool of information to move from use case one to use case two super fast, and also to scale out from these small or pilot use cases.

[00:24:54] So again, it is this idea that you're bringing in contextualized, normalized, standardized, and unified data. You're unifying data from all these different sources of information. So again, I spoke about this idea that data in one domain doesn't necessarily mean the same thing, or that information in one domain, whether contextualized or not, does not necessarily mean the same thing in another domain. So it is this idea of unifying that and presenting all of that information in a way that it is understood by whoever is meant to consume it.

Reference Architecture Model

[00:25:26] So this is also a high-level view to give you a picture of what that looks like. If you're using MQTT to implement a unified namespace, typically what companies are doing is having a site-level MQTT broker as that single source of truth for all the events that you have defined. And then, at site level, you are able to federate all of that information to one centralized, enterprise-wide MQTT broker, so that the unified source or pool of information is available across your entire enterprise. Meaning that anyone across the entire enterprise is able to look for, or find, the information that they need without having to go through all those points of friction that I spoke about earlier. So this is just to give you a picture of what that looks like at a high-level or architectural view.
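To make the federated hierarchy above concrete, here is a rough sketch of how UNS topic paths following the ISA-95 levels (enterprise, site, area, line, plus a leaf for the actual event or KPI) might be assembled. All of the names (`acme`, `munich`, `juicing`, `line1`) are hypothetical placeholders, not from the webinar:

```python
# Sketch: assembling ISA-95-style UNS topic paths for an MQTT broker.
# All names (acme, munich, juicing, line1) are illustrative placeholders.

def uns_topic(enterprise, site=None, area=None, line=None, *leaf):
    """Join whichever ISA-95 levels are present into an MQTT topic path."""
    levels = [enterprise, site, area, line, *leaf]
    return "/".join(part for part in levels if part)

# A line-level KPI lives under the full hierarchy:
oee_topic = uns_topic("acme", "munich", "juicing", "line1", "kpi", "oee")
# An enterprise-level KPI sits directly under the root namespace:
profit_topic = uns_topic("acme", None, None, None, "kpi", "profitability")

print(oee_topic)     # acme/munich/juicing/line1/kpi/oee
print(profit_topic)  # acme/kpi/profitability
```

The point of pushing the hierarchy into the topic path itself is that the path doubles as the navigation structure you see in MQTT Explorer later in the demo.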

Data Integration with UNS

[00:26:17] And then this is also the real structure of the unified namespace. So maybe this is also a good point for me to show you a demo of what a unified namespace looks like, so you can start to internalize how that could really bring value to your business.

[00:26:33] But do we have questions at this point, Scott?

[00:26:37] Scott Baldwin: We've got one here from Rodrigo, mostly asking where and how would you start implementing a UNS. So I think it's maybe getting to the nitty-gritty of: okay, this is all great, maybe in theory; we understand the business value, but how do I get going? How do I start doing this?

Exploring a UNS with MQTT Explorer

[00:26:53] Kudzai Manditereza: Okay, perfect. So hopefully this will answer part of what Rodrigo is asking. So again, we spoke about this idea. I showed you this picture where we had different systems, quality systems, ERP enterprise systems, and Industry 4.0 use cases that are in siloed environments, and you need this point-to-point integration, which creates tight coupling, for you to be able to address your use cases. Now here, what you have is that single point, or that single interface, that I spoke about. So instead of having a thousand servers that you need to understand and talk to and get information from, now all you need is that one endpoint. If you're building a dashboard for a manufacturing executive, you connect to that one broker endpoint, the MQTT broker endpoint, that UNS endpoint that gives you information. You're then able to navigate it the way that your organization is already structured and find the information that you're looking for, without having to put a request in to a data science team first, and without having to figure out which server could have that information. Is it already available in a predictive analytics solution? Is it available in an OEE system? How do I consolidate all of these solutions into one single interface where everything is a single source of truth?

[00:28:18] So for example, here, this is the root namespace, which is the name of the company. Here I'm using MQTT Explorer, and all of this data is being simulated, generated, and published into an MQTT broker, which I'm accessing using MQTT Explorer. This, again, is what all the different components will be using as a way of exchanging information, interacting, and unifying all of this information at once.

[00:28:42] So again, moving into that MQTT namespace, you can see here, at this level, which is the enterprise level, I have a KPI namespace. Here I just have it as KPI one or two, but this could be, say, profitability. This could be a metric that is showing a manufacturing executive how profitably we are running at this exact moment. If I want to know exactly what's going on across my entire enterprise, I can know just by subscribing, or if I'm building a dashboard for the executive, I just need to subscribe there. All these namespaces I defined. So again, as I spoke about, you define those events from the get-go. As you define those events, you're going to define: what is it that the manufacturing executive needs to know to be able to run the business efficiently at any given time? You define that, and you allocate a namespace, or a level, where it makes sense for that leader to find that information. And in this case, it is under the entire enterprise.

[00:29:41] So here you can see we've also got Munich, which is the site, your manufacturing site, but we could have many more sites listed here. If I go into this specific site, you can see that, again, I've got a KPI namespace. This KPI may not be relevant to the leader that I spoke about previously, but maybe it's relevant to the plant manager, keeping them informed so they can make decisions in real time and understand how effectively they're running. What is the OEE, the rate of performance? What is the quality? They don't have to wait for this information, because, as some of you working in manufacturing would know, you have to wait 24 hours in some instances to get this information of how we were doing today, how we were running production today. So you are not able to understand the current state of operation. Having that real-time view of how we're doing is the valuable aspect of the unified namespace to a business: just that ability to access the information you need to solve a specific problem. And then, again, this information could also be used to fuel some external system. So you might have a data lake. The unified namespace does not replace a data lake; rather, the data lake would be a node in the unified namespace ecosystem, subscribing to data that has already been contextualized and normalized, high-quality data, such that when it lands there, the friction of moving from data to insight whenever you apply machine learning is shortened, because the data cleaning and data preparation phase is taken care of. The data is already prepared by whoever is sharing that information into the unified namespace. So here, again, you're able to see all of this. This is, by the way, following the ISA-95 hierarchy, so we are already able to see all the different areas of production here.
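The "just subscribe there" point rests on MQTT's topic wildcards, where `+` matches exactly one topic level and `#` matches everything below. The sketch below is a simplified, illustrative re-implementation of that matching logic (real clients and brokers handle more edge cases, such as `$`-prefixed topics, and the topic names are hypothetical); it shows how a single subscription can cover the KPIs of every line on every site:

```python
# Simplified sketch of MQTT topic-filter matching:
# '+' matches exactly one level, '#' matches this level and everything below.
def topic_matches(filter_, topic):
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                      # rest of the tree matches
            return True
        if i >= len(t_parts):             # topic ran out of levels
            return False
        if f != "+" and f != t_parts[i]:  # literal level must match exactly
            return False
    return len(f_parts) == len(t_parts)

# One subscription covers the KPIs of every line on every site (names hypothetical):
kpi_filter = "acme/+/+/+/kpi/#"
print(topic_matches(kpi_filter, "acme/munich/juicing/line1/kpi/oee"))   # True
print(topic_matches(kpi_filter, "acme/munich/juicing/line2/kpi/mttr"))  # True
print(topic_matches(kpi_filter, "acme/kpi/profitability"))              # False
```

A real client would pass such a filter to its subscribe call; the Eclipse Paho Python client, for instance, ships a comparable helper (`paho.mqtt.client.topic_matches_sub`) for testing filters locally.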

[00:31:30] Now, if you commission a new production area, set it up, connect it to MQTT, define those events, and publish them into the unified namespace using MQTT, you'll see that area automatically show up here. So whoever is consuming this information goes from not knowing there's a new production line to automatically discovering it, along with all of the data and information associated with it.

[00:31:58] And this is super useful, because without this ability it would take months just to set up that kind of visibility and discover all of these new data sources, because everything would have to be independently connected to that specific use case or that specific consumer. Here, you're able to see all of this information in one place.
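This automatic discovery works because MQTT wildcard subscriptions match topics that did not exist when the subscription was made. Below is a simplified, self-contained sketch of the two MQTT wildcard rules (`+` matches one level, `#` matches the remainder); real clients such as paho-mqtt implement this matching for you, and the topic names are hypothetical.

```python
def matches(topic_filter: str, topic: str) -> bool:
    """Simplified MQTT topic-filter matching: '+' matches exactly one
    level, '#' (last level only) matches all remaining levels."""
    f_parts, t_parts = topic_filter.split("/"), topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True
        if i >= len(t_parts) or (part != "+" and part != t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

# A dashboard subscribed once to the whole Munich site...
site_filter = "acme/munich/#"
# ...receives data from a newly commissioned area with no reconfiguration:
new_area_topic = "acme/munich/bottling/line3/data/pressure"
```

A consumer subscribed to `acme/munich/#` discovers every area and line added under that site, which is what turns commissioning into publishing rather than point-to-point integration.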

[00:32:20] And if you go into each specific area of production, you're able to see all the different production lines, and again different KPIs. You can infuse KPIs at every level of your organizational hierarchy. And if you notice, this really follows how your organization is already structured. So if I bring in an automation engineer and say, can you find out the KPI on line one of a certain site, they're able to navigate naturally and intuitively, because the structure is a representation of how the organization is already structured. They know that in Munich they expect to find the fruit [00:33:00] juice production area, and under it they expect to find line one. So if I'm looking for a piece of information I didn't previously know about, I simply go under the relevant namespace and find it. In this case, if I go to line one and want to find the OEE, I know this is an MES function, so I go to line one, under KPIs, and I can find the current OEE, or the mean time to repair. If I'm a plant manager and want to find the mean-time-to-repair metric on this specific line, I don't have to know which data sources or servers, or which combination of sources, produce this information; I simply navigate there and find it, because it has been defined and prepared and is being fed by all the different sources. And this information is being built dynamically, by the way. There is obviously governance: as you start out, you agree on how you're going to structure the exchange of information, but then all of this information shows up dynamically. It's a bottom-up approach to IT integration, not a top-down approach where we say, this is what you need to do, because as you all know, industrial settings are messy and things change.
And you want to be able to put information where you believe it makes sense for it to live, so that anyone who's looking for it is able to find it. So data access is a huge accelerator for turning data into information, or really for finding the information you need to solve the problem you want to solve.
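The OEE figure mentioned here is a standard composite metric. As a quick reference, it is the product of availability, performance, and quality, each expressed as a fraction; the example numbers below are illustrative only.

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """OEE = Availability x Performance x Quality, each a fraction in [0, 1]."""
    return availability * performance * quality

# e.g. 90% uptime, running at 95% of ideal rate, 98% good parts:
score = oee(0.90, 0.95, 0.98)
```

Publishing the already-computed score under a line's KPI namespace is what lets the plant manager read it directly instead of reassembling it from raw counters.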

[00:34:33] So maybe to address Rodrigo's question specifically: with the unified namespace, you first put in some converters, because a lot of systems on the shop floor don't natively talk MQTT. So first of all, you put in converters that translate from your API endpoints, your OPC UA, Modbus, HTTP endpoints, or whatever systems, to MQTT. You're converting everything to that one common standard of exchanging information and bringing it into the unified namespace. That's the first step: connect to data sources and put that information into the unified namespace. This is what's called the application event, or the raw data. This is the data where you're saying: I don't yet know the use cases I want to address, but I want all of my data in a semantically understood structure, such that when I come up with a use case and say, oh, I need to calculate energy efficiency, I know I've got data coming from line one, I know where it's coming from, and I can pick that specific data point and another data point and derive a metric from them. I combine the data, transform those data points if needed, normalize them, and then create the metric, an energy efficiency metric, for whoever wants that information.
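The conversion step can be sketched as a small normalization function. This is an assumption-laden illustration: the JSON field names, the OPC UA-style source string, and the fixed timestamp are all hypothetical, and the actual MQTT publish (e.g. via a client library such as paho-mqtt, or a broker-side protocol adapter) is omitted.

```python
import json

def to_uns_payload(value, unit: str, source: str, timestamp: str) -> str:
    """Normalize a raw shop-floor reading into a contextualized JSON
    payload, ready to publish to its UNS topic (publish step omitted).
    Field names here are illustrative, not a standard schema."""
    return json.dumps({
        "value": value,
        "unit": unit,
        "source": source,        # e.g. the OPC UA node or Modbus register it came from
        "timestamp": timestamp,  # ISO 8601, supplied by the converter
    })

payload = to_uns_payload(
    4.2, "bar",
    "opcua://plc1/ns=2;s=Line1.Pressure",
    "2024-06-01T12:00:00Z",
)
```

The point is that the converter, not each downstream consumer, attaches units, provenance, and time context, so every subscriber receives data that is already self-describing.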

[00:35:51] So the idea is that you get all of your data into a semantically understood structure first, and then you can address your use cases much [00:36:00] faster, because your data is already there and quickly accessible. It's primarily about data access: making sure everyone who needs the information can access it and do the job they want to do. This really makes it possible for companies to get the maximum return on their data investments, because everyone is now an information worker; everyone is producing and consuming information. It's this idea of the citizen developer. It doesn't need to be a central data science team that alone can make sense of the information, because then that becomes a bottleneck. You want a platform where everyone can participate; everyone is an information worker. If I make a prediction about the state of a certain piece of equipment, I want to share that with an automation engineer or a plant manager instantly, so they can find out: oh, this compressor has about five months left, or this compressor has such-and-such a defect. That information is right there; they can make the decision instantly, because it has been shared with them from another domain. They didn't need to go through a whole chain to find the information they need. So this really lets them be innovative about how they approach the problem they're faced with.
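The energy-efficiency example from the previous answer can be made concrete as a derived KPI combining two data points already present in the namespace. The metric definition (units produced per kWh) and the numbers are hypothetical; any ratio that fits your process would follow the same pattern.

```python
def energy_efficiency(units_produced: float, kwh_consumed: float) -> float:
    """Derived KPI: units produced per kWh, combining two UNS data points
    (e.g. a production counter and an energy meter reading)."""
    if kwh_consumed <= 0:
        raise ValueError("energy consumption must be positive")
    return units_produced / kwh_consumed

# Two values read from line one's namespace, combined into one metric:
eff = energy_efficiency(1200, 480)
```

The derived value would then be republished under a KPI topic, so consumers read the finished metric rather than repeating the calculation.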

[00:37:17] So do we have questions on this Scott?

[00:37:20] Scott Baldwin: Definitely, we've got a couple more here.

[00:37:22] First, moving beyond the technical implementation: how does an organization make the decision to migrate to a UNS, and in particular, how do they get organizational buy-in around that transformation?

[00:37:37] And then a few questions around Sparkplug B: how it relates to MQTT and how it ties into the whole UNS infrastructure.

[00:37:45] Kudzai Manditereza: Okay. So the buy-in question, I think that's a good one. Once you're over that bridge and everyone is in consensus, it's easy to implement. But the idea here is that this really is a cultural transformation. The convincing, getting that buy-in, really comes from a cultural perspective: demonstrating why it's important for everyone to become an information worker. I think a lot of executives understand the value in everyone becoming an information worker, because, again, you want to get maximum return on investment from your data infrastructure, from digital transformation. And the only way you can do that, frankly, is by everyone becoming an information worker, because turning data into information is ultimately about empowering everyone within your organization to be innovative; it's really empowering your innovation agenda. So it speaks more to the cultural aspects of it.

[00:38:41] And I can really say it differs from company to company. I've been involved in situations where there's already buy-in within the company, but for the consultants, the system integrators, it just makes things too simple, right? It makes them redundant in a way, so they fight it. So resistance can come from outside, where consultants want to keep things complex; they want to keep themselves busy. And within the organization itself, there might be individuals who think this is making them obsolete in a way. So it's more a cultural navigation of how to change the way people look at it. But if you really look at what returns the investment needs to bring you, and you look at the current approach, is it fulfilling those returns or not? Then it really breaks down to: do we continue down that path, or do we change how we approach digital transformation? So it's a complex problem that is specific to each and every organization.

[00:39:41] And then someone also asked about Sparkplug integration. So, as you see here, I've got the Sparkplug namespace living within that same MQTT broker, but it is not part of the unified namespace as such. We're then able to bring it into the [00:40:00] unified namespace. You maintain that Sparkplug namespace because it's useful at the SCADA layer, from level zero up to two, where you want all that goodness of automatic discovery and the plug-and-play functionality for SCADA-to-device connectivity. That's what Sparkplug is good for. But as soon as you move into IT/OT integration, there are challenges that many of you on this call are aware of: Sparkplug constrains the topic namespace, and it uses protobuf encoding, which some enterprises may find limiting; they want to consume that information as JSON. So what I have here is data that has been published into this Sparkplug namespace, and logic that is consuming from the Sparkplug namespace and republishing it into the unified namespace as JSON. If you notice here, I've got this fruit juice production data, with refrigeration as the device ID and fruit juice production as the node ID. I'm taking this and republishing it here into this namespace, if I can find it. So I'm republishing it here, under refrigeration. Again, you can see I've got this metadata saying this comes from this specific Sparkplug edge node, and these are the values. So this is the UDT that I've converted into flat MQTT. It's all converted and unpacked, and you can then see all these values; as you can see, I can now read all of them.
So that's how you integrate Sparkplug: the two can live on the same broker under different namespaces, and you map the Sparkplug data into the specific part of the unified namespace where you want it to live. Or it could live in a totally different broker altogether. It depends.
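The republishing logic can be sketched as a topic remap. This is a deliberately narrow illustration: it handles only the topic side, with the Sparkplug group ID (`plant1`) and target site (`munich`) as assumed names, and it skips the payload step entirely, since a real bridge would also decode the protobuf-encoded Sparkplug metrics (e.g. with the Eclipse Tahu libraries) and re-encode them as JSON.

```python
def sparkplug_to_uns(sp_topic: str, site: str = "munich") -> str:
    """Map a Sparkplug B device-data topic onto a UNS path:
    spBv1.0/<group>/DDATA/<edge_node>/<device>  ->  <site>/<edge_node>/<device>/data
    Only DDATA messages are remapped in this sketch."""
    namespace, group, msg_type, edge_node, device = sp_topic.split("/")
    if namespace != "spBv1.0" or msg_type != "DDATA":
        raise ValueError(f"not a Sparkplug DDATA topic: {sp_topic}")
    return f"{site}/{edge_node}/{device}/data"

mapped = sparkplug_to_uns("spBv1.0/plant1/DDATA/fruit-juice-production/refrigeration")
```

A small bridge process subscribes to the Sparkplug namespace, applies a mapping like this, and publishes the flattened JSON to the resulting UNS topic, so Sparkplug stays intact for SCADA while IT consumers get plain topics and JSON.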

[00:41:47] Scott Baldwin: Another question here: who decides what the organizational structure of the UNS looks like, and who determines what information gets published where? I imagine this is a big conversation between IT and OT teams, but maybe you can talk a little bit about how that decision gets made and how they manage where things go and how things are consumed.

[00:42:09] Kudzai Manditereza: Yeah, absolutely. One of the things you'll find once you go down the path of building a unified namespace is that it's very much an engineering exercise. I recently participated in a workshop last month for one of our customers, a big company, where we had a team of about 25 professionals: about 15 from production and about 10 from IT. The purpose of that workshop was to come up with a structure. You might have a structure that already exists in, say, your ERP system, but it may not be the right structure to transition into the unified namespace. So it's a collaborative effort where IT and OT come together and decide what makes sense and how to put the structure together. For a big company such as the one I'm referring to, it was 25 individuals from production and IT working through this exercise: how do we structure it, and where do you expect to find information? And they build this collaboratively. If it's a small company, it might obviously come from one or two people, but it's always OT and IT coming together, because this is a fusion of two domains that for the longest time have not interacted with each other.

[00:43:31] Scott Baldwin: I don't have any other questions coming up here. We'll leave maybe another few seconds here if anybody else wants to throw some things in before we wrap up today's event.

Data and Information Re-Use

[00:43:40] Kudzai Manditereza: Absolutely. So while we wait for a few questions, I think we've still got a few minutes, so let's quickly run through the rest of the slides. I've pretty much touched on these aspects already: the value of the unified namespace is, again, enabling citizen developers; you can enable automated planning; you can create a data-driven innovation culture; and you create resilience to change, because then it doesn't matter if you change a system from ABB to Siemens. All information is abstracted. Your business doesn't stop; all your business metrics are there, and you're able to make decisions. And again, it's this idea of frictionless and intuitive access to high-quality data. You're not only feeding humans; you're also feeding algorithms, reports, Excel, whatever it is you want to feed. It's all accessing this high-quality data through that single interface. And you can already see the implication for business value in being able to move information across all these different domains.

[00:44:42] Again, there's the idea of information reuse, where information is already there and you just need to reuse it for a different purpose. It makes you move so much faster. Where you used to wait a year just to get access, you're able to do it in hours, and that's a significant improvement. Actually, this is one of the biggest values we hear about from our customers: what would have taken years or months to access is already there, and they can literally start within a matter of hours and get the information they need in a matter of days or weeks, depending on the complexity of the problem. The data is already there, already contextualized, already high quality, and they're able to address as many use cases as they want.

Faster Time-To-Market

[00:45:25] And obviously this results in much faster time to market, and I don't need to explain the advantages of being fast to market, both for internal ideas and for outside competitive advantage, as well as lowering delivery times and costs. You're now cutting out the whole data cleaning and preparation phase for AI training or AI inference, because all of the data is contextualized. If I'm an automation engineer and I want to share a certain piece of information with you, what the unified namespace does is incentivize everyone to share data as a product. It's that incentive where data is carefully packaged: if I'm sharing data from the automation domain, it's carefully packaged for training a machine learning model; if I'm sending data from the machine learning domain back to the shop floor, it's carefully packaged for consumption and display by a SCADA system. So you're cutting out the stage where data needs to be massaged.

[00:46:23] That brings us to the end of the session. I don't know if we have got some more questions.

[00:46:27] Scott Baldwin: Yeah, it's just one question left here. How much data can UNS handle at any given point in time in comparison to data science platforms?

[00:46:36] Kudzai Manditereza: So the UNS is not really meant to historize data. A data science platform, say Snowflake or whatever data warehouse you've got, could hold data indefinitely, depending on how much information you feed into it. But the unified namespace is about giving you the most current state of events, which is then used to feed those long-term data stores. With the unified namespace, you get a snapshot of the current state of the business at any given time. That's the specific information the unified namespace gives you.
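This "current state, not history" distinction can be illustrated with a last-value cache, which is conceptually what an MQTT broker's retained messages provide. The class and topic names below are purely illustrative; a real UNS gets this behavior from the broker, and the historian or data lake is just another subscriber that appends every update instead of overwriting.

```python
class CurrentStateCache:
    """Last-value cache: one current value per topic, not a history,
    mimicking the snapshot semantics of MQTT retained messages."""

    def __init__(self):
        self._state = {}

    def publish(self, topic: str, value):
        self._state[topic] = value   # overwrite: only "now" is kept

    def snapshot(self) -> dict:
        return dict(self._state)

uns = CurrentStateCache()
uns.publish("munich/line1/KPI/oee", 0.81)
uns.publish("munich/line1/KPI/oee", 0.84)  # newer value replaces the old one
```

A new subscriber asking for the snapshot sees only 0.84, the current state; the full series 0.81, 0.84 lives in the historian that subscribed to both updates.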

[00:47:12] Scott Baldwin: Yeah, and I think there are probably realistic limits to any system, but the UNS is fairly robust. It's been proven through lots of different use cases, with lots of different organizations using it. Have you ever seen an organization run into an upper limit?

[00:47:25] Kudzai Manditereza: As far as limits are concerned, we actually did a benchmark last year where we were able to create 200 million standing connections. I can't think of any use case that would demand that many concurrent client connections; it was just to prove how far it can scale. If you can handle 200 million concurrent connections, then for manufacturing use cases, which typically run in the tens of thousands or hundreds of connections, or even tens for a small manufacturer, scale is really not an issue. And especially with HiveMQ's clustering capabilities, we're able to scale horizontally.

[00:48:06] Scott Baldwin: Awesome. That's great.

Wrap Up

[00:48:07] So we're going to wrap things up. I really appreciate everybody taking the time to attend. I know we've had a few people already drop off, but if you do have a moment, please take our post-event survey; we'll send it in a message when you leave and also over email. We'd love to have your feedback. This is the first of these sessions, and we have a couple more coming up.

[00:48:22] If you have more questions, things that come up as you go through your UNS journey, we'd love to hear them. We've got our Slack community, with a channel called #talk-uns, and we encourage you to ask questions there. We'd be happy to answer them and provide advice, feedback, or any other support you need on that side.

[00:48:39] And if, like Kudzai, you've got something interesting to share about your experiences with anything UNS related or IoT or MQTT related, we'd love to have you as a speaker here in our community. You can find out details on our community webpage under the Get Involved section.

[00:48:53] We've got a few events coming up in the next little bit. Tomorrow, we're talking transforming agriculture should be a really great event. We've also got our community chat session on the 26th, our monthly AMA on the 27th, and then two of these sessions running in July and August. You can register for all of those webinars online at www.hivemq.com/webinars.

[00:49:15] And with that, thank you very much. Kudzai really interesting session. Looking forward to the next couple, we're going to dive in a little bit deeper. I know. And should be a lot of fun.

[00:49:23] Kudzai Manditereza: Thank you, Scott. Thanks everyone.

Kudzai Manditereza

Kudzai is a tech influencer and electronic engineer based in Germany. As a Developer Advocate at HiveMQ, he helps developers and architects adopt MQTT and HiveMQ for their IIoT projects. Kudzai runs a popular YouTube channel focused on IIoT and Smart Manufacturing technologies and he has been recognized as one of the Top 100 global influencers talking about Industry 4.0 online.

  • Kudzai Manditereza on LinkedIn
  • Contact Kudzai Manditereza via e-mail