
How to Achieve Data-Driven Manufacturing with UNS and MQTT Sparkplug

Time: 2 hours

Watch Webinar

Chapters
  • 00:00:00 - Introduction
  • 00:17:00 - What is Unified Namespace and why implement it?
  • 00:21:01 - Demo Architecture
  • 00:23:52 - N3uron application for building UNS
  • 00:27:29 - Opto 22 groov EPIC system architecture
  • 00:29:30 - Demo of building a UNS with HiveMQ Cloud MQTT Broker
  • 01:36:51 - Q&A

Webinar Overview

Are you new to Unified Namespace and trying to wrap your head around it? This workshop recording will help you understand the concepts of UNS and how it can be built using MQTT Sparkplug. You'll witness firsthand how integrating network participants such as an MQTT Sparkplug Edge of Network Node from Opto 22, an MQTT Sparkplug application from N3uron, an MQTT Sparkplug host in the form of the Ignition SCADA/IIoT platform, and a HiveMQ MQTT broker as the central hub can revolutionize your manufacturing digital transformation strategy and help you become a data-driven organization.

By the end of the session, you’ll understand how the UNS approach simplifies integration, reduces integration costs, improves agility and scalability, and accelerates time to results. This is a unique opportunity to gain insights and knowledge from industry experts and learn how to implement this cutting-edge technology in your own organization.

Key Takeaways

  • The webinar shows how to implement a Unified Namespace (UNS) solution for tracking Overall Equipment Effectiveness (OEE) across multiple production sites using MQTT Sparkplug.

  • UNS provides a single source of truth for all data and events within a manufacturing process by modeling assets and adding context. It follows the ISA-95 hierarchy of enterprise, site, area, line, cell (sketched in the example after this list).

  • Demo involves:

    • HiveMQ Cloud, a secure, scalable managed MQTT broker, serving as the central hub. You can sign up for free.

    • Opto 22 EPIC system capturing data from a simulated bottling line in California, using Ignition to model and publish data to HiveMQ Cloud broker.

    • N3uron platform capturing data from another bottling line in Madrid, using N3uron to model and publish data.

    • MES system subscribing to broker data, calculating OEE and creating work orders.

  • Using MQTT Sparkplug, data from different systems and locations is published to one unified broker namespace. 

  • N3uron is an industrial edge platform that models data and connects systems. 

  • Opto 22 groov EPIC combines OT and IT capabilities.

  • Ignition is used to model Opto 22 data and publish to the broker using Sparkplug. 

  • UNS provides a real-time view of operations and enables informed decisions.

  • UNS architecture accelerates digital transformation, with key benefits including: single source of truth, interoperability, efficiency, scalability, and flexibility. 
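To make the ISA-95 hierarchy and the unified topic namespace concrete, here is a minimal sketch of what the topic tree for this demo could look like. The enterprise name and exact paths are illustrative assumptions, not taken from the webinar:

    <enterprise>/<site>/<area>/<line>/<cell or function>

    AcmeBottling/California/Packaging/Line1/Bottler/BottlesPerMinute
    AcmeBottling/Madrid/Packaging/Line1/Filler/Infeed
    AcmeBottling/Madrid/Packaging/Line1/Filler/Outfeed
    AcmeBottling/California/Packaging/Line1/OEE        (functional namespace)

Edge namespaces (equipment data) and functional namespaces (calculated values such as OEE) sit side by side in the same tree, so every consumer navigates one structure.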

Transcript

Introduction

Jayashree Hegde: 00:00:10.166 Hello, everyone. Good morning, good afternoon, good evening to you all. I am Jayashree Hegde, welcoming you all to our first edition of UNS Workshop titled How to Achieve Data-Driven Manufacturing, where experts will demonstrate to you live how to integrate data from various industrial data sources into a Unified Namespace using a real example of a manufacturing process. Now, allow me to introduce you all to our speakers: Kudzai Manditereza, Developer Advocate at HiveMQ; Benson Hougland, VP of Marketing and Product Strategy at Opto 22; David Schultz, who is Principal Consultant at Spruik; and Jose Granero Nueda, Head of Customer Success and Sales Engineering at N3uron Connectivity Systems. Welcome, Kudzai. Welcome, Benson. Welcome, David. Welcome, Jose.

Jayashree Hegde: 00:01:00.979 Before we kick off the session, I would like to let you all know that this is a two-hour hands-on workshop, and we are recording it. We will share both the recording and slide presentation with you all in a follow-up email. During the session, if you have any questions, feel free to drop them into the Q&A box. You will find it at the control panel below. Lastly, we'll be running two polls. I request you all to participate. Now, without further ado, I will hand it over to Kudzai and let him and other speakers do a quick round of introductions. And I will launch the first poll and run it for about a minute. So over to you, Kudzai. Welcome, everyone.

Kudzai Manditereza: 00:01:41.325 Thank you, Jayashree. And thank you so much to the panelists. And also I'd like to thank the audience for joining us today in this session. So my name is Kudzai Manditereza, and I'm a Developer Advocate here at HiveMQ. And my role here really revolves around educating the community on MQTT and Sparkplug and its application in digital transformation projects, right? So to quickly give you a background about HiveMQ. HiveMQ was founded in 2012 in Landshut, Germany. And our main goal, really, here is to provide an enterprise data infrastructure that enables companies in various different verticals to achieve the connectivity and digital transformation capabilities that are essential to staying competitive in the modern-day business environment. And some of our customers include the likes of Audi, BMW, Siemens, Daimler, among many others. And we're also a venture-backed company, having recently raised 43 million euros in seed and Series A funding.

Kudzai Manditereza: 00:02:57.658 Now, our core service offering at HiveMQ is an enterprise MQTT platform, which features an MQTT broker that can connect up to 200 million clients. In fact, we recently did benchmarking to achieve those figures. That's thanks to a cluster design for horizontal scalability and redundancy. And the platform also features a Control Center that allows DevOps administrators to monitor and troubleshoot MQTT client deployments. And this Control Center also exposes a REST API interface, which allows you to build custom apps to manage your MQTT infrastructure. And then in addition, HiveMQ has got a flexible extension framework, which makes it easy to integrate HiveMQ into a modern microservice architecture. Now, the HiveMQ Platform can be self-hosted in third-party cloud platforms or a Kubernetes cluster. And also we offer a managed cloud service called HiveMQ Cloud, which is what we're going to be using for our demonstration today. And with that, I will hand it over to Benson to introduce yourself and tell us more about Opto 22.

Benson Hougland: 00:04:11.258 Sure. Thanks, Kudzai. And thank you, HiveMQ, for inviting Opto 22 to attend this UNS webinar. We're excited to participate. For those of you not familiar with Opto, we're a decades-long developer and manufacturer of industrial edge systems, which of course includes I/O systems, PLCs, PACs, and IoT gateways. All of our manufacturing, distribution, sales, and support of all of our products come out of this building I'm in right now, which is our headquarters just north of San Diego, California. And one of the things that kind of sets us apart from many other PLC and edge systems out there is we've got 50 years of OT experience combined with a couple of decades of IT technologies and capabilities that we put all into a single backplane. And that's really made us suitable for these digital transformation UNS-type applications that a lot of folks are involved in right now, trying to give a combined solution that gets the job done. Our flagship products all belong to what's called the groov family, including the groov EPIC, groov RIO remote I/O, and groov EMU, which is our energy monitoring unit that supports all these technologies we're about to talk about. And if Kudzai, you want to kick the slide. For this particular presentation, we'll be focusing on the groov EPIC. And the groov EPIC is really — it's not your traditional PLC or HMI or even IPC or Edge gateway. It's really all of those combined into a single backplane with a lot of tools to be able to manage that very easily, all through a web browser and so on.

Benson Hougland: 00:05:58.277 This is a device that's meant to be edge-driven, lightweight, and Report by Exception, just as we know is important in these types of applications. It is Linux-based. It is a real-time controller, so it will run a PLC-type program. I'll be talking about that program as part of this demonstration. Lots of programming options in there. I'll go through that. It does have a built-in web-based HMI that's visible on the front screen, externally on an HDMI monitor, or right in your web browser. Lots of software pre-installed. I won't be going through a lot of that, but we will be using Ignition Edge for this particular application. And then indeed, we'll also be using a lot of the gateway functions that are built into the EPIC, things like tunneling and VPN tunneling, segmenting, conduits, all kinds of cool stuff there. And again, it's really important that when we start looking at these types of solutions, that we have a very cybersecure box doing the job. So we want to prevent bad actors from getting in, but still allow the free flow of OT data to systems that can consume it. So it is cybersecure out of the box with a Zero Trust model. You've got to turn all that stuff on, set up accounts. I'll show you all that when we get to the demo. Thanks, Kudzai.

Kudzai Manditereza: 00:07:14.132 Thanks, Benson. And then next, Jose, could you please introduce yourself and your company? I think you're on mute.

Jayashree Hegde: 00:07:25.544 You're on mute.

Jose Granero: 00:07:31.522 I would say I'm super excited to be here today, in such good company. I'm Jose Granero, the Head of Customer Success and Sales Engineering at N3uron Connectivity Systems. I have over 20 years of experience in industrial automation and system integration in various verticals such as food and beverage, pharma, and energy. And in my current role, I work with companies on accelerating their digital transformation journey. Let me tell you a bit about N3uron. Well, N3uron was founded in 2018 by system integrators after identifying a need in the market to break down operational data silos and make use of this captive data on the plant floor to enhance productivity and decision-making. We're based in Europe, in Madrid, Spain, and currently we offer our flagship product, the N3uron Industrial Edge Platform for DataOps, which we will be discussing in detail today, and Fleet Manager, a comprehensive service that we offer together with our support and maintenance service. Fleet Manager is a service that allows our customers to monitor and manage their fleet of nodes in real-time from a single secure location. One interesting thing about Fleet Manager is that it also enables configuring secure tunnels to remotely access any other systems or devices connected to the same network as an N3uron node. So you can, for example, upload a program to a remote PLC or update the firmware of a variable frequency drive or whatever. I'm very pleased to say that we've got installations in over 50 countries now that have chosen N3uron for their IoT and industrial needs.

Jose Granero: 00:09:28.502 And by the way, we have a system integrator program. If anyone is interested, you can fill in the form on our website and we will reach out to you. And since our birth, we've experienced pretty explosive growth, with an annual average growth rate that exceeds 50%. Next one, Kudzai, please. Well, what's N3uron? N3uron is a complete industrial platform for DataOps that enables seamless integration between the industrial plant and third-party applications, either on-premise or in the cloud. With N3uron, you can easily create bidirectional data pipelines between OT and IT systems and decouple devices from applications by consolidating, modeling, and processing all your operational data in a single source of truth, ultimately making all this data available across the entire organization. N3uron is fully modular: we have 40 modules at the moment, and there are more coming soon, which you can stack as you require to meet your operating needs. Of course, you only need to acquire the modules really necessary for your application.

Jose Granero: 00:10:41.857 In this image, we can see the three different sets of modules. On the left-hand side, we have the data acquisition modules. We have the typical protocols to connect to devices and systems on the OT side: for example, OPC UA client, Modbus client, Siemens client, etc. By the way, there are three modules which fall into both families: MQTT client, REST API client, and SQL client. And we also have, at the bottom, another set of modules for processing and visualization. Yeah, we have, for example, Derived Tags and Scripting, which are two modules that we use to add intelligence to the node. With Derived Tags, you can, for example, create special tags, and you can write scripts in Node.js, and the output of those scripts is automatically assigned to tags in the data model. Or if you need to go one step further, you can use our Scripting module to do whatever you want with Node.js. You can import external libraries, create your own libraries, etc. In this set of modules, we also have other modules for special things, such as Historian, which uses a MongoDB database. You can simultaneously store your historical data in a local database, in a remote database, or in both. We also have a Web Vision module to create HMI interfaces. And N3uron is cross-platform — sorry, finally.

Jose Granero: 00:12:19.432 Finally, on the right-hand side, we have the data delivery modules, which allow us to push all the data model or part of the data model to other applications or systems. Here we have Sparkplug, MQTT client, OPC UA server, REST API server, etc. And N3uron is cross-platform, meaning it can run on most versions of Windows and Linux distributions, as well as on ARM architectures such as the Raspberry Pi. We have an unlimited licensing model: unlimited tags, users, devices, and connections for an affordable price. Once installed, all you need to access the node is a browser, nothing else. And you can install it in less than a minute. And the development environment and Web UI allow you to create your data model very quickly, especially if you use templates, as we are going to see in a few minutes. It's extremely efficient. A single node can easily manage several hundred thousand tags. It is also very efficient in terms of hardware requirements. You can start with something as low as one CPU with only one core at one gigahertz, with one gigabyte of RAM and one gigabyte of hard disk space. With a Raspberry Pi 4, for example, you can manage around 5,000 tags.

Jose Granero: 00:13:45.333 And from its inception, N3uron was conceived to seamlessly deploy distributed architectures with several hundreds or thousands of nodes. You can connect nodes to each other in a matter of minutes, aggregating, for example, all the data coming from your remote assets in a central node using N3uron Links, and scale your architecture very easily. In a nutshell, N3uron is like a Swiss army knife that has everything you need to address any IoT project, no matter what the requirements are. And that's it. I turn it over to you, David.

Kudzai Manditereza: 00:14:23.587 Okay. Thank you, Jose. David?

David Schultz: 00:14:27.244 No, thank you. I'm jazzed to be here. Always great to be included in the webinars and events that you do. One thing you didn't mention, Jose: you can actually run N3uron right on the Opto 22 EPIC. I actually have that use case running here. So it can run in a lot of places, very scalable. So yeah, a little bit about Spruik. As I mentioned, I am the Principal Consultant for Spruik Technologies. And while we are a fairly young company, there are decades of experience among the people that are part of the organization. As it says there, it was started in 2017, and we wanted to take a look at a lot of the new technology. And what I mean by that is we didn't want to just go in as a traditional systems integrator and upgrade all the software that's there. We really wanted to take a look at what are some of the emerging technologies, what are the things that IT is doing that we can bring to the OT level. So this new technology is looking at open source, so things like MQTT brokers, or InfluxDB for time-series data, or Grafana. We also want to use container technologies like Docker or Kubernetes. We also want to look at Apollo Federation services so we can manage and scale with a lot of open technology that is fully managed remotely. So that's the whole modern technology stack that we want to deploy at the manufacturing level.

David Schultz: 00:15:57.946 We have no offices; we're all remote employees, but we have people all over the world. And that means we've seen a lot of applications. We've run into a lot of challenging problems that needed to be solved. So we can leverage all that experience across several world areas. It also means that any of the projects that we do, we can support at an enterprise level. You don't have to worry about eight-to-five US-based support. You'll have 24/7 support with people in various world areas. And of course, we're experts in consulting and implementation. My whole mission is to help manufacturers develop and execute strategies for their digital transformation and asset performance efforts. And that's certainly something that is core to what we do at Spruik Technologies. We're there to help lead manufacturers through their digital transformation process, deploying all the great technologies, with people providing support all over the world. So that's Spruik.

What is Unified Namespace (UNS) and Why Implement It?

Kudzai Manditereza: 00:17:01.563 Thank you. Thank you so much, David. Okay, so now before we jump into our overall architecture for the demo that we're going to be showing you and also kind of going into the demo, I believe it's important to kind of first talk about why manufacturers need to adopt the Unified Namespace for their digital transformation strategies. And for that, I would like to invite David to give us a breakdown of why the Unified Namespace. David? Thank you again.

David Schultz: 00:17:30.791 Sure. Yeah, no, thank you. So by definition, the Unified Namespace is the single source of truth for all your data and events that are within your manufacturing process. And it also follows the structure of the overall business. And people will think of a UNS as — it's just all of our edge data that's been put into these asset models, if you will. And that's partly true. The entire intent here is to bring together these semantic data models so that we're not just looking at raw values that have no context and no meaning. We want to bring those together and compile those and package them up. And so sometimes they're thought of as a UDT. So instead of having, say, a bunch of disparate compressor tags, we actually model a compressor. And people usually understand, "Yeah, I got the asset, I got the equipment, or this edge data that is produced by that." But a Unified Namespace also means that we need to add context. So we'll get into some functional namespaces. So the functional namespace, the idea is that it's more of a — it's transactional data. It's an event that occurred. What we'll be looking at in the demo is a functional namespace of an overall equipment effectiveness, an OEE value that is capturing edge data, performing a calculation on that data, and then presenting the overall — what's the process health that's happening on my particular line right now? So that's an example of a functional namespace, or a material movement, or a downtime event with a maintenance work order that's associated with it.

David Schultz: 00:19:07.651 We'll also talk a little bit about a hierarchical namespace. So within the hierarchy of the business, a Unified Namespace generally follows the ISA-95 standard of enterprise, site, area, line, cell. We'll have a line namespace or a cell namespace that is composed of these edge namespaces and these functional namespaces. And the idea is that we want to — it's the tension between data governance of: "This is the thing you have to have versus the types of things you can have." And we want to ensure that we're supporting common data models throughout the organization while giving a little bit of flexibility that's in there. And then finally, there is an informative namespace or informational namespace. And that's all about — the goal is to get the right information at the right time to the right people in the right format. Well, this informative namespace helps capture that information. So based on your role, based on an event, based on where you are in your process, we're going to have a common data structure so that with visualization you can easily ascertain and solve a problem. So when you bring all of this information together, you have a fully contextualized business operation that you can look at in real-time and answer questions like: What's running on Line 1 at this particular plant? What's its overall health? Is it in a downtime event? Is there an active maintenance work order that's associated with that? And it gives you that real-time status and a common data model so that you can analyze all this information in real-time and start making decisions with that. So that's the whole purpose of a Unified Namespace; it's the combination of those data models and namespace models that I was just describing.
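As a rough illustration of the modeling David describes, compare raw, context-free tags with a semantic data model. The snippet below is a hypothetical sketch in plain JavaScript, not an actual SCADA or Sparkplug payload; the tag names and values are invented:

    // Raw edge data, no context:
    //   comp_07_pv1 = 87.2, comp_07_st = 3, comp_07_hrs = 1412

    // The same data packaged as a modeled asset (an edge namespace):
    const compressor = {
      dischargePressurePsi: 87.2, // comp_07_pv1, now with meaning and units
      state: "RUNNING",           // comp_07_st decoded to an enumeration
      runHours: 1412,             // comp_07_hrs
    };

    // A functional namespace: transactional/calculated data derived from edge data
    const oee = {
      availability: 0.93,
      performance: 0.88,
      quality: 0.99,
      value: 0.93 * 0.88 * 0.99,  // roughly 0.81 for this line right now
    };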

Demo Architecture

Kudzai Manditereza: 00:20:52.381 Okay. Thank you so much, David. Yeah, so now I think we can start talking about all the different components that we're going to be integrating for this demo. So what we'll be demonstrating to you today is an implementation of a Unified Namespace solution for tracking the overall equipment effectiveness, right, of a manufacturing enterprise with multiple production sites. And as an example, we're simulating two bottling plants, right? So we've got two bottling processes, one running on Opto 22 equipment in California, which is where Benson is located. And another one is running on the N3uron platform in Madrid, which is also where Jose is located. Now, the process data from these two sources is going to be published to the HiveMQ Cloud MQTT broker, where the Unified Namespace is sort of represented. And then on the other end, we're going to have David, who's running a Sepasoft manufacturing execution system on the Ignition platform, where he will be consuming data from the Unified Namespace, creating work orders, calculating OEE, etc., and then pushing that data back into the Unified Namespace. So to proceed here, I'll let David tell us more about what he's going to be demonstrating on the Sepasoft MES platform.

David Schultz: 00:22:17.823 Sure. So on the left side here, there is some edge data that's coming in. And later on in this webinar, both Jose and Benson will be demonstrating what information they are capturing at the edge. They will be modeling that information so it's published as a semantic data model of a manufacturing process, and publishing that information into the cloud. From there, I will be on my end consuming that edge information, and I will be starting and stopping production orders. I will be calculating the overall equipment effectiveness of those processes, of those lines that are running. And then I will publish an enterprise namespace back to that broker that'll include the information, the edge data, as well as this functional namespace as a complete enterprise. It's very basic, but it's part of an enterprise Unified Namespace, bringing through all the information that's sitting out at the plants and putting it all together so that your information is delivered with the context that it needs in order to be effective.
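For reference, Sparkplug B prescribes the topic layout that makes this possible, so both sites land in one predictable namespace on the broker: spBv1.0/{group_id}/{message_type}/{edge_node_id}/[{device_id}]. The group and node IDs below match what Benson configures later in the demo; the Madrid device name is an assumption:

    spBv1.0/California/NBIRTH/Line 1            edge node comes online
    spBv1.0/California/DBIRTH/Line 1/Bottler    device birth with its full metric list
    spBv1.0/California/DDATA/Line 1/Bottler     Report-by-Exception value changes
    spBv1.0/Madrid/DDATA/Line 1/Filler          same pattern from the Madrid site

Because BIRTH messages carry every metric's name, type, and initial value, a host like David's MES can discover the whole model from the broker instead of from hand-shared tag lists.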

Kudzai Manditereza: 00:23:29.398 Awesome. So Jose, can you give us a breakdown of what your part of the demo is going to include?

N3uron Application for Building UNS

Jose Granero: 00:23:37.888 Sure. Well, first, let's see why N3uron is so well suited for a Unified Namespace architecture. N3uron is designed to be deployed as close to the data source as possible, which is one of the premises of the Unified Namespace concept. And it's extremely well suited for the industrial environment and can exchange data with a variety of systems, such as PLCs, SCADAs, historians, MES, distributed control systems, databases, just to name a few. It lets you build your data model using proper normalization and simple automation techniques so you can build consistent and standardized data models, which you can replicate anywhere using templates, as we will see later on. And the data model can contain real-time data coming from many different data sources, computed data, and also metadata, of course. The N3uron platform includes MQTT and Sparkplug client modules. Both protocols are lightweight and usually Report by Exception, which are two premises of the Unified Namespace. Sometimes Sparkplug is not well suited for a given use case. For those cases, we can still use the MQTT client to customize our payloads and data parsers, depending on whether we are publishing data to or pulling data from the Unified Namespace.

Jose Granero: 00:25:15.589 The platform also includes other delivery modules, okay, as I mentioned before. So you can also use N3uron to subscribe to the broker and pull data out of the Unified Namespace and deliver it to systems that don't support MQTT or Sparkplug. For example, using the REST API server, SQL, or an OPC UA server. Finally, N3uron is built on modern technologies and uses open standards. From an Industry 4.0 viewpoint, digital transformation cannot rely on closed solutions or those from a single manufacturer. An open architecture is necessary to ensure scalability and compatibility with future upgrades and expansions. Next one, Kudzai, please. And well, this slide depicts pretty well what N3uron is. Many IoT projects aren't scaling because of data interoperability issues, and that's precisely where N3uron comes in. Because basically, what it does is enable data interoperability and data governance. And next one, please. This is the architecture we are going to use in today's demo. Okay? On the left-hand side, we have three different manufacturing cells that make up the line. We are going to connect to the filler through the OPC UA client, to the labeler with the Siemens client module, and finally to the packer with the Modbus TCP client. And we will also be using Derived Tags to make some calculations, for example, to add up all the rejects throughout the production line. And then we will eventually publish all this data to Ignition using the Sparkplug module. So in summary, we're going to use five different modules for this demo. And that's it. I turn it over to you, Kudzai.
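Jose's reject total is a good example of what an expression tag in the Derived Tags module does. The snippet below is a hypothetical sketch in the Node.js style the module uses; N3uron's actual alias and expression syntax may differ:

    // Aliases defined in the Derived Tags configuration, each pointing at a
    // tag path (paths are illustrative):
    //   fillerRejects  -> /Line1/Filler/Rejects
    //   labelerRejects -> /Line1/Labeler/Rejects
    //   packerRejects  -> /Line1/Packer/Rejects
    // The expression is re-evaluated whenever an input changes, and its result
    // becomes the derived tag's value:
    fillerRejects + labelerRejects + packerRejects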

Opto 22 groov EPIC System Architecture

Kudzai Manditereza: 00:27:24.104 Thank you, Jose. And then, Benson, can you break down what your part of the demo is going to include?

Benson Hougland: 00:27:32.820 Sure. So as you can see there in the middle of the screen, you've got your groov EPIC represented. And in the bubble above it, there's a lot of different software, of course, that allows you to achieve different tasks and solve different problems based on what your application is. And downstream from there, you can see I can connect to various I/O: analog, digital, serial, CAN, you name it. That can all be brought in through various I/O modules if you choose to use it that way. But we do have some customers that use EPIC without any I/O modules simply as an Edge gateway. And in that case, we could communicate using software like N3uron, or use Ignition to communicate with other devices. My particular demo, I'm going to keep really simple. I've got a bottling line program that's running inside the EPIC. We're going to show how we use Ignition to create the Unified Namespace: in essence, the tags in a given namespace that I'll then publish up to the top line there, where you can see the HiveMQ Cloud. We've also got VPN connectivity on this device as well. Anytime we're communicating upstream, out through a gateway of some sort to reach other networks, we want to do that on its own network interface. So I'll be showing you that as well. And then downstream, that is on a protected OT network so that we're effectively segmenting those two networks from each other. So we're able to capture all the real-time OT data, bring it in, model the data in any way we want, and then send it on its way up to the HiveMQ Broker to be, again, participants in the Unified Namespace. So I'll also show you a quick little HMI for the operator that's directly on the EPIC as well. And then there's a whole bunch of other tools there we won't be getting into. Primarily, we're going to keep this pretty simple from my standpoint. Thanks, Kudzai.

Demo of Building a UNS with HiveMQ Cloud MQTT Broker

Kudzai Manditereza: 00:29:25.797 Thank you, Benson. Okay, so I believe now it's time to jump into the demo, which I'm sure the majority of you really have been patiently waiting for. Right. So we're going to start off by setting up the broker, which the other components are going to be pushing information to. So as I explained previously, what we're going to be using for the MQTT broker, for the HiveMQ Broker, is the HiveMQ Cloud. So you could deploy this on Azure or a private cloud. But for today's demo, we're going to use this managed service called HiveMQ Cloud. So this is a free service, right, with which you can actually connect up to 100 devices. So you can sign up with no credit card at all and connect up to 100 devices. And then there's a table there that shows you all the different options that you get out of that. So to sign up for this, it's pretty simple. You just go to Try Out For Free. So when you do that for the first time — because I've already got an account, it will take me straight to the portal. But when you do it for the first time, you're just going to have to put in your email, confirm your email, and then as soon as you do that, it will take you to the portal where you just need to select the cloud host, whether it's Azure or AWS. And then when you get into the portal, the primary cluster is already created for you. So all you need to do is just configure the cluster.

Kudzai Manditereza: 00:31:00.738 So for example, this is the cluster that we have here. So if I go into the managed cluster, you can see all of this information about this broker cluster. We've got a broker URL address here, right? And then the port number, 8883. So this is an encrypted connection, which means all of the clients here can communicate with this broker via an encrypted channel. So you know your data is always protected. And then you've got an Access Management tab here, which allows you to create all the clients that are allowed to connect to this broker. And then the other thing, which is not really part of this demo today, is we've got integrations, like the Kafka integration. Just for interest's sake, this is something that you could use to extend your Unified Namespace, because your Unified Namespace gives you a snapshot of your production. But when you need to persist that information — retain that data for consumption at a later date — this is where you need to bring in a Kafka platform, which retains that data for you and then allows all the other enterprise applications to consume off of that. So as mentioned, I've already created this broker cluster, and I've shared the details with the panelists here. So they are going to be using this broker to create this Unified Namespace. So without further ado, I will turn it over to you. Let's see, Benson.
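To make "connecting over the encrypted channel" concrete, here is a minimal client sketch using the open-source mqtt package for Node.js. The cluster URL and credentials are placeholders for the values shown in the cluster overview and the Access Management tab:

    // npm install mqtt
    const mqtt = require("mqtt");

    // mqtts:// plus port 8883 gives the TLS-encrypted connection described above
    const client = mqtt.connect("mqtts://YOUR-CLUSTER-ID.s1.eu.hivemq.cloud:8883", {
      username: "demo-user",     // a client created under Access Management
      password: "demo-password",
    });

    client.on("connect", () => console.log("Connected securely to HiveMQ Cloud"));
    client.on("error", (err) => console.error("Connection failed:", err.message));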

Benson Hougland: 00:32:30.933 Great. So let's go. Okay, y'all should see my screen there. We got a thumbs up. Terrific. Okay, first things first, let's talk about the demo itself. So I'm going to reverse my screen here. Everything I'm going to be showing you today is running on this EPIC. There are no external gateways, external PCs, or anything that's required. Everything is running right here. And as you can see, I've got an EPIC with a four-channel backplane here. This backplane has got some digital cards in it: a digital input card, a digital output card, and one of our multi-function, software-configurable cards for analog or digital ins or outs, whatever you like. All of those are wired to this backplane here. So I've got a temperature probe. I've got a little knob here that's going to simulate my bottling line speed. I've got some other buttons I can interact with. And then I've got an external system here. It's just a couple of push buttons wired into some digital inputs over here. And I can use those to start the run, to reset the run, or to set a reject. All that information is obviously coming into here, where on my EPIC processor, I've got my two ethernet cables, one that's connected to the corporate LAN so I can reach the HiveMQ Broker over that encrypted channel, and the other communicating with the OT network.

Benson Hougland: 00:33:51.917 Now, the other thing that you can probably see here is we've got a built-in HMI. So in here, this screen currently is for downtime status, but this allows the operator to interact with the system. But you can also completely configure the system from this screen as well. I'll be doing it from a browser just because it's easier to do. It does have an HDMI port on it, so I can actually take this out. And I've got another EPIC behind me connected to an HDMI monitor over my shoulder here. So you can certainly extend the HMI outside or again, access any of this from a web browser, which is what I'm going to do right now. So I'm going to switch this back and you can see I've got my Chrome browser up. So the first thing I need to do is log into this device and it has a hostname, just as you would expect from any server. So I would go, EPIC LC2 docs is the name of my EPIC. And when I come to the first page, I am connected securely. I know because I have the padlock up there in my browser. So I have an encrypted connection, which is important. And that means now that as I enter in my username and password, I know I'm not doing that open on some network where somebody could sniff my password. So I'll go ahead and sign in with my administrator account. And indeed, I am now logged into this EPIC, and this is where I manage the device. Now, I'm going to try to keep this as focused as possible on the demo at hand. There's a lot of capability in this system, way more than we can possibly cover in the short time I have. So let me cover some of the very basics.

Benson Hougland: 00:35:21.947 The first one is accounts. So this is where I set up the local user accounts on the device. And I did log in with the Opto username and password, but you can see I can create other accounts for the operator. Even David Schultz — I have an account for him so he can remote into my device if he needed to do so with the proper authentication over VPN. There's also LDAP in here, and that simply means that if you want to use Active Directory or another LDAP-type system to manage the users on this device, you absolutely can. So that's all built in as well.

Benson Hougland: 00:36:00.327 Now, as I said earlier, security is absolutely paramount. So we want to make sure that this device is encrypted, the authentication with the users, and also importantly, the firewall, because when I'm sending this data out to HiveMQ, I'm going outbound over the corporate network here to reach the cloud. But indeed, I have no firewall ports open on the way in. I want to make sure that I don't allow any bad actors into this system, so I lock down all of those ports. I only communicate outbound. And the firewall works on all network interfaces, so I can set up firewall rules for each network interface. And real quickly — what our network looks like currently on this EPIC is — as I said — I've got one ethernet connection that's connected to a static network, my OT network, another ethernet connection that's connected to the corporate LAN so I can reach other networks, and finally, the OpenVPN tunnel. The EPIC has an OpenVPN client built-in. So once I put in the credentials for the server, I can then connect, and this device can be accessed from anywhere in the world. Okay. So let's get to the meat of this thing. Indeed, it is a controller. So first and foremost, it is a real-time PLC. And unlike many other applications out there or Edge devices, we do give you a choice of programming languages. Our own PAC Control, which we've had for 30 years, or CODESYS, an IEC 61131-3 development environment or IDE. So if you want to program in ladder or function block diagram and all of the languages that are supported by IEC, you can do so. I happen to be pretty familiar with PAC Control, so I am using that in this particular case. And this is just an interface that gives me a status of the PAC Control program that's running on there.

Benson Hougland: 00:37:47.937 So what does it look like? Well, I'll bring up another screen here, and that's it there. This is free software as well — download from the Opto 22 website. And I developed a bottling line application in this guy with just various flow charts. And inside, you can put in script. You can just use condition blocks and action blocks as well. But in this case, I'm just using some of the scripting here. And over on the right side, this is where I've configured all my I/O. So I've got my LED backplane in here. I've got some push buttons. I've got my potentiometer as I described. These are all the I/O points I want to use to build my application. And then I have a lot of numeric variables in there. So these are, of course, the result of inputs coming in or maybe outputs, but more importantly, the data I ultimately want to send up to David and the UNS. And then I can go into the debugger and do a quick look at what's going on in there. And the debugger just shows me all these various statuses, where we're at, so I get a real-time view. And I can even go in there and auto-step through some of these blocks to make sure they're doing what they're supposed to do. And they are. So we've got this all squared away already. Just a quick peek into the programming language for the bottling line. So now that that's done, and once it's been created and downloaded into the EPIC, it'll run forever. Now the next step is — well, how do we get the data out of the PAC Control program and on its way up to the UNS?

Benson Hougland: 00:39:14.573 So let's swing back over to my main groov Manage screen. And I've got a lot of different options here, but for my particular demo, I will be using Ignition, and I'll use Ignition to model up that data appropriately for the UNS. So in this case, I want to get data from PAC Control into Ignition. There's a couple of ways to do that. The way I'm doing it is with OPC UA. So this does have a built-in OPC UA server, so it's taking all those tags from the PAC Control program, making them available via OPC. That's the method I'll use from Ignition to pull all the data in. You'll also notice that there's MQTT here as well, and it is built-in. It's native. However, because I want to do some modeling of the data, I'm electing to use Ignition for that. But native MQTT is also supported, with both Sparkplug and string payloads. So OPC, MQTT, Modbus, there's all kinds of different ways to get data out of this device, including RESTful API, as I saw in the comments. Okay, so now that we've got that all set up, now I want to go into Ignition. So indeed, Ignition is running on this platform. Just as I now understand, N3uron can run on this platform. In this case, we've built a nice interface to be able to get into Ignition. I can run Ignition Edge or full Ignition, your choice, depending on what your application requirements are. I'll go ahead and log in and show you just a couple of settings here in the gateway that are required for this application.
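In the demo, Ignition's built-in OPC UA client does this work, so no code is needed. Purely for illustration, here is how any OPC UA client could read one of those PAC Control tags from the EPIC's local server, sketched with the node-opcua package for Node.js; the endpoint, security settings, and node ID are assumptions, not taken from Benson's setup:

    // npm install node-opcua
    const { OPCUAClient, AttributeIds } = require("node-opcua");

    (async () => {
      const client = OPCUAClient.create({ endpointMustExist: false });
      await client.connect("opc.tcp://localhost:4840"); // localhost, as in the demo
      const session = await client.createSession();     // a real setup adds security/auth

      // Hypothetical node ID for a PAC Control tag exposed by the server
      const dataValue = await session.read({
        nodeId: "ns=2;s=LineSpeed",
        attributeId: AttributeIds.Value,
      });
      console.log("LineSpeed =", dataValue.value.value);

      await session.close();
      await client.disconnect();
    })();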

Benson Hougland: 00:40:42.586 So again, everything is secure, so I have to log in with a username and password. And in here, this is where I'm on the configuration pages. A couple of quick things. Number one, OPC connections. I told you I spun up the OPC UA server to get all the PAC Control tags. This is the configuration to talk to that OPC UA server, all localhost, so this is all happening right on the EPIC, not going to an external device or anything like that. So now that we have all the tags in, I can pull all those in and start doing some interesting things with them. Now, the other thing that Ignition is really good for, as is N3uron, is this notion of connecting to other devices. So if I just switch this real quick over my shoulder, I've got some Siemens and Allen-Bradley PLCs and so on, but I'm keeping mine simple. So I won't be pulling in some of the other PLC data for my demo. Jose will show you that using the N3uron software.

Benson Hougland: 00:41:36.565 So the final configuration, and I'll switch back, is down here — MQTT Transmission. What is that? MQTT Transmission is a module that fits inside the Ignition ecosystem, and its purpose is to act as an MQTT Sparkplug B client. So I've already shown you another MQTT client, and there are plenty to choose from on EPIC. But in this case, we're going to use the MQTT Transmission module. And indeed, the setup for that is pretty straightforward. You probably remember that cluster URL that Kudzai had mentioned earlier; I have that in there along with my username and password. And I've established a connection over that outbound port through my network gateway here at Opto 22, and now I have a secure, encrypted connection to the HiveMQ Broker. So that's pretty much all the settings you need to do in the main Ignition Edge gateway. The final part is how do we take those PAC Control tags that I described earlier and formulate them into a way that I can publish that data to the UNS. So for that, we'll use — go ahead, Kudzai.

Kudzai Manditereza: 00:42:43.089 Benson, maybe before you go into that, there is a question here that maybe is more related to what you were showing here. So Sabita is actually asking — I hope I'm pronouncing your name okay: Can MQTT be natively used in groov with CODESYS without going through Ignition?

Benson Hougland: 00:43:03.370 Yes, it can. There's a number of ways to do that. What most of our customers end up doing is they use the libraries that are available from the CODESYS Store, which does have MQTT Sparkplug B, and then you integrate that directly into your CODESYS program. That's one method. There's several, but that's one method. You don't need to use Ignition for that.

Kudzai Manditereza: 00:43:24.392 Got it. Perfect.

Benson Hougland: 00:43:25.842 Terrific. All right, so thank you for that question. We'll jump into the Designer. That's up on the screen now. So the Designer is just software that runs on a PC; it comes from the EPIC, loads onto your PC, and allows me to start developing my application from a UNS modeling perspective. First thing is I bring in all of my PAC Control tags. So now I have them all here. Everything is in real-time, and this is from that OPC UA server. But I don't want to send all of this data to the UNS because there's a lot of data that most people don't need. So we've defined what the UNS is, what the namespace looks like, and so what I do is simply create another folder here at the top called MQTT tags. And in there is where I've defined my topic namespace. So my group ID is California, my edge node ID is Line 1, and my device ID is Bottler. And then what I've done is I've simply dragged those appropriate tags up into here, giving them the right namespace and the right tag names. And once that's configured, I simply hit the button and I start transmitting on change anything that's happening from this system. There's no need for anybody from the outside to come and connect to my system. I can do it all from here and send it on its way. Now, one quick note: I did do this very simply. In other words, I've taken the PAC Control tags, put them into an MQTT namespace, and sent them on their way. But indeed, I could do the same with UDTs. If I had multiple bottling lines that had a lot of similarities, I would probably create a UDT (in fact, that's what I usually do) and have that data sent up. But we're keeping it simple here: quickly get tags, put them into tag folders, and publish that data.

Benson Hougland: 00:45:11.001 So this is great for — now David has real-time access to this information flowing through the broker, all Report by Exception, all edge-driven, and 100% encrypted and secure. But what about the operator? So I showed you quickly down here on my screen, I've got an operator interface, but I've also got one that I can just access from my browser, and I'll go into groovView. And groovView is just a built-in web-based HMI, not meant to be SCADA or anything, but a nice operator interface that allows me to start the process, do whatever I need. I'm going to come down to this page. Here's my Bottler production overview. I can see what my current stats are, what my line bottles per minute are, the state, and so on. And I can also enter in some data for how much I want to — what my production might be. And that data is now also just sent to David because I changed it. And then the other important thing is if there is a downtime event. So I have a button here that's going to simulate a downtime event. I press it, the backplane turns red, as you can see there. And then I can enter in a downtime reason. It could be maintenance. This is also sent up to David so that he knows that he can't send a production run down to me because I'm in maintenance. I'm in a downtime status. And a real quick way for the operator to enter that information. Once we've cleared the maintenance, I'll take it back off. And now, David will get the message that I'm no longer in a downtime status and he can start sending me some production runs.

Benson Hougland: 00:46:42.049 So that's kind of all of it in a nutshell: we've got the EPIC with its I/O, a control program running, OPC UA into Ignition, model up the data, and send it up on change to the HiveMQ Broker for ingestion by just about any software that supports Sparkplug B. And in this case, that's going to be David. So he's going to start consuming all this data, and he'll be able to auto-discover it. I don't have to send him my tag names or anything like that. It'll just pop into his system for building his application. So with that, I'll stop my share.
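The auto-discovery Benson mentions falls out of the Sparkplug namespace: a consumer only needs a wildcard subscription, and the DBIRTH message announces every metric. Here is a minimal sketch with the Node.js mqtt package (the broker URL and credentials are placeholders); note that real consumers decode the protobuf-encoded Sparkplug B payloads, which this sketch skips:

    const mqtt = require("mqtt");
    const client = mqtt.connect("mqtts://YOUR-CLUSTER-ID.s1.eu.hivemq.cloud:8883", {
      username: "demo-user",
      password: "demo-password",
    });

    client.on("connect", () => {
      // One subscription covers every edge node and device at the California site
      client.subscribe("spBv1.0/California/#");
    });

    client.on("message", (topic, payload) => {
      // e.g. "spBv1.0/California/DBIRTH/Line 1/Bottler" carries the full metric
      // list; DDATA messages then deliver Report-by-Exception changes
      console.log(topic, "->", payload.length, "bytes of Sparkplug B protobuf");
    });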

Kudzai Manditereza: 00:47:19.923 Thank you, Benson. And Jose, you'd like to show us your part of the demo?

Jayashree Hegde: 00:47:29.519 You're on mute, Jose.

Jose Granero: 00:47:31.764 Yeah. Okay. Let me share my screen.

Benson Hougland: 00:47:36.584 While he's getting ready, there was a question that popped up about Sparkplug. Both versions 2.2 and 3.0 are supported. Yes, both are supported.

Jose Granero: 00:47:49.844 Well, first, I'm going to start here from our site, showing how N3uron can be installed. I already have N3uron installed, but it's incredibly easy. You go to our website, and you have this big Download N3uron button at the top of every page, which takes you to the download page. You can download N3uron for whatever operating system. It comes with a two-hour trial period, which can be restarted any number of times. So you can fully evaluate all that we're showing here today and get a proof of concept without having to buy [inaudible]. Installation takes less than a minute, and once it's done, the web user interface automatically opens up in your web browser. N3uron is extremely light. This is the Windows setup, the Windows installer. And for example, this one is only 38.9 megabytes. And we recommend installing the whole package. Okay?

Jose Granero: 00:48:59.912 Now I'm going to access — move this over here. Thank you. I am going to access a remote node where I'm going to start configuring everything from scratch. To access the Web UI, I only need my web browser. This is the web user interface where everything is configured in N3uron. As you can see, it's pretty clean and intuitive. And the first thing you typically do when starting to configure N3uron is to create a new module. Since we have to connect to the OPC UA server that is providing the data for the filler, let's start by creating a new OPC UA client module here. Now, let's select the functionality of the module in the dropdown menu. Okay, we need to save the logger and API configuration sections for the module here. And once I've created the module instance, I can start creating the connection to the OPC UA server. So let's create a new connection. Let's name it, for example, Filler. And here in the endpoint URL, I need to type in the IP address or URL of the OPC UA server, and the port.

Jose Granero: 00:50:46.322 And this button here should offer me the available endpoints. I'm going to select the only available endpoint, okay, to check the configuration. And now, I'm connected to the OPC UA server. But still, since I am using Sign & Encrypt security mode, I need to trust the certificate the server has just sent. And I need to do the same on the server side for the certificate my node is sending to it. And once I do it, I can, for example, start browsing all the data available in this server. Once I've created the connection, the next step is to start creating tags. So I click here on Tags. I can, for example, create a new group. I'm going to name it Filler. And I'm going to create a new tag. And here everything is configured at the tag level. So for example, I'm going to create a tag named Infeed, which is going to be a number, in particular an integer. The Deadband is what determines when a new event is going to be generated. Here we can also define whether this tag is going to have write permissions or not. We can also select, in this menu, the Persistency mode. It can have memory persistency.

Jose Granero: 00:52:45.154 So in case the module or the node is restarted, the values are going to persist — or disk persistency, which is ideal, for example, for a set point or something like that. We can also add some details here to this tag, for example, Filler, Infeed, or something, engineering units if they apply, and also a default value in case we don't use any Persistency mode here. And this is probably the most important section, where we have to define the data source these tag values are coming from. Okay? So here again, I need to select the functionality of my module, OPC UA client. The name of the instance I created previously, because I can have several different instances, okay, and the name of the connection; I used Filler, okay? Once I do this, I can use this browser and select my tag. Right now, it's not possible to drag-and-drop the tags to the model directly, but from version 1.21.5, this functionality is going to be available. So it will be much easier for users to start using the OPC UA tags in the data model. So I'm going to select, for example, here, Filler, and we have said Infeed, okay? Select it here. And well, this is the scan rate, the sampling time. It's going to be polled every five seconds. Okay?

Jose Granero: 00:54:28.904 If I want to store historical values, I can do it also here: I would enable that option and point to one or several different historian instances, installed either locally, remotely, or both. And I can also configure as many alarms as needed down here in case I want to use them. Let's keep it simple for the time being. I save the changes. So if I want to check that everything is configured properly, I just have to come here to real-time, and I can see my tag displaying with good quality. So I could keep on working the same way. For example, duplicate this tag here. Say that it's the Outfeed. Sorry, I have a typo here. And manually change the node ID. Now here, they're the same if I go to real-time. Something's wrong. I'm going to do it manually.

[silence]

Jose Granero: 00:55:56.663 Okay, here it is. And I could keep on working the same way, but another way to work is just to export this to a CSV file, configure everything in that CSV file, and import it back to N3uron once you're done. But given that we are going to have several different lines and that we are going to have more than one filler, it makes sense to start working with templates. So to do that, I can drag and drop the Filler to the Templates panel, okay? And I'm going to create a new custom property. For example, I'm going to name it Line, which is going to have a text type because I'm only going to use it to build an expression in the node ID field. So if I come here to use that custom property in the node ID field, I must add an equals sign, and quotes because this is a string. Here, a plus. And now, to use the custom properties, this must be between curly brackets. So I'm going to add a curly bracket here and the name of the custom property, Line, curly bracket again. I'm going to remove this one. I'm going to copy this one here and do the same for the outfeed.

[silence]

Jose Granero: 00:57:55.603 Here. So now, I delete the group I created previously. Now if I right-click here, I can select Filler to create a new instance of that template. By the way, templates follow the object-oriented paradigm in terms of inheritance. So any changes you make to a template are going to be automatically inherited by all the instances of that template. So, a new filler: I'm going to assign a value to this custom property, which is one. And if I come back to real-time, here it is, it's working. And I can also embed a template within another, and the value of a custom property can be inherited from an upper level in the hierarchy. That's what I'm going to do. For example, given that the filler is going to belong to a line, I'm going to create a line with a custom property. I'm going to give it the same name, Line. And here in tags, what I'm going to do is to create an instance of the Filler. And instead of assigning a value to this custom property, what I'm going to do is to point it to the custom property of the line.

Jose Granero: 00:59:26.178 So I'm going to add an equals sign, a curly bracket, and the name of the custom property I've used for the line. So I'm going to save it, edit it, and now I create a line. And as it's pointed to the custom property here, it should be working too. Let's try this. So this is the recommended way of working when you have to create many different instances: if you have to create something at least twice, use templates. So here we would have Line 2; I change the custom property, and here's Line 2. And if we had to create many, many instances, again, we can export this to a CSV file, create all those instances with the custom property values they must have, and then we can import it back to N3uron and have the whole data model created. To connect to the Labeler, we would do the same. We would create a Modbus client module. Again, we would select the functionality of the module right here. The configuration is [inaudible] very similar regardless of the module. So of course, there are differences — it depends on the technologies you're using, but it's very similar. We would create, again, the connection, etc., etc.
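The exact strings Jose types are not fully audible in the recording, but the mechanism he describes (an equals sign to start an expression, quoted literals, and the custom property in curly brackets) composes a node ID along these lines; the OPC UA node ID format here is a hypothetical example:

    Template node ID expression:   = "ns=2;s=Line" + {Line} + ".Filler.Infeed"
    Instance with Line = 1:        ns=2;s=Line1.Filler.Infeed
    Instance with Line = 2:        ns=2;s=Line2.Filler.Infeed

One template can therefore serve every line, and because instances inherit from the template, editing the expression once updates all of them.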

Jose Granero: 01:01:11.967 By the way, N3uron, like Ignition, provides contextual help, which gives you pretty useful tips. All we have to do is expand it down here. And let's suppose I've already created the connections to the three manufacturing cells in my line; now I can also create a Derived Tags module. I'm going to name it Derived. I need to save the default logger and API configurations. And with the Derived Tags module, we can create other types of tags. So for example, if I want to add two different values coming from two different tags, I create it here. I should select the module type again, Derived Tags, and the name of the instance, Derived. And here you can select the type of tag. In this case, it's going to be an expression tag because it's going to be the result of a script written using Node.js. The output contains the value, timestamp, and quality of the data. And here I can — sorry. Now here, I need to add all the tags I'm going to use in the script.

Jose Granero: 01:03:07.940 So for example, if I'm going to use the Line 1 Infeed, I can use an alias here and select the tag path for that alias. Okay. And I can write my expression using Node.js in this way. And then [inaudible].
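As a rough illustration of what such an expression tag computes, here is a minimal standalone sketch. It assumes two aliases named infeed1 and infeed2 were added to the tag list above; the value/timestamp/quality output shape follows the description in the talk, and the function wrapper and quality code are illustrative rather than documented N3uron API:

```js
// Hypothetical derived-tag logic (in the real module, the script body is
// evaluated by N3uron; names and output shape here are assumptions).
function deriveTotalInfeed(infeed1, infeed2) {
  return {
    value: infeed1 + infeed2,  // combine the two aliased tag values
    timestamp: Date.now(),     // epoch milliseconds
    quality: 192,              // assumed OPC-style "good" quality code
  };
}

console.log(deriveTotalInfeed(39, 42)); // e.g. { value: 81, timestamp: ..., quality: 192 }
```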

[silence]

Jose Granero: 01:04:29.663 And it's in capital letters for the —

[silence]

Jose Granero: 01:04:46.086 So this way I can create simple or more complex expression tags. Let's imagine that we have already created our data model. The next thing we should do is create a Sparkplug client instance to push all this data to the HiveMQ Broker. So I need to create a Sparkplug client instance. Again, it's always the same.

[silence]

Jose Granero: 01:05:31.047 Now, here I have to define the topic definition [inaudible] before. So for example, I can use Madrid for the group. Here I have the edge node, which is going to be both in — oh, I'm sorry. [inaudible]. And here I can select whether I'm going to use Sparkplug v2 or v3, and start adding as many clients, as many brokers as I have in my architecture, provided I have redundant brokers. So for example, I'm going to add MQTT broker 1 and enter the URL; the client ID is usually assigned automatically, and then I set the authentication mode we're going to use. And in case I have several brokers here, what the client is going to do is go over that list: in case it can't connect to the first broker, it's going to try to connect to the second one. And if it couldn't connect to any broker, it would initiate the store-and-forward mechanism and start storing all the data locally in order to prevent any data loss.

Jose Granero: 01:07:05.237 And Sparkplug also provides an important piece of functionality, which is the primary host, because the client can detect whether the primary application is connected to the brokers or not. And in case we enable it, it would do exactly the same: if the primary application loses connection to the broker, the clients would start storing the data locally until the communication is restored. And we can enable or disable the store-and-forward mechanism per device for the data we want to publish. I'm going to show you — I'm going to jump to another node where we have already created the whole data model.
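For readers who want to see the mechanics, here is a minimal sketch of watching the primary-host state from a Node.js MQTT client. Assumptions: the Sparkplug v3 STATE topic layout (spBv1.0/STATE/<host_id>), a hypothetical host ID "IgnitionPrimary", placeholder broker URL and credentials, and the npm mqtt package:

```js
const mqtt = require("mqtt"); // npm install mqtt

const client = mqtt.connect("mqtts://your-cluster.hivemq.cloud:8883", {
  username: "edge-node", // placeholder credentials
  password: "secret",
});

client.on("connect", () => {
  // Sparkplug v3 primary hosts publish a retained JSON state message here.
  client.subscribe("spBv1.0/STATE/IgnitionPrimary");
});

let buffering = false;

client.on("message", (_topic, payload) => {
  const state = JSON.parse(payload.toString()); // { online: true|false, timestamp: ... }
  // When the primary application goes offline, edge clients buffer data
  // locally (store-and-forward) until it comes back online.
  buffering = !state.online;
});
```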

Jose Granero: 01:08:16.893 Okay. Here, we have a Modbus client, OPC UA clients, and an MES client. We have all our tags here, 74 tags, with all our machines and the two lines. And in the Sparkplug client configuration, we have a group named Madrid, Line 1, Line 2, and also the [inaudible]. And we are publishing everything to the HiveMQ Broker. So if I open, for example, Ignition — my local Ignition gateway, which is also subscribed to the broker — I can see how California is pushing all the data, as well as Madrid with Line 1 and Line 2. And finally, I wanted to show you something else. With our web UI module, you can also create very simple HMIs; it's fully web-based. So I'm going to open it here.

[silence]

Jose Granero: 01:09:54.281 For example, I can start my line from here. This should sync not here, but here. Okay. Now I've already started my line number two. And that's it. I hand it over to you, Kudzai.

Kudzai Manditereza: 01:10:23.558 Thank you. Thank you so much, Jose. So maybe let's kind of bring it all together now with — David, if you could show us your part of the demo. And while David gets ready to show us his part of the demo, I think there's a question here for you, Jose, from Bjorn, who was asking: Is there an official Docker image available for N3uron?

Jose Granero: 01:10:48.710 Not official, but you can very easily create a Docker image of N3uron. In fact, we are going to publish that in our knowledge base. I don't know whether tomorrow or on Friday.

Kudzai Manditereza: 01:11:04.128 Okay. Thank you.

Jose Granero: 01:11:04.787 But yeah, you can use Docker screen here.

Kudzai Manditereza: 01:11:12.642 Okay. David, over to you.

David Schultz: 01:11:14.740 All right. Thank you. So if you go back to the model of what it is that we're building today, you have both the California location that's represented by the Opto 22 groov EPIC, which Benson showed you, as well as the Madrid plant, which was all developed in N3uron. From there, it's all been published into a HiveMQ Broker that's running in the cloud. Now it's time for the enterprise MES system. In this case, we're going to be using Sepasoft to consume some of that data. So much like Benson showed how you publish the information out of Ignition through the MQTT Transmission module, I'm going to use the MQTT Engine module. I'm subscribing to the exact same endpoint. So there's our HiveMQ Broker; it has the UNS demo and all of that. And now I'm able to subscribe to all of those tags that are published in the broker. So we'll head back over here to Ignition. And you can see in MQTT Engine in Ignition, I have the California plant, I have the Madrid plant, and I'm now able to consume all of this information. So there is the tag namespace, there is the Edge namespace that's coming right from the broker. So now I'm able to consume this information and start running some manufacturing operations on it. The first thing I did is create some different namespaces that are going to exist more at the enterprise level. And where Benson talked about doing some UDTs, I've actually created an Edge UDT. I can now parse down the specific information that I need in order to run my application. So I have my Edge namespace. In this case, we're going to be simulating a line. And when you're working with OEE, there are some counts that you need and some states that you need, and that's the information represented here. And I've also created a functional namespace.

David Schultz: 01:13:06.919 So this is the model that is going to do all of my calculation. I'm going to grab my Infeed and Outfeed: this is for the particular run that I'll be demonstrating. It calculates OEE and gives you the availability and some performance. And there are a few other metrics that are calculated by the OEE engine itself from Sepasoft. So now that I have my tag models, I've created instances of those. And since I'm at the enterprise level, instead of coming in as California Line 1 for group and node, I've now raised it up one level. And this is more of the Unified Namespace, where I'm sitting at the enterprise. I've created instances of those tags. So now you can see in those particular instances, I've used a parameter that gives me a tag path. So if for some reason there's a piece of equipment that's antiquated and doesn't follow the semantic hierarchy that we want, I have a little bit of flexibility in terms of making sure that the information is mapping correctly. So this is the instance that I've used to create my enterprise California location of Line 1. So now that Ignition is all set up, I'm consuming the Edge namespace, and I've configured the two models that I want to create. I now need to go into Sepasoft and actually start configuring it so I can run some production models on this.

David Schultz: 01:14:29.919 Before we get going, I think it's important for people to understand ISA-95. If you're not familiar with that particular standard, Sepasoft has a great website that walks through all of the various pieces. This mostly focuses on Part 2 of the standard: all of the objects that are used for your manufacturing execution system as you look at it from the business level. You can see it's a fairly thick document, but it's something that you want to familiarize yourself with. The more you understand ISA-95, the more MES will make sense. So this is the user environment that I have developed within the Ignition Perspective module that is going to be used to configure my manufacturing execution system in Sepasoft. Following that ISA-95 standard, I have created my enterprise, I have the sites that I'm going to configure, I have the areas that are part of it, and then finally, I get into the lines. The lines are really where all of the action occurs in a manufacturing process, and there are some things that need to get configured as such. The first thing I need to do is set up the counters. What the counters measure is what's coming into the line (that's your Infeed); what good product is coming out (that's referred to as the Outfeed); and then also what is my reject. Now, within Sepasoft, it does have the capability, if you provide two of those values, to calculate the third. But in this case, because we have all of those values available, we're capturing them all. You'll also notice that the tag path here has been hard-coded. There actually is a mechanism, called the equipment path, that you can parameterize so it follows that same tag namespace. But for the purposes of this particular demo, I have just hard-coded it.
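The counter relationship is simple arithmetic, which is why Sepasoft can derive the third value from any two. A quick sketch with made-up numbers:

```js
// Given any two of the three line counters, the third follows.
const infeed = 600;              // bottles entering the line
const outfeed = 580;             // good bottles leaving the line
const reject = infeed - outfeed; // 20 rejected bottles

console.log({ infeed, outfeed, reject });
```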

David Schultz: 01:16:24.167 So once you like what you're seeing in here — I need to move the screen over so I can actually get to this arrow. So those are the counters: infeed, outfeed, and reject. I've also done the same thing for the tag path. Now, it's a best practice within Sepasoft that you let the system do as much of the control as it can. However, there are things that you may want to capture from the edge, like here we have an equipment state, which is what the actual equipment is doing. You can assign this a tag value as well. And just looking through this list, there are quite a few things that you can create a tag for, so it becomes more tag-driven than automatic delivery within Sepasoft. You also have to assign a shift. This is just saying, "When is this line available so that I can run some production on it?" And I've also defined a live analysis. What the live analysis does is calculate all of my OEE for that particular work order, starting at the beginning of the run. And these are the data points that you'll see a little bit later: we're going to get that product code and the work order OEE. These are just the parameters that can be consumed within the OEE system itself. And then backing out of that, I want to spend a little bit of time on mode and state. So real fast, it's another abstraction that exists within Sepasoft. Mode is: what is the equipment supposed to be doing? Am I supposed to be in production, or am I in maintenance? And that informs the overall OEE calculation. And then state, of course, is what the equipment is actually doing. So if I'm in production mode, meaning I'm supposed to be producing, but I'm in an unplanned downtime state, that's going to negatively impact my availability score.

David Schultz: 01:18:11.953 So now that I've configured the overall hierarchy of the business, all of my equipment within the enterprise, I also need to start creating some materials. In the ISA-95 standard, and the same here within Sepasoft, you have the idea of a material class, which is "Beers", and then we also have a material definition. And this is where you just set up the name of the material definition: in this case, ale. But where can I run ale? Much like the schedule on your line: what kind of equipment can my material definitions run on, and what does that process look like? So looking at my California bottling section, you can see I have Line 1 selected. And there are some Changeover settings. What Changeover captures is how much time it takes you to go from running Manufacturing Work Order 1 to Manufacturing Work Order 2, and it defaults to 60 seconds. That's something you're measured against. So that would be the changeover mode: you're not supposed to be producing. And then when you finally are, what's the run rate? There are two rate parameters that Sepasoft uses. One of them is the schedule rate. When you're scheduling a production run, it's how much time you should allot for this particular work order. It's going to be less than your standard rate, because the schedule rate is how you schedule your equipment, so it's a lower number: we're going to get out at least a minimum of 60 bottles per minute, but really, the line's capable of running 72. You wouldn't want to schedule at full speed; in case there's a downtime event, the lower rate gives you a little bit of buffer. I should note that you will be measured against the 72, not the 60; the 60 is just for scheduling purposes. And then you also assign any scaling that you need.
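To make the two rates concrete, here is the arithmetic behind them, a sketch using the demo's numbers:

```js
const quantity = 600;    // bottles in the work order
const scheduleRate = 60; // bottles/min used to reserve time on the calendar
const standardRate = 72; // bottles/min the line is actually measured against

const allottedMinutes = quantity / scheduleRate; // 10 min blocked on the schedule
const idealMinutes = quantity / standardRate;    // ~8.3 min at full standard rate

// The ~1.7-minute difference is the buffer for changeovers and small stops.
console.log({ allottedMinutes, idealMinutes });
```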

David Schultz: 01:19:58.714 So for instance, if you're bringing in beer bottles and you're exporting cases, that's where you'd have a 12-to-1 scale. That's where you can start scaling some of your outfeed units and that type of thing. So that's just a little bit of getting everything set up. So now that we have our equipment set up, we have our materials set up, and we know what can run on each particular line, it's time to start bottling some beer. The MES work order table: these are the manufacturing work orders that your ERP system has said we need to make, and we're at the enterprise level. And because I've set up both lines and both plants to operate, I can run any one of these work orders at any one of the plants. Commonly, this work order table is going to be populated either through a business connector or through a SQL query. In this case, I've just done it manually; we don't have any of that information. So you can choose what site and what line you want to be on, and we're actually going to schedule the packaging of some beer. We're going to choose what work order we're going to work on. It tells us what product we're going to make, this is the operation, and then how much of it we're going to make: we're going to package up 600 bottles. And now that I have that scheduled, you can see that it's showing up on my calendar for when I'm actually going to start making it. And then I can come here — these are some components that are available from Sepasoft — and I'm going to go ahead and begin my OEE run.

David Schultz: 01:21:26.875 So now, I'm going to start that changeover process of making some beer. In order to do that, I'm just going to come in real fast and make sure that my quantity is set to 600. If you remember earlier in the demo, Benson changed that from 600 down to 12. I'm going to say, yep, I have that quantity set. Normally, this is something that you would do from an HMI; you wouldn't come to the OEE system to do it. I'm now going to end my changeover. I'm now in production, and I'm going to go ahead and start my line. And if you look over at Benson's screen, you'll notice that it's blinking green. It's actually making product.

Benson Hougland: 01:22:04.476 Yep, indeed. And when you sent down the quantity, that showed up on the local operator interface, so we know what the run is going to be.

David Schultz: 01:22:10.970 So now that we're in production, you can see that my calendar event moved back. I can see I'm in production; I'm now active on that. So let's take a look at what this looks like. I have a dashboard here; I have it defaulted to California, bottling Line 1, and you can see I have a production run of 600. You'll notice that my OEE and my performance have dropped, and that's just because we just started the line. This is one of those differences between a SCADA and an OEE system. If you refer back to the ISA-95 standard, most of the time your Level 2 data, that's your SCADA data, operates at the second or sub-second level. So every time a count passes, you're going to see that number going up. But something like an MES is more of a minutely-type application. So in this case, every 60 seconds, Sepasoft takes a look at: what were those counts, and what was my overall performance, what was my availability, what was the quality over that last 60 seconds? And you notice that it just did a calculation. It captured 39 bottles that came through. And because it should have had more than that, that's where you're starting to see your performance drop off a little bit. So you'll notice here in the OEE namespace, you can now see those exact same values: my ideal count says I should have had 77 come out in this period, and I only got 39. And that's a result of the fact that the line is running a little bit slower. Benson talked about that potentiometer that he can use to speed up or slow down that line.

David Schultz: 01:23:50.934 And unfortunately, well, it looks like Benson's already done some rejects in there. So you'll notice that there's some quality data that has affected the quality of this production order. What that means on the quality side is that you had a reject: something didn't meet the specification of what was supposed to be there, so the system said, yeah, that's a bad one, and it got rejected. So just from an understanding of OEE: availability is a maintenance issue. Am I supposed to be running, and am I actually running? Well, we're at 100%; we have not had any downtime events. Performance gets into: hey, I'm making product, but is my line running at the speed that it needs to be? Am I producing the number? If you remember, we're supposed to be at 72 bottles a minute, and we're running a little bit lower than that. And then finally, quality is: are we making good product? That's the ratio of the good count to the bad. And those numbers are then combined. For me personally, I like staying at the A, P, and Q level, because while OEE gets used a lot, I think the underlying values are much more beneficial.
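Pulling those three factors together, the OEE math for the 60-second window just described looks like this. A sketch: the reject count here is hypothetical, since the exact number wasn't stated in the demo:

```js
const availability = 1.0;    // no downtime events yet
const performance = 39 / 77; // actual vs. ideal count, ≈ 0.51

const goodCount = 37;        // hypothetical: assume 2 of the 39 were rejects
const totalCount = 39;
const quality = goodCount / totalCount; // ≈ 0.95

const oee = availability * performance * quality; // ≈ 0.48
console.log({ availability, performance, quality, oee });
```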

David Schultz: 01:25:03.524 So popping back over here to the engine, you notice that we had the California and Madrid namespaces coming in, all those edge namespaces. Well, I'm also publishing that enterprise namespace. I've configured the transmission module with another transmitter to push this out. So you can now see my line; I'm also calculating the overall outfeed counts. Sorry, it gets very dry in my basement. And you can see in real-time in my Unified Namespace that I'm getting both an edge namespace that's part of an overall line namespace, and I also have a functional namespace. I'm able to view in real-time what's running on Line 1 in my bottling area of my California plant. What are my overall counts? What's its current state? How is it doing? What's its performance? What's my OEE? And all that information. You notice that I do have a runtime, but I don't have any unplanned downtime. The idea here is that within a Unified Namespace structure, I can very quickly assess the overall state of my business, the overall health of the business. One thing I will say from an architecture standpoint: it's unusual to have both an edge namespace or a plant namespace publishing into an enterprise broker along with an enterprise namespace. I generally recommend that you have a site or area broker that captures all of the edge namespaces, and then create a single publish of that entire site namespace into an enterprise namespace. The idea is that all of these data models that you've created are going to be replicated throughout your enterprise, so that we're able to contextually provide the right information at the right time, in the right format, to the right people.

David Schultz: 01:27:00.485 A couple of other graphs that I will commonly show here. One of them is downtime by occurrence; you can see we have not had any downtime events. That just shows how many times a particular downtime reason occurred: was there a maintenance issue, was it an electrical issue or mechanical? And then of course downtime by duration: when it was down, how long was it down? This will give you the top five in a Pareto chart. And you can also see some shift performance: the top blue line is how much we should have produced, and the bottom line is how much we did produce. That just gives you a nice view into what's happening. It's going to take about five minutes for this to run. Actually, Benson, if you want to speed the line up a little bit and really get that thing going, we can finish that up. I think that's pretty much what I had at this point.

Benson Hougland: 01:27:53.223 I'm running at 74 bottles a minute right now.

David Schultz: 01:27:55.309 There we go. Now we're making some beer. So it looks like it's going to take a little bit longer for this to finish up. So I don't know if anybody has any questions at this point. Really, that's the end of the demo. You can see exactly what's occurring in here. Just running through the use case again: there are edge namespaces publishing into a broker; here at the enterprise level, I'm able to consume that information, operate production, calculate OEE, and publish all that information back to that broker for consumption by other people.

Kudzai Manditereza: 01:28:33.039 Okay. Thank you. Thank you so much, David. So I think there is a question here. I think you might be the right person to answer that one. So it's from Rick. So he's asking — can you please go over again how the edge data is getting to the UNS and the MES?

David Schultz: 01:28:56.448 Sure. Actually, if you can bring your presentation up — there's that slide that shows the two edge namespaces. So that's the demo that both Benson and Jose showed. Let's see. There you go, right there. That's the architecture that we're following. So there on the left, you can see that there is either the N3uron application or, in Benson's case with the Opto 22, Ignition Edge. That's taking the raw tag data that's coming from the PLC, putting it into a data model that we're using as a line, and publishing that information into the HiveMQ Broker that's sitting in the cloud. From there, I'm consuming that information using MQTT Engine, and I'm able to utilize those tags to operate and control the line: start production orders and calculate OEE. So it's Ignition with the transmission module, or the N3uron application, publishing, and Ignition with the engine module subscribing to that information. And then finally, when I'm done creating the OEE calculations, I'm publishing that information back in an enterprise namespace, not just as a plant or a site namespace.
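As a rough sketch of that round trip in code: plain MQTT with JSON payloads is used here for readability, although the demo itself uses protobuf-encoded Sparkplug payloads, and the topic names, broker URL, and computeOee helper are all illustrative:

```js
const mqtt = require("mqtt"); // npm install mqtt

const client = mqtt.connect("mqtts://your-cluster.hivemq.cloud:8883");

// 1. Consume the edge namespace published by the plants.
client.on("connect", () => client.subscribe("plants/madrid/line1/#"));

client.on("message", (topic, payload) => {
  const counts = JSON.parse(payload.toString()); // e.g. { infeed, outfeed, reject }

  // 2. Calculate a metric at the enterprise level...
  const oee = computeOee(counts);

  // 3. ...and publish it back under an enterprise namespace for other consumers.
  client.publish("enterprise/madrid/bottling/line1/oee", JSON.stringify(oee));
});

function computeOee({ infeed, outfeed }) {
  return { performance: infeed ? outfeed / infeed : 0 }; // placeholder calculation
}
```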

Kudzai Manditereza: 01:30:18.835 Awesome. Thank you, David. So there are a couple of questions coming in here. But before we jump in to address the rest of the questions, let me try to summarize a bit, and then I'll give you folks a chance to give your thoughts as we wrap up. For me, what's so astonishing about the Unified Namespace and Sparkplug is the idea that you could have all this data from multiple geographies, different plants, different systems that talk different protocols, brought into one interface where all the data is in one repository. Any system that needs to interact with that data finds it in that one place. And if any of you in the audience have experience with industrial system integration, try to imagine what it would take to achieve what we just did with traditional protocols. It's months and months of work just trying to get things going. But here it was just a matter of hours, even including preparing for this demo. There wasn't much back and forth; it took hours, literally, to get all of this data into one unified location. So for me, that's really something amazing about this technology. Maybe, Benson, you want to go first?

Benson Hougland: 01:31:50.702 Yeah, sure. A lot of our customers are doing exactly what we're doing here. In this particular case, the use case is to pull all that data that I'm delivering from the edge, into the broker, into an MES application. But that's not where it stops. The beauty is that whether I have an MES application, a historian, or perhaps a SCADA HMI, all of those applications are now consuming the same data that I'm producing. So we're not creating one-to-one connections between MES and the PLC, or SCADA and the PLC, and having to deal with all of these multiple connections. Further, I'm not polling the PLC for its information; I'm sending it on change. So all of the applications that subscribe to the data will always get the real-time information. And further, they'll always know the state of the machine. This is one of the advantages of Sparkplug B: within it, we not only define the topic namespace, but we also have state management. So we know that that machine is running even if it's not producing anything; in other words, even when there's no data change, the system always knows the state of the entire enterprise. So those are a couple of real key reasons why Sparkplug B running on top of MQTT can make a big impact on your industrial applications.
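The state management Benson describes is built on MQTT's Last Will mechanism: each edge node registers a death certificate with the broker when it connects, so subscribers learn immediately if it drops offline, even while no data is changing. A simplified sketch follows; real Sparkplug NBIRTH/NDEATH payloads are protobuf-encoded and carry a bdSeq number, and this only shows the underlying Last Will wiring with placeholder names:

```js
const mqtt = require("mqtt"); // npm install mqtt

// The edge node registers its "death certificate" at connect time. The broker
// publishes it on the node's behalf if the connection is lost ungracefully.
const client = mqtt.connect("mqtts://your-cluster.hivemq.cloud:8883", {
  will: {
    topic: "spBv1.0/Madrid/NDEATH/Line1", // Sparkplug death topic for this node
    payload: "",                          // simplified; really a protobuf payload
    qos: 1,
    retain: false,
  },
});

// On a clean session start, the node announces itself with a birth certificate.
client.on("connect", () => {
  client.publish("spBv1.0/Madrid/NBIRTH/Line1", ""); // simplified payload
});
```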

Benson Hougland: 01:33:13.614 One other note I want to make: in this demo, because we're spread out geographically, we are using HiveMQ Cloud, but you could also run HiveMQ on-prem. In that particular scenario, you might have a local broker at the plant that's taking care of all of that OT data namespace that David mentioned early on, and then that gets produced up to some other broker where the enterprise namespace can be set up as well. So there are a lot of different configurations here. And in the event I do lose comms to the broker, just as Jose described, I will store and then forward that information when I reconnect. So it's a very resilient system. It's very high performance; all of those changes happen within milliseconds. And finally, it's absolutely scalable and secure. Those are key things in any digital transformation exercise: scalability, cybersecurity, and performance.

Kudzai Manditereza: 01:34:18.118 Thank you. Jose, do you want to give us some closing thoughts?

Jose Granero: 01:34:26.750 Well, in my opinion, the Unified Namespace is the right approach to succeed whenever you are starting a new IoT project. Four elements for me are paramount. It's edge-focused: data must be processed as close to the source as possible to normalize and standardize everything. Using MQTT and Sparkplug allows you to report only relevant changes, so it's very efficient in terms of bandwidth usage. It's lightweight. And it's an open architecture: again, MQTT and Sparkplug are open protocols that are available to everyone. So that's my opinion.

Kudzai Manditereza: 01:35:24.553 Thank you. David?

David Schultz: 01:35:27.740 Yeah. So you were talking earlier about how easy it was to get all of this data out there, and how long it would take with some of the traditional systems we've used, relative to here. I will say getting the data was the easiest part of this overall exercise. The work was in getting the plant model built, building the material classes and definitions associated with it, and then, of course, building the visualization screens. I will say, from a DataOps standpoint, it's very, very, very important (I could use one more "very") to make sure that you get your data models created correctly, because the more you build on these semantic data models, the more difficult it is to change them in the future. So I would highly encourage people to spend a lot of time really thinking about how we want to model the data and how we want to consume all this information. The goal is to connect all of our intelligence through a technology, in this case MQTT with Sparkplug, rather than through applications. So ensure that once you start presenting information to the enterprise, it's something that people can readily digest and use for all types of applications.

Q&A

Kudzai Manditereza: 01:36:43.006 Okay. Thank you. Thank you so much. All right. So I think we can kind of dive into the Q&A here and try to answer some of the questions.

Jayashree Hegde: 01:36:54.607 Kudzai, maybe before we kick off the Q&A, may I launch the second poll? I request all attendees to participate. Thank you. Over to you, Kudzai. You can continue. I'll keep the polls open.

Kudzai Manditereza: 01:37:13.029 Okay, thank you. All right. So I think we've got a question from Ravi S. He's asking, "What's the difference between HiveMQ MQTT Broker and a normal MQTT broker?" Okay, I guess I can answer that one. HiveMQ MQTT Broker is a normal broker, so I'm not sure I understand the question, but HiveMQ follows the MQTT specification, both MQTT 3.1.1 and MQTT 5, so it's compatible with both versions of MQTT. If that's what you're asking: yes, HiveMQ is the same as a normal MQTT broker. Ravi also goes on to ask, "How many nodes are required for the broker? How is it calculated initially?" This all depends on your use case. We've got solution engineers who can sit down with you, analyze your requirements, and see what sort of infrastructure you need. From there, they'll be able to calculate how many nodes you need for your broker, based also on the kind of hardware or software that you need to deploy that broker on. So this is specific to what your needs are and what infrastructure you have available to you, right? And then, okay, let's jump to Sanika. He says, "I'm a student and still new to this IT/OT convergence space. So my question might sound very basic, but I wanted to ask: how is HiveMQ UNS different from traditional cloud services like GCP or Microsoft Azure?"

Kudzai Manditereza: 01:39:08.047 So yeah, I think I can answer this one as well. It goes back to the question that Ravi asked. In this case, with Microsoft Azure, I suppose you're referring to IoT Hub. And this is where there's a difference from a normal MQTT broker: a normal MQTT broker is something that implements the MQTT specification. As you may know, IoT Hub is a special flavor of MQTT; it doesn't follow the standard MQTT specification. The same applies to Google IoT Core, which, by the way, is getting deprecated, I think, in about five months or so from now. So that's the major difference. And then we've got a question from Fred. Maybe Benson, you want to take that? Do we know what percentage of manufacturers are adopting digital transformation and Industry 4.0?

Benson Hougland: 01:40:07.742 Well, I would argue that it's one of the fastest-growing segments in terms of applying these types of technologies to address digital transformation requirements, which of course means transforming your enterprise into a digital enterprise. And in doing so, you have to do it in a way that is scalable. We've all dealt with point-to-point connections our entire careers — I've been doing this for 25 years — and yeah, there are some other technologies that could potentially help. OPC is obviously one of those, but the way that OPC moves data around is very different from the Pub-Sub model of MQTT. And for that reason, it can be difficult to maintain those systems, especially with a poll-response-type method, which is what you commonly see in OPC. And part of what we're trying to do from a product philosophy standpoint is reduce the number of moving parts: get things consolidated down to where I can capture data, I can model it, and I can get it where it needs to go very quickly and easily. And with OT tools, there should be no reason to develop code or get to the shell to start systems up. Try to make it as easy as possible so we can get that data up and start doing something interesting, particularly toward achieving those digital transformation goals and becoming a digital enterprise. And if I may, there's another question in there which, Kudzai, you're absolutely qualified to answer, about HiveMQ Cloud and the offer that you have presented to our customers, and to all customers, to be honest. And that is this: this is a very different way of moving data around. It's a lot more efficient, for all the reasons that we described, but it is new.

Benson Hougland: 01:41:53.570 So a lot of you may be very familiar with the notion of some software scanning a PLC or something like that. This kind of turns that on its head, for a lot of the reasons that we've described: security, performance, and so on. And that can be somewhat difficult to get your head around. And one of the key pieces here is the broker, right? So what HiveMQ has done is created the HiveMQ Cloud free edition. And I love this because, as our customers are embracing these new ideas and want to get a PoC up and running quickly, they can sign up for that free cluster and get 100 devices at no charge. And that's amazing because it allows you to actually test all these systems out without any out-of-pocket expense. And it's the same with Ignition, N3uron, and others: you've got two-hour free trials, so you can actually put these PoCs together and show everything working without a lot of dollar investment and very little of your time.

Kudzai Manditereza: 01:42:50.481 Thank you. Thank you, Benson. Okay, so the next question, I think maybe Jose, you can answer that one. So Newton Fernandez says, "It was said that the data was modeled. What tool was used for modeling? Is it Ignition?"

Jose Granero: 01:43:12.790 No — I don't know what he's referring to, but the model is created at the edge, as I said before. So from my side, I created the model in N3uron, and Benson did the same in the EPIC, using Ignition in this case. But as we said before, it's not necessary to use Ignition; there are other mechanisms available in the EPIC to do that.

Kudzai Manditereza: 01:43:38.326 Okay. So which means data was modeled using both N3uron and Ignition, right?

Jose Granero: 01:43:43.546 Mm-hmm.

Kudzai Manditereza: 01:43:44.574 Awesome. All right. And then the next one is for you, David. I think you've demonstrated that, but maybe this is an opportunity to kind of elaborate on that. So the question is: how do you organize the topic namespace using Sparkplug B to accommodate the whole ISA-95 hierarchy so that you can subscribe to any layer from the enterprise level, given that there are only group ID, edge node ID, and device ID available?

David Schultz: 01:44:13.753 This is a very common question that people ask. The powerful thing about plain, flat MQTT is that you can publish to any topic. The worst thing about flat MQTT is that you can publish to any topic. So Sparkplug tries to control that through this group/node ID that must be unique throughout the entire broker. I mentioned earlier that I recommend people have either a plant-level or area-level broker sitting at a location, because the idea of this group and node ID is that it's a logical grouping of the devices that are going to be publishing to that particular broker. So I recommend that people use an area broker and then map area, line, and cell onto the group, node, and device IDs, or just line and cell, depending on where that broker is being consumed within the enterprise. And that allows you to keep those systems separate. So you noticed in this particular demo, we had N3uron publishing to a site/line namespace, and then I was publishing to an enterprise/site namespace. At a higher level, at the enterprise broker, that's where you then prepend all of that information with the enterprise and site. So I consume the lower-level information, I bring it [inaudible] and organize it, and then that information is published and presented to the rest of the organization.

David Schultz: 01:45:50.620 So it's not the case that every one of your group and node IDs can follow the full-blown structure. And I don't think I demonstrated this, but you don't have to start at the very top level as you're building out your local tags. You can actually configure your transmitter to publish from a lower-level folder, so that its group and node ID is going to be unique for every device publishing to that broker. So that's the method that I recommend.
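To make that concrete, here is one way the three Sparkplug topic levels can carry an ISA-95 path at each tier. This is a sketch with illustrative names, following the spBv1.0/<group_id>/<message_type>/<edge_node_id>/<device_id> layout defined by the Sparkplug specification:

```js
// At an area-level broker: group = area, edge node = line, device = cell.
const areaTopic = "spBv1.0/Bottling/DDATA/Line1/Filler";

// At the enterprise broker, the site republishes with enterprise and site
// context folded into the group ID, so the same asset stays unique:
const enterpriseTopic = "spBv1.0/ACME-California-Bottling/DDATA/Line1/Filler";
```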

Kudzai Manditereza: 01:46:21.113 Awesome. Okay. So we've got the next one from Doug Hoffer. So that's in relation to HiveMQ, but if any of the panelists here are able to assist, that will be appreciated. So Doug is asking, can you discuss the use of DCS versus PLC relative to HiveMQ/cloud integrations? Anyone want to?

David Schultz: 01:46:48.498 Yeah. So a DCS is going to follow the same, or a similar, type of structure. Very common DCSs out there: Emerson DeltaV, Honeywell Experion, Siemens PCS 7. Typically they have an endpoint that is available to either Ignition or N3uron, and there are a number of great IIoT platforms that'll connect into that particular data. So similar to a PLC, where you have a driver to it, there's just another endpoint that you can get out of a DCS that you can subscribe to, build a semantic data model on, add context to from the various topics that we've discussed, and then publish to a broker by the same mechanism. So whatever the underlying technology is that's giving us the actual Level 1 tag values, you can very readily, through an IoT gateway device, publish that information into a UNS.

Kudzai Manditereza: 01:47:51.809 Awesome. And then the next one is for you, Benson, from Anonymous: what is the difference between Opto 22 and Raspberry Pi, and can Node-RED be used?

Benson Hougland: 01:48:10.472 So I'll take the first one. I'm a huge Raspberry Pi fan. And indeed, working with the Raspberry Pi for many, many years is one of the reasons why we decided to build the groov EPIC platform and our groov RIO on the same notion of a Debian-based Linux operating system. The primary difference between the two, aside from cost obviously, is that Raspberry Pis are just not industrial devices. If you ever take a look at one, load some software on it, and watch its performance as the load starts to go up, the Raspberry Pi heats up and starts to declock. And this is just by design: the purpose is that you don't want to cook your Raspberry Pi. We've done a lot of these tests, running a lot of the same software I run on the EPIC on a Pi, and that's what we notice. They're really not designed for industrial applications. The second thing is the storage system. We all know we take an SD card, shove it into the Raspberry Pi, load on the OS, whichever one you're choosing to use, and you're off to the races, or so it seems. But with abrupt power loss to a Raspberry Pi, you're taking a pretty big risk that that Pi won't boot back up, because of file system error problems.

Benson Hougland: 01:49:22.805 One of the things we've done with the EPIC is give it a power-fail-safe file system. That simply means if I do yank power to my EPIC, I'm assured that the system will boot back up. So there's a lot of underlying industrial-grade technology: minus 20 to 70 degrees C here, where you're lucky to get zero to 50 on a Pi. That all said, I love Raspberry Pis. They're great tools. But what we've got is 50 years of experience building industrial devices that are meant to be put out in the field. And if you have to make a site visit to go fix that thing, you've kind of lost the ROI. So our stuff is designed to be very, very bulletproof. And quickly, on the second question about Node-RED: indeed, Node-RED is pre-installed. All the security, all the accounts, everything is set up on the groov EPIC and groov RIO to run the Node-RED environment right on the device. We've been supporting Node-RED since the very early days; our first groov device in 2013 came with Node-RED.

Benson Hougland: 01:50:24.163 What would you use Node-RED for? Well, it's kind of the catch-all. There are a lot of tools and capabilities within Node-RED around the idea of moving messages around. Could I have done this entire demo with Node-RED? I probably could have. However, it would have been a little bit more effort: I'd be doing a lot of development within the Node-RED environment. What I tried to show you with the Ignition platform is more fill-in-the-blanks. So rather than writing any JavaScript code — and JavaScript's great, but if you don't know JavaScript, you don't want to have to rely on it — filling in the blanks allows the solution to be maintained. You know you have the support of Ignition in this case, or N3uron if you're running that on our platform, or your choice of edge platforms. So again, a lot of different tools. Think of it like your smartphone: one of you may be using a particular email client, another may be using a different one or different software altogether. It all comes down to: what is the task, what is the problem you're trying to solve? Choose the right tools from a broad toolbox to get that job done in a way that makes sense for your organization.

Kudzai Manditereza: 01:51:33.400 Awesome. Thank you. So we've got another question from Anonymous, asking if this can be downloaded and watched again. Yes, I think we'll be publishing this on our YouTube channel. I'm not sure if we're going to be sending a download link. Jayashree, if you've got anything to say there? Okay, so I'll move on to the next one. Ravi is asking: "Can you present a node and network architecture of HiveMQ Broker for on-premise installation?" Yes, I think we can follow up with an email to show you this node and network architecture. And then the next question is from Mark, for you, Benson. Mark Partout is saying, "What steps have Opto 22 taken to harden the groov EPIC against DDoS attacks?"

Benson Hougland: 01:52:32.369 Yeah, that's a great question as well. DDoS attacks are distributed denial-of-service attacks: that's where somebody absolutely bangs on the thing until it craters. Again, that won't happen if you don't have an open firewall port on the outward-facing network. Even in this particular case, I do have HTTPS open, so I'm creating an encrypted, authenticated connection. But once my system is up and running, I shut down those ports too. So now there's no chance I can get hit by a DDoS attack, because I'm not listening on anything; I'm literally blocking everything out. But cybersecurity is constantly changing. There isn't a cybersecurity product that covers everything, and indeed, as new threats emerge, we're always looking to mitigate them. But clearly, authentication, encryption, firewalls: those are some of the very first steps that you can take to clamp these systems down and prevent outside attacks. Hopefully, that answers the question for Mark there.

Kudzai Manditereza: 01:53:36.938 Okay, thank you. And the next one is, is Node-RED production ready/safe to use in production? Benson, you want to take that?

Benson Hougland: 01:53:48.985 I can only speak from experience. We have hundreds of customers out there using Node-RED in a production environment. No question about it.

Kudzai Manditereza: 01:53:59.688 Awesome. Okay, so Renz is asking, "From a data regulation point of view, if you go for HiveMQ Cloud on Microsoft Azure, in what region is my data stored? Is there a way to change regions?" Yeah, so if you use the templates that we provide for deploying a HiveMQ cluster on Azure, it takes you to your account, where you are then able to configure all the different regions and accounts that you want to run this on. So yes, to answer your question, you're able to change the regions based on where you want your data stored. Okay, so I think that's all the questions we have in the Q&A. I don't know if there are any in the chat; I'm going to have to dig through the chat to see if we've got any questions. Otherwise, I think we have addressed all the questions that we had.

David Schultz: 01:54:59.947 Yeah, I did notice there was one question about how long the HiveMQ Cloud, the free version, how long that persists. And mine's still running. So I haven't [crosstalk].

Benson Hougland: 01:55:09.747 Mine is still running.

David Schultz: 01:55:11.043 Yeah. [laughter]

Kudzai Manditereza: 01:55:12.366 Oh, yeah.

Benson Hougland: 01:55:12.706 It's been running for years.

David Schultz: 01:55:15.266 Same.

Kudzai Manditereza: 01:55:16.175 Yeah. So as long as you haven't reached the limit of 100 devices, it runs forever. And then once you reach the limit, you need to move to a different account. All right, I think we've got a lot of questions here regarding Node-RED. "Is Node-RED high availability?"

David Schultz: 01:55:39.582 Node-RED is like every other piece of technology out there: it fits a specific application. So to ask, "Is it hardened? Is it high availability?" Well, it depends on what you're trying to do. And if [inaudible], I would not use Node-RED for a critical compressor shutdown. But if I just want to trigger a work order — one of the early demos that I did was triggering a work order from a remote piece of equipment: "Hey, you need to come change the filter" — that's a great use case, because even if something goes south, you're not putting anybody at risk.

Benson Hougland: 01:56:13.259 Yeah, and I'll add to that. David, you're spot on. Node-RED is a tool, and you want to use it in the right type of application. Generally, where Node-RED really excels is the notion of messaging. All Node-RED is, is passing messages from one node to another and doing something interesting with them. Whether it's sending data out over a Twilio node to send a text message, parsing data from a SQL database, or talking to a web service, it's very good at that. My applications here use web services to find out, for example, what the spot price of electricity is on Cal ISO, pull that data in, and then decide whether I want to run the turbine or not. So there are a lot of different ways it can be used, but we always tell our customers, "You probably should not use this for control," and we don't have customers that generally do that, particularly real-time critical control. Use the right tool for that, and that's going to be some control engine: CODESYS, PAC Control, whatever you want to use there. If you're doing high-speed PID loops or closed-loop control, use the right tool, not Node-RED.

Kudzai Manditereza: 01:57:22.849 Cool. And there's another one here for you, Jose. Anonymous is asking: as a system integrator, do we have to purchase N3uron every time for a new client?

Jose Granero: 01:57:36.216 We have a system integrator program available, and there are many, many integrators in Europe and across the world. So please feel free to reach out to us, and we'll put you in front of one of those integrators.

Kudzai Manditereza: 01:57:54.193 Okay, thank you so much. So it seems like we've gone through all the questions. I mean, if there's some questions that are left here, we'll be more than happy to kind of get back to you on that.

David Schultz: 01:58:05.540 There's one more that came in, about whether you can use different versions of Sparkplug. It matters not; that's the beauty of the specification. And I don't think we covered it: even though we use Sparkplug in this demo, HiveMQ supports MQTT 3.1.1 and 5.0, so you can use flat MQTT as well as Sparkplug data. As an ancillary answer to that: with Sparkplug and modeling, there are certain things you use Sparkplug for, and other applications where you use flat MQTT. And that gives you the best of both worlds: the compliance of using Sparkplug versus the flexibility and innovation of using the plain-string version. So it's Coke, Pepsi. What do you want? We got it.

Benson Hougland: 01:58:50.221 Right.

Conclusion

Kudzai Manditereza: 01:58:52.119 Awesome. Okay, so with that, I would like to say a special thank you to our panelists, and thank you so much to the attendees for taking time out to join us for this session. I'll hand it over to you, Jayashree, for any closing remarks.

Jayashree Hegde: 01:59:08.907 Yeah, sure. What a wonderful demo you guys have put together to show us how to implement UNS. Amazing work there. David, Kudzai, Benson, and Jose, thank you so much. And to all our attendees, thank you for tuning in. We hope you all enjoyed this workshop. Thanks for all those reactions you are sending in. If you are interested in learning more about UNS, we have released a UNS e-book; I've shared the link in the chat, and you can download it. The slide presentation has links to Opto 22, Spruik, and N3uron, and the contact details of Jose, David, and Benson. Feel free to reach out to us. And as I said at the beginning of the session, we will share the recording as well as the presentation in the follow-up email, so you can have a look at it later and forward it to your colleagues; I know many of you have been asking. So do check it out. Thanks again for tuning in, and thanks to all the panelists here.

Kudzai Manditereza: 02:00:23.335 Thank you.

Jayashree Hegde: 02:00:23.349 It was a really great session. I learned a lot.

Jose Granero: 02:00:25.306 Thanks, everyone.

Benson Hougland: 02:00:26.594 Thank you so much.

Kudzai Manditereza: 02:00:27.189 Thank you.

Jayashree Hegde: 02:00:27.394 Have a great day. Bye-bye.

David Schultz: 02:00:29.519 Bye.

Jose Granero: 02:00:30.641 Bye.

Kudzai Manditereza

Kudzai is a tech influencer and electronic engineer based in Germany. As a Developer Advocate at HiveMQ, he helps developers and architects adopt MQTT and HiveMQ for their IIoT projects. Kudzai runs a popular YouTube channel focused on IIoT and Smart Manufacturing technologies and he has been recognized as one of the Top 100 global influencers talking about Industry 4.0 online.

  • Kudzai Manditereza on LinkedIn
  • Contact Kudzai Manditereza via e-mail

Benson Hougland

Benson Hougland is VP of Marketing & Product Strategy at Opto 22 and has three decades of experience in manufacturing automation. In his role, he brings awareness to the company's new product lines and still finds time to be hands-on with the hardware and software.

  • Benson Hougland on LinkedIn
  • Contact Benson Hougland via e-mail

David Schultz

David Schultz is the Principal Consultant for Spruik. He works with manufacturers to help them develop and execute strategies for their digital transformation and asset management initiatives. He has 25 years of automation and process control experience across many market verticals, focusing on continuous and batch processing.

  • David Schultz on LinkedIn
  • Contact David Schultz via e-mail

Jose Granero

Jose Granero is Head of Customer Success and Sales Engineering at N3uron Connectivity Systems and has a strong background in industrial automation and telecommunications. In his current role, he works with companies to help build robust and scalable architectures, apply best practices, integrate solutions, and leverage edge and cloud computing.

  • Jose Granero on LinkedIn
  • Contact Jose Granero via e-mail
