
Revolutionizing Logistics Operations With LoRaWAN, HiveMQ, and SenseCAP Tracker

by Anthony Olazabal
17 min read

In the always-evolving logistics industry, efficiency reigns supreme. From optimizing supply chains to enhancing delivery processes, every aspect of the industry seeks innovation to meet the growing demands of today's global market. In this quest for advancement, a technology has emerged as a game-changer, promising to revolutionize the way logistics companies operate – LoRaWAN.

LoRaWAN, short for Long Range Wide Area Network, is a wireless communication protocol specifically designed to enable long-range communication with minimal power consumption. Its unique capabilities have sparked interest across various industries, but perhaps nowhere is its potential more evident than in the logistics sector.

In this article, we delve into the world of LoRaWAN using a private network built on the HiveMQ MQTT Platform, and explore how it is reshaping logistics operations. We will use a unique and compact device, the SenseCAP T1000 card tracker, to track assets in real time. So, buckle up as we embark on a journey through the transformative power of LoRaWAN in logistics.

Global Warehouse Architecture Using MQTT

To keep things simple, we will consider the following architecture with:

  • A private LoRaWAN network relying on ChirpStack and HiveMQ Platform

  • Different types of assets tracked with SenseCAP T1000 tracker using LoRaWAN

  • Cleaned data saved into MongoDB Documents


SenseCAP T1000 Card Tracker in a Nutshell

SenseCAP T1000 is a compact GPS tracker that utilizes GNSS/Wi-Fi/Bluetooth for precise indoor & outdoor location tracking, compatible with LoRaWAN®, Amazon Sidewalk, and Helium Networks. It boasts self-geo-adaptive capabilities, local data storage, and several months of battery life. Additionally, it is equipped with temperature, light, and motion sensors, making it ideal for a variety of location-based applications.


Get SenseCAP Device Information

To add our device to our private network server, we need the device's Application Key. To obtain it, we use the SenseCAP Mate mobile application to connect to the tracker via Bluetooth and configure it for use on a private LoRaWAN network.

The following steps assume that you have already downloaded the application and created an account.

Add the Device on SenseCAP Mate

On the home screen of the application, click on the “+” or “Click Add Device”.

You will be asked to allow access to the camera and scan the QR code on the back of the tracker.

Once the device is detected, you will be asked to define a name and assign a group and a location to the tracker.

Click “Add to account”.

Once added, you will be asked whether you want to configure the device now. We will click “Configuration Now”.

You then need to put it in Bluetooth association mode by pressing the button for 3 seconds.

When the application detects the tracker, it appears on the screen with its serial number.

Select it, then choose “Advanced configuration” to enter the configuration.

Go to the Settings tab > LoRa to change the mode, and pick “Other Platform” from the dropdown list.

From here, note down the Device EUI and the Application Key, since we will use them later when creating the device in ChirpStack.
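A Device EUI is an 8-byte identifier (16 hexadecimal characters) and a LoRaWAN Application Key is 16 bytes (32 hexadecimal characters). As a quick sanity check for the values you copied, a small illustrative helper (not part of any SenseCAP or ChirpStack API) could look like this:

```javascript
// Sanity-check LoRaWAN identifiers copied from SenseCAP Mate.
// Device EUI: 8 bytes = 16 hex characters; AppKey: 16 bytes = 32 hex characters.
function isHexOfLength(value, length) {
  return new RegExp(`^[0-9A-Fa-f]{${length}}$`).test(value);
}

function validateLoRaWanIds(devEui, appKey) {
  return {
    devEuiValid: isHexOfLength(devEui, 16),
    appKeyValid: isHexOfLength(appKey, 32)
  };
}
```

Catching a truncated or mistyped key here saves a confusing debugging session later, when the tracker's JoinRequest would silently fail on the network server.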

Click “Send” to push the new configuration to the tracker and reboot it.

Set Up Your Private LoRaWAN Network

Since we previously covered how to set up your private network using a SenseCAP LoRaWAN gateway, ChirpStack Network Server, and the HiveMQ Platform, we will not repeat those steps here.

If you need to go through the installation process, you can refer to the previous blog, Hands-on Guide to LoRaWAN and HiveMQ MQTT Broker Integration for IoT.

ChirpStack Configuration

Before diving into the usage of our tracker, we will configure our ChirpStack Network Server to allow our tracker to send data by:

  • Creating a new device profile

  • Configuring a new application for our use case

  • Adding a device to the application

Create Your Device Profile

To create a device profile, go to Tenant > Device profiles.

Create a new profile by entering the name and the region configuration.

On the Join tab, check that “Device supports OTAA” is activated.

On the Codec tab, we add some JavaScript functions to decode the data coming from the sensor. You can find the full script in this GitHub repository.

Copy and paste the full script and click “Submit” to create the device profile.
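The full T1000 decoder lives in the linked repository; as a rough illustration of the contract ChirpStack v4 expects, a codec exposes a decodeUplink function that receives the raw frame bytes and returns a data object. The byte layout below is invented for the example and is not the real T1000 frame format:

```javascript
// Minimal ChirpStack v4 codec sketch. The real T1000 decoder in the
// GitHub repository is far more complete; this only shows the contract:
// decodeUplink receives { bytes, fPort } and must return { data: ... }.
function decodeUplink(input) {
  // Hypothetical layout: byte 0 = battery %, bytes 1-2 = temperature x 10.
  var battery = input.bytes[0];
  var temperature = ((input.bytes[1] << 8) | input.bytes[2]) / 10;
  return {
    data: {
      battery: battery,
      temperature: temperature
    }
  };
}
```

Whatever the codec returns under `data` is what ChirpStack publishes as the decoded `object` on the MQTT uplink, which is what the rest of this pipeline consumes.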

Create the Application

To create a new application, go to Tenant > Applications.

Create a new application by entering a name.

Then click “Submit” to create the application.

Add the Device to the ChirpStack Application

In the ChirpStack Network Server Web UI, go to the Application previously created and add a new Device based on the device profile we’ve just created.

Configure the device by entering a name and the EUI of the tracker. Select the previously created device profile.

Then click “Submit” to create the device.

On the OTAA keys tab, generate a new key and click “Submit” to save it.

Once you save it, wait a bit until the tracker sends its JoinRequest frame. You will then see all the frames flowing:

You should now be able to see the application traffic directly on the HiveMQ MQTT Broker.

MongoDB Extension Configuration

In order to get the LoRaWAN traffic into MongoDB, we need to enable the HiveMQ Enterprise Extension for MongoDB.

Configure the Extension

Start by creating a config.xml file in the conf folder of the extension and paste the following basic configuration:

<hivemq-mongodb-extension xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                          xsi:noNamespaceSchemaLocation="config.xsd">
    <mongodbs>
        <mongodb>
            <id>my-mongodb-id</id>
            <connection>
                <host>##MONGODB-HOST##</host>
                <port>27017</port>
            </connection>
        </mongodb>
    </mongodbs>

    <mqtt-to-mongodb-routes>
        <mqtt-to-mongodb-route>
            <id>lorawan-application-to-mongodb-route</id>
            <mongodb-id>my-mongodb-id</mongodb-id>
            <mqtt-topic-filters>
                <mqtt-topic-filter>application/c7681ed7-f0fe-4238-bf1c-677edbef05f0/#</mqtt-topic-filter>
            </mqtt-topic-filters>
            <collection>tracking</collection>
            <database>logistic</database>
            <processor>
                <document-template>document-template.json</document-template>
            </processor>
        </mqtt-to-mongodb-route>
    </mqtt-to-mongodb-routes>
</hivemq-mongodb-extension>

Update the information in the connection section to reflect your context. You can find additional information on security on this documentation page.

Then you need to specify the template you want to use to save the document in MongoDB. You can create a simple template document in the root folder of the extension named document-template.json with the following content:

{
  "topic": "${mqtt-topic}",
  "payload_utf8": ${mqtt-payload-utf8},
  "qos": "${mqtt-qos}",
  "retain": ${mqtt-retain},
  "packet_id": ${mqtt-packet-id},
  "payload_format_indicator": "${mqtt-payload-format-indicator}",
  "response_topic": "${mqtt-response-topic}",
  "correlation_data_utf8": "${mqtt-correlation-data-utf8}",
  "arrival_timestamp": ${timestamp-ms}
}
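At write time, the extension replaces each ${...} placeholder with the corresponding MQTT packet field. Purely as an illustration of that substitution (this is not the extension's actual code), something along these lines happens for every routed message:

```javascript
// Illustrative placeholder substitution, mimicking what the MongoDB
// extension does with the document template for each incoming message.
function renderTemplate(template, fields) {
  return template.replace(/\$\{([a-z0-9-]+)\}/g, function (match, name) {
    // Unknown placeholders are left untouched.
    return name in fields ? String(fields[name]) : match;
  });
}
```

For example, ${mqtt-topic} becomes the publish topic string and ${timestamp-ms} the arrival time in milliseconds, producing one MongoDB document per message.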

Tip: Before starting the broker, don’t forget to remove the DISABLED file in the directory of the extension (extensions/hivemq-mongodb-extension/DISABLED).

Start the broker (or restart it) to take into account the configuration of the extension. You should see the following block in the logs:

2024-02-28 14:45:26,769 INFO  - Starting extension with id "hivemq-mongodb-extension" at /opt/hivemq/extensions/hivemq-mongodb-extension
2024-02-28 14:45:26,840 INFO  - HiveMQ Enterprise Extension for MongoDB: Successfully loaded configuration from '/opt/hivemq/extensions/hivemq-mongodb-extension/conf/config.xml'.
2024-02-28 14:45:26,882 INFO  - MongoClient with metadata {"driver": {"name": "mongo-java-driver|reactive-streams", "version": "4.9.1"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "6.5.11-7-pve"}, "platform": "Java/Private Build/19.0.2+7-Ubuntu-0ubuntu322.04"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, streamFactoryFactory=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@2c08e7c3, com.mongodb.Jep395RecordCodecProvider@537a3253]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[192.168.69.180:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='30000 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, 
sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, contextProvider=null}
2024-02-28 14:45:26,887 INFO  - Monitor thread successfully connected to server with description ServerDescription{address=192.168.69.180:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=21, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=10773763}

Once traffic is flowing on the LoRaWAN application, you will see the documents created in the MongoDB database.

As you can see, the content of the payload is a huge JSON object containing all the information.

This is where HiveMQ Data Hub enters the process to clean the data and save a bit of carbon in the end-to-end data chain.

Data Cleaning (Data Transformation)

HiveMQ Data Hub Prerequisites

To configure the HiveMQ Data Hub policy, you need REST API access to the broker and the HiveMQ MQTT CLI, which we use below to upload the schema, the script, and the policy.

If you followed the first article to set up your broker, you will need to update the broker configuration file to reflect the following sample:

<?xml version="1.0" encoding="UTF-8" ?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="config.xsd">
    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
    </listeners>
    <control-center>
        <enabled>true</enabled>
        <listeners>
            <http>
                <port>8080</port>
                <bind-address>0.0.0.0</bind-address>
            </http>
        </listeners>
    </control-center>
    <!-- Enables or disables the HiveMQ REST API -->
    <rest-api>
        <enabled>true</enabled>
        <!-- Enables or disables authentication and authorization for the HiveMQ REST API -->
        <auth>
            <enabled>false</enabled>
        </auth>
        <listeners>
            <http>
                <port>8888</port>
                <bind-address>0.0.0.0</bind-address>
                <!-- Defines a listener name to help distinguish between multiple listeners -->
                <name>api-http-listener</name>
            </http>
        </listeners>
    </rest-api>
    <anonymous-usage-statistics>
        <enabled>false</enabled>
    </anonymous-usage-statistics>
</hivemq>

This will allow you to interact with the API (http://broker-ip:8888) without authentication and see the Policies in the Control Center (http://broker-ip:8080) with the default credentials (username: admin, password: hivemq).

Remember that this setup is fine for a lab but not for production, so we encourage customers to use our Enterprise Security Extension to protect all HiveMQ Broker services, including the HiveMQ Control Center and the API.

After updating the configuration, restart the broker to apply it.

Validation Schema

To make sure that we are receiving a JSON object, we create a very simple policy that just checks that the payload is actually a JSON object. Create a file called “json-schema.json” and add the following content:

{
    "description": "Generic JSON schema, it requires just a JSON, nothing further specified",
    "type": "object"
}
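This generic schema only asserts that the payload parses as JSON and that its root is an object (arrays and scalars are rejected). Conceptually, the check amounts to no more than the following sketch (illustrative, not Data Hub's internal validation code):

```javascript
// What the generic "type": "object" schema effectively validates:
// the payload must be parseable JSON whose root is a plain object.
function isJsonObject(payloadUtf8) {
  try {
    var parsed = JSON.parse(payloadUtf8);
    return typeof parsed === "object" && parsed !== null && !Array.isArray(parsed);
  } catch (e) {
    // Not valid JSON at all.
    return false;
  }
}
```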

Then using HiveMQ MQTT CLI, upload the schema to the broker:

mqtt hivemq schema create --id json-schema --type json --file json-schema.json

Note: If you are executing the command remotely, don’t forget to specify --url https://your-broker-address:8888

Transformation Script

The next step, after ensuring that the payload is in JSON format, is to proceed with the transformation, which consists of extracting the decoded sensor messages from the large uplink object and publishing them as a simple JSON payload. Create a file called “script.js” and add the following content:

function transform(publish, context) {
    const newPayload = {
        topic: publish.topic,
        payload: publish.payload.data.object.messages,
        userProperties: [{ name: "transformed", value: "true" }]
    }
    return newPayload;
}

In a nutshell, Data Hub calls the transform function for each message. It keeps the original topic, replaces the payload with the decoded messages already produced by the ChirpStack codec, and adds a user property to mark the message as transformed. The returned object respects the Publish object format.
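To see the transform function at work, here it is exercised against a trimmed-down, ChirpStack-style uplink (the sample topic and values below are invented for illustration):

```javascript
// The Data Hub transformation function from script.js.
function transform(publish, context) {
    const newPayload = {
        topic: publish.topic,
        payload: publish.payload.data.object.messages,
        userProperties: [{ name: "transformed", value: "true" }]
    };
    return newPayload;
}

// A simplified ChirpStack-style uplink; real uplinks carry many more fields.
const samplePublish = {
    topic: "application/abc/device/123/event/up",
    payload: {
        data: {
            object: {
                messages: [[{ type: "Battery", measurementValue: "100" }]]
            }
        }
    }
};

const result = transform(samplePublish, {});
// result.payload now contains only the decoded measurements,
// stripped of the surrounding network metadata.
```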

Upload the script to the broker using HiveMQ MQTT CLI:

mqtt hivemq script create --id lorawan-cleaning --type transformation --file script.js

Note: If you are executing the command remotely, don’t forget to specify --url https://your-broker-address:8888

Policy

Now that we have both the validation schema and the transformation script, we can create the policy that will handle the messages. Create a file called “policy.json” and add the following content:

{
    "id": "lorawan-application-cleaner",
    "matching": {
        "topicFilter": "application/c7681ed7-f0fe-4238-bf1c-677edbef05f0/device/+/event/up"
    },
    "validation": {
        "validators": [
            {
                "type": "schema",
                "arguments": {
                    "strategy": "ALL_OF",
                    "schemas": [
                        {
                            "schemaId": "json-schema",
                            "version": "latest"
                        }
                    ]
                }
            }
        ]
    },
    "onSuccess": {
        "pipeline": [
            {
                "id": "deserialize",
                "functionId": "Serdes.deserialize",
                "arguments": {
                    "schemaVersion": "latest",
                    "schemaId": "json-schema"
                }
            },
            {
                "id": "lorawan-cleaning",
                "functionId": "fn:lorawan-cleaning:latest",
                "arguments": {}
            },
            {
                "id": "serialize",
                "functionId": "Serdes.serialize",
                "arguments": {
                    "schemaVersion": "latest",
                    "schemaId": "json-schema"
                }
            }
        ]
    },
    "onFailure": {
        "pipeline": [
            {
                "id": "drop-invalid-message",
                "functionId": "Mqtt.drop",
                "arguments": {
                    "reasonString": "Your client ${clientId} sent invalid data according to the schema: ${validationResult}."
                }
            }
        ]
    }
}

Upload the policy to the broker using HiveMQ MQTT CLI:

mqtt hivemq data-policy create --file policy.json

Note: If you are executing the command remotely, don’t forget to specify --url="https://your-broker-address:8888"
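End to end, the onSuccess pipeline deserializes the validated payload, runs the transformation script, and serializes the result back to bytes. A schematic JavaScript equivalent of that flow (not Data Hub's internal implementation) looks like this:

```javascript
// Schematic equivalent of the policy's onSuccess pipeline:
// Serdes.deserialize -> fn:lorawan-cleaning -> Serdes.serialize.
function runPipeline(topic, payloadUtf8, transformFn) {
  // Serdes.deserialize: turn the validated UTF-8 payload into an object.
  const publish = { topic: topic, payload: JSON.parse(payloadUtf8) };
  // fn:lorawan-cleaning: run the uploaded transformation script.
  const transformed = transformFn(publish, {});
  // Serdes.serialize: write the new payload back out as UTF-8 JSON.
  return JSON.stringify(transformed.payload);
}
```

Messages that fail schema validation never reach this pipeline; the onFailure branch drops them with a reason string instead.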

If your device is already connected and sending data to the broker, you should instantly see the new payloads as shown below with the new structure:

{
	"valid":true,
	"err":0,
	"payload":"110100008065e0a6b700d8001464",
	"messages":[
		[
			{
				"measurementValue":
					{
						"statusName":"The GNSS scan timed out and failed to obtain the location.",
						"id":1
					},
				"type":"Positioning Status",
				"measurementId":"3576",
				"timestamp":1.709221559E12
			},
			{
				"measurementValue":
					[
						{
							"eventName":"Press once event.",
							"id":8
						}
					],
				"type":"Event Status",
				"measurementId":"4200",
				"timestamp":1.709221559E12
			},
			{
				"measurementValue":"21.6",
				"type":"Air Temperature",
				"measurementId":"4097",
				"timestamp":1.709221559E12
			},
			{
				"measurementValue":"20",
				"type":"Light",
				"measurementId":"4199",
				"timestamp":1.709221559E12
			},
			{
				"measurementValue":"100",
				"type":"Battery",
				"measurementId":"3000",
				"timestamp":1.709221559E12
			}
		]
	]
}
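With this cleaned structure, downstream consumers can read individual measurements directly. For example, a small helper to pull a measurement by its type (hypothetical consumer code, not part of the stack described above):

```javascript
// Extract the first measurement of a given type from the cleaned
// tracker payload, e.g. "Battery" or "Air Temperature".
function getMeasurement(payload, type) {
  for (const group of payload.messages) {
    for (const measurement of group) {
      if (measurement.type === type) {
        return measurement.measurementValue;
      }
    }
  }
  return null; // Measurement type not present in this uplink.
}
```

Called with the payload above, getMeasurement(payload, "Air Temperature") returns "21.6" and getMeasurement(payload, "Battery") returns "100".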

Wrap Up

As we conclude our exploration of the symbiotic relationship between LoRaWAN technology, HiveMQ, and the SenseCAP T1000 card tracker, it's clear that the fusion of these cutting-edge solutions heralds a new era in logistics management. By harnessing the long-range communication capabilities of LoRaWAN, combined with the seamless integration provided by HiveMQ, and the precision tracking offered by the SenseCAP T1000, logistics companies now possess a powerful toolkit to optimize operations like never before.

From real-time asset tracking and monitoring to enhanced route planning and predictive maintenance, the possibilities afforded by this triad of technologies are limitless. By leveraging data-driven insights and fostering greater transparency throughout the supply chain, businesses can drive efficiencies, reduce costs, and ultimately deliver superior service to their customers.

As the logistics landscape continues to evolve, it's imperative for companies to stay ahead of the curve by embracing innovative solutions such as LoRaWAN, HiveMQ, and the SenseCAP T1000. By doing so, they not only position themselves for success in the present but also lay the foundation for a more agile, responsive, and competitive future. So, as we conclude this exploration, let us embrace the promise of technology to propel the logistics industry into new realms of efficiency, connectivity, and opportunity.

Note: You can find the different schema and scripts in the following GitHub repository: https://github.com/anthonyolazabal/Logistic-LoRaWAN-DataHub

Anthony Olazabal

Anthony is part of the Solutions Engineering team at HiveMQ. He is a technology enthusiast with many years of experience working in infrastructures and development around Azure cloud architectures. His expertise extends to development, cloud technologies, and IaaS, PaaS, and SaaS services, with a keen interest in writing about MQTT and IoT.

  • Contact Anthony Olazabal via e-mail
