Adding machine learning to your LoRaWAN device

During The Things Conference this January I talked about the combination of LoRaWAN and on-device machine learning (TinyML). This is an incredibly powerful combination: to preserve power and bandwidth, LoRaWAN devices often resort to sending very little data. A device for monitoring machine health by observing the machine's vibration pattern might just send the peak motion every hour, discarding potentially interesting fault states. With machine learning you can analyze the full signal on the device itself, and just send the conclusion ('abnormal vibration pattern seen') to the network.


This is a companion discussion topic for the original entry at https://www.edgeimpulse.com/blog/adding-machine-learning-to-your-lorawan-device/

I watched that talk. I've only really just discovered TinyML and Edge Impulse. I'm totally blown away by it. I've been developing remote, ultra low power sensors for a number of years and I'm very excited by the possibilities this new technology will bring. I almost don't know where to start! I'm also into LoRaWAN and The Things Network. This is an awesome combination in my opinion. I have some experience with image sensors so thought I might start with something along those lines. I have believed for years that what is needed is low power intelligence at the edge. I developed a vibration sensor back in 2013 that listened for events, which it then analysed, transmitting only the key parameters of each event. It kind of worked but was pretty limited. I truly believe a sensor like that (and many others) could be made to work now.

A question… Can anyone recommend a low power, low resolution image sensor?


Hi @BenH, awesome! I'd suggest taking a look at OpenMV for now; they have some really nice modules coming out which are great for images. It's not currently possible to train these with Edge Impulse, but we'll be adding support soon!

Hi Jan,

support for the OpenMV Cam has since been added, according to this blog post: https://www.edgeimpulse.com/blog/adding-machine-learning-to-your-lorawan-device/

In that demo you used the ST kit together with a LoRa radio shield.
What do you recommend as a LoRa radio module if I want to use the OpenMV Cam H7 Plus as the development board? According to the pinouts, an Mbed shield does not fit this MCU.

Any suggestions on which direction to go to get OpenMV inference results onto TTN?
Thanks in advance for answering, and great talk at the last TTN conference!

@wdebbaut, good question… You should be able to wire the shield to the OpenMV without too much effort, as the shield uses only a few pins (see https://os.mbed.com/media/components/pinouts/MB2xAS_Pinout.jpg): just SPI, one analog pin and a digital pin or two. But… I don't see any drivers readily available for the OpenMV, and it's not trivial to get the timings right. Perhaps using a module (something with both the radio and its own MCU) that you can then talk to over UART (AT commands) would be easier.

Thanks for answering, Jan,

in that case it may be easier to plug an ordinary webcam into the ST development board with the Mbed LoRa shield, for which drivers are available. I will have a look at the TTN forum for that, or on the ST Microelectronics docs page.
Honestly, I do not understand your module solution; it seems complicated working with AT commands over a UART interface between the OpenMV and that 'module'.

We will keep you posted on this one, as it is a research project we are actually involved in at our university in Leuven.

Yeah, the downside is that you then need to write the image processing code on the ST board yourself, which is already done for you on the OpenMV board, and the ST IoT Discovery Kit has a lot less RAM than the OpenMV Cam H7 Plus. But you can run any exported model on virtually any ST board, so you can also switch to a higher-end dev board, e.g. https://nl.mouser.com/ProductDetail/stmicroelectronics/nucleo-h743zi2/?qs=lYGu3FyN48cfUB5JhJTnlw%3D%3D&countrycode=DE&currencycode=EUR + ArduCam + LoRa shield, all supported by Mbed already.

Honestly, I do not understand your module solution; it seems complicated working with AT commands over a UART interface between the OpenMV and that 'module'.

The nice thing about a module is that it handles everything around the radio. The only thing you need to do is:

  1. Set keys.
  2. Say "Send a message".

Whereas with the LoRa shield the full LoRaWAN stack has to run on the same MCU, which means you need to port and maintain that stack yourself. And you'll only need a basic UART connection to the module, which makes writing a small driver trivial (https://docs.openmv.io/library/pyb.UART.html).

For people running into this thread: the Arduino Portenta H7 + Vision Shield with LoRa module gets you a really nice setup for building models that classify images and send the results over LoRaWAN. Here's an end-to-end application: https://github.com/edgeimpulse/example-portenta-lorawan


Hi all,
is there some example (i.e. sketch and wiring) of how to send the classification result (e.g. from an Arduino Nano 33) over LoRaWAN using a shield or a UART module?

Thank you

Riccardo

@wallax, the Portenta example uses the standard MKRWAN library to send data, so I think it's pretty much the same code. When the conclusion changes, just call modem.print; see the code here: https://github.com/edgeimpulse/example-portenta-lorawan/blob/9f5c27550dbfec8a555cefb3bcb64cede28f5408/src/ei_main.cpp#L92
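
To make that concrete, the pattern looks roughly like this. A minimal sketch, assuming MKRWAN for the radio: the keys are placeholders, and `get_top_classification()` is a hypothetical helper wrapping the Edge Impulse `run_classifier()` call (the real code is in the repo linked above):

```cpp
#include <MKRWAN.h>

LoRaModem modem;

// Placeholders: use your own credentials from the TTN console.
String appEui = "0000000000000000";
String appKey = "00000000000000000000000000000000";

int lastResult = -1;

// Hypothetical helper: wraps run_classifier() and returns the index
// of the class with the highest confidence.
int get_top_classification();

void setup() {
  modem.begin(EU868);               // pick the region for your gateways
  modem.joinOTAA(appEui, appKey);
}

void loop() {
  int result = get_top_classification();
  if (result != lastResult) {       // only transmit when the conclusion changes
    modem.beginPacket();
    modem.write((uint8_t)result);   // a single byte is plenty
    modem.endPacket(false);         // unconfirmed uplink
    lastResult = result;
  }
  delay(2000);
}
```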

Hi Jan,
I read the post, but I'm missing something due to my limited C programming experience… maybe I have to implement serial communication between the Nano 33 and the MKR shield device, which then sends the data it receives from the Nano 33 (over serial?) via LoRa?

Do you know where I can find an example of connecting the Nano 33 to a radio module (e.g. the SX1276) with the Mbed OS libs?

thank you, I'm a bit confused, sorry

Riccardo

Hi @wallax, ah, with the SX1276 directly. My suggestion would be:

  1. Get the LoRaWAN part working first. I doubt that anyone has ever used the Mbed OS stack with the Nano 33 BLE Sense, so using an Arduino library for that might be easier.
  2. Once you have this working, just hook the send code into one of the Edge Impulse example sketches for the Nano 33 BLE Sense, as in the sketch below.
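
To make step 2 concrete, here's roughly what the hook looks like. A sketch only: `run_classifier()`, `ei_impulse_result_t` and `EI_CLASSIFIER_LABEL_COUNT` come from the Edge Impulse Arduino library, `signal` is already set up by the example sketch, and `lora_send_byte()` is a placeholder for whatever send routine you got working in step 1:

```cpp
// Inside the example's loop, after the signal buffer has been filled:
ei_impulse_result_t result = { 0 };
EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
if (err != EI_IMPULSE_OK) {
  return;
}

// Find the label with the highest confidence.
size_t top = 0;
for (size_t ix = 1; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
  if (result.classification[ix].value > result.classification[top].value) {
    top = ix;
  }
}

// Only bother the network when we're reasonably confident.
if (result.classification[top].value > 0.8f) {
  lora_send_byte((uint8_t)top);   // placeholder: your step-1 send routine
}
```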

Hi Jan,

I'm sorry, but I do not understand the easiest way to send the classification results (i.e. accelerometer, audio, etc.) from my Arduino Nano 33 BLE Sense over LoRa/LoRaWAN to a LoRa gateway/TTN. (I'm definitely poor at C programming, but I'm an enthusiast of Edge Impulse Studio…)

Is it communication using the MKRWAN library, for example with another Arduino MKR1300/1310 which reads the serial output and sends it over LoRa, as in https://github.com/arduino-libraries/MKRWAN/blob/master/examples/LoraSendAndReceive/LoraSendAndReceive.ino ?

Or do you think it is easier to use (for example) an SX1276 module or something similar?

Or maybe (hopefully) there is a third, easier way?

Thank you so much for your time, any help will be very much appreciated

@wallax, so the first thing is that you'll need to find a way of connecting a LoRa radio to the Nano 33 BLE Sense. My suggestion would be to find a module that supports AT-style commands and comes with an Arduino library, e.g. the RN2483 (not sure if they're still sold, and whether they come with headers) or the CMWX1ZZABZ-091 module (same remark). That should let you do communications from the Nano 33 BLE Sense => TTN, and once you have that it's easy to send inference results over.

I think it'd be smart to cross-post this on the TTN forums as well; there are probably people who have done such a thing.
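
If you go the RN2483 route, the host side is just line-based text over UART. A rough sketch (not tested on this exact combination): the keys are placeholders, the module's default UART settings are 57600 8N1, and note that the RN2483 speaks its own plain-text command set rather than classic AT commands:

```cpp
// Hypothetical sketch: Nano 33 BLE Sense driving an RN2483 on Serial1.
void rn2483_cmd(const char *cmd) {
  Serial1.print(cmd);
  Serial1.print("\r\n");
  Serial.println(Serial1.readStringUntil('\n'));  // usually "ok"
}

void setup() {
  Serial.begin(115200);
  Serial1.begin(57600);               // RN2483 default baud rate
  Serial1.setTimeout(10000);          // joining can take a few seconds

  // Placeholders: set your own identifiers/keys from the TTN console.
  // On most modules you also need "mac set deveui <hweui>" first.
  rn2483_cmd("mac set appeui 0000000000000000");
  rn2483_cmd("mac set appkey 00000000000000000000000000000000");
  rn2483_cmd("mac join otaa");
  // after "ok" the module replies "accepted" or "denied" asynchronously
  Serial.println(Serial1.readStringUntil('\n'));
}

void loop() {
  // Send one byte (here 0x01, e.g. a class index) unconfirmed on port 1.
  rn2483_cmd("mac tx uncnf 1 01");
  Serial.println(Serial1.readStringUntil('\n'));  // "mac_tx_ok" when done
  delay(60000);                       // respect the duty cycle / fair use
}
```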

Thank you so much for your time

In the meantime I have found this tutorial, which seems to cover this topic:

https://osoyoo.com/2018/07/26/osoyoo-lora-tutorial-how-to-use-the-uart-lora-module-with-arduino/

It uses a UART module for LoRa transmission.

I think it could be good to share with others.

However, as a second step I'm looking for a way to generalize the (LoRa) communication part when using the firmware building tools of your IDE (i.e. also for vision boards such as the Himax, etc.).

Such firmwares (i.e. Himax, Thunderboard Sense, etc.) all output the classification over serial and, where available, BLE, don't they?

So I need to parse this data (e.g. on an Arduino MKR1300) in order to format it correctly as Cayenne LPP for TTN, is that right?

Are you planning any upgrades to the IDE functionality in that direction, or could the solution I propose be a good alternative?

thank you

@wallax So rather than printing out the results, you would just send them over the LoRa comms part.
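
For the Cayenne LPP part of your question: a minimal sketch, assuming the CayenneLPP Arduino library is used on the MKR1300 and that your inference code hands over a class index plus a confidence value (the channel numbers are arbitrary):

```cpp
#include <MKRWAN.h>
#include <CayenneLPP.h>

LoRaModem modem;
CayenneLPP lpp(51);   // 51 bytes fits comfortably in a small LoRaWAN payload

// Pack one classification result as Cayenne LPP and send it.
void sendResult(uint8_t topClass, float confidence) {
  lpp.reset();
  lpp.addDigitalInput(1, topClass);    // channel 1: winning class index
  lpp.addAnalogInput(2, confidence);   // channel 2: confidence, 0.01 resolution

  modem.beginPacket();
  modem.write(lpp.getBuffer(), lpp.getSize());
  modem.endPacket(false);
}
```

On the TTN side you can then select the Cayenne LPP payload formatter so the fields decode automatically.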

yes, perfect

So then I can use the UART LoRa module as envisaged.

Thanks a lot

Related to the other question (if you prefer I can open a new post such as "LoRa wireless transmission of classification decisions"): when a user creates a ready-to-go firmware (generalizing for every board, see picture), could it be useful to use an MKR1300 M0 board to catch the result over serial and share it over LoRa?

This is because every board involves a different development IDE and different programming skills, so the ready-to-go option could make it very easy to adopt all the boards for audio/video etc. recognition. On the other hand, the firmware, being compiled, is I think not editable anymore.

Thanks

Ric

Hi @wallax, yeah, the ready-to-go binaries are super useful for quickly validating your model, but you'll need to compile them from source if you want to modify anything. So my suggestion would be to use the Arduino library as a starting point and share an Arduino sketch showing how to integrate.
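
For anyone picking this up, a minimal sketch of that forwarder idea, assuming the MKR1300/1310 receives one line per classification from the inference board on Serial1 (the join code and the line format are up to you):

```cpp
#include <MKRWAN.h>

LoRaModem modem;

void setup() {
  Serial1.begin(115200);              // wired to the inference board's serial TX
  modem.begin(EU868);
  // modem.joinOTAA(appEui, appKey);  // fill in your own credentials first
}

void loop() {
  if (Serial1.available()) {
    String line = Serial1.readStringUntil('\n');
    line.trim();
    if (line.length() > 0) {
      modem.beginPacket();
      modem.print(line);              // raw text; Cayenne LPP would be smaller
      modem.endPacket(false);
    }
  }
}
```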

Thanks Jan,

I think I will try that approach

Ric

Hi, would you please let me know whether we can apply Edge Impulse machine learning algorithms to the cybersecurity of LoRaWAN?

Can you please suggest a few good topics on LoRaWAN security using ML and DL?
Thank you