During The Things Conference this January I talked about the combination of LoRaWAN and on-device machine learning (TinyML). This is an incredibly powerful combination: to preserve power and bandwidth, LoRaWAN devices often resort to sending very little data. A device that monitors machine health by observing the machine's vibration pattern might just send the peak motion every hour, discarding potentially interesting fault states. With machine learning you can analyze the full signal on the device itself, and send just the conclusion ('abnormal vibration pattern seen') to the network.
I watched that talk. I've only just discovered TinyML and Edge Impulse, and I'm totally blown away by it. I've been developing remote, ultra-low-power sensors for a number of years and I'm very excited by the possibilities this new technology will bring. I almost don't know where to start! I'm also into LoRaWAN and The Things Network, which is an awesome combination in my opinion. I have some experience with image sensors, so I thought I might start with something along those lines. I have believed for years that what is needed is low-power intelligence at the edge. I developed a vibration sensor back in 2013 that listened for events, which it then analysed and transmitted only the key parameters of each event. It kind of worked but was pretty limited. I truly believe a sensor like that (and many others) could be made to work now.
A question… Can anyone recommend a low-power, low-resolution image sensor?
Hi @BenH, awesome! I'd suggest taking a look at OpenMV for now; they have some really nice modules coming out which are great for images. It's not currently possible to train these with Edge Impulse, but we'll be adding support soon!
In that demo you used the ST kit together with a LoRa radio shield.
What do you recommend as a LoRa radio module if I want to use the OpenMV Cam H7 Plus as the development board? According to the pinouts, an Mbed shield does not fit this board.
Any suggestions on which direction to go to get the OpenMV's inferencing results onto TTN?
Thanks in advance for answering, and great performance at the last TTN conference!
@wdebbaut, good question… You should be able to wire the shield to the OpenMV without too much effort, as the shield uses only a few pins (see https://os.mbed.com/media/components/pinouts/MB2xAS_Pinout.jpg) - just SPI, one analog pin, and a digital pin or two. But… I don't see any drivers readily available for the OpenMV, and it's not trivial to get the timings right. Perhaps using a module (something with both the radio and its own MCU) that you can then talk to over UART (AT commands) would be easier.
In that case it might be easier to plug an ordinary webcam into the ST development board with the Mbed LoRa shield, for which drivers are available. I will have a look at the TTN forum for that, or at the ST Microelectronics docs pages.
Honestly, I do not understand your module solution; working with AT commands over a UART interface between the OpenMV and that 'module' seems complicated.
We will keep you posted on this one, as it is a research project we are actually involved in at our university in Leuven.
The nice thing about a module is that it handles everything around the radio. The only things you need to do are:
Set keys.
Say “Send a message”.
Whereas with the LoRa shield, the full LoRaWAN stack has to run on your own MCU. With a module you only need a basic UART connection, which makes writing a small driver trivial (https://docs.openmv.io/library/pyb.UART.html).
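To give a feel for how little that driver has to do, here's a minimal sketch of the set-keys/join/send flow, using the RN2483's documented command set as a concrete example. It's written as Arduino-style C++ for consistency with the rest of this thread; on the OpenMV the same command strings would simply be written through pyb.UART. The keys are placeholders and the response handling is deliberately naive, so treat it as a sketch rather than a driver:

```cpp
// Minimal sketch of the AT-command flow for a UART LoRaWAN module,
// using the RN2483's command set. Keys below are placeholders -- use
// the values from your TTN console. Error handling is omitted.
String readLine() {
  while (!Serial1.available()) {}           // block until the module replies
  return Serial1.readStringUntil('\n');
}

void sendCmd(const char *cmd) {
  Serial1.print(cmd);
  Serial1.print("\r\n");
  Serial.println(readLine());               // e.g. "ok" or "invalid_param"
}

void setup() {
  Serial.begin(115200);
  Serial1.begin(57600);                     // RN2483 default baud rate

  // 1. Set keys and join (OTAA)
  sendCmd("mac set deveui 0011223344556677");
  sendCmd("mac set appeui 0000000000000000");
  sendCmd("mac set appkey 00112233445566778899AABBCCDDEEFF");
  sendCmd("mac join otaa");                 // replies "ok" right away...
  Serial.println(readLine());               // ...then "accepted" or "denied"

  // 2. Say "send a message": one hex-encoded byte on port 1
  sendCmd("mac tx uncnf 1 01");
}

void loop() {}
```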
For people running into this thread: the Arduino Portenta H7 + vision shield with LoRa module gets you a really nice setup to build models that classify images and send the results over LoRaWAN. Here's an end-to-end application: https://github.com/edgeimpulse/example-portenta-lorawan
Hi all,
Is there an example (i.e. sketch and wiring) of how to send the classification result (e.g. from an Arduino Nano 33) over LoRaWAN using a shield or a UART module?
Hi Jan,
I read the post, but I'm missing something due to my limited expertise in C programming… do I have to implement serial communication between the Nano 33 and the MKR board, which then sends the data it receives from the Nano 33 (over serial?) via LoRa?
Do you know where I can find an example of connecting the Nano 33 to a radio module (e.g. an SX1276) using the Mbed OS libraries?
Hi @wallax, ah with the SX1276 directly. My suggestion would be:
Get the LoRaWAN part working first. I doubt that anyone has ever used the Mbed OS stack with the Nano 33 BLE Sense, so using an Arduino library for that might be easier.
Once you have this working, just hook the send code into one of the Edge Impulse example sketches for the Nano 33 BLE Sense.
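For step 2, the hook itself can be tiny. Below is a rough, hypothetical sketch of what it could look like, assuming the run_classifier() API from the Edge Impulse Arduino library; the `signal` argument comes from the surrounding example sketch, and sendOverLoRa() is a made-up stand-in for whatever send function your radio library ends up providing:

```cpp
// Hypothetical helper: run inference and send the winning label over LoRa.
// `signal` is the populated signal_t from the Edge Impulse example sketch,
// and sendOverLoRa() is a stand-in for your radio library's send call.
void classifyAndSend(signal_t *signal) {
    ei_impulse_result_t result = { 0 };
    if (run_classifier(signal, &result, false) != EI_IMPULSE_OK) {
        return;                              // inference failed, skip this round
    }

    // Find the label with the highest confidence...
    size_t best = 0;
    for (size_t ix = 1; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        if (result.classification[ix].value > result.classification[best].value) {
            best = ix;
        }
    }

    // ...and send only that. A single byte with the label index keeps the
    // payload (and thus the airtime) as small as possible.
    uint8_t payload[1] = { (uint8_t)best };
    sendOverLoRa(payload, sizeof(payload));
}
```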
I'm sorry, but I do not understand what the easiest way is to send the classification results (accelerometer, audio, etc.) from my Arduino Nano 33 BLE Sense over LoRa/LoRaWAN to a LoRa gateway / TTN (I'm definitely poor at C programming, but I'm an enthusiast of Edge Impulse Studio…).
@wallax, so the first thing is that you'll need to find a way of connecting a LoRa radio to the Nano 33 BLE Sense. My suggestion would be to find a module that supports AT commands and comes with an Arduino library, e.g. the RN2483 (not sure if they're still sold, and whether they come with headers) or the CMWX1ZZABZ-091 module (same remark). That should let you do communication from the Nano 33 BLE Sense => TTN, and once you have that it's easy to send inference results over (see the sketch below).
I think it'd be smart to cross-post this on the TTN forums as well; there are probably people who have done such a thing.
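If you do find an RN2483, here is a hedged sketch of what the library route could look like using the TheThingsNetwork Arduino library (which was written for the RN2483/RN2903). I'm assuming the module's UART is wired to the Nano 33's Serial1 pins, and I haven't verified that this library builds on the Nano 33's Mbed-based core, so treat it as a starting point only:

```cpp
#include <TheThingsNetwork.h>

// Placeholders -- replace with the keys from your TTN console.
const char *appEui = "0000000000000000";
const char *appKey = "00112233445566778899AABBCCDDEEFF";

#define loraSerial Serial1               // RN2483 wired to the Nano's UART pins
#define debugSerial Serial
#define freqPlan TTN_FP_EU868            // adjust to your region

TheThingsNetwork ttn(loraSerial, debugSerial, freqPlan);

void setup() {
  loraSerial.begin(57600);               // RN2483 default baud rate
  debugSerial.begin(115200);
  ttn.join(appEui, appKey);              // OTAA join, retries until accepted
}

void loop() {
  // In a real sketch this byte would come from the inference result.
  byte payload[1] = { 0x01 };
  ttn.sendBytes(payload, sizeof(payload));
  delay(60000);                          // mind the duty cycle / TTN fair use
}
```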
However, as a second step I'm looking for a way to generalize the (LoRa) communication part when using the firmware-building tools of your IDE (i.e. also for vision boards such as the Himax, etc.).
Such firmware (i.e. for the Himax, Thunderboard Sense, etc.) all outputs the classification on serial and, if the board supports it, BLE, doesn't it?
So I would need to parse this data (e.g. on an Arduino MKR1300) in order to format it correctly as Cayenne LPP for TTN, is that right?
Are you planning to upgrade the IDE's functionality in that direction, or could the solution I propose be a good alternative?
Related to the other question (if you prefer, I can open a new post such as "LoRa wireless transmission of classification decision"): when a user creates a ready-to-go firmware (generalizing for every board, see picture), could it be useful to use an MKR1300 (M0) board to catch the result over serial and share it over LoRa?
This is because every board requires a different development IDE and different programming skills, so the ready-to-go option could make adopting any of the boards for audio/video recognition very easy. On the other hand, since the firmware comes compiled, I think it is not editable anymore.
Hi @wallax, yeah, the ready-to-go binaries are super useful for quickly validating your model, but you'll need to compile them from source if you want to modify anything. So my suggestion would be to use the Arduino library as a starting point and share an Arduino sketch showing how to integrate; a rough sketch of your MKR1300 bridge idea is below.
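To make the bridge idea concrete, here is an untested sketch of an MKR WAN 1300 that reads one classification line per inference from the other board on Serial1, packs it as Cayenne LPP, and uplinks it with the MKRWAN and CayenneLPP Arduino libraries. The serial line format ("<label index> <confidence>"), the keys, and the LPP channel numbers are all assumptions you'd adapt to your firmware's actual output:

```cpp
// Hypothetical bridge sketch for an MKR WAN 1300: read a classification
// line from the inferencing board on Serial1 and forward it as Cayenne LPP.
#include <MKRWAN.h>
#include <CayenneLPP.h>

LoRaModem modem;
CayenneLPP lpp(51);                       // 51-byte buffer, fits DR0 payloads

// Placeholders -- use the values from your TTN console.
const char *appEui = "0000000000000000";
const char *appKey = "00112233445566778899AABBCCDDEEFF";

void setup() {
  Serial1.begin(115200);                  // UART from the inferencing board
  modem.begin(EU868);                     // adjust to your region
  modem.joinOTAA(appEui, appKey);
}

void loop() {
  if (!Serial1.available()) return;

  // Assumes the other board prints one line per inference, e.g. "2 0.87"
  // (label index and confidence) -- adapt the parsing to your firmware's
  // actual serial output.
  int label = Serial1.parseInt();
  float confidence = Serial1.parseFloat();
  Serial1.readStringUntil('\n');          // discard the rest of the line

  lpp.reset();
  lpp.addDigitalInput(1, label);          // channel 1: class index as a byte
  lpp.addAnalogInput(2, confidence);      // channel 2: confidence 0..1

  modem.beginPacket();
  modem.write(lpp.getBuffer(), lpp.getSize());
  modem.endPacket(false);                 // unconfirmed uplink
  delay(60000);                           // respect duty cycle / TTN fair use
}
```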