Adding machine learning to your LoRaWAN device

During The Things Conference this January I talked about combining LoRaWAN with on-device machine learning (TinyML). This is an incredibly powerful combination: to preserve power and bandwidth, LoRaWAN devices often resort to sending very little data. A device that monitors machine health by observing the machine's vibration pattern might send only the peak motion every hour, discarding potentially interesting fault states. With machine learning you can analyze the full signal on the device itself and send just the conclusion ('abnormal vibration pattern seen') to the network.
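To make the idea concrete, here is a minimal sketch of that pattern: analyze a full window of vibration samples on the device, then uplink only a one-byte conclusion. The RMS-threshold classifier, the baseline value, and the function names are all hypothetical simplifications for illustration; a real deployment would use a trained model (e.g. one exported from an ML toolchain) in place of the threshold check.

```python
import math

def rms(samples):
    # Root-mean-square energy of one sampling window.
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def classify_vibration(samples, baseline_rms=1.0, threshold=3.0):
    # Toy on-device check (stand-in for a real ML model): flag the window
    # as abnormal when its energy exceeds `threshold` times the baseline.
    # Both parameters are made-up values for this sketch.
    return b"\x01" if rms(samples) > threshold * baseline_rms else b"\x00"

# Simulated accelerometer windows: low-amplitude healthy vibration
# versus a high-amplitude fault condition.
healthy = [0.5 * math.sin(0.1 * n) for n in range(256)]
faulty = [5.0 * math.sin(0.1 * n) for n in range(256)]

payload_ok = classify_vibration(healthy)    # b"\x00" — nothing to report
payload_bad = classify_vibration(faulty)    # b"\x01" — abnormal pattern seen
```

The point is the payload size: instead of uplinking 256 raw samples (far beyond a typical LoRaWAN payload budget), the device transmits a single byte carrying the conclusion.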

This is a companion discussion topic for the original entry at

I watched that talk. I’ve only just discovered TinyML and Edge Impulse, and I’m totally blown away by it. I’ve been developing remote, ultra-low-power sensors for a number of years and I’m very excited by the possibilities this new technology will bring. I almost don’t know where to start! I’m also into LoRaWAN and The Things Network; this is an awesome combination in my opinion. I have some experience with image sensors, so I thought I might start with something along those lines. I have believed for years that what is needed is low-power intelligence at the edge. I developed a vibration sensor back in 2013 that listened for events, which it then analysed, transmitting only key parameters of each event. It kind of worked but was pretty limited. I truly believe a sensor like that (and many others) could be made to work now.

A question… can anyone recommend a low-power, low-resolution image sensor?

Hi @BenH, awesome! I’d suggest taking a look at OpenMV for now; they have some really nice modules coming out which are great for images. It’s not currently possible to train these with Edge Impulse, but we’ll be adding support soon!