FireGuard: running two impulses on an Arduino Portenta H7

Hello everyone!
I’m a fresh Informatics Engineering graduate, and a couple of months ago I started working on my thesis project, which involved creating a device capable of recognizing and signaling the presence of nearby forest fires.
The initial goal was to recognize a nearby fire from audio data, so I built an audio dataset of over 2800 signals, each lasting 5 seconds. I used Edge Impulse to create a CNN that classified the samples into two classes: “Fire” and “Not-Fire”. The model worked fairly well at recognizing fire from audio, and since I still had time left before the thesis deadline, I was given the opportunity to work with an Arduino Portenta H7. Since the device has two cores, I thought it would be interesting to try running two neural networks on the same device: one for analyzing audio data, the other for image processing.
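For context on the classification step: the Arduino library that Edge Impulse generates exposes a `run_classifier()` call that consumes a signal callback and fills a result struct with per-label scores. Below is a minimal sketch of how one 5-second window could be classified on-device; the header name `fireguard_inferencing.h` is a hypothetical placeholder for whatever Edge Impulse names your project’s library, and filling the buffer from the microphones is omitted.

```cpp
#include <fireguard_inferencing.h>   // hypothetical name of the generated library

// One window of audio samples, filled elsewhere by the microphone driver.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback that hands slices of the buffer to the classifier.
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

// Run the impulse on the current window and print the per-class scores.
void classify_window() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_signal_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

  // With two classes, the labels here are "Fire" and "Not-Fire".
  for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
    Serial.print(result.classification[i].label);
    Serial.print(": ");
    Serial.println(result.classification[i].value);
  }
}
```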
Together with the Arduino Portenta H7 I had the Portenta Vision Shield (LoRaWAN) available, which adds two microphones, a camera, an SD card reader, and a LoRaWAN module to the device. I built a dataset of about 5000 images, and again used Edge Impulse to generate the CNN (a MobileNet) that performed the recognition. The challenge now was to run the two models on separate cores and make the cores communicate.
I decided to load the audio recognition model onto the M4 core and the image recognition model onto the M7 core. Initially the audio model produced memory errors, which I was able to solve thanks to this community’s help. Another problem I encountered was making the two cores communicate: the RPC library for serial communication between the cores was giving compilation errors, so I worked around it by writing/reading the device’s digital pins.
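As a rough reconstruction of that workaround (the pin number and the two helper functions are arbitrary placeholders, not necessarily what the project actually uses), the M4 sketch can raise a GPIO when its audio model detects fire, and the M7 sketch polls that pin:

```cpp
// PIN_FIRE_AUDIO is a free GPIO chosen for illustration only.
#define PIN_FIRE_AUDIO D5

bool audio_says_fire();       // hypothetical wrapper around the audio classifier
void capture_and_classify();  // hypothetical camera + MobileNet pipeline

#ifdef CORE_CM4
// M4 sketch: raise the pin when the audio model detects fire.
void setup() {
  pinMode(PIN_FIRE_AUDIO, OUTPUT);
  digitalWrite(PIN_FIRE_AUDIO, LOW);
}

void loop() {
  if (audio_says_fire()) {
    digitalWrite(PIN_FIRE_AUDIO, HIGH);
    delay(100);                      // hold the level long enough for the M7 to poll it
    digitalWrite(PIN_FIRE_AUDIO, LOW);
  }
}
#endif

#ifdef CORE_CM7
// M7 sketch: boot the M4, then run the vision pipeline when the pin goes high.
void setup() {
  bootM4();                          // starts the M4 core, as in Arduino's dual-core examples
  pinMode(PIN_FIRE_AUDIO, INPUT);
}

void loop() {
  if (digitalRead(PIN_FIRE_AUDIO) == HIGH) {
    capture_and_classify();
  }
}
#endif
```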
Subsequently I designed a test to be performed directly on the device. I won’t go into details, but the tests produced encouraging results, and seeing the project work was really satisfying.
Ultimately, the device is capable of:

  • Running two different CNNs, one on the M4 core and one on the M7 core. If the M4 core hears the sound of fire in the vicinity, it activates the M7 core, which takes a picture and classifies the sample.
  • If the image fed to the network on the M7 core is also classified as fire, an alert packet is sent using LoRaWAN technology (a sketch of this step and the SD write follows the list).
  • Every picture taken by the device is saved to the SD card as a matrix of pixels, which can later be compressed into standard formats on a computer. These pictures can be used to expand the dataset for the specific location where the device is placed.

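Since the post doesn’t show code for these two steps, here is a hedged sketch of the M7-side actions: saving the raw frame to the SD card with the mbed SDMMCBlockDevice/FATFileSystem pair that the Vision Shield SD examples use, and sending the alert with the MKRWAN library that Arduino’s Vision Shield LoRa tutorials use. The frame buffer, file naming, frequency band, and credentials are all assumptions, and the camera capture itself is omitted.

```cpp
#include <MKRWAN.h>
#include "SDMMCBlockDevice.h"
#include "FATFileSystem.h"

SDMMCBlockDevice blockDevice;          // the Vision Shield's SD slot
mbed::FATFileSystem fs("fs");
LoRaModem modem;

// Write the raw grayscale frame to the SD card; a desktop script can later
// repack these pixel matrices into PNG/JPG. The path format is a placeholder.
void save_frame(const uint8_t *frame, size_t len, int index) {
  if (fs.mount(&blockDevice) != 0) return;
  char path[32];
  snprintf(path, sizeof(path), "/fs/frame_%03d.raw", index);
  FILE *f = fopen(path, "wb");
  if (f != nullptr) {
    fwrite(frame, 1, len, f);
    fclose(f);
  }
  fs.unmount();
}

// Join once at boot; appEui/appKey come from your network console.
bool lora_join(const char *appEui, const char *appKey) {
  if (!modem.begin(EU868)) return false;   // the region is an assumption
  return modem.joinOTAA(appEui, appKey);
}

// One-byte payload; 0x01 meaning "fire detected" is an arbitrary convention.
void send_alert() {
  modem.beginPacket();
  modem.write((uint8_t)0x01);
  modem.endPacket(true);                   // true = confirmed uplink
}
```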
I’m really glad about this project; if you are interested in a detailed write-up, please visit this GitHub repo I made. I’m currently looking for a job as a software developer, so a few visits would help a lot.
Peace!


Great project @RayGun182, sorry it took me a while to find out about it. Everything you are doing with the Portenta H7 is interesting.

For non-RPC communication between the M7 and M4 cores, you might want to look here at how I do it, using basic serial communication and some trick pins on the Portenta H7.

I followed you on Twitter, since that makes it easy to communicate by DM; I’m @rocksetta.

I have recently got vision working on the M4 core, so that is something you might be interested in on this thread here: with vision on the inner M4 core, the outer, faster M7 core could work on the sound classification.

Very impressive getting the LoRa working. I struggled with that for a long time; my solutions for Helium are here, since I don’t have TTN nearby and never got it working.

Well done recording a picture off the camera and onto the SD card. I have a GitHub issue here hoping Arduino will make it easier to save JPGs and the like, but I would like to see your code.

I will look at your github and try to understand what you did.

P.S. well done!