Arduino IDE Serial Monitor issue

Hi,

I’ve followed a tutorial using your awesome tool for tinyML.

Everything went well, and I ran the nano_ble33_sense_microphone example on the Arduino Nano 33 BLE Sense board.

Everything was fine: I built the library (ei-xxx-arduino-1.0.1), the board classifies, the Serial Monitor shows classification results, etc.

Then I tried to refine the neural network: I added some other sounds and played with the MFE/MFCC settings.
Then I built the new library (v1.0.2), removed the older one from the Arduino IDE, and included the new one as usual.

Unfortunately, when I run the same nano_ble33_sense_microphone example on the Arduino Nano 33 BLE Sense board, this time the Serial Monitor doesn’t show anything. The LEDs are not blinking (only the green one is on steadily, as usual)…

Then, if I replace the ei-xxx-arduino-1.0.2 library with the older one (1.0.1), everything goes back to OK, but with the old NN… obviously :slight_smile:

Can you please help me understand what I’m missing?

thanks in advance,

Riccardo

Hi, this sounds particularly peculiar! Can you share the exact steps you used to update the model, and which parameters you changed? My first guess is that the updated model was too big and has maybe caused an exception. Could you try changing only one parameter and see if that works?

Hi

thank you so much for your prompt reply!

Now I’ve created a new project, uploaded the dataset with 6 categories, created the impulse and the NN, built and installed the library as .ZIP in the Arduino IDE.

Then I ran the nano_ble33_sense_microphone example, but the Serial Monitor stayed empty.

So I added a delay(10000) after the Serial.begin(115200); (see the snippet after the output below), and I got this output:
Edge Impulse Inferencing Demo
Inferencing settings:
Interval: 0.02 ms.
Frame size: 132300
Sample length: 8268 ms.
No. of classes: 6
ERR: Failed to setup audio sampling
Starting inferencing in 2 seconds…
Recording…

So the error is related to audio sampling.
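For reference, the only change I made was this (the long delay is just a crude way to catch the early prints before the Serial Monitor attaches; a while (!Serial); wait would do the same):

void setup() {
  Serial.begin(115200);
  delay(10000); // wait so the first messages are not lost
  // ... rest of the example's setup() unchanged
}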

However, I adopted the default settings in the Data acquisition section of Edge Impulse (16000 Hz sampling rate)… am I missing something?

PS: In other projects I also tried 44100/48000 Hz sampling rates, but those don’t work either (maybe they’re not supported by the Arduino Nano 33 sampler?).

Thank you…

Riccardo

@aurel or @jenny Can you take a look here?

Hi @wallax,

It looks like the sampling rate in your dataset is around 48 kHz (an interval of 0.02 ms corresponds to 1/0.00002 s ≈ 50 kHz), whereas the Arduino Nano 33 BLE Sense mic sample rate is 16 kHz.
Could you try downsampling your audio files to 16 kHz and re-importing them into your project?

Aurelien


Hi

thank you for the reply,

So the Arduino Nano 33 BLE’s maximum sampling frequency is 16 kHz? Or can I maybe pass a higher rate to PDM.begin()?

As I understand it, your library sets the PDM sampling rate to the sampling rate used in the dataset; is that right?

Can you suggest another Arduino-like board able to sample at 48 kHz (i.e. with an external microphone?), or a Mac application for downsampling to 16 kHz? I’ve seen SoX, but I thought it only works at 44.1 kHz or higher…

Thank you very much…

As far as I know, the Nano 33 BLE Sense only does 16 kHz.
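To your PDM.begin() question: for illustration, this is roughly how the stock Arduino PDM library is initialized, and 16 kHz mono is the rate that’s known to work on this board (a minimal sketch, not the Edge Impulse code):

#include <PDM.h>

short sampleBuffer[256];  // raw 16-bit PCM samples from the mic
volatile int samplesRead;

void onPDMdata() {
  // called by the library whenever PDM data is available
  int bytesAvailable = PDM.available();
  PDM.read(sampleBuffer, bytesAvailable);
  samplesRead = bytesAvailable / 2; // 2 bytes per 16-bit sample
}

void setup() {
  Serial.begin(115200);
  PDM.onReceive(onPDMdata);
  // 1 channel at 16 kHz; as far as I know, higher rates fail here
  if (!PDM.begin(1, 16000)) {
    Serial.println("Failed to start PDM!");
    while (1);
  }
}

void loop() {
}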

SoX can do this fine on macOS, by the way:

sox in.wav -r 16000 out.wav
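(The -r 16000 option resamples the input to 16 kHz; run it in a loop over your dataset folder to convert everything in one go.)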

ah, ok

thank you so much!

Riccardo

Hi

I’ve now downsampled my dataset to 16 kHz, and the nano_ble33_sense_microphone example runs perfectly (thank you so much).

On the other hand, when I add the BLE functionality (which I had already added to the first version of my Edge Impulse project, where it worked perfectly), I get the following in the Serial Monitor (maybe I should open another topic in the forum?):

Recording…
Recording done
ERR: MFCC failed (-1002)
ERR: Failed to run DSP process (-1002)
ERR: Failed to run classifier (-5)
Starting inferencing in 2 seconds…
Recording…

Moving step by step to reproduce the working sketch from the first version, I’ve added ONLY the following lines of code:

#include <ArduinoBLE.h>
BLEService ledService("19B10010-E8F2-537E-4F6C-D104768A1214"); // create service
BLEByteCharacteristic ledCharacteristic("19B10011-E8F2-537E-4F6C-D104768A1214", BLERead | BLEWrite);

and, in setup(), below Serial.begin(115200); I’ve added:

if (!BLE.begin()) {
  Serial.println("starting BLE failed!");
  while (1);
}

That if clause creates the problem… but in the previous version it worked perfectly, and there I had even added more lines to write a different value to the characteristic for each classified sound…

thank you

Ric

Glad to hear about your progress! But ugh, these are always tough gremlins to extinguish. It’s probably worth opening a new thread for this new issue.

@wallax here’s a workaround: Incompatible with ArduinoBLE?

Reported it to the TF team but never got a response, unfortunately.

Thanks… I’ll take a look and then let you know whether it’s OK or I need to open a new thread…

thank you so much

Riccardo


Hi, just another brief question… what kind of classifier does your tool implement? Is it a k-Nearest Neighbor or something else? Is there a link to some theoretical documentation or an overview (e.g. one also covering the differences between convolutional NNs with pooling and dense NNs)?

thank you so much!

Most of the time, the pre-set architectures in the “NN Classifier” menu will give you the best out-of-the-box performance. For audio it is a 1D convolutional network. In my experience, increasing the number of training cycles and/or adjusting the learning rate will give you the most bang for the buck. There are lots of additional resources out there if you want to really dive into the meaning behind the different layers, etc., but the system is designed so you don’t have to be an ML expert.
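If it helps to picture what “1D convolutional” means here: the network slides small kernels along the time axis of the extracted feature frames (your MFE/MFCC output). A toy illustration of a single filter with valid padding, just to show the operation (not our actual implementation):

#include <stdio.h>

#define SEQ_LEN    8   // number of time steps (feature frames)
#define KERNEL_LEN 3   // width of the convolution kernel

// Slide the kernel along the sequence: one output per window position.
void conv1d(const float x[SEQ_LEN], const float k[KERNEL_LEN],
            float y[SEQ_LEN - KERNEL_LEN + 1]) {
  for (int i = 0; i <= SEQ_LEN - KERNEL_LEN; i++) {
    float acc = 0.0f;
    for (int j = 0; j < KERNEL_LEN; j++) {
      acc += x[i + j] * k[j];
    }
    y[i] = acc;
  }
}

int main(void) {
  float x[SEQ_LEN] = {0, 1, 2, 3, 4, 5, 6, 7};
  float k[KERNEL_LEN] = {0.25f, 0.5f, 0.25f}; // a small smoothing filter
  float y[SEQ_LEN - KERNEL_LEN + 1];
  conv1d(x, k, y);
  for (int i = 0; i < SEQ_LEN - KERNEL_LEN + 1; i++) {
    printf("%.2f ", y[i]); // prints 1.00 2.00 3.00 4.00 5.00 6.00
  }
  printf("\n");
  return 0;
}

A real layer just has many such filters, applied across all feature channels, with the kernel weights learned during training.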

Hi, I just want to understand which kind of classifier the tool uses, because I’m using it for some academic work and I have to state the classifier (kNN, SVM, random forest…) when writing papers…

thank you

Hi @wallax, for classification and regression we use neural networks, and for anomaly detection K-means.


Hi

I’ve read in your docs that your (precious) EON tool is based on the CMSIS library. From the internet I see that this library can implement several classifiers, such as SVM (e.g. linear, RBF, etc.), Naive Bayes, etc.

So I’d like to understand which one is adopted by default, and whether it is somehow possible to switch from one to another…
Actually, I’m getting exciting results from adopting Edge Impulse ML, but I have to describe it. I don’t need to dive in and modify anything; I just need to understand it and describe it briefly (if you have some documentation; or maybe you could refer to my (future) papers as documentation…?).

Thanks for your patience

Ric

Hi @wallax, we indeed use CMSIS-DSP and CMSIS-NN underneath to enable hardware acceleration on Cortex-M microcontrollers (but they’re not the only things we use; e.g. we use embARC MLI on ARC to do the same, we use the Arm Compute Library on Cortex-A, and we have non-accelerated implementations for other targets). But we use these parts selectively: we use the FFT functions to make spectrograms faster, and we use the neural network kernels to make neural networks faster. You can’t train models other than neural networks and K-Means clustering in Edge Impulse today, though.
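For a flavor of what “using the FFT functions” means, a real FFT over one audio frame with CMSIS-DSP looks roughly like this (a sketch for illustration; the frame length is an assumption and this is not our exact code):

#include "arm_math.h"

#define FRAME_LEN 256 // one power-of-two audio frame

float32_t frame[FRAME_LEN];    // windowed time-domain samples (input)
float32_t spectrum[FRAME_LEN]; // packed real/imaginary FFT output

void frame_fft(void) {
  arm_rfft_fast_instance_f32 fft;
  arm_rfft_fast_init_f32(&fft, FRAME_LEN);
  arm_rfft_fast_f32(&fft, frame, spectrum, 0); // 0 = forward transform
}

The magnitude of each output bin then becomes one column of the spectrogram that the MFE/MFCC blocks work on.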

Hope that helps a bit in clarifying.


ok

thank you Jan for the prompt response