I’ve followed a tutorial using your awesome tool for tinyML:
Everything went well, and I ran the nano_ble33_sense_microphone example on the Arduino Nano 33 BLE Sense board.
Everything was fine: I built the library (ei-xxx-arduino-1.0.1), the board classifies, the Serial Monitor shows classification results, etc.
Then I tried to refine the neural network: I added some other sounds and played with MFE/MFCC.
Then I built the new library (v1.0.2), removed the older one from the Arduino IDE, and included the new one as usual.
Unfortunately, when I run the same nano_ble33_sense_microphone example on the Arduino Nano 33 BLE Sense board, this time the Serial Monitor doesn’t show anything. The LEDs are not blinking (only the green one is solid, as usual)…
Then, if I replace the ei-xxx-arduino-1.0.2 library with the older one (1.0.1), everything works again, but with the old NN, obviously…
Can you please help me understand what I’m missing?
Hi, this sounds particularly peculiar! Can you share the exact steps you used to update the model and which parameters you changed? My first guess is that the updated model is too big and has caused an exception. Try changing only one parameter and see if that works.
Now I’ve created a new project, uploaded the dataset with 6 categories, created the impulse and the NN, and built and installed the library as a .ZIP in the Arduino IDE.
Then I ran the nano_ble33_sense_microphone example, but the Serial Monitor stayed empty.
So I added a delay(10000) after the Serial.begin(115200); and got this output:
Edge Impulse Inferencing Demo
Interval: 0.02 ms.
Frame size: 132300
Sample length: 8268 ms.
No. of classes: 6
ERR: Failed to setup audio sampling
Starting inferencing in 2 seconds…
Note the error related to audio sampling.
However, I used the default settings in the data acquisition section of Edge Impulse (16000 Hz sampling rate)… am I missing something?
PS: In other projects I also tried a 44100/48000 Hz sampling rate, but it doesn’t work (maybe it isn’t supported by the Arduino Nano 33 microphone?).
It looks like the sampling rate in your dataset is set to 48 kHz (an interval of 0.02 ms corresponds to ~50 kHz), whereas the Arduino Nano 33 BLE Sense microphone sample rate is 16 kHz.
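The interval reported in the serial output maps back to a sample rate like this (a quick sanity-check sketch in Python; the numbers are taken from the log above):

```python
# Sample interval (ms) -> sample rate (Hz): rate = 1000 / interval_ms
reported_interval_ms = 0.02                # "Interval: 0.02 ms." from the log
implied_rate_hz = 1000 / reported_interval_ms
print(implied_rate_hz)                     # 50000.0 -> ~50 kHz, not 16 kHz

# The Nano 33 BLE Sense microphone samples at 16 kHz, i.e. an interval of:
print(1000 / 16000)                        # 0.0625 ms
```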
Could you try to downsample your audio files to 16 kHz and re-import them in your project?
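In case it helps, here is one way to do the 48 kHz → 16 kHz conversion offline. This is a minimal, hypothetical Python sketch using only the standard library's `wave` module; it naively decimates mono 16-bit files by an integer factor (no anti-aliasing low-pass filter), so for real datasets a proper resampler such as sox or ffmpeg is preferable:

```python
import wave

def downsample_wav(src_path, dst_path, factor=3):
    """Naively downsample a mono 16-bit WAV by an integer factor
    (e.g. 48000 Hz / 3 = 16000 Hz) by keeping every `factor`-th sample.
    NOTE: no anti-aliasing filter is applied; use sox/ffmpeg for
    production-quality resampling."""
    with wave.open(src_path, "rb") as src:
        assert src.getnchannels() == 1 and src.getsampwidth() == 2
        rate = src.getframerate()
        raw = src.readframes(src.getnframes())
    # each 16-bit mono frame is 2 bytes; keep one frame out of `factor`
    kept = b"".join(raw[i:i + 2] for i in range(0, len(raw), 2 * factor))
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(rate // factor)
        dst.writeframes(kept)
```

After converting, re-upload the 16 kHz files under Data acquisition and retrain before rebuilding the Arduino library.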
Now I’ve downsampled my dataset to 16 kHz, and the nano_ble33_sense_microphone example runs perfectly (thank you so much).
On the other hand, when I add BLE functionality (which I had already added to the first version of my Edge Impulse project, where it worked perfectly), I get the following in the Serial Monitor (maybe I should open another topic in the forum?):
ERR: MFCC failed (-1002)
ERR: Failed to run DSP process (-1002)
ERR: Failed to run classifier (-5)
Starting inferencing in 2 seconds…
Moving step by step to reproduce the working sketch from the first version, I’ve added ONLY the following lines of code:
Hi, just another brief question… what kind of classifier does your tool implement? Is it k-Nearest Neighbors or something else? Is there a link to theoretical documentation or an overview (e.g. also covering the differences between convolutional NN pooling and dense NN layers)?
Most of the time, the pre-set architectures in the “NN Classifier” menu will give you the best out-of-the-box performance. For audio it is 1D Convolutional. In my experience, increasing the number of training cycles and/or modifying the learning rate will give you the most bang for the buck. There are lots of additional resources out there if you want to really dive into the meaning behind the different layers, etc., but the system is designed so you don’t have to be an ML expert.
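To illustrate what a 1D convolutional layer does over a sequence of audio features, here is a toy Python sketch (not the actual Edge Impulse implementation, which uses optimized NN kernels): the kernel slides along the feature sequence and takes a dot product at each position.

```python
def conv1d(x, kernel, stride=1):
    """'Valid' 1D convolution (really cross-correlation, as in NN
    frameworks): slide the kernel over the sequence and take dot
    products at each stride step."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(x) - k + 1, stride)]

# A difference-detector-like kernel over a toy feature sequence:
print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # -> [-2, -2]
```

In the real classifier the same idea runs over frames of MFE/MFCC features with many learned kernels, followed by pooling and dense layers.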
Hi, I just want to understand which kind of classifier the tool uses, because I’m using it for some academic work and I have to state the classifier (kNN, SVM, random forest…) when writing papers…
I’ve read in your docs that your (precious) EON tool is based on the CMSIS library. From the internet I see that this library can implement several classifiers, such as SVM (e.g. linear, RBF, etc.), Naive Bayes, etc.:
So I’d like to understand which one is used by default, and whether it is possible to switch somehow from one to another…
Actually, I’m getting exciting results from adopting Edge Impulse ML, but I have to describe it. I don’t need to dive in and modify anything; I just need to understand it and briefly describe it (if you have some documentation, or maybe you can refer to my (future) papers as documentation…?).
Hi @wallax, we indeed use CMSIS-DSP and CMSIS-NN underneath to enable hardware acceleration on Cortex-M microcontrollers (but it’s not the only thing we use; e.g. we use MLI on ARC to do the same, we use the Arm Compute Library on Cortex-A, and we have non-accelerated implementations for other targets). But we use these parts selectively: we use the FFT functions to make spectrograms faster, and we use the neural network kernels to make neural networks faster. You can’t train models other than neural networks and K-Means clustering in Edge Impulse today.
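To illustrate the first point (the FFT-based spectrogram stage that CMSIS-DSP accelerates on-device), here is a minimal Python sketch of the framing + windowing + FFT pipeline. This is a generic power spectrogram, not the exact Edge Impulse MFE implementation; the MFE/MFCC blocks additionally map the FFT bins onto a mel scale.

```python
import numpy as np

def power_spectrogram(signal, frame_len=256, hop=128):
    """Split the signal into overlapping frames, apply a Hann window,
    and FFT each frame -> a (n_frames, n_bins) power spectrogram."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        frames.append(np.abs(np.fft.rfft(frame)) ** 2)
    return np.array(frames)

# 100 ms of a 1 kHz tone sampled at 16 kHz:
t = np.arange(1600) / 16000.0
spec = power_spectrogram(np.sin(2 * np.pi * 1000 * t))
print(spec.shape)  # (11, 129): 11 frames, 129 frequency bins
```

The neural network then classifies these feature frames; on a Cortex-M the per-frame FFTs are the part CMSIS-DSP speeds up.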