Problem deploying audio classification model to Nicla Vision

Question/Issue:
I have an audio classification model that I built from a public dataset and am trying to use with the Nicla Vision.
I successfully built the model, and it achieves reasonable accuracy (~90%) when tested against the dataset's test data. I deployed the model as an Arduino library, and it built and ran successfully with the static buffer example using a test sample from the dataset.
I then tried the microphone and microphone_continuous examples. Both run, but neither returns a correct classification on live audio data (the model always classifies "noise").
I tried deploying a binary file, but that had the same result.

I then tried reflashing the Edge Impulse firmware and ran live classification with my model using the Nicla Vision microphone; that worked, so the microphone itself is fine. I also fed that live classification data into the static buffer example, and that worked too.

Can someone help me debug why classification fails when run on the MCU with the PDM microphone? There must be some mismatch in my setup. I tried performance calibration, but that did not help. I suspected build issues with the Arduino IDE (I get no errors, only warnings), but since the binary deployment fails in the same way, it seems like something more basic.

Is there a current list of Edge Impulse version dependencies (firmware, software) for the Nicla Vision? Is there an example audio project that I could deploy to test my hardware setup?

Project ID:
191688

Context/Use case:
Nicla Vision using onboard PDM microphone to classify audio data from a bee hive.

Try deploying the “Unoptimized (float32)” model.

Also see Joeri’s advice here.

I tried deploying the "Unoptimized (float32)" model; it changed the output somewhat, but I'm still not getting the desired result.

I also tried turning off the EON Compiler, but that model doesn't run (no error; it just never returns after starting inferencing).

One issue I found earlier was an audio buffer allocation error caused by the sampling rate of the WAV files (44.1 kHz). I found it by running "edge-impulse-run-impulse --debug" with the deployed binary model (without --debug, no error was shown). Reducing the sampling rate to 22.05 kHz fixed that error, but I'm still not getting the correct result.
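Since the sample-rate mismatch only surfaced with --debug, it can be worth verifying every training WAV before upload. A minimal sketch using only Python's standard-library wave module (the 22.05 kHz target and the function name are mine, not part of any Edge Impulse tooling):

```python
import wave

def check_sample_rate(path, expected_hz=22050):
    """Return True if the WAV file at `path` matches the sample rate
    the impulse expects (22.05 kHz in this project)."""
    with wave.open(path, "rb") as wav:
        return wav.getframerate() == expected_hz
```

Any file that comes back False would need resampling before being added to the dataset.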

Record some data using the Studio Data Acquisition page. Then listen to (and view the waveform of) what got recorded and see how it differs from the public dataset. I suspect you'll need some on-board processing on the Nicla to clean up the mic signal before inferencing.
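To make "clean up the mic" concrete: two cheap first steps are removing the DC offset and normalizing the peak level, so the live buffer sits in the same amplitude range as the training data. A rough sketch of the idea in Python (the function name and the plain-list buffer representation are illustrative, not part of the Edge Impulse SDK):

```python
def clean_buffer(samples):
    """Remove DC offset, then normalize peak amplitude to +/-1.0.
    `samples` is a list of floats standing in for a raw mic buffer."""
    mean = sum(samples) / len(samples)          # DC offset
    centered = [s - mean for s in samples]
    peak = max(abs(s) for s in centered) or 1.0  # avoid divide-by-zero on silence
    return [s / peak for s in centered]
```

On the Nicla itself, the same two steps would be done in the Arduino sketch on the PDM buffer before it is handed to the classifier.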

The other option is to add an equal number of samples recorded with your Nicla to your existing dataset.


For now I think adding samples recorded with your Nicla is the best path forward.

You can try some other techniques like data augmentation and synthetic generation of audio.

Data Augmentation & Synthetic Data
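If you go the augmentation route, the core idea fits in a few lines: make extra copies of each clip with a small circular time shift and low-level additive noise. This is only an illustration of the technique, not Studio's built-in augmentation; the function name and parameter values are made up:

```python
import random

def augment(samples, shift_max=100, noise_amp=0.005, seed=None):
    """Return an augmented copy of an audio clip: a random circular
    time shift plus low-level additive noise. Values are illustrative."""
    rng = random.Random(seed)
    shift = rng.randint(-shift_max, shift_max)
    # Circular shift keeps the clip length unchanged.
    shifted = samples[-shift:] + samples[:-shift] if shift else list(samples)
    return [s + rng.uniform(-noise_amp, noise_amp) for s in shifted]
```

Running each training clip through this a few times with different seeds gives the model more varied examples without new recordings, though it is no substitute for real samples captured with the Nicla's own microphone.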