Keyword spotting on Arduino Nano 33 BLE

I have been trying to solve this issue for a long time. I have an Arduino Nano 33 BLE (not the Sense variant), so there is no embedded microphone on it. Therefore, I am using an external mic, an SPW2430. I trained my model on Edge Impulse, but the generated code uses PDM. I tried to override those PDM functions, but it did not work. So I decided to write different code that uses timers and interrupts: when I press a button, it acquires the audio signal, puts those values into the inference buffer, and runs the model. But the problem is that my classification results are returned as NULL; as far as I understand, I could not even allocate them.

Hi @bulut,

There are a number of possible things that could be going wrong here:

If you pass debug = true as the third argument to run_classifier(&signal, &result, debug), does the extra output offer any insight into what is failing?

I recommend trying some intermediate steps to see what might be failing:

  1. Try classifying a static buffer filled with the raw sample values from a training or test sample. Do you run out of memory or does this work?
  2. Try recording 1 second (or whatever your keyword window is) of audio data with your code and printing the raw values to the serial terminal. If you copy and paste those values into your test from 1), does inference work the way you expect it to?
  3. If 2) doesn’t work, try pasting the values you recorded into a CSV file, uploading it to Edge Impulse as a test sample, and running inference there. Do you get the expected results?
  4. If 1-3 work, try doing a simple sequential test: record and store raw data in a buffer > perform inference > print inference results (or take some action based on those results)
  5. If 4) works, then move on to the continuous case of keyword spotting.

For the continuous case (5), my Wio Terminal example might help: https://github.com/ShawnHymel/ei-keyword-spotting/blob/master/embedded-demos/arduino/wio-terminal/wio-terminal.ino. I use timer interrupts and DMA to continuously fill a buffer with audio data while inference is being performed on a rolling window of audio data.

Hope that helps!