Is it possible to use a WAV file instead of continuous bluetooth input?

Hey everyone, so I am working on a voice command project.

I have a MAX9814 microphone module for the Arduino, and I am able to record audio with it and save it as a .WAV file. I used the Edge Impulse classifier to train a model that detects whenever I say “light on” or “light off”, and I deployed it as an Arduino library. However, looking through the examples, it seems that all of them use a BLE component, which is something I don’t have and probably cannot get.

Is it possible for me to use the .WAV files that I am able to save as input to the classifier that I deployed? To build and train the classifier I followed this tutorial, but as I said, I don’t have the BLE component it uses and cannot get one. If this isn’t possible with Edge Impulse, could someone please point me to a resource I can use for a .WAV classifier on the Arduino?

Hi @CraftedDoggo,

However, looking through the examples, it seems that all of them use a BLE component, which is something I don’t have and probably cannot get.

I think you’re mixing up a few things. We support the Arduino Nano 33 BLE Sense out of the box, but the Arduino library export does not depend on BLE at all. It runs on any Arm-based Arduino.

Is it possible for me to use the .WAV files that I am able to save as input to the classifier that I deployed?

Yes. I suggest you first work through this guide so you have the library deployed and can classify data: https://docs.edgeimpulse.com/docs/running-your-impulse-arduino

Then, you can feed your WAV file into this. Could you elaborate on how you get the data from the microphone? Do you have a buffer in memory with audio data, or is it stored as a file on an SD card or something?

If you’re able to load your audio data into an int16_t array, you can classify it like this:

#include &lt;your_project_inferencing.h&gt; // header from your exported Arduino library (name depends on your project)

int16_t audio[16000] = { 0 }; // <-- this should be filled with audio data

// Callback that converts a slice of the int16 buffer to float for the classifier
int audio_get_data(size_t offset, size_t length, float *out_ptr) {
    return numpy::int16_to_float(&audio[offset], out_ptr, length);
}

int main() {
    signal_t features_signal;
    features_signal.total_length = 16000;
    features_signal.get_data = &audio_get_data;

    ei_impulse_result_t result = { 0 };

    // invoke the impulse
    EI_IMPULSE_ERROR res = run_classifier(&features_signal, &result, true);

    // ... rest of the app
}
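
Once run_classifier returns you can check res and read the scores out of result, roughly like this (EI_CLASSIFIER_LABEL_COUNT and result.classification come from the generated library, and ei_printf is its logging helper):

if (res == EI_IMPULSE_OK) {
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("%s: %.5f\n", result.classification[ix].label, result.classification[ix].value);
    }
}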

Hi @janjongboom,

Thank you for your response. I will go ahead and try the classification method you described. The .WAV file is stored on an SD card that is connected to the Arduino.

@CraftedDoggo, in that case the chances are pretty high that the WAV file contains uncompressed PCM16 data (which is what we see most often on embedded systems). You can skip the first 44 bytes of the file (the WAV header); the bytes after that are already int16 samples, so you can read them straight into a buffer, e.g. with fread or the SD library, as sketched below.
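
A minimal sketch of loading the samples with the Arduino SD library (the file name, the assumption that SD.begin() was already called, and the fixed 16000-sample length are placeholders to adapt to your setup):

#include &lt;SD.h&gt;

int16_t audio[16000] = { 0 }; // same buffer as in the classification snippet above

// Read raw PCM16 samples from a WAV file on the SD card into the audio buffer.
// Assumes SD.begin(chipSelect) has already been called in setup().
bool load_wav_from_sd(const char *path) {
    File f = SD.open(path); // e.g. "command.wav"
    if (!f) {
        return false;
    }
    f.seek(44); // skip the 44-byte WAV header
    f.read((uint8_t *)audio, sizeof(audio)); // remaining bytes are int16 samples
    f.close();
    return true;
}

After that, the audio buffer can be passed to run_classifier through the same audio_get_data callback as above.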