Deploying Edge Impulse on Raspberry Pi for Audio Classification

I am working on a final project for my class, and one part of it is cough detection. I have a microphone connected to the Raspberry Pi that collects audio samples every 5 seconds. I was able to follow the tutorial https://docs.edgeimpulse.com/docs/running-your-impulse-locally and get the model running, but to run the model I need to input the raw features of the audio sample. I am wondering if there is any way I can get the raw features locally. Any help will be appreciated!

Hi @Chloe!

I don’t have a Pi with a microphone, but apparently arecord can record WAV files. I guess this would work to record 5 seconds of data at 16 kHz (if that’s the frequency your model operates on) to a WAV file:

arecord -D plughw:0,0 -f S16_LE -c1 -r16000 --duration=5 out.wav
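
Since the rest of your project is in Python, you could also drive this from Python by shelling out to arecord. A minimal sketch, assuming the same device name (plughw:0,0), which may differ on your Pi:

import subprocess

def record_clip(path, seconds=5, rate=16000):
    # Capture a mono, 16-bit little-endian clip via arecord.
    # 'plughw:0,0' is an assumption; check `arecord -l` for your device.
    subprocess.run([
        'arecord', '-D', 'plughw:0,0', '-f', 'S16_LE', '-c1',
        '-r' + str(rate), '--duration=' + str(seconds), path,
    ], check=True)

record_clip('out.wav')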

Then you can read the WAV file in Node.js with the wavefile module, with some code like this:

import { WaveFile } from 'wavefile';
import fs from 'fs';

let buffer = fs.readFileSync('out.wav');

// sanity check: a WAV file starts with the ASCII bytes 'RIFF'
if (buffer.slice(0, 4).toString('ascii') !== 'RIFF') {
    throw new Error('Not a WAV file, first four bytes are not RIFF but ' +
        buffer.slice(0, 4).toString('ascii'));
}

const wav = new WaveFile(buffer);
wav.toBitDepth('16');

const fmt = wav.fmt;

let freq = fmt.sampleRate;
// console.log('Frequency', freq);

// total number of samples across all channels
let totalSamples = wav.data.samples.length / (fmt.bitsPerSample / 8);

let dataBuffers = [];

// average all channels together so we end up with mono audio
for (let sx = 0; sx < totalSamples; sx += fmt.numChannels) {
    try {
        let sum = 0;

        for (let channelIx = 0; channelIx < fmt.numChannels; channelIx++) {
            sum += wav.getSample(sx + channelIx);
        }

        dataBuffers.push(sum / fmt.numChannels);
    }
    catch (ex) {
        console.error('failed to call getSample() on WAV file', sx, ex);
        throw ex;
    }
}

// now dataBuffers contains your data that you can classify.

Hi Jan @janjongboom, thank you so much for your reply, I appreciate it a lot. But I guess my question is: is there any way that I can process the data locally and then feed the raw features into the model to get a result locally with the current setup I have? Let me know what you think! Thanks a lot!

Hey, what do you mean by

But I guess my question is: is there any way that I can process the data locally and then feed the raw features into the model to get a result locally with the current setup I have?

The script above runs on the Raspberry Pi, so it already gets the raw features locally; you can then classify locally as well.

Thanks! I will try that and see what I get!

@janjongboom Hi Jan, one more question: is there a Python script that I can use to get the raw data? All the other parts of my project are in Python, so it would be nice if I could process the audio with a Python script as well. Thanks in advance!

Hi @Chloe,

The script @janjongboom provided above is a Node.js script that gathers the raw features of a .wav audio file and runs locally on your Raspberry Pi.

You can then follow this guide to classify the data within a Node.js script: https://docs.edgeimpulse.com/docs/through-webassembly

Instead of deploying your impulse as a C++ library, you will deploy your impulse as a WebAssembly library.

You can also utilize your WebAssembly library from within a Python3 web server: https://docs.edgeimpulse.com/docs/through-webassembly-browser
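
If you'd rather keep the orchestration in Python, one option (a sketch, not an official API) is to run the Node.js classifier as a subprocess and parse its output. This assumes you have wrapped the WebAssembly deployment from the guide above in a small Node script, here hypothetically called classify.js, that takes the comma-separated raw features as its argument and prints the result as JSON:

import json
import subprocess

def classify(features):
    # features: the raw audio samples, e.g. the dataBuffers array
    # produced by the Node.js script earlier in this thread
    raw = ','.join(str(round(f)) for f in features)
    # classify.js is a hypothetical wrapper around your WebAssembly library
    proc = subprocess.run(['node', 'classify.js', raw],
                          capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)

# example with 5 seconds of silence at 16 kHz; use your real samples instead
print(classify([0] * 80000))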

@Chloe, it looks like SciPy can read this directly, e.g.:

from scipy.io import wavfile
samplerate, data = wavfile.read('./output/audio.wav')

And I think that the data is already in the right format in that case for the inferencing library.
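
For a rough Python equivalent of the Node.js snippet above, a sketch building on the scipy example (wavfile.read returns a NumPy array, so no extra dependencies are needed), you can average multi-channel audio down to mono the same way:

from scipy.io import wavfile

samplerate, data = wavfile.read('./output/audio.wav')

# multi-channel audio comes back as a 2-D array (samples x channels);
# average the channels to get mono, mirroring the Node.js script
if data.ndim > 1:
    data = data.mean(axis=1)

features = data.tolist()
# features now holds the raw values you can feed to the classifier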

@janjongboom Hello, I just need a suggestion and some clarity about the data used for training and testing in audio recognition.

Currently I receive PCM audio samples, which I get after interfacing a PDM microphone with an nRF DK. I then created a Python script that generates .wav files from those PCM samples so I can listen to the audio, roughly like the sketch below.
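
(For reference, a minimal sketch of such a conversion using Python's standard wave module, assuming 16-bit mono PCM at 16 kHz; the actual parameters may differ:)

import wave

# pcm_bytes: the raw 16-bit little-endian PCM samples from the nRF DK
pcm_bytes = bytes(2 * 16000)  # placeholder: 1 second of silence

with wave.open('audio.wav', 'wb') as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(16000)  # 16 kHz sample rate
    w.writeframes(pcm_bytes)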

So for real-life testing in Edge Impulse, what type of data should I upload to recognize audio?

  1. Should I use the raw PCM data that I get from the development kit directly?

OR

  2. Should I generate .wav files from the PCM data and use those .wav files to recognize voice by passing them to Edge Impulse with a Python script? If yes, is there any script available to pass a .wav file to Edge Impulse for testing?

OR

  3. Should I read the data from the generated .wav file (mentioned in the 2nd option) using the scipy module, as you mentioned above?

Can you guide me on which data I should use to train and test?

Thanks in advance

Just replied here: Data format for audio recognition

Regards,

Louis

@Nikhil If you already have the data on a computer, just use the edge-impulse-uploader to upload it; the uploader converts it for you.
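
For example, something like this (the --category and --label flags are documented in the Edge Impulse CLI; adjust the label for your classes):

edge-impulse-uploader --category training --label cough *.wav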