Connection between Raspberry Pi 4 and Edge Impulse

Question/Issue: We are currently creating a project with some friends that uses Edge Impulse and a Raspberry Pi 4. We have the model, all of the data samples, and everything running when we type the command “edge-impulse-linux-runner”. However, we want to adjust how the output looks when the code is run from our project. We cannot figure it out for the life of us. It just continuously shows a loop of voice data, with spikes in accuracy whenever we say a keyword. I want the terminal to provide a clearer interface whenever one of us (there are three of us) speaks the key phrase, which in our case is the word “access”. What should we do? We are so lost.

Project ID: 178200

Context/Use case:

Hi @fubar,

You can use the .eim impulse file and SDK to create your own interface with the .eim runner operating in the background. There is some information here about how that works: Linux Python SDK - Edge Impulse Documentation. You can also find a Python example here that uses the .eim file to perform inference on a set of static features: https://github.com/edgeimpulse/linux-sdk-python-external/tree/master/python/examples/custom
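To give you a sense of the shape of such a program, here is a minimal, untested sketch of loading the .eim with the Python SDK and printing your own output (the modelfile.eim path and the hard-coded device_id are placeholders you would adjust for your setup):

from edge_impulse_linux.audio import AudioImpulseRunner

model_path = 'modelfile.eim'  # the .eim downloaded from your project

with AudioImpulseRunner(model_path) as runner:
    # init() returns the model metadata, including the list of output labels
    model_info = runner.init()
    labels = model_info['model_parameters']['labels']
    print('Loaded model with labels:', labels)

    # device_id selects which microphone to use (see the classify.py example
    # for how to enumerate audio devices and pick one)
    for res, audio in runner.classifier(device_id=1):
        scores = res['result']['classification']
        best = max(scores, key=scores.get)
        # This is where you print your own interface instead of the default loop
        print('%s: %.2f' % (best, scores[best]))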

If it helps, I have a full example here that shows how to continuously pull images from a Pi camera and use the .eim file on a RPi 4 to perform image classification: computer-vision-with-embedded-machine-learning/cnn-live-inference.py at master · ShawnHymel/computer-vision-with-embedded-machine-learning · GitHub

Thank you @shawn_edgeimpulse! By any chance, does this also work for audio? The project I’m doing deals with voice recognition, not images, unfortunately. I may have seen that tutorial about webcam usage and how it can be ported over with .eim, but I don’t know how to do the same for voice.

@shawn_edgeimpulse I forgot to include this in my previous reply, but the GitHub link that goes to the static-features example is invalid and will not open on my end.

Hi @fubar,

Apologies, try this link: linux-sdk-python/classify.py at master · edgeimpulse/linux-sdk-python · GitHub. That is an audio example as well that uses the .eim runner.

Thank you @shawn_edgeimpulse,

I will try this out once I’m back home from work. You said that the GitHub file includes audio detection using the runner. Is that the link you just gave me, or the one from the original post?

Hi @fubar,

The new link: it’s an example under the “audio” section in that repository.

Hi @shawn_edgeimpulse,
To use this code, would I just plug it straight into my Raspberry Pi’s terminal? I don’t really know where any of the Edge Impulse SDK pieces go beyond the edge-impulse-linux-runner command.

Hi @fubar,

The code I linked to is an example of a Python program that uses the Edge Impulse Python library.

From the terminal, you would first install the dependencies and the edge_impulse_linux library:

sudo apt-get install libatlas-base-dev libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev libopenjp2-7 libgtk-3-0 libswscale-dev libavformat58 libavcodec58
pip3 install edge_impulse_linux -i https://pypi.python.org/simple

Then, download the repository:

git clone https://github.com/edgeimpulse/linux-sdk-python

Go to the audio example and download the keyword spotting model from your project.

cd linux-sdk-python/examples/audio
edge-impulse-linux-runner --download modelfile.eim

Then, assuming you have a microphone plugged in that is supported (e.g. a USB microphone), you should be able to run the program as follows:

python classify.py modelfile.eim

The script should select a microphone for you and start classifying audio input (e.g. keyword spotting).
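If the script picks the wrong microphone, you can list the available input devices and their indices and then pass the index you want as the device_id argument to runner.classifier(). A quick, untested snippet, assuming PyAudio is available (the portaudio packages installed above are there to support it):

import pyaudio

p = pyaudio.PyAudio()
for i in range(p.get_device_count()):
    info = p.get_device_info_by_index(i)
    # Only input-capable devices can be used as microphones
    if info.get('maxInputChannels', 0) > 0:
        print(i, info.get('name'))
p.terminate()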

Thank you @shawn_edgeimpulse,

Will I have to change much beyond the code given? I’m currently running this code in the Pi’s terminal. All I really want is to detect the keyword that my group and I have audio samples of, and then, when the Pi picks it up, have the terminal say who is currently talking.

One more question @shawn_edgeimpulse,

The UI on the terminal now looks a lot cleaner than before. If I want the terminal to display a message whenever the user speaks and the model recognizes it, how would I go about doing so?

My apologies for the string of questions, but we are quite confused since there aren’t many tutorials that use a Raspberry Pi 4.

Hi @fubar,

Look for the part where you see:

for res, audio in runner.classifier(device_id=selected_device_id):
    print('Result (%d ms.) ' % (res['timing']['dsp'] + res['timing']['classification']), end='')
    for label in labels:
        score = res['result']['classification'][label]
        print('%s: %.2f\t' % (label, score), end='')
    print('', flush=True)

Change that to something like the following (note that I have not tested it):

target_label = "access" # Or whatever label you want to look for
threshold = 0.6

for res, audio in runner.classifier(device_id=selected_device_id):
    score = res['result']['classification'][target_label]
    if score > threshold:
        print(f"Heard {target_label} with confidence {score}")
        print("", flush=True)

Oh okay, @shawn_edgeimpulse, that makes some sense. Would I change target_label to a name within my project, or is it treated as its own variable?

Hi @shawn_edgeimpulse, I’m also one of the members working on this project. For target_label, there is no label called “access” within our samples. We just want it so that, as soon as we say the word “access”, it performs the commands in the lines after it.

The target_label Shawn referred to must match one of the output labels in your project: in Studio, look at the Impulse Design section, under the Output Features block.
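If you want to double-check from the Pi itself, a quick (untested) snippet like this should print the exact label strings that the downloaded .eim exposes; target_label has to be one of them:

from edge_impulse_linux.audio import AudioImpulseRunner

with AudioImpulseRunner('modelfile.eim') as runner:
    model_info = runner.init()
    # These are the only values target_label can take
    print(model_info['model_parameters']['labels'])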


Oh okay, that makes sense. I have currently disabled the noise and unknown labels in my project; would I just redownload the modelfile.eim to apply the changes?
@shawn_edgeimpulse @MMarcial

It’s now giving us an “unsupported aarch64” error when I try to run a new --download modelfile.eim.

Just redownload the modelfile.eim to implement the changes AFTER (the commands are sketched below):

  • executing all of the sections in the Impulse Design section, and then
  • going to the Deployment page and building the Linux (AARCH64) ready-to-go binary.
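For example (untested; run from the directory where classify.py lives, and note that --clean simply forces the runner to re-select which project it pulls from):

cd linux-sdk-python/examples/audio
edge-impulse-linux-runner --clean --download modelfile.eim
python classify.py modelfile.eim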

Assuming you rebuilt the Impulse and built the binary and are still getting the unsupported aarch64 error, then something else must have changed besides the EIM file. My advice is to check the engineering logbook you are using to document this task (a very important habit if you are ever going to file for a patent). Then start rolling back changes until you get to a working version, and then move forward again, carefully documenting each change.

Perhaps a screenshot of the unsupported aarch64 error might help.

Check these forum posts here and here.


Hi @MMarcial, this is a screenshot of the error we got.