How to use WAV files on a deployed impulse on a Linux box


If I have a trained impulse that processes audio data, I can deploy it, e.g. as an EIM, to a Linux (or macOS) machine. Currently, when I run the deployed impulse it accesses the microphone on my Linux box and processes the stream of inputs. However, I’d like the ability to send a set of .WAV files into the deployed impulse instead. This is hinted at in Is it possible to use a WAV file instead of continuous bluetooth input? but that example was for Bluetooth and Arduino.

Instead, what I’d ideally like is a bit of C or Python code that reads the .WAV files on my Linux box and runs them locally through the trained impulse, printing out the predictions. There is an audio example in the Linux Python SDK, but again it seems geared towards receiving input from the Linux box’s microphone rather than from .WAV files.

Many thanks in advance

Hi @rasanderson

While the existing examples focus on real-time input, adapting them for file-based input shouldn’t be too complex. You can use a library like PyDub or scipy (or even the standard library’s wave module) to read the .WAV files and feed the samples into your deployed model for inference.
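A minimal sketch of that approach, using only the standard library’s `wave` module to load a file into the flat list of samples the SDK expects. The `ImpulseRunner` usage at the bottom is commented out and is an assumption based on the Linux Python SDK examples — the model filename, window size lookup, and result keys are placeholders you’d adapt to your own deployment:

```python
import struct
import wave


def load_wav_as_features(path, expected_rate=16000):
    """Read a mono 16-bit PCM .WAV file and return its samples as a flat list.

    The sample rate must match what the impulse was trained on
    (16 kHz is assumed here as the default).
    """
    with wave.open(path, "rb") as wf:
        if wf.getnchannels() != 1:
            raise ValueError("expected a mono WAV file")
        if wf.getsampwidth() != 2:
            raise ValueError("expected 16-bit PCM samples")
        if wf.getframerate() != expected_rate:
            raise ValueError(f"expected {expected_rate} Hz, got {wf.getframerate()}")
        frames = wf.readframes(wf.getnframes())
    # Unpack little-endian signed 16-bit samples into plain Python ints.
    return list(struct.unpack(f"<{len(frames) // 2}h", frames))


# Hypothetical usage with the Linux Python SDK's ImpulseRunner
# ("modelfile.eim" and "sample.wav" are placeholder paths):
#
# from edge_impulse_linux.runner import ImpulseRunner
#
# runner = ImpulseRunner("modelfile.eim")
# model_info = runner.init()
# window = model_info["model_parameters"]["input_features_count"]
# features = load_wav_as_features("sample.wav")
# res = runner.classify(features[:window])
# print(res["result"]["classification"])
# runner.stop()
```

You could loop that over a directory of .WAV files, slicing each one into `window`-sized chunks, to batch-classify your recordings.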

The Linux Python SDK would be my choice here :smiley:



Hi @Eoin
Great - I’ll try the Python SDK as you suggest