Hello
If I have a trained impulse that processes audio data, I can deploy it, e.g. as an EIM, to a Linux (or macOS) machine. Currently, when I run the deployed impulse it accesses the microphone on my Linux box and processes the input stream. However, I'd like the ability to feed a set of .WAV files into the deployed impulse. This is hinted at in "Is it possible to use a WAV file instead of continuous bluetooth input?", but that example was for Bluetooth and Arduino.
Instead, what I'd ideally like is a bit of C or Python code that loads the .WAV files from my Linux box, runs them through the trained impulse locally, and prints out the predictions. There is an audio example in the Linux Python SDK, but again it seems geared toward reading input from the Linux box's microphone rather than from .WAV files.
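Roughly, I'm imagining something like the sketch below. It assumes the edge_impulse_linux Python package and a downloaded .eim model file; the file paths, the command-line interface, and the "one window per file" truncation are just placeholders I made up, not anything from the docs:

```python
# Sketch: classify local .WAV files with a deployed Edge Impulse model (.eim)
# instead of reading from the microphone. Assumes: pip install edge_impulse_linux,
# and mono 16-bit PCM WAV files. Paths/CLI shape are hypothetical.
import struct
import sys
import wave


def wav_to_features(path):
    """Read a mono 16-bit PCM WAV file and return its samples as a list of
    ints -- the raw feature format ImpulseRunner.classify() expects."""
    with wave.open(path, "rb") as wf:
        assert wf.getnchannels() == 1, "expects mono audio"
        assert wf.getsampwidth() == 2, "expects 16-bit PCM"
        frames = wf.readframes(wf.getnframes())
    # Unpack little-endian signed 16-bit samples.
    return list(struct.unpack("<%dh" % (len(frames) // 2), frames))


if __name__ == "__main__" and len(sys.argv) >= 3:
    # Hypothetical usage: python classify_wavs.py model.eim one.wav two.wav
    from edge_impulse_linux.runner import ImpulseRunner

    runner = ImpulseRunner(sys.argv[1])
    model_info = runner.init()
    # Number of raw samples the impulse expects per classification window.
    window = model_info["model_parameters"]["input_features_count"]
    try:
        for wav_path in sys.argv[2:]:
            # Simplification: classify only the first window of each file;
            # a real script might slide a window across the whole file.
            features = wav_to_features(wav_path)[:window]
            res = runner.classify(features)
            print(wav_path, res["result"]["classification"])
    finally:
        runner.stop()
```

Is something along these lines the intended way to do it, or is there a better-supported path for file-based inference?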
Many thanks in advance
Roy