How do I add a preprocessing technique when bringing my own model?


I have a trained TF-Lite model that classifies sounds based on the Mel-Spectrograms of the sound inputs. For this, I have Python code that preprocesses the raw audio data by converting it into a normalised Mel-Spectrogram. How do I integrate this preprocessing step with my pretrained model when deploying?
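For reference, this kind of pipeline is typically structured as follows. This is a minimal NumPy sketch, not the poster's actual librosa code (in librosa the rough equivalent is `librosa.feature.melspectrogram` followed by `librosa.power_to_db`); all parameter values are illustrative:

```python
import numpy as np

def hz_to_mel(f):
    # HTK-style mel conversion
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel scale
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):
            if center > left:
                fb[i, k] = (k - left) / (center - left)
        for k in range(center, right):
            if right > center:
                fb[i, k] = (right - k) / (right - center)
    return fb

def log_mel_spectrogram(audio, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Frame the signal, window it, take |FFT|^2, apply the mel
    # filterbank, log-compress, then normalise (as the poster describes)
    window = np.hanning(n_fft)
    n_frames = 1 + (len(audio) - n_fft) // hop
    frames = np.stack(
        [audio[i * hop:i * hop + n_fft] * window for i in range(n_frames)]
    )
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T
    log_mel = 10.0 * np.log10(np.maximum(mel, 1e-10))
    # Zero mean / unit variance normalisation
    return (log_mel - log_mel.mean()) / (log_mel.std() + 1e-8)
```

Whatever the implementation, the key point for deployment is that this exact transform (same FFT size, hop, mel count, and normalisation) has to be reproduced on-device before inference.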

Thank you.

Hello @shossain,

Which deployment type do you want to use: the C++ library, or a Linux .eim?

And what’s your current implementation of the mel-spectrogram algorithm? (Python? C++? Other?)


Thank you @louis for your prompt response.

I wish to use multiple deployment platforms for experimentation purposes, such as the C++ library, Linux, and macOS.

My current Mel-Spectrogram implementation is in Python (using a package called ‘librosa’).

Thanks for the info @shossain,

So for Linux-based deployment, if you want to stay within a Python environment, you can use our Linux Inferencing Python SDK, which works with the downloaded Linux .eim file.

Just pre-process the features from your raw data exactly as you do before training your model, and then pass those features to the run_classifier function.
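As a sketch of what that looks like with the Linux Python SDK (package `edge_impulse_linux`): the `.eim` path and result handling below are assumptions, and in the Python SDK the call is exposed as `ImpulseRunner.classify` rather than a literal `run_classifier`. The import is guarded so the sketch can be read without the SDK installed:

```python
import numpy as np

def to_feature_list(log_mel):
    # The runner expects a flat 1-D list of floats whose length matches
    # the model's expected input size exactly.
    return log_mel.astype(np.float32).flatten().tolist()

# Illustrative use of the Linux Python SDK; guarded import so this file
# still loads in environments without edge_impulse_linux.
try:
    from edge_impulse_linux.runner import ImpulseRunner
except ImportError:
    ImpulseRunner = None

def classify_with_eim(eim_path, features):
    # eim_path is a placeholder, e.g. "./my-sound-model.eim"
    runner = ImpulseRunner(eim_path)
    try:
        runner.init()  # loads the model and returns its metadata
        res = runner.classify(features)
        # Typical result shape: a dict of label -> score
        return res["result"]["classification"]
    finally:
        runner.stop()
```

So the pipeline is: raw audio → your own mel-spectrogram preprocessing → `to_feature_list` → the runner.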

For the C++ implementation, that’s a bit trickier:
You can have a look at this docs section Bring your own model (BYOM) - Edge Impulse Documentation

  • First, I’d suggest having a look at our own DSP C++ implementation to see whether it can match your needs (maybe our MFE block can be set up to work like your pre-processing). If so, just use the functions we already have; this ensures you leverage the hardware acceleration and the optimizations.

  • If that does not work for you, you will need to implement the custom C++ code yourself, like we do for custom DSP blocks: Building custom processing blocks - Edge Impulse Documentation

I hope that helps.



Also, from an internal discussion with @ivan, he mentioned:
The number of features the DSP function outputs should exactly match the number of features the BYOM model expects.
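A cheap way to enforce that in a Python pipeline is a guard before inference. A sketch, where the expected count is whatever input size your model reports (2440 below is purely illustrative, e.g. a 61-frame × 40-mel spectrogram flattened):

```python
def check_feature_count(features, expected):
    # Fail fast with a clear message instead of getting a cryptic
    # runtime error from the inferencing SDK.
    if len(features) != expected:
        raise ValueError(
            f"DSP produced {len(features)} features, "
            f"but the model expects {expected}"
        )
    return features

# e.g. a 61 x 40 log-mel spectrogram flattened -> 2440 values
check_feature_count([0.0] * 2440, 2440)
```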


Thank you @louis for the guidance. Will try these out.

Dear @louis , I have one more question.
If I wish to deploy as a C++ library and use the existing (predefined) MFE block from Edge Impulse, how do I pass the features to the model for classification, just like in the example you provided for the Linux deployment (which worked!)?
Also, when I upload my own pretrained model and deploy it as a C++ library, as shown in On your desktop computer - Edge Impulse Documentation, I noticed that the generated zip file does not contain the “source” folder where “main.cpp” should be. Is this OK, or does my deployment have a potential bug?

Thank you again.

Hello @shossain,

You need to clone the source repository as mentioned here: