I have a trained TF-Lite model that classifies sounds based on the Mel-spectrograms of the audio inputs. I have Python code that preprocesses the raw audio by converting it into a normalised Mel-spectrogram. How do I integrate this preprocessing step with my pretrained model when deploying?
For a Linux-based deployment, if you want to stay within a Python environment, you can use our Linux Inferencing Python SDK, which works with the downloaded Linux .eim model format.
Just pre-process the features from your raw data as you did before training your model, and then pass those features to the run_classifier function:
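For reference, a minimal sketch of that flow, assuming the `edge_impulse_linux` Python SDK; `extract_mel_spectrogram()` is a hypothetical stand-in for your own preprocessing and should return the features as a flat list of floats (the SDK exposes inference through `ImpulseRunner.classify()`):

```python
from edge_impulse_linux.runner import ImpulseRunner

# Hypothetical helper: your own preprocessing that turns raw audio
# into a flattened, normalised Mel-spectrogram (a flat list of floats).
from my_preprocessing import extract_mel_spectrogram

runner = ImpulseRunner('path/to/model.eim')
try:
    model_info = runner.init()          # load the model and read its parameters
    features = extract_mel_spectrogram('sample.wav')
    result = runner.classify(features)  # run inference on the preprocessed features
    print(result['result']['classification'])
finally:
    runner.stop()
```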
First I'd suggest having a look at our own DSP C++ implementation to see if it can match your needs (maybe our MFE block can be set up to work like your pre-processing); if so, just use the functions that we already have. This ensures you leverage the HW acceleration and the optimizations.
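As a point of comparison, here is a rough sketch of what an MFE-style block computes (log-scaled Mel filterbank energies), assuming librosa; the frame length, stride, and filter count are placeholder parameters to tune against your own preprocessing, and this approximates rather than reproduces Edge Impulse's exact implementation:

```python
import librosa
import numpy as np

def mfe_like_features(path, frame_length=0.032, frame_stride=0.016, n_mels=40):
    """Approximate an MFE block: log Mel filterbank energies per frame."""
    y, sr = librosa.load(path, sr=16000)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr,
        n_fft=int(frame_length * sr),
        hop_length=int(frame_stride * sr),
        n_mels=n_mels,
    )
    # Log-scale and flatten frame-by-frame into the 1-D feature vector
    return np.log10(mel + 1e-10).T.flatten().tolist()
```

If this lines up with your normalised Mel-spectrogram (up to scaling), the predefined MFE block is likely a drop-in replacement for your Python preprocessing.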
Also, from an internal discussion with @ivan, he mentioned:
The number of features the DSP function outputs should exactly match the number of features the BYOM model expects
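A quick way to verify that locally, assuming the same Python SDK as above and that the .eim model parameters expose `input_features_count`:

```python
from edge_impulse_linux.runner import ImpulseRunner

runner = ImpulseRunner('path/to/model.eim')
model_info = runner.init()
expected = model_info['model_parameters']['input_features_count']

features = extract_mel_spectrogram('sample.wav')  # your preprocessing, as above
assert len(features) == expected, (
    f"preprocessing produced {len(features)} features, model expects {expected}"
)
runner.stop()
```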
Dear @louis, I have one more question.
If I wish to deploy as a C++ library and use the existing (predefined) MFE block from Edge Impulse, how do I pass the features to the model for classification, just like the example you provided for the Linux deployment (which worked!)?
Also, when I upload my own pretrained model and deploy it as a C++ library, as shown in On your desktop computer - Edge Impulse Documentation, I noticed that the generated zip file does not contain the "source" folder where the "main.cpp" should be. Is this ok, or does my deployment have a potential bug?