New signal processing performance metrics

At Edge Impulse we believe that signal processing is key to embedded Machine Learning, and we try to leverage decades of industry knowledge around signal processing before ever applying a machine learning model. To help developers choose better parameters for their signal processing blocks, and to show whether a model will fit within their application’s latency and memory constraints, we now introduce real-time performance metrics for all processing blocks in the Edge Impulse Studio.


This is a companion discussion topic for the original entry at https://www.edgeimpulse.com/blog/signal-processing-performance

@mathijs I’m really excited to see this feature deployed, as I know that in some cases (e.g. audio processing), the DSP time can be worse than the actual ML inference time.

I do have a question about the naming. I’m looking at the “On-device performance” metrics in the processing block for my motion detection project. It shows “inferencing time”: does that include the actual time it takes to perform inference (the forward pass in the ML model), or just the DSP time (feature extraction prior to inferencing)? Same with peak RAM usage: does that include inference or just feature extraction (processing)?

If it’s just the DSP time prior to inference, could I request the name be changed to something else, such as “DSP time,” “feature extraction time,” or “processing time”? I’d like to make sure I’m reading it correctly and keeping it separate from the “latency” time reported on the Deployment page (which, as I understand it, is only the inference time and does not include the DSP/processing time, right?).

Hey @ShawnHymel, on the DSP screen the performance figures (time and peak RAM) cover just the DSP operations, and on the NN screen they cover just the NN operations. Regarding RAM usage: by default (you can disable this) we try to use the heap as much as possible for all operations, so look at the peak RAM usage on both the DSP and NN screens and take the higher of the two; that’s roughly the amount of RAM you’ll need (see the sketch below).

At some point we’ll unify this in the Deployment page so you’ll have one overview of the complete algorithm.
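To make that rule of thumb concrete, here is a minimal sketch of the arithmetic in C++. The numbers and names are made up for illustration and are not Edge Impulse SDK APIs; it only shows the sum-the-times, max-the-RAM heuristic described above:

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical figures, as read from the Studio's
    // "On-device performance" panels on each screen.
    const int dsp_time_ms    = 12;   // DSP screen: processing time
    const int nn_time_ms     = 5;    // NN screen: inferencing time
    const int dsp_peak_ram_b = 4096; // DSP screen: peak RAM usage (bytes)
    const int nn_peak_ram_b  = 2048; // NN screen: peak RAM usage (bytes)

    // The stages run one after the other, so the per-window latency
    // is the sum of both stages...
    const int total_latency_ms = dsp_time_ms + nn_time_ms;

    // ...but because the heap is reused between stages, the RAM budget
    // is roughly the larger of the two peaks, not their sum.
    const int ram_budget_b = std::max(dsp_peak_ram_b, nn_peak_ram_b);

    printf("Estimated latency: %d ms, RAM budget: ~%d bytes\n",
           total_latency_ms, ram_budget_b);
    return 0;
}
```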

> could I request the name be changed to something else, such as “DSP time,” “feature extraction time,” or “processing time”?

Great idea, will do that.
