On the “analyze optimizations” tool during Deployment, I would love to see something that reports feature extraction time (latency) in addition to inference latency. Some features (like MFCCs) can take hundreds of milliseconds to compute, whereas NN inference only takes a few milliseconds, so the total time to perform inference can be much longer than what’s reported.
Not sure if it’s possible to test/report something like that, but it would be very helpful in figuring out which microcontroller I should use.
Thanks for your feedback. It’s a good point, since feature extraction can take more or less time depending on your target. We have some performance metrics available here: https://docs.edgeimpulse.com/docs/inference-performance-metrics
We could link to this user guide from the deployment section and/or add an estimate for a specific target (e.g. a Cortex-M4F).