Which inference engine does Edge Impulse use under the hood for the files it generates for model inference?

It depends on the hardware. For general-purpose MCUs we typically use the EON Compiler with TFLite Micro kernels (including hardware optimizations, e.g. via CMSIS-NN); on Linux, if you run on the CPU, we use TensorFlow Lite. For accelerators we use a wide variety of other runtimes: a network hardcoded in silicon for Syntiant, a custom SNN-based inference engine for BrainChip Akida, DRP-AI for the Renesas RZ/V2L, and so on.
