Introducing EON: neural networks in up to 55% less RAM and 35% less ROM

At Edge Impulse we enable developers to build and deploy machine learning models that run on embedded devices: machines that detect when they're about to break, devices that can hear water leaks, camera traps that spot elephant poachers. Memory is very scarce on many of these devices (a typical device might have less than 128K of RAM), so we're happy to announce our new Edge Optimized Neural (EON™) Compiler, which lets you run neural networks in 25-55% less RAM and up to 35% less flash than TensorFlow Lite for Microcontrollers, while retaining the same accuracy.
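To put those percentages in perspective, here is a quick back-of-the-envelope sketch. The 128 KB baseline below is a hypothetical model that fills the typical RAM budget mentioned above; only the 25-55% savings range comes from the announcement itself.

```python
def footprint_after_savings(baseline_kb: float, savings_pct: float) -> float:
    """Remaining memory footprint after an X% reduction (illustrative only)."""
    return baseline_kb * (1 - savings_pct / 100)

# Hypothetical baseline: a model using the full 128 KB RAM budget of a typical device.
baseline_ram_kb = 128.0

best_case = footprint_after_savings(baseline_ram_kb, 55)   # 55% less RAM
worst_case = footprint_after_savings(baseline_ram_kb, 25)  # 25% less RAM
print(f"RAM after EON: {best_case:.1f}-{worst_case:.1f} KB (was {baseline_ram_kb:.0f} KB)")
```

In other words, a model that would otherwise exhaust a 128 KB device could drop to roughly 58-96 KB, leaving headroom for the application itself.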

This is a companion discussion topic for the original entry at

This is a great improvement.