The new EON-based CMSIS-PACK for STM32

Hi all,

The CMSIS-PACK deployment option is one of the most popular ways to deploy your impulse, and today we’ve released a new version of our CMSIS-PACKs for STM32 that will hopefully make your life even easier. The biggest change is that the pack is now based on EON, just like most other deployment options. For most people nothing will change (deploy as usual, add the pack as usual), but if you’re interested in the new things we can now do: read on!

In the past this CMSIS-PACK was based around STM32Cube.AI, which meant we were dependent on ST for features, updates, and bug fixes. By replacing the inferencing engine with EON we can now ship source code (there's not a single binary component in your CMSIS-PACK anymore!), run on any ST Cortex-M MCU (instead of just the M4F, M7, or M33), and iterate on new features much faster (we're working on new kernels, new neural network architectures, and new quantization techniques, and we can now bring these to this pack immediately).

And what about performance? You should see no difference in inferencing time: we still use CMSIS-NN and CMSIS-DSP to accelerate your DSP and ML code, so this will run about as fast as before. Your memory / flash footprint might even go down thanks to the better memory planning in EON. In addition you'll now have access to the various memory allocation schemes in EON, which can bring real memory usage down even more.
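As a sketch of how the allocation scheme is selected: in Edge Impulse's C++ SDK this is a compile-time choice made with a preprocessor define before the classifier header is included. The macro names below are the ones documented by Edge Impulse, but treat them as assumptions and verify them against the headers in your own exported pack.

```cpp
// Hedged config sketch: choose EON's tensor-arena allocation scheme at
// compile time. Verify these macro names against your exported pack's headers.
#define EI_CLASSIFIER_ALLOCATION_STATIC 1   // place the arena in static RAM
// #define EI_CLASSIFIER_ALLOCATION_HEAP 1  // ...or allocate it from the heap

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"
```

Static allocation makes RAM usage visible at link time, which is handy when you're sizing a small STM32 part.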

To get started, just head to the Studio, click Deployment, and select the new CMSIS-PACK! Docs are here: https://docs.edgeimpulse.com/docs/using-cubeai, and if you have any questions, just let us know here. :rocket:
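Once the pack is in your project, inference goes through the SDK's `run_classifier()` entry point, which reads your sensor data through a `signal_t` callback. Here's a self-contained sketch of that pattern; the `signal_t` struct below is a simplified stand-in for the SDK's real type (declared in `edge-impulse-sdk/dsp/numpy_types.h`), and the three-element buffer is purely hypothetical.

```cpp
#include <cstddef>

// Simplified stand-in for the SDK's signal_t; it mirrors only the shape the
// classifier needs: a total length plus a paged read callback.
typedef struct {
    size_t total_length;
    int (*get_data)(size_t offset, size_t length, float *out_ptr);
} signal_t;

// Hypothetical feature buffer: in real firmware this would hold raw sensor
// readings sized to your impulse's input frame.
static float features[3] = {0.1f, 0.2f, 0.3f};

// Paged read callback: the SDK pulls slices of the buffer on demand, so the
// whole signal never has to sit in RAM at once.
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    for (size_t i = 0; i < length; i++) {
        out_ptr[i] = features[offset + i];
    }
    return 0;
}

// Wire up the signal; with the real SDK you would then pass it to
// run_classifier(&signal, &result, false) and read the result struct.
static signal_t make_signal() {
    signal_t signal;
    signal.total_length = sizeof(features) / sizeof(features[0]);
    signal.get_data = &get_feature_data;
    return signal;
}
```

The callback-based design is what lets EON's memory planner keep the working set small on memory-constrained Cortex-M parts.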

P.S.: There's no ST-specific code in the pack anymore, so we'd love to hear from someone with an MDK or IAR license whether they can import the pack there.


This is great. Will EON be open sourced in the future?

Yeah, we're planning to open source it once we've updated to the latest TFLM ops.