Edge Impulse C++ library as a MicroPython native module

Question/Issue:
I am trying to integrate my Edge Impulse inference engine (C++ library + compiled model) directly into custom MicroPython firmware for the Raspberry Pi Pico. However, when I build the firmware with the inference library included, the final .uf2 file becomes too large (~1.9 MB) and the board does not boot after flashing — there is no serial output and the device does not enumerate over USB.


Context/Use case:

The goal is to provide an integrated firmware that includes both the MicroPython interpreter and the native inference engine as a module.


Steps Taken:

  1. Built the inference library (ei_inference) in C++ with the Pico SDK — standalone version works fine.
  2. Verified the compiled model runs correctly on the Pico with a native C++ application.
  3. Tried to compile a custom MicroPython build using the official MicroPython port for RP2040, adding the ei_inference static library and model as a custom C module.
  4. The firmware compiled successfully but produced a .uf2 file ~1.9 MB in size.
  5. After flashing, the board does not enumerate over USB, no serial output appears, and the device appears bricked until reflashed with standard MicroPython.

Expected Outcome:
The Pico should boot into MicroPython with the custom inference module available (import ei_inference) so students can run inference calls directly from Python scripts.
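For context, this is roughly what a student script would look like. The module name comes from the build above, but the classify() call and its dict-of-scores return value are my assumptions about the API, not a confirmed interface; the try/except lets the same script run on an interpreter that lacks the module:

```python
# Hypothetical usage of the custom ei_inference module from a student
# script. classify() and its {label: confidence} return are assumptions.
try:
    import ei_inference  # only present in the custom firmware build
    HAVE_MODULE = True
except ImportError:
    HAVE_MODULE = False  # e.g. plain MicroPython or desktop Python

if HAVE_MODULE:
    # One window of raw accelerometer data, e.g. 125 samples x 3 axes.
    window = [0.0] * 375
    scores = ei_inference.classify(window)
    print("prediction:", max(scores, key=scores.get))
else:
    print("ei_inference not built into this interpreter")
```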


Actual Outcome:
The device does not boot at all.
No serial device appears, and no REPL output.
The Pico must be reflashed with plain MicroPython to work again.


Reproducibility:
  • [x] Always
  • [ ] Sometimes
  • [ ] Rarely


Environment:
Platform: Raspberry Pi Pico (RP2040)
Build Environment Details:

  • Pico SDK v1.5.1
  • Custom MicroPython build using mpy-cross and make on macOS 13
  • Edge Impulse C++ library and compiled model embedded as a static library in the MicroPython build

OS Version: macOS 13.5.2 (also tested on Ubuntu 22.04)

Custom Blocks / Impulse Configuration:

  • 3-axis raw accelerometer data, small NN classifier, compiled with CMSIS-NN and TFLM micro kernels.
  • No custom DSP blocks.

I want to confirm:

  • Is this approach feasible for the Pico (2 MB flash limit)?
  • Any tips for reducing size (e.g., exclude unused modules, shrink model, other linker flags)?

Hi @not-kronox101

Great to hear you are using us to teach your course! Can you share any details about it with us? You can PM me and we can see if we have any additional resources to help.

Let's try some model optimization steps first.

Can you share the project ID? As a first step to reduce the footprint, I would like to check whether you have int8 quantization and the EON Compiler enabled.

There are also a number of flags we can try enabling to reduce the footprint, but I want to check with the embedded team which ones make sense for your usage:

Flash size limitations

From what I can find, there is a binary size limit of about 1.5 MB that your build is exceeding, and that's why your board gets into a bad state:
PICO_FLASH_BINARY_SIZE_LIMIT = 0x180000 (1,536 KiB)
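For anyone following along, the arithmetic works out like this (a quick sanity check of the reported ~1.9 MB build against that limit, nothing Pico-specific):

```python
# Sanity-check the reported firmware size against the Pico binary limit.
PICO_FLASH_BINARY_SIZE_LIMIT = 0x180000          # bytes
limit_kib = PICO_FLASH_BINARY_SIZE_LIMIT / 1024  # 1536.0 KiB

uf2_size = int(1.9 * 1024 * 1024)                # the ~1.9 MB build

print(limit_kib)                                   # 1536.0
print(uf2_size > PICO_FLASH_BINARY_SIZE_LIMIT)     # True: over the limit
```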

You can try increasing this, but I need to ask for advice on it first.

Best

Eoin

Hi Eoin,

Thank you for the feedback and suggestions! I've made progress with a workaround: I compiled a custom MicroPython build for x86/ARM to validate the integration of the Edge Impulse module, and I'm using the Pico solely as a sensor input device that streams data over serial. I compiled the Edge Impulse model as a static library to reduce compilation errors, though I still hit some linker errors (missing symbols) when linking against MicroPython. The Pico's flash size remains a constraint for standalone deployment.
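To illustrate the workaround, here is a minimal host-side sketch, assuming the Pico prints one "x,y,z" CSV line per accelerometer sample over USB serial. The port name, baud rate, and the pyserial dependency are assumptions about my setup, and the actual inference call on the host is omitted:

```python
def parse_sample(line):
    """Parse one 'x,y,z' CSV line streamed from the Pico into floats."""
    return [float(v) for v in line.strip().split(",")]

def stream_samples(port="/dev/ttyACM0", baud=115200):
    """Yield accelerometer samples read from the Pico over USB serial."""
    import serial  # pyserial: pip install pyserial
    with serial.Serial(port, baud, timeout=1) as s:
        while True:
            raw = s.readline().decode(errors="ignore")
            if raw.strip():
                yield parse_sample(raw)

if __name__ == "__main__":
    # Offline check of the parser with a sample line:
    print(parse_sample("0.12,-0.98,9.81"))  # [0.12, -0.98, 9.81]
```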

To address your questions:

  • Project ID: I'll share it via PM shortly.
  • Optimizations: I've enabled int8 quantization and the EON Compiler; any additional flags or linker tricks would be greatly appreciated!
  • PICO_FLASH_BINARY_SIZE_LIMIT: I’d love to hear the embedded team’s advice on adjusting this safely.

Could you clarify how to PM you on this platform? (I don’t see a direct message option.)

Thanks again for your support!