Add TensorFlow MicroPython Examples as an Edge Impulse runtime

I'm the creator of the tensorflow-micropython-examples project.

The purpose of this project is to make it easier to experiment with TinyML.

At the moment we support the ESP32 and RP2040. We have an STM32 port, but there are some issues with it.

On the ESP32 we have hello-world, micro-speech and person-detection running. For the RP2040 we have hello-world and micro-speech. STM32 did run hello-world, but it's not working at the moment because the build is too big.

To start with, we are implementing all of the TFLM reference examples, but I want to add other examples in the future, and encouraging users of Edge Impulse to run them on our firmware seems like a natural way to get new examples.

I'm researching how we can wrap the Edge Impulse SDK, but what does work at the moment is running TensorFlow Lite for Microcontrollers models in MicroPython.

Similar to OpenMV, you can drop a model.tflite file onto the flash filesystem and then run the model on device. On the ESP32 you can use PSRAM for larger models, like the ~300 kB person-detection example.
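For illustration, running a dropped-in model from MicroPython with our microlite module looks roughly like the sketch below. The exact interpreter constructor and callback signatures may differ between versions of the project, so treat this as illustrative:

```python
# Sketch of running a dropped-in .tflite model with the microlite module
# from tensorflow-micropython-examples (device code; signatures may vary).
import microlite

# Read the model that was copied onto the flash filesystem.
model_bytes = bytearray(open("model.tflite", "rb").read())

def on_input(interp):
    # Fill input tensor 0 with sensor data before each invoke().
    pass

def on_output(interp):
    # Read predictions from output tensor 0 after each invoke().
    pass

# 16 kB tensor arena; larger models such as person detection
# need PSRAM-backed arenas on the ESP32.
interp = microlite.interpreter(model_bytes, 16 * 1024, on_input, on_output)
interp.invoke()
```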

I wonder if the Edge Impulse documentation could be updated to list us as a runtime for TFLM models.

Hi @michael.o - thanks for the initiative. My biggest issue with this is that models (esp. non-vision models) are more than just the TensorFlow Lite model. E.g. preprocessing code, DSP code, non-neural networks and post-processing of the results. What we bundle with models deployed through Edge Impulse is all of that. Single interface (ei_run_classifier) to pass data in => conclusions out. For simple image models the approach of dragging-dropping the tflite model will work but this falls apart for object detection models (coming to OpenMV soon as well).

So in my opinion a better way forward would be:

  1. Wrap around our normal inferencing SDK (so via the ei_run_classifier / ei_run_classifier_continuous functions).
  2. Tell users to export as C++ library from Edge Impulse, and drop it into your project (note that SDK + model-parameters + tflite-model folders are tightly coupled, you need to update all of them), and then rebuild and flash.
  3. Use that from MicroPython.

I know this is more annoying as you lose the drag&drop, but it’s the same pattern we’re now going to follow w/ OpenMV to support both the non-plus H7 better with more models, and you get all the DSP / normalization / post-processing for free. If that’s interesting, would be happy to add your project as community ports.
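For anyone exploring step 1, the standalone C++ pattern around the exported library looks roughly like this sketch. The `EI_CLASSIFIER_*` macros come from the generated model-parameters headers of a specific exported project; the feature buffer and its contents here are placeholders:

```cpp
// Minimal sketch of calling the exported Edge Impulse C++ library.
// Requires the SDK + model-parameters + tflite-model folders from a
// C++ library export; the features buffer below is a placeholder.
#include <cstdio>
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to pull raw data into the DSP pipeline.
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_signal_data;

    ei_impulse_result_t result = { 0 };
    // Data in => conclusions out: DSP + inference + post-processing.
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
    if (err != EI_IMPULSE_OK) return 1;

    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        printf("%s: %.3f\n", result.classification[i].label,
               result.classification[i].value);
    }
    return 0;
}
```

Wrapping this single entry point, rather than the raw TFLite model, is what carries the DSP and post-processing along for free.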

Note that we’re working on RP2040 and ESP32 official support (@AIWintermuteAI is doing this at the moment).


@janjongboom Thanks for your detailed response.

I will look at wrapping the Edge Impulse SDK in MicroPython. Do the blocks come with the exported C++ for each example, or is there a way to bake them all in (or select which ones to bake in) to the firmware?

MicroPython has a way to compile C (and I think C++) code into .mpy files that can be loaded into an existing firmware at runtime. I was thinking of using this so that the C++ parts of the export would be compiled into a native module that could still be dragged and dropped onto the firmware.

I was looking at this approach at the start, before I realized that an ESP32 with 4 MB of flash was enough to fit all of the TFLM operators.

There is a fixed function table that these modules can access, but since I would be building the firmware anyway, I think it should be possible to expose the right methods.
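The native-module route follows MicroPython's examples/natmod pattern; a skeleton might look like this. The `classify` wrapper here is hypothetical — in a real module it would call into the compiled Edge Impulse SDK linked into the .mpy:

```c
// Skeleton of a MicroPython native (.mpy) module, after the
// micropython/examples/natmod pattern. The classify() wrapper is
// hypothetical; it stands in for a call into the inferencing SDK.
#include "py/dynruntime.h"

static mp_obj_t classify(mp_obj_t data_obj) {
    mp_buffer_info_t buf;
    mp_get_buffer_raise(data_obj, &buf, MP_BUFFER_READ);
    // ... pass buf.buf / buf.len to the compiled SDK here ...
    return mp_obj_new_int(0);
}
static MP_DEFINE_CONST_FUN_OBJ_1(classify_obj, classify);

// Entry point called when the .mpy is imported at runtime.
mp_obj_t mpy_init(mp_obj_fun_bc_t *self, size_t n_args,
                  size_t n_kw, mp_obj_t *args) {
    MP_DYNRUNTIME_INIT_ENTRY
    mp_store_global(MP_QSTR_classify, MP_OBJ_FROM_PTR(&classify_obj));
    MP_DYNRUNTIME_INIT_EXIT
}
```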

@AIWintermuteAI be sure to pick up the accelerated ESP32 TFLM kernels from Espressif here:

Optimized ANSI C for the ESP32 and assembly for the ESP32-S3 DSP instructions. For person detection on the S3 it's supposed to be 10x faster.

Thanks a lot for the hint! I’m starting with ESP32 this week.


Hi @AIWintermuteAI

I have tested an Edge Impulse-trained image classification model on the project and it works great. But as @janjongboom said, I have a problem with the feature processing in speech recognition.

It seems that the feature provider of TF Lite Micro is different from Edge Impulse's.
I also described the specific problems in this link:

So I wonder whether your work will solve this problem. Any updates to share with us? Thanks.

Hi @duzhipeng,

AIWintermuteAI will be programming the ESP32 with compiled code, not MicroPython.

Using any other "feature provider" won't work, because the model trained in Edge Impulse is trained against features generated from our versions of common DSP preprocessing blocks (like the MFE block shown in your GitHub issues link). @michael.o have you had any luck compiling our SDK into an .mpy file?

We develop with ESP-IDF and MicroPython at the same time. Does that mean the C/C++ SDK supports ESP-IDF?

We have found that Edge Impulse supports deployment as an Arduino library or a C/C++ SDK, but I don't have the ability to port the C/C++ SDK to ESP-IDF myself.

Will Edge Impulse launch an ESP-IDF-based SDK or example? Thanks!

ESP-IDF is just Espressif's official IoT Development Framework; the code you compile with the help of ESP-IDF is still C/C++ code.
The edge-impulse-sdk can be compiled with ESP-IDF; it just requires a little bit of fiddling with the CMakeLists.
Once we finish integrating the ESP32 as an officially supported board, the code for the firmware/standalone inferencing example will be made public.
Not sure if that will be helpful for your task: from what I understand, your goal is to either integrate the whole edge-impulse-sdk into the MicroPython firmware, or at the very least the DSP part of it (to make the firmware occupy fewer resources).
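For what it's worth, a minimal ESP-IDF component CMakeLists.txt for an exported C++ library might look like the sketch below. The folder layout (`components/ei/`) and file globs are assumptions, and particular IDF versions may need extra compile flags:

```cmake
# Hypothetical ESP-IDF component wrapper (components/ei/CMakeLists.txt)
# for an exported Edge Impulse C++ library; paths are illustrative.
file(GLOB_RECURSE EI_SOURCES
    "edge-impulse-sdk/*.cpp"
    "edge-impulse-sdk/*.c"
    "tflite-model/*.cpp")

idf_component_register(
    SRCS ${EI_SOURCES}
    INCLUDE_DIRS "." "edge-impulse-sdk" "model-parameters" "tflite-model")
```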

Thanks for your reply.

"Integrate the whole edge-impulse-sdk into MicroPython firmware" would be great for me. "edge-impulse-sdk + IDF C/C++" can also meet my basic needs.

So I wonder whether there is any schedule for integrating the ESP32 as an officially supported board. I'm really looking forward to it!

Hello @duzhipeng ,

We cannot give you a precise ETA; @AIWintermuteAI started working on this a few weeks ago, so it is definitely coming :slight_smile:



OK. Thanks.
I will wait for it

We expect the firmware and integration to be completed by mid-April, barring emergencies :slight_smile:

@AIWintermuteAI @janjongboom

Hi, all

Any update on the progress of integrating the ESP32 as an officially supported board?
Will ESP-NN or ESP-DL be used for the TensorFlow ops in the official support, or just ANSI C in the first version?

I tried to integrate the Edge Impulse C++ library into an ESP-IDF project via CMake and components.
But I get compile errors like: ESP32 Compile Errors

“…/components/ei/edge-impulse-sdk/CMSIS/DSP/Source/FilteringFunctions/arm_biquad_cascade_df1_fast_q15.c:220:18: error: implicit declaration of function ‘__PKHBT’”

This error seems a bit difficult to me. So I’m wondering about the progress of official integration.

We’re still targeting mid-April for ESP32 firmware/integration launch :slight_smile:
It will use ESP-NN optimized kernels.
The error has to do with CMSIS; it is not supposed to be used at all on non-ARM platforms. You can just delete the CMSIS folder from the edge-impulse-sdk when compiling for the ESP32.
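As a sketch, removing the ARM-only folder before building might look like this (the `components/ei/` path assumes the exported library was unpacked there):

```shell
# Remove the ARM-only CMSIS sources from the exported Edge Impulse SDK
# before building for ESP32 (component path is illustrative).
rm -rf components/ei/edge-impulse-sdk/CMSIS
```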


Hi, @AIWintermuteAI Thanks.

I deleted CMSIS and changed some porting config, and it compiles OK now.

I will wait for the official firmware/integration launch, thanks.

I'm also very interested in deploying Edge Impulse on the ESP32-S3. I'm designing a device that analyzes sound samples (from a mic) to monitor the health of an electric motor (for predictive maintenance). I would be happy to beta-test.

Hello @duzhipeng and @TeeVee,

We released support for the ESP-EYE yesterday (which includes hardware acceleration using ESP-NN):



@louis Hi, thanks for the notification.
I have tested the new SDK and it works great for me, especially the performance of ESP-NN.

But there are a few questions after my test:

  1. The audio feature DSP time is much longer than the classification time. Is there any possibility of optimizing the feature DSP?
  2. The ESP32-S3 audio feature DSP time (591 ms) is longer than the ESP32's (335 ms). In my opinion the ESP32-S3 should perform better than the ESP32.
  3. Image classification fails on the ESP32-S3. I changed CONFIG_IDF_TARGET_ESP32 to CONFIG_IDF_TARGET_ESP32S3 in ei_classifier_porting.h, but it still errors.

A quick update:
The image classification application on the ESP32-S3 is OK now when I use the regular TFLite model instead of trained_model_compiled.


  1. Improving DSP performance is in the plans.
  2. There are no DSP optimizations included in the firmware, so the ESP32-S3 should have the same performance as the ESP32. Perhaps you didn't set the clock to 240 MHz? Check the parameters in sdkconfig for optimized clock settings.
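For reference, the relevant sdkconfig entries for the maximum CPU frequency look something like the fragment below (exact option names vary between chip targets and IDF versions, so treat this as a sketch):

```ini
# sdkconfig fragment: run the CPU at 240 MHz instead of the 160 MHz default.
# On ESP32-S3 targets the prefix is CONFIG_ESP32S3_... instead.
CONFIG_ESP32_DEFAULT_CPU_FREQ_240=y
CONFIG_ESP32_DEFAULT_CPU_FREQ_MHZ=240
```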