I am trying to build a project with the Edge Impulse SDK, namely edge-impulse-ingestion from firmware-eta-compute-ecm3532, and the Segger project fails to resolve an include: the path for `#include "eta_bsp.h"` is missing.
I'm trying to port the Edge Impulse library into my C++ app on an nRF52832 with FreeRTOS in Segger Embedded Studio, and I've found the same reported issues about the C++ compiler limitations in SEGGER.
I was wondering whether the CMake/Make way of building the Edge Impulse library is specific to the ETA ECM3532, because this is exactly what I need for my nRF52832.
Basically I need a way to bypass Segger's compiler and flash my app plus the Edge Impulse library through J-Link.
Maybe I need to create a custom Makefile and use the nrfjprog CLI to flash the device; it would be, roughly, a combination of my app's Makefile and the Edge Impulse library's Makefile.
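For what it's worth, a combined Makefile along those lines could look like the sketch below. This is only a hypothetical outline, not a tested build: the source paths, compiler flags, and nRF5 SDK specifics (linker script, SoftDevice, startup files) are all placeholders you would have to fill in from your app's and the library's real Makefiles.

```make
# Hypothetical sketch: build app + Edge Impulse sources, flash via nrfjprog.
# All paths and flags are placeholders; the real nRF5 SDK build needs its
# linker script, startup code, and SoftDevice handling added.
CROSS   ?= arm-none-eabi-
CXX     := $(CROSS)g++
TARGET  := app

APP_SRCS := src/main.cpp
EI_SRCS  := $(shell find edge-impulse-sdk -name '*.cpp' -o -name '*.c')

$(TARGET).hex: $(APP_SRCS) $(EI_SRCS)
	$(CXX) -mcpu=cortex-m4 -mthumb -Os -Iedge-impulse-sdk \
	    $(APP_SRCS) $(EI_SRCS) -o $(TARGET).elf
	$(CROSS)objcopy -O ihex $(TARGET).elf $(TARGET).hex

# Flash over J-Link with the nrfjprog CLI, bypassing Segger Embedded Studio.
flash: $(TARGET).hex
	nrfjprog -f nrf52 --program $(TARGET).hex --sectorerase
	nrfjprog -f nrf52 --reset
```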
Any ideas on how to achieve this? I would really appreciate it!
Hi @felipe.carrau, there's actually nothing special in the CMake/Make builds. For a minimal application with a Makefile that builds an app plus the library, see https://github.com/edgeimpulse/example-standalone-inferencing; it should be pretty simple to get this running on a different target. We also ship a CMakeLists.txt file in the C++ Library export; just including that in the CMakeLists of your application should make it build in any CMake environment already.
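To illustrate the "just include it" approach, a minimal application-side CMakeLists could look something like this. It's a sketch under assumptions: the project name, source paths, and the location the C++ Library export is unpacked to (`edge-impulse/` here) are placeholders, and the exact variables the export's CMakeLists defines may differ between SDK versions.

```cmake
# Hypothetical top-level CMakeLists.txt for an application using the
# Edge Impulse C++ Library export; paths and names are placeholders.
cmake_minimum_required(VERSION 3.13)
project(my_app C CXX)

add_executable(my_app src/main.cpp)

# Pull in the CMakeLists.txt shipped inside the C++ Library export,
# as suggested above; it adds the SDK sources and include paths.
include(${CMAKE_CURRENT_LIST_DIR}/edge-impulse/CMakeLists.txt)
```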
Hello Jan! Thanks for your response. I finally got it running yesterday for the nRF5 SDK example "ble_app_blinky" by adding what was inside the Zephyr example's Makefile to ours. It was a bit of a pain, as it was the first time I had worked on editing Makefiles. The good news is that the only edit to edge-impulse-sdk I had to make was changing this:
the `__STATIC_FORCEINLINE` macro in `/edge-impulse-sdk/CMSIS/DSP/Include/arm_math.h` to:
`static inline __attribute__((always_inline))`
I ported this Zephyr inference example's main.cpp to actually call the Edge Impulse inferencing library.
I bypassed Segger Embedded Studio using this repo.
It's running on a Feather nRF52832. The next steps should be smooth sailing now!
The only scary part (not tested yet) is memory usage when running inferencing with FreeRTOS added to the mix. I don't think we have much RAM left while the device is running our app.
Great! Step 3) should not be necessary, though, and you can drop 2) if you want to lower memory usage (it will dynamically allocate the tensor arena in that case). What type of model are you running? The Studio's estimate of DSP/NN usage should be accurate for this target.