Integrate ST-Neural ART library into STM32CubeIDE project

Question/Issue:
I downloaded the ST-Neural ART library for my model trained on Edge Impulse. I’m using a YOLOv5 model (EI says it doesn’t support downloading the CubeMX CMSIS format for that model). Do you have any documentation or guides showing how to pull this library into an STM32CubeIDE project? The only guide I could find is the CubeMX CMSIS guide, which doesn’t apply to the ST-Neural ART format. I want to be able to call run_classifier from within my project, but with what I’ve tried so far, I can’t get the code to build (lots of path/dependency issues).

Project ID:
[Provide the project ID]

Context/Use case:
Object detection

Steps Taken:

  1. Created model and downloaded ST-Neural ART library
  2. Tried to import into STM32CubeIDE, but can’t get it to work

Expected Outcome:
Being able to call the functions for the model within STM32CubeIDE.

Actual Outcome:
Build and path issues.

Reproducibility:

  • [ ] Always

Environment:

  • Platform: [STM32N6570-DK]

Hi @Williedlbat

You have the right path here by exporting as an ST-Neural ART library, but let me check the recommended IDE and next steps with the embedded team so you can get inference working. I’m not sure why it wouldn’t allow a CubeMX CMSIS export for that board.

Best

Eoin

Hi @Williedlbat

Quick follow up with steps from @ei_francesco on our embedded team:

The current recommended path is to make your changes in the network.c of our public firmware (GitHub - edgeimpulse/firmware-st-stm32n6), then follow the remaining steps in our firmware quickstart guide [here]; you should then be able to import it as a project.

  1. Open your Edge Impulse ST-Neural ART deployment zip
    You’ll find files like:
  • network.c (this is what you need to replace, @Williedlbat)
  • network_data.hex (the compiled network blob for the NPU)
  • model-parameters/, postprocessing/, edge-impulse-sdk/ (support code)
  2. Replace network.c in your firmware project
    In your CubeIDE project (or the reference FW), locate the existing network.c and replace it with the one from the EI export zip.
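The two steps above can be sketched as a couple of copy commands. The paths here are illustrative assumptions, not the exact layout of the export zip or the firmware repo — check both trees before copying (the setup lines exist only so the sketch runs standalone):

```shell
# Sketch of wiring the EI ST-Neural ART export into the firmware tree.
# Directory names below are assumptions -- adjust to your actual layout.
EXPORT_DIR=ei-export          # unpacked ST-Neural ART deployment zip
FW_DIR=firmware-st-stm32n6    # cloned reference firmware

# (setup only so this sketch runs standalone -- you already have these files)
mkdir -p "$EXPORT_DIR/model-parameters" "$EXPORT_DIR/edge-impulse-sdk" "$FW_DIR/src"
touch "$EXPORT_DIR/network.c" "$EXPORT_DIR/network_data.hex"

# 1. Replace the generated network sources in the firmware project
cp "$EXPORT_DIR/network.c"        "$FW_DIR/src/"
cp "$EXPORT_DIR/network_data.hex" "$FW_DIR/src/"

# 2. Bring over the support code the inference path needs
cp -r "$EXPORT_DIR/model-parameters" "$EXPORT_DIR/edge-impulse-sdk" "$FW_DIR/"
```

After copying, make sure the copied folders are on the include path in your CubeIDE project settings, or the build will hit the same path/dependency errors.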

Let us know how the project works for you. If you encounter any further path/dependency issues, please share the IDE version, development OS, and project ID so we can reproduce with the team.

Hope this helps, and thanks again @ei_francesco for the advice.

Best

Eoin

Hi @ei_francesco and team, thanks a lot for the detailed steps.

Just to clarify my use case: we’ve created a fresh STM32N6 project directly in STM32CubeIDE (not starting from the public Edge Impulse firmware repo), and we’d like to integrate the ST Neural ART library into that existing project. Our goal is to be able to call the inference from within our own application logic.

From your explanation, I understand the path is to replace the “network.c” in the reference firmware with the one from the EI export. In our case though, since we’re working with a new CubeIDE project, is there a guide (or more detailed explanation) for integrating the ST Neural ART library and inference flow directly into a CubeIDE-based project from scratch?

Specifically:

  • Besides swapping “network.c”, what’s the minimal set of files and includes we need from the export (“model-parameters/”, “postprocessing/”, “edge-impulse-sdk/” etc.)?
  • Is there documentation on how to properly link the Neural ART library in CubeIDE projects?
  • Once integrated, is the correct way to run inference to just call the functions in network.c (like network_init() and network_run()), or are there additional wrapper steps required?

If the recommended approach is always to start from the reference firmware repo and then adapt it, we’re happy to follow that — but if there’s an official guide for integrating into a CubeIDE project from scratch, that would fit our workflow better.

Thanks again for the support and guidance!

Hi @Williedlbat

We don’t currently have a guide on how to integrate a Neural-ART export into Cube.
The firmware repo can be “too much”; you can also have a look at this repo:

which really does just the basic stuff (peripheral power-on, then runs inference).

If you use the Edge Impulse inference engine, there’s no need to call network_init() or network_run(); just call run_classifier():

The network_init() and network_run() are called here:

To update the model, as the README says:

and

network.c needs to be replaced, and you also need the edge-impulse-sdk plus model-parameters.
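To make the run_classifier() flow concrete, here is a minimal sketch of the signal_t callback pattern the SDK expects. The struct and the demo function below are stand-in stubs so the sketch compiles standalone; in a real project signal_t comes from the edge-impulse-sdk headers, and you would call the SDK’s run_classifier() where noted in the comment:

```c
#include <stddef.h>
#include <string.h>

/* Stand-in for the SDK's signal_t (normally provided by the
 * edge-impulse-sdk headers); defined here only so this sketch
 * is self-contained. */
typedef struct {
    size_t total_length;
    int (*get_data)(size_t offset, size_t length, float *out_ptr);
} signal_t;

/* Placeholder size -- in a real project use
 * EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE from model-parameters. */
enum { INPUT_FRAME_SIZE = 4 };

/* Your captured input (camera frame, audio window, ...). */
static float features[INPUT_FRAME_SIZE] = { 0.1f, 0.2f, 0.3f, 0.4f };

/* Callback the classifier uses to pull input data in chunks. */
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

/* Wire up the signal and hand it to the classifier. With the real SDK
 * you would do:
 *   ei_impulse_result_t result;
 *   EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
 * Here we only exercise the callback to show the data flow. */
int classify_sketch(float *first_sample_out) {
    signal_t signal = { INPUT_FRAME_SIZE, &get_feature_data };
    float buf[INPUT_FRAME_SIZE];
    int err = signal.get_data(0, signal.total_length, buf);
    if (first_sample_out != NULL) {
        *first_sample_out = buf[0];
    }
    return err;
}
```

The point of the pattern is that run_classifier() never sees your buffer directly; it pulls data through the get_data callback, so your application only has to keep the features array valid while inference runs.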

Right now we don’t support a CubeMX export for the N6; that would probably make it easier to integrate.

I’m working on improving Neural-ART support, and I’ll check what it takes to set up a Cube project that includes Edge Impulse.

regards,
fv