Build C++ Library with STM32CubeIDE

I think I was able to add that library to my project now.
Is there any guidance on how to use the library in my project? How do I launch the algorithm, and where do I get the results?

Yes, https://docs.edgeimpulse.com/docs/using-cubeai this tutorial still applies. E.g. @ShawnHymel has used it to do keyword spotting from CubeIDE: https://github.com/ShawnHymel/ei-keyword-spotting


I followed that tutorial, but I cannot complete the ‘Adding the CMSIS-PACK’ part because of an error.

I followed Build C++ Library with STM32CubeIDE to add the generated C++ libraries.
After that I continued the tutorial from ‘Configuring printf’, but after adding those lines to main.cpp I get a compilation error:
In file included from ../Core/Src/main.cpp:73:0:
C:/Users/ett14229/STM32CubeIDE/workspace_1.5.0/ML_WachingMachine/Core/edge-impulse-sdk/classifier/ei_run_classifier.h:89:10: fatal error: tflite-model/trained_model_compiled.h: No such file or directory
#include "tflite-model/trained_model_compiled.h"
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
How should I continue to get my libraries working?

@ArtoK have you updated your include settings? Could you post a screenshot of Properties > C/C++ Build > Settings > Tool settings > G++ Compiler > Includes ?

@ArtoK where did you put the tflite-model folder? Also under Core/?

You are correct. It was there, but not anymore. Strange. I copied it back and now it is getting further.
It gets stuck here:
../Core/Src/main.cpp:80:28: error: 'huart1' was not declared in this scope
HAL_UART_Transmit(&huart1, (uint8_t*)string, strlen(string), 0xffffff); // send message via UART
                   ^~~~~~
I’ll check .ioc file next.
Thanks.


For some reason the last part of the guide, ‘Seeing the output’, does not work.
I’m using PuTTY, and a similar setup gives me console output when I update the firmware.

@ArtoK

  1. Are you setting the correct baud rate (try both 115,200 and 9,600)?
  2. Have you configured the right UART peripheral, and are you writing to the correct one?
  3. Can you print anything from your application and get output via the normal STM32 HAL functions?

Got this working. The STM IDE set the UART configuration to the wrong pins; I had to force it to the correct pins.

My next question: how can I gather the raw data with the hardware so that I don’t have to copy and paste it from the cloud?

Here’s something that might help. This was added a few weeks ago so you might have missed out - https://docs.edgeimpulse.com/docs/cli-data-forwarder#classifying-data

Check the Mbed OS example and use the code to gather raw features from sensor data, so you do not need to copy-paste them from Edge Impulse Studio.


Did you solve the error you got when trying to add the CMSIS-PACK?

No I did not. I used C++ libraries instead.


I think I will do the same until they find a solution!

Hi,
I am trying the ei-keyword-spotting tutorial from @ShawnHymel with a NUCLEO-L476RG and an SPH0645 mic, but I have some problems (ERROR: audio buffer overrun).
I measured the inference execution time and it’s about 480 ms, which I think is too long if the sampling rate is 32 kHz and the buffer size is 6400 samples (but I probably haven’t understood the flow exactly).

Do you have some suggestions to solve it?

thank you so much for the tutorial and work

I think 32 kHz is too fast. The example I use is 16 kHz, which is what Edge Impulse uses by default. Even though my I2S mic samples at 32 kHz, I drop every other sample to make it 16 kHz. I know, I should be using a low-pass filter to avoid aliasing if I’m down-sampling, but that would take what little resources I have, so I just have to assume that I won’t encounter frequencies above 8 kHz (as my application is for vocal range anyway).

If you look for audio_buffer_inference_callback() function in my main.cpp of the L476 example, you can see where I drop samples and convert the 24-bit audio to 16-bit audio.

6400 is the size of my I2S buffer. Every time half of that is filled up, it calls the audio_buffer_inference_callback() function. Here, the audio samples are converted to 16-bit PCM and 16 kHz and stored in one of the inference.buffers (it’s a double buffer). The size of that buffer is EI_CLASSIFIER_SLICE_SIZE.

One of these inference.buffers is sent to run_classifier_continuous() each time it’s filled up. That function should take no more than 250 ms to perform feature extraction (MFCCs) and inference, as it needs to meet the goal of performing inference 4 times per second.

Hope that helps!


Thank you so much for your support!

I did some debug with the oscilloscope, and I found a strange behavior.
The inference algorithm seems to work properly for the first 3 times and then, it takes too much time.

I toggled a pin up/down across the run_classifier_continuous() function (the YELLOW signal), and another pin up/down across the half-complete and complete callbacks (the BLUE signal).

The project I tried is the one downloaded from the repository (same board and mic), compiled with timing optimization.

If I can give you other information let me know.

Thank you so much

What is the “timing optimization” option? It’s been a couple of months since I made that project, so I may not remember. Is there a particular checkbox in preferences or flag you set that’s different from my example project?

Good morning,
I did two tests:

  • The first test used all the same preferences you set, but the behaviour is the same as in the image I posted.
  • The second test enabled “timing optimization” (found in Properties > C/C++ Build > Settings > MCU GCC Compiler > Optimization), with the same result.

Do you have any other suggestions?

thank you so much