Why would the DSP process run but not classification?

Hello EI community!

We’re classifying audio.

I’ve verified that run_classifier_continuous() completes in less time than it takes for the next buffer to be ready, so there should be no timing issues yet.

Our buffer is stored directly in RAM, so we use this function:

numpy::signal_from_buffer((float*)raw_samples[inference.buf_select], SAMPLE_BUFFER_SIZE, &mySignal);

and then we call this function to run our classification process:

EI_IMPULSE_ERROR res = run_classifier_continuous(&mySignal, &result, false, true);

In model_metadata.h, I’ve chosen a slice window of 2 to start, because our timing budget is tight.

Also, I make sure to call run_classifier_init() at program start.
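For context, here is a sketch of the double-buffered continuous-inference loop described above, based on the Edge Impulse C++ SDK. The buffer names (raw_samples, inference.buf_select, SAMPLE_BUFFER_SIZE) follow this thread, and the audio ISR / buffer-ready signaling is omitted:

```cpp
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Filled by the audio ISR; buf_select flips between the two halves
// (names as used in this thread, not part of the SDK itself).
static float raw_samples[2][SAMPLE_BUFFER_SIZE];

void inference_loop() {
    run_classifier_init();  // must run once before run_classifier_continuous()

    signal_t mySignal;
    ei_impulse_result_t result;

    while (true) {
        // ... wait here until the ISR marks a slice buffer as ready ...

        numpy::signal_from_buffer(&raw_samples[inference.buf_select][0],
                                  SAMPLE_BUFFER_SIZE, &mySignal);

        // debug = false, moving-average filter enabled (as in the post)
        EI_IMPULSE_ERROR res = run_classifier_continuous(&mySignal, &result, false, true);
        if (res != EI_IMPULSE_OK) {
            // handle error
            continue;
        }

        // result.timing.dsp and result.timing.classification give the
        // per-slice cost in ms -- this is where a 0 ms classification
        // time would show up.
        for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
            ei_printf("%s: %.3f\n", result.classification[i].label,
                      result.classification[i].value);
        }
    }
}
```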

However, this is my output:
[screenshot of the classifier output]

Any ideas or areas we can start probing for?

Thanks!

I identified the source of the classification timing being zero.

In model_metadata.h, I had switched #define EI_CLASSIFIER_INFERENCING_ENGINE from EI_CLASSIFIER_TFLITE to EI_CLASSIFIER_CUBEAI.

I mistakenly thought this was an efficiency boost I was missing out on, since I deploy the SDK as a C++ library rather than a CMSIS pack. ROM usage dropped by about 20%… well, no wonder: I had effectively switched off the entire inferencing engine!
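For anyone hitting the same symptom, the fix was simply reverting that define in model_metadata.h (names as used in this thread):

```cpp
// model_metadata.h -- keep the TFLite engine selected:
#define EI_CLASSIFIER_INFERENCING_ENGINE EI_CLASSIFIER_TFLITE
// Setting this to EI_CLASSIFIER_CUBEAI compiled out the inferencing
// engine entirely, which is why classification time reported as zero.
```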

This raises the question: what does EI_CLASSIFIER_CUBEAI do in the preprocessor, if it did not switch on another version of the inference engine more optimized for the STM32 Cortex-M4 architecture (which is what I was expecting)?

I have avoided deploying as a CMSIS pack due to issues I’ve had in the past with the deployment missing a number of source files.

Hi @Markrubianes,

That’s a legacy define, as we no longer use the CubeAI inferencing engine. Using our EON Compiler reduces ROM usage and latency.
EI_CLASSIFIER_INFERENCING_ENGINE should not be changed by users. We have an action item on our side to move user-configurable defines into a separate file so this will be less confusing.

Aurelien


@Markrubianes try CMSIS, as we don’t use packs or anything annoying like that. The edge-impulse-sdk export already has all the source you need to run CMSIS-DSP and CMSIS-NN. It will typically compile in automatically based on macros, but to be sure you’re using it, define EIDSP_USE_CMSIS_DSP and EI_CLASSIFIER_TFLITE_ENABLE_CMSIS_NN as 1
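One way to set those two macros is in the compiler invocation rather than editing SDK headers. A build-system fragment might look like this (flag placement is an assumption; adapt to your own Makefile or IDE project settings):

```make
# Force the CMSIS-optimized DSP and NN kernels on:
CXXFLAGS += -DEIDSP_USE_CMSIS_DSP=1 \
            -DEI_CLASSIFIER_TFLITE_ENABLE_CMSIS_NN=1
```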

Also, no need to worry about include paths, as we change everything to use the full file path.


And if you have any issues with missing source files, please let us know. You should have everything inside the export.


It appears I already have those defines set as such, having exported as a C++ library. Thank you both for the confirmation and explanations!
