EI Audio Classifier behaves differently in deployed version

Question/Issue: I am working on a cough detection project with the Edge Impulse audio classifier. I trained a model on 700 cough training audio samples plus 176 test samples, with two labels: Cough and Noise. The model's accuracy is close to 65%. I successfully deployed the impulse to my STM32G474RE MCU (512 KB flash, 128 KB data memory). To validate the deployed impulse, I live-classified a known sample audio file on the Edge Impulse server, which predicted 33% Cough; I then took the raw static features of the same file and ran them through the impulse deployed on the MCU, where I got a 58% Cough prediction. I tried predictions on multiple audio files and got the same behaviour: the difference is too big, and not even proportional, across almost all tested audio. Any solutions would be appreciated. Thanks.

EI classifier output

MCU terminal output

**Project ID:** 140930

Context/Use case: Cough detection system

Hello @rajtest007,

Which model have you deployed, the quantized version or the float32?
The Model Testing and Live Classification use the float32 version.

If the accuracy difference between the two models is large, you can also check this page:



Thanks Louis,

I deployed the float32 version to my MCU, but it still does not match the results of the EI live classifier. Irrespective of model accuracy, the EI classifier and the float32 version deployed on the MCU behave quite differently. I followed this link, Cube.MX CMSIS-PACK - Edge Impulse Documentation, to port the model to my MCU, and the deployment itself seemed fine, but it does not predict the way the EI classifier does. I hope you have a solution for this. Thanks again.

Hello @rajtest007,

How did you collect your data samples?
Using your board, or from your mobile phone or another imported dataset? The microphone used and its settings (gain, etc.) can have an impact on the model results.

One thing you can check, to see whether the issue comes from your sampling, is to download the float32 C++ library, copy the raw features from the studio into your features[] array, then compile and check the results:

If the results match the ones in the studio with the C++ library, but differ in your application, it usually indicates that the raw data in your application differ too much from the data used during training.



Thanks Louis,

Actually, I have not yet fed microphone samples to the model in the deployed library. Before that integration, as you suggested, I downloaded the float32 C++ library, copied the raw features from Edge Impulse Studio into the features[] array, compiled, and tested, and the results differ. That is exactly my problem: with this procedure I found a huge variation in predictions across multiple audio samples. Please help me find a solution to this. Thanks again.

Hello @rajtest007,

I just tested with your project and I get the same results as the ones expected:

I’m trying a couple of other samples from your test dataset and will come back to you.



Tested with 3 samples and they all look fine.

Could you point out which samples don’t match please?



Thanks Louis,

You tested audio files from Edge Impulse Studio against the downloaded C++ library with the same raw features. In my case, I followed the same procedure but with the Cube.MX CMSIS-PACK instead of the C++ library, on the STM32G474RE embedded board, and it gives me a different result.

STM32G474RE raw features

STM32G474RE terminal result

I also noted that the processed features from the studio and those generated on the STM32G474RE differ.

Edge Impulse provides support for STM32 MCUs, as per the document Cube.MX CMSIS-PACK - Edge Impulse Documentation.
I followed that procedure, but the Edge Impulse Studio output and the deployed unoptimized float32 model's output differ even with the same raw features.

How to solve this?

Still waiting for a solution…

Instead of making me re-read this very long thread, can you please re-state the specific question.


Project ID is 140930

We are working on cough detection with Edge Impulse, which works fine in live classification after training with our training and test audio datasets. Our end product is the STM32G474RE microcontroller board. We need to port the trained impulse to this board, and after that we will interface a microphone to feed real-time audio to the model for prediction. That is the implementation plan.

To port the impulse, this particular microcontroller needs the Cube.MX CMSIS-PACK, which Edge Impulse supports in the deployment section. We successfully ported the Cube.MX CMSIS-PACK into the microcontroller board application, as per the document Cube.MX CMSIS-PACK - Edge Impulse Documentation. To verify the ported library, we copy the raw data from live classification in Edge Impulse Studio and paste it into the features array of our local STM32G474RE application, as per the document. But the two give different predictions. This is where we are stuck, with no further progress. We have not interfaced the microphone yet, because the ported library does not behave like the Edge Impulse live classifier even with known raw features. I hope the issue is clear now. Please do the needful.

Thank you @rajtest007 for the concise statement of the problem.

@shawn_edgeimpulse or (at)Team please advise.

I have a similar issue: simple test voice data trained on Edge Impulse.
I followed Keyword spotting - Edge Impulse Documentation, but with 2 keywords.
I followed As a generic C++ library - Edge Impulse Documentation to build the library.

The code works as expected on Ubuntu 20.04 with gcc, g++.

But running the same code cross-compiled for ARM with the arm-none-eabi-gcc toolchain, i.e. on QEMU versatilepb and on an STM32F429 Discovery board, yields arbitrary classification values.

The feature data were different from those reported in Edge Impulse, as rajtest007 reported in this thread.

I was able to solve the issue on the Discovery board (after some debugging) by disabling the body of the custom log function in edge-impulse-sdk/dsp/numpy.h so that it falls through to std::log:

```cpp
__attribute__((always_inline)) static inline float log(float a)
{
    return std::log(a);
#ifdef fred
    /* rest of the original log code, now compiled out */
#endif
}
```
The same code on QEMU is still problematic. However, I have just discovered that disabling the EON Compiler in the deployment section within Edge Impulse makes the model work.

Doing some more debugging on the compiler output.


The EON Compiler version is now working.

The generated code contains a set of initialisers for the tensorData array:

```cpp
TensorInfo_t tensorData[] {
```

Some of these entries are statically initialised with expressions like:

```cpp
tensor_arena + 2800
```

The tensor_arena pointer is initially NULL when EI_CLASSIFIER_ALLOCATION_HEAP is set, and arm-none-eabi-gcc (at least version 7.3.1) compiles this addition of an integer to a NULL pointer as zero. Thus all of the offsets into the tensor_arena array are lost to subsequent code.

I manually modified these entries so the arena-relative ones carry the bare offset instead of NULL plus an offset:

```cpp
  const_cast<void*>(static_cast<const void*>(&quant12))}, },
{ kTfLiteArenaRw, kTfLiteInt8, (void *) 2800, (TfLiteIntArray*)&tensor_dimension13, 832,
```

I made some more manual modifications to fix the generated code, specifically in the functions init_tflite_tensor and tflite_learn_5_init.

1 Like