Question/Issue: I am working on a cough-detection project with the Edge Impulse audio classifier. I trained a model on 700 cough training audio samples plus 176 test samples, with two labels: Cough and Noise. The model's accuracy is close to 65%. I then successfully deployed the impulse to my STM32G474RE MCU (512 KB flash, 128 KB data memory). To validate the deployed impulse, I live-classified a known sample audio file on the Edge Impulse server, which predicted 33% cough; I then took the raw static features of the same file and ran them through the impulse deployed on the MCU, where I got a 58% cough prediction. I tried predictions on multiple audio files and got the same behaviour: the difference is too big, and not even proportional, for almost all tested audio. Any solutions would be appreciated. Thanks.
I have deployed the float32 version to my MCU, but it still does not match the results of the EI live classifier. Irrespective of model accuracy, the EI classifier and the float32 version deployed on the MCU behave quite differently. I followed this link, Cube.MX CMSIS-PACK - Edge Impulse Documentation, to port the model to my MCU, and the deployment itself seemed fine, but it does not predict the way the EI classifier does. Hope you have a solution for this. Thanks again.
How did you collect your data samples?
Using your board, or from your mobile phone or another imported dataset? The microphone used and its settings (gain, etc.) can have an impact on the model results.
One thing you can check, to see whether the issue comes from your sampling, is to download the float32 C++ library, copy the raw features from the Studio, add them to your features[] array, then compile and check the results:
If the results match the ones in the Studio when using the C++ library, but differ in your application, it usually indicates that the raw data in your application differs too much from the data used during training.
Actually, I have not yet fed microphone samples to the model in the deployed library for prediction. Before integration, as you suggested, I downloaded the float32 C++ library, copied the raw features from Edge Impulse Studio into the features[] array, compiled, and tested; the results still differ. That is my problem now: I found huge variation in the predictions when testing with multiple audio samples. Please help me find a solution. Thanks again.
You tested audio files from Edge Impulse Studio against the downloaded C++ library with the same raw features. In my case, I followed the same procedure but with the Cube.MX CMSIS-PACK instead of the C++ library, in the STM32G474RE embedded environment, and it gives me a different result.
Edge Impulse provides support for STM32 MCUs per the document Cube.MX CMSIS-PACK - Edge Impulse Documentation. I followed that procedure, but the Edge Impulse Studio output and the output of the deployed unoptimized float32 model differ for the same raw features.
We are working on cough detection with Edge Impulse, which works fine in live classification after training with the training and test audio datasets. Our end product is built around the STM32G474RE microcontroller board: we need to port the trained impulse to this board, and afterwards we will interface a microphone to feed real-time audio to the model for prediction. That is the implementation plan.

To port the impulse, this particular microcontroller needs the Cube.MX CMSIS-PACK, which Edge Impulse supports in the deployment section. We successfully ported the Cube.MX CMSIS-PACK into the microcontroller board application, following the document Cube.MX CMSIS-PACK - Edge Impulse Documentation. To verify the ported library, we copied raw data from live classification in Edge Impulse Studio and pasted it into the features array of our STM32G474RE application, as per the document. But the two give different predictions. This is where we are stuck, with no further progress, and we have not interfaced the microphone yet, since the ported library with known raw features does not behave like the Edge Impulse live classifier. I hope the issue is clear. Please advise.
The code works as expected on Ubuntu 20.04 with gcc, g++.
But running the same code cross-compiled for ARM with the arm-none-eabi-gcc toolchain, both under QEMU (versatilepb) and on an STM32F429 Discovery board, yields arbitrary classification values.
The feature data differed from what Edge Impulse reported, as rajtest007 described earlier in this thread.
I was able to solve the issue on the Discovery board (after some debugging) by disabling the log function in edge-impulse-sdk/dsp/numpy.h so that it falls back to std::log:

    __attribute__((always_inline)) static inline float log(float a)
    {
        return std::log(a);
    #ifdef fred
        /* rest of the original log code, compiled out */
    #endif
    }
The same code on QEMU is still problematic. However, I have just discovered that disabling the EON Compiler in the deployment settings within Edge Impulse makes the model work.
The generated code contains an initialisation of the tensorData array:

    TensorInfo_t tensorData[] {
        …
    }
Some of these entries are statically initialised using expressions like

    tensor_arena + 2800

The tensor_arena pointer is still NULL at that point when EI_CLASSIFIER_ALLOCATION_HEAP is set. arm-none-eabi-gcc (at least version 7.3.1) evaluates this addition of an integer to a NULL pointer as zero, so all of the offsets within the tensor_arena array are lost to the subsequent code.
I manually modified these entries to store the raw offset instead:

    const_cast<void*>(static_cast<const void*>(&quant12))}, },
    { kTfLiteArenaRw, kTfLiteInt8, (void *) 2800, (TfLiteIntArray*)&tensor_dimension13, 832,
I then made some more manual modifications to fix the generated code, specifically in the functions init_tflite_tensor and tflite_learn_5_init.