C++ library gives different results than Model Testing results

I trained a speech command classification model and deployed the C++ library on an nRF5340 DK. On the board, the model reports completely different results. I have tested with raw feature data from various test samples; one example is up-2022-03-08 16:04:07.wav.2th7jm07.s1. After some debugging, I found that the MFCC features computed on the device are completely different from the “Processed Features” reported in the portal for the same sample.

Could you help me fix this?


Hello @chetankankotiya,

Sorry for the late reply; I am having a look at it now.
I will let you know what I find.


Hello @chetankankotiya,

I cannot find your sample 2th7jm07.s1, so I could not test with it directly.

However, when I use the example-standalone-inferencing project, I get the same results between Studio and the console output (tested with both the quantized and float32 models):

The quantized model might differ slightly because on the “Model Testing” tab we use the float32 version.

I do not have an nRF5340 DK with me, so I cannot test further. I guess you followed this tutorial, correct?