Nicla Voice - Model behaves differently on the board - Not working

I’m using the Nicla Voice board for a project called Audio2 (Edge Impulse Project ID: Audio2). I developed my model in Edge Impulse Studio, and during Live Classification it performs very well: predictions are accurate and consistent.

However, after deploying the model to the board and running it with edge-impulse-run-impulse --debug, I consistently get wrong classification results. The predictions no longer match the behavior I saw during Studio testing.
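For reference, this is roughly the flow I followed on the device side (assuming the standard Edge Impulse CLI and the flash scripts shipped in the Nicla Voice firmware bundle; the exact script names may differ on your setup):

```
# Flash the firmware exported from the Deployment page
# (the bundle ships platform-specific flash scripts; names may vary)
./flash_linux.sh        # or flash_mac.command / flash_windows.bat

# Run the impulse with verbose debug output over serial
edge-impulse-run-impulse --debug
```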

Has anyone else experienced this mismatch between Studio and on-device inference? Could this be related to differences in audio processing, sampling rate, or normalization between the two environments?
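For example, if I export the C++ library from the same project, I believe the generated model-parameters/model_metadata.h lists the sampling frequency the DSP block was built for, so a mismatch there would be visible (I'm not sure the equivalent file is exposed in the Nicla Voice firmware bundle):

```
# Inspect the expected sampling frequency in a C++ library export
grep EI_CLASSIFIER_FREQUENCY model-parameters/model_metadata.h
```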

Any insights or suggestions are appreciated!

Thanks in advance.

Hello @madhu_sajc

Could you please confirm whether you trained the model with the Nicla Voice board's onboard microphone, or with a different microphone?

If you can point us to the public dataset you are using, we can try to test it here.

Thanks!