Different classification between on-device and Studio project

Question/Issue: I’ve trained a model in Edge Impulse Studio that reaches 65–70% accuracy at identifying the relative location of keystrokes on a keyboard from vibration sensor data. I’m deploying it onto an Arduino RP2040, and I’m currently getting different results on device versus in the Studio when running the static buffer example with raw features copied from the Studio into my Arduino code.

I’ve already seen previous posts stating that this could be an issue with the quantized vs. unoptimized model. However, I have been having this issue with both the quantized and the unoptimized model, and while their training accuracies differ, the difference doesn’t seem large enough to cause an issue this significant.

Below are images of the Arduino vs the Edge Impulse classification for the unoptimized model. Is there any potential fix to this?

Project ID: 161685

Context/Use case: School Project

As a new user I wasn’t allowed to put both images in one post, so here is the Arduino classification image.

I have also verified that the features generated by the Studio model classification and the Arduino model are the same.
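For reference, that check can be sketched like this (a minimal C++ sketch; the arrays and tolerance are placeholders, not our real feature values — paste in the actual Studio raw features and the buffer printed by the device):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>

// Compare a Studio raw-feature vector against the on-device copy,
// element by element, with a small tolerance for float rounding.
static bool features_match(const float *a, const float *b, size_t n,
                           float tol = 1e-4f) {
    for (size_t i = 0; i < n; i++) {
        if (std::fabs(a[i] - b[i]) > tol) {
            std::printf("mismatch at index %zu: %f vs %f\n", i, a[i], b[i]);
            return false;
        }
    }
    return true;
}
```

If this returns true but the classifier output still differs, the problem is downstream of the feature buffer (i.e., in the DSP/inference pipeline rather than in how the features were copied).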

Hello @ble86 ,

Welcome to Edge Impulse community :wink:
Just a quick question: does the data in your training dataset come from your device?

Let me know,



Hi @louis

The data in the training set came from contact mics and was recorded through the ADC channels of an ESP32 Saola. We are planning on having the ESP32 communicate with the RP2040 to send the expected data to the model.

However, I do not believe this should affect the static buffer example, since I’m copying raw features from my Edge Impulse Studio data to test it.

I used the custom .csv upload to get our ESP32 contact-mic data into Edge Impulse.



Hello @ble86,

Sorry, I misread your original question and missed that you were using the static buffer example.
Indeed, that is odd then.

Let me try to reproduce your issue on my side. I’ll come back to you after.



Hi @louis,

Thank you very much! I hope to hear from you soon.


So I managed to reproduce your issue using GitHub - edgeimpulse/example-standalone-inferencing (builds and runs an exported impulse locally in C++).

I tested both the quantized and the float32 model.

I am creating an internal ticket. I suspect it comes from stacking many DSP blocks, although that shouldn’t give you different results.

I’ll let you know as soon as I hear from the embedded team.




Sorry, I’m confused by your most recent response. You were able to reproduce the issue on both the quantized and float models, but are unsure of the exact problem?

I also tried making a version of the model with fewer DSP blocks yesterday (only raw features and spectral features) and still had the same issue of different results between the Studio and the Arduino, so I’m not sure if that helps narrow down the problem.

How quickly might a ticket with the embedded team get looked into?

Correct, I can reproduce your issue but I don’t know how to fix it yet. It may be a bug in our C++ SDK.

I can’t give an ETA for now, but I let the team know after creating the internal ticket.




Alright, sounds great. Hope to hear from you soon.

That being said, fixing this issue probably won’t be enough to give your model good results.

If you’re working with vibration data, I’d recommend focusing on the spectral analysis DSP block, or writing your own custom DSP block if you are familiar with digital signal processing.
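To illustrate the kind of features a spectral block extracts from a vibration window, here is a minimal C++ sketch (RMS energy plus the magnitude of a single DFT bin; this is only an illustration of the idea, not the Studio block’s actual implementation):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// RMS energy of a vibration window: a basic time-domain feature.
static float rms(const std::vector<float> &x) {
    float sum = 0.0f;
    for (float v : x) sum += v * v;
    return std::sqrt(sum / x.size());
}

// Magnitude of DFT bin k: a basic frequency-domain feature
// (a full spectral block would compute many bins, e.g. via FFT).
static float dft_bin_magnitude(const std::vector<float> &x, size_t k) {
    float re = 0.0f, im = 0.0f;
    const float w = 2.0f * 3.14159265358979f * static_cast<float>(k) / x.size();
    for (size_t n = 0; n < x.size(); n++) {
        re += x[n] * std::cos(w * n);
        im -= x[n] * std::sin(w * n);
    }
    return std::sqrt(re * re + im * im);
}
```

Features like these are what the classifier actually sees, so tuning the DSP parameters (frame length, FFT size, filters) usually moves accuracy more than tuning the neural network does.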

Can you share which sensor you are using?


We are using some contact microphones we ordered off Amazon and fed through some of our own amplifier circuit designs.

Hi @ble86, sorry for the late reply; it took a while to investigate this. Every learn block has a DSP dependency array containing the DSP block IDs. In your project this array was not sorted (it was [8, 3, 25, 42, 48, 54]), causing the Studio to swap the first and second blocks of features around. In the SDK, however, we always operate on sorted DSP blocks, which yields this discrepancy.
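A toy sketch of the ordering effect (hypothetical block IDs and feature values, purely to illustrate why sorted vs. unsorted block order misaligns the classifier input):

```cpp
#include <algorithm>
#include <map>
#include <vector>

// Concatenate per-DSP-block feature outputs in a given block-ID order.
// In a real impulse each block emits many features; two floats per
// block are enough to show the misalignment.
static std::vector<float> concat_features(
        const std::vector<int> &block_order,
        const std::map<int, std::vector<float>> &block_features) {
    std::vector<float> out;
    for (int id : block_order) {
        const auto &f = block_features.at(id);
        out.insert(out.end(), f.begin(), f.end());
    }
    return out;
}
```

If the Studio concatenates in the stored (unsorted) order while the SDK sorts the IDs first, the same raw features produce two differently ordered input vectors, so the classifier receives values in positions it was never trained on.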

I have a PR open for this, hope to land it in the coming days.

As an immediate fix I’ve removed and re-added your classifier block and retrained it. I have checked the output against our SDK now as well.
