Question/Issue: I’ve trained a model to 65-70% accuracy at identifying the relative location of keystrokes on a keyboard from vibration sensor data in Edge Impulse Studio. I’m deploying it onto an Arduino RP2040, and I’m currently getting different results on device versus in the Studio when running the static buffer example with raw features copied into my Arduino code.
I’ve already seen previous posts stating that this could be an issue with the quantized vs unoptimized model. However, I’m seeing this issue with both the quantized and the unoptimized model, and the difference in training accuracy between the two, while present, doesn’t seem large enough to cause a discrepancy this significant.
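For context on why quantized and float models are expected to differ only slightly: int8 quantization rounds every value to one of 256 levels, so each round trip introduces an error of at most half the quantization scale. A minimal sketch (the scale and zero-point values here are illustrative, not taken from this model):

```cpp
// Demonstrates the small, bounded error introduced by int8 quantization:
// each float is mapped to one of 256 levels and dequantized back.
#include <cmath>

// Hypothetical scale/zero-point in the style of TensorFlow Lite int8
// quantization; real values come from the converted model.
float quantize_roundtrip(float x, float scale, int zero_point) {
    int q = static_cast<int>(std::round(x / scale)) + zero_point;
    if (q < -128) q = -128;          // clamp to the int8 range
    if (q > 127)  q = 127;
    return static_cast<float>(q - zero_point) * scale; // dequantize
}
```

For in-range inputs the round-trip error stays below `scale / 2`, which is why a quantized model usually tracks the float model closely; a wholesale swap of results, as described above, points at something other than quantization.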
Below are images of the Arduino vs the Edge Impulse classification for the unoptimized model. Is there any potential fix for this?
The data in the training set came from contact mics and was recorded through the ADC channels of an ESP32 Saola. We are planning on having the ESP32 communicate with the RP2040 to send it the data for the model.
However, I do not believe this should affect the static buffer example, as I’m copying raw features from my Edge Impulse Studio data to test this solution.
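For reference, the test setup follows the standard `static_buffer` example shipped with Edge Impulse Arduino deployments: raw features copied from Studio go into a static array, and the classifier reads from it through a callback. A minimal sketch of that pattern, with the SDK calls left as comments and placeholder feature values (not this model’s actual features):

```cpp
// Sketch of the Edge Impulse "static_buffer" test pattern: the classifier
// pulls slices of a fixed feature array through a get_data callback.
#include <cstddef>
#include <cstring>

// Paste the raw features copied from Studio here. Placeholder values shown.
static const float features[] = { 0.1f, -0.2f, 0.3f, -0.4f };

// Callback the classifier uses to read a slice of the buffer.
int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    std::memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

// In the Arduino sketch this is then wired into the SDK roughly as:
//   signal_t signal;
//   signal.total_length = sizeof(features) / sizeof(features[0]);
//   signal.get_data = &raw_feature_get_data;
//   ei_impulse_result_t result;
//   run_classifier(&signal, &result, false /* debug */);
```

Since this path bypasses the sensor entirely, the on-device result should match Studio’s classification of the same raw features exactly (for the float model), which is what makes the mismatch surprising.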
I used the custom .csv upload option to get our ESP32 contact mic data into Edge Impulse.
Sorry, I’m confused by your most recent response. You were able to reproduce the issue on both the quantized and float models, but are unsure of the exact problem?
I also tried making a version of the model with fewer DSP blocks yesterday (only raw features and spectral features) and still saw different results between the Studio and Arduino models, so I’m not sure whether that helps narrow down the issue.
How quickly might a ticket with the embedded team get looked into?
That being said, fixing this issue probably won’t, on its own, give your model good results.
If you’re working with vibration data, I’d recommend focusing on the spectral analysis DSP block, or writing your own custom DSP block if you’re familiar with digital signal processing.
Hi @ble86, sorry for the late reply, but it took a while to investigate this. Every learn block has a DSP dependency array containing the DSP block IDs. For you this was not sorted ascending (it was [8, 3, 25, 42, 48, 54]), causing the Studio to swap the first and second blocks of features around. In the SDK, however, we always operate on sorted DSP blocks, hence the discrepancy.
I have a PR open for this, hope to land it in the coming days.
As an immediate fix, I’ve removed and re-added your classifier block and retrained it. I’ve checked the output against our SDK now as well.
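The mismatch described above can be illustrated with a toy example: concatenating per-block features in the stored dependency order versus in sorted-ID order produces two different feature layouts. The block IDs are the ones from this thread; the one-tag-per-block layout is a simplification:

```cpp
// Illustrates the ordering bug: Studio concatenated DSP features in the
// stored dependency order, while the SDK used the block IDs sorted
// ascending, so the first two blocks' features ended up swapped.
#include <algorithm>
#include <string>
#include <vector>

// Concatenate a tag per block, optionally sorting IDs first (SDK behavior).
std::string concat_features(std::vector<int> ids, bool sorted) {
    if (sorted) std::sort(ids.begin(), ids.end());
    std::string out;
    for (int id : ids) out += "B" + std::to_string(id) + " ";
    return out;
}
```

With the IDs from this thread, the unsorted order starts `B8 B3 ...` while the sorted order starts `B3 B8 ...`, so the classifier on device read the second block’s features where it expected the first block’s.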