Different Classification Results After Deploying to Raspberry Pi Pico

Question/Issue:
[Why is my model showing different (and also extremely large) results when deployed to my Raspberry Pi Pico RP2040 device?]

Project ID:
[315617]

Context/Use case:
[I have a locally trained TensorFlow Lite model (4 inputs, classified into three outputs) that I have compiled through Edge Impulse as a standard C++ library and am deploying to a Raspberry Pi Pico RP2040. When testing the tflite model locally and through the Edge Impulse web page, I get good results, with output values usually between -10 and 10. When I deploy to the RP2040, the category outputs are massive and often incorrect: anywhere from -105,000 to 60,000.]

Steps Taken:

  1. I have tried normalizing my model inputs from 0 to 1 with no luck.
  2. I have tried comparing the scale of the larger outputs from the device with the testing outputs, but there is no correlation.
  3. I have reproduced the same issue on multiple RP2040 devices, with multiple model deployments.
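For step 1, a min-max normalization sketch (the helper name and ranges below are illustrative; the actual per-feature min/max must come from the training set, so the on-device scaling matches what the model saw during training):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>

// Min-max normalize a raw 4-feature sample into [0, 1] using
// per-feature min/max recorded from the training set.
// Names and ranges here are illustrative, not from the project.
std::array<float, 4> normalize(const std::array<float, 4>& raw,
                               const std::array<float, 4>& f_min,
                               const std::array<float, 4>& f_max) {
    std::array<float, 4> out{};
    for (std::size_t i = 0; i < raw.size(); ++i) {
        out[i] = (raw[i] - f_min[i]) / (f_max[i] - f_min[i]);
    }
    return out;
}
```

Note that if the training pipeline already applies its own scaling, doubling it up on-device will also skew results, so the normalization must match training exactly.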

Expected Outcome:
Expected data sample (Input → Output): [0.62500, 0.75048, 0.62175, 0.76763] → [3.29, 3.02, -6.24]

Actual Outcome:
Actual Device Outcome: [0.62500, 0.75048, 0.62175, 0.76763] → [-105291.67969, -16691.92383, 56639.35547]
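As a side note, values in the ±10 range like the expected outputs above are raw logits rather than probabilities; if probabilities are wanted, a softmax can be applied on-device. A minimal sketch for the three-class case:

```cpp
#include <array>
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>

// Convert raw 3-class logits (e.g. [3.29, 3.02, -6.24]) into
// probabilities that sum to 1. Subtracting the max logit first
// keeps the exponentials numerically stable.
std::array<float, 3> softmax(const std::array<float, 3>& logits) {
    const float m = std::max({logits[0], logits[1], logits[2]});
    std::array<float, 3> out{};
    float sum = 0.0f;
    for (std::size_t i = 0; i < out.size(); ++i) {
        out[i] = std::exp(logits[i] - m);
        sum += out[i];
    }
    for (float& v : out) {
        v /= sum;
    }
    return out;
}
```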

Reproducibility:

  • [x ] Always
  • [ ] Sometimes
  • [ ] Rarely

Environment:

  • Platform: [Raspberry Pi Pico RP2040]
  • Build Environment Details: [Pico SDK v1.5.1, cmake]
  • OS Version: [Windows 11]
  • Edge Impulse Version (Firmware): [1.53.13]
  • Project Version: [v19]
  • Custom Blocks / Impulse Configuration: [Used custom TFLite model deployed as C++ library]

Hi @bernarja

Thanks for reporting this, I’ll try to reproduce and check with the embedded team.

Do you have a solutions engineer reviewing this for you? If so, can you please share the engineer's name so I can check in with them on this?

Did you try exporting to the Pico firmware? Were the results the same there?

Model upload can be tricky since you are using a custom model here. Please review our docs for best practices, and the troubleshooting section for caveats with preprocessing:

Best

Eoin

Also, can you confirm whether the int8 results are the same as what you see here?
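When comparing int8 output against the float results, note that TFLite int8 tensors store quantized values; mapping them back to real values uses the output tensor's scale and zero point (real = scale * (q - zero_point)). A minimal sketch, with illustrative parameter values (the real ones come from the model's output tensor):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Map a TFLite int8 quantized value back to a real value using the
// tensor's quantization parameters. The scale and zero_point must be
// read from the output tensor; values used below are illustrative.
float dequantize(std::int8_t q, float scale, std::int32_t zero_point) {
    return scale * (static_cast<float>(q) - static_cast<float>(zero_point));
}
```

If the two paths are compared without dequantizing, or with the wrong scale/zero point, the float and int8 results will not line up even when the model itself is fine.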

Best

Eoin

Our embedded team's suggested next step is to try the standalone inferencing example:

There is no indication that this is RP2040-specific; hopefully that example will give you a base to work from.

Best

Eoin

Hi Eoin,

Thanks for your replies. I do not have a solutions engineer reviewing this for me at the moment.
I wasn’t able to test this by exporting the Pico firmware; I’m using some custom sensor code on my Pico, and I couldn’t figure out how to incorporate it when exporting the Pico firmware.

I did try the int8 version, and while the results on the Pico were much smaller in magnitude, they still differed from the “check model behavior” output. There also seemed to be very little variance in my NN output: even though my input tensor was changing, the output remained constant.

I also tried running the standalone inferencing example, but I could not get it to compile with either make or mingw32-make; I’m getting a [Makefile:110: app] Error 87. I’m a bit more familiar with cmake.

Lastly, I finally found out how to revert my project from a BYOM model back to standard Edge Impulse blocks, and that showed good success. I trained a similar model through the Edge Impulse portal, exported it as a C++ library, and it worked just fine. For now I will use this model, but I’ll keep trying to get the standalone inferencing example to compile.