OpenMV - Output tensor Image Classification issue

​I created a project to detect whether my coffee mug is on my desk. I used an Arduino Portenta H7 + Vision Shield.

  • Data capture: I captured 50 grayscale images per class directly in Edge Impulse. Images were split 80%/20% for train/test.
  • The images were cropped to 96 x 96.
  • The model used for transfer learning: MobileNetV2 96x96 0.35, replacing the final layers with a dense layer (16 neurons), a dropout layer (0.1), and an output layer with 2 neurons.
  • For data augmentation I also added a vertical flip on top of the default EI transformations (with that I could use both hands to capture images, inverting the camera).
  • I got 100% accuracy during training and testing.
  • I performed live classification with the Portenta connected to EI Studio, and it worked very well.
  • I deployed the model using the Arduino IDE (C/C++) and the result was great.
  • I also deployed the model with the OpenMV IDE (MicroPython) and noticed that the class with the lower probability had a “one” added: for example, instead of 0.000034 I got 1.000034. I corrected that in code, but I can’t understand it; that is what comes out of the output tensor.
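The correction I applied on the MicroPython side is just to subtract the spurious 1.0 before using the scores, since a valid probability can never exceed 1.0. A minimal sketch in plain Python (`fix_scores` is a name I made up; the raw values mimic what the output tensor reports):

```python
# Workaround sketch: strip the spurious +1.0 that shows up on the
# lower-probability class in the OpenMV int8 output tensor.
# Any raw score > 1.0 cannot be a valid probability, so subtract 1.0.
def fix_scores(raw_scores):
    return [s - 1.0 if s > 1.0 else s for s in raw_scores]

raw = [1.000034, 0.999966]   # what the OpenMV output tensor reports
fixed = fix_scores(raw)      # low class back to ~0.000034, high class unchanged
print(fixed)
```

This is only a workaround, of course; the values coming out of the tensor are still wrong.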


Hi @mjrovai,

Can you please send me the project ID of your Edge Impulse project so I can try the OpenMV deployment locally?


Hi @jenny,
Here is my project:

Thanks a lot

Hi @mjrovai,

Thank you for your patience, looking at your project now!

Thanks a lot, @jenny. As an additional comment, I installed OpenMV on a Mac Pro M1 2021. Could this be the issue? Maybe there is a bug when values go below zero (negative), and so the “1” is the most significant bit (or digit). Does that make sense? Anyway, I will install OpenMV on an older Mac to verify it.
UPDATE: Same result with the OpenMV IDE installed on a Mac Intel i5 and an RPi4.

Hi @mjrovai,

I have alerted our engineering team of this, thank you for your patience!

– Jenny

Thanks @jenny. I also deployed the model using a Raspberry Pi, and the problem seems to be restricted to TensorFlow Lite (int8 quantized). The float32 version is OK.
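For what it’s worth, the symptom looks consistent with a signed/unsigned mix-up when dequantizing the int8 output. This is only a guess at the cause, but the arithmetic fits: assuming a typical quantized softmax output with scale 1/256 and zero point -128 (assumed values, not taken from the model), misreading the signed byte as unsigned shifts every negative quantized value by exactly +1.0 after dequantization:

```python
# Hypothetical illustration (assumed scale/zero_point, not from the model):
# real = scale * (q - zero_point), with scale = 1/256, zero_point = -128.
SCALE, ZERO_POINT = 1.0 / 256.0, -128

def dequant(q):
    """Correct dequantization of a signed int8 value."""
    return SCALE * (q - ZERO_POINT)

def dequant_as_unsigned(q):
    """Buggy variant: the signed byte is misread as unsigned (q + 256)."""
    q_u = q + 256 if q < 0 else q
    return SCALE * (q_u - ZERO_POINT)

q = -128                       # quantized value for a near-zero probability
print(dequant(q))              # 0.0
print(dequant_as_unsigned(q))  # 1.0 -> the spurious "+1" on the low class
```

Under this assumption only negative quantized values (probabilities below 0.5) gain exactly 1.0, while the winning class (q ≥ 0) is untouched, which matches what I saw.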

Hi @mjrovai,

I’ve run your int8 quantized model on my Mac and the output was correct, so it really is OpenMV related. They’ve recently updated to the latest TensorFlow lib; this could solve the issue. Let me know if you’re able to update your OpenMV cam. Otherwise I can send you a firmware binary.


Thanks @Arjan. Yes, I have also checked with TensorFlow Lite on my Mac, and the model output is correct.

The issue is definitely with the OpenMV TF library. I am using MicroPython version 1.17 (MicroPython: v1.17-omv-r15, OpenMV: v4.2.1, HAL: v1.9.0, BOARD: PORTENTA-STM32H747).

As I understand, there is a newer version (1.18). Can you help me update it?
Thanks a lot.

Yes, I can create a build (although I can’t test it at the moment). Can you send me your email? The binary is too big to drop here.

Hi @Arjan, after installing the Portenta firmware (“firmware.bin”) that is part of v2.3,
the classification worked fine.


Thanks a lot!



Good news that you’ve got it working!