I’m working on a basic class project to distinguish between two classes of faces. I’m deploying this to an OpenMV H7 (not the Plus), so the limited RAM and flash mean I’m using a small convolutional neural network.
The model trains on the dataset well enough, but when I deploy it the classifier behaves erratically: the outputs tend toward “(1.00, 0.99)”. The output layer of the model is softmax, so if I understand correctly both values should sum to 1, meaning something is being computed incorrectly. I isolated the issue by loading a training image on the OpenMV instead of a sensor snapshot, and even though the same image yields a correct “(0.99, 0.01)” in Edge Impulse, the OpenMV classifier still gives “(1.00, 0.99)”.
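For reference, a correctly evaluated softmax layer always produces probabilities that sum to 1, so a pair like (1.00, 0.99) cannot be genuine softmax output. A minimal sanity-check sketch in plain Python (not OpenMV MicroPython; the logits here are arbitrary illustration values):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Arbitrary two-class logits for illustration.
probs = softmax([4.2, -0.3])
print(probs)          # something like [0.989, 0.011]
print(sum(probs))     # always 1.0, up to float rounding

# Two outputs summing to ~2.0, as in (1.00, 0.99), therefore point to
# the outputs being decoded wrongly on-device (for example a quantized
# tensor read back with the wrong scale/zero-point), not to the model
# itself producing bad probabilities.
```

Checking that the deployed outputs sum to roughly 1 is a quick way to tell a decoding/runtime bug apart from a genuinely misclassifying model.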
Any recommendations on what may be occurring here and how to fix it? The problem has persisted across both RGB and grayscale images and several model architectures.
Hard to say exactly where this fails. We don’t deploy the Edge Impulse build of TensorFlow to the OpenMV hardware, so there can be differences between the two runtimes. That said, OpenMV recently updated their TensorFlow library, so building and flashing their latest firmware may resolve the discrepancy.
Furthermore, we’re working on a deployment option in Studio that builds firmware for OpenMV hardware with the model included. This will let you run bigger models, such as MobileNet, on OpenMV H7 cams.