Model accuracy and TFLite model

Hello,
I trained a model on Edge Impulse to classify images.
To create the impulse, I chose these options: input image size 96×96, processing block: Image, learning block: Transfer Learning (Images).
The accuracy of the model in Edge Impulse is 98.9%; however, as soon as I exported the model and deployed it to OpenMV, the accuracy does not look good.

Any thoughts on why the accuracy decreases when deploying the model into OpenMV?

Cheers,
Melika

Hello @melikas ,

Have you trained your model with images taken by the OpenMV cam too?

Let me know,

Regards,

Louis

Hi @louis
Yes, the model is trained with images taken by OpenMV.
BTW, why does it matter which camera takes the images?

Regards,
Melika

Hello @melikas,

It should not matter much for image projects, although I have seen differences in some cases. For example: brightness that is too high in the training data but much darker on the inference input (or vice versa), the same for gain, different settings when converting RGB to grayscale, fisheye lenses, etc.
In short, the model can learn a bias introduced by a specific configuration of the input images.

To follow up on your accuracy issue, is the testing accuracy (Model testing page in the studio) similar to the validation accuracy (Transfer learning page in the studio)? If not, your model might have overfit. This can happen when you don’t have enough data, haven’t trained for long enough, etc. See Increasing model performance - Edge Impulse Documentation for more info.

Regards,

Louis

Hi @louis,
For the purpose of testing the accuracy of the model on the camera, we aren’t capturing new images. Instead, we copy the validation image set onto the SD card, then load each image into the frame buffer (FB) and submit it to the classifier, roughly as in the sketch below. We were expecting identical results since we are using the same set of validation images. We don’t believe this can be an overfitting issue: even if the model as it stands were overfit, the same set of images would hit the same bias and produce the same result.
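In essence, the loop looks like this (a simplified sketch using OpenMV’s `tf` module API; the model path, folder name, and label-in-filename convention are placeholders, not our exact code):

```python
import image, os, tf

MODEL = "trained.tflite"                        # model exported from Edge Impulse
labels = [l.rstrip() for l in open("labels.txt")]

correct = total = 0
for fname in os.listdir("/validation"):
    # Load each validation image from the SD card directly into the frame buffer.
    img = image.Image("/validation/" + fname, copy_to_fb=True)
    scores = tf.classify(MODEL, img)[0].output()
    predicted = labels[scores.index(max(scores))]
    total += 1
    if predicted == fname.split("_")[0]:        # expected label encoded in the filename
        correct += 1

print("accuracy: %.1f%%" % (100 * correct / total))
```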

So the problem we are trying to understand is why, for the same set of images, we get different accuracy results when they are loaded and classified on the camera. Is Edge Impulse altering the images in some way we aren’t replicating on the camera, perhaps?

Regards,
Melika

Hi @melikas,

There could be a number of things going on here:

First, Edge Impulse does manipulate the images even before the “DSP block.” If you look at the “Image data” section under “Create impulse,” you can see that “Resize mode” causes images to be squashed or cropped (depending on the setting) so they all have the same resolution (as set by the width and height parameters). In the image below, every input image is resized (scaled up or down) so that its width fits into 96 pixels, and equal amounts of the top and bottom are then cropped out.
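For illustration, this resize-then-crop step can be approximated off-device with a few lines of Pillow. This is my reading of the behavior described above, not the studio’s actual code, and the file names are placeholders:

```python
from PIL import Image

def fit_shortest_axis(img, size=96):
    # Scale so the shortest side becomes `size`, then center-crop the other axis.
    # For the example above (width is the shortest axis), this scales the width
    # to 96 px and crops equal amounts off the top and bottom.
    w, h = img.size
    scale = size / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)))
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2
    return img.crop((left, top, left + size, top + size))

fit_shortest_axis(Image.open("sample.jpg")).save("sample_96x96.png")
```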

OpenMV does not perform this cropping/resizing for you, so you will need to manually adjust your input images to match the “Resize mode.” Also, note that OpenMV does not have a particularly efficient resizing function, so your best bet is to crop at the sensor where you can.
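Here is a minimal sketch of cropping at the sensor so the frame is already square before any software resizing; the frame size and window values are assumptions you would adjust to your own setup:

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)   # capture at 320x240
sensor.set_windowing((240, 240))    # center 240x240 crop done by the sensor driver
sensor.skip_frames(time=2000)       # let the sensor settle after reconfiguring

img = sensor.snapshot()             # square frame, much cheaper to scale down to 96x96
```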

Second, I recommend using the “processed features” from the DSP block of one of your training samples as a starting point. You can see in this image that the sample has been resized to match the “Resize mode.” The raw image data won’t work, as it has not been resized yet. Copying these processed features is a good way to confirm that inference works before you tackle resizing your raw input images.
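For example, you could paste the processed features into a small desktop script, rebuild the 96×96 image, and copy the result to the SD card for classification on the camera. A hedged sketch: it assumes the studio shows image features as one hex RGB value per pixel, and `processed_sample.png` is a placeholder name:

```python
from PIL import Image

SIZE = 96

# Paste the full list of processed features from the studio here
# (96 * 96 = 9216 hex RGB values; only two shown as placeholders).
features = [0x5F6152, 0x5E6051]

img = Image.new("RGB", (SIZE, SIZE))
img.putdata([((f >> 16) & 0xFF, (f >> 8) & 0xFF, f & 0xFF) for f in features])
img.save("processed_sample.png")  # copy this to the SD card and classify it on-device
```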

Finally, it might be helpful to test with some images taken from the OpenMV camera. As Louis mentioned, different sensors can make a difference. I’ve seen lighting, skewing, focal length, field of view, etc. all make a difference in a classification task. I usually recommend capturing training data from the same sensor (camera, microphone, accelerometer, etc.) that you plan to use for inference. Sometimes it doesn’t matter, but it’s just another variable that could keep your model from working well.