Implementing AI + Flask server

The model running on my Flask server via TensorFlow Lite always returns the “missing cap” class, even when the bottle is correctly capped. This is a visual inspection system that uses an ESP32-CAM to classify bottles as capped, missing cap, or unlabeled. The images are sent to a local Flask server running the TFLite (float32) model downloaded from the Dashboard. I downloaded the TFLite file from the Dashboard because I don’t think there is a TensorFlow file in the deployment options. Is this an issue? The accuracy I got is already 100% for classification and 98% for model testing. What do I do now?

  1. Captured training data in Edge Impulse using webcam and uploaded images.
  2. Trained an image classification model with three classes: “complete”, “missing cap”, and “unlabeled”.
  3. Deployed the model as TensorFlow Lite (float32) and used it with TensorFlow Lite Interpreter in Python (Flask app).
  4. ESP32-CAM captures an image and POSTs it to Flask for inference.
  5. Image saved, loaded, resized to the model’s input shape, and fed to the interpreter (a rough sketch of this pipeline follows below).
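
For context, here is a minimal sketch of the kind of Flask + TFLite endpoint described in steps 4–5. The route, request format, model filename, and label order are illustrative assumptions, not the exact code:

```python
# Minimal sketch of a Flask + TFLite inference endpoint.
# The route, raw-bytes request body, model filename, and label order
# are assumptions for illustration only.
import io

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

# Assumed filename for the float32 TFLite model downloaded from the Dashboard
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

@app.route("/infer", methods=["POST"])  # assumed route
def infer():
    # Assumes the ESP32-CAM POSTs the JPEG as the raw request body
    img = Image.open(io.BytesIO(request.get_data())).convert("RGB")

    # Resize to the model's expected input shape (batch, height, width, channels)
    _, height, width, _ = input_details[0]["shape"]
    img = img.resize((int(width), int(height)))

    # Scale pixels to 0..1 floats and add the batch dimension
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = np.expand_dims(x, axis=0)

    interpreter.set_tensor(input_details[0]["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]

    # Label order must match the order used at training time
    labels = ["complete", "missing cap", "unlabeled"]
    return jsonify({label: float(s) for label, s in zip(labels, scores)})
```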

Hi @amiras0fea - if you are downloading the TFLite model from the project dashboard and not using one of the deployment options on the deployment page, you need to make sure that you are manually doing the same pre-processing that you included in your impulse.

You mention that you resize your images, but are you taking the same approach as you configured in your impulse?

In other words, are you cropping (fit longest axis or fit shortest axis) or squashing the images to resize them? Then, what is your colour depth? Did you train with RGB or grayscale? You’ll need to do the same. Lastly, the image processing block scales the pixels to float values between 0…1, so you’ll want to make sure you do that as well.
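
Something along these lines (a rough sketch, not the exact Edge Impulse processing code; the input size, resize mode, and colour depth defaults here are assumptions you’d need to match to your impulse settings):

```python
# Sketch of preprocessing that approximates an Edge Impulse image block.
# The 96x96 default, resize mode, and grayscale flag are assumptions;
# set them to whatever your impulse actually uses.
import numpy as np
from PIL import Image, ImageOps

def preprocess(path, width=96, height=96, resize_mode="fit_shortest", grayscale=False):
    img = Image.open(path)
    img = img.convert("L" if grayscale else "RGB")   # match the impulse's colour depth

    if resize_mode == "squash":
        img = img.resize((width, height))            # squash: distort aspect ratio
    elif resize_mode == "fit_shortest":
        img = ImageOps.fit(img, (width, height))     # fit shortest axis: centre crop
    else:
        img = ImageOps.pad(img, (width, height))     # fit longest axis: letterbox pad

    x = np.asarray(img, dtype=np.float32) / 255.0    # scale pixels to 0..1 floats
    if grayscale:
        x = x[..., np.newaxis]                       # keep an explicit channel dim
    return np.expand_dims(x, axis=0)                 # add the batch dimension
```

The key point is that whatever combination you chose in the impulse (resize mode, colour depth, 0…1 scaling) has to be reproduced exactly before the data goes into the interpreter, otherwise the model sees inputs it was never trained on and tends to collapse onto one class.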

Hope this helps!