Has anyone ever fed an image classification model an existing image (e.g. a png file) to generate an inference (using Arduino on STM32 to be specific)? I have a scenario where I’m creating png files from a thermal camera and want to run inferences on the images as they are saved. I’m also realizing that all of the image-based projects I’ve created were either on Python (fairly easy) or using a camera directly (not an option with the way I’ve set up this project)! Thanks in advance for any help.
The flow is basically the same as when you capture an image from a camera.
I’d suggest you start from the static_buffer example.
What you need to implement is reading the image from storage (resizing it and converting the pixel format if needed), and then passing the raw pixel values into the “static_buffer” (just don’t make it static).
I wrote a piece of code some years ago for the ESP32 CAM, back when we did not officially support it:
It uses a couple of functions that are only available on the ESP32, but feel free to have a look to understand the logic.
In your case, instead of capturing a picture you would just read it from the file and then follow the same flow.