Help understanding the resolution and color depth of training and inference images

I’m working with an OpenMV Cam H7 and want to create a FOMO model for detecting some objects of interest.

I plan to build an Impulse with an Image Data block that sets the image dimensions to 96x96 pixels (using squish), and an Image block that converts the image color depth to Grayscale.

Knowing these Impulse settings, what resolution/color-depth should the training/testing images be captured at?

Since the images will be resampled to 96x96 and converted to grayscale, should they just be collected at 96x96/grayscale originally using the H7 Cam? Or is there any difference in collecting them at a higher resolution (say 240x240) and in RGB?

Hi @rhammell,

Your trained impulse is packaged with all of its pre-processing and learning blocks. This means that regardless of the resolution or color depth of your captured input image, during inference it will be pre-processed to match the configuration of your impulse.

You don’t have to resize or change the color depth of your image again during inference.
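For example, in a C++ library export this configuration travels with the deployment as compile-time constants in the generated model-parameters/model_metadata.h. A sketch of the relevant values for the 96x96 grayscale impulse described above (exact contents vary by project):

```cpp
// model-parameters/model_metadata.h (excerpt) -- generated per project;
// the deployment carries its own input geometry and color depth.
#define EI_CLASSIFIER_INPUT_WIDTH         96
#define EI_CLASSIFIER_INPUT_HEIGHT        96
#define EI_CLASSIFIER_NN_INPUT_FRAME_SIZE 9216  // 96 * 96 * 1 channel (grayscale)
```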

Let me know if this helps.

Thanks,
Clinton

@Clinton,

Thanks for the reply. This certainly clarifies another question I had, namely that the pre-processing is included in the model build, so I don’t have to worry about performing any resize or color-depth changes on input images at inference time.

Knowing that, am I right to think that there should be no significant difference in performance (measured as detection accuracy, not processing time) between input images captured at 96x96/grayscale and 240x240/color? Since both options would be pre-processed to the same configuration (96x96/grayscale) during inference, they should perform the same, correct?


Hi @rhammell ,

You’re right, there won’t be a significant difference.

Thanks,
Clinton

Hi,

Is this also valid for projects exported as a C++ library? In GitHub - edgeimpulse/example-standalone-inferencing-linux: Builds and runs an exported impulse locally (Linux) (built with the APP_CAMERA=1 option), I see that the captured image has to be resized and cropped before calling run_classifier().
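For context, here is a minimal sketch of that resize-then-classify pattern. CAM_WIDTH, CAM_HEIGHT, camera_frame, and resize_frame are hypothetical names assumed for illustration; run_classifier(), signal_t, ei_impulse_result_t, and the EI_CLASSIFIER_* constants come from the exported library. A nearest-neighbor resize is used to keep the sketch self-contained (the SDK also ships image helpers under edge-impulse-sdk/dsp/image/):

```cpp
#include <cstdint>
#include <cstdio>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

#define CAM_WIDTH  320  // assumed capture resolution
#define CAM_HEIGHT 240

static uint8_t camera_frame[CAM_WIDTH * CAM_HEIGHT * 3];  // RGB888, filled by your camera driver
static uint8_t model_input[EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT * 3];

// Nearest-neighbor resize from the capture resolution down to the
// impulse input size defined in model_metadata.h.
static void resize_frame() {
    for (int y = 0; y < EI_CLASSIFIER_INPUT_HEIGHT; y++) {
        int sy = y * CAM_HEIGHT / EI_CLASSIFIER_INPUT_HEIGHT;
        for (int x = 0; x < EI_CLASSIFIER_INPUT_WIDTH; x++) {
            int sx = x * CAM_WIDTH / EI_CLASSIFIER_INPUT_WIDTH;
            const uint8_t *src = &camera_frame[(sy * CAM_WIDTH + sx) * 3];
            uint8_t *dst = &model_input[(y * EI_CLASSIFIER_INPUT_WIDTH + x) * 3];
            dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2];
        }
    }
}

// Signal callback: the image DSP block expects each pixel as a float
// holding 0xRRGGBB; the block itself converts to grayscale when the
// impulse is configured that way.
static int get_image_data(size_t offset, size_t length, float *out_ptr) {
    size_t ix = offset * 3;
    for (size_t i = 0; i < length; i++) {
        out_ptr[i] = (float)((model_input[ix] << 16) |
                             (model_input[ix + 1] << 8) |
                              model_input[ix + 2]);
        ix += 3;
    }
    return 0;
}

int main() {
    // ... capture a frame into camera_frame with your camera driver ...
    resize_frame();

    signal_t signal;
    signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
    signal.get_data = &get_image_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
    if (err != EI_IMPULSE_OK) {
        printf("run_classifier failed (%d)\n", (int)err);
        return 1;
    }

    // For FOMO, detections are reported as bounding boxes.
    for (size_t i = 0; i < result.bounding_boxes_count; i++) {
        const ei_impulse_result_bounding_box_t &bb = result.bounding_boxes[i];
        if (bb.value == 0) continue;  // empty slot
        printf("%s (%.2f) at x=%u y=%u\n", bb.label, bb.value,
               (unsigned)bb.x, (unsigned)bb.y);
    }
    return 0;
}
```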

Thanks