Data format for object detection training for FOMO

I have an Arduino Nano 33 BLE with an OV7670 camera module and would like to detect the centroid of bananas using FOMO.

I have successfully collected a couple of images using the Edge Impulse website.

In order to speed up the process, I wish to download images, along with their bounding box information, from Google Images rather than collect images using the OV7670.

I’ll then write a Python3 program to reformat the images, if required, and create the json file(s) with bounding box information.

I then plan to import the data using the edge-impulse-uploader.

I need clarification on the required formats of the images and json file(s) please.


1) Images:

I’ve downloaded 50 banana images from Google Images.

The widths of the 50 images range from 349 to 1024 pixels, with an average of 798.

The aspect ratios (w/h) range from 0.34 to 1.51 with an average of 0.83.

The images that I collected with the OV7670 were 195x195. I’ve also read that when importing images the aspect ratios should be similar.

My question is:
a) What width(s) and aspect ratio(s) should I adjust the images to?

2) json file(s):

My question (more of a confirmation) is:
a) Can I put the bounding box information into one json file?
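For reference, the image-reformatting part of my Python 3 program would be along these lines (Pillow-based sketch; the 320x320 target and folder names are placeholders until I know the required format):

```python
from pathlib import Path

from PIL import Image  # Pillow


def reformat_images(src_dir, dst_dir, target=320):
    """Resize every JPEG in src_dir to a target x target square.

    target=320 is a placeholder; the required size depends on the
    answer to question 1a. Squashing a non-square image distorts the
    aspect ratio, so the bounding box coordinates would need the same
    per-axis scaling applied.
    """
    src_dir, dst_dir = Path(src_dir), Path(dst_dir)
    dst_dir.mkdir(parents=True, exist_ok=True)
    for src in src_dir.glob("*.jpg"):
        img = Image.open(src).convert("RGB")
        img = img.resize((target, target))
        img.save(dst_dir / src.name, "JPEG")
```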

Thank you for your assistance in advance.

The Input Block of the Impulse will create square images from your raw input data, so you don’t have to resize them yourself. When designing the Impulse for object detection, use a square image size, e.g. 96x96, 160x160 or 320x320.

Yes, put all the bounding box info in one file as described here.
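As a sketch, a script that collects all the boxes into a single `bounding_boxes.labels` file could look like this (the filenames and coordinates are placeholders, and you should double-check the key names against the linked docs):

```python
import json

# Sketch of the bounding_boxes.labels layout: one JSON object mapping
# each image filename to its list of labelled boxes, with x/y being the
# top-left corner in pixels. Verify the schema against the docs.
labels = {
    "version": 1,
    "type": "bounding-box-labels",
    "boundingBoxes": {
        "banana01.jpg": [
            {"label": "banana", "x": 119, "y": 64, "width": 206, "height": 291}
        ],
        "banana02.jpg": [
            {"label": "banana", "x": 10, "y": 20, "width": 80, "height": 60},
            {"label": "banana", "x": 120, "y": 30, "width": 70, "height": 55},
        ],
    },
}

# Write the file next to the images so the uploader can find it.
with open("bounding_boxes.labels", "w") as f:
    json.dump(labels, f, indent=2)
```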


Thank you for the very clear and complete answer.