Use FOMO or YOLOv5 for object detection

**Question/Issue:** Hello, I want to deploy an object detection model on the Arduino Nicla Vision, and I need to collect the coordinates of objects in the camera image to analyse their motion (robots that are at least 3 meters away from the Nicla). Which model is more suitable for this task (FOMO or YOLOv5)? I'd appreciate it if you could help me.

It all depends on the accuracy of the measurements you will gather from each image. See this post.

I see. So the measurement from YOLOv5 is more accurate since it gives actual bounding box information, while FOMO uses fewer resources. Is that correct?

I'd always expect YOLO to be more accurate, but FOMO will be faster since it was designed to be as small as possible.

e.g. the smallest YOLOv5 variant, YOLOv5n, has about 1,900,000 parameters (GitHub - ultralytics/yolov5: YOLOv5 šŸš€ in PyTorch > ONNX > CoreML > TFLite),

whereas the smallest FOMO, alpha=0.1, has about 10,000 parameters.
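For the motion-analysis part of the question, the practical difference is what each model hands back per frame: YOLOv5 returns a full bounding box (from which you can derive a center point and an object size), while FOMO reports only a centroid on a coarse output grid, with no box dimensions. A minimal sketch of extracting a tracking coordinate from each output style (the detection dicts below are hypothetical examples, not any SDK's actual output format):

```python
# Hypothetical detection formats for illustration only -- check your
# deployment SDK's docs for the real field names and units.

def center_from_bbox(x, y, w, h):
    """Center of a YOLO-style bounding box, with (x, y) the top-left corner."""
    return (x + w / 2, y + h / 2)

# YOLOv5-style detection: full bounding box, so size is also available.
yolo_det = {"label": "robot", "x": 40, "y": 60, "w": 20, "h": 30}
cx, cy = center_from_bbox(yolo_det["x"], yolo_det["y"],
                          yolo_det["w"], yolo_det["h"])
print((cx, cy))  # (50.0, 75.0)

# FOMO-style detection: already a centroid, no box size to derive.
fomo_det = {"label": "robot", "cx": 50, "cy": 75}
print((fomo_det["cx"], fomo_det["cy"]))
```

If all you need for motion analysis is a per-frame position, FOMO's centroid may be enough; if you also want apparent size (e.g. to estimate distance changes), only the bounding-box output gives you that directly.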