Hello all! I am trying to achieve object detection with orientation for 4 robots from an aerial view. I can find their centroids using FOMO while running on an ESP32-CAM, but I need their heading as well. Do you know of any system that uses a bounding box and its orientation?
If not, should I train a special identifier, let's say "1", "2", "3", "4", on each bot and train on various angles?
Is there a more efficient way of going about this?
I'm experimenting with an ESP32-CAM, but am willing to move to a Raspberry Pi or a full PC.
Great question!
This question doesn't come up often and unfortunately we don't support this natively, but I'd love to see a model that could support it in Edge Impulse Studio!
Should I train a special identifier, let's say "1", "2", "3", "4", on each bot and train on various angles?
This option could work if each identifier appears on only a single object, and you'll probably need enough data covering all orientations. Otherwise, I suspect the model will easily get confused because the robots will look very similar to each other.
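If you go this route, one way to encode both identity and orientation is to bake them into the class names and decode them after inference. This is just a sketch with a hypothetical labelling scheme (the `"id_heading"` format is my invention, not an Edge Impulse convention):

```python
# Hypothetical label scheme: each class name encodes robot ID and a
# coarse heading bucket, e.g. "3_090" means robot 3 facing 90 degrees.
def parse_label(label: str) -> tuple[int, int]:
    """Split a class name like '3_090' into (robot_id, heading_degrees)."""
    robot_id, heading = label.split("_")
    return int(robot_id), int(heading)

print(parse_label("3_090"))  # -> (3, 90)
```

With, say, 8 heading buckets per robot you would end up with 32 classes, which is one reason this approach needs a lot of training data.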
Is your camera fixed?
If so, and if you just need an approximate heading, you could train a classifier (left, right, up, down), though that would not give you bounding boxes. Similarly, you could try a visual regression that predicts the angle directly.
Either will probably give better results than object detection with special identifiers.
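One caveat if you try the regression route: regressing the angle directly runs into the 359°→0° wraparound. A common workaround (not something built into Studio, just a general technique) is to regress sin and cos of the heading and recover the angle afterwards:

```python
import math

# The model is assumed to output two values per robot: sin(heading)
# and cos(heading). atan2 recovers the angle without wraparound issues.
def heading_from_sincos(sin_out: float, cos_out: float) -> float:
    """Recover a heading in [0, 360) degrees from sin/cos model outputs."""
    return math.degrees(math.atan2(sin_out, cos_out)) % 360.0

# Example round trip with a known angle:
theta = 270.0
s, c = math.sin(math.radians(theta)), math.cos(math.radians(theta))
print(round(heading_from_sincos(s, c)))  # -> 270
```

This keeps the loss smooth near 0°/360°, where a direct angle regression would see a huge error for what is actually a tiny rotation.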
Let me know how it goes, and feel free to share your project and more context (examples of the images, etc.). That can help advocate for your needs to our product team.