How can I use my YOLO-formatted custom dataset to train a FOMO model?
I’m new to FOMO, and as I understand it, FOMO doesn’t work like other frameworks such as YOLOv5 or TensorFlow, even though it’s similar to them, and I couldn’t find a good source for my situation. I need to train a model with FOMO, run it on my Raspberry Pi, and detect objects. I originally prepared a dataset for YOLOv5, and I can convert the dataset format with Roboflow, but I don’t know which format FOMO uses. I tried CSV formats, but FOMO gave me a protection header error. In summary: I have a dataset and I want to use it to train a FOMO model. Is this possible?
One more question: does FOMO require an internet connection when running the model I trained? I can’t enable the Raspberry Pi’s internet access.
I’m open to answering any questions. Thank you for your help, and I’m sorry for my bad English.
FOMO is quite different from other object detection models. I recommend checking out this video to learn more about FOMO (and its limitations): tinyML Talks: Constrained Object Detection on Microcontrollers with FOMO - YouTube
To answer some of your questions:
- Datasets for FOMO are just like datasets for other object detection models (e.g. YOLO). You need images with bounding box information. If you are using the Edge Impulse Studio to create your ground-truth bounding boxes, we have some tools to help with that. See here: Labeling queue (Images) - Edge Impulse Documentation. If you are still running into errors, please provide the exact error message you are seeing and a project ID so we can try to replicate your error.
- Once you have trained the model, you can deploy it to an edge device (e.g. Raspberry Pi) to perform inference without an internet connection.
Hope that helps!
@BerkeErtep YOLO uses the center of the bounding box (BB) as the reference point and normalizes the coordinates by the image size.
FOMO uses the upper-left corner of the BB as the reference, with units of pixels, i.e. not normalized. See this for the FOMO labels file spec (it’s a JSON file).
FOMO puts all BB labels in a single file.
YOLO puts BBs in a file that matches the image name, i.e. one label file exists per image file.
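For reference, a minimal labels file following that spec might look like the sketch below (field names taken from the documented `bounding-box-labels` format; the image name, label, and pixel values are made-up examples):

```json
{
  "version": 1,
  "type": "bounding-box-labels",
  "boundingBoxes": {
    "image01.jpg": [
      { "label": "person", "x": 240, "y": 120, "width": 160, "height": 240 }
    ]
  }
}
```

Note that `x` and `y` here are the upper-left corner of the box in pixels, unlike YOLO’s normalized center coordinates.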
So read the YOLO TXT files and write all the boxes into a single file called bounding_boxes.labels, scaling the values up by the image resolution and moving the reference point from center-middle to upper-left.
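That conversion can be sketched roughly as follows. This assumes the standard YOLO TXT layout (`class cx cy w h`, all normalized to 0–1) and the JSON structure from the labels-file spec linked above; the function names, the `.jpg` extension, and the `image_sizes` lookup are my own placeholders, so adjust them to your dataset:

```python
import json
from pathlib import Path

def yolo_to_ei_boxes(yolo_lines, img_w, img_h, class_names):
    """Convert YOLO TXT label lines for one image into Edge Impulse-style
    bounding-box entries: upper-left corner reference, pixel units."""
    boxes = []
    for line in yolo_lines:
        parts = line.split()
        if len(parts) != 5:
            continue  # skip blank or malformed lines
        cls = int(parts[0])
        cx, cy, w, h = map(float, parts[1:])
        boxes.append({
            "label": class_names[cls],
            "x": round((cx - w / 2) * img_w),  # center-x -> left edge, in pixels
            "y": round((cy - h / 2) * img_h),  # center-y -> top edge, in pixels
            "width": round(w * img_w),
            "height": round(h * img_h),
        })
    return boxes

def build_labels_file(label_dir, image_sizes, class_names,
                      out_path="bounding_boxes.labels"):
    """Collect every per-image YOLO .txt file in label_dir into one
    bounding_boxes.labels file. image_sizes maps image filename ->
    (width, height) in pixels."""
    all_boxes = {}
    for txt in Path(label_dir).glob("*.txt"):
        img_name = txt.stem + ".jpg"  # assumes .jpg images; adjust if needed
        img_w, img_h = image_sizes[img_name]
        all_boxes[img_name] = yolo_to_ei_boxes(
            txt.read_text().splitlines(), img_w, img_h, class_names)
    payload = {
        "version": 1,
        "type": "bounding-box-labels",
        "boundingBoxes": all_boxes,
    }
    Path(out_path).write_text(json.dumps(payload, indent=2))
```

You would then upload bounding_boxes.labels alongside the images, and the Studio should pick up the boxes automatically.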
FYI, we just released support for new image dataset acquisition formats (COCO JSON, Pascal VOC, YOLO TXT, OpenImage, Plain CSV): Uploader - Edge Impulse Documentation