Hello everyone!
I am doing a project on “Feasibility of automatic garment measurement using AI”. Can anybody help me out? I have a few questions:
- I uploaded T-shirt images, labeled them, split them in an 80/20 ratio, and trained a model using FOMO. How can I change the image size in Edge Impulse for images that are in the labeling queue?
- How can I extract the coordinate values from those images?
- How can I do live testing?
- I used 909 images; why can’t I use YOLOv5 for model training?
RE: @shawn_edgeimpulse maybe I am misunderstanding the coordinate issue, but given:
ei_impulse_result_t result = { 0 }; // filled in by run_classifier()
run_classifier(&signal, &result, debug_nn);
You can then do:
for (size_t ix = 0; ix < EI_CLASSIFIER_OBJECT_DETECTION_COUNT; ix++) {
    auto bb = result.bounding_boxes[ix];
    if (bb.value == 0) continue; // skip empty detection slots
    ei_printf("[ x: %lu, y: %lu, width: %lu, height: %lu ]\n", bb.x, bb.y, bb.width, bb.height);
}
So one can get the top-left corner, width, and height of the BB.
RE: Using YOLOv5
See this.
Hi @MMarcial,
Correct, FOMO does give you gross bounding box measurements. I slightly misunderstood FOMO in that explanation video: instead of completely suppressing adjacent cells with the same class, it merges those adjacent cells into the reported bounding box. However, it is still just a measurement of cells on the grid (not true pixel-level bounding box information), which is not great if you need exact pixel measurements. I had a discussion about this recently here: Analysising the motion of object from camera