Queries about Edge Impulse

Hello everyone!
I am working on a project, “Feasibility of automatic garment measurement using AI”. Can anybody help me out? I have a few questions:

  1. I uploaded T-shirt images, labeled them, split them in an 80/20 ratio, and trained a model using FOMO. How can I change the size of images that are in the labeling queue in Edge Impulse?
  2. How can I extract the coordinate values from those images?
  3. How can I do live testing?
  4. I used 909 images; why can’t I use YOLOv5 for model training?

Hi @S.sarkar,

  1. Edge Impulse will automatically resize all images to the same dimensions prior to feature extraction and training. If you go to “Create impulse,” you’ll see the first block labeled “Image data.” You can adjust the parameters in this block to crop/scale the images.
  2. FOMO does not give bounding box information (see this presentation to learn more about how FOMO works: tinyML Talks: Constrained Object Detection on Microcontrollers with FOMO - YouTube). You will need to use another model, such as YOLOv5 or MobileNetV2-SSD, to get bounding box info.
  3. See this document to learn how to do live classification: Live classification - Edge Impulse Documentation
  4. You should be able to train YOLOv5 with any number of images. What error message are you seeing?

RE: @shawn_edgeimpulse Maybe I am misunderstanding the coordinate issue, but given:

ei_impulse_result_t result = { 0 }; // populated by run_classifier()
EI_IMPULSE_ERROR err = run_classifier(&signal, &result, debug_nn);

You can then do:

for (size_t ix = 0; ix < EI_CLASSIFIER_OBJECT_DETECTION_COUNT; ix++) {
    auto bb = result.bounding_boxes[ix];
    if (bb.value == 0) {
        continue; // empty slot, no detection here
    }
    ei_printf("%s (%f) [ x: %u, y: %u, width: %u, height: %u ]\n",
              bb.label, bb.value, bb.x, bb.y, bb.width, bb.height);
}

So one can get the top-left corner, width and height of the BB.

RE: Using YOLOv5
See this.

Hi @MMarcial,

Correct, FOMO does give you coarse bounding box measurements. I slightly misrepresented FOMO in that explanation video: instead of completely suppressing adjacent cells with the same class, it merges those adjacent cells into the reported bounding box. However, it’s still just a measurement of cells on the grid (not true bounding box information), which is not great if you need exact pixel measurements. I had a discussion about this recently here: Analysising the motion of object from camera