Executing .eim model with Linux SDK Python APIs

Hi everyone,

I am working on a people detection model, built with FOMO and deployed on a Jetson Nano, and I have come across some issues and questions. I will list each one with a brief explanation:

  1. Bounding boxes
    I tested some images with classify-image.py from the linux-sdk-python repo. Since FOMO predicts centroids, I expected centroids in the output, so why are bounding boxes displayed, along with information about them? Does that have something to do with how the centroids are calculated? I am planning to compare YOLO and FOMO.

  2. Image datasets
    I was wondering whether there is a way to import the COCO dataset into Edge Impulse, to be able to detect multiple classes. A related question: is there a way to import the labels directly instead of labelling all the images manually?

  3. example-standalone-inferencing-linux
    I’ve followed the necessary steps to build the project on a Jetson Nano, but every time I execute APP_CUSTOM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 CC=clang CXX=clang++ make -j, my Jetson Nano freezes and gets stuck (see also the note after this list). Further down in the same README.md there is a brief note saying that object detection models are not actually supported. Does anybody know whether there are any updates coming on the subject?
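
One note on point 3 that may or may not be relevant: a bare make -j places no limit on the number of parallel jobs, which on the Nano’s limited RAM can exhaust memory during compilation. I can’t say whether that is what freezes my board, but a capped run would rule it out (the job count of 2 is my own guess, not something from the README):

```sh
# Same invocation as the README's, but capped at two parallel jobs so the
# build cannot exhaust the board's memory.
APP_CUSTOM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 CC=clang CXX=clang++ make -j2
```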

Thank you everyone in advance,

Iker Arrizabalaga

Hi @iker_arrizabalaga,

  1. FOMO models report bounding box information so that the output interface is the same as for other object detection models. However, the bounding boxes are calculated from the centroid and the x, y, width, and height of the grid cell that the centroid belongs to. Additionally, adjacent cells can be merged into one bounding box (see this thread). A minimal sketch of recovering centroids from those boxes follows at the end of this reply.

  2. You can import datasets with COCO formatting and annotations by following the ingestion service guide; a conversion sketch is also at the end of this reply. Note that the original COCO dataset is huge, so you might run into compute time limitations if you are on the community tier.

  3. I don’t know about this particular limitation. I will ask the engineering team if they have any updates.
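
To make point 1 concrete: if you want centroids for your YOLO vs. FOMO comparison, you can recover them from the boxes the SDK already returns, since each box is centred on its centroid. A minimal sketch in the style of classify-image.py (the model and image paths are placeholders, and I’m assuming the same edge_impulse_linux API the example script uses):

```python
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"  # placeholder: your downloaded .eim model
IMAGE_PATH = "test.jpg"       # placeholder: any test image

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()
    img = cv2.imread(IMAGE_PATH)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # the runner expects RGB
    features, cropped = runner.get_features_from_image(img)
    res = runner.classify(features)

    # FOMO reports one box per (merged) grid cell; the centroid sits at
    # the centre of that box, so it can be recovered directly.
    for bb in res["result"].get("bounding_boxes", []):
        cx = bb["x"] + bb["width"] // 2
        cy = bb["y"] + bb["height"] // 2
        print(f"{bb['label']} ({bb['value']:.2f}): centroid = ({cx}, {cy})")
```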
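
And for point 2: if your images already come with COCO-style annotations, you can convert them into a bounding_boxes.labels file and upload everything with the uploader instead of re-labelling by hand. A minimal sketch, assuming the bounding-box labels schema from the ingestion service guide (all paths are placeholders, and it’s worth double-checking the exact schema against the guide):

```python
import json

COCO_JSON = "annotations/instances_train2017.json"  # placeholder path
OUT_FILE = "bounding_boxes.labels"                  # goes next to the images

with open(COCO_JSON) as f:
    coco = json.load(f)

categories = {c["id"]: c["name"] for c in coco["categories"]}
file_names = {img["id"]: img["file_name"] for img in coco["images"]}

# COCO stores boxes as [x, y, width, height] in pixels with a top-left
# origin, which matches what the labels file expects.
boxes = {}
for ann in coco["annotations"]:
    x, y, w, h = ann["bbox"]
    boxes.setdefault(file_names[ann["image_id"]], []).append({
        "label": categories[ann["category_id"]],
        "x": int(x), "y": int(y), "width": int(w), "height": int(h),
    })

with open(OUT_FILE, "w") as f:
    json.dump({"version": 1, "type": "bounding-box-labels",
               "boundingBoxes": boxes}, f)
```

Then upload the images together with the labels file; as far as I know, the uploader picks bounding_boxes.labels up from the same directory as the images.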