Running the FOMO detection method locally

I am trying to create the “live classification” feature locally using the run_object_detection_inference method from . Here’s how I call the method:

    import tensorflow as tf
    from numpy import asarray
    from PIL import Image

    interpreter = tf.lite.Interpreter(model_path="…")
    specific_input_shape = [320, 320, 1]
    y_data = [40, 40, 2]
    img = Image.open("…jpg")
    img = img.convert("L")  # convert to grayscale to match the single-channel input shape
    numpydata = asarray(img)

minimum_confidence_rating = 0.5
num_classes = 1

However, when running the code, I encounter a “list index out of range” error on the call interpreter.get_tensor(output_details[2])[0].tolist() in .
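That error usually means the model exposes fewer output tensors than the code indexes. A quick way to diagnose it is to list the output details before indexing a fixed position. This is a minimal, self-contained sketch: the tiny matmul model below is only a stand-in for the real FOMO .tflite file, which is not included here.

```python
import tensorflow as tf

# Stand-in model: one input, one output tensor (like a FOMO export).
@tf.function(input_signature=[tf.TensorSpec([1, 8], tf.float32)])
def tiny_model(x):
    return tf.matmul(x, tf.ones([8, 4]))

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [tiny_model.get_concrete_function()])
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

# Inspect every output tensor before assuming there are four of them.
output_details = interpreter.get_output_details()
print("number of output tensors:", len(output_details))
for d in output_details:
    print(d["index"], d["name"], d["shape"])

# Indexing output_details[2] on a single-output model is exactly the
# "list index out of range" failure mode described above, so guard it:
if len(output_details) > 2:
    third_output = interpreter.get_tensor(output_details[2]["index"])
```

If this prints only one output tensor for your FOMO model, then any code that assumes four outputs (as SSD-style detection code does) will fail in exactly this way.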

I would appreciate any help with this issue.

Hello @exodiakiller,

Can you share the link, please?
I am not sure which example you are referring to.



Thanks for the quick reply. I’m referring to , which I obtained from the “edit block locally” option. I have uploaded the files to GitHub: Fomodetection/ at main · Exodiakiller/Fomodetection · GitHub

do you have a standalone reproduction case that can be run? the out of bounds you mention could be triggered by a number of different things


I noticed that the output_details I get through interpreter.get_output_details() contains only one output tensor, named ‘StatefulPartitionedCall:0’. If I understand the code correctly, there should be a total of four output tensors. Have you been able to execute a FOMO model using the run_object_detection_inference method from ? Unfortunately, I don’t have an exact standalone reproduction case.

i’ve only ever run this piece within the context of studio. e.g. edit block locally, train model in a custom way, then reupload back into studio to then deploy to a specific target.

is there a reason you’re trying to roll your own inference piece? ( as opposed to using a deployment? )


I plan to extract the bounding boxes/centroids after detection in order to perform further recognition on the respective objects. My objects are so small that they fit within the circle.

The project is initially intended to run in Python and then on the ESP32.

Is there perhaps a sample project that runs the TFLite FOMO model in Python?
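On extracting crops around detected centroids for further recognition: one hypothetical approach is to map each FOMO grid cell back to pixel coordinates and crop a patch around it. This sketch assumes the 320×320 input and 40×40 output grid from the first post; the crop size and helper name are made up for illustration.

```python
from PIL import Image

INPUT_SIZE = 320                 # model input resolution (from the first post)
GRID_SIZE = 40                   # FOMO output grid (from the y_data shape above)
CELL = INPUT_SIZE // GRID_SIZE   # pixels per grid cell

def crop_around_cell(img, row, col, crop_size=32):
    """Crop a square patch centred on grid cell (row, col)."""
    cx = col * CELL + CELL // 2  # centre of the cell in pixel coordinates
    cy = row * CELL + CELL // 2
    half = crop_size // 2
    left = max(0, cx - half)
    top = max(0, cy - half)
    return img.crop((left, top, left + crop_size, top + crop_size))

# Synthetic grayscale image so the sketch runs without any files.
img = Image.new("L", (INPUT_SIZE, INPUT_SIZE), color=0)
patch = crop_around_cell(img, row=20, col=20)
print(patch.size)
```

Each patch could then be fed to a second classifier for the finer-grained recognition step.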

Hi @exodiakiller,

Thanks for using Edge Impulse! We only really provide this Python code for training; it’s not designed for efficient or convenient inference, and we recommend users stick to our typical deployment options (C++, etc.).

That said, if you are really set on doing FOMO inference in Python you can use the run_segmentation_inference() function for inspiration:

You won’t be able to use run_object_detection_inference() since that is for SSD models.

run_segmentation_inference() does a bunch of things, including evaluation against a labelled input. The important call for you is this one, which takes the raw output of the FOMO model and turns it into bounding boxes:

    # convert model output to list of BoundingBoxLabelScores including fusing
    # of adjacent boxes. retains class=0 from segmentation output.
    y_pred_boxes_labels_scores = convert_segmentation_map_to_object_detection_prediction(
        output, minimum_confidence_rating, fuse=True)

You can also use the output “raw” and treat it as a segmentation map showing class probabilities for each region of the image.
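To illustrate what treating the raw output as a segmentation map might look like, here is a numpy-only sketch. The map shape and class layout are assumptions based on the [40, 40, 2] output mentioned earlier (channel 0 as background), and the helper name is made up; unlike convert_segmentation_map_to_object_detection_prediction, this version does not fuse adjacent cells into boxes.

```python
import numpy as np

def segmentation_map_to_centroids(seg_map, min_confidence=0.5):
    """Turn an (H, W, num_classes + 1) segmentation map into a list of
    (row, col, class_id, score) centroids, skipping background (class 0)."""
    h, w, _ = seg_map.shape
    detections = []
    for row in range(h):
        for col in range(w):
            probs = seg_map[row, col]
            class_id = int(np.argmax(probs))
            score = float(probs[class_id])
            if class_id != 0 and score >= min_confidence:
                detections.append((row, col, class_id, score))
    return detections

# Tiny synthetic example: a 4x4 map with background + one object class.
seg = np.zeros((4, 4, 2), dtype=np.float32)
seg[..., 0] = 1.0          # everything is background...
seg[1, 2] = [0.1, 0.9]     # ...except one confident object cell
print(segmentation_map_to_centroids(seg))
```

Each returned (row, col) is a grid-cell position, so it still needs to be scaled back to input-image pixels if you want pixel-space centroids.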


@exodiakiller, have you seen this Python code?

Thank you for the response. I haven’t looked at it yet, but I will check it out.

Yes, I think it will accomplish what you want but using other higher level Edge Impulse routines.