Export FOMO-AD to ONNX

Hi,

Is there any way to export a FOMO-AD model in ONNX format?

thanks.


Hello @ricardo.salmazo,

No, currently we don’t support exporting FOMO-AD models to ONNX.
Feel free to let us know about your use case; I’m curious to hear more about it.

Best,

Louis

Hi Louis,

I’m trying to use FOMO-AD with a Hailo-8 processor on a Raspberry Pi 5, but I ran into some problems during the conversion from TFLite to HEF (the Hailo format): the Hailo tooling gets lost during quantization of the model. I already have a working model, but I had to cut the end of the network so that Hailo could finish the quantization, and the result wasn’t the same as on the Edge Impulse website. At least the processing time was fast, about 15 ms.
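
For reference, the conversion flow I’m using looks roughly like this with the Hailo Dataflow Compiler’s Python API (just a sketch: the method names follow the DFC tutorials and may differ between versions, and the end node and calibration file are placeholders):

```python
# Rough sketch of the TFLite -> HEF flow with the Hailo Dataflow Compiler.
# API names follow the DFC tutorials and may differ between versions.
import numpy as np
from hailo_sdk_client import ClientRunner

runner = ClientRunner(hw_arch="hailo8")

# Quantization gets lost on the full graph, so parse only up to an
# earlier node ("<cut_node>" is a placeholder for where I cut).
runner.translate_tf_model(
    "fomo_ad.tflite",
    "fomo_ad",
    end_node_names=["<cut_node>"],
)

calib_dataset = np.load("calib.npy")  # calibration images in the model's input shape
runner.optimize(calib_dataset)        # post-training quantization
hef = runner.compile()                # emit the compiled .hef

with open("fomo_ad.hef", "wb") as f:
    f.write(hef)
```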

OK, thanks for the context.

What input size are you using?
Did you compare running it on the RPi5 CPU vs. on the Hailo accelerator?

For some deployment options, we need to convert our models to ONNX.
I’ll check with our embedded team if they have any suggestions on the best approach.

Best,

Louis

FOMO-AD consists of two graphs, a body and a head. Which one did you cut?

Also, what pre-processing did you apply to your input image? In other words, what input feature format are you passing to the model?

Could you share a screenshot of the input/output of the new graph you generated in Netron, please?
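
In the meantime, one quick way to check what the exported .tflite expects is the standard TensorFlow Lite interpreter (a small sketch; the file name is assumed):

```python
# Print the name, shape, dtype, and quantization params the model expects.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="fomo_ad.tflite")
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    print(detail["name"], detail["shape"], detail["dtype"], detail["quantization"])
```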

Best,

Louis


Hi Louis,

I haven’t thoroughly compared CPU vs. Hailo yet, but on a Raspberry Pi 5 other models have taken about 350–400 ms with a 360×360 grayscale input.

I capture images at 1456×1088 in RGB and use OpenCV to resize and convert them to 360×360 grayscale.
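
The preprocessing is just a couple of OpenCV calls, roughly like this (the interpolation mode is my own choice, not something prescribed):

```python
# Resize the 1456x1088 RGB capture down to the 360x360 grayscale model input.
import cv2

frame = cv2.imread("capture.png")               # OpenCV loads frames as BGR
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # drop to a single channel
resized = cv2.resize(gray, (360, 360), interpolation=cv2.INTER_AREA)
```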

I extracted a subgraph at the output of “model/feature_extractor/AvgPool_2” (backbone output). My plan is to run the head on the CPU, feeding it with the features produced by Hailo.
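
Conceptually, the split I have in mind looks like this (a sketch only: the HailoRT inference call is stubbed out, and the head model filename and feature shape are assumptions):

```python
# Run the FOMO-AD head on the CPU with features produced by the Hailo backbone.
import numpy as np
import tensorflow as tf

def run_head_on_cpu(features: np.ndarray) -> np.ndarray:
    """Feed backbone features into the head subgraph via the TFLite interpreter."""
    interpreter = tf.lite.Interpreter(model_path="fomo_ad_head.tflite")  # assumed filename
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], features.astype(inp["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

# Stub standing in for the HailoRT call that returns the AvgPool_2
# features; the shape here is a made-up placeholder.
features = np.zeros((1, 11, 11, 96), dtype=np.float32)
scores = run_head_on_cpu(features)
```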

Ultimately, this is for a food-industry project where I capture images, run a detection model for the target product, crop the detected objects, and apply FOMO-AD to determine whether they pass or fail.