Image segmentation on the OpenMV H7 Plus camera

We are trying to use OpenMV for a space mission, “Munal - Nepal’s First High School Satellite”. For the camera mission, we are implementing classification (good/bad image) and then segmentation on the good images to identify their contents and append the results to a log. The log will be used as a reference for downlinking images afterward. For segmentation, four classes are specified: Space, Land, Sea, and Cloud.
Do you have any recommendations for successfully completing this mission?

Hello @nayanbakhadyo,

What a cool use case!
We currently don’t support segmentation as such, but there might be a workaround using FOMO without applying its post-processing.
Can you give more context or define what counts as a good or bad image?

Let me check with @matkelcey for the workaround.

Best,

Louis

A good image is simply an HD image taken by OpenMV that is good enough for social media. Yes, if I can get a class for every pixel, then I can calculate the coverage of each class over that image and keep a log entry per image (see the sketch at the end of this post). Using that log we will downlink the required images.
If there is a workaround, it would be very helpful.
Segmentation is a final milestone for full success in our camera mission.
I appreciate you giving attention to this matter. Thanks @louis.
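For reference, the coverage bookkeeping itself is straightforward once a per-pixel class map exists. A minimal sketch in plain Python (`class_map` is a hypothetical H x W list of class indices from whatever segmentation backend ends up working; the log format is just an illustration):

```python
# Hypothetical per-pixel class map: class_map[y][x] holds a class index.
CLASSES = ["Space", "Land", "Sea", "Cloud"]

def class_coverage(class_map):
    """Return the fraction of pixels assigned to each class."""
    counts = [0] * len(CLASSES)
    total = 0
    for row in class_map:
        for c in row:
            counts[c] += 1
            total += 1
    return [n / total for n in counts]

def append_log(path, image_name, coverage):
    """Append one CSV line per image: name, then per-class coverage."""
    with open(path, "a") as f:
        f.write(image_name + "," + ",".join("%.3f" % c for c in coverage) + "\n")

# Example usage:
# coverage = class_coverage(class_map)
# append_log("image_log.csv", "img_0042.jpg", coverage)
```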

So the workaround I am thinking of is using FOMO: the classes won’t be at the pixel level but at 1/8th of the image size (e.g. for a 128x128 image, the output will be a 16x16 grid).
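If you can get at the raw FOMO head output before post-processing, coverage can still be estimated per cell. A minimal sketch, assuming the output is available as a (rows, cols, num_classes) nested list of per-cell probabilities (for a 128x128 input with the default 1/8 output, rows = cols = 16; num_classes would include FOMO’s background class):

```python
# Sketch: collapse a raw FOMO heatmap into a coarse class grid and
# per-class coverage. `heatmap` is assumed to be a nested list of
# shape (rows, cols, num_classes) holding per-cell probabilities.

def cell_classes(heatmap):
    """Argmax each FOMO cell to get a (rows x cols) grid of class indices."""
    return [[max(range(len(cell)), key=lambda k: cell[k]) for cell in row]
            for row in heatmap]

def grid_coverage(grid, num_classes):
    """Fraction of cells assigned to each class."""
    counts = [0] * num_classes
    for row in grid:
        for c in row:
            counts[c] += 1
    cells = len(grid) * len(grid[0])
    return [n / cells for n in counts]
```

Each cell stands for an 8x8 pixel patch of the input, so per-class cell counts translate directly into approximate coverage.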

Best,

Louis

Thanks Louis. Is 1/8th the limit? Also, how accurate will it be for my use case?

@nayanbakhadyo Take a look at this doc that describes how to change the 1/8 output to another size.

Regarding FOMO accuracy:

  • FOMO likes:

    • similarly shaped objects
    • objects that fit into a single FOMO cell (1/8th of the input image size by default)
  • FOMO does not like:

    • oblong objects
    • many objects
    • many classes
  • FOMO does not yield true bounding boxes (only centroids), so MobileNet V2 SSD or YOLO are better choices where FOMO falls short.

Note that FOMO uses 30x less processing power and memory than MobileNet V2 SSD or YOLOv5.

Thank you MMarcial. I have studied the doc. But are there any updates coming soon that will make semantic segmentation possible?

I don’t see why semantic segmentation would not be possible using the BYOM feature of Edge Impulse.

I have not tried BYOM yet, but this sounds like a fun use case. If you have a known-good working model in TensorFlow/Keras, I would like to take your model code and follow this tutorial. I am very interested in what the SDK Profiler will return. It seems to me that since the underlying code must analyze each and every pixel, the model will run slowly on any resource-constrained microcontroller.
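For reference, profiling with the Edge Impulse Python SDK looks roughly like this (a sketch, assuming the `edgeimpulse` pip package; the device string and file name are placeholders, and `ei.model.list_profile_devices()` lists the valid targets):

```python
# Sketch of profiling a known-good Keras/TFLite model with the
# Edge Impulse Python SDK (pip install edgeimpulse).
import edgeimpulse as ei

ei.API_KEY = "ei_..."  # your project API key

# The device string below is an assumption; list the valid names with:
# print(ei.model.list_profile_devices())
result = ei.model.profile(model="segmentation_model.h5",
                          device="cortex-m7-216mhz")
print(result.summary())  # estimated RAM, ROM, and latency
```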

Hi @MMarcial, I tried BYOM, but the model output has to comply with either classification, regression, or object detection. My model (a simple RCNN) currently produces an output image with one channel per class, where each channel holds the probability of the corresponding class. The problem is that I cannot find a proper way to run inference with this model in OpenMV.
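To make that output format concrete, here is a rough Keras sketch of a per-pixel, one-channel-per-class head, plus the standard TFLite conversion step (layer sizes and the file name are placeholders, not the actual model from this thread):

```python
# Minimal sketch of a segmentation head whose output has one channel
# per class, softmaxed across channels. Not the actual RCNN model.
import tensorflow as tf

NUM_CLASSES = 4  # Space, Land, Sea, Cloud

inputs = tf.keras.Input(shape=(128, 128, 3))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
outputs = tf.keras.layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)  # output shape: (None, 128, 128, 4)

# Standard conversion to a .tflite file for the microcontroller.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("segmentation_model.tflite", "wb") as f:
    f.write(converter.convert())
```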

Can we set up a meeting to solve this? Our scheduled launch date is this July and we really want to execute this.

@shawn_edgeimpulse I think I am missing something here. There should not be a constraint if a BYOM model is returning a multi-class classification result. So I am currently at a loss as to how to solve this. Please advise.


Hi @nayanbakhadyo,

As the others mentioned, image segmentation is not supported in Edge Impulse at this time. I highly recommend training a segmentation model in TensorFlow, converting your model to a TFLite model file, and then using the TFLite interpreter in OpenMV with MicroPython: tf — Tensor Flow — MicroPython 1.19 documentation. The library is a bit tricky to use, but it should work for your needs.
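To sketch what that could look like on the OpenMV side (assumptions: a 128x128-input model saved as `segmentation_model.tflite` on the SD card, and the `tf.load()`/`segment()` calls from the linked 1.19-era docs; verify the names against your firmware version):

```python
# MicroPython sketch for the OpenMV H7 Plus: load a TFLite segmentation
# model and estimate per-class coverage from its output masks.
import sensor
import tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.B128X128)  # match the model's input size
sensor.skip_frames(time=2000)

net = tf.load("segmentation_model.tflite", load_to_fb=True)

img = sensor.snapshot()
# segment() returns one grayscale image per output channel/class;
# the mean brightness of each mask approximates that class's coverage.
masks = net.segment(img)
for i, mask in enumerate(masks):
    coverage = mask.get_statistics().mean() / 255.0
    print("class %d coverage: %.3f" % (i, coverage))
```

The mean-brightness trick assumes each output mask encodes per-pixel class probability in 0-255; check the model’s quantization and output scaling before trusting the numbers.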
