I am working on an embedded computer vision problem and want to run a U-Net model on an OpenMV Cam H7 Plus. I am able to train and quantize a model on my own machine, but when it comes time to deploy it, the OpenMV Cam fails to load the model.
When I visualize my .tflite file in Netron, it appears that I am only using supported ops, but the model still cannot be loaded by the OpenMV Cam.
This roadblock has me wanting to explore whether it’s possible to achieve this using Edge Impulse instead, since there is very good support for EI models on the OpenMV Cam.
To train a U-Net model I would need an original image and then an “image” of labels that correspond to each pixel of the original image. I already have a process to create labels for my images, but there does not seem to be a way to actually upload labels like this to EI. Does this mean it would not be possible to use EI to create a U-Net based model? Or are there any features that I missed that could help me do this?
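For concreteness, here is a minimal sketch (shapes and class count are illustrative, not from any real dataset) of what one segmentation training pair looks like: an input image plus a same-sized "image" of per-pixel class labels.

```python
import numpy as np

# Hypothetical example of a U-Net segmentation training pair.
# The input sample: a 32x32 single-channel image, float in [0, 1].
image = np.random.rand(32, 32, 1).astype(np.float32)

# The label "image": one integer class ID per pixel (3 classes assumed here).
labels = np.random.randint(0, 3, size=(32, 32), dtype=np.uint8)

# Every pixel of the input must have exactly one label.
assert image.shape[:2] == labels.shape
```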
I’ve never actually run image segmentation on Edge Impulse; I just tinkered with it briefly in a Colab script. I can’t think of a way to do per-pixel labels like that in Edge Impulse off the top of my head. Maybe @dansitu or @matkelcey would know how to do that?
The only thing I can maybe think of is a somewhat wonky image hack: turn the labels into pixel values and append the new “image” to the original image (i.e., if you have a 32x32 image, the combined “image” would be 32x64, with the top half being the training sample and the bottom half being your labels in pixel form). In Expert Mode, you could split the image to use only the top half as the input sample and the bottom half as your ground-truth label set. Super hacker-ish, but it’s what I’ve got for now :-/
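To illustrate the hack above, here’s a rough sketch in NumPy (function names and shapes are my own invention, and I’m assuming a single-channel image for simplicity): stack the label mask below the sample before upload, then split it back apart inside the training script.

```python
import numpy as np

def pack(image, mask):
    # image: (H, W) grayscale training sample.
    # mask:  (H, W) per-pixel integer class labels, encoded as pixel values.
    # Returns one combined (2H, W) "image": sample on top, labels below.
    return np.vstack([image, mask.astype(image.dtype)])

def unpack(packed):
    # Split the combined image back into (sample, labels) halves,
    # as you might do in Expert Mode before feeding the model.
    h = packed.shape[0] // 2
    return packed[:h], packed[h:]
```

The label values survive as long as the upload path doesn’t resize, compress lossily, or rescale the pixels; that’s the fragile part of this trick.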