Hello,
I’m planning to develop an image processing model using the OV5640 camera from OmniVision Technologies. It supports output formats including RAW RGB, RGB565/555/444, CCIR656, YUV422/420, and YCbCr422.
According to Edge Impulse’s docs, the Image block supports both RGB and Grayscale color depths, and the signal_t structure expects pixel data in RGB888 format; see GitHub - edgeimpulse/example-signal-from-rgb565-frame-buffer: Create an Edge Impulse signal_t struct from your RGB565 frame buffer to run ML on embedded cameras.
My questions are:
- Do the training and test datasets for the model also need to be in RGB888 format?
- Can I directly upload images in JPEG, PNG, or BMP formats?
- Once the model is deployed on the device, does the OV5640 data need to be converted (e.g., RGB565 to RGB888) before it is passed to the model, or will errors occur otherwise? (My rough understanding of the conversion is sketched after this list.)
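
For the third question, here is a minimal sketch of what I have in mind, based on the example-signal-from-rgb565-frame-buffer repo. The frame-buffer variable names and the assumption that the camera driver hands me a linear RGB565 buffer are mine, not from the repo; the only part taken from the Edge Impulse convention is that get_data fills floats holding packed 0x00RRGGBB pixels.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical globals filled by the camera driver (assumed, not from the repo):
// a linear RGB565 frame buffer and its pixel count (width * height).
static uint16_t *ov5640_fb       = nullptr;
static size_t    ov5640_fb_pixels = 0;

// signal_t get_data callback: the SDK requests `length` pixels starting at
// `offset`; each output float holds one pixel packed as 0x00RRGGBB.
static int rgb565_get_data(size_t offset, size_t length, float *out_ptr) {
    for (size_t i = 0; i < length; i++) {
        uint16_t p = ov5640_fb[offset + i];

        // Unpack the 5/6/5-bit channels.
        uint8_t r = (p >> 11) & 0x1F;
        uint8_t g = (p >> 5)  & 0x3F;
        uint8_t b =  p        & 0x1F;

        // Expand each channel to 8 bits by replicating the high bits.
        r = (r << 3) | (r >> 2);
        g = (g << 2) | (g >> 4);
        b = (b << 3) | (b >> 2);

        uint32_t packed = ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
        out_ptr[i] = (float)packed;
    }
    return 0;
}

// Intended usage with the Edge Impulse C++ SDK (left as comments so this
// snippet compiles standalone):
//
//   signal_t signal;
//   signal.total_length = ov5640_fb_pixels;
//   signal.get_data     = &rgb565_get_data;
//   ei_impulse_result_t result;
//   run_classifier(&signal, &result, false);
```

Is this the right general approach, or does the SDK/firmware handle the RGB565 to RGB888 conversion for me?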
Thanks!