Image Data Format

Hello,

I’m planning to develop an image processing model using the OV5640 camera from OmniVision Technologies. It supports the following output formats: RAW RGB, RGB565/555/444, CCIR656, YUV422/420, and YCbCr422.

According to the Edge Impulse docs, the Image block supports both RGB and Grayscale color depths, and the signal_t structure requires data in RGB888 format; see: GitHub - edgeimpulse/example-signal-from-rgb565-frame-buffer: Create an Edge Impulse signal_t struct from your RGB565 frame buffer to run ML on embedded cameras.

My questions are:

  1. Do the training and test datasets for the model also need to be in RGB888 format?
  2. Can I directly upload images in JPEG, PNG, or BMP formats?
  3. Once the model is deployed on the device, does the OV5640 data require format conversion (e.g., RGB565 to RGB888) before being analyzed by the model, or will errors occur otherwise?

Thanks!

Hi @dannydeng - thanks for reaching out! I’m double checking on a couple of things, but here is my initial response:

  1. Yes, you’ll want your training and test data to be in the same format that the model will process on device.

  2. You can directly upload JPEG and PNG images for sure. I thought we added BMP support but I tried uploading a .bmp image now and it failed. I’ll ask the team.

  3. Yes, conversion will be required on device. As mentioned in that example repo you referenced, the signal_t struct expects data to be laid out in the RGB888 format.
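
In case it's useful, here's a minimal standalone sketch of the RGB565-to-RGB888 bit expansion itself (this isn't the exact code from that repo; the function name and the little `main()` are just for illustration):

```cpp
#include <cstdint>
#include <cstdio>

// Expand a 16-bit RGB565 pixel into 8-bit R, G, B components.
// The 5- and 6-bit fields are scaled to the full 0-255 range by
// replicating their high bits into the low bits.
static void rgb565_to_rgb888(uint16_t pixel, uint8_t &r, uint8_t &g, uint8_t &b) {
    uint8_t r5 = (pixel >> 11) & 0x1F;
    uint8_t g6 = (pixel >> 5)  & 0x3F;
    uint8_t b5 =  pixel        & 0x1F;

    r = (r5 << 3) | (r5 >> 2);   // 5 bits -> 8 bits
    g = (g6 << 2) | (g6 >> 4);   // 6 bits -> 8 bits
    b = (b5 << 3) | (b5 >> 2);   // 5 bits -> 8 bits
}

int main() {
    // A pure red pixel in RGB565 is 0xF800; after expansion it should
    // come out as (255, 0, 0) in RGB888.
    uint8_t r, g, b;
    rgb565_to_rgb888(0xF800, r, g, b);
    printf("R=%u G=%u B=%u\n", r, g, b);  // prints R=255 G=0 B=0
    return 0;
}
```

The example repo then takes each expanded pixel and packs it into a single float (`(r << 16) | (g << 8) | b`) inside the signal_t get_data callback, which is the layout the classifier expects, so you don't need to allocate a full RGB888 frame buffer up front.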

Hope this helps!