I am capturing a single-channel grayscale uint8_t data array from the image sensor. I am using a quantized model, and run_classifier steps into the DSP code below. Why does it expect the 1-channel data packed as RGB888? It wastes 2x the memory of the grayscale image; we could just use the uint8_t value directly as the gray value.
ei_run_dsp.h: extract_image_features_quantized():
...
int32_t r = static_cast<int32_t>(pixel >> 16 & 0xff);
int32_t g = static_cast<int32_t>(pixel >> 8 & 0xff);
int32_t b = static_cast<int32_t>(pixel & 0xff);
// weighted luma sum, coefficients are Q16 fixed point
int32_t gray = (iRedToGray * r) + (iGreenToGray * g) + (iBlueToGray * b);
gray >>= 16; // drop the Q16 fraction, back to 0..255
gray += EI_CLASSIFIER_TFLITE_INPUT_ZEROPOINT;
if (gray < -128) gray = -128;
else if (gray > 127) gray = 127;
output_matrix->buffer[output_ix++] = static_cast<int8_t>(gray);
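For context: the classifier treats each image input value as one pixel packed as 0xRRGGBB, so a grayscale frame has to be presented in that layout before the snippet above can decode it. A minimal sketch of the packing (the helper name is hypothetical, not part of the SDK); since the three Q16 luma coefficients sum to roughly 1.0, a pixel with r == g == b decodes back to the original gray value:

#include <stdint.h>

// Hypothetical helper: pack an 8-bit gray value into the 0xRRGGBB
// layout that extract_image_features_quantized() decodes above.
// With r == g == b, the weighted sum yields the same gray value.
static inline uint32_t pack_gray_as_rgb888(uint8_t gray) {
    uint32_t v = gray;
    return (v << 16) | (v << 8) | v;
}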
It’s a good question. We do this so we have a consistent signal format across all sensor types. The memory overhead is not as bad as you might imagine, because you can do the conversion on the fly with the signal_t: your internal buffer stays 8-bit, and we dynamically convert pixels only when they are needed. See e.g. https://github.com/edgeimpulse/example-signal-from-rgb565-frame-buffer, which keeps an RGB565 framebuffer and converts on the fly. We already do this on fully supported dev boards to save RAM.
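For a grayscale capture, that means you can wrap the sensor buffer in a signal_t whose get_data callback expands each byte into the packed RGB888 value on the fly, so no RGB copy of the frame is ever allocated. A rough sketch in the spirit of the linked RGB565 example, assuming the standard C++ SDK signal_t/run_classifier API; the buffer name and dimensions are placeholders:

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Grayscale frame straight from the sensor (dimensions are placeholders).
static uint8_t gray_frame[96 * 96];

// signal_t callback: expand each gray byte to a packed RGB888 value
// on the fly, only for the pixels the DSP asks for.
static int gray_get_data(size_t offset, size_t length, float *out_ptr) {
    for (size_t i = 0; i < length; i++) {
        uint32_t v = gray_frame[offset + i];
        out_ptr[i] = (float)((v << 16) | (v << 8) | v);
    }
    return 0;
}

EI_IMPULSE_ERROR classify_frame(ei_impulse_result_t *result) {
    signal_t signal;
    signal.total_length = sizeof(gray_frame);
    signal.get_data = &gray_get_data;
    return run_classifier(&signal, result, false /* debug */);
}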