Question/Issue:
I’m processing 16-bit image data and converting each pixel to a packed RGB value (0x00RRGGBB), which is then stored as a float and passed to a classification model. The model is deployed as a C++ library generated from an Edge Impulse impulse. The DSP block expects its input as float values that internally represent RGB-packed pixels.
However, the model’s output doesn’t look correct, and I’m not sure whether the conversion step is handled properly.
Is there a recommended way to convert raw input data into RGB-packed float values for such models? Should any normalization, clamping, or filtering be applied before mapping to RGB? Also, is value-casting a uint32_t to float (as opposed to type-punning the bits) a reliable method here?
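To make the casting question concrete, here is a minimal sketch of the two options I mean (the 0x00A0B0C0 value is just an example): a value cast stores the pixel’s numeric value in the float, while type-punning would reinterpret the raw bits as an IEEE-754 float.

#include <cstdint>
#include <cstring>

uint32_t rgb_packed = 0x00A0B0C0;

// Value cast: the float holds the numeric value of the packed pixel.
// 0x00A0B0C0 == 10531008, which is exactly representable (< 2^24).
float as_value = static_cast<float>(rgb_packed);  // 10531008.0f

// Type-punning (what I am NOT doing): reinterpret the same 32 bits
// as an IEEE-754 float, which yields a meaningless number here.
float as_bits;
memcpy(&as_bits, &rgb_packed, sizeof(as_bits));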
Environment:
Controller: ESP32
Framework: FreeRTOS (VSCode)
Model Type: Classification model (Edge Impulse Impulse)
Deployment: C++ library
Language: C/C++
Code Snippet:
uint16_t raw_swapped = __builtin_bswap16(input[idx_in]);   // sensor samples arrive big-endian
float temp = raw_swapped;
float norm = (temp - min_C) / (max_C - min_C);             // min-max normalize to [0, 1]
norm = (norm < 0.0f) ? 0.0f : (norm > 1.0f) ? 1.0f : norm; // clamp out-of-range values
uint8_t gray = (uint8_t)(norm * 255.0f);                   // 8-bit gray level
uint32_t rgb_packed = ((uint32_t)gray << 16) | ((uint32_t)gray << 8) | (uint32_t)gray; // 0x00RRGGBB
output[idx_out] = static_cast<float>(rgb_packed);          // value cast: float carries the pixel's numeric value
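For context, here is a self-contained version of the loop the snippet lives in; convert_frame, WIDTH, HEIGHT, and the buffer names are placeholders I’ve made up for illustration, and min_C/max_C come from a per-frame min/max scan:

#include <cstddef>
#include <cstdint>

static constexpr size_t WIDTH  = 96;   // placeholder: actual frame width
static constexpr size_t HEIGHT = 96;   // placeholder: actual frame height

void convert_frame(const uint16_t *input, float *output,
                   float min_C, float max_C)
{
    for (size_t i = 0; i < WIDTH * HEIGHT; i++) {
        // Swap big-endian sensor samples to host byte order.
        uint16_t raw = __builtin_bswap16(input[i]);

        // Min-max normalize to [0, 1], then clamp against outliers.
        float norm = ((float)raw - min_C) / (max_C - min_C);
        norm = (norm < 0.0f) ? 0.0f : (norm > 1.0f) ? 1.0f : norm;

        // Map to an 8-bit gray level and replicate into R, G, B.
        uint8_t gray = (uint8_t)(norm * 255.0f);
        uint32_t rgb_packed = ((uint32_t)gray << 16) |
                              ((uint32_t)gray << 8)  |
                              (uint32_t)gray;

        // Value cast: the float carries the packed pixel's numeric value.
        output[i] = static_cast<float>(rgb_packed);
    }
}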
Additional Info:
Looking for any best practices, filters, or recommended preprocessing steps to ensure the data format is correct and the model performs well; the model was trained on grayscale PNG images. Any guidance would be appreciated.
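For completeness, this is roughly how I hand the converted buffer to the impulse, following the pattern from the Edge Impulse C++ deployment examples (ei_run_classifier.h, a signal_t with a get_data callback, run_classifier); the buffer and function names other than the SDK’s are mine:

#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Flat float buffer filled by the conversion loop above.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

void classify_frame(void) {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_feature_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
    if (err != EI_IMPULSE_OK) {
        ei_printf("run_classifier failed (%d)\n", err);
        return;
    }

    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        ei_printf("%s: %.5f\n", result.classification[i].label,
                  result.classification[i].value);
    }
}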