How to transform RGB888 for int8 input model

Hi team,
I have an image buffer of type RGB888 with shape (32,32,3), which is also my model's input size. The values in the image buffer are in the range (0,255).

I have seen that for an int8 model, each input pixel value should be subtracted by 127. But I'm confused by the quantization of the input as shown in the above image: it subtracts 128.

So how should I transform my RGB888 buffer?

Thanks & Regards,
Ramson Jehu K

Hi @Ramson,

From what I understand about the quantization process, the scale factor and zero-point are determined either during or after training, and they're used to map quantized input values (e.g. [0…255]) to a 0…1 scale. In your case, it looks like 128 is being added to each pixel value, and that result is being multiplied by ~0.00392 (≈ 1/255).

I’m guessing that your RGB888 data is an array of uint8 values. You should just need to convert each uint8 to int8 such that each pixel's R, G, and B values go from [0…255] to [-128…127]. For example:

(int8_t)((uint16_t)val - 128)

Hope that helps!

Hi @shawn_edgeimpulse,

I tried converting uint8 to int8 as you said, but my inference results are always incorrect.

What I'm trying to do is get an image from the sensor as RGB888 of size (1280,720,3), resize it to (32,32,3) since that's my model's input size, and convert it to int8 as you said. Then I run inference on it.

I do have a doubt: while training, I uploaded the 1280x720 JPEG image, and the raw features were generated for (32,32,3).

Is there any chance that the raw features generated by Edge Impulse and the raw data from my sensor are different? Is that causing the incorrect inference?

Please guide me through this.

Thanks

Hi @Ramson,

The Edge Impulse SDK will quantize your image for you (see ei_run_dsp.h: extract_image_features_quantized), given that you've exported a quantized (int8) TFLite model (e.g. the C/C++ library deployment). The only thing you have to do is convert the RGB888 (3-byte packed) buffer into a 4-byte packed (RGB) signal as follows:

int raw_features_get_data(uint32_t offset, uint32_t length, float *out_ptr)
{
    size_t out_ptr_ix = 0;
    offset *= 3;

    // read pixel by pixel
    while (length != 0) {

        // grab the RGB components from 3B packed RGB format
        uint8_t r = image_buffer_u8[offset];
        uint8_t g = image_buffer_u8[offset + 1];
        uint8_t b = image_buffer_u8[offset + 2];

        // then convert to out_ptr format: 4 Byte packed
        float pixel_f = (r << 16) + (g << 8) + b;
        out_ptr[out_ptr_ix] = pixel_f;

        // and go to the next pixel
        out_ptr_ix++;
        offset+=3;
        length--;
    }

    // and done!
    return 0;
}

Then you setup and call as follows:

while(1) {
    ei::signal_t signal;
    signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
    signal.get_data = &raw_features_get_data;

    capture_image(); // your image capture function

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR ei_error = run_classifier(&signal, &result, true);
    if (ei_error != EI_IMPULSE_OK) {
        ei_printf("Failed to run impulse (%d)\n", ei_error);
        break;
    }
}


Hi @rjames,

Why should we change RGB888 from 3-byte packed to 4-byte packed when the model input has only 3 channels?

I have downloaded the C++ library from the Deployment tab. In edge-impulse-sdk/classifier/ei_run_dsp.h, the function extract_image_features_quantized() contains the following:

if (channel_count == 3) {
    // fast code path
    if (EI_CLASSIFIER_TFLITE_INPUT_SCALE == 0.003921568859368563f && EI_CLASSIFIER_TFLITE_INPUT_ZEROPOINT == -128) {
        int32_t r = static_cast<int32_t>(pixel >> 16 & 0xff);
        int32_t g = static_cast<int32_t>(pixel >> 8 & 0xff);
        int32_t b = static_cast<int32_t>(pixel & 0xff);

        output_matrix->buffer[output_ix++] = static_cast<int8_t>(r + EI_CLASSIFIER_TFLITE_INPUT_ZEROPOINT);
        output_matrix->buffer[output_ix++] = static_cast<int8_t>(g + EI_CLASSIFIER_TFLITE_INPUT_ZEROPOINT);
        output_matrix->buffer[output_ix++] = static_cast<int8_t>(b + EI_CLASSIFIER_TFLITE_INPUT_ZEROPOINT);
    }
}

where EI_CLASSIFIER_TFLITE_INPUT_ZEROPOINT is -128.
As you can see, 128 is subtracted from each channel value (the zero-point -128 is added). They are not converting to 4-byte packed here.

The SDK runs the DSP pipeline before running inference. The input to the DSP pipeline is always floats, hence in firmware you need to repackage your data in that format; raw_features_get_data() above takes care of that. The SDK will then further convert the image into the format required by the NN.

Is your camera buffer already in 32x32 resolution? Or do you resize before calling run_classifier()?

My camera buffer is 1280x720. I'm resizing it to 32x32.
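For reference, a resize like that can be done with plain nearest-neighbor sampling. This is only a minimal sketch assuming a 3-byte packed RGB888 buffer; the function name `resize_rgb888_nn` is illustrative, not an SDK API:

```cpp
#include <cstdint>

// Nearest-neighbor resize of a 3-byte packed RGB888 buffer.
// src is src_w x src_h pixels, dst is dst_w x dst_h pixels.
void resize_rgb888_nn(const uint8_t *src, int src_w, int src_h,
                      uint8_t *dst, int dst_w, int dst_h) {
    for (int y = 0; y < dst_h; y++) {
        int sy = y * src_h / dst_h; // nearest source row
        for (int x = 0; x < dst_w; x++) {
            int sx = x * src_w / dst_w; // nearest source column
            const uint8_t *sp = &src[(sy * src_w + sx) * 3];
            uint8_t *dp = &dst[(y * dst_w + x) * 3];
            dp[0] = sp[0]; // R
            dp[1] = sp[1]; // G
            dp[2] = sp[2]; // B
        }
    }
}
```

Nearest-neighbor is the cheapest option for a constrained target; a smoothing filter (e.g. area averaging) would match the training-time resize more closely if quality matters.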

Can you let me know which function does that?

See the dataflow below (edited to show the raw_features_get_data output):

your image capture -> RGB 3byte 
    raw_features_get_data -> BGR 4 byte packed/float
        extract_image_features_quantized -> BGR int8
               NN

Hi @rjames,

Thanks for the explanation. One thing I want to know: after all the DSP processing, will the model's input format be RGB or BGR?

I'm not using the Edge Impulse SDK for now; I'm using the model alone. If I give the model input in RGB, the inference output is wrong, and when I give input in BGR format, the inference output is correct.

The output/input of the DSP is BGR.
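So if you bypass the SDK and feed the model directly, you need to reorder the channels yourself. A minimal in-place channel swap over a 3-byte packed buffer might look like this (the function name `swap_r_b` is illustrative):

```cpp
#include <cstddef>
#include <cstdint>

// Swap the R and B channels in place in a 3-byte packed buffer,
// turning RGB888 data into BGR888 (and vice versa).
void swap_r_b(uint8_t *buf, size_t n_pixels) {
    for (size_t i = 0; i < n_pixels; i++) {
        uint8_t tmp = buf[i * 3];      // R
        buf[i * 3] = buf[i * 3 + 2];   // R <- B
        buf[i * 3 + 2] = tmp;          // B <- R
    }
}
```

The swap is its own inverse, so the same function converts in either direction.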

I’ll update the dataflow above to be explicit about the order.

Thank you very much for the clarification @rjames. Appreciate your good work.
