EI C++ Library isn't quantized?

Question/Issue: When I load my EI model via tflite::GetModel(), why does the resulting interpreter->input(0)->type report float32?

Project ID:

Context/Use case: I know my model is quantized, and I can load it successfully with an older version of TensorFlow. For whatever reason, the bundled version of TensorFlow and the third_party libs isn’t able to load the same quantized model correctly: it seems to think the input is float32 when it should be int8.

Thanks!

model = tflite::GetModel(trained_tflite);
...
EI_TFLITE_RESOLVER
interpreter = new tflite::MicroInterpreter(model, resolver, tnt_tensor_arena, tnt_arena_size, er);
...
TfLiteStatus allocate_status = interpreter->AllocateTensors();
input = interpreter->input(0);
output = interpreter->output(0);
...
printf("TF input type %i\n",input->type);  //eh?

FWIW - Here’s a portion of my model_metadata.h

#define EI_CLASSIFIER_TFLITE_INPUT_DATATYPE         EI_CLASSIFIER_DATATYPE_INT8
#define EI_CLASSIFIER_TFLITE_INPUT_QUANTIZED        1
#define EI_CLASSIFIER_TFLITE_INPUT_SCALE            0.003921568859368563
#define EI_CLASSIFIER_TFLITE_INPUT_ZEROPOINT        -128
#define EI_CLASSIFIER_TFLITE_OUTPUT_DATATYPE        EI_CLASSIFIER_DATATYPE_INT8
#define EI_CLASSIFIER_TFLITE_OUTPUT_QUANTIZED       1
#define EI_CLASSIFIER_TFLITE_OUTPUT_SCALE           0.00390625
#define EI_CLASSIFIER_TFLITE_OUTPUT_ZEROPOINT       -128

Hi @stantonious,

The quantization is applied to the neural network weights only; the input tensor is still float32.

Aurelien

Thanks for the quick response! I’m a little confused, since EI_CLASSIFIER_TFLITE_INPUT_DATATYPE is set to EI_CLASSIFIER_DATATYPE_INT8.

My model was trained on grayscale images, and my device captures int8 grayscale. How can I convert my on-device int8 grayscale data into the float32 input tensor?
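For reference, here’s roughly what I’d expect the fill step to look like in either case, based on the scale/zero-point values from my model_metadata.h above. This is only a sketch: gray and n_pixels stand in for my capture buffer and frame size, it assumes the captured pixels are standard unsigned 8-bit values in [0, 255], and it assumes the usual TFLM headers are already included.

// Sketch only: fill the model input from an 8-bit grayscale frame.
// 'gray' and 'n_pixels' are placeholders for the device capture buffer.
void fill_input(TfLiteTensor* in, const uint8_t* gray, size_t n_pixels) {
  if (in->type == kTfLiteInt8) {
    // Quantized path: q = round(x / scale) + zero_point with x in [0, 1].
    // With scale ~1/255 and zero_point -128 (per model_metadata.h),
    // this reduces to pixel - 128.
    for (size_t i = 0; i < n_pixels; i++) {
      in->data.int8[i] = (int8_t)((int)gray[i] - 128);
    }
  } else if (in->type == kTfLiteFloat32) {
    // Float path: normalize the 8-bit pixel to [0, 1].
    for (size_t i = 0; i < n_pixels; i++) {
      in->data.f[i] = gray[i] / 255.0f;
    }
  }
}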

Thanks

Hi @stantonious,

My mistake, you’re correct about the int8 inputs and outputs.
I’ve forwarded your issue to our ML team, but we’re not sure how you’re running into it.

If you want to use TFLM and not the rest of the SDK, you should disable the EON Compiler, download our SDK, and use the TFLM version that’s in there. The EON Compiler is better in every way, so it’s best to just use it unless you have a specific reason not to.
If you’re trying to use the official TFLM, that’s not something we support, as we apply several changes to it.

Aurelien