Question/Issue: When I load my EI model via tflite::GetModel(), why does interpreter->input(0)->type come back as float32?
Project ID:
Context/Use case: I know my model is quantized, and it loads successfully with an older version of TensorFlow. For whatever reason, the newer version of TensorFlow and the third_party libs can't load the same quantized model correctly: it reports the input as float32 when it should be int8.
Thanks!
model = tflite::GetModel(trained_tflite);
...
EI_TFLITE_RESOLVER
interpreter = new tflite::MicroInterpreter(model, resolver, tnt_tensor_arena, tnt_arena_size, er);
...
TfLiteStatus allocate_status = interpreter->AllocateTensors();
input = interpreter->input(0);
output = interpreter->output(0);
...
printf("TF input type %i\n", input->type); // prints kTfLiteFloat32 (1) instead of the expected kTfLiteInt8 (9)
Thanks for the quick response! I'm a little confused, since EI_CLASSIFIER_TFLITE_INPUT_DATATYPE says INT8.
My model was trained on grayscale images, and my device captures int8 grayscale. How can I convert my on-device int8 grayscale data into a float32 input tensor?
My mistake you’re correct about the int8 inputs and outputs.
I’ve forwarded your issue to our ML team but we’re not sure how you’re running into this issue.
If you want to use TFLM rather than the rest of the SDK, you should disable the EON Compiler, download our SDK, and use the TFLM version that's bundled in there. The EON Compiler is better in every way, so it's best to just use it unless you have some special reason not to.
If you're trying to use the official upstream TFLM, that's not something we support, as we apply several changes to it.