Question/Issue: When I load my EI model via tflite::GetModel(), why is interpreter->input(0)->type reported as float32?
Context/Use case: I know my model is quantized, and it loads correctly with an older version of TensorFlow. For whatever reason, the current version of TensorFlow and the third_party libs can't load the same quantized model properly: the interpreter reports the input type as float32 when it should be int8.
model = tflite::GetModel(trained_tflite);
// ...
EI_TFLITE_RESOLVER
interpreter = new tflite::MicroInterpreter(model, resolver, tnt_tensor_arena, tnt_arena_size, er);
// ...
TfLiteStatus allocate_status = interpreter->AllocateTensors();
input = interpreter->input(0);
output = interpreter->output(0);
// ...
printf("TF input type %i\n", input->type); // eh? reports float32, expected int8