I am deploying a quantized time-series model through the C++ library. My input is initially float32. Do I need to manually quantize my input and then dequantize my output?
In the metadata file, EI_CLASSIFIER_TFLITE_INPUT_DATATYPE is EI_CLASSIFIER_DATATYPE_INT8, and the output datatype is the same. But result.classification[0].value is declared as a float in the SDK source, so I am a bit confused. Any help is appreciated.