Deployment: Do I need to quantize my input for quantized time-series models?

I am deploying a quantized time-series model through the C++ library. My input is initially float32. Do I need to manually quantize my input and then dequantize my output?

EI_CLASSIFIER_TFLITE_INPUT_DATATYPE is EI_CLASSIFIER_DATATYPE_INT8, and the output datatype in the metadata file is the same. But result.classification[0].value is declared in the generated files as a float, so I am a bit confused. Any help is appreciated.

Hello @LayanH,

No, you can pass your raw float data as the input (same format as the features in your processing block; see the image below). The SDK quantizes the input and dequantizes the output for you, so result.classification[0].value is already a float even though the model's tensors are int8.

[Image: Screenshot 2025-03-19 at 17.05.43]
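For reference, a minimal sketch of running inference with raw float32 data through the C++ SDK, based on the standard static-buffer example (the `features` buffer and the `raw_feature_get_data` callback name are just illustrative):

```cpp
#include <cstring>
#include <cstdio>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Raw float32 samples, in the same format as the processing block features
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback that hands slices of the float buffer to the classifier
static int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    // ... fill `features` with your raw float32 time-series samples here ...

    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &raw_feature_get_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false /* debug */);
    if (res != EI_IMPULSE_OK) {
        printf("run_classifier failed (%d)\n", res);
        return 1;
    }

    // Outputs are already dequantized: classification values come back as floats
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        printf("%s: %.5f\n", result.classification[ix].label,
               result.classification[ix].value);
    }
    return 0;
}
```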

Best,

Louis