Question/Issue:
Hi, I was wondering: if I am going to deploy my model as an optimized int8 model, should I use my original data, which is 12-bit in resolution, or trim my data down to 8 bits for training and testing?
Project ID:
742653
Context/Use case:
I would like to use an artificial neural network for an electronic nose. My sensors are 12-bit in resolution and my device is an ESP32-S3. For some reason, when I deploy a float32 model with ESP-IDF, it doesn't work, so I can only use the int8 model. Should my data be trimmed from 12 bits to 8 bits for training and testing? How will the resolution of the data affect model inference?
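To make the question concrete, here is a minimal sketch (plain Python, with hypothetical sensor values and assumed quantization parameters) of the preprocessing I have in mind: keep the full 12-bit data, scale it to floats for training, and let the int8 quantizer do the 8-bit mapping at deploy time.

```python
# Simulated 12-bit sensor readings (range 0..4095); values are hypothetical
raw = [102, 2048, 4095, 733]

# Keep the full 12-bit resolution: scale to float in [0, 1] for training.
# The int8 quantizer, not the preprocessing, then maps this range to int8.
x = [v / 4095.0 for v in raw]

# Sketch of what affine int8 quantization does internally,
# assuming quant params scale = 1/255, zero_point = -128 for a [0, 1] input
scale, zero_point = 1.0 / 255.0, -128
q = [max(-128, min(127, round(v / scale) + zero_point)) for v in x]

# Dequantize to see the effective 8-bit resolution after quantization
x_hat = [(qi - zero_point) * scale for qi in q]
err = max(abs(a - b) for a, b in zip(x, x_hat))
print(err)  # worst-case rounding error is about scale / 2
```

If this picture is right, trimming the data to 8 bits myself would just throw away precision twice; is that the correct way to think about it?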
Environment:
- Platform: ESP32S3
- Build Environment Details: ESP-IDF