Model testing, which version (int8/float32) is it testing?

I’ve been using Edge Impulse for my thesis and I’d like to know the testing results for both versions of the model: the quantized int8 version and the unoptimized float32 one. However, the model testing section of the platform doesn’t tell you which version it’s testing. I’m guessing it’s the quantized version. Is there any way we could test both versions?

David Córdova.


It’s float32 in Model testing. If you go to Deployment, the performance widget that appears when you click on C++ Library also shows the int8 metrics, so you can compare the two.
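For anyone curious why the int8 and float32 metrics can differ at all, here is a small standalone sketch (not Edge Impulse code, just an illustration with made-up values) of the affine quantization scheme that int8 deployment typically uses. Rounding each float32 value to one of 256 levels introduces a bounded error, which is what can shift the test metrics slightly between the two versions:

```python
import random

# Hypothetical illustration: simulate int8 affine quantization of
# float32-like values to show the rounding error it introduces.
random.seed(0)
weights = [random.gauss(0.0, 0.5) for _ in range(1000)]

lo, hi = min(weights), max(weights)
scale = (hi - lo) / 255.0                 # map the float range onto 256 int8 levels
zero_point = round(-128 - lo / scale)     # int8 code that corresponds to lo

def quantize(x):
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))         # clamp into the int8 range

def dequantize(q):
    return (q - zero_point) * scale

# Round-trip error per value is on the order of the step size (scale).
max_err = max(abs(x - dequantize(quantize(x))) for x in weights)
print(f"scale={scale:.6f}, max quantization error={max_err:.6f}")
```

The smaller the dynamic range of the tensor, the smaller `scale` is and the closer the int8 metrics stay to float32.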


Great! Thank you very much @janjongboom