Model testing, which version (int8/float32) is it testing?

Hello,
I’ve been using Edge Impulse for my thesis, and I’d like to know the testing results for both versions of the model: the quantized int8 version and the unoptimized float32 one. However, the Model testing section of the platform doesn’t tell you which version it’s testing. I’m guessing it’s the quantized version. Is there any way to test both versions?

Regards,
David Córdova.

Hi,

It’s float32 on Model testing. If you go to Deployment and click on C++ Library, the performance widget that shows up also gives you the int8 metrics, so you can compare the two.
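
If you want to verify the comparison yourself, and your project lets you export the .tflite files, you can also score both versions locally with the TensorFlow Lite interpreter. A minimal sketch, assuming a classification model and hypothetical test arrays `x_test` / `y_test`; the int8 input scaling is the part that usually trips people up:

```python
# Minimal sketch: score a .tflite model (float32 or int8) on a test set.
# x_test / y_test are hypothetical NumPy arrays (samples, labels).
import numpy as np
import tensorflow as tf

def accuracy(tflite_path, x_test, y_test):
    interpreter = tf.lite.Interpreter(model_path=tflite_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    correct = 0
    for x, y in zip(x_test, y_test):
        x = np.expand_dims(x, 0).astype(np.float32)
        if inp["dtype"] == np.int8:
            # int8 models expect inputs mapped into the quantized domain
            scale, zero_point = inp["quantization"]
            x = np.round(x / scale + zero_point).astype(np.int8)
        interpreter.set_tensor(inp["index"], x)
        interpreter.invoke()
        pred = interpreter.get_tensor(out["index"])[0]
        correct += int(np.argmax(pred) == y)
    return correct / len(x_test)

print("float32:", accuracy("model_float32.tflite", x_test, y_test))
print("int8:   ", accuracy("model_int8.tflite", x_test, y_test))
```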


Great! Thank you very much @janjongboom

Regarding “C++ Library you can also get the int8 metrics”: I’m not getting this option; it only shows “Unoptimized (float32)”.

I have to deploy to the OpenMV library and firmware.

OpenMV only supports quantized models, i.e. int8. Reference is here.


Is there any method to convert my existing model and make it compatible with this?
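
For reference, outside of Edge Impulse’s own deployment pipeline, the standard way to turn an existing float32 model into an int8 one is full-integer post-training quantization with the TensorFlow Lite converter. A minimal sketch, assuming you still have the original Keras model (`model`) and a few hundred representative input samples (`rep_samples`); both names are placeholders:

```python
# Minimal sketch of full-integer post-training quantization.
# "model" (a Keras model) and "rep_samples" (an iterable of real input
# samples used to calibrate quantization ranges) are placeholders.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield individual samples shaped like the model input, as float32.
    for sample in rep_samples:
        yield [np.expand_dims(np.asarray(sample, dtype=np.float32), axis=0)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to int8 ops so weights *and* activations are quantized,
# which is what int8-only targets such as OpenMV require.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

That said, selecting the OpenMV deployment in Edge Impulse should produce a quantized model for you, so the above is mainly useful if you’re working with the model outside the Studio.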