Did confidence calculation change?

After spending a month working on some other stuff I came back to this, made a new model, and updated our device with it. On the website everything is working extremely well and I routinely see confidence scores of 1.0. That also used to be the case for my previous model on the embedded system. However, now when I run the model on the device, confidence seems to max out quite a bit lower. It’s rare for me to see anything above 0.83 (although it does happen), but 0.83 is extremely common. One thing I noticed is that the MAF was removed from run_classifier_continuous (although it’s still present in the API, which is confusing). I don’t necessarily see why that would be the culprit, but nothing else obvious has changed…or at least, nothing else I understand :slight_smile:.

Hi @jefffhaynes,

We did indeed replace the MAF with a similar averaging filter using the same default parameters. In our tests we don’t see any difference in results, but let’s make sure the same holds for your project. Have you already tried running inference with the filter bypassed, i.e. passing enable_maf = false when calling ei_run_classifier_continuous()?
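
For reference, here is a minimal sketch of what that call could look like with the filter bypassed. The parameter order (debug, then enable_maf) and helpers like numpy::signal_from_buffer are from a typical Edge Impulse C++ SDK setup, so double-check against the ei_run_classifier.h in your deployed SDK version:

```cpp
// Minimal sketch of a continuous-inference call with the averaging filter
// bypassed; signatures can differ between SDK versions.
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static void classify_window(float *features, size_t feature_count) {
    signal_t signal;
    numpy::signal_from_buffer(features, feature_count, &signal);

    ei_impulse_result_t result = { 0 };

    // debug = false, enable_maf = false: report the raw per-window scores
    // instead of the smoothed (averaged) ones.
    EI_IMPULSE_ERROR err = run_classifier_continuous(&signal, &result, false, false);
    if (err != EI_IMPULSE_OK) {
        ei_printf("ERR: run_classifier_continuous returned %d\n", err);
        return;
    }

    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("%s: %.2f\n", result.classification[ix].label,
                  result.classification[ix].value);
    }
}
```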

Yes! Disabling the MAF did the trick. Not sure I fully understand why, but I’ll take it. Thanks!


@jefffhaynes Just to close the loop: we’re adding performance calibration to the Studio soon, which moves the configuration of the MAF parameters (and many more!) into a visual UI that lets you immediately see the impact these settings have. Stay tuned :slight_smile:
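
For some intuition on why the averaging affects the peak score: the filter reports a mean over the last few windows, so unless every window in the buffer scored 1.0, the reported confidence stays below 1.0. Here is a minimal, standalone illustration assuming an equally weighted average (the SDK’s actual filter may weight or window things differently):

```cpp
#include <cstdio>
#include <numeric>
#include <vector>

// Equally weighted moving average over the most recent per-window scores.
// Illustration only, not the SDK's actual filter implementation.
static float moving_average(const std::vector<float> &scores) {
    float sum = std::accumulate(scores.begin(), scores.end(), 0.0f);
    return sum / static_cast<float>(scores.size());
}

int main() {
    // Four consecutive windows: the model is certain on three of them,
    // less certain on one, so the averaged score tops out below 1.0.
    std::vector<float> window_scores = { 1.0f, 1.0f, 1.0f, 0.5f };
    std::printf("averaged confidence: %.2f\n", moving_average(window_scores));
    return 0;
}
```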
