On the model testing page, the "Set confidence thresholds" dialog claims the value reflects the minimum acceptable confidence. However, it appears to be exclusive rather than inclusive. For example, if you set this threshold to 1.0, no samples are considered valid, even samples with an associated confidence of exactly 1.0.
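To illustrate, the filter behaves as if it used a strict comparison rather than the inclusive one that "minimum acceptable confidence" implies. This is only a hypothetical sketch of the observed behaviour, not the product's actual code:

```python
# Hypothetical illustration of the observed behaviour (not the real implementation).
confidences = [0.95, 0.99, 1.0]
threshold = 1.0

# Exclusive comparison -- what the dialog appears to do:
# at threshold 1.0 nothing passes, even a sample with confidence 1.0.
exclusive = [c for c in confidences if c > threshold]
assert exclusive == []

# Inclusive comparison -- what "minimum acceptable confidence" suggests:
# a sample with confidence exactly 1.0 should pass.
inclusive = [c for c in confidences if c >= threshold]
assert inclusive == [1.0]
```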
Hello @jefffhaynes,
That’s an interesting point; I’ve actually never set a confidence threshold to 1.
I’m creating an internal ticket so our Core Engineering team can either modify this behaviour or help me understand why it was done this way.
I’ll let you know.
Best regards,
Louis