How does "Set Confidence Thresholds" affect deployed model

Hi all, I have a classification and anomaly detection model that I am looking to deploy to Arduino. In the model testing window I am able to set the confidence thresholds for my model. The explanation of the setting states: “This affects both live classification and model testing.”
So what I am wondering is: will changing this value alter how the model performs once deployed to my Arduino? And if not, how can I go about setting this threshold on the deployed model?
Thanks, Nathan

Hi @n.donaldson,

The “Set Confidence Thresholds” setting only affects model testing in Studio. If, for example, it is set to 0.7, then only confidence values above 0.7 for the target class (i.e. the class that matches the “ground truth” label) will be counted as a true positive. If the target class confidence is, say, 0.6 (even though that class matches the ground truth label), the sample is counted as “unknown” (and therefore a false negative).
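
To make that concrete, here is a minimal sketch of the counting logic described above (the variable names are illustrative, not actual Studio internals):

float threshold = 0.7f;         // the "Set Confidence Thresholds" value
float target_confidence = 0.6f; // model confidence for the ground-truth class

if (target_confidence >= threshold) {
    // counted as a true positive for that class
} else {
    // counted as "unknown", i.e. a false negative for that class
}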

None of this affects anything in deployment. To apply a threshold on-device, compare the returned confidence value against your own cutoff. For example:

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

int target_class = 1;   // index of the class you care about
float threshold = 0.7;  // your own confidence threshold

// features_signal is the signal_t you prepared from your sensor data
ei_impulse_result_t result = { 0 };
run_classifier(&features_signal, &result, false /* debug */);
// Only act on the prediction if its confidence clears your threshold
if (result.classification[target_class].value >= threshold) {
    ei_printf("target class identified\r\n");
}
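
Since your impulse also includes anomaly detection, the same pattern applies to the anomaly score: run_classifier() fills in result.anomaly, which you can compare against your own cutoff (the 0.3 below is just an example; tune it based on your model testing results):

// Higher scores mean the input looks more anomalous to the model
float anomaly_threshold = 0.3;  // example value, tune for your model
if (result.anomaly >= anomaly_threshold) {
    ei_printf("anomaly detected (score: %.3f)\r\n", result.anomaly);
}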

Hope that helps!

That’s great, thanks for your help!
Nathan


@shawn_edgeimpulse I disagree with you, at least if one is deploying a FOMO model. I documented here how a deployed FOMO model **is** affected by Set Confidence Thresholds, a setting within the Studio.

In that post I stated:

I understand that the EI Studio Model Testing page has a menu item, Set Confidence Thresholds, that actually controls the parameter EI_CLASSIFIER_OBJECT_DETECTION_THRESHOLD and is saved into model_metadata.h upon deployment. But how is it in any way obvious that this EI Studio setting will control the result of an inference via a call to run_classifier()?
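
To make the effect concrete: with FOMO, run_classifier() returns detections as bounding boxes, and, as far as I can tell, anything scoring below EI_CLASSIFIER_OBJECT_DETECTION_THRESHOLD has already been filtered out by the time the results reach your code. A rough sketch of tightening the cutoff further on-device (the 0.9 is just an example value):

float stricter_threshold = 0.9;  // example: tighter than the Studio setting
for (size_t i = 0; i < result.bounding_boxes_count; i++) {
    ei_impulse_result_bounding_box_t bb = result.bounding_boxes[i];
    if (bb.value >= stricter_threshold) {
        ei_printf("%s (%f) [x: %u, y: %u]\r\n", bb.label, bb.value, bb.x, bb.y);
    }
}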


I’ve also noticed that the smoothing function takes two parameters, classifier confidence and anomaly confidence. Is my understanding correct that these are the equivalent of the thresholds on the Model Testing tab?
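
For reference, I mean the ei_classifier_smooth_init() helper from the SDK; a minimal sketch of how I am calling it (the values are just examples):

ei_classifier_smooth_t smooth;
// keep 10 readings, require at least 7 to agree, 0.8 = min classifier
// confidence, 0.3 = max anomaly score (example values)
ei_classifier_smooth_init(&smooth, 10, 7, 0.8, 0.3);

run_classifier(&features_signal, &result, false);
const char *prediction = ei_classifier_smooth_update(&smooth, &result);
ei_printf("smoothed prediction: %s\r\n", prediction);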

@MMarcial Ah! That is something I did not know about FOMO, thank you!