Portenta FOMO object_weight to reduce false positives

I have a FOMO model that is too sensitive, so I am experimenting with the object_weight setting (default 100) in the advanced mode settings. Note that the implied background class has a weight of 1.0, compared to the default object_weight of 100.

In the code below I set object_weight to 5 to try to make the background reading a little stronger and get a few fewer false positives. Does anyone have any experience with this? I will know soon whether 5 was a good setting, but does anyone else have any suggestions?

model = train(num_classes=classes,
              learning_rate=LEARNING_RATE,
              num_epochs=EPOCHS,
              alpha=0.35,
              object_weight=5,
              train_dataset=train_dataset,
              validation_dataset=validation_dataset,
              best_model_path=BEST_MODEL_PATH,
              input_shape=MODEL_INPUT_SHAPE,
              lr_finder=False)
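To make the weighting concrete, here is a hedged sketch of how an object_weight-style class weight is commonly applied in a weighted cross-entropy loss. This is not the Edge Impulse implementation, just an illustration of the idea that background cells (weight 1.0) and object cells (weight = object_weight) contribute unequally to the per-cell loss:

```python
# Hypothetical illustration (NOT the actual Edge Impulse training code):
# a per-cell weighted cross-entropy where class 0 is the implied
# background (weight 1.0) and object classes get object_weight.

import math

def weighted_cell_loss(probs, true_class, object_weight=100.0):
    """Cross-entropy for one output cell, scaled by a class weight.

    probs      -- predicted probabilities per class, index 0 = background
    true_class -- ground-truth class index for this cell
    """
    weight = 1.0 if true_class == 0 else object_weight
    return -weight * math.log(probs[true_class])

# Background cell predicted correctly with 0.9 confidence: weight 1.0
bg_loss = weighted_cell_loss([0.9, 0.1], true_class=0)

# Object cell at the same confidence, but with object_weight=5 the
# penalty for getting it wrong is 5x larger than for background.
obj_loss = weighted_cell_loss([0.1, 0.9], true_class=1, object_weight=5.0)
```

Lowering object_weight shrinks the penalty for missing objects relative to background, which is why it nudges the model toward predicting background more often.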

A few minutes later…

It doesn’t seem to be doing anything dramatically different. It is possibly getting a few more background classifications over other FOMO objects, but nothing obvious. object_weight is typed as an integer, so I guess the lowest I can go is 1.

I am not sure what you mean by too sensitive.

Have you read this: Object Weighting?

Maybe changing the Cut Point will help with the sensitivity.
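As a sketch of what changing the cut point does, here is a minimal, hypothetical example of filtering FOMO detections by a minimum confidence. The detection format and the `cut_point` name are assumptions for illustration, not an actual SDK API:

```python
# Hedged sketch: raising the "cut point" (minimum confidence) to reduce
# sensitivity. The dict format and threshold name are illustrative only.

def filter_detections(detections, cut_point=0.5):
    """Keep only detections whose confidence meets the cut point."""
    return [d for d in detections if d["confidence"] >= cut_point]

dets = [
    {"label": "screw", "confidence": 0.92},
    {"label": "screw", "confidence": 0.55},  # borderline
    {"label": "screw", "confidence": 0.31},  # likely false positive
]

# With a higher cut point, only the strongest detection survives.
strong = filter_detections(dets, cut_point=0.8)
```

The upside over retraining with a different object_weight is that a cut point can be tuned after deployment without touching the model.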


object_weight being tagged as type int is a bug; it’s actually handled as a float (sorry, I’ll fix that). It’s never cast to an int, so it should be fine to pass an object_weight of, say, 0.5.

we usually see the best results from a value of ~100 for projects because the majority of output cells are the implied background class, e.g. something like …
[screenshot: FOMO output grid, mostly background cells]
… and we need to balance the implied background class against the objects of interest
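To see why background dominates, here is a purely illustrative back-of-the-envelope sketch (not Edge Impulse code; the grid size and cell counts are made-up assumptions) that counts object vs. background cells in a FOMO output grid and derives a balancing weight:

```python
# Illustrative sketch with assumed numbers: why the implied background
# class dominates a FOMO output grid, and the rough weight needed to
# balance it against the objects of interest.

GRID = 12                  # e.g. a 96x96 input with 8x8 stride -> 12x12 cells
total_cells = GRID * GRID  # 144 output cells per image
object_cells = 4           # cells covered by labelled objects (assumed)
background_cells = total_cells - object_cells  # 140 background cells

# A weight that roughly balances the two classes in the loss:
balancing_weight = background_cells / object_cells
```

With only a handful of object cells per image, a weight on the order of the default ~100 keeps the objects of interest from being drowned out by the background class.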

do you mind posting a couple of examples from your project showing some representative labels (like a screenshot from “data acquisition”), or DM me your project id? it might be something else…

cheers, mat
