Good Accuracy when Training, but Bad Accuracy on Model Testing using FOMO

Hi, I want to develop person detection using FOMO in Edge Impulse. When I trained the model with my dataset, I got good accuracy and an F1 score of 93.33%.

But when I test the model using Model testing, I get very bad accuracy across the whole test dataset, as shown in this image.

The strange thing is that when I checked a sample image in Model testing, the objects were detected with good accuracy, although the F1 score was still bad.
And after I deployed the model to the Arduino Portenta H7 in OpenMV, I got a lot of bounding-box noise, even with the threshold set to 0.9.
Does anyone have a solution? My project ID is 110681.
Thanks :slight_smile:

Hi @m4ri01,

Thank you for pointing this out. There appears to be a bug in Studio. If you select a particular sample (e.g. FudanPed00011) and click on “Show classification” under the 3-dots menu, you can see that it is actually being classified correctly. The “Model testing” page seems to show inaccurate results for object detection projects. I have filed a bug with our dev team.

In deployment, you will likely see a number of false positive hits with the limited dataset (a few hundred samples) used to train the model. FOMO seems to need many more samples than other object detection models for transfer learning; otherwise you will see false positives (I noticed this with our face detection demo). I recommend collecting training/test sample images that include your intended background (i.e. wherever you plan to deploy your device). This will help train the model to work better in your particular environment.
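On the noisy bounding boxes in OpenMV: besides collecting better data, you can raise the confidence cutoff and skip the background class on-device. Here is a minimal sketch along the lines of the standard Edge Impulse FOMO example for OpenMV; treat the exact `tf` module calls (`tf.load`, `net.detect`, the `thresholds` argument) and the 240x240 windowing as assumptions that may differ between OpenMV firmware versions and projects.

```python
# Minimal sketch, loosely following the Edge Impulse FOMO example for OpenMV.
# Assumptions: OpenMV firmware with the `tf` module, and "trained.tflite"
# deployed to the camera; API details may vary by firmware version.
import sensor, math, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))   # square crop to match FOMO's input size
sensor.skip_frames(time=2000)

net = tf.load("trained.tflite", load_to_fb=True)
min_confidence = 0.9               # per-cell confidence cutoff (0..1)

while True:
    img = sensor.snapshot()
    # detect() returns one detection list per output class; the thresholds
    # argument maps min_confidence onto the 0..255 score range.
    detections_per_class = net.detect(
        img, thresholds=[(math.ceil(min_confidence * 255), 255)])
    for class_idx, detections in enumerate(detections_per_class):
        if class_idx == 0:
            continue               # index 0 is FOMO's background class
        for d in detections:
            x, y, w, h = d.rect()
            # FOMO localizes centroids, not tight boxes, so draw the centre
            cx, cy = x + w // 2, y + h // 2
            img.draw_circle((cx, cy, 12), thickness=2)
```

If spurious detections persist even at a high threshold, that usually points to the model needing more (and more representative) training data rather than more post-filtering.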

Thanks @shawn_edgeimpulse for the information and for the advice that FOMO needs to be trained on a dataset with the intended background. I'll try to change the dataset in my project.
Besides that, I'm glad I could point out this bug in Studio. I hope it gets fixed soon :slight_smile:


Hi @m4ri01,

After chatting with one of our ML engineers, it seems that the inaccuracies witnessed during testing are something inherent to FOMO: the model predicts object centroids, and a predicted centroid will likely not perfectly match the centroid of the bounding box drawn by the user. As a result, the detection is flagged as incorrect even when it sits on the object.
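To illustrate the failure mode, here is a hypothetical centroid-based scoring rule (this is not Edge Impulse's actual Model testing code; the 8-pixel tolerance and the coordinates are invented for the example):

```python
# Hypothetical illustration of strict centroid scoring; not Edge Impulse's
# actual test code. Tolerance and coordinates are invented for the example.

def box_centroid(box):
    """Centroid of a ground-truth bounding box given as (x, y, w, h)."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def is_match(pred_centroid, gt_box, tolerance=8.0):
    """Count a prediction as correct only if its centroid lands within
    `tolerance` pixels of the box centroid on both axes."""
    px, py = pred_centroid
    gx, gy = box_centroid(gt_box)
    return abs(px - gx) <= tolerance and abs(py - gy) <= tolerance

# A person box drawn by the user, and a FOMO centroid that lands on the
# person but is offset from the box centre: the detection looks right on
# the sample image, yet a strict centroid test scores it as a miss.
gt_box = (40, 30, 60, 120)     # box centroid at (70, 90)
pred = (82, 95)                # on the object, but 12 px off in x
print(is_match(pred, gt_box))  # -> False
```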

For now (until a better test can be constructed), the best bet is to look at the individual test samples to see whether FOMO is detecting the objects correctly, and to test the model in an actual deployment.