I’m just looking for clarity on this subject. I think I understand the attached screen shot when it comes to data which isn’t clearly identifiable.
However, I’m using Edge Impulse to identify sirens within street and car noise. My model seems to be working well, so why should it be a problem in this scenario?
100% accuracy is a flag to watch out for: in a machine learning model it can be indicative of overfitting, where the model has learned the training data too well, including its noise and outliers.
This means it might perform very well on the training data but poorly on unseen data because it fails to generalize.
In the context of identifying sirens, even if the model performs well now, 100% accuracy should be approached with caution.
It’s vital to test the model on a diverse set of examples it hasn’t seen before to ensure it generalizes to real-world data. If it still performs well on those, that’s a good sign, but continuous monitoring and testing with new data are still recommended.
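As a rough illustration of what that check looks like outside Edge Impulse, here is a minimal scikit-learn sketch using synthetic data in place of real siren/street-noise features (the data, model choice, and sizes are all assumptions, not anything from your project). The point is the pattern: a model that scores near 100% on its training set but noticeably lower on held-out samples is showing the generalization gap that overfitting produces.

```python
# Hypothetical sanity check: compare training accuracy to held-out accuracy.
# A large gap is the classic signature of overfitting. Synthetic features
# stand in for real audio feature vectors here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))  # fake feature vectors (placeholder data)
# Labels depend weakly on one feature plus heavy noise, so a perfect
# training score is only achievable by memorizing that noise.
y = (X[:, 0] + rng.normal(scale=2.0, size=400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# An unconstrained decision tree memorizes the training set, noise included.
model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
train_acc = model.score(X_tr, y_tr)  # expect ~1.0 (memorized)
test_acc = model.score(X_te, y_te)   # expect noticeably lower

print(f"train={train_acc:.2f} test={test_acc:.2f}")
```

In Edge Impulse terms, the held-out evaluation is what the test set (as opposed to the training set) is for: if accuracy there stays high on recordings the model never trained on, the 100% training figure is less worrying.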