Just started to use this fantastic and amazingly elegant tool.
The goal is to use a Raspberry Pi 4 as a speed limit sign recognizer. I ran some model training sessions with 30 images per class, with only 2 classes for this test. I used Google Street View as the image source. My results are here: https://studio.edgeimpulse.com/studio/31244
The recognition level appears to be very low, although the images used for training should be good enough. Any suggestions on how to improve the recognition level?
Best regards, Rob.
I cloned your project and was able to get 77% training accuracy and 21% testing accuracy with the following changes. Past this, I'm not sure what else could be done to increase accuracy; perhaps @janjongboom could help:
- I went to Dashboard -> Rebalance Dataset.
- I moved items from the test set to the training set until there were 50 in the training set: 25 "40" signs and 25 "50" signs. This made the training/testing split closer to 80/20 than 70/30.
- Training cycles: 100, learning rate: 0.1, score threshold: 0.65 - experimenting with these could help accuracy.
- I selected the Unoptimized float32 model as the model version.
I would also recommend that you try making all of your bounding boxes the same size. Also, more data would definitely help.
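To illustrate the same-size bounding box suggestion, here is a minimal sketch that re-centers each labeled box into a box of a fixed size, clamped to the image bounds. The `(x, y, w, h)` pixel format, the 64 px target, and the 320x320 image size are all assumptions for the example; adapt them to your own labels:

```python
# Hypothetical helper: replace each labeled box with a fixed-size box
# centered on the original, clamped so it stays inside the image.

def normalize_box(x, y, w, h, target=64, img_w=320, img_h=320):
    """Return a target x target box centered on the original (x, y, w, h)."""
    cx, cy = x + w / 2, y + h / 2          # center of the original box
    nx = min(max(cx - target / 2, 0), img_w - target)  # clamp to image
    ny = min(max(cy - target / 2, 0), img_h - target)
    return (int(nx), int(ny), target, target)

print(normalize_box(10, 10, 30, 40))    # box near the corner -> (0, 0, 64, 64)
print(normalize_box(200, 200, 20, 20))  # box in the interior -> (178, 178, 64, 64)
```

You would run this over your exported labels before re-uploading, so every sign occupies a box of the same dimensions.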
Big thanks for these suggestions! I will use these tips and see if I can get better results.
@robhazes Quick tip: you can lower the score threshold on the NN Classifier page to see more detail (no need to retrain). That will give you quick insight into whether the model can find the object, just with too low a confidence. For example:
So there’s a correct box, just with low confidence. More data helps there (or lower the threshold).
You can set it to 0.01 to see all boxes (there are always 10 predicted):
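The threshold works as a simple post-inference filter over the fixed set of predicted boxes. A minimal sketch (the `(label, score)` predictions below are made-up example values, not output from this project):

```python
# Sketch: keep only predicted boxes whose confidence meets the threshold.

def filter_boxes(predictions, threshold):
    """Return the predictions with score >= threshold."""
    return [p for p in predictions if p["score"] >= threshold]

# Hypothetical predictions from one image
preds = [
    {"label": "50", "score": 0.72},
    {"label": "40", "score": 0.31},
    {"label": "50", "score": 0.05},
]

print(len(filter_boxes(preds, 0.65)))  # default-ish threshold -> 1 box
print(len(filter_boxes(preds, 0.01)))  # near-zero threshold -> all 3 boxes
```

This is why dropping the threshold to 0.01 reveals every predicted box, including the low-confidence ones that the default threshold hides.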
I already got better results just by moving the few test images to the training set, so I guess I just have to gather more training data to get where I want. This also gives good insight into why some images get the low scores they do.