I’ve noticed that sometimes (especially on the MFCC audio classification models), stopping at 30 epochs gives a better model (i.e. more accurate on unseen data) than 50 or 100, which makes me think that the model is overfitting after about 30.
While I could look at the loss and accuracy values to get an idea of where the overfitting starts, I often find it easier to look at a plot of epochs vs. loss/accuracy and see where the validation and training losses diverge. If possible, I would love to have some plots (matplotlib or TensorBoard-style) to help me spot overfitting.
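Something like this is what I have in mind (a minimal matplotlib sketch, assuming a Keras-style `history` object returned by `model.fit()`; the function name is just illustrative):

```python
import matplotlib.pyplot as plt

def plot_history(history):
    """Plot training vs. validation loss so the divergence point is visible."""
    epochs = range(1, len(history.history["loss"]) + 1)
    plt.plot(epochs, history.history["loss"], label="training loss")
    plt.plot(epochs, history.history["val_loss"], label="validation loss")
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.legend()
    plt.show()

# Overfitting typically starts where the curves diverge: training loss
# keeps falling while validation loss flattens out or starts rising.
```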
Yep, great suggestion - we’re tracking it as part of some AutoML work we have in progress.
According to @dansitu the model with the lowest loss should automatically be picked, so training too long should not matter too much - but he’s on PTO so I can’t ask him.
I’m back now! Just confirming that Jan is correct: we’ll always pick the model with the lowest validation loss. This helps avoid overfitting the training dataset, but it won’t do much to avoid overfitting the validation dataset if you are iteratively tweaking the model to optimize validation performance.
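If you want the same behavior in your own training code, Keras’s ModelCheckpoint callback is a standard way to get it (a toy sketch, not how our pipeline is implemented internally; the data, model, and filename below are placeholders):

```python
import numpy as np
import tensorflow as tf

# Toy stand-ins so the sketch runs end to end; substitute your own data/model.
X = np.random.rand(200, 13).astype("float32")   # e.g. 13 MFCC coefficients
y = np.random.randint(0, 2, size=(200,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Save weights only when validation loss improves, so the saved model is
# always the epoch with the lowest val_loss -- even if you train too long.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.keras", monitor="val_loss", save_best_only=True, mode="min")

model.fit(X, y, validation_split=0.2, epochs=100, callbacks=[checkpoint])
```

The EarlyStopping callback with `restore_best_weights=True` is a similar option that also cuts the run short once validation loss stops improving.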
As Jan mentioned, we have some pretty exciting stuff coming that should improve this part of the workflow, so stay tuned!
For now, you can always export a Jupyter notebook and use matplotlib or TensorBoard directly.
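For example, with a Keras-style training loop you can log to TensorBoard like this (a sketch; the log directory name is arbitrary):

```python
import tensorflow as tf

# Write per-epoch loss/accuracy logs that TensorBoard can display.
tensorboard = tf.keras.callbacks.TensorBoard(log_dir="logs")

# Pass it to the existing training call in the exported notebook:
#   model.fit(..., callbacks=[tensorboard])
#
# Then, in a notebook cell, launch TensorBoard inline:
#   %load_ext tensorboard
#   %tensorboard --logdir logs
```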