I’m trying to add a custom learning rate scheduler using “advanced” mode for an NN block.
Because of a bug in TF2, this is not possible. I get the following error after training completes:
“ValueError: Unknown decay: DoubleCosineDecay!”
I believe that after training the NN, you try to read back the “best” epoch using something like `model = tf.keras.models.load_model('saved-models/model112')`.
If you change that to `model = tf.keras.models.load_model('saved-models/model112', compile=False)` and then call `model.compile()` manually, it should work fine.
(I run into this problem all the time in Jupyter notebooks with TF, and this is the currently accepted workaround.)
Thanks for your feedback!
You’re right about reading back the best epoch. One quick way to bypass it is to remove `callbacks=callbacks` from the `model.fit()` call, though this means the best model will not be loaded.
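For illustration, a self-contained sketch of that bypass using a tiny stand-in model (the layer shapes and data here are made up; the real model comes from the NN block’s generated training code):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; the real one is generated by the NN block.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, input_shape=(4,)),
])
model.compile(optimizer='adam', loss='mse')

x = np.random.rand(8, 4).astype('float32')
y = np.random.rand(8, 2).astype('float32')

# No callbacks=callbacks here: with the checkpoint callback gone,
# load_model() is never called, so the unknown-decay error never fires --
# but you keep the last epoch's weights, not the best epoch's.
model.fit(x, y, epochs=1, verbose=0)
```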
Maybe @dansitu has a better way to handle this case (overriding the `load_model` function?).
Hi @nigelcroft, thank you for the bug report! Unfortunately there’s no good way to hack around it in the advanced editor, but we’ll work on a fix for this issue ASAP.