Questions about Transfer learning Mechanism

I’m trying to build a model with transfer learning in an IPython notebook.
Actually, I’m not an expert in deep learning.
I’m an embedded engineer who wants to test models on the board.

I understand that fine-tuning in transfer learning usually freezes the front layers of the pre-trained model and trains the parameters of the back layers from scratch.
However, the code was different from what I expected.
The fine-tuning ran for only 10 epochs.
I don’t understand how the number of training cycles and the learning rate I chose in visual (simple) mode affect this.

Please explain.

@dansitu can you answer here?

In our transfer learning scheme we run two training cycles. The main one trains the final layer of the model from scratch with a high learning rate. The “fine tuning” cycle then uses a smaller learning rate to update a subset of the earlier layers. All of this is configurable in the script via FINE_TUNE_EPOCHS and FINE_TUNE_PERCENTAGE.
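To make the two cycles concrete, here is a minimal Python sketch of that schedule. The layer names, learning-rate values, and the rule mapping `FINE_TUNE_PERCENTAGE` to a layer count are illustrative assumptions, not the exact implementation in the script:

```python
# Sketch of the two-cycle transfer-learning schedule described above.
# All values and the percentage-to-layer-count rule are illustrative
# assumptions, not the tool's exact implementation.

FINE_TUNE_EPOCHS = 10        # epochs for the second (fine-tuning) cycle
FINE_TUNE_PERCENTAGE = 65    # % of layers (counted from the end) to unfreeze

def split_trainable(layers, fine_tune_percentage):
    """Return (frozen, trainable) layer lists for a training cycle."""
    n_unfrozen = int(len(layers) * fine_tune_percentage / 100)
    cut = len(layers) - n_unfrozen
    return layers[:cut], layers[cut:]

base_layers = [f"layer_{i}" for i in range(20)]  # stand-in pre-trained model

# Cycle 1: the whole base model is frozen; only the new final layer
# is trained, from scratch, with a relatively high learning rate.
frozen, trainable = split_trainable(base_layers, fine_tune_percentage=0)
# train(trainable + ["new_head"], learning_rate=0.005, epochs=main_epochs)

# Cycle 2: unfreeze the last FINE_TUNE_PERCENTAGE of the base layers
# and continue training with a much smaller learning rate.
frozen, trainable = split_trainable(base_layers, FINE_TUNE_PERCENTAGE)
# train(trainable + ["new_head"], learning_rate=0.00005,
#       epochs=FINE_TUNE_EPOCHS)
print(len(trainable))  # number of base layers unfrozen in cycle 2
```

The key point is that the second cycle updates only the back portion of the pre-trained layers, and does so gently (small learning rate, few epochs) so the pre-trained features aren’t destroyed.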

Note that right now transfer learning doesn’t work in an iPython Notebook since we don’t provide the model weights. I’m working on a fix for this currently and it should be working by the end of the week!


Thank you for your reply @dansitu!! :blush: