ModelProto is an in-memory container that the Python SDK does not accept; the SDK only supports loading .onnx files. You need to save your ONNX model to a file first (e.g. my_model.onnx) before using it with the Python SDK. For example, if my_model is a ModelProto object, you can save it with:
I tried the code in Google Colab and it worked for me as well. But when I ran it locally (in a .ipynb file), it didn’t work; it kept loading for maybe an hour. Should I always work in Google Colab?
Uploading pre-trained model!
The project is not even created in jobs.
No problem, I will just upload the model to your site next time (but I think you should check the problem with running the profile function locally).
Besides, are LSTM models not supported for the moment?
I wanted to follow up on your suggestion. I think it may work when the model can be fetched with torch.hub.load, but what if I need to load a custom model? I’ve written a function called load_pretrained that may be useful:
import torch

def load_pretrained(model, pretrained, device='cpu'):
    # Load the checkpoint; some checkpoints wrap the weights under 'state_dict'
    pretrained_dict = torch.load(pretrained, map_location=device)
    if 'state_dict' in pretrained_dict:
        pretrained_dict = pretrained_dict['state_dict']
    model_dict = model.state_dict()
    # Strip a 6-character key prefix (e.g. "model.") and keep only entries
    # whose names and shapes match the target model
    pretrained_dict = {k[6:]: v for k, v in pretrained_dict.items()
                       if k[6:] in model_dict and v.shape == model_dict[k[6:]].shape}
    model_dict.update(pretrained_dict)
    model.load_state_dict(model_dict, strict=False)
    return model
This function loads a checkpoint from the given path and merges into the passed-in model only those weights whose (prefix-stripped) names and shapes match.
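For completeness, here is a self-contained sketch of how I would call it. The checkpoint path and the 6-character "model." key prefix are assumptions chosen to match the k[6:] slicing; the helper is repeated so the snippet runs standalone:

```python
import torch
import torch.nn as nn

# Repeating the helper so this snippet runs on its own
def load_pretrained(model, pretrained, device='cpu'):
    pretrained_dict = torch.load(pretrained, map_location=device)
    if 'state_dict' in pretrained_dict:
        pretrained_dict = pretrained_dict['state_dict']
    model_dict = model.state_dict()
    # k[6:] strips a 6-character prefix such as "model."
    pretrained_dict = {k[6:]: v for k, v in pretrained_dict.items()
                       if k[6:] in model_dict and v.shape == model_dict[k[6:]].shape}
    model_dict.update(pretrained_dict)
    model.load_state_dict(model_dict, strict=False)
    return model

# Hypothetical checkpoint whose keys carry a "model." prefix
net = nn.Linear(4, 2)
ckpt = {'state_dict': {'model.' + k: v for k, v in net.state_dict().items()}}
torch.save(ckpt, 'checkpoint.pth')

# A fresh model picks up the matching weights from the checkpoint
fresh = nn.Linear(4, 2)
fresh = load_pretrained(fresh, 'checkpoint.pth')
```

After the call, fresh carries the checkpoint weights; any keys that fail the name or shape check are silently skipped, which is the point of strict=False.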