Python SDK with ONNX model


Project ID:

Context/Use case:
I am trying to use the Python SDK to profile an ONNX model. When I use the ei.model.profile() function, I get this error:

Could not profile: Was unable to load_model of type <class 'onnx.onnx_ml_pb2.ModelProto'> with exception Unexpected model type

Hi @Bouhmid,

ModelProto is an in-memory container that is not supported by the Python SDK; the SDK only supports loading .onnx files. You need to save your ONNX model to a file first (e.g. my_model.onnx) before using it with the Python SDK. For example, if my_model is an ONNX ModelProto object, save it first with onnx.save(my_model, "my_model.onnx")

Then you can call:

profile = ei.model.profile(model='my_model.onnx', device='cortex-m4f-80mhz')

Hope that helps!

I saved my model to an ONNX file and then passed it to the profile function:

    profile = ei.model.profile(model= './my_model.onnx',

I got this error:

Could not profile: No uploaded model yet

Hi @Bouhmid,

Is there any way you could share your .onnx file so we can try to replicate the error?



Feel free to also share your project ID so we can check the logs if needed.



Hello @Bouhmid, did you solve the problem?

Hello @HasOut,
No, this problem is still unresolved. I forwarded my model to the Edge Impulse team and I am waiting for their response.

I’m facing the same problem. Please let me know if you have any updates. Thank you.

Hello @shawn_edgeimpulse @louis,

Are there any updates?
I have tried with a simple pre-trained AlexNet model using PyTorch and I am facing the same issue!

Hello @Bouhmid,

I am trying to replicate your issue going from AlexNet in PyTorch to ONNX to Edge Impulse.
I am using this Colab: Google Colab

I can upload the model to EI.

However, I now have an issue with the input format (NCHW vs NHWC).
I’m checking internally.



I tried the code in Google Colab and it also worked for me. But when I tried it locally (.ipynb file), it didn’t work (it kept loading for maybe an hour). Should I always work in Google Colab?

Thank you,

Hello @Bouhmid,

No, it should work the same way.
Which part is loading for an hour?

For the input format, we should convert it to the right format when receiving the ONNX file.
I am checking with the team where the issue comes from.
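For anyone hitting the layout mismatch in the meantime: PyTorch/ONNX models typically take NCHW input, while image data is often laid out as NHWC, and the data itself can be rearranged with a simple transpose. A quick NumPy illustration (not the internal Edge Impulse conversion, just the axis shuffle involved):

```python
import numpy as np

# A single RGB image in NCHW layout: (batch, channels, height, width).
x_nchw = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Move channels last to get NHWC: (batch, height, width, channels).
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))

print(x_nhwc.shape)  # (1, 224, 224, 3)
```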



It hangs at the “Uploading pre-trained model!” step.
The project is not even created in the jobs list.
No problem, I will just upload the model on your site next time (but I think you should check the problem with running the profile function locally).

Besides, are LSTM models not supported for the moment?

For the local profiling, it is solved!

I just want to confirm that LSTM models are not yet supported!

Thank you,

Hello @Bouhmid,

I replied in the other thread: Are LSTM models deployable using Python SDK? - #3 by louis




Hey @louis ,

I wanted to follow up on your suggestion. I think it may work when using the torch.hub.load function, but what if I need to load a custom model? I’ve written a function called load_pretrained that may be useful:

def load_pretrained(model, pretrained):
    pretrained_dict = torch.load(pretrained, map_location=device)
    if 'state_dict' in pretrained_dict:
        pretrained_dict = pretrained_dict['state_dict']
    model_dict = model.state_dict()
    # Strip the checkpoint key prefix (first 6 characters, e.g. "model.")
    # and keep only entries whose name and shape match the target model.
    pretrained_dict = {k[6:]: v for k, v in pretrained_dict.items() if (k[6:] in model_dict and v.shape == model_dict[k[6:]].shape)}
    # Merge the filtered weights into the model's state dict before loading;
    # without this update, the pretrained weights are never applied.
    model_dict.update(pretrained_dict)
    model.load_state_dict(model_dict)

    return model

This function loads a pretrained model from a given path and merges the weights with the model that’s passed in as an argument.

Let me know what you think!