Uploading .onnx model (converted from .pth) fails

Question/Issue:
Converting ONNX model failed

Project ID:
gb0048-project-1

Context/Use case:

I am using this repository
https://github.com/sithu31296/PyTorch-ONNX-TFLite

to run the following script, which converts my model from .pth to .onnx:

import torch
from sed_demo.models import Cnn9_GMP_64x64

# Load the PyTorch model
num_audioset_classes = 527 # number of classes the model was trained on
model = Cnn9_GMP_64x64(num_audioset_classes)
checkpoint = torch.load(r"C:\Users\bibbo\PyTorch-ONNX-TFLite\Cnn9_GMP_64x64_300000_iterations_mAP=0.37.pth", map_location='cpu')
model.load_state_dict(checkpoint["model"])
model.eval() # set the model to evaluation mode

# Define a dummy input for the ONNX export.
dummy_input = torch.rand((32, 64, 64))

# Export the model to an ONNX file
torch.onnx.export(model, dummy_input, "model.onnx")
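
As a quick sanity check before uploading, the input shape baked into the exported file can be inspected with the onnx package (a minimal sketch, assuming onnx is installed; for the script above it should report [32, 64, 64]):

import onnx

# Load the exported model and print the shape of every graph input
m = onnx.load("model.onnx")
for inp in m.graph.input:
    dims = [d.dim_value if d.dim_value else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)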

The conversion appears to complete successfully, and I am able to choose the model in Studio. However, when I select Upload file, I get the following message:

Creating job... OK (ID: 9105449)

Scheduling job in cluster...
Container image pulled!
Job started
Converting ONNX model...
Scheduling job in cluster...
Container image pulled!
Job started
INFO: No representative features passed in, won't quantize this model

Trying conversion using onnx2tf...
Grabbing input information...
Grabbing input information OK

Conversion using onnx2tf failed, trying conversion using onnx-tf...
ERROR: Could not load albumentations library. This is a known issue during unit tests on M1 Macs (#3880) and will prevent object detection augmentation from working. 
Original error message:
 No module named 'albumentations'
Traceback (most recent call last):
  File "/app/convert-via-tf-onnx.py", line 68, in <module>
    ei_tensorflow.onnx.conversion.onnx_to_tflite(args.onnx_file, file_float32, file_int8,
  File "/app/./resources/libraries/ei_tensorflow/onnx/conversion.py", line 73, in onnx_to_tflite
    raise Exception('Expected an input shape with batch size 1, but got: ' + json.dumps(input_shape) + '. ' +
Exception: Expected an input shape with batch size 1, but got: [32, 64, 64]. If you have symbolic dimensions or dynamic shapes in your network, see https://onnxruntime.ai/docs/tutorials/mobile/helpers/make-dynamic-shape-fixed.html#making-a-symbolic-dimension-fixed to make these fixed first.

Conversion using onnx-tf also failed, cannot use this ONNX file - contact your solutions engineer, or post the logs on the forum.
Application exited with code 1

Converting ONNX model failed, see above

Job failed (see above)

Hi @gb0048,

Edge Impulse expects the input shape of your model to have a batch size of 1 (or no batch dimension at all). If you are working with grayscale images (which I am assuming you are), the per-sample input shape should be (64, 64); the exception above is triggered by the (32, 64, 64) dummy input used during export.
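
Re-exporting with a dummy input that has a batch dimension of 1 should get past that check. A minimal sketch based on the script above (assuming Cnn9_GMP_64x64 accepts a single (64, 64) input with a leading batch dimension of 1; adjust the checkpoint path to yours):

import torch
from sed_demo.models import Cnn9_GMP_64x64

# Rebuild the model exactly as before
num_audioset_classes = 527
model = Cnn9_GMP_64x64(num_audioset_classes)
checkpoint = torch.load("Cnn9_GMP_64x64_300000_iterations_mAP=0.37.pth", map_location='cpu')
model.load_state_dict(checkpoint["model"])
model.eval()

# Batch size 1 instead of 32, so the exported graph gets a fixed [1, 64, 64] input
dummy_input = torch.rand((1, 64, 64))

torch.onnx.export(model, dummy_input, "model.onnx")

The exported file's input should then show [1, 64, 64], which satisfies the batch-size-1 requirement and avoids the exception in your job log.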