Unsqueeze version 13 is not implemented in ONNX file

Question/Issue:
I got this error when uploading my .onnx model to the platform.

Project ID:
231588

Context/Use case:
https://studio.edgeimpulse.com/studio/231588/pretrained-model

I uploaded the .onnx file and selected the ST IoT Discovery Kit as the target profile.
Here are the full logs.

Creating job… OK (ID: 9327969)

Scheduling job in cluster…
Container image pulled!
Job started
Converting ONNX model…
Scheduling job in cluster…
Container image pulled!
Job started
INFO: No representative features passed in, won’t quantize this model

Trying conversion using onnx2tf…
Grabbing input information…
Non NCHW format detected, preserving input shape
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/onnx2tf/utils/common_functions.py", line 280, in print_wrapper_func
    result = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/onnx2tf/utils/common_functions.py", line 358, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/onnx2tf/utils/common_functions.py", line 49, in get_replacement_parameter_wrapper_func
    func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/onnx2tf/ops/BatchNormalization.py", line 158, in make_node
    tf_layers_dict[Y.name]['tf_node'] = tf.nn.batch_normalization(
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/usr/local/lib/python3.10/dist-packages/keras/layers/core/tf_op_layer.py", line 119, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
ValueError: Exception encountered when calling layer "tf.nn.batch_normalization" (type TFOpLambda).

Dimensions must be equal, but are 4 and 60 for '{{node tf.nn.batch_normalization/batchnorm/mul_1}} = Mul[T=DT_FLOAT](Placeholder, tf.nn.batch_normalization/batchnorm/mul)' with input shapes: [1,60,32,4], [60].

Call arguments received by layer “tf.nn.batch_normalization” (type TFOpLambda):
• x=tf.Tensor(shape=(1, 60, 32, 4), dtype=float32)
• mean=tf.Tensor(shape=(60,), dtype=float32)
• variance=tf.Tensor(shape=(60,), dtype=float32)
• offset=tf.Tensor(shape=(60,), dtype=float32)
• scale=tf.Tensor(shape=(60,), dtype=float32)
• variance_epsilon=0.0010000000474974513
• name=None
ERROR: The trace log is below.
ERROR: input_onnx_file_path: /home/model.onnx
ERROR: onnx_op_name: model/transition_block/sub_spectral_normalization/batch_normalization_2/FusedBatchNormV3
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
Grabbing input information OK

Conversion using onnx2tf failed, trying conversion using onnx-tf…
ERROR: Could not load albumentations library. This is a known issue during unit tests on M1 Macs (#3880) and will prevent object detection augmentation from working.
Original error message:
No module named 'albumentations'
Traceback (most recent call last):
  File "/app/convert-via-tf-onnx.py", line 68, in <module>
    ei_tensorflow.onnx.conversion.onnx_to_tflite(args.onnx_file, file_float32, file_int8,
  File "/app/./resources/libraries/ei_tensorflow/onnx/conversion.py", line 96, in onnx_to_tflite
    tf_rep.export_graph(tf_model_path)
  File "/usr/local/lib/python3.10/dist-packages/onnx_tf/backend_rep.py", line 143, in export_graph
    signatures=self.tf_module.__call__.get_concrete_function(
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 1215, in get_concrete_function
    concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 1195, in _get_concrete_function_garbage_collected
    self._initialize(args, kwargs, add_initializers_to=initializers)
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 749, in _initialize
    self._variable_creation_fn  # pylint: disable=protected-access
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/tracing_compiler.py", line 162, in _get_concrete_function_internal_garbage_collected
    concrete_function, _ = self._maybe_define_concrete_function(args, kwargs)
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/tracing_compiler.py", line 157, in _maybe_define_concrete_function
    return self._maybe_define_function(args, kwargs)
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/tracing_compiler.py", line 360, in _maybe_define_function
    concrete_function = self._create_concrete_function(args, kwargs)
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/tracing_compiler.py", line 284, in _create_concrete_function
    func_graph_module.func_graph_from_py_func(
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/func_graph.py", line 1283, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 645, in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/tracing_compiler.py", line 445, in bound_method_wrapper
    return wrapped_fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/func_graph.py", line 1269, in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/func_graph.py", line 1258, in autograph_handler
    return autograph.converted_call(
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/impl/api.py", line 439, in converted_call
    result = converted_f(*effective_args, **kwargs)
File "/tmp/autograph_generated_filen73p6flj.py", line 30, in tf____call
ag
.for_stmt(ag
.ld(self).graph_def.node, None, loop_body, get_state, set_state, (), {‘iterate_names’: ‘node’})
File “/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/operators/control_flow.py”, line 463, in for_stmt
py_for_stmt(iter, extra_test, body, None, None)
File “/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/operators/control_flow.py”, line 512, in py_for_stmt
body(target)
File “/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/operators/control_flow.py”, line 478, in protected_body
original_body(protected_iter)
File "/tmp/autograph_generated_filen73p6flj.py", line 23, in loop_body
output_ops = ag
.converted_call(ag
.ld(self).backend.onnx_node_to_tensorflow_op, (ag
.ld(onnx_node), ag
.ld(tensor_dict), ag
.ld(self).handlers), dict(opset=ag
.ld(self).opset, strict=ag__.ld(self).strict), fscope)
File “/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/impl/api.py”, line 439, in converted_call
result = converted_f(*effective_args, **kwargs)
File "/tmp/autograph_generated_filehko52q0x.py", line 62, in tf___onnx_node_to_tensorflow_op
ag
.if_stmt(ag__.ld(handlers), if_body_1, else_body_1, get_state_1, set_state_1, (‘do_return’, ‘retval_’), 2)
File “/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/operators/control_flow.py”, line 1363, in if_stmt
py_if_stmt(cond, body, orelse)
File “/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/operators/control_flow.py”, line 1416, in py_if_stmt
return body() if cond else orelse()
File "/tmp/autograph_generated_filehko52q0x.py", line 56, in if_body_1
ag
.if_stmt(ag
.ld(handler), if_body, else_body, get_state, set_state, (‘do_return’, ‘retval_’), 2)
File “/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/operators/control_flow.py”, line 1363, in if_stmt
py_if_stmt(cond, body, orelse)
File “/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/operators/control_flow.py”, line 1416, in py_if_stmt
return body() if cond else orelse()
File "/tmp/autograph_generated_filehko52q0x.py", line 48, in if_body
retval
= ag
.converted_call(ag
_.ld(handler).handle, (ag__.ld(node),), dict(tensor_dict=ag__.ld(tensor_dict), strict=ag__.ld(strict)), fscope)
File “/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/impl/api.py”, line 439, in converted_call
result = converted_f(*effective_args, **kwargs)
File "/tmp/autograph_generated_file2lyuep46.py", line 41, in tf__handle
ag
.if_stmt(ag__.ld(ver_handle), if_body, else_body, get_state, set_state, (‘do_return’, ‘retval_’), 2)
File “/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/operators/control_flow.py”, line 1363, in if_stmt
py_if_stmt(cond, body, orelse)
File “/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/operators/control_flow.py”, line 1416, in py_if_stmt
return body() if cond else orelse()
File "/tmp/autograph_generated_file2lyuep46.py", line 40, in else_body
raise ag
.converted_call(ag
.ld(BackendIsNotSupposedToImplementIt), (ag__.converted_call(‘{} version {} is not implemented.’.format, (ag__.ld(node).op_type, ag__.ld(cls).SINCE_VERSION), None, fscope),), None, fscope)
onnx.backend.test.runner.BackendIsNotSupposedToImplementIt: in user code:

File "/usr/local/lib/python3.10/dist-packages/onnx_tf/backend_tf_module.py", line 99, in __call__  *
    output_ops = self.backend._onnx_node_to_tensorflow_op(onnx_node,
File "/usr/local/lib/python3.10/dist-packages/onnx_tf/backend.py", line 347, in _onnx_node_to_tensorflow_op  *
    return handler.handle(node, tensor_dict=tensor_dict, strict=strict)
File "/usr/local/lib/python3.10/dist-packages/onnx_tf/handlers/handler.py", line 61, in handle  *
    raise BackendIsNotSupposedToImplementIt("{} version {} is not implemented.".format(node.op_type, cls.SINCE_VERSION))

BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented.

Conversion using onnx-tf also failed, cannot use this ONNX file - contact your solutions engineer, or post the logs on the forum.
Application exited with code 1

Converting ONNX model failed, see above

Job failed (see above)
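
If it helps, my understanding is that the options mentioned in the onnx2tf output above (-b / -ois for rewriting dynamic dimensions to a static shape, -kt / -kat for inputs that are not NCHW) would map to something like this when running the conversion locally. This is an untested sketch based only on my reading of that error message; the file names, the input tensor name, and the exact keyword arguments are assumptions to check against the onnx2tf documentation:

    # Untested sketch: retry the onnx2tf conversion locally with the options
    # suggested in the error message. "model.onnx" and "input" are placeholders.
    from onnx2tf import convert

    convert(
        input_onnx_file_path="model.onnx",
        output_folder_path="saved_model",
        batch_size=1,                                 # roughly the -b option (static batch size)
        keep_shape_absolutely_input_names=["input"],  # roughly the -kat option (keep non-NCHW input shape)
    )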

Hi @nuttawac

Welcome to the forum

Let me check your project.

Best

Eoin

Hi @nuttawac

Can you share the ONNX IR format version of this model?

You can find this by opening the model in Netron. Please share a screenshot with us for review. Thanks.
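
If it's easier than a screenshot, the same information can also be printed with the onnx Python package. This is only a minimal sketch; adjust the path to wherever your exported .onnx file lives:

    # Minimal sketch: print the ONNX IR version and the operator set (opset) versions.
    # Assumes the onnx package is installed and "model.onnx" is your exported model.
    import onnx

    model = onnx.load("model.onnx")
    print("IR version:", model.ir_version)
    for opset in model.opset_import:
        print("opset:", opset.domain or "ai.onnx", opset.version)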

Best

Eoin