Error while implementing ResNet based NN using Edge Impulse NN Classifier

I am trying to implement a ResNet-based NN architecture in the Edge Impulse NN Classifier. In "Keras expert mode" I am using the Keras functional API to declare my network.
During the "Calculating performance metrics…" stage I get the following error.

Job started
Splitting data into training and validation sets…
Splitting data into training and validation sets OK

Training model…
Training on 2693 inputs, validating on 674 inputs
Epoch 47% done
85/85 - 20s - loss: 0.7918 - accuracy: 0.5978 - val_loss: 0.5008 - val_accuracy: 0.7522
Finished training

Saving best performing model…
Still saving model…
Converting TensorFlow Lite float32 model…
Converting TensorFlow Lite int8 quantized model with float32 input and output…
Converting TensorFlow Lite int8 quantized model with int8 input and output…
Calculating performance metrics…
Traceback (most recent call last):
  File "/home/train.py", line 734, in <module>
    main_function()
  File "/home/train.py", line 723, in main_function
    file_int8=os.path.join(dir_path, 'model_quantized_int8_io.tflite'))
  File "/home/train.py", line 381, in get_model_metadata
    'layers': describe_layers(keras_model),
  File "/home/train.py", line 356, in describe_layers
    'shape': layer.input.shape[1],
AttributeError: 'list' object has no attribute 'shape'

Application exited with code 1 (Error)

Job failed (see above)

I assume this is caused by the input to the Add() layer, which takes a list of tensors as its input.
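To illustrate my guess, here is a small standalone snippet (run outside Edge Impulse, with a throwaway model, just to inspect layer.input): for a merge layer such as Add, layer.input is a Python list of tensors rather than a single tensor, so calling layer.input.shape raises exactly this AttributeError.

import tensorflow as tf
from tensorflow.keras import layers

# Throwaway graph with a single residual-style Add, only used to inspect layer.input
inp = tf.keras.Input(shape=(16,))
a = layers.Dense(16)(inp)
out = layers.Add()([a, inp])
model = tf.keras.Model(inp, out)

for layer in model.layers:
    print(type(layer).__name__, type(layer.input))

# InputLayer and Dense report a single tensor (which has .shape),
# but Add reports a Python list of two tensors, so layer.input.shape[1]
# raises: AttributeError: 'list' object has no attribute 'shape'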

Has anyone implemented ResNet models on Edge Impulse? Any information on this would be very helpful.

Thank you for your time! 🙂

I am including my NN code below.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.optimizers import Adam

# model architecture
channels = 1
columns = 40
rows = int(input_length / (columns * channels))

# Define graph
inputs = keras.Input(shape=(input_length,))

# Reshape to H*W format
x = layers.Reshape((rows, columns, channels), input_shape=(input_length, ))(inputs)

# Layer Conv0
x = layers.Conv2D(19, kernel_size=3, activation='relu', kernel_constraint=tf.keras.constraints.MaxNorm(1), padding='same', use_bias=False)(x)

# Average pool layer
x = layers.AveragePooling2D(pool_size=(4,3), strides=(4,3), padding='same')(x)

# Residual block 1
y = x
x = layers.Conv2D(19, kernel_size=3, activation='relu', kernel_constraint=tf.keras.constraints.MaxNorm(1), padding='same', use_bias=False)(x)
x = layers.Conv2D(19, kernel_size=3, activation='relu', kernel_constraint=tf.keras.constraints.MaxNorm(1), padding='same', use_bias=False)(x)
x = layers.Add(input_shape=x.shape)([x,y])

# Fully connected layer
x = layers.Flatten()(x)
outputs = layers.Dense(classes, activation='softmax', use_bias=True, name='y_pred')(x)

model = keras.Model(inputs=inputs, outputs=outputs, name="resmodel")

# this controls the learning rate
opt = Adam(lr=0.005, beta_1=0.9, beta_2=0.999)

# this controls the batch size, or you can manipulate the tf.data.Dataset objects yourself
BATCH_SIZE = 32
train_dataset, validation_dataset = set_batch_size(BATCH_SIZE, train_dataset, validation_dataset)
callbacks.append(BatchLoggerCallback(BATCH_SIZE, train_sample_count))

# train the neural network
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(train_dataset, epochs=1, validation_data=validation_dataset, verbose=2, callbacks=callbacks)

@shyamap95 I think what would work is:

  1. Keep the input layer as is generated by Edge Impulse.
  2. Then change the reshape layer to be compatible with that.

I think the code we generate for 2D audio convolutions already does what you want:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Reshape

# input_length is provided by the Edge Impulse training environment
model = Sequential()
channels = 1
columns = 40
rows = int(input_length / (columns * channels))
model.add(Reshape((rows, columns, channels), input_shape=(input_length, )))

If not, give me your project ID (you can find it on your dashboard) and I’ll take a look!
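If you need the functional API for the residual connections, a rough equivalent of that reshape would look like this (untested sketch; input_length and the rest of the graph come from the expert-mode template):

from tensorflow import keras
from tensorflow.keras import layers

# input_length is provided by the Edge Impulse training environment
channels = 1
columns = 40
rows = int(input_length / (columns * channels))

inputs = keras.Input(shape=(input_length,))
x = layers.Reshape((rows, columns, channels))(inputs)
# ... build the rest of the functional graph on x ...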


Hi @janjongboom

Thanks a lot for your response.

I modified the input layer code generated by Edge Impulse, as it uses a Sequential() model, which is not suitable for a ResNet architecture. I had already verified the alternative reshaping code with a simple CNN model.

As long as I remove the layers.Add() layer (required for ResNet), the code runs fine. When I include the Add() layer, the TFLite models are generated, but the job fails during "Calculating performance metrics…".

My project ID is 18148.

Hi @shyamap95, it's actually not your input layer, but rather this line:

x = layers.Add(input_shape=x.shape)([x,y])

Here two inputs feed into a single layer, which screws up our layer representation. We'll fix this this week.
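The fix on our side boils down to handling the list case; roughly something like this (purely illustrative, not the actual train.py code):

# Illustrative only -- not the actual Edge Impulse implementation.
# Merge layers such as Add expose a list of input tensors, so code that
# reads layer.input.shape has to handle both cases:
def first_input_shape(layer):
    layer_input = layer.input
    if isinstance(layer_input, list):
        layer_input = layer_input[0]  # e.g. Add([x, y]) -> inspect the first branch
    return layer_input.shape[1]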

Hi @janjongboom,
Thanks a lot! 😄