GlobalMaxPooling / Exception: TensorFlow Lite op "REDUCE_MAX" has no known MicroMutableOpResolver method

Hello!

I tried the following model structure:

Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 reshape (Reshape)           (None, 199, 65)           0         
                                                                 
 conv1d (Conv1D)             (None, 199, 8)            2088      
                                                                 
 conv1d_1 (Conv1D)           (None, 199, 16)           528       
                                                                 
 max_pooling1d (MaxPooling1D  (None, 100, 16)          0         
 )                                                               
                                                                 
 conv1d_2 (Conv1D)           (None, 100, 16)           1040      
                                                                 
 conv1d_3 (Conv1D)           (None, 100, 32)           2080      
                                                                 
 max_pooling1d_1 (MaxPooling  (None, 50, 32)           0         
 1D)                                                             
                                                                 
 conv1d_4 (Conv1D)           (None, 50, 32)            4128      
                                                                 
 conv1d_5 (Conv1D)           (None, 50, 64)            8256      
                                                                 
 global_max_pooling1d (Globa  (None, 64)               0         
 lMaxPooling1D)                                                  
                                                                 
 dropout (Dropout)           (None, 64)                0         
                                                                 
 y_pred (Dense)              (None, 2)                 130       
                                                                 
=================================================================
Total params: 18,250
Trainable params: 18,250
Non-trainable params: 0
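
For reference, here is a minimal sketch of a Keras definition that reproduces this summary. The kernel size of 4, the 'same' padding, the ReLU activations and the dropout rate are assumptions I reconstructed from the output shapes and parameter counts above, not necessarily the exact code:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Sketch reconstructed from the summary above; kernel size 4, 'same' padding,
# ReLU activations and dropout rate are assumptions.
model = tf.keras.Sequential([
    layers.Reshape((199, 65), input_shape=(199 * 65,)),
    layers.Conv1D(8, kernel_size=4, padding='same', activation='relu'),   # 4*65*8 + 8   = 2088
    layers.Conv1D(16, kernel_size=4, padding='same', activation='relu'),  # 4*8*16 + 16  = 528
    layers.MaxPooling1D(pool_size=2, padding='same'),                     # 199 -> 100
    layers.Conv1D(16, kernel_size=4, padding='same', activation='relu'),  # 4*16*16 + 16 = 1040
    layers.Conv1D(32, kernel_size=4, padding='same', activation='relu'),  # 4*16*32 + 32 = 2080
    layers.MaxPooling1D(pool_size=2),                                     # 100 -> 50
    layers.Conv1D(32, kernel_size=4, padding='same', activation='relu'),  # 4*32*32 + 32 = 4128
    layers.Conv1D(64, kernel_size=4, padding='same', activation='relu'),  # 4*32*64 + 64 = 8256
    layers.GlobalMaxPooling1D(),       # the layer that ends up as REDUCE_MAX in TFLite
    layers.Dropout(0.5),
    layers.Dense(2, activation='softmax', name='y_pred'),
])
model.summary()
```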

Resulting in the following error:

Saving best performing model...
Converting TensorFlow Lite float32 model...
Converting TensorFlow Lite int8 quantized model...
Calculating performance metrics...
Profiling float32 model...
Profiling float32 model (tflite)...
Traceback (most recent call last):
  File "/home/tflite_find_operators.py", line 100, in <module>
    raise Exception('TensorFlow Lite op "{}" has no known MicroMutableOpResolver method.'.format(key))
Exception: TensorFlow Lite op "REDUCE_MAX" has no known MicroMutableOpResolver method.
Error while finding memory:
Command '['sh', '/home/prepare_tflite.sh']' returned non-zero exit status 1.
Profiling int8 model...
Profiling int8 model (tflite)...
Traceback (most recent call last):
  File "/home/tflite_find_operators.py", line 100, in <module>
    raise Exception('TensorFlow Lite op "{}" has no known MicroMutableOpResolver method.'.format(key))
Exception: TensorFlow Lite op "REDUCE_MAX" has no known MicroMutableOpResolver method.
Error while finding memory:
Command '['sh', '/home/prepare_tflite.sh']' returned non-zero exit status 1.

Model training complete
Traceback (most recent call last):
  File "/app/./resources/libraries/ei_tensorflow/profiling.py", line 328, in profile_tflite_model
    subprocess.check_output(['sh', '/home/prepare_tflite.sh']).decode("utf-8")
  File "/usr/lib/python3.8/subprocess.py", line 415, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['sh', '/home/prepare_tflite.sh']' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/app/./resources/libraries/ei_tensorflow/profiling.py", line 328, in profile_tflite_model
    subprocess.check_output(['sh', '/home/prepare_tflite.sh']).decode("utf-8")
  File "/usr/lib/python3.8/subprocess.py", line 415, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['sh', '/home/prepare_tflite.sh']' returned non-zero exit status 1.

Taking inspiration from https://stackoverflow.com/a/66971484, I constructed a new model:

Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 reshape (Reshape)           (None, 199, 1, 65)        0         
                                                                 
 conv2d (Conv2D)             (None, 199, 1, 8)         2088      
                                                                 
 conv2d_1 (Conv2D)           (None, 199, 1, 16)        528       
                                                                 
 max_pooling2d (MaxPooling2D  (None, 100, 1, 16)       0         
 )                                                               
                                                                 
 conv2d_2 (Conv2D)           (None, 100, 1, 16)        1040      
                                                                 
 conv2d_3 (Conv2D)           (None, 100, 1, 32)        2080      
                                                                 
 max_pooling2d_1 (MaxPooling  (None, 50, 1, 32)        0         
 2D)                                                             
                                                                 
 conv2d_4 (Conv2D)           (None, 50, 1, 32)         4128      
                                                                 
 conv2d_5 (Conv2D)           (None, 50, 1, 64)         8256      
                                                                 
 global_max_pooling2d (Globa  (None, 64)               0         
 lMaxPooling2D)                                                  
                                                                 
 dropout (Dropout)           (None, 64)                0         
                                                                 
 y_pred (Dense)              (None, 2)                 130       
                                                                 
=================================================================
Total params: 18,250
Trainable params: 18,250
Non-trainable params: 0
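
The only structural change compared to the 1D version is the extra singleton axis: the reshape adds a width of 1, and the kernels and pools become (4, 1) and (2, 1). A minimal sketch (activations and dropout rate again assumed, as in the 1D sketch above):

```python
import tensorflow as tf
from tensorflow.keras import layers

# 2D variant of the same network: a singleton width axis is added so that
# Conv2D / MaxPooling2D / GlobalMaxPooling2D stand in for their 1D versions.
model_2d = tf.keras.Sequential([
    layers.Reshape((199, 1, 65), input_shape=(199 * 65,)),
    layers.Conv2D(8, kernel_size=(4, 1), padding='same', activation='relu'),
    layers.Conv2D(16, kernel_size=(4, 1), padding='same', activation='relu'),
    layers.MaxPooling2D(pool_size=(2, 1), padding='same'),
    layers.Conv2D(16, kernel_size=(4, 1), padding='same', activation='relu'),
    layers.Conv2D(32, kernel_size=(4, 1), padding='same', activation='relu'),
    layers.MaxPooling2D(pool_size=(2, 1)),
    layers.Conv2D(32, kernel_size=(4, 1), padding='same', activation='relu'),
    layers.Conv2D(64, kernel_size=(4, 1), padding='same', activation='relu'),
    layers.GlobalMaxPooling2D(),
    layers.Dropout(0.5),
    layers.Dense(2, activation='softmax', name='y_pred'),
])
```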

This model is functionally identical to its 1D counterpart. In another use case I ran into a similar problem in my own scripts (errors with a 1D model that includes GlobalMaxPooling), but I have been able to full-integer quantize the 2D model, convert it to .tflite, and successfully deploy it to a Nano 33 BLE Sense.
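
For context, a full-integer quantization along these lines should produce an int8 .tflite; this is only a sketch, the random `representative_data` generator is a placeholder for real calibration samples, and `model_2d` refers to the sketch above:

```python
import numpy as np
import tensorflow as tf

def representative_data():
    # Placeholder calibration data; in practice, yield a few hundred real
    # training samples shaped like the model input: (1, 199 * 65), float32.
    for _ in range(100):
        yield [np.random.rand(1, 199 * 65).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model_2d)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Full-integer quantization: int8 kernels only, int8 input/output tensors.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open('model_int8.tflite', 'wb') as f:
    f.write(converter.convert())
```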

However, Edge Impulse gives the same error as for the 1D model. My guess is that the models themselves convert successfully, but when profiling starts (Profiling float32 model (tflite)), one operation is flagged as unsupported (REDUCE_MAX). The problem is probably the GlobalMaxPooling layer.
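
One way to confirm that GlobalMaxPooling is what introduces REDUCE_MAX is to list the operators in the converted file, for example with the model analyzer that ships with recent TensorFlow releases (the file name here is just an example):

```python
import tensorflow as tf

# Prints the graph of the converted model, including the builtin operator
# names; REDUCE_MAX should show up where the GlobalMaxPooling layer was.
tf.lite.experimental.Analyzer.analyze(model_path='model_int8.tflite')
```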

Anyway, I think it would be helpful to be able to use the GlobalMaxPooling layer in Edge Impulse. :slight_smile:
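
In the meantime, a possible workaround (only a sketch, which I have not verified inside Edge Impulse) might be to take the max with a plain MaxPooling1D that spans all remaining time steps and then flatten, since as far as I understand that lowers to MAX_POOL_2D rather than REDUCE_MAX:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Tail of the network, starting from the (None, 50, 64) feature map produced
# by the last Conv1D layer. MaxPooling1D over all 50 remaining steps plus
# Flatten is numerically equivalent to GlobalMaxPooling1D at this point.
features = tf.keras.Input(shape=(50, 64))
x = layers.MaxPooling1D(pool_size=50)(features)  # (None, 1, 64)
x = layers.Flatten()(x)                          # (None, 64)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(2, activation='softmax', name='y_pred')(x)
tail = tf.keras.Model(features, outputs)
```

The pool_size of 50 matches the number of time steps left where GlobalMaxPooling sits in my model; it would need to be adjusted for a different architecture.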

Hello @ympyra,

I think this GlobalMaxPooling problem is intrinsically linked to TensorFlow Lite.
@dansitu @matkelcey, might you have an explanation for this? :slight_smile:

Regards,

Louis

Hello @louis!

Yeah, I think so too. Interestingly, I noticed that even though I received those errors, I could still download all the models from my dashboard. I then decided to try deploying the model to the Nano 33 BLE Sense, and it worked.

So I am no longer sure what the error triggered during Profiling float32 model (tflite) actually means. :grin: At the very least, because of it I didn't get the Peak RAM and Flash Usage estimates.

Hi @ympyra,

Thanks for your helpful post! This was an issue on our end, and I have a fix in the pipeline that should hopefully be deployed on Monday :slight_smile:

Have a great weekend!

Warmly,
Dan

Hello @dansitu!

Great to hear that!

Have a pleasant weekend as well! :relaxed: