Question/Issue:
I want to import my own model into Edge Impulse, but I get the following error:
Job started
Converting SavedModel...
Scheduling job in cluster...
Container image pulled!
Job started
INFO: No representative features passed in, won't quantize this model
Extracting saved model...
Extracting saved model OK
--saved-model /tmp/saved_model does not exist
Application exited with code 1
Converting SavedModel failed, see above
Job failed (see above)
Could you help me understand why this is happening? I trained my model and saved it in TensorFlow SavedModel format, then compressed it into a ZIP file.
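For reference, this is roughly how I exported and packaged the model (the stand-in model and paths are illustrative, not my real training code):

```python
import shutil
import tensorflow as tf

# Trivial stand-in for my real model (illustrative only).
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Export in TensorFlow SavedModel format (creates saved_model.pb, variables/, ...).
model.save("saved_model")

# Compress the exported directory into saved_model.zip; root_dir="saved_model"
# puts saved_model.pb and variables/ at the root of the archive.
shutil.make_archive("saved_model", "zip", root_dir="saved_model")
```

Here is the full log from the conversion job: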
Converting SavedModel...
Scheduling job in cluster...
Container image pulled!
Job started
INFO: No representative features passed in, won't quantize this model
Extracting saved model...
Extracting saved model OK
Converting to TensorFlow Lite...
WARN: Failed to convert to TensorFlow Lite: SavedModel file does not exist at: /tmp/extracted_sm/{saved_model.pbtxt|saved_model.pb}
Application exited with code 1
Converting SavedModel failed, see above
Job failed (see above)
It worked after I renamed the ZIP file to "saved_model"; a quick way to inspect what the job actually sees inside the archive is sketched below.
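A minimal check, assuming the archive is named saved_model.zip (the expected file names come from the job log above):

```python
import zipfile

# List what the conversion job will see after extracting the archive.
# Per the log above, it looks for saved_model.pb (or saved_model.pbtxt)
# directly in the extraction directory, so these files should sit at the
# archive root rather than inside a nested folder.
with zipfile.ZipFile("saved_model.zip") as zf:
    for name in zf.namelist():
        print(name)
```

But then I got more errors; I think these come from my TensorFlow model itself: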
Creating job... OK (ID: 24689120)
Scheduling job in cluster...
Container image pulled!
Job started
Converting SavedModel...
Scheduling job in cluster...
Container image pulled!
Job started
INFO: No representative features passed in, won't quantize this model
Extracting saved model...
Extracting saved model OK
Converting to TensorFlow Lite...
WARN: Failed to convert to TensorFlow Lite: Op type not registered 'SimpleMLCreateModelResource' in binary running on job-project-532838-24689126-dqu3g-fbrp5. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
You may be trying to load on a different device from the computational device. Consider setting the `experimental_io_device` option in `tf.saved_model.LoadOptions` to the io_device such as '/job:localhost'.
Application exited with code 1
Converting SavedModel failed, see above
Job failed (see above)
Indeed, it seems that you're using an op that is not supported.
I am not familiar with that op, but from a quick look it seems to come from TensorFlow Decision Forests (TF-DF), correct?
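If that is the case, one quick way to confirm (a sketch, assuming TF-DF is installed and the model directory is named saved_model) is to load the model with and without the TF-DF import; as far as I know, the SimpleML* custom ops are only registered once tensorflow_decision_forests is imported:

```python
import tensorflow as tf

# Without this import, tf.saved_model.load() fails with
# "Op type not registered 'SimpleMLCreateModelResource'";
# importing TF-DF registers its custom ops as a side effect.
import tensorflow_decision_forests as tfdf  # noqa: F401

model = tf.saved_model.load("saved_model")
print(model.signatures)
```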
Exactly, the model is from TensorFlow Decision Forests. I am currently trying to figure out a way to deploy this model in C/C++, but I just learned that TFLite does not yet support this library.
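For reference, this is roughly the conversion attempt that fails for me (a sketch; the path is illustrative):

```python
import tensorflow as tf
import tensorflow_decision_forests as tfdf  # noqa: F401, registers the SimpleML* ops

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
# This raises a conversion error: the SimpleML* custom ops used by
# TF-DF have no TensorFlow Lite kernels, so the model cannot be converted.
tflite_model = converter.convert()
```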