Unable to profile .tflite of MobileViT

I used the following code, copied from here.

from transformers import TFMobileViTForImageClassification
import tensorflow as tf

# Load the pretrained MobileViT checkpoint from the Hugging Face Hub.
model_ckpt = "apple/mobilevit-xx-small"
model = TFMobileViTForImageClassification.from_pretrained(model_ckpt)

# Convert the Keras model to TensorFlow Lite. SELECT_TF_OPS lets the
# converter fall back to TensorFlow ops that have no TFLite builtin.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

# Save the serialized model, e.g. mobilevit-xx-small.tflite.
tflite_filename = model_ckpt.split("/")[-1] + ".tflite"
with open(tflite_filename, "wb") as f:
    f.write(tflite_model)
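
As a quick sanity check (a sketch, assuming the full TensorFlow pip package, whose interpreter bundles the Flex delegate that SELECT_TF_OPS models need), the converted file can be loaded with the local TFLite interpreter first:

# Sketch: confirm the converted model loads in the local TFLite runtime.
# A runtime older than the ops the converter emitted fails here with the
# same kind of "builtin_code out of range" registration error shown below.
interpreter = tf.lite.Interpreter(model_path=tflite_filename)
interpreter.allocate_tensors()
print(interpreter.get_input_details())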

After saving the .tflite file, I ran the following in Colab (with the project API key already configured via ei.API_KEY):

import edgeimpulse as ei

ei.model.profile(model='./mobilevit-xx-small.tflite')

But I got this error:

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-13-1bfe319f0780> in <cell line: 1>()
----> 1 ei.model.profile(model='./mobilevit-xx-small.tflite')

12 frames
/usr/local/lib/python3.10/dist-packages/edgeimpulse_api/api_client.py in deserialize(self, response, response_type)
    304 
    305             if not data["success"]:
--> 306                 raise Exception(data["error"])
    307 
    308         except ValueError:

Exception: No uploaded model yet

The documentation says:

This often means that the model you are attempting to upload is unsupported.

But I don’t think that’s the case here. (The docs say TensorFlow Lite (.tflite or .lite) is supported.)

@yujonglee Not at the moment:

Op builtin_code out of range: 150. Are you using old TFLite binary with newer model?
Registration failed.

We’re a few versions behind the mainline TensorFlow release, which does have these ops. There’s a PR open to upgrade to the latest version, but it’s been a pain to stabilize.
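
For reference, here is a sketch that lists the builtin ops inside the converted flatbuffer (tf.lite.experimental.Analyzer is an experimental TensorFlow API); any op added in a newer TFLite schema than the profiler’s runtime would trigger the registration failure above:

import tensorflow as tf

# Sketch: print every operator the .tflite flatbuffer contains, so it can
# be compared against the ops the (older) profiler runtime supports.
tf.lite.experimental.Analyzer.analyze(model_path="mobilevit-xx-small.tflite")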

Got it. Is there a near-term roadmap I can refer to? If it’s low priority for now, please let me know.

@yujonglee It’s hard to commit to a timeline here. There are a bunch of moving pieces, unfortunately: we’ve seen non-deterministic training behavior (that’s the big one), we have some dependencies that are incompatible (so they require porting ourselves), and we ship some custom kernels, so we have to rebuild from source for a variety of architectures. I was hoping this would be finished in Q2, but we clearly didn’t make that 🙂 I hope to have something before the end of summer, but, as said above, I cannot commit to an exact timeline.


@janjongboom Thank you.
One more question: what about ONNX? Which opset numbers do you support?