Tflite model location

Two questions:

  1. Can I import and use my own model (a quantized .tflite model) for transfer learning in an image classification project?
  2. Can I visualize the model, the way Netron visualizes .tflite models?

(a bit of a noob in this space)

Found the answer to (2): deploying to OpenMV generates a .tflite file, which can be viewed with Netron.

Update: the file is also available on the Dashboard once you complete all the steps!


Hey @ameynaik, for transfer learning you actually need the full model in HDF5 (.h5) format; you can’t use quantized models.

There’s currently a way to plug in your own transfer learning model: click the three dots on the Transfer Learning page and select Switch to Keras (expert) mode. Then find these lines:

base_model = tf.keras.applications.MobileNetV2(
    input_shape=INPUT_SHAPE, alpha=0.35,

Here you can write some Python code to download your own transfer learning model and then plug it into the weights parameter. At some point we’ll block internet access from these blocks, as I don’t like them having it, but we’ll replace it with a UI for plugging in your own transfer learning models.
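For reference, a minimal sketch of what that change could look like, assuming the weights have already been downloaded to a local file. The path `my_weights.h5` and the 96×96 input shape are placeholders, not Edge Impulse defaults:

```python
import os
import tensorflow as tf

INPUT_SHAPE = (96, 96, 3)       # placeholder; use the input shape from your impulse
WEIGHTS_PATH = "my_weights.h5"  # hypothetical path to your downloaded weights

# Build the architecture and point the `weights` parameter at your own
# .h5 file if it exists; otherwise start from random weights.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=INPUT_SHAPE, alpha=0.35,
    include_top=False,
    weights=WEIGHTS_PATH if os.path.exists(WEIGHTS_PATH) else None,
)
base_model.trainable = False  # freeze the base layers for transfer learning
```

Note that a weights file passed this way must match the architecture being built (same alpha, input shape, and include_top setting), or Keras will raise an error when loading it.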

And yes, for (1): see the Dashboard; the files are under the Download block output section.


Thanks! I could overwrite the entire model by storing the .h5 file in my Google Drive and then downloading it using:

import requests

def download_file_from_google_drive(id, destination):
    URL = ""
    session = requests.Session()
    response = session.get(URL, params={'id': id}, stream=True)
    token = get_confirm_token(response)
    if token:
        params = {'id': id, 'confirm': token}
        response = session.get(URL, params=params, stream=True)
    save_response_content(response, destination)

def get_confirm_token(response):
    for key, value in response.cookies.items():
        if key.startswith('download_warning'):
            return value
    return None

def save_response_content(response, destination):
    CHUNK_SIZE = 32768
    with open(destination, "wb") as f:
        for chunk in response.iter_content(CHUNK_SIZE):
            if chunk:  # filter out keep-alive chunks
                f.write(chunk)

download_file_from_google_drive(file_id, destination)

Then, at the bottom of Keras expert mode, I tried overwriting the entire model:

from tensorflow.keras.models import load_model
print("over-writing the model")
model = load_model(destination)

Is this valid?
I see different performance on Edge Impulse (I’m not sure whether my model is actually being overwritten). I am trying to use Edge Impulse to generate a firmware build for my model so that I can run it on an embedded device.

Hi @ameynaik,

To load your own pre-trained weights instead of the default ImageNet ones, you need to change the weights value to point to your own .h5 file:

Then training will run with your own weights instead of the default ImageNet weights.

Let me know if that helps,

@ameynaik We don’t support that workflow at the moment, and even if you manage to get it to work, it will probably break as soon as we ship an update. My suggestion would be to either use TensorFlow Lite to convert the neural network yourself and then integrate it into your firmware, or to replicate the training stage in Edge Impulse.
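For the first option, converting a trained Keras model to TensorFlow Lite is a short step. A minimal sketch, where the tiny model is just a stand-in for your own trained network:

```python
import tensorflow as tf

# Stand-in model; in practice you would load your own trained network,
# e.g. tf.keras.models.load_model("my_model.h5").
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert the Keras model to a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting model.tflite can then be run on-device with the TensorFlow Lite (Micro) interpreter, and inspected in Netron as discussed above.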