Deploying a new model on Arduino afterwards


Let's say that I have trained and tested a model. I have downloaded the Arduino library from Edge Impulse and uploaded the sketch, i.e. the model is already deployed. Now I want to change the model because I want to add new classes to it. Let's say I only have remote access to the Arduino, but I have a new tflite model that I would like to deploy. Is it possible to change only the tflite flatbuffer? In other words, I have the new model and I don't want to download the entire library again; I just want to swap in the new tflite and keep everything else the same (the same data-feeding mechanism, etc.). I would appreciate it if someone could describe the process thoroughly.

Hi Umid,

What changes are the tflite model and the model-parameters folder that you get when downloading the library, so you only need to replace those two. That is the only solution: you have to download the library each time you train a new model.
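As a rough illustration of that replacement step, here is a sketch, assuming a typical exported-library layout (the folder and file names below are stand-ins created in a temp directory, not the exact names of any real export — check your own library):

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical layout: adjust paths to match your actual exported Arduino library.
root = Path(tempfile.mkdtemp())
old_lib = root / "old_library"   # the library already deployed
new_lib = root / "new_export"    # the freshly downloaded export

# Simulate both libraries so the sketch is self-contained.
for lib, tag in [(old_lib, "v1"), (new_lib, "v2")]:
    (lib / "src" / "tflite-model").mkdir(parents=True)
    (lib / "src" / "model-parameters").mkdir(parents=True)
    (lib / "src" / "tflite-model" / "tflite_learn.cpp").write_text(f"// model {tag}\n")
    (lib / "src" / "model-parameters" / "model_metadata.h").write_text(f"// params {tag}\n")

# Replace only the tflite model sources and the model-parameters folder;
# everything else (data feeding, DSP glue, the sketch) stays untouched.
for sub in ("tflite-model", "model-parameters"):
    shutil.rmtree(old_lib / "src" / sub)
    shutil.copytree(new_lib / "src" / sub, old_lib / "src" / sub)

print((old_lib / "src" / "tflite-model" / "tflite_learn.cpp").read_text().strip())
# → // model v2
```

After the copy you would recompile and re-upload the sketch as usual; the rest of the library is byte-for-byte the old one.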


Hi Omar,
Is it possible to train the model locally using the local-train option and then manually replace these files in the previously downloaded library?

Hi Umid,

I am not aware of the local train option, where did you find that?


Here is the option. Once you download this file, it contains well-written code for training on the data. I'm assuming the training happens locally because of the structure of the code.

A few months back there used to be an "edit in Jupyter notebook" option here. I have also developed a program based on Edge Impulse that goes through the entire pipeline locally; I was just struggling to deploy it. It achieves exactly the same results because I am using the same MFE implementation from GitHub, the same SciPy libraries, etc.

I just need help updating the program without re-downloading it. I would really appreciate your help.

Hi Umid,

The option says "Edit block locally", not "Train locally". Of course, you can download the code and train it on your local machine, but the resulting model will be stored as a tflite model.
We use the EON compiler to provide the model to you as C++ source code; the EON compiler reduces memory and flash consumption on your MCU.

The idea of this option is to let you write and test your code locally, then upload it to train in the cloud.

You can reproduce the pipeline locally, but you will not get the same optimization that we provide in the Studio.

If you do not care about the optimization, disable the EON compiler and download the C++ library; then you can replace the tflite model whenever it is updated. This should work.
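If you go that route, you need the new flatbuffer in the form the C++ library expects, i.e. as a C byte array. A minimal conversion sketch (equivalent to `xxd -i`; the variable name `model_tflite` is an assumption — match whatever symbol your library actually declares):

```python
import tempfile
from pathlib import Path

def tflite_to_c_array(tflite_path: Path, var_name: str = "model_tflite") -> str:
    """Render a .tflite flatbuffer as C source declaring a byte array and its length."""
    data = tflite_path.read_bytes()
    body = ",\n  ".join(
        ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        for i in range(0, len(data), 12)
    )
    return (
        f"const unsigned char {var_name}[] = {{\n  {body}\n}};\n"
        f"const unsigned int {var_name}_len = {len(data)};\n"
    )

# Demo with a stand-in flatbuffer; point this at your real updated .tflite file.
model = Path(tempfile.mkdtemp()) / "model.tflite"
model.write_bytes(bytes(range(8)))
print(tflite_to_c_array(model))
```

The generated source would then replace the corresponding array in the downloaded C++ library before recompiling the sketch.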



Alright, thanks Omar. I will try this out and update this thread accordingly. This was helpful; thanks for clearing up my misconceptions.