Help with local training

I need some help deploying a locally trained TFLite model.

Project ID:

Context/Use case:
Hello, I am not sure if this problem has been asked before, and sorry for any basic questions; I am new to the TinyML field.
After searching the forum for a while, I couldn't find an answer beyond some redirections to Custom learning blocks - Edge Impulse Documentation.
I have an RTX 3090 Ti, so I would like to use my local GPU to train the model instead of the Edge Impulse servers, since GPU training is only available to Enterprise users.
I have previously trained a TFLite model using the notebook from How to Train TensorFlow Lite Object Detection Models Using Google Colab | SSD MobileNet - YouTube, but I need to deploy the model to an ESP32-CAM.
Is that even possible?

Thank you for any input

Hi @sxu29,

You cannot perform training locally with Edge Impulse. Your best bet is to train using TensorFlow/Keras or PyTorch and then convert your trained model to a C++ library (or a library for your particular microcontroller) using our Python SDK. Note that Google Colab also gives you limited access to GPU compute time. Hope that helps!
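As a rough illustration of the last conversion step: once you have a trained `.tflite` file, one common way to ship it to a microcontroller like the ESP32 is to embed the model bytes as a C array in your firmware (the same thing `xxd -i` does). This is a minimal sketch in plain Python, not Edge Impulse's own tooling; the file path and variable name are placeholders.

```python
# Sketch: embed a trained .tflite model as a C array for MCU firmware,
# similar to `xxd -i`. The path and variable name are hypothetical.
from pathlib import Path


def tflite_to_c_array(tflite_path: str, var_name: str = "g_model") -> str:
    """Return C source text embedding the model bytes plus a length constant."""
    data = Path(tflite_path).read_bytes()
    lines = []
    # Emit 12 bytes per line as hex literals, e.g. "0x1c, 0x00, ..."
    for i in range(0, len(data), 12):
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append("  " + chunk + ",")
    body = "\n".join(lines)
    return (
        f"// Auto-generated from {Path(tflite_path).name}\n"
        f"alignas(8) const unsigned char {var_name}[] = {{\n"
        f"{body}\n"
        f"}};\n"
        f"const unsigned int {var_name}_len = {len(data)};\n"
    )
```

The resulting array can then be passed to an inference runtime on the device; the `alignas(8)` matters because some runtimes require the model buffer to be word-aligned.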


Thank you for the reply and guidance, I will take a look at the Edge Impulse Python SDK.

Side note: the only reason I am asking about training the model locally is that I have a server with NVIDIA GPUs and would like to utilize its potential. I will not be using any online service such as Google Colab for training.
