Question/Issue:
I need some help deploying a locally trained TFlite model.
Project ID:
Context/Use case:
Hello, I am not sure if this problem has been asked before, and sorry if some of these questions are basic; I am new to the TinyML field.
After searching the forum for a while I couldn't find an answer, apart from some redirections to Custom learning blocks - Edge Impulse Documentation.
I have an RTX 3090 Ti, so I would like to use my local GPU to train the model instead of the Edge Impulse servers, since GPU training is only available on the Enterprise plan.
I have previously trained a TFLite model using the notebook from How to Train TensorFlow Lite Object Detection Models Using Google Colab | SSD MobileNet - YouTube, but I need to deploy that model to an ESP32-CAM.
Is that even possible?
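For reference, here is a rough sketch of what I imagine the inference side on the ESP32-CAM would look like, assuming the .tflite file has been converted to a C array and the TensorFlow Lite Micro library is used. Everything specific here (the `model_data.h` header, the arena size, the op list) is just a placeholder guess on my part, not working code:

```cpp
// Rough sketch only, assuming TensorFlow Lite Micro and a model converted
// to a C array (e.g. with xxd). model_data.h and kTensorArenaSize are
// hypothetical names, not from an existing project.
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

#include "model_data.h"  // hypothetical header holding the converted model

namespace {
constexpr int kTensorArenaSize = 300 * 1024;  // guess; SSD MobileNet is large
uint8_t tensor_arena[kTensorArenaSize];

tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
}  // namespace

void setup() {
  const tflite::Model* model = tflite::GetModel(model_data);

  // Register only the ops the model actually uses (placeholder selection).
  static tflite::MicroMutableOpResolver<5> resolver;
  resolver.AddConv2D();
  resolver.AddDepthwiseConv2D();
  resolver.AddReshape();
  resolver.AddLogistic();
  resolver.AddDetectionPostprocess();  // custom op used by SSD models

  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  interpreter->AllocateTensors();
  input = interpreter->input(0);
}

void loop() {
  // Fill input->data.int8 (or .uint8) with a resized camera frame here,
  // then run inference and parse the detection outputs.
  interpreter->Invoke();
  TfLiteTensor* boxes = interpreter->output(0);
  // ...
}
```

If there is a more standard way to get a locally trained model onto the ESP32-CAM through Edge Impulse, I'd be happy to follow that instead.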
Thank you for any input
Cheers
Sam