I’ve trained an NVIDIA TAO model (specifically YOLOv4 with a MobileNetV2 backbone) and I want to deploy it onto the Grove Vision AI v2, which supports a model input of 3x224x224 and roughly 1.7 MB in size.
How can I post-process the model (prune, retrain, etc.) on Edge Impulse so that I can reduce its size? Any help on this would be greatly appreciated.
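For context, outside of Edge Impulse the only thing I’ve tried so far is plain post-training int8 quantization along these lines. This is just a minimal sketch, not the TAO/EI workflow: the SavedModel path is a placeholder and the calibration generator uses random data, which you’d replace with real preprocessed frames.

```python
# Sketch of full-integer post-training quantization with the TFLite converter.
# Assumptions: the TAO model has been exported to a TensorFlow SavedModel
# ("yolov4_mobilenetv2_savedmodel" is a hypothetical path) and the input is 1x224x224x3.
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Placeholder calibration data: swap in ~100 real frames, preprocessed and
    # scaled exactly the way they were during training.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("yolov4_mobilenetv2_savedmodel")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Force full-integer ops so the model can run on an int8 accelerator such as the Ethos-U55.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"int8 model size: {len(tflite_model) / 1024:.1f} KiB")
```

Int8 quantization roughly quarters the weight storage compared to float32, but on its own it hasn’t gotten me under the size budget, which is why I’m asking about pruning/retraining options inside Edge Impulse.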
Thanks for the quick reply! Unfortunately, at the moment I only have access to the Grove Vision AI v2, which is limited to 2 MB of SRAM (obviously not all of that SRAM is available for the model). I'd love some tips on target device options here. I'm currently using the Seeed Vision AI module, but the Vision AI v2 has a Cortex-M55 and an Ethos-U55.
Additionally, I observed that the TAO YOLO SSD models are smaller (~1.6 MB for the MobileNetV2 3x224x224 backbone), but like all the other TAO models, the accuracy (or precision score on EI) is really bad, about 9.2% (YOLOv4 - MobileNetV2 3x224x224). Do you know the reason for this? My project is: mouse vs cup - Dashboard - Edge Impulse
Also, I noticed here (NVIDIA TAO (Object detection & Images) | Edge Impulse Documentation) that not all backbones may be pre-trained by NVIDIA. Is that correct, or am I interpreting it wrong? Do you have a detailed guide distinguishing the pre-trained models from NVIDIA TAO and the models trained by EI on ImageNet?
Another quick question: how do I change the model being optimized in the EON Tuner?
I see you are working for Himax; are you engaging with our solutions team? I think you would be best placed to work with them on TAO, as it is something I don't have a lot of experience with yet (still waiting for my test device to arrive).