I’ve trained a model for object detection and it’s running on a Jetson Nano. I saw in the Edge Impulse documentation that the TensorRT library is not supported at the moment. I’m curious about the reason behind this; is it linked to the DSP blocks?
If I convert the TensorFlow model to ONNX, do I have a chance of making it work, or is it not worth the effort?
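For context, this is the conversion route I was considering, using tf2onnx and the trtexec binary that ships with JetPack. The paths and the opset choice are assumptions on my part, not something I’ve verified end-to-end:

```shell
# Install the TensorFlow -> ONNX converter (needs a working pip3 on the Nano)
pip3 install tf2onnx

# TensorFlow SavedModel -> ONNX; opset 11 is an assumption that seems to be
# commonly paired with the TensorRT version in JetPack 4.x
python3 -m tf2onnx.convert --saved-model ./saved_model --opset 11 --output model.onnx

# ONNX -> serialized TensorRT engine; trtexec ships with JetPack
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.trt --fp16
```

If the custom operator at the end of the network isn’t supported, I’d expect trtexec to fail at the parsing step, which would at least tell me quickly whether it’s worth pursuing.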
I also have a comment / question: the Jetson Nano ships with Python 3.6, while Edge Impulse requires Python 3.7. I had a lot of trouble upgrading Python on the Nano and getting the dependencies for OpenMV to work.
So my question would be: what is the simplest way to upgrade Python on the Nano? Is there a Docker image I could use to make this process easier?
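In case it helps, here is roughly the Dockerfile I was hoping to find. The base image tag and the availability of the deadsnakes PPA for arm64 are assumptions I haven’t verified on the Nano:

```dockerfile
# Hypothetical sketch, not verified on actual hardware
FROM nvcr.io/nvidia/l4t-base:r32.4.4

# Pull Python 3.7 from the deadsnakes PPA instead of touching the system
# Python 3.6 that the JetPack tools depend on
RUN apt-get update && \
    apt-get install -y software-properties-common && \
    add-apt-repository -y ppa:deadsnakes/ppa && \
    apt-get update && \
    apt-get install -y python3.7 python3.7-dev python3.7-venv && \
    rm -rf /var/lib/apt/lists/*

# Keep 3.7 in a venv so plain `python3` still resolves to the system interpreter
RUN python3.7 -m venv /opt/venv
ENV PATH="/opt/venv/bin:${PATH}"
```

The idea is to leave the system Python 3.6 alone, since other JetPack components depend on it, and only expose 3.7 inside the container’s virtualenv.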
@Keja the reason is that at the end of the neural network there’s a custom operator that performs the regression step to output the bounding boxes. This sounds easy, as this operator is used by every MobileNet SSD model, but no matter what we try, we cannot get it converted in a way that TensorRT understands. We’re currently working on a new object detection pipeline that does not depend on this operator, and once that’s in place we can support the GPU on the Jetson Nano out of the box.
If someone reading this actually has a functioning pipeline to go from a trained TFLite model => something running in TensorRT under C++, then I’ll buy you a beer.
I also have a comment / question: the Jetson Nano ships with Python 3.6, while Edge Impulse requires Python 3.7. I had a lot of trouble upgrading Python on the Nano and getting the dependencies for OpenMV to work.
I don’t think we have a hard dependency on 3.7; if you have a functioning OpenCV installation under 3.6, then it might just run as-is.
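A quick sanity check you can run on the Nano to see what `python3` resolves to and whether OpenCV is importable under that same interpreter (this assumes the apt-installed `python3-opencv` package):

```shell
# Print the interpreter version the `python3` command resolves to
python3 -c 'import sys; print("Python", sys.version.split()[0])'

# Check whether OpenCV is importable under that same interpreter
python3 -c 'import cv2; print("OpenCV", cv2.__version__)' \
    || echo "OpenCV not importable under this python3"
```

If the second command prints an OpenCV version under 3.6, it’s worth trying the tooling as-is before going through the pain of upgrading Python.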