I’m experimenting with running the .eim model generated in my EI project on a Jetson Nano. I would like to report the following:
- I’m facing the green screen issue. It is a known issue with OpenCV on the Jetson Nano, as reported here.
- I modified cv2.VideoCapture() in image.py in linux-sdk-python and installed the library from source on my machine to include the fix, but the code hangs after cv2.VideoCapture(). So I think this needs a fix somehow.
- I’m getting 1 fps with a Raspberry Pi 4, and with the Jetson Nano I’m getting 2 fps (with the green screen problem). Has the EI team experienced object detection on the Jetson Nano before? Is it normal to get only 2 fps?
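For context, the usual workaround for the CSI-camera green screen issue is to hand cv2.VideoCapture() an explicit GStreamer pipeline that uses nvarguscamerasrc and converts to BGR before handing frames to OpenCV. This is a minimal sketch of such a pipeline string (the exact caps values are assumptions; adjust width/height/framerate to your camera mode):

```python
def gstreamer_pipeline(width=1280, height=720, fps=30, flip=0):
    """Build a GStreamer pipeline string for the Jetson Nano CSI camera.

    nvarguscamerasrc captures via the Argus API, nvvidconv moves frames
    out of NVMM memory, and videoconvert produces the BGR frames that
    OpenCV expects (avoiding the green-tinted output).
    """
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        f"video/x-raw, format=BGRx ! videoconvert ! "
        f"video/x-raw, format=BGR ! appsink drop=true"
    )

# Usage (requires an OpenCV build with GStreamer support):
# import cv2
# cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
```

If image.py opens the camera with a bare device index instead of a pipeline like this, that would explain both the green frames and the hang.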
Hi @yahyatawil I presume that you’ve followed the steps in https://docs.edgeimpulse.com/docs/nvidia-jetson-nano to successfully set up your working environment. Generally I have not had any issues with object detection, and inferencing speed is about 200-300 ms, so you should be getting about 3-4 frames per second. Can you confirm that when you run the model using edge-impulse-linux-runner this baseline is working for you? I personally use a Logitech webcam and that works pretty well.
Yes, I followed the installation instructions. I’m using the CSI camera port with a Raspberry Pi camera. I will try edge-impulse-linux-runner later and report back. But again, running with the Python SDK and a Raspberry Pi CSI camera will probably lead to the green screen issue.
@yahyatawil The 1 fps and 2 fps figures are very low; I wonder if there’s something blocking in your CV pipeline. Running on CPU we see ~300 ms per inference on the Rpi4, and ~250 ms per inference on the Jetson Nano. Naturally we can run faster on the GPU (see https://github.com/edgeimpulse/example-standalone-inferencing-linux#tensorrt on how to build .eim files that do that), but we’re still having trouble compiling object detection models. Once we’ve published our new pipeline this’ll be resolved (but very annoying for the time being).
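The latency numbers above translate directly into an upper bound on frame rate: if inference is the only cost, throughput is simply 1000 / latency-in-ms. A quick sketch of that arithmetic, which also shows why 1-2 fps suggests time is being lost outside inference (e.g. in capture or color conversion):

```python
def fps_ceiling(latency_ms):
    """Upper bound on frames per second if per-frame inference
    latency is the only cost in the pipeline."""
    return 1000.0 / latency_ms

# ~300 ms/inference on the Rpi4   -> ceiling of ~3.3 fps
# ~250 ms/inference on the Jetson -> ceiling of 4.0 fps
```

Measured rates well below these ceilings (1 fps vs ~3.3, 2 fps vs 4) point at a blocking step in the capture/preprocessing path rather than the model itself.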