Unused CPU running object detection

I have 40–50% unused CPU but see 60–100 ms inference times in Docker on both an RPi4 and an Odroid N2+, which makes me believe I'm I/O bound on the camera.

I'm using a See3CAM_CU30 from e-con Systems, and we leverage its onboard MJPEG codec when running on a Jetson Nano.

I've tried modifying the gst-launch-1.0 parameters, but I get an "unsupported format" error which, if I'm understanding the architecture of edge-impulse-linux-runner correctly, comes from another process that picks up the .jpg files in /dev/shm/* and performs inference on them.

A few questions:

  • Any recommendations for exploring why I'm not using more CPU?
  • Is there documentation on edge-impulse-linux-runner? (If not, am I understanding the architecture correctly?)
  • Do the .jpgs in /dev/shm hold on to any memory? I would guess not, but perhaps they use up inodes?
  • Should those get cleaned up after stopping the process?
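For context, here is a quick sketch of how I'm watching the footprint of those files. Since /dev/shm is a tmpfs, anything left there occupies both RAM and an inode until it is unlinked (the paths are just standard tools, nothing Edge Impulse specific):

```shell
# /dev/shm is a tmpfs, so any leftover .jpg frames occupy RAM (and one
# inode each) until they are deleted; this shows both kinds of usage.
ls /dev/shm | head          # any stale frame files left behind?
df -h /dev/shm              # bytes of RAM used by the tmpfs
df -i /dev/shm              # inodes used on the tmpfs
```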

thanks!

Just a datapoint: I hacked gstreamer.js to set GST_DEBUG=4 and used the pipeline below, which works from a terminal but fails from edge-impulse-linux-runner with GStreamer reporting a syntax error. I console.log the same string as the spawn command and paste it into the terminal to test.

gst-launch-1.0 v4l2src device=/dev/video0 \
  ! image/jpeg,width=640,height=480 \
  ! jpegdec \
  ! tee name=t \
  t. ! queue ! videoconvert ! ximagesink \
  t. ! queue ! jpegenc ! multifilesink location=test%05d.jpg
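Related to my /dev/shm question above: if a pipeline like this is left running, multifilesink can be told to keep only the newest N files via its max-files property, so stale frames don't accumulate on the tmpfs. A sketch (untested on this camera; since the source already delivers JPEG, the decode/re-encode steps can be dropped when only files are needed):

```shell
# Write MJPEG frames straight to the tmpfs, keeping at most 10 files;
# multifilesink deletes the oldest file once max-files is exceeded.
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! image/jpeg,width=640,height=480 \
  ! multifilesink location=/dev/shm/test%05d.jpg max-files=10
```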

Hello @inteladata,

  1. Not entirely sure, but can you check whether you can grant more resources to Node.js?
  2. You can have a look at the source code: edge-impulse-linux-cli/runner.ts at master · edgeimpulse/edge-impulse-linux-cli · GitHub. Feel free to explore that repository; it's the source code for the Linux CLI.
  3. Do the .jpgs in /dev/shm hold on to any memory? Good question, I am not sure. @janjongboom, maybe you have an idea?
  4. That reminds me of an old thread: Runner.sock is not cleaned up after classification. It probably has not been fixed yet; I'll check again with our core engineering team.

Best,

Louis

hi @louis … quick update

The Node profiler suggests that resizeImage (which calls sharp.resize) is consuming the most cycles, but it's not clear to me why overall CPU usage is so low. I nice'd node and it didn't make a difference. I'm still investigating.
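One explanation I'm considering (an assumption on my part, not something I've confirmed in the runner's code): if the hot path of decode → resize → inference is effectively single-threaded, it can only ever saturate one core, and system-wide CPU meters average across cores. A rough sanity check of what "one busy thread" looks like on a given box:

```shell
# If the hot path is single-threaded, total CPU tops out at ~100/N percent
# on an N-core machine, which would explain 40-50% of the CPU sitting idle
# on a quad-core RPi4 or Odroid N2+.
cores=$(nproc)
pct=$((100 / cores))
echo "one saturated thread = ~${pct}% of total CPU on ${cores} cores"
```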

A couple of questions:

  1. Can I safely ignore the use of the word "classification" in documentation and code when working with FOMO? My guess is that the documentation hasn't caught up yet?
  2. I was looking for a standalone example with FOMO but am not finding anything specific to it. Do I need to adapt existing examples, e.g. using the OpenCV Python or C/C++ API to pull frames via V4L?
  3. In the C++ standalone examples there is a step to copy features into main.cpp. This is an enormous array of features when using FOMO. Am I barking up the wrong tree?

I feel like my best course of action is to peel back the layers of abstraction so that I can determine what is preventing full use of the CPU.

thanks!

Hello @inteladata,

Yes, resizing is indeed greedy, especially when the input image is very large. If your camera supports it, you can also try to grab the image at the desired size directly from the sensor, or at least collect frames at a lower resolution.
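For example, assuming v4l-utils is installed, you can list every format/resolution pair the sensor driver advertises and pick the one closest to your model's input size (this only queries the driver, so it needs the camera attached):

```shell
# Show every pixel format and frame size the camera driver exposes;
# grabbing at (or near) the model's input size avoids most of the resize cost.
v4l2-ctl -d /dev/video0 --list-formats-ext
```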

  1. The image classification part is present in the code so that it is also compatible with image classification projects. Can you point me to which part of the documentation is unclear, so I can see how to improve it, please?

  2. The standalone code for FOMO and for classic object detection is the same; when using FOMO, you can just ignore the width and height. example-standalone-inferencing-linux/camera.cpp at master · edgeimpulse/example-standalone-inferencing-linux · GitHub

  3. I believe the array you are talking about contains the raw pixel values of the picture, correct?

Best,

Louis

thanks @louis
I see that sharp uses libvips and is fast, but I would expect it to be even MORE greedy. I'll post some summary information from the Node profiler for context.

Lowest res on this camera is 640x480.

  1. I think, for example, it was mainly that FOMO detection was being performed in a classifier file: /usr/lib/node_modules/edge-impulse-linux/build/library/classifier/image-classifier.js vs. image-object-detector.js. It's not a big deal; I just wanted to be sure I was doing the right thing with FOMO (which it seems I am).
  2. I will take a look at camera.cpp. I see that the README for https://github.com/edgeimpulse/example-standalone-inferencing-linux doesn't explicitly address object detection, but you mention they share the same code base.
  3. Yes, I think the features appear to be the raw pixel values. Would I still paste those into main.cpp?