High model latency on Raspberry Pi 4

I am performing a real-time object classification task on a Raspberry Pi 4 with the Python SDK. The command I used to download the model is:

edge-impulse-linux-runner --download modelfile.eim

However, the model runs quite slowly (over 300 ms per classification).

Is there a way to use a quantised model, or some other way to reduce the classification time?

Hello @ElectricDragon,

You can pass the --quantized argument to your command; it will build and download the quantized version of the model. You can see all available arguments with the --help argument:

edge-impulse-linux-runner --help
Usage: edge-impulse-linux-runner [options]

Edge Impulse Linux runner 1.2.6

Options:
  -V, --version        output the version number
  --model-file <file>  Specify model file, if not provided the model will be
                       fetched from Edge Impulse
  --api-key <key>      API key to authenticate with Edge Impulse (overrides
                       current credentials)
  --download <file>    Just download the model and store it on the file system
  --clean              Clear credentials
  --silent             Run in silent mode, don't prompt for credentials
  --quantized          Download int8 quantized neural networks, rather than the
                       float32 neural networks. These might run faster on some
                       architectures, but have reduced accuracy.
  --enable-camera      Always enable the camera. This flag needs to be used to
                       get data from the microphone on some USB webcams.
  --dev                List development servers, alternatively you can use the
                       EI_HOST environmental variable to specify the Edge
                       Impulse instance.
  --verbose            Enable debug logs
  -h, --help           output usage information

Regards,

Louis

Also note that sometimes float32 models can be faster, as discussed here: How to get better Raspberry performance?
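Since either variant may be faster depending on the device, it can help to measure the average classification time yourself and compare the float32 and int8 models directly. Below is a minimal, generic timing sketch; the `runner.classify(features)` call shown in the comment is an assumption based on the SDK usage in this thread, and the stand-in workload is just for illustration:

```python
import time

def time_calls(fn, n=20):
    """Return the average wall-clock time (seconds) over n calls to fn()."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

# Stand-in workload for illustration; on the Pi you would wrap the
# runner's classify call instead (hypothetical usage):
#   avg = time_calls(lambda: runner.classify(features))
avg = time_calls(lambda: sum(range(10000)))
print(f"average: {avg * 1000:.2f} ms per call")
```

Run this once with the float32 model file and once with the quantized one, and keep whichever gives the lower average on your hardware.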

Unfortunately this is as fast as it’ll go on the Pi 4 until we have some new smaller models for object detection. This 300ms is already with full hardware acceleration on the CPU.

I tried the --quantized flag and ran the code, but got the following error:

Traceback (most recent call last):
  File "classify.py", line 130, in <module>
    main(sys.argv[1:])
  File "classify.py", line 97, in main
    for res, img in runner.classifier(videoCaptureDeviceId):
  File "/home/pi/.local/lib/python3.7/site-packages/edge_impulse_linux/image.py", line 61, in classifier
    res = self.classify(features)
  File "/home/pi/.local/lib/python3.7/site-packages/edge_impulse_linux/image.py", line 45, in classify
    return super(ImageImpulseRunner, self).classify(data)
  File "/home/pi/.local/lib/python3.7/site-packages/edge_impulse_linux/runner.py", line 60, in classify
    return self.send_msg(msg)
  File "/home/pi/.local/lib/python3.7/site-packages/edge_impulse_linux/runner.py", line 106, in send_msg
    raise Exception(resp["error"])
Exception: Classifying failed, error code was -13

@janjongboom can you help me?

@ElectricDragon Quantization is not going to help much here; float32 models are currently as fast as int8.

Oh okay…thank you for your help.