CLI and the Python SDK classify differently

Hi, I’m trying to classify images (one label per data item) using a Raspberry Pi 4, but I have a problem with it. When I run the impulse with the Edge Impulse Runner it classifies correctly. But since I want to do the identification locally and without internet, I need to use the Python example classify.py. The issue is that the latter gives me a totally wrong result.

If you could help me it would be great, because I have to deliver the work urgently and I did not expect this error.

Project ID: 176862

@ncasane

Can you share more on what is exactly wrong with results?

Secondly, the Edge Impulse Runner can run offline once you’ve downloaded the model file locally.
The easiest way is to download the model from the Studio (Deployment page).

Once you have the model.eim you can specify the model when invoking the runner as follows:

edge-impulse-linux-runner --model-file your_downloaded_model.eim

An alternative (more involved) way is to download the model with the Edge Impulse runner:

edge-impulse-linux-runner --download your_downloaded_model.eim --force-target runner-linux-aarch64

The --force-target flag ensures you get the model built for aarch64.
The benefit of this approach is that you can run this command on either the host or the target.

Hi, James!!

I didn’t know that I had to download the model file forcing the target. I have done it now, but the classification is still wrong.

When I use the Edge Impulse Runner (edge-impulse-linux-runner) I get this:

classifyRes 65ms. {
  Cilindre: '0.0000',
  Desconegut: '0.9999',
  Esfera: '0.0000',
  Quadrat: '0.0001',
  Rellotge: '0.0000'
}
classifyRes 51ms. {
  Cilindre: '0.0000',
  Desconegut: '0.9997',
  Esfera: '0.0000',
  Quadrat: '0.0002',
  Rellotge: '0.0000'
}
classifyRes 50ms. {
  Cilindre: '0.0000',
  Desconegut: '0.9999',
  Esfera: '0.0000',
  Quadrat: '0.0001',
  Rellotge: '0.0000'
}

This is the result I expect to get when there’s no object in the image, but when I run python3 classify.py modelfile.eim I get this instead:

(This is incorrect because there’s no object, and it should identify “Desconegut”.)

MODEL: /home/tdrnil/modelfile.eim
Loaded runner for "Nil Casañé / Detecció d'objectes"
Looking for a camera in port 0:
Camera V4L2 (480.0 x 640.0) found in port 0 
Camera V4L2 (480.0 x 640.0) in port 0 selected.
Cilindre: 0.01  Desconegut: 0.00        Esfera: 0.01    Quadrat: 0.01   Rellotge: 0.98  Rellotge

Cilindre: 0.00  Desconegut: 0.00        Esfera: 0.01    Quadrat: 0.01   Rellotge: 0.98  Rellotge

Cilindre: 0.01  Desconegut: 0.00        Esfera: 0.01    Quadrat: 0.02   Rellotge: 0.97  Rellotge

Cilindre: 0.00  Desconegut: 0.00        Esfera: 0.01    Quadrat: 0.00   Rellotge: 0.99  Rellotge

Cilindre: 0.01  Desconegut: 0.00        Esfera: 0.01    Quadrat: 0.01   Rellotge: 0.97  Rellotge

Cilindre: 0.01  Desconegut: 0.00        Esfera: 0.01    Quadrat: 0.01   Rellotge: 0.97  Rellotge

Cilindre: 0.00  Desconegut: 0.00        Esfera: 0.00    Quadrat: 0.01   Rellotge: 0.99  Rellotge

@ncasane

The Edge Impulse Runner automatically detects your host and downloads the model for that host. But if, for example, you’re on an x86_64 machine and would like to download a model for AARCH64, you can use --force-target for this.

By the way, are you running the above on the Raspberry Pi 4?

Yes, I’m running them on the Raspberry Pi 4, but I modified classify.py to export the resolution via the serial port.

@rjames

Hi! Have you been able to find out what the classification problem is?
I also tried creating another project with a new dataset; this is the project ID: 152847.

If it helps, I use a Raspberry Pi Camera 2.1, and when I switch between the CLI and Python I have to change the dtoverlay setting in /boot/config.txt:

  • dtoverlay=vc4-kms-v3d (Python)
  • dtoverlay=imx219 (Edge Runner)

@ncasane

With your model I was able to reproduce your issue. I get different results with the Python SDK and edge-impulse-linux-runner. We’ll look into the issue.

@ncasane,

Is it possible to test them both using the same overlay?

I tested with the Python SDK custom/classify.py and features from your project and got the same (correct) results. I compared the Python SDK, Example Standalone Inferencing for Linux (APP_CUSTOM), and Studio.

My guess is that with the different overlays you are feeding the model different input. Please try the above and share your findings.
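To make that comparison concrete, here is a minimal sketch for diffing the feature vectors the two pipelines produce. It assumes you dump each pipeline’s features to a comma-separated text file (the same format as the Raw features copied from Studio); the helper names are my own, not part of the SDK:

```python
def load_features(path):
    """Parse a comma-separated raw-features file (as copied from Studio)."""
    with open(path) as f:
        text = f.read().replace("\n", ",")
    return [float(v) for v in text.split(",") if v.strip()]

def max_abs_diff(a, b):
    """Largest element-wise difference between two feature vectors."""
    assert len(a) == len(b), "feature vectors differ in length"
    return max(abs(x - y) for x, y in zip(a, b))
```

If the two dumps differ significantly, the overlays really are feeding different input to the model.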

Hi, I can’t test it with the same overlay, because with the dtoverlay=imx219 I get this error when I try to use the Python SDK:

MODEL: /home/tdrnil/modelfile.eim
Loaded runner for "Nil Casañé / Detecció d'objectes v2"
Looking for a camera in port 0:
Traceback (most recent call last):
  File "/home/tdrnil/classify.py", line 148, in <module>
    main(sys.argv[1:])
  File "/home/tdrnil/classify.py", line 82, in main
    raise Exception('Cannot find any webcams')
Exception: Cannot find any webcams

It looks like cv2.VideoCapture(port) in get_webcams() is not returning any camera. You’ll have to debug this.
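To narrow that down, a small probe along these lines can show which ports OpenCV actually sees. cv2.VideoCapture is the same OpenCV call classify.py uses; find_camera_ports is a name I made up, and the opener parameter is only there so the logic can be exercised without a camera attached:

```python
def find_camera_ports(max_ports=5, opener=None):
    """Probe video ports 0..max_ports-1 and return the ones that open."""
    if opener is None:
        import cv2  # OpenCV, the dependency classify.py already uses
        opener = cv2.VideoCapture
    found = []
    for port in range(max_ports):
        cap = opener(port)
        if cap.isOpened():
            found.append(port)
        cap.release()
    return found
```

Running print(find_camera_ports()) under each overlay should tell you whether the imx219 overlay hides the V4L2 device from OpenCV entirely.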

Can you not run edge-impulse-linux-runner on the target instead? This runs offline:

  1. Deploy an AARCH64 (Linux) .eim.
  2. Copy this “downloaded.eim” to the target.
  3. Run the following (offline):
edge-impulse-linux-runner --model-file downloaded.eim

@rjames

Hi!
I have been trying to run classification with the C++ example, but I received this error when trying to execute APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j:

make: *** No targets specified and no makefile found.  Stop.

I also tried what you told me about running the Edge Impulse runner offline, but this isn’t what I need, because I want to export the classification to an Arduino.

@ncasane

Hi! Could you have been in the wrong directory? You should be in the project root; the project root directory contains the Makefile (that couldn’t be found).

I hope this helps

Hi!
I ran the command APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j in the correct directory, but I still get this error:

In file included from ./edge-impulse-sdk/classifier/ei_run_classifier.h:56,
                 from source/custom.cpp:27:
./edge-impulse-sdk/classifier/inferencing_engines/tflite_eon.h:30:10: fatal error: tflite-model/trained_model_compiled.h: No such file or directory
   30 | #include "tflite-model/trained_model_compiled.h"
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [Makefile:137: source/custom.o] Error 1
make: *** Waiting for unfinished jobs....
edge-impulse-sdk/tensorflow/lite/micro/kernels/rfft2d.cc: In function ‘TfLiteStatus tflite::ops::micro::rfft2d::Eval(TfLiteContext*, TfLiteNode*)’:
edge-impulse-sdk/tensorflow/lite/micro/kernels/rfft2d.cc:163:26: warning: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘unsigned int’} and ‘int’ [-Wsign-compare]
  163 |   for (size_t ix = 1; ix < input->dims->size; ix++) {
      |                       ~~~^~~~~~~~~~~~~~~~~~~
edge-impulse-sdk/tensorflow/lite/micro/kernels/rfft2d.cc:167:26: warning: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘unsigned int’} and ‘int’ [-Wsign-compare]
  167 |   for (size_t ix = 1; ix < output->dims->size; ix++) {
      |                       ~~~^~~~~~~~~~~~~~~~~~~~
edge-impulse-sdk/tensorflow/lite/micro/kernels/rfft2d.cc:171:28: warning: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘unsigned int’} and ‘int’ [-Wsign-compare]
  171 |   for (size_t row = 0; row < input->dims->data[0]; row++) {
      |                        ~~~~^~~~~~~~~~~~~~~~~~~~~~
edge-impulse-sdk/tensorflow/lite/micro/kernels/complex_abs.cc: In function ‘TfLiteStatus tflite::ops::micro::complex_abs::Eval(TfLiteContext*, TfLiteNode*)’:
edge-impulse-sdk/tensorflow/lite/micro/kernels/complex_abs.cc:67:34: warning: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘unsigned int’} and ‘int’ [-Wsign-compare]
   67 |   for (size_t dim_ix = 0; dim_ix < input->dims->size; dim_ix++) {
      |                           ~~~~~~~^~~~~~~~~~~~~~~~~~~
edge-impulse-sdk/tensorflow/lite/micro/kernels/real.cc: In function ‘TfLiteStatus tflite::ops::micro::real::Prepare(TfLiteContext*, TfLiteNode*)’:
edge-impulse-sdk/tensorflow/lite/micro/kernels/real.cc:58:34: warning: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘unsigned int’} and ‘int’ [-Wsign-compare]
   58 |   for (size_t dim_ix = 0; dim_ix < input->dims->size; dim_ix++) {
      |                           ~~~~~~~^~~~~~~~~~~~~~~~~~~
edge-impulse-sdk/tensorflow/lite/micro/kernels/real.cc:63:34: warning: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘unsigned int’} and ‘int’ [-Wsign-compare]
   63 |   for (size_t dim_ix = 0; dim_ix < output->dims->size; dim_ix++) {
      |                           ~~~~~~~^~~~~~~~~~~~~~~~~~~~
edge-impulse-sdk/tensorflow/lite/micro/kernels/real.cc: In function ‘TfLiteStatus tflite::ops::micro::real::RealEval(TfLiteContext*, TfLiteNode*)’:
edge-impulse-sdk/tensorflow/lite/micro/kernels/real.cc:79:34: warning: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘unsigned int’} and ‘int’ [-Wsign-compare]
   79 |   for (size_t dim_ix = 0; dim_ix < input->dims->size; dim_ix++) {
      |                           ~~~~~~~^~~~~~~~~~~~~~~~~~~
edge-impulse-sdk/tensorflow/lite/micro/kernels/real.cc: In function ‘TfLiteStatus tflite::ops::micro::real::ImagEval(TfLiteContext*, TfLiteNode*)’:
edge-impulse-sdk/tensorflow/lite/micro/kernels/real.cc:97:34: warning: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘unsigned int’} and ‘int’ [-Wsign-compare]
   97 |   for (size_t dim_ix = 0; dim_ix < input->dims->size; dim_ix++) {
      |                           ~~~~~~~^~~~~~~~~~~~~~~~~~~
At global scope:
cc1plus: note: unrecognized command-line option ‘-Wno-asm-operand-widths’ may have been intended to silence earlier diagnostics
At global scope:
cc1plus: note: unrecognized command-line option ‘-Wno-asm-operand-widths’ may have been intended to silence earlier diagnostics
At global scope:
cc1plus: note: unrecognized command-line option ‘-Wno-asm-operand-widths’ may have been intended to silence earlier diagnostics
nil@tdrnil:~/example-standalone-inferencing-linux $ APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j
g++ -Wall -g -Wno-strict-aliasing -I. -Isource -Imodel-parameters -Itflite-model -Ithird_party/ -Os -DNDEBUG -DEI_CLASSIFIER_ENABLE_DETECTION_POSTPROCESS_OP=1 -g -Wno-asm-operand-widths -DEI_CLASSIFIER_USE_FULL_TFLITE=1 -Itensorflow-lite/ -std=c++14 -c source/custom.cpp -o source/custom.o
In file included from ./edge-impulse-sdk/classifier/ei_run_classifier.h:21,
                 from source/custom.cpp:27:
./model-parameters/model_metadata.h:120:2: error: #error "Cannot use full TensorFlow Lite with EON"
  120 | #error "Cannot use full TensorFlow Lite with EON"
      |  ^~~~~
In file included from source/custom.cpp:27:
./edge-impulse-sdk/classifier/ei_run_classifier.h: In function ‘EI_IMPULSE_ERROR {anonymous}::run_inference(const ei_impulse_t*, ei::matrix_t*, ei_impulse_result_t*, bool)’:
./edge-impulse-sdk/classifier/ei_run_classifier.h:130:31: error: ‘run_nn_inference’ was not declared in this scope; did you mean ‘run_inference’?
  130 |     EI_IMPULSE_ERROR nn_res = run_nn_inference(impulse, fmatrix, result, debug);
      |                               ^~~~~~~~~~~~~~~~
      |                               run_inference
In file included from source/custom.cpp:28:
./inc/bitmap_helper.h: In function ‘int create_bitmap_file(const char*, uint16_t*, size_t, size_t)’:
./inc/bitmap_helper.h:61:22: warning: comparison of integer expressions of different signedness: ‘int’ and ‘size_t’ {aka ‘unsigned int’} [-Wsign-compare]
   61 |     for(int i = 0; i < h; i++) {
      |                    ~~^~~
./inc/bitmap_helper.h: In function ‘int create_bitmap_file(const char*, float*, size_t, size_t)’:
./inc/bitmap_helper.h:121:22: warning: comparison of integer expressions of different signedness: ‘int’ and ‘size_t’ {aka ‘unsigned int’} [-Wsign-compare]
  121 |     for(int i = 0; i < h; i++) {
      |                    ~~^~~
source/custom.cpp: In function ‘int main(int, char**)’:
source/custom.cpp:77:96: warning: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 3 has type ‘std::vector<float>::size_type’ {aka ‘unsigned int’} [-Wformat=]
   77 |         printf("The size of your 'features' array is not correct. Expected %d items, but had %lu\n",
      |                                                                                              ~~^
      |                                                                                                |
      |                                                                                                long unsigned int
      |                                                                                              %u
   78 |             EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, raw_features.size());
      |                                                 ~~~~~~~~~~~~~~~~~~~                             
      |                                                                  |
      |                                                                  std::vector<float>::size_type {aka unsigned int}
At global scope:
cc1plus: note: unrecognized command-line option ‘-Wno-asm-operand-widths’ may have been intended to silence earlier diagnostics
make: *** [Makefile:137: source/custom.o] Error 1

What should I do?

@ncasane

It looks like you deployed the C++ library with the Enable EON Compiler option checked and are trying to use USE_FULL_TFLITE. The two do not go together. Notice the error:

./model-parameters/model_metadata.h:120:2: error: #error "Cannot use full TensorFlow Lite with EON"
120 | #error "Cannot use full TensorFlow Lite with EON"

  1. Disable the EON Compiler and re-export your Edge Impulse C++ Library.
  2. Remove the previous C++ library and put the new library in its place.
  3. Clean your build with APP_CUSTOM=1 ... make clean
  4. Build as before.

Hi!
I have created the custom build, but now I have two questions. The first is that I don’t understand how to execute the identification. The other is that I need to export the final object that has been identified; is that possible with the C++ SDK?

@ncasane

I think I understood your previous post. I’ll try to answer.

You’ve built the custom app (APP_CUSTOM=1). You can use this to test the model on your target.

Go to your project and grab raw features:

  1. Go to Live classification
  2. Load a test sample (Load sample)
  3. Copy the Raw features and paste into a file, say features.txt.

Then you run the custom app as follows:

./build/custom features.txt

and compare with the results you get in Studio.
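If you want to script that comparison, a small sketch like this could drive the app and capture its output (run_custom is my name, and the ./build/custom path comes from the steps above; adjust both to your setup):

```python
import subprocess

def run_custom(features_path, app="./build/custom"):
    """Run the standalone app on a raw-features file and return its stdout."""
    result = subprocess.run(
        [app, features_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

You can then print or parse the captured stdout and check the scores against the values Studio shows for the same sample.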

If you want to get your camera sensor integrated you can build the APP_CAMERA=1 application. Note that you’ll have to build OpenCV for your target. I wonder if you’ll encounter the same OpenCV issues you had previously with the Python app. It’s worth a try.
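On the earlier point about exporting the classification to an Arduino: once you have the scores from either the Python SDK or the custom app, something like this sketch could push the top label over serial. pyserial is assumed to be installed separately, and the /dev/ttyACM0 port and the one-line message format are my own choices, not anything Edge Impulse defines:

```python
def format_result(scores):
    """Format the winning label and its score as a one-line message."""
    label, prob = max(scores.items(), key=lambda kv: kv[1])
    return f"{label},{prob:.2f}\n"

def send_result(scores, port="/dev/ttyACM0", baud=115200):
    """Send the formatted result to the Arduino over a serial port."""
    import serial  # pyserial (pip install pyserial)
    with serial.Serial(port, baud, timeout=1) as ser:
        ser.write(format_result(scores).encode("utf-8"))
```

On the Arduino side you would then read up to the newline and split on the comma to recover the label and score.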
