ESP32 Arduino Sketch, always predicting same class

Hi guys,
I might be doing something wrong, but the behaviour is suspicious enough that I prefer to post a message here.
I have trained a binary image classifier; the confusion matrix is not fantastic, but it is not so bad either.
I then deployed it to my ESP32 camera, but I always see the same prediction with high (> 0.9) probability, regardless of what the camera sees.
The training set was produced with the same ESP32, so I was not expecting such a problem. Please note that if I use my phone (in classification mode), I see much better results.

Any clues?

Thanks

Hello @edge7,

Several people have had the same issue… I definitely need to take some time this week to have a deeper look.
Are you using the Basic example or the Advanced one from the GitHub repository?

Note that in the basic one, we just do a cutout of the data instead of a proper resize, which might explain the lack of accuracy; a simplified sketch of the difference is below.
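
For context, here is roughly what the two approaches do (a simplified sketch with illustrative function and buffer names, not the exact repository code; both buffers are RGB888):

#include <stdint.h>

// Cutout: copy a fixed dstW x dstH window from the top-left of the
// source frame, ignoring everything outside that window
void cutout(const uint8_t *src, int srcW,
            uint8_t *dst, int dstW, int dstH) {
    for (int y = 0; y < dstH; y++) {
        for (int x = 0; x < dstW; x++) {
            const uint8_t *p = src + 3 * (y * srcW + x);
            uint8_t *q = dst + 3 * (y * dstW + x);
            q[0] = p[0]; q[1] = p[1]; q[2] = p[2];
        }
    }
}

// Resize: nearest-neighbour sampling across the whole source frame,
// so the model sees the full field of view it was trained on
void resizeNearest(const uint8_t *src, int srcW, int srcH,
                   uint8_t *dst, int dstW, int dstH) {
    for (int y = 0; y < dstH; y++) {
        for (int x = 0; x < dstW; x++) {
            int sx = x * srcW / dstW;
            int sy = y * srcH / dstH;
            const uint8_t *p = src + 3 * (sy * srcW + sx);
            uint8_t *q = dst + 3 * (y * dstW + x);
            q[0] = p[0]; q[1] = p[1]; q[2] = p[2];
        }
    }
}

If the training images cover the full frame but inference only sees a cutout, the model effectively gets a different input distribution at runtime.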

Also, how many classes do you have on your project?

Regards,

Louis

Hi guys,
sorry for pestering you over the last few days, but I am trying to finish my project.
What I am doing
Binary image classification
What I am using as hardware
ESP32 Cam
How I have created the training set
I have got around 1000 pictures directly from the ESP32 (JPEG pictures) and sent those to my local PC over HTTP, then uploaded them to your platform. The dataset is pretty balanced. A trimmed sketch of the upload step is below.
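
For reference, the capture-and-upload step looks roughly like this (a trimmed sketch; the endpoint URL is a placeholder for my local PC):

#include "esp_camera.h"
#include <WiFi.h>
#include <HTTPClient.h>

// Placeholder endpoint on the local PC that stores the incoming JPEG
const char *kUploadUrl = "http://192.168.1.10:8000/upload";

void sendFrame() {
    camera_fb_t *fb = esp_camera_fb_get();   // grab one JPEG frame
    if (!fb) {
        Serial.println("Camera capture failed");
        return;
    }
    HTTPClient http;
    http.begin(kUploadUrl);
    http.addHeader("Content-Type", "image/jpeg");
    int code = http.POST(fb->buf, fb->len);  // body is the raw JPEG bytes
    Serial.printf("Upload returned HTTP %d\n", code);
    http.end();
    esp_camera_fb_return(fb);                // hand the buffer back to the driver
}
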
Which model I then trained
MobileNetV1 0.25 (no final dense layer, 0.1 dropout).
I am able to use the above model without out-of-memory problems. The confusion matrix is good enough.
Which code I am using
A modified version of the advanced classification example. I do not need the web server and stream (saving memory), which is what lets me fit MobileNetV1 0.25; that model is not usable with the stock advanced classification sketch.

Code is here.
Which problem I have
When I upload the model to the ESP32, the device always predicts the same class. When I use my phone, switching to classification mode, the results are better. Please note that, as said above, the training set comes directly from the ESP32 cam. A trimmed sketch of how I print the result follows.
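
The part of my loop that prints the result looks roughly like this (trimmed; the names are the standard Edge Impulse SDK ones):

// Run inference and print every class probability, so a stuck
// output is immediately visible on the serial monitor
ei_impulse_result_t result = { 0 };
EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
if (err != EI_IMPULSE_OK) {
    Serial.printf("run_classifier failed (%d)\n", err);
    return;
}
for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    Serial.printf("    %s: %.5f\n",
                  result.classification[ix].label,
                  result.classification[ix].value);
}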

Thanks

Hello @edge7,

I see on your project that at the moment, your model is not trained.
Could you train it again so I can download your custom version of the arduino library and check your accuracy on Edge Impulse Studio please?

Regards,

Louis

Sure mate.
If you want to use my code, there are a couple of things to change (SSID…).
Also, there is a small part where I send the JPEG in an HTTP request (which does not make sense for you at the moment).
If we are able to find a solution, I would be happy to create a cleaner example and send a PR to your repo. The important thing to note is that, if we get rid of the streaming, we have more RAM to allocate for TensorFlow, which allows users to run bigger (and probably better) models; a quick way to check this is sketched below.
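
A quick (untested here) way to see the gain is to print the free memory once the streaming code is gone, before TensorFlow allocates its arena:

// Free internal heap available for the TensorFlow arena
Serial.printf("Free heap:  %u bytes\n", ESP.getFreeHeap());
// Free external PSRAM, if the board exposes it
#ifdef BOARD_HAS_PSRAM
Serial.printf("Free PSRAM: %u bytes\n", ESP.getFreePsram());
#endif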

By the way, the training is done and the accuracy on the validation set looks good to me.

Thanks for your time.

Ah, one more note:
In my current sketch I have got the following files:

  • Advance-Image-Classification, modified.
  • The camera_index.h (untouched)
  • The camera_pins.h (untouched)

I got rid of app_http.cpp and moved the relevant classification functions into the Advance-Image-Classification file.

Hi @louis, I have collected some more data and redone the training; I will let you know if the situation gets better. If you have any updates or tips, please share.

Thanks

Hi @edge7,

So I managed to run your project with your code:

As you can see, I was trying to take a picture of your image displayed on my laptop, so the conditions are definitely not ideal, but it actually recognized a cat on the last try :smiley:

So, a few notes in case they help:

I am using Arduino IDE v1.8.15
In the Boards Manager, I have installed the esp32 boards version 1.0.6 (board package source: https://dl.espressif.com/dl/package_esp32_index.json)
I have an AI Thinker board with the OV2640 camera module.

Regards,

Louis

Hi @louis, how have you run it?
Have you flashed the code in the ESP32 and then pointed the camera to your laptop?

Yes :smiley:

Also, an additional comment from @janjongboom:
To "see" what your camera sees, you can print the raw data (like 0x383c2b, 0x343827, 0x2b2f1e, ...) and then feed it into a useful tool from our CLI, edge-impulse-framebuffer2jpg.

I also tested your model with https://github.com/edgeimpulse/example-standalone-inferencing, and it works as expected.

Regards,

Louis

OK, a bit weird, but by the way, you can see a lot of 'none' predictions, don't you?
Two points:
Did you download the 8-bit integer or the float model? EON or not EON?
How can I print the buffer?

I will try again because I feel I am close to success! :slight_smile:

Hello @edge7,

Yes, I downloaded the default version (int8, EON).

How can I print the buffer?
I guess you could do something similar to this (I haven't tried this piece of code, but I think it should work; add a delay(10) somewhere if it prints too fast and you lose information in the for loop):

// Convert the frame to RGB888 so each pixel is 3 bytes (R, G, B)
bool converted = fmt2rgb888(fb->buf, fb->len, fb->format, out_buf);
if (!converted) {
    Serial.println("fmt2rgb888 failed");
    return;
}
Serial.print("raw data, total length: ");
Serial.println(EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE);
for (int i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; i += 3) {
    // Zero-pad each byte so values below 0x10 keep two digits,
    // otherwise the dumped buffer cannot be parsed back into pixels
    char pixel[12];
    snprintf(pixel, sizeof(pixel), "0x%02x%02x%02x, ",
             out_buf[i], out_buf[i + 1], out_buf[i + 2]);
    Serial.print(pixel);
}
Serial.println("");

Then pass the printed raw hex values to:

edge-impulse-framebuffer2jpg -w 96 -h 96 -o test.jpeg -b "<your-frame-buffer>"

You can also check the CLI tool's help:

edge-impulse-framebuffer2jpg --help                                                                                  
Usage: edge-impulse-framebuffer2jpg [options]

Dump framebuffer as JPG file

Options:
  -V, --version                 output the version number
  -b --framebuffer <base64>     Framebuffer in base64 format or as a list of RGB888 values
  -f --framebuffer-file <file>  File with framebuffer in base64 format or as a list of RGB888 values
  -w --width <n>                Width of the framebuffer
  -h --height <n>               Height of the framebuffer
  -o --output-file <file>       Output file
  -h, --help                    output usage information

Regards
