ESP-EYE memory errors on deployment

Hello,

We are trying to deploy the example model from the tutorial linked in the GitHub README (https://www.survivingwithandroid.com/tinyml-esp32-cam-edge-image-classification-with-edge-impulse/) on our ESP-EYE device.

When using the most basic model (MobileNetV2 96x96 0.05) in Edge Impulse, the deployment works, but the model is not accurate. Every other model fails with one of the following errors:

  1. When deploying the model with the default partition scheme, we get the following error:
    WiFi connected
    Starting web server on port: '80'
    Starting stream server on port: '81'
    Camera Ready! Use http://192.168.1.158 to connect
    Capture image
    Edge Impulse standalone inferencing (Arduino)
    ERR: Failed to run DSP process (-1002)
    run_classifier returned: -5

  2. When deploying the model in the Arduino IDE using the "Huge APP" partition scheme, we get the following error:
    WiFi connected
    Starting web server on port: '80'
    Starting stream server on port: '81'
    Camera Ready! Use 'http://192.168.1.158' to connect
    Capture image
    Edge Impulse standalone inferencing (Arduino)
    ERR: failed to allocate tensor arena
    Failed to allocate TFLite arena (error code 1)
    run_classifier returned: -6

The ESP-EYE has 4MB of flash available.
According to the Arduino IDE, the sketch itself takes ~1.2MB of that.
According to the Edge Impulse website, none of the models should need more than 1MB of additional memory. However, it seems that memory is the issue here.
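
For reference, a minimal sketch along these lines (not from the tutorial; the ESP.* and psramFound() calls come from the standard arduino-esp32 core) can be used to check how much heap and PSRAM is actually free at runtime:

#include <Arduino.h>

void setup() {
  Serial.begin(115200);

  // Report how much internal heap and external PSRAM is really available;
  // the TFLite tensor arena has to fit in one of these at runtime.
  Serial.printf("PSRAM found: %s\n", psramFound() ? "yes" : "no");
  Serial.printf("Total PSRAM: %u bytes\n", ESP.getPsramSize());
  Serial.printf("Free PSRAM:  %u bytes\n", ESP.getFreePsram());
  Serial.printf("Total heap:  %u bytes\n", ESP.getHeapSize());
  Serial.printf("Free heap:   %u bytes\n", ESP.getFreeHeap());
}

void loop() {
  delay(10000);
}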

Adding a screenshot of our board settings in the Arduino IDE:

Can you please advise on how we can get the more complex models to work on our device?
Thank you!

Hello @OfirSagi,

You probably selected the camera model using the following line:

#define CAMERA_MODEL_AI_THINKER // Has PSRAM

In your case that won't work; you need to define and select something like this instead:

#define CAMERA_MODEL_ESP_EYE
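
For context, in the stock CameraWebServer-style sketch the camera model is chosen by leaving exactly one of those #define lines uncommented before camera_pins.h is included, roughly like this (the macro names assume the standard camera_pins.h shipped with the esp32-camera examples):

// Pick exactly one camera model *before* including camera_pins.h.
// Macro names follow the standard CameraWebServer example.
//#define CAMERA_MODEL_WROVER_KIT  // Has PSRAM
//#define CAMERA_MODEL_AI_THINKER  // Has PSRAM
#define CAMERA_MODEL_ESP_EYE       // Has PSRAM

#include "camera_pins.h"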

If you want, I created a repo (inspired by the tutorial you shared) here: https://github.com/edgeimpulse/example-esp32-cam
The ESP-EYE board definition is available in that repo.

Also, it has 3 examples:

  • Inference on boot: doesn't need Wi-Fi; it just runs the inference when the board boots and saves the original image plus the inferred one to the SD card if one is present (see the rough sketch after this list)
  • Basic example: just runs a basic web interface like the one in the Surviving with Android tutorial
  • Advanced example: more complex, but with more options on the web interface
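
Just to give an idea of what the inference part of those examples boils down to (heavily simplified; the real code also drives the camera, resizes the frame and handles the SD card), it is basically a run_classifier() call fed through a signal_t. The header name and the feature-buffer filling below are placeholders:

// Heavily simplified sketch of the inference call; <your_project>_inferencing.h
// stands for whatever header your Edge Impulse Arduino deployment exports.
#include <your_project_inferencing.h>
#include <string.h>

// Feature buffer; in the real examples this is filled from the camera frame.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback that hands slices of the feature buffer to the classifier.
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

void classify_current_frame() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_feature_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
    if (err != EI_IMPULSE_OK) {
        // The -5 / -6 codes in your logs come back through this return value.
        ei_printf("run_classifier returned: %d\n", err);
        return;
    }

    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        ei_printf("%s: %.5f\n", result.classification[i].label,
                  result.classification[i].value);
    }
}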

When I tested this example, I was using the AI Thinker board, and I noticed that the MobileNetV2 models could only work with 48x48 images. You can try MobileNetV1 (0.01, for example) with 96x96 images instead; that one should run on the ESP.

Unfortunately, the ESP32 is not officially supported by Edge Impulse, so we have not yet had time to optimize the memory usage.

Regards,

Louis
