ESP32 CAM memory issues after a while - failed to allocate tensor arena

Help needed - has anyone experienced this? Am I doing something wrong? Should I move to object detection rather than classification?

I am running an object classification task on an ESP32-CAM with a classifier (48px x 48px). It works for a number of inferences and then runs into the error below. It feels like a memory leak, since the failure only kicks in after a certain time.
With a larger network (160px x 160px) the effect kicks in after a single successful inference; with the smaller network it takes around 20 inferences.
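
One way to check this is to log the free heap before each inference; below is a minimal sketch on top of the standard ESP-IDF heap API (the helper name and where it is called from are illustrative, not part of the example):

#include <Arduino.h>
#include <esp_heap_caps.h>

// Prints the total free internal heap and the largest contiguous free block.
// The tensor arena needs one contiguous allocation, so fragmentation matters
// as much as the total amount of free memory.
void log_heap(const char *tag) {
    Serial.printf("[%s] free: %u bytes, largest block: %u bytes\n",
                  tag,
                  (unsigned) heap_caps_get_free_size(MALLOC_CAP_8BIT),
                  (unsigned) heap_caps_get_largest_free_block(MALLOC_CAP_8BIT));
}

If the values printed at the top of loop() drop on every iteration, something allocated during capture or inference is never released.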

rst:0x1 (POWERON_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0030,len:1344
load:0x40078000,len:13964
load:0x40080400,len:3600
entry 0x400805f0
Edge Impulse Inferencing Demo
Camera initialized

Starting continious inference in 2 seconds...
Predictions (DSP: 2 ms., Classification: 359 ms., Anomaly: 0 ms.): 
    chick: 0.03906
    unknown: 0.96094
Predictions (DSP: 2 ms., Classification: 359 ms., Anomaly: 0 ms.): 
    chick: 0.07422
    unknown: 0.92578
Predictions (DSP: 2 ms., Classification: 359 ms., Anomaly: 0 ms.): 
    chick: 0.07031
    unknown: 0.92969
Predictions (DSP: 2 ms., Classification: 358 ms., Anomaly: 0 ms.): 
    chick: 0.08203
    unknown: 0.91797
Predictions (DSP: 2 ms., Classification: 358 ms., Anomaly: 0 ms.): 
    chick: 0.06250
    unknown: 0.93750
Predictions (DSP: 2 ms., Classification: 359 ms., Anomaly: 0 ms.): 
    chick: 0.05859
    unknown: 0.94141

...............

ERR: failed to allocate tensor arena
Failed to initialize the model (error code 1)
ERR: Failed to run classifier (-6)
ERR: failed to allocate tensor arena
Failed to initialize the model (error code 1)
ERR: Failed to run classifier (-6)
ERR: failed to allocate tensor arena
Failed to initialize the model (error code 1)
ERR: Failed to run classifier (-6)
ERR: failed to allocate tensor arena
Failed to initialize the model (error code 1)
ERR: Failed to run classifier (-6)


Apart from the error above, I am happy with the way it runs.

Steps to reproduce:
Deployment option: Arduino Library
Executing the esp32/esp_camera.cxx example in classifier mode with #define CAMERA_MODEL_AI_THINKER, using Arduino IDE 1.8 and esp32 core 2.0.11
Project ID 327534

@floda Based on my experience, the error you are encountering is a memory issue: the size of your model exceeds the RAM capacity of your microcontroller. To avoid this, select the target device during training so that the model is optimized for that specific device. After feature exploration you can find the On-Device Performance results, which show the RAM, ROM, and latency for both the optimized (Int8) and the unquantized (Float32) model. If the RAM usage of your model exceeds the specifications of your microcontroller, you will need to retrain it and adjust some parameters in your Neural Network Settings.

Please share a screenshot of your model's On-device performance before deployment and let me know if this works; if not, I can suggest other alternatives.

“In one of my past projects, I needed to switch to transfer learning for object detection and used the MobileNetV1 model for training. I set the image size to 96px x 96px with RGB color depth in the Image data block. It’s important to check the number of features on the input layer before starting the training, because a large number can result in a bigger model size.”
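
For reference, a 96px x 96px RGB input already means 96 x 96 x 3 = 27,648 raw input features, while 160px x 160px RGB means 76,800, so the input resolution alone has a large effect on the feature count and the memory needed at inference time.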

Hi, and thanks for the reply. The model fits perfectly in RAM, and I am already able to run multiple inferences, as you can see in the log. If I use a bigger model, the memory allocation problem just occurs earlier. The artefact I am seeing feels more like memory that is not being freed.

I think I will try this too. I'll let you know. Best

Hello @floda,

I’m indeed suspecting an issue related to allocated memory that is not freed.
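
For context, the capture-and-classify loop in this kind of sketch follows the pattern below (simplified and illustrative, not the verbatim example code). The important part is that every buffer acquired inside the loop, the camera frame buffer in particular, is handed back on every iteration and on every early-exit path; otherwise the heap shrinks until the tensor arena can no longer be allocated:

#include "esp_camera.h"        // esp_camera_fb_get / esp_camera_fb_return
#include "img_converters.h"    // fmt2rgb888

// One capture-and-convert iteration (illustrative helper, not the library example).
bool capture_frame(uint8_t *rgb_buf /* pre-allocated RGB888 buffer */) {
    camera_fb_t *fb = esp_camera_fb_get();     // acquire a frame buffer from the driver
    if (!fb) {
        return false;                          // nothing acquired, nothing to release
    }

    bool converted = fmt2rgb888(fb->buf, fb->len, fb->format, rgb_buf);

    esp_camera_fb_return(fb);                  // always return the frame buffer,
                                               // on success and on failure alike

    return converted;                          // run_classifier() then works on rgb_buf
}

A path that skips esp_camera_fb_return() (or a free() of a temporary buffer) would leak one buffer per iteration, which would match the larger model failing after a single inference and the smaller one after roughly twenty.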

Have you implemented a custom piece of code, or are you just using the default example?

Best,

Louis

Hello, I encountered the same issue with the default example.

Hi Louis,

No custom code added, just the standard example. I didn't understand the inferencing library well enough to shed more light on what is happening.

Thanks and best

Hi. Looks like it was fixed. Thank you
