Error "Arena size is too small for all buffers" while inferencing audio project on XIAO ESP32-S3

Hi, I was trying to run inference for an audio KWS project on a XIAO ESP32-S3 with the Sense HAT. The device was working fine with a previous audio project; I'm only facing issues with this particular project. Thanks in advance.

ERR: Failed to run classifier (-3)
Arena size is too small for all buffers. Needed 15280 but only 12160 was available.
AllocateTensors() failed
ERR: Failed to run classifier (-3)
Edge Impulse Inferencing Demo
Inferencing settings:
Interval: 0.062500 ms.
Frame size: 16000
Sample length: 1000 ms.
No. of classes: 2
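
For context on what this error means: the Edge Impulse library runs the model with TensorFlow Lite Micro, which allocates every tensor out of one fixed byte buffer (the "arena"). When the memory planner needs more than that buffer holds, it prints the "Arena size is too small" message, AllocateTensors() fails, and run_classifier() returns -3. The sketch below only illustrates that mechanism (it is not the Edge Impulse SDK source); the model array, the op list, and the 12 kB arena size are placeholders, and the exact MicroInterpreter constructor varies between TFLite Micro versions.

    // Illustrative TensorFlow Lite Micro pattern, not the Edge Impulse SDK source.
    #include <cstdint>
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
    #include "tensorflow/lite/schema/schema_generated.h"

    extern const unsigned char g_model_data[];   // placeholder for a model flatbuffer

    // If this constant is smaller than what the model's tensors need,
    // AllocateTensors() below is where things fall over.
    constexpr int kTensorArenaSize = 12 * 1024;
    alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

    bool init_model() {
        const tflite::Model* model = tflite::GetModel(g_model_data);

        // Placeholder op list; a real model registers exactly the ops it uses.
        static tflite::MicroMutableOpResolver<4> resolver;
        resolver.AddConv2D();
        resolver.AddFullyConnected();
        resolver.AddSoftmax();
        resolver.AddReshape();

        static tflite::MicroInterpreter interpreter(model, resolver,
                                                    tensor_arena, kTensorArenaSize);

        // With too small an arena, the planner logs "Arena size is too small for all
        // buffers. Needed X but only Y was available." and this call returns an error,
        // which the wrapper surfaces as "AllocateTensors() failed" / classifier -3.
        return interpreter.AllocateTensors() == kTfLiteOk;
    }

As far as I know, the exported Arduino library sets the arena size from the model metadata rather than from a hand-tuned constant, so the practical fix is usually to shrink the model or its inputs instead of enlarging the buffer by hand.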

  • Your MCU is out of memory.
    • Run the EON Tuner and select a model with lower memory requirements.
    • Run Performance Calibration to analyze performance scenarios.

Thanks @MMarcial for the response.

Unfortunately, the XIAO ESP32-S3 is not supported by the EON Tuner, so I can’t use that right now.

I will check this, thanks.

@shawn_edgeimpulse

  • To reduce the size of an image classification Impulse, one can reduce the Image Width and Image Height in the input block on the Create Impulse page.

  • What options do we have to reduce the size of an audio Impulse?

Hi @MMarcial,

For audio projects, you can try the following (see the rough buffer arithmetic after this list):

  • Reducing the bit depth (e.g. from 16-bit to 8-bit)
  • Reducing the sampling rate (8 kHz is probably the lowest you want to go for voice data)
  • Using int8 quantized models instead of float32
  • Using the EON Compiler (if available)
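
To make the first two points concrete, here is a rough back-of-the-envelope for the raw audio capture buffer only (a sketch, not Edge Impulse code; the DSP features and the TFLite arena come on top of this, so treat it as a lower bound rather than the full RAM picture):

    #include <cstdint>
    #include <cstdio>

    // Raw capture buffer = samples in the window * bytes per sample.
    static uint32_t raw_buffer_bytes(uint32_t sample_rate_hz,
                                     uint32_t bits_per_sample,
                                     float window_s) {
        return (uint32_t)(sample_rate_hz * window_s) * (bits_per_sample / 8u);
    }

    int main() {
        // 1 s window, matching "Sample length: 1000 ms." in the log above.
        std::printf("16 kHz, 16-bit: %u bytes\n", (unsigned)raw_buffer_bytes(16000, 16, 1.0f)); // 32000
        std::printf(" 8 kHz, 16-bit: %u bytes\n", (unsigned)raw_buffer_bytes(8000, 16, 1.0f));  // 16000
        std::printf(" 8 kHz,  8-bit: %u bytes\n", (unsigned)raw_buffer_bytes(8000, 8, 1.0f));   //  8000
        return 0;
    }

Halving the sample rate or the bit depth halves that raw buffer, but the arena error in the first post is about the model's own tensors, so switching to the int8 quantized model (or a smaller network) usually helps more there.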

@shawn_edgeimpulse Thanks, but unfortunately I was not able to solve it after reducing the bit depth and the sample rate. And EON is not supported on the XIAO ESP32-S3 right now.

As an alternative solution, you can use the EI Profiler to find an MCU that will run the model.

@MMarcial, can you please check the link? Thanks.

@salmanfarisvp

The link takes you to a Hugging Face tutorial. Do not worry about that. Scroll down to the Profile Your Model section.

You can also get to it here:

Hi @salmanfarisvp,

Also, what is your window size? If that’s too big, you will run out of memory quickly. I generally find that a window size of 1 sec is the most you will be able to use for MCU KWS projects.
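
If you want to double-check what window size and sample rate actually got baked into your deployment, the exported library's model_metadata.h exposes them as macros. A quick sketch (the include name is just a guess for your project export, so swap in whatever your library is called):

    // Prints the settings compiled into the exported Arduino library once at boot.
    #include <running_faucet_inferencing.h>   // placeholder: use your project's header name

    void setup() {
        Serial.begin(115200);
        while (!Serial) { delay(10); }

        ei_printf("Interval: %.4f ms\n", (float)EI_CLASSIFIER_INTERVAL_MS);
        ei_printf("Raw sample count (window x rate): %d\n", EI_CLASSIFIER_RAW_SAMPLE_COUNT);
        ei_printf("DSP input frame size: %d\n", EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE);
        ei_printf("Number of classes: %d\n", EI_CLASSIFIER_LABEL_COUNT);
    }

    void loop() {
        // nothing to do here; the values above only need to be printed once
    }

For a 1 s window at 16 kHz you should see a raw sample count of 16000, which lines up with the "Frame size: 16000" line in the first post.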

As @MMarcial mentioned, you can use the profiler to get an estimate of the RAM usage. You can use the Python SDK, but it’s not necessary. You can also view RAM/ROM estimations in Studio. Go to the Deployment page. Under Model Optimizations, click Change target and select the Espressif ESP-EYE (the closest board we have to the ESP32-S3).

Try toggling EON Compiler and viewing the difference between int8 and float32 versions. You should see an estimated total RAM and ROM usage.

What’s weird is that your ESP32 is reporting that only ~12 kB is available for the arena size. That doesn’t seem right, considering the ESP32-S3 has something like 512 kB of RAM. Paging @AIWintermuteAI – any thoughts on this one?

From @AIWintermuteAI: ESP32-S3 optimized kernel support is in the works. Something you might want to try: swap the Edge Impulse C++ kernels with the ESP-NN kernels for faster, more efficient computation on the ESP32-S3. This is a bit of a hack, but it might be worth a shot: TinyML Made Easy: Image Classification - Hackster.io. Note that it will not work with a library created with the EON Compiler enabled.

Yes, mine is 1 sec.

It’s 41.6 kB of RAM usage for classification.

Thanks, and yes, I tried the ESP-NN lib compiled by @AIWintermuteAI. I’ve been using a small Yes/No project on the XIAO ESP32-S3 with the NN lib swapped in, and it works fine, but here I’m trying to build a running faucet demo with the Edge Impulse-provided running faucet dataset.

Here is my public project link: running_faucet - Dashboard - Edge Impulse

Thanks for the help and comments.

I tried splitting the data into 1 s windows to experiment, and now I get a different error: "AllocateTensors() failedERR: Failed to run classifier (-3)".

Any suggestions? Thanks.

ESP32-S3 optimized kernel support is in the works.
Great to hear. We are currently working on a sound recognition solution on the ESP32-S3. Can you give a rough idea of the schedule?
We could also do beta testing if that would be helpful.