Failed to allocate TFLite arena (0 bytes) when running inference on desktop (Windows 11 and Ubuntu 22.04)

Hello,

I am trying to run inference on a desktop computer as explained here:

The NN I am trying to run is the one proposed in this other tutorial (MNIST digit recognition), generated with the Edge Impulse Python SDK:

The compilation is OK (apart from some warnings), but when I run the generated app, I get error -6:
Failed to allocate TFLite arena (0 bytes)

Indeed, the arena size for my NN is 0:

const ei_config_tflite_graph_t ei_config_tflite_graph_0 = {
    .implementation_version = 1,
    .model = trained_tflite,
    .model_size = trained_tflite_len,
    .arena_size = 0
};

What exactly is the arena size, and why do I get a zero one?
When I try the same thing with an NN generated from the Edge Impulse web dashboard instead, inference runs fine on my desktop.
I have tested this on both Windows 11 and Ubuntu 22.04.

Thanks in advance for your help :slight_smile:

I found a workaround by manually setting the arena size in model_variables.h, but is that the expected way to do it? I did not find this step in the tutorials.
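Concretely, the edit is just replacing the zero in the struct above with a value big enough to hold the model's tensors, along these lines (the 128 * 1024 below is only an example value for illustration, not the size Studio would compute; use whatever is large enough for your model):

const ei_config_tflite_graph_t ei_config_tflite_graph_0 = {
    .implementation_version = 1,
    .model = trained_tflite,
    .model_size = trained_tflite_len,
    // manually set: must be large enough to hold all of the model's tensors
    .arena_size = 128 * 1024
};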

Hi @adamsantamaria,

Thank you for letting us know and for posting a workaround! We will look into this issue.

Hi @adamsantamaria,

As a workaround, can you convert your model to the SavedModel format (model.save()) or to a TFLite model and upload it via the Studio (on the Dashboard page, click Upload your model)? From there, see if you can deploy the model via Studio. That should avoid the 0-byte arena size issue. We will work on fixing this in the Python SDK.
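For example, if you trained the model with Keras as in the tutorial, the conversion could look roughly like this (a minimal sketch assuming model is your trained tf.keras model; file names are placeholders):

import tensorflow as tf

# Option 1: SavedModel format (a directory you can zip and upload in Studio)
model.save("saved_model")

# Option 2: TFLite format (a single .tflite file you can upload in Studio)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("trained.tflite", "wb") as f:
    f.write(tflite_model)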

1 Like

@adamsantamaria With a recent change the arena size is now stored in two places (both in the model and in the struct), and only one of them was generated during deployment…

Here’s a quick workaround (applied to your project already):

  1. Profile the model by clicking the button under ‘On-device performance’ (or through the API — see the sketch below).
  2. Save the model.
  3. Done.

Then the arena size properly propagates to deployment.
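If you are deploying straight from the Python SDK, the same idea applies: run a profiling call before deploying. Something roughly like this (treat the helper names and parameters as an approximation and check them against the edgeimpulse package reference; the device string and file names are just examples):

import edgeimpulse as ei

ei.API_KEY = "ei_..."  # your project API key

# Profile first so the arena size is computed and stored with the model
profile = ei.model.profile(model="trained.tflite", device="cortex-m4f-80mhz")
print(profile.summary())

# Then deploy; the arena size should now propagate into model_variables.h
ei.model.deploy(
    model="trained.tflite",
    model_output_type=ei.model.output_type.Classification(),
    deploy_target="zip",
    output_directory=".",
)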

I’ll get a fix in today or tomorrow (and add a test for deploying w/o profiling).

2 Likes

@shawn_edgeimpulse and @janjongboom, thanks for your quick and accurate answers.
As soon as the fix is implemented I will test it :slight_smile:
Really nice tool, btw!

1 Like

Thanks @adamsantamaria, this will most likely go live with the patch release on Friday. Will update this thread.

This has now been released (and has proper integration tests); apologies for the inconvenience :slight_smile:

The problem is solved :slight_smile: Thanks!

1 Like