Error while creating standalone executable

Project name: test_byom


I have a pretrained model of size 258 KB. I uploaded the model and deployed it as a C++ library. I then unzipped the archive and copy-pasted into that folder the contents from the following link (example-standalone-inferencing):

I then copied the features and pasted them into "main.cpp", as per the instructions.
When I tried to run "$ sh", I encountered the following error:
./tflite-model/tflite_learn_3.h:26:42: error: integer literal is too large to be represented in any integer type

Any ideas how to resolve this?


Hi @shossain

Can you please share the code around the section where you pasted the features? As the error states, there is an integer literal in there that is too large. Try using another project; perhaps the one you have is misconfigured.



Hi @Eoin,

I deleted the old project and created a new one, but the issue remains. Here's a screenshot of how the error appears:

Note that when I do the same (BYOM and deploy as a C++ library) with a smaller model (90 KB), the error does not appear and the standalone executable runs as expected.

Below is the code in the main.cpp file where I pasted my features:

#include <stdio.h>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Callback function declaration
static int get_signal_data(size_t offset, size_t length, float *out_ptr);

// Raw features copied from test sample
static const float features[] = {
    0.1055, 0.1172, 0.1367, 0.2539, … (4096 features in total)
};

int main(int argc, char **argv) {

    signal_t signal;            // Wrapper for raw input buffer
    ei_impulse_result_t result; // Used to store inference output
    EI_IMPULSE_ERROR res;       // Return code from inference

    // Calculate the length of the buffer
    size_t buf_len = sizeof(features) / sizeof(features[0]);

    // Make sure that the length of the buffer matches expected input length
    if (buf_len != EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE) {
        ei_printf("ERROR: The size of the input buffer is not correct.\r\n");
        ei_printf("Expected %d items, but got %d\r\n",
                  EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, (int)buf_len);
        return 1;
    }

    // Assign callback function to fill buffer used for preprocessing/inference
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_signal_data;

    // Perform DSP pre-processing and inference
    res = run_classifier(&signal, &result, false);

    // Print return code and how long it took to perform inference
    ei_printf("run_classifier returned: %d\r\n", res);
    ei_printf("Timing: DSP %d ms, inference %d ms, anomaly %d ms\r\n",
              result.timing.dsp, result.timing.classification, result.timing.anomaly);

    // Print the prediction results (object detection)
#if EI_CLASSIFIER_OBJECT_DETECTION == 1
    ei_printf("Object detection bounding boxes:\r\n");
    for (uint32_t i = 0; i < result.bounding_boxes_count; i++) {
        ei_impulse_result_bounding_box_t bb = result.bounding_boxes[i];
        if (bb.value == 0) {
            continue;
        }
        ei_printf("  %s (%f) [ x: %u, y: %u, width: %u, height: %u ]\r\n",
                  bb.label, bb.value, bb.x, bb.y, bb.width, bb.height);
    }
#else
    // Print the prediction results (classification)
    for (uint16_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        ei_printf("  %s: ", ei_classifier_inferencing_categories[i]);
        ei_printf("%.5f\r\n", result.classification[i].value);
    }
#endif

    // Print anomaly result (if it exists)
#if EI_CLASSIFIER_HAS_ANOMALY == 1
    ei_printf("Anomaly prediction: %.3f\r\n", result.anomaly);
#endif

    return 0;
}

// Callback: fill a section of the out_ptr buffer when requested
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
    for (size_t i = 0; i < length; i++) {
        out_ptr[i] = (features + offset)[i];
    }

    return EIDSP_OK;
}


Hi @shossain

OK, I think the arena size is key here: you are trying to fit a model that is too large onto the target architecture. That is why you don't see this issue with a smaller model.

Please share the project ID so I can check what target you have selected, and also share the target config so others can offer suggestions too.



Hi @Eoin,

The project ID is 296526; I have made the project public now.
Regarding the target, I am currently just trying to deploy as a C++ library.

About the arena size, I tried something different, which resolved the error. I edited the line in tflite_learn_3.h as follows:

const size_t tflite_learn_3_arena_size = 22136092888451465000; // previous version
const size_t tflite_learn_3_arena_size = 22136;                // present version

The reason I edited the line is that the integer literal in the previous version was too large; importantly, the variable tflite_learn_3_arena_size does not appear to be used anywhere else in the code (I might be wrong).
I can now run the standalone app (by running ./build/app), but now I am getting a new error as follows:


The arena size value can be determined via the Python SDK for a given deployment platform, if you want a value that matches your architecture:

import edgeimpulse as ei  # Edge Impulse Python SDK

profile = ei.model.profile(model=model, device='cortex-m4f-80mhz')


This will produce an output such as the following:

Target results for float32:


{ 'device': 'cortex-m4f-80mhz',
  'tfliteFileSizeBytes': 3364,
  'isSupportedOnMcu': True,
  'memory': { 'tflite': {'ram': 2894, 'rom': 33560, 'arenaSize': 2694},
              'eon': {'ram': 1832, 'rom': 11152}},
  'timePerInferenceMs': 1}

Hopefully the correct value will resolve your issue completely.

I am also checking with the embedded team on why that value is set so high.




Thanks @Eoin .

I was testing with another, smaller pretrained TF-Lite model. I was able to execute it, but received the following error:

It says “Invoke failed (1), run_classifier returned: -3”

Any idea what might be causing this?

Hi @shossain

One of the tech team has also logged this and it is being investigated. I will update once they have reproduced it and found the source.