No, 96x96 should run with no issues on S3.
Can you try manually increasing the model arena size? It can be found in model_variables.h. Try adding a few hundred kB; if you have PSRAM, this much should not be an issue.
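For example, something along these lines (a sketch only; the exact macro name and file vary between SDK versions, e.g. some versions use EI_CLASSIFIER_TFLITE_LARGEST_ARENA_SIZE in model_metadata.h instead, and the value below is just an illustration):

// model_variables.h in the downloaded Edge Impulse library
#define EI_CLASSIFIER_TFLITE_ARENA_SIZE (500 * 1024) // add a few hundred kB on top of the generated value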
You are correct; increasing it by a few KB made it run on the S3 too. I wonder why this is happening, but now that it’s working, I don’t mind setting it a bit higher manually for the S3. Thanks for the help.
Now, one blunder that I’m noticing, which has been boggling my mind since yesterday: each time I run the example, even with different inputs, I get exactly the same result. I have a binary classifier, and almost every time I get the exact same impulse output, predicting one class with 98% confidence. In Studio, it looks like it is working just fine. For your reference, I’m using a binary classifier (greyscale input), quantized, built with the EON Compiler, and I’m using the static_buffer example.
Example result I’m getting:
Predictions (time: 675 ms.):
free:0.984375
occupied:0.015625
run_classifier returned: 0
Timing: DSP 5 ms, inference 675 ms, anomaly 0 ms
Somewhere on the forum I was reading about using the function run_classifier_image_quantized instead of run_classifier when using a quantized model. Could this be the issue? I tried running the former function but couldn’t make it work because of its first argument (the impulse), which I could not access from my main file.
Project id: 563549.
Any help/suggestions are much appreciated. Many thanks!
Garvit
static_buffer runs inference on static data, so you SHOULD be getting the same output. You need to run the camera example to get the image from the camera.
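For reference, this is roughly how static_buffer feeds the pasted features into run_classifier (a condensed sketch of the generated example; <your_project_inferencing.h> is a placeholder for your project’s actual inferencing library header):

#include <your_project_inferencing.h> // placeholder: your project's inferencing library
#include <string.h>                   // memcpy

// Paste the RAW features from Studio here. This buffer is fixed at compile time,
// which is why the prediction is identical on every run. (Values are illustrative.)
static const float features[] = { 0x303030, 0x2f2f2f /* ... raw features from Studio ... */ };

// Callback that hands chunks of the features buffer to the classifier.
static int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

void run_inference() {
    signal_t signal;
    signal.total_length = sizeof(features) / sizeof(features[0]);
    signal.get_data = &raw_feature_get_data;

    ei_impulse_result_t result = { 0 };
    // run_classifier dispatches to the quantized kernels internally, so you
    // normally don't need to call run_classifier_image_quantized yourself.
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false /* debug */);
    ei_printf("run_classifier returned: %d\n", res);
}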
re: increasing arena size made it work.
So it is related to the fact that the ESP32-S3 uses optimized kernels, which have different arena size requirements due to the scratch buffers used in those kernels. We benchmark with regular kernels or CMSIS-NN kernels, so we don’t know the exact arena size needed. We do leave some wiggle room, but apparently in this case it was not enough.
Thanks for responding. Sorry for the confusion, but I meant that even when changing the static input inside the static_buffer.ino example, it gives the same result. It gets the input from the ‘features’ variable that holds the processed, flattened feature array, right? So even if I paste another array there and flash the firmware again, the response is always identical.
It gets the input from the ‘features’ variable that holds the processed, flattened feature array, right?
Yes, this is correct.
So even if I paste another array there and flash the firmware again, the response is always identical.
The important thing here is whether it matches the classification results from Studio. If it matches, the device inference works correctly and the issue is with the model. Do the results match?
Yes, the predictions are correct in Studio; it’s only messed up when I’m running it on the ESP32. The results don’t match at all.
Did you find a solution for this anomaly? I have the same problem with sound classification.
Here in the Seeed wiki they have a solution, but it’s not working for me; it gives the same mixed results as before.
Hi @garvit185
Did you try the solution detailed above? Also, please use our docs for reference or for highlighting bugfixes if you can; we don’t have control over the Seeed ones and they may be out of sync.
Can you try manually increasing the model arena size? It can be found in model_variables.h. Try adding a few hundred kB; if you have PSRAM, this much should not be an issue.
Best
Eoin
@garvit185 ,
I tested your project on the T-Camera S3 and could not reproduce the results mismatch. After tweaking the arena size, the results from Studio match the results on the device.
@eduard please create a new forum thread with a detailed description (follow the template) and the steps to reproduce.
Hi @AIWintermuteAI & @Eoin
I found what I was doing wrong. In the input features, instead of copying the raw features, I was putting in the processed feature array. It happened because I got confused by the nomenclature: the input type was float and the raw features were hexadecimal, so I thought the input should be the processed feature array, since those were of type float.
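To illustrate the mix-up (values below are made up): for an image impulse, the raw features Studio gives you are packed pixel values written in hex, while the processed features tab shows plain floats, so the float-typed features array looks deceptively like it wants the latter:

// What static_buffer expects: RAW features (hex-encoded pixels, still valid float initializers)
static const float features[] = { 0x303030, 0x2f2f2f, 0x313131 /* ... */ };
// What the processed features (DSP output) look like in Studio: 0.1882, 0.1843, 0.1921, ...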
Once I tried to do it correctly, the IDE showed me errors when I added the raw features.
But thanks to you, from your screenshot, I could figure out what I was doing wrong. The other issue is solved as well after increasing the tensor arena size for the ESP32-S3 chip.
Thanks guys so much for your support and your patience.
If it is of any help, I would suggest that pasting the processed features instead of the raw features should throw an error.
Many thanks,
Garvit
Hi Garvit
Please, I need your help regarding this.
I’m currently using an ESP32-S3 DevKitC-1-N8R8 for my project; it’s a voice recognition system and I have followed the steps given. How can I get the complete code? I don’t write C or C++, so what am I supposed to change?
Hi @peppp
If you’re having memory issues similar to those I had with the ESP32-S3, there should be a file named ‘model_metadata.h’ in your Edge Impulse inference library. Update the
#define EI_CLASSIFIER_TFLITE_LARGEST_ARENA_SIZE
with a larger size and try running it. Hopefully it should work.
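For example (the value is only an illustration; pick a size that fits your board’s PSRAM):

#define EI_CLASSIFIER_TFLITE_LARGEST_ARENA_SIZE (300 * 1024) // replace the existing value with a larger one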
Cheers
Okay, I’ll try that, but do I have to change anything in my code, since I’m using an INMP44 microphone as well and I couldn’t find the ESP32-S3 DevKitC-1 among the supported boards? @garvit185
@AIWintermuteAI
I have the same error, but I’m new to this and I can’t seem to figure out what I need to update in the model_variables.h file. Could you say more about what exactly needs to change in this file? I don’t see a variable like ‘arena’.
(projectId: 632239, running on ESP32S3 with 8MB PSRAM, Arduino IDE)
Hi @georgie
You need to update this variable in the model_variables.h file of your Arduino library (the downloaded zip), i.e. add this line with the size for your given hardware:
#define EI_CLASSIFIER_TFLITE_ARENA_SIZE (200 * 1024) // Increase to 200KB
Hi, I have the same memory problem with my ESP32-S3-EYE board; it has 8 MB PSRAM. I use ESP-IDF. I enabled PSRAM and set
#define EI_CLASSIFIER_TFLITE_LARGEST_ARENA_SIZE (500 * 1024)
in model_metadata.h, and I still get this error:
ERR: Failed to allocate persistent buffer of size 192, does not fit in tensor arena and reached EI_MAX_OVERFLOW_BUFFER_COUNT
Guru Meditation Error: Core 0 panic'ed (StoreProhibited). Exception was unhandled.
Can someone help?
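If it helps: before tweaking arena sizes further, it’s worth confirming at runtime that PSRAM is actually visible to the heap, since the tensor arena is heap-allocated. A quick check using the ESP-IDF heap_caps API (esp_heap_caps.h):

#include <stdio.h>
#include "esp_heap_caps.h"

void check_psram(void) {
    // If this prints 0, PSRAM is not usable by malloc() and the large tensor
    // arena allocation will fall back to (and exhaust) internal RAM.
    size_t psram_free = heap_caps_get_free_size(MALLOC_CAP_SPIRAM);
    printf("Free PSRAM: %u bytes\n", (unsigned) psram_free);
}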
Add a delay at the end of the loop.
Me too! Have you solved this issue?
Try lowering the parameters of your processing blocks.
Try to get around 52920 raw samples; from there you can increase it little by little to see how much your board can take.