Tiny Machine Learning Kit and OV7675 Camera Problem

Hello Edge Impulse family,
This is my first project with Edge Impulse and the Arduino Tiny Machine Learning Kit.
Please forgive my excitement, as I am new to both Edge Impulse and the Tiny Machine Learning Kit.
I created an Edge Impulse project that recognizes fruits (project no: 149197) and reached the last step, deployment. In this step, I downloaded the firmware for the Arduino Nano 33 BLE Sense and flashed it to the board from my Windows machine using the Node.js-based CLI, after putting the board in bootloader mode.

Then, when I run the command edge-impulse-run-impulse --debug, I see the following on my screen:

Edge Impulse impulse runner v1.16.0
[SER] Connecting to COM4
[SER] Serial is connected, trying to read config...
[SER] Retrieved configuration
[SER] Device is running AT command version 1.7.0
Want to see a feed of the camera and live classification in your browser? Go to http://192.168.1.34:4915
[SER] Started inferencing, press CTRL+C to stop...

I get this output, but when I go to the specified address, I can't see any camera feed.
Also, when I use the edge-impulse-run-impulse command directly:

Edge Impulse impulse runner v1.16.0
[SER] Connecting to COM4
[SER] Serial is connected, trying to read config...
[SER] Retrieved configuration
[SER] Device is running AT command version 1.7.0
[SER] To get a live feed of the camera and live classification in your browser, run with --debug
[SER] Started inferencing, press CTRL+C to stop...
LSE
Inferencing settings:
        Image resolution: 96x96
        Frame size: 9216
        no. of classes: 3
Starting inferencing in 2 seconds...
taking photo...
ERR: failed to allocate tensor arena
Failed to allocate TFLite arena (error code 1)
Failed to run impulse (-6)

This is the error I get.
I can connect my board through the Edge Impulse website and upload images to Edge Impulse, but I cannot use the board on its own.
What should I do to detect objects with the OV7675 camera? Where am I going wrong?
Thank you in advance for your help.

Hi @seloak,

The Arduino Nano 33 BLE Sense has limited RAM (256 KB), and your model already takes most of it. You could use static allocation to work around this issue; see: GitHub - edgeimpulse/firmware-arduino-nano-33-ble-sense: Edge Impulse firmware for the Arduino Nano 33 BLE Sense development board
Otherwise, choosing a smaller FOMO model may help too.

Aurelien

1 Like

Thanks for the help @aurel.
What can I do to make the model use less memory?
- Reduce the number of samples?
- Reduce the image size (e.g. 36x36)?
- Change some parameters?
My current object detection parameters:
Number of training cycles: 25
Learning rate: 0.015
Validation set size: 20 %

I would be very grateful if you could explain what I need to change.
It would be great if there was an explanatory guide for Arduino BLE Sense.

Thank you for your help.

Hello @seloak,

Are you using FOMO (object detection) or MobileNet transfer learning (image classification)?
Reducing your image size will definitely improve the on-device performance.

You can also reduce the color depth (choose grayscale); this divides the number of features by 3.
You can also choose a model using MobileNetV1 instead of MobileNetV2, and choose a smaller alpha if you selected 0.35.

The number of training cycles, learning rate, and validation set size won't have any impact on memory usage.

Best,

Louis