We really like the on-device performance estimates shown when developing our NN classifier model in Edge Impulse. They give us a good idea of resource usage before we deploy to our custom boards.
One question about the RAM Usage & ROM Usage info that I would like to clarify. From the C++ library that I download in the Deployment step, it looks like the trained TFLite model is compiled in and sits in static memory. I'm wondering why the RAM usage is much lower than the ROM usage for the NN classifier that we created?
Your ROM usage is determined by:
- The compiled model itself.
- Hardware optimizations like CMSIS-NN that need to be linked in.
- Functions that get pulled in through math.h.
Your RAM usage is determined by:
- The scratch space we need to execute the neural network.
- Extra allocations made by some ops in the network (e.g. CNNs need additional intermediate state).
We determine these numbers by compiling your model into a base application and measuring the RAM/ROM increase. If you want to see the exact RAM/ROM split for the network itself, export the C++ library and look at trained_model_compiled.cpp, which lists all the memory allocations for the network. To see everything else that gets pulled in, it's best to look at the map file of a full application for the complete overview.
it looks like the trained tflite model will be compiled and sitting in the static memory.
It mostly sits in ROM, and by default we allocate the scratch space on the heap, but you can allocate it statically instead by defining EI_CLASSIFIER_ALLOCATION_STATIC.
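If I follow this correctly, the define has to be visible before the classifier header is included. A minimal sketch (assuming the standard SDK include path from an exported C++ library; verify the macro spelling against your own export):

```cpp
// Sketch: switch the classifier's scratch buffers from heap to static allocation.
// Assumption: EI_CLASSIFIER_ALLOCATION_STATIC is honored by the exported SDK
// when defined before including the classifier header.
#define EI_CLASSIFIER_ALLOCATION_STATIC 1
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"
```

With static allocation the scratch space shows up in your .bss section at link time (visible in the map file) instead of appearing as a runtime heap allocation.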
Where should I actually place the memory info function?
I'm getting the same values for all the models I've designed…
For example, if Edge Impulse estimates 16 KB of RAM usage, how can I verify that on my hardware? I'm using the Nano 33 BLE Sense.
Any help would be really appreciated…