Mismatch between estimated ROM usage and actual usage in exported C++ library

Hello there,

I have a few questions about the ROM estimate shown in the studio versus the actual size of the exported C++ library:

  • the studio shows its ROM estimate

  • we exported the library, and I see that one of the compiled model data arrays is ~120 kB in size

  • which I found is related to the layers’ neurons

I do not have access to the studio project; I am implementing the exported library in our device firmware. We are using an nRF52832, which should be roughly comparable to an STM32 Cortex-M4F MCU processor-wise. I am also told we are deploying with these settings:


  1. Should I ignore the ROM estimate from the studio?
  2. Are we doing something wrong that causes this mismatch?

Thanks in advance

Hi @TiagoNascJBay, it looks like you received an unquantized (float32) model, not the quantized (int8) model for which you show the performance metrics. That 120 kB array should then shrink to about 31 kB.

Can you ask your customer for the quantized C++ export?

Yes, I will ask for a quantized export and check the sizes. I was afraid that could be the case, since the array was float. I will report back after that.


Thanks @TiagoNascJBay. That is also an unusually deep network for so few features; there is probably a lot to gain there.

Reporting back.

I got a new exported library with the correct settings (EON + quantized), and now the sizes match the estimate almost perfectly. Thanks again for the help.