Significant Zephyr compile improvements?

I just downloaded a new model and noticed that my Zephyr code compiled down to a much, much smaller size. Not complaining, that’s fantastic, but is it expected or am I missing something? I noticed there are some Zephyr macros added but I’m not clear on what they’re doing.

Is it removing unused FFT tables now?


Hey @jefffhaynes

Which target are you working with? Nordic did release a new SDK rev, 2.4.0. Likely the New Pin API?

  • Updates to the Toolchain - Available via the awesome NCS manager and the VS Code plugin!
      • New Pin API
      • Oberon PSA core - Provides an optimized implementation of the PSA Crypto API, which reduces the required flash footprint compared to the Mbed TLS PSA Crypto implementation (see the prj.conf sketch below).
  • MCU Bootloader Updates - Noticeably faster boot times.
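
If you want to experiment with the Oberon PSA core path, the selection happens through Kconfig. Below is a minimal prj.conf sketch; the exact symbol names are an assumption on my part, so please verify them against the NCS 2.4.0 nrf_security documentation for your target before relying on them.

```
# prj.conf - hedged sketch; confirm these Kconfig symbols against the NCS 2.4.0 docs
CONFIG_NRF_SECURITY=y
# Prefer the Oberon-based PSA crypto driver over the Mbed TLS implementation
CONFIG_PSA_CRYPTO_DRIVER_OBERON=y
```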

Let me loop in @vojislav to comment on the noted compilation improvements.

Best

Eoin

Still targeting 2.3.0 for now (haven’t tackled the bootloader updates). Under 2.3.0 with my prior EI training/code gen, my single image flash came out to around 490k (so with MCU I was starting to run out of space!). I just ran a new training with some more data (no reason to think the network itself is any smaller) and now the output is around 330k, which is great. Unfortunately I’m traveling and not near hardware so I haven’t been able to really test the new build yet.

Hi @jefffhaynes

We did introduce some optimizations that reduce the memory footprint of our SDK.
Please let us know whether you see any difference once you test the model on hardware. Thanks!

Best,
Vojislav

Hm, well I finally got back to hardware and tried this new slimmer model, but so far it doesn’t seem to work. It’s possible I’m missing something, but I went from many generations of working models to a model that compiles down to a fraction of the size and doesn’t work, which is a bit suspicious. I’ll keep working on it and try to see what isn’t lining up. Are you able to explain why the model is now compiling to something 33% smaller than before?

Thanks,

Jeff

Are you able to take a look at this? I’ve tried everything I can think of to isolate the model from everything else, including passing in static data; while it runs great here on the site, the model no longer produces results on my custom nRF5340 board with Zephyr 2.3.0. My project is 64474. Thanks.
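
For context, my static-data test follows the usual Edge Impulse static buffer pattern, roughly like the sketch below (function name and the placeholder feature values are just illustrative; the real array is pasted from a test sample in Studio):

```cpp
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Replace with the raw feature array copied from a test sample in Studio.
// It must contain exactly EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE values.
static const float features[] = { 0 };

void test_static_inference(void)
{
    // Wrap the static buffer in a signal_t so the classifier can read from it
    ei::signal_t signal;
    ei::numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

    ei_impulse_result_t result = { 0 };
    // debug = true also prints the processed (DSP) features,
    // which is handy for comparing against what Studio shows
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, true);
    if (err != EI_IMPULSE_OK) {
        ei_printf("run_classifier failed (%d)\n", err);
        return;
    }

    // Print one score per label
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        ei_printf("%s: ", result.classification[i].label);
        ei_printf_float(result.classification[i].value);
        ei_printf("\n");
    }
}
```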

Hi @jefffhaynes

What I would suggest next is to export the plain C++ deployment option and add those files to your application.

Let’s see if that makes any difference.
Also, did you make any changes to your build process?

Can you also test your model with the simple edgeimpulse/example-standalone-inferencing-zephyr application from GitHub?
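
If it helps, wiring the exported C++ library into a Zephyr app mostly comes down to pointing CMake at the exported folders (edge-impulse-sdk/, model-parameters/, tflite-model/). The sketch below is simplified and not the exact CMakeLists from the standalone example repo (which uses its own helper macros); depending on your target you may also need extra compile definitions or to trim which SDK folders you compile.

```cmake
# CMakeLists.txt - simplified sketch for adding the exported
# Edge Impulse C++ library to a Zephyr application
cmake_minimum_required(VERSION 3.20.0)
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(ei_standalone_test)

# Assumes the exported folders sit next to this CMakeLists.txt
set(EI_DIR ${CMAKE_CURRENT_SOURCE_DIR})

target_include_directories(app PRIVATE
    ${EI_DIR}
    ${EI_DIR}/model-parameters
    ${EI_DIR}/tflite-model
)

# Compile the SDK and the generated model sources alongside your own code
file(GLOB_RECURSE EI_SOURCES
    ${EI_DIR}/edge-impulse-sdk/*.cpp
    ${EI_DIR}/edge-impulse-sdk/*.cc
    ${EI_DIR}/edge-impulse-sdk/*.c
    ${EI_DIR}/tflite-model/*.cpp
)

target_sources(app PRIVATE src/main.cpp ${EI_SOURCES})
```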

This is all raw C++; I haven’t tried any precompiled libraries/apps. When I use the same dataset as BYOM it works great, so it’s not the dataset itself. I did copy/paste features into my own code using the EON-compiled code and it doesn’t seem to work (or gives weird results). Maybe the difference is some preprocessing? Not sure. I’ll try to spend more time on it, but for now I’m planning on sticking with the BYOM route (it just takes up a lot more code space).

Thanks,

Jeff