Hi @baozhu1 EON Compiler is indeed based on that project, but has a bunch of extra things, like support for extra memory allocation methods, support for custom ops, support for CMSIS-NN arena size calculation, and actual support for the latest TensorFlow Lite models / kernels (the upstream project is stuck on TF 2.3).
It’s currently not open source, but I’d be happy to hear any suggestions.
I’ve cleaned up the edgeimpulse/tflite_micro_compiler repository; it is no longer relevant.
At least if we write it in C++, I know it will still compile in 10 years, unlike the Python dependency hell we encounter on a daily basis.
On a more serious note, C++ is probably the right choice here because it’s easy to get things like the required memory allocation from tensors: you can just instantiate them and then see how much they’ve allocated. That part will always be native, so it’s more straightforward to write everything in C++.
Hi @janjongboom Vikram here from Espressif Systems. I would like to add some changes to the tflite_compiler to make it more specific to the ESP32, ESP32-S3, etc., to squeeze out more performance.
As you mentioned, the GitHub tflite_compiler is stuck on an older version. It has an r2.4 branch as well, which is a bit later than TensorFlow 2.3 but still much older. Is it possible to upstream changes from EON to this repo, so that it is easier for a broader audience to maintain and contribute to?
If that’s not in the plans, is there a way to collaborate on this front? This would help me avoid the rework of updating the repo myself.