Escape from TensorFlow Lite Micro

@janjongboom
If we use a Custom Block and our MCU SDK also supports ONNX inference, is it possible to avoid using TensorFlow Lite and EON?

Hi @baozhu1,

At the moment the custom block needs to output a .tflite model, as it is also used in the rest of our pipeline (i.e. model testing).
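For other readers, a rough sketch of what "output a .tflite model" looks like for a block that trains a Keras model; the architecture and output path below are purely illustrative, not the actual block code:

```python
# Rough sketch: a custom learning block that trains a Keras model and
# writes the required .tflite artifact. Architecture and output path
# are illustrative only.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
# ... training would happen here ...

# Convert the trained model to TFLite and write the artifact the pipeline expects
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```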

Aurelien

I opened this topic to discuss the feasibility of this with you, because TFLite has too many limitations:

  • TFLite Micro is written in C++, which adds significant flash cost and development difficulty on embedded devices.
  • Not all models can be converted to TFLite format, and converting them correctly takes a lot of R&D effort. The problems mainly concern quantization and the conversion between NCHW and NHWC layouts (see the sketch after this list).
  • The user populations are very different; just compare the adoption curves of TensorFlow and PyTorch.
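
For context on the layout point above, a minimal sketch of the NCHW vs. NHWC mismatch using NumPy (shapes are illustrative):

```python
# Minimal illustration of the NCHW vs. NHWC layout mismatch.
# PyTorch/ONNX models usually use NCHW; TensorFlow/TFLite usually expect NHWC.
import numpy as np

x_nchw = np.random.rand(1, 3, 96, 96).astype(np.float32)  # (batch, channels, height, width)
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))               # (batch, height, width, channels)

print(x_nchw.shape)  # (1, 3, 96, 96)
print(x_nhwc.shape)  # (1, 96, 96, 3)
```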

When a user wants to use ONNX, could model testing also take a Docker-based approach that exposes your interface for easy interaction? Or is there some better way? I hope more people can join this discussion.

@AIWintermuteAI @aurel @janjongboom

Hi @baozhu1, I agree that TFLite Micro adds extra flash cost, which is why we developed our EON Compiler, which can reduce flash usage by 50%.
We have some targets that require ONNX input, but we convert the model to TFLite inside the Studio to comply with our pipeline. Using ONNX in our model testing would require substantial development and is not something we have planned at the moment.

Aurelien

Looking forward to some updates on ONNX deployment soon.

@baozhu1 FYI, we're adding NCHW support in the SDK / EON in the next few weeks.

With ^ in place, users shouldn't need to care about the inferencing engine. On some targets we'll use EON with TFLite Micro kernels; on some we'll use accelerators; on some we'll use NPUs; on some we'll use full TFLite; etc. You just pick the target and we'll make sure the model runs on that hardware.

In addition, we have some easy scripts to go from ONNX => TFLite. Perhaps we can make those available as part of the Studio workflow when people upload an ONNX file.
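
For reference, a rough sketch of one possible ONNX => TFLite route using the open-source onnx and onnx-tf packages (paths and options are illustrative, and this is not necessarily the script used in the Studio):

```python
# Rough sketch of one ONNX => TFLite path using the open-source
# onnx and onnx-tf packages; paths and options are illustrative.
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare

onnx_model = onnx.load("model.onnx")
tf_rep = prepare(onnx_model)            # ONNX graph -> TensorFlow representation
tf_rep.export_graph("saved_model")      # write a TensorFlow SavedModel

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size optimization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```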


:ok_hand:
We look forward to seeing the final results soon.