This request may belong more to the SIG Micro group, but here goes anyway.
Since BatchNorm is not supported by TF Lite Micro, using the SELU activation with the LeCun normal kernel initializer (lecun_normal) plus normalisation of the inputs appears to have a similar normalising effect, as suggested by Klambauer et al. in their self-normalizing neural networks paper.
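For context, here's a minimal Keras sketch of that approach (layer sizes, input shape and the output head are just placeholders, not an actual Edge Impulse model; inputs are assumed to already be standardised to zero mean / unit variance, which SELU's self-normalising behaviour relies on):

```python
import tensorflow as tf

# Sketch only: SELU + lecun_normal as a BatchNorm-free alternative.
# Shapes and layer widths below are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="selu",
                          kernel_initializer="lecun_normal"),
    tf.keras.layers.Dense(32, activation="selu",
                          kernel_initializer="lecun_normal"),
    # If dropout is needed, AlphaDropout is the SELU-compatible variant.
    tf.keras.layers.Dense(4, activation="softmax"),
])
```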
Implementing SELU in the TF Lite Micro branch used by Edge Impulse might provide a shortcut to BatchNorm-like performance, and may be easier than implementing BatchNorm itself (@dansitu, thoughts?).
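For reference, the maths a SELU kernel would have to compute is tiny, which is part of why it looks like a smaller job than BatchNorm. A NumPy sketch (reference maths only, not TFLM kernel code; constants are from Klambauer et al. 2017):

```python
import numpy as np

# SELU constants from Klambauer et al. (2017), "Self-Normalizing Neural Networks"
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805

def selu(x):
    """selu(x) = scale * (x if x > 0 else alpha * (exp(x) - 1))"""
    x = np.asarray(x, dtype=np.float32)
    return SELU_SCALE * np.where(x > 0, x, SELU_ALPHA * (np.exp(x) - 1.0))
```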
I’m a noob to TF custom ops, so none of the above is easy for me - hence a feature request ;-).
btw. NN training in expert mode acts in the most interesting non-deterministic ways when crunching through training sets while trying to use activation="selu", kernel_initializer="lecun_normal".