Model using LSTM not deployable

The LSTM (UNIDIRECTIONAL_SEQUENCE_LSTM) operation is now supported by TFLite Micro. I am able to train a model using the LSTM layer, but deployment fails with the EON Compiler. With EON disabled, I can build and deploy to hardware, but the model does not run.

Compiling EON model...
Could not set up compiler
Didn't find op for builtin opcode 'UNIDIRECTIONAL_SEQUENCE_LSTM' version '1'. This model is not supported by EON Compiler of TensorFlow Lite Micro, but is in full TFLite (e.g. on Linux).

Failed to get registration from op code UNIDIRECTIONAL_SEQUENCE_LSTM

Failed starting model allocation.

I guess the deployment bundle is not using the latest TFLite Micro SDK. Please update the SDK, or let me know how I can solve this locally.

Hi @naveen, we’re currently working on bringing the latest TFLM kernels into Edge Impulse (@AIWintermuteAI is working on it). However, this takes a while, as we have thousands of models and 50+ boards across 20 different architectures that all need to be verified as still working (we have gotten very good at finding bugs in TF, CMSIS-NN and other vendor libraries :slight_smile: ). This will also enable subgraph support in the EON Compiler, which will be useful for RNNs in general.

If you can email me a TFLite file w/ example inputs I’ll take a look and see if we can backport the op into the current SDK (jan@edgeimpulse.com).

Hi @janjongboom,

I have added you as a collaborator to a test project (Project ID: 10419) with data and a bare-minimum model with one LSTM layer. Please have a look.

Thanks,
Naveen

Hi @naveen, thanks! Interesting that there’s now a dedicated op for LSTMs rather than dealing w/ subgraphs, so that’s good. Given the complexity of this, I’ll wait for @AIWintermuteAI’s work to land.

Hey @janjongboom @AIWintermuteAI, any updates on this issue?

Yes, quite a lot actually :slight_smile:
The internal TFLite update PR is currently at the testing stage (we support a lot of hardware and need to make sure we’re not breaking anything). Fingers crossed, we can get it into production in the next few weeks.

Thanks for the update! Looking forward to using it in the next month or so.

Hello, @janjongboom @AIWintermuteAI! Just checking in on this issue as well, was wondering if you had any updates. Thanks!

I trained an LSTM model and, after seeing this thread, haven’t tried to deploy it; I assume I’d get the same error as described above. I am also seeing an error in the “Classifier” section:

“Didn’t find op for builtin opcode ‘UNIDIRECTIONAL_SEQUENCE_LSTM’ version ‘1’. This model is not supported by EON Compiler of TensorFlow Lite Micro, but is in full TFLite (e.g. on Linux). Failed to get registration from op code UNIDIRECTIONAL_SEQUENCE_LSTM. Failed starting model allocation.”

I was wondering if there are any alternatives for deployment while the pull request is pending. For example, would importing a model work?

This PR has actually landed, so we should have LSTM/RNN support now. I’ll ping @AIWintermuteAI.

Thanks for the update! I retrained the model and the error disappeared. It was replaced with “This model won’t run on MCUs. Calculated arena size is >6MB”, but that’s a separate issue. I’ll try to deploy now.

Hi, @curious_cat !
Were you able to successfully deploy the LSTM model?

Yes, thank you! It works :)