Retraining a pretrained model - transfer learning (PART II)

It's great that EI offers Transfer Learning (as a learning block) - even for voice and keyword spotting.

Unfortunately, the block has a fixed 1000 ms window size and an MFE audio block - that doesn't fit my scenario.

In a previous post you proposed ingesting the pre-trained model as an encoded string, which worked great for my previous project.

Now I am using a Syntiant NDP101 with dense layers, and hence a lot of weights - and I am getting the error "request entity too large" on training, as the string is really long.

What I did in detail:

I tried zlib compression (not enough compression) and reading from an S3 bucket (no access from the training container) to ingest the weights.
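For context, the compress-and-embed approach looks roughly like this (a random array stands in for the real model weights, so the exact sizes are illustrative only):

```python
import base64
import zlib

import numpy as np

# Hypothetical stand-in for the dense-layer weights; a real NDP101
# model has far more parameters, so the string grows accordingly.
weights = np.random.default_rng(0).standard_normal(50_000).astype(np.float32)
raw = weights.tobytes()

# Compress, then base64-encode so the bytes survive as a plain
# string pasted into the training script.
encoded = base64.b64encode(zlib.compress(raw, level=9)).decode("ascii")

# Round trip: decode and decompress back into the original array.
decoded = np.frombuffer(zlib.decompress(base64.b64decode(encoded)), dtype=np.float32)
assert np.array_equal(weights, decoded)

# Trained float32 weights have high entropy and compress poorly,
# which is why zlib alone wasn't enough to shrink the request.
print(f"raw: {len(raw)} bytes, encoded string: {len(encoded)} chars")
```

The base64 step also inflates the payload by about a third, so for large dense layers the encoded string quickly exceeds a web server's request size limit.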

Is there any other way to make this possible? Or maybe I am on the wrong track here?

Thx a lot!!!

Hi @Christian42, interesting idea! Unfortunately we don't have a good way around your limitations right now. The "request entity too large" error is thrown by the web server - I guess the script is too large. And indeed, we block network access in the Keras container for non-enterprise customers.

The good news is that we'll release a Syntiant transfer learning model based on the same work very soon (@dansitu's team).