Hello. I am working with accelerometer data and I would like to implement a custom convolutional model. First, when trying to use the “visual (simple) mode”, I ran into an issue: it wouldn’t allow me to use 1D convolutions because the minimum number of dimensions for such a convolution wasn’t met. Using “export as notebook”, I realized that the data had been flattened, despite the data in the “Raw Data” tab having three axes. And here is my problem: in “Keras (expert) mode”, the only cell that appears is the model cell, with no access to the data formatting. So, to use the non-flattened data, what would you recommend? Is there a way to modify the downloaded notebook and then upload it back into Edge Impulse?
Adding a reshape layer at the beginning of your NN architecture should help.
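For instance, a minimal sketch (the input length, axis count, kernel size, and class count below are placeholders, not your project’s actual values):

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Reshape, Conv1D, GlobalAveragePooling1D, Dense

# Hypothetical example: 125 timesteps x 3 axes, flattened by the
# pipeline into a single 375-value vector per sample.
input_length = 375

model = Sequential([
    # Un-flatten the input so Conv1D sees (timesteps, channels)
    Reshape((-1, 3), input_shape=(input_length,)),
    Conv1D(8, kernel_size=3, activation="relu"),
    GlobalAveragePooling1D(),
    Dense(4, activation="softmax"),  # 4 classes, purely as an example
])
model.summary()
```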
Let me know if it does not work, I’ll check out your project.
Thank you for your answer. That is what I did, and the model does train and has a somewhat decent output. What bugs me, though, is that with that method I don’t feed the data in with channels corresponding to the axes, as one normally would. The values have been “shuffled”, in that each channel contains data from all three axes. I wonder if that hurts the results. Is there a way to prevent the flattening of the raw data?
I actually don’t use 1D Conv much with accelerometer data; I use Dense layers instead.
Do you mind sharing your project so I can have a quick look?
Sure, the project ID is 108219.
I tried quite a few things, and both my own process and the EON Tuner seem to agree that in my case convolutional models work better.
By the way, the notebook I downloaded runs fine after my modifications and trains a functional model, but when I tried running the same modification in Keras expert mode, training stops after one epoch with a TypeError: cannot pickle ‘module’ object. I can’t really check what’s going on there, so I’m stumped.
Thank you for your help,
I am going to defer this to our ML team, who might see what is going wrong straight away.
Dan from the ML team here. Thanks for using Edge Impulse!
First up, the reason the custom model you’ve defined in Expert Mode is failing is because it contains a lambda layer (a layer that is implemented in Python code, not pure TensorFlow ops), so it can’t be serialized. The error message we’re giving here is pretty confusing so we’ll work on improving that! This error aside, lambda layers won’t run on-device, so we should find another solution here.
It looks like your lambda layer is intended to repeat the data. Did you consider using the Keras RepeatVector layer to accomplish the same thing?
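To illustrate the substitution, here is a minimal sketch (the repeat count of 5 is just an example value):

```python
import tensorflow as tf
from tensorflow.keras.layers import RepeatVector

# A lambda layer like this repeats the input, but can't be serialized:
#   Lambda(lambda x: tf.repeat(tf.expand_dims(x, 1), 5, axis=1))
# RepeatVector does the same thing as a pure Keras layer:
x = tf.constant([[1.0, 2.0, 3.0]])  # shape (1, 3): one sample, 3 features
y = RepeatVector(5)(x)              # shape (1, 5, 3): sample repeated 5 times
```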
Secondly, regarding the shape of the data: you are right that the samples arrive to the model in a flattened form, like this:
[1, 2, 3, 1, 2, 3, 1, 2, 3]
You can get your desired format by doing a reshape and a transpose:
>>> tf.transpose(tf.reshape([1, 2, 3, 1, 2, 3, 1, 2, 3], (3, -1)))
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[1, 1, 1],
       [2, 2, 2],
       [3, 3, 3]], dtype=int32)>
To do this in Keras you’d use the following layers:
model.add(Reshape((-1, 3), input_shape=(input_length, )))
model.add(Permute((2, 1)))
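To make the effect concrete, here is a toy end-to-end check of those two layers on a flattened 3-timestep, 3-axis sample (the sizes are illustrative only):

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Reshape, Permute

input_length = 9  # toy example: 3 timesteps x 3 axes, flattened

model = Sequential([
    Reshape((-1, 3), input_shape=(input_length,)),
    Permute((2, 1)),
])

flat = np.array([[1, 2, 3, 1, 2, 3, 1, 2, 3]], dtype=np.float32)
out = model.predict(flat)
# out[0] now groups each axis into its own row:
# [[1. 1. 1.]
#  [2. 2. 2.]
#  [3. 3. 3.]]
```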
I’ve added a new version to your NN Classifier block that shows a very simple 1D convolutional model in action:
However, this comes with a caveat. If you hit Train, you’ll see some error messages relating to the TRANSPOSE op. This op is currently not supported in the Edge Impulse SDK, so you may run into trouble if you’re intending to deploy to a microcontroller (though embedded Linux will be just fine).
We’ll start working on adding support for this op and will let you know when it is integrated. Hopefully it will be ready by the time you are ready to deploy—all of the other Edge Impulse features should still work.
If you’re using Edge Impulse for a commercial project, drop me an email at firstname.lastname@example.org and I can try to get your work unblocked in the short term.
First, thank you very much for your answer.
I replaced the lambda function with the Keras RepeatVector layer and tried out the transpose layer, although, as you said, I got an error: Exception: TensorFlow Lite op “TRANSPOSE” has no known MicroMutableOpResolver method.
The surprising thing is that when I take out the transpose layer, I still get a similar error: Exception: TensorFlow Lite op “TILE” has no known MicroMutableOpResolver method.
I wasn’t sure what “TILE” was, but I tried a few things and I’m pretty sure it comes from the RepeatVector layer. Can you confirm that it isn’t supported either? Or is it something else?
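For reference, one way to check which TFLite ops a layer produces is to convert a minimal model containing only that layer and list the ops in the result. This sketch relies on the interpreter’s private `_get_ops_details()` helper, so treat it as a debugging aid rather than a stable API:

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import RepeatVector

# Minimal model: nothing but a RepeatVector layer.
model = Sequential([RepeatVector(5, input_shape=(3,))])

# Convert to TFLite and list the ops the converter emitted.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
# _get_ops_details() is a private helper; fine for a quick check.
ops = {d["op_name"] for d in interpreter._get_ops_details()}
print(sorted(ops))  # if TILE shows up here, RepeatVector is the culprit
```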