@jens1185 Looking now.
edit: This is project 17595 right? It seems like you have fixed this already? Any idea how?
@janjongboom
Yes, sorry for not replying. I have no idea what I did; suddenly it worked. And yes, the project is 17595.
But the model is still not working. I created a "random" class as an additional output feature, so I now have 3 classes + anomaly: idle, random, and mygesture. I collected ~3.5 min of data for each class.
When I test it on the website, it works quite well. But on the Arduino my gesture is not detected; it always outputs "random". Even now, while I am writing and the Arduino is lying on my desk without movement, it should return "idle" as I trained it to.
I set the flag to true as @aurel suggested, but I don't really understand how to compare the outputs and solve the problem now.
Hi @jens1185 There's a bug in the SDK when multiple spectral analysis blocks with different axes feed into them. I've filed a bug with the embedded team, but a quick fix is to reduce this to a single spectral analysis block with the data from all sensors feeding into it. That gets me to 96.12% accuracy on your validation set, so don't expect a big drop in accuracy.
Hi @janjongboom; thanks for answering my question. I thought using one spectral analysis block per sensor was the way to do sensor fusion in Edge Impulse. I will try your suggestion and load it onto my Arduino. 96.12% would be great.
Yeah, that should work as well; we'll fix it in the SDK.
Hi @janjongboom, it works much better on my Arduino now! Thanks.
Another question, regarding the best coding strategy. I want to add a second gesture now, but it is always performed with the second sensor. So sensor 1 will be fixed to the right arm and sensor 2 (also 9-axis) will be fixed to the left arm. My plan is to control everything with one microcontroller. When sensor 1 detects gesture_1, LED 1 should be turned on, and when sensor 2 detects gesture_2, LED 2 should be turned on.
What is the best strategy now? In my eyes there are two possibilities:
1. Collect new data where I feed all 18 sensor values into one array, and train one model for both gestures with the classes: 1) idle, 2) random, 3) gesture_1, 4) gesture_2.
2. Keep the model and data I have now, and design a second model with the second sensor and gesture_2 (classes: idle, random, gesture_2). Then I would also deploy this new model and write an Arduino sketch that imports both models. Model 1: the current model, which takes all nine axes from sensor 1 as input and outputs random, idle, gesture_1. Model 2: the new model, which takes all nine axes from sensor 2 as input and outputs random, idle, gesture_2.
The sketch would then be something like:
void loop:
  if the output of model 1 is gesture_1 --> LED 1 on for 10 seconds
  if the output of model 2 is gesture_2 --> LED 2 on for 10 seconds
  if the outputs of model 1 and model 2 have been idle for 3 minutes --> system off
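The timing logic in that pseudocode (10-second LED hold, 3-minute idle timeout) can be sketched independently of the inference code. Below is a minimal plain-C++ sketch of just that decision logic; all names (`Controller`, `update`, the class label strings) are illustrative assumptions, not Edge Impulse SDK APIs. On an Arduino you would call `update()` once per inference pass with the top label from each model and `millis()` as the timestamp, and drive the LED pins from `led1_on()` / `led2_on()`.

```cpp
#include <cstdint>
#include <string>

// Hypothetical state machine for the two-model plan (option 2).
struct Controller {
    uint64_t led1_until_ms = 0;  // LED 1 stays on until this timestamp
    uint64_t led2_until_ms = 0;  // LED 2 stays on until this timestamp
    uint64_t idle_since_ms = 0;  // start of the current "both idle" period (0 = not idle)
    bool system_on = true;

    static constexpr uint64_t LED_ON_MS   = 10000;   // 10 seconds
    static constexpr uint64_t IDLE_OFF_MS = 180000;  // 3 minutes

    // Call with the top label of each model and the current time.
    void update(const std::string& model1, const std::string& model2, uint64_t now_ms) {
        if (model1 == "gesture_1") led1_until_ms = now_ms + LED_ON_MS;
        if (model2 == "gesture_2") led2_until_ms = now_ms + LED_ON_MS;

        if (model1 == "idle" && model2 == "idle") {
            if (idle_since_ms == 0) idle_since_ms = now_ms;  // idle period starts
            if (now_ms - idle_since_ms >= IDLE_OFF_MS) system_on = false;
        } else {
            idle_since_ms = 0;  // any non-idle output resets the timeout
        }
    }

    bool led1_on(uint64_t now_ms) const { return now_ms < led1_until_ms; }
    bool led2_on(uint64_t now_ms) const { return now_ms < led2_until_ms; }
};
```

Keeping this logic separate from the classifiers also means it works unchanged for option 1, where a single model outputs all four classes.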
What do you think: which is the better strategy?
Thanks again for the great support!