Detection of a specific gesture

@jens1185 Looking now.

edit: This is project 17595, right? It seems like you have already fixed this? Any idea how?

@janjongboom
Yes, sorry for not replying. I have no idea what I did; it suddenly started working. And yes, the project is 17595.

But the model is still not working. I created a "random" class as an additional output, so I now have 3 classes + anomaly: idle, random, and mygesture. I collected ~3.5 min of data for each class.
When I test it on the website, it works quite well, but on the Arduino my gesture is not detected. It always outputs "random". Even now, while I am writing and the Arduino is lying motionless on my desk, it outputs "random" instead of the "idle" I trained it to return.

I set the flag to true like @aurel suggested, but I don't really understand how to compare the outputs and track down the problem from there.
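For reference, in a standard Edge Impulse Arduino export the flag in question is presumably the `debug_nn` argument to `run_classifier()`: with it set to true, the library also prints the DSP features it computed over serial, so they can be compared against the raw features shown in Studio. A minimal sketch, assuming a standard export (the header name depends on the project, and `classify_buffer` is a made-up helper):

```cpp
#include <your_project_inferencing.h>  // actual name depends on the export

// When true, run_classifier() also prints the features generated from the
// raw signal, so on-device values can be compared with those in Studio.
static bool debug_nn = true;

// Made-up helper: classify one window of raw sensor data.
void classify_buffer(float *buffer, size_t buffer_len) {
    signal_t signal;
    numpy::signal_from_buffer(buffer, buffer_len, &signal);

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, debug_nn);
    if (err != EI_IMPULSE_OK) {
        ei_printf("run_classifier failed (%d)\n", err);
        return;
    }

    // Print the score for every class next to its label.
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("%s: %.5f\n", result.classification[ix].label,
                  result.classification[ix].value);
    }
}
```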

Hi @jens1185 There's a bug in the SDK when you have multiple spectral analysis blocks with different axes feeding into them. I've filed a bug with the embedded team, but a quick fix is to reduce this to a single spectral analysis block with the data from all sensors feeding into it. That gets me to 96.12% accuracy on your validation set, so don't expect a big drop in accuracy.

Hi @janjongboom, thanks for answering my question. I thought using one spectral analysis block per sensor was the way to do sensor fusion in Edge Impulse. I will try your suggestion and load it on my Arduino. 96.12% would be great.

Yeah, it should work as well, we'll fix it in the SDK :slight_smile:

Hi @janjongboom, it works much better now on my Arduino! Thanks.

Another question regarding the best coding strategy: I now want to add a second gesture, but it is always performed with a second sensor. Sensor 1 will be fixed to the right arm and sensor 2 (also 9-axis) will be fixed to the left arm. My plan is to control everything with one microcontroller: when sensor 1 detects gesture_1, LED 1 should be turned on, and when sensor 2 detects gesture_2, LED 2 should be turned on.
What is the best strategy here? As I see it, there are two possibilities:

  1. Collect new data where I feed all 18 sensor values into one array, and train a single model for both gestures with the classes: 1) idle, 2) random, 3) gesture_1, 4) gesture_2.

  2. Keep the model and the data I have now, and design a second model with the second sensor and gesture_2 (classes: idle, random, gesture_2). Then I would also deploy this new model and write an Arduino sketch that imports both models. Model 1: the current model, with all nine axes from sensor 1 as input and random, idle, gesture_1 as output. Model 2: the new model, with all nine axes from sensor 2 as input and random, idle, gesture_2 as output.
    The sketch would then be something like this (a fuller sketch follows below):
    void loop:
    if the output of model 1 is gesture_1 --> LED 1: ON for 10 seconds
    if the output of model 2 is gesture_2 --> LED 2: ON for 10 seconds
    if the output of model 1 and model 2 has been idle for 3 minutes --> system off
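Concretely, the loop could look something like this. The pin numbers and timings are assumptions, and `classify_model1()`, `classify_model2()` and `system_off()` are placeholder stubs for whatever the two deployed Edge Impulse libraries actually expose:

```cpp
#include <Arduino.h>

// Placeholder stubs: in the real sketch these would wrap run_classifier()
// from the two deployed Edge Impulse libraries and return the top label.
String classify_model1() { /* run model 1 on sensor 1 data */ return "idle"; }
String classify_model2() { /* run model 2 on sensor 2 data */ return "idle"; }
void system_off() { /* placeholder: power down / sleep */ }

const int LED1_PIN = 2;                                 // assumed wiring
const int LED2_PIN = 3;                                 // assumed wiring
const unsigned long LED_ON_MS   = 10UL * 1000UL;        // LED on for 10 s
const unsigned long IDLE_OFF_MS = 3UL * 60UL * 1000UL;  // off after 3 min idle

unsigned long led1_off_at = 0;
unsigned long led2_off_at = 0;
unsigned long last_activity = 0;

void setup() {
    pinMode(LED1_PIN, OUTPUT);
    pinMode(LED2_PIN, OUTPUT);
    last_activity = millis();
}

void loop() {
    unsigned long now = millis();
    String label1 = classify_model1();  // "idle", "random" or "gesture_1"
    String label2 = classify_model2();  // "idle", "random" or "gesture_2"

    if (label1 == "gesture_1") { digitalWrite(LED1_PIN, HIGH); led1_off_at = now + LED_ON_MS; }
    if (label2 == "gesture_2") { digitalWrite(LED2_PIN, HIGH); led2_off_at = now + LED_ON_MS; }

    // Turn each LED off again once its 10-second window has passed.
    if (led1_off_at != 0 && now >= led1_off_at) { digitalWrite(LED1_PIN, LOW); led1_off_at = 0; }
    if (led2_off_at != 0 && now >= led2_off_at) { digitalWrite(LED2_PIN, LOW); led2_off_at = 0; }

    // Remember the last time either model reported something other than idle,
    // and shut the system down after 3 minutes of idle on both models.
    if (label1 != "idle" || label2 != "idle") last_activity = now;
    if (now - last_activity >= IDLE_OFF_MS) system_off();
}
```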

Which of these do you think is the better strategy?

Thanks again for the great support!

Hi @jens1185 for two gestures on two separate sensors I'd train two models.
