Detection of a specific gesture

Hello everybody,
I am Jens and new to the community. I am working on a project and need some tips from you guys.
I want to build a model which detects one specific gesture (length of the gesture ~1 s). Later I want to upload the model to an Arduino. When the Arduino is started, the sensor (a 3-axis accelerometer) shall start collecting data. When the gesture is detected, an LED shall be switched on and switched off again after 5 seconds. In all other cases the LED shall stay off.

I am struggling with what might be the best strategy for collecting data.
I need a class "my gesture" and a good dataset of me performing this gesture, that is clear. My question is:
Shall I create a class like "random gesture" and collect data with random movements (incl. idle), or is it a better idea to create just the one class (my gesture) and work with the anomaly score?

Thanks for your help.

Hi @jens1185

Shall I create a class like "random gesture" and collect data with random movements (incl. idle)

That would be my recommendation. Use anomaly detection as a failsafe, but for expected classes a trained class will work best.
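On the device you would then combine the two outputs roughly like this. A minimal sketch; the function name and both threshold values are made up for illustration and would need tuning against your own test data:

```cpp
#include <cassert>

// Hypothetical thresholds -- tune these against your own test data.
const float GESTURE_THRESHOLD = 0.8f;   // minimum confidence for "my gesture"
const float ANOMALY_THRESHOLD = 0.3f;   // anomaly score above this => reject

// Returns true if the LED should be switched on: the classifier must be
// confident it saw the gesture AND the anomaly score must be low enough
// that the input looks like the training data (the "failsafe").
bool gesture_detected(float gesture_confidence, float anomaly_score) {
    return gesture_confidence >= GESTURE_THRESHOLD
        && anomaly_score < ANOMALY_THRESHOLD;
}
```

The point of the second condition is that a confident but anomalous prediction (e.g. a movement unlike anything in the dataset) still keeps the LED off.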

Thanks Jan for your reply. I will try this strategy.
Do you think this will be possible with just an accelerometer, or would you recommend a 9-axis sensor?
My gesture is a bicycle turn signal to the left/right with the hand. The sensor will sit in the sleeve.

@jens1185 If possible I'd capture data from all 9 axes (or at least acc + gyro); you can always decide to throw away some axes later if needed (by going to Create impulse and un-selecting them).


Thanks Jan for your reply!!


Hello,
As described in my first post, I want to detect a specific gesture.
I have now trained three output classes: idle, my gesture, random movements.

During training the model usually tries to optimize the total accuracy. In fact that is not quite what I need. E.g. if the model predicts idle but the true label is random, that is totally fine for me.

Looking at the confusion matrix: the cell predicted: my gesture / actual: my gesture should be high (> 90 %), and both cells predicted: my gesture / actual: idle and predicted: my gesture / actual: random should be near 0 %.

Is there a possibility to modify the training process?

Hi @jens1185, good question! In that case you can just combine those two classes (you can quickly copy your data to a new project, rename the labels idle => random, and retrain).

Thanks for the answer.
Sorry for asking, but could you help me with how to do that? I can't find an option to download my data or to copy it to a new project. Is this also available for free users?

Hi @jens1185, yeah, two options!

  1. Go to Versions, make a new version, and restore immediately into a new project.
  2. Go to Dashboard > Export, download your data, then in a new project go to Data acquisition and click the Upload button.

Both available for free users!


Thanks Jan. That worked quite well.

In the meantime I bought some new hardware to run the model offline: an Arduino Nano 33 BLE Sense. As you suggested, I used all 3 motion sensors (acc, gyro, mag). To record and upload the data I wrote my own sketch, which reads all three sensors into float variables:

float aX, aY, aZ, gX, gY, gZ, mX, mY, mZ;
if (IMU.accelerationAvailable() && IMU.gyroscopeAvailable() && IMU.magneticFieldAvailable()) {
  IMU.readAcceleration(aX, aY, aZ);
  IMU.readGyroscope(gX, gY, gZ);
  IMU.readMagneticField(mX, mY, mZ);
  Serial.print(aX, 3); Serial.print(',');
  Serial.print(aY, 3); Serial.print(',');
  Serial.print(aZ, 3); Serial.print(',');
  Serial.print(gX, 3); Serial.print(',');
  Serial.print(gY, 3); Serial.print(',');
  Serial.print(gZ, 3); Serial.print(',');
  Serial.print(mX, 3); Serial.print(',');
  Serial.print(mY, 3); Serial.print(',');
  Serial.println(mZ, 3);
}
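(One thing worth noting: the data forwarder derives the sampling frequency from how fast lines arrive on the serial port, so the loop should emit lines at a fixed rate rather than as fast as the sensors allow. A minimal, Arduino-independent sketch of that pacing, assuming 20 Hz; the names are illustrative, and in the real sketch now_ms would come from millis():)

```cpp
#include <cassert>
#include <cstdint>

// Assumed sampling rate (the data here is recorded at 20 Hz).
const uint32_t SAMPLE_RATE_HZ = 20;
const uint32_t INTERVAL_MS    = 1000 / SAMPLE_RATE_HZ;  // 50 ms between samples

// Returns true when the next sample is due, updating last_ms.
bool sample_due(uint32_t now_ms, uint32_t &last_ms) {
    if (now_ms - last_ms >= INTERVAL_MS) {
        last_ms = now_ms;
        return true;
    }
    return false;
}
```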

I used the data forwarder to upload the data. In the Create impulse section I separated the sensors by creating three spectral features blocks (one for each sensor). I used 1000 ms for the window size and 500 ms for the window increase. The data was recorded at 20 Hz, which was the default setting.
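(As a sanity check on those settings: a 1000 ms window at 20 Hz with 9 axes gives 180 raw values per window, and a 500 ms window increase means a new classification window every half second. The constant names below are illustrative, not Edge Impulse API:)

```cpp
#include <cassert>

// Impulse settings from this post.
const int SAMPLE_RATE_HZ = 20;    // sampling frequency
const int WINDOW_SIZE_MS = 1000;  // window size
const int WINDOW_INC_MS  = 500;   // window increase
const int AXES           = 9;     // acc + gyro + mag, 3 axes each

const int SAMPLES_PER_WINDOW = SAMPLE_RATE_HZ * WINDOW_SIZE_MS / 1000;  // 20
const int VALUES_PER_WINDOW  = SAMPLES_PER_WINDOW * AXES;               // 180
const int WINDOWS_PER_SECOND = 1000 / WINDOW_INC_MS;                    // 2
```

If I read the generated library correctly, the deployed model exposes this same number as EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, so the feature buffer in the sketch must hold exactly that many values.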
I trained my gesture and the model. Now I want to deploy the model back to my Arduino, so I downloaded the zip file and added it as a library in my Arduino IDE.
My problem is that the examples that come with the library only cover the accelerometer data, not all 3 sensors.

So I guess I have to write my own sketch. Since I want continuous sensor reading and classification, I think the best way is to take the "nano_ble33_sense_accelerometer_continuous" example as a starting point and modify it.

Could you please help me with how to do that? What do I have to change? Sorry, I am a real newbie to all this. I tried to find a similar topic in the forum, but without a good result. Thanks for your help!

Hi @jens1185, see here for a good starting point: https://docs.edgeimpulse.com/docs/cli-data-forwarder#classifying-data-arduino

It builds on top of the data forwarder sketch to do classification, so it should translate well to what you have already.

Hi @janjongboom, thanks again for the help.
I copied the sketch and modified it to match my capture sketch, like this:

float aX, aY, aZ, gX, gY, gZ, mX, mY, mZ;
if (millis() > last_interval_ms + INTERVAL_MS) {
  last_interval_ms = millis();
  // read sensor data in exactly the same way as in the Data Forwarder example
  IMU.readAcceleration(aX, aY, aZ);
  IMU.readGyroscope(gX, gY, gZ);
  IMU.readMagneticField(mX, mY, mZ);
  // fill the features buffer
  features[feature_ix++] = aX;
  features[feature_ix++] = aY;
  features[feature_ix++] = aZ;
  features[feature_ix++] = gX;
  features[feature_ix++] = gY;
  features[feature_ix++] = gZ;
  features[feature_ix++] = mX;
  features[feature_ix++] = mY;
  features[feature_ix++] = mZ;
}
// (the rest of the example - classifying once the buffer is full - is unchanged)

Is that correct?

But I got an error while compiling:
cc1plus.exe: out of memory allocating 65536 bytes

exit status 1

What can I do?

Yeah, that looks good. @aurel could you take a look?
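For reference, the part that follows the fill code in the example checks whether the frame is full and only then runs the classifier. An Arduino-independent sketch of that control flow, with run_classifier stubbed out as a counter; FRAME_SIZE stands in for the generated EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative stand-in for the generated model constant
// (1 s window * 20 Hz * 9 axes = 180 values).
const size_t FRAME_SIZE = 180;
static float features[FRAME_SIZE];
static size_t feature_ix = 0;

static int classifications_run = 0;  // stub counter instead of run_classifier()

// Push one value; when the frame is full, "classify" and start a new frame.
void push_feature(float value) {
    features[feature_ix++] = value;
    if (feature_ix == FRAME_SIZE) {
        // Here the real sketch wraps the buffer in a signal_t and calls
        // run_classifier(&signal, &result, debug), then resets the index.
        classifications_run++;
        feature_ix = 0;
    }
}
```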

Hi @jens1185,

I just tried your sketch and it compiles fine on my side.
Can you verify that you have selected the Arduino Nano 33 BLE as the target board?

Aurelien

Good morning,
@aurel @janjongboom Thanks for your help. I just tried it again without any changes to the code and it worked. I had closed everything else running in the background; I don't know whether that had an effect.

My problem is now:
When I tested the model online in the Edge Impulse web app with live classification, it seemed to work: the model detected my gesture when I performed it. Running on the device, however, the gesture is not detected. The gesture has a length of around 1 second. Could the problem be that the collection thread is blocked, as described here:
"Note: These examples collect a full frame of data, then classify this data. This might not be what you want (as classification blocks the collection thread). See Continuous audio sampling for an example on how to implement continuous classification."
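(For context, the continuous pattern that note refers to keeps filling one buffer while the previous, full one is handed off for classification, so sampling never stalls. A minimal, Arduino-independent sketch of the idea; all names are illustrative, and the frame size is shrunk for readability:)

```cpp
#include <cassert>
#include <cstddef>

const size_t FRAME = 4;                  // tiny frame for illustration
static float buf_a[FRAME], buf_b[FRAME];
static float *fill  = buf_a;             // buffer currently being filled
static float *ready = nullptr;           // full buffer awaiting classification
static size_t ix = 0;

// Called from the sampling path: never blocks on classification.
void on_sample(float v) {
    fill[ix++] = v;
    if (ix == FRAME) {
        ready = fill;                             // hand the full frame over
        fill  = (fill == buf_a) ? buf_b : buf_a;  // keep sampling into the other
        ix = 0;
    }
}
```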

Or do you think there is another problem? Thanks again for your great support!

Best regards

Jens

Hi @jens1185,

I don't think it's a blocking-thread issue. What I would suggest:

  • In your project, make sure you have enough "right" samples in your test set, to verify that you don't have an overfitting problem during training. If you have put all of your collected samples in the training set, go to Dashboard -> Rebalance dataset to get an 80 % training / 20 % testing split. Retrain the model and check that the Model testing accuracy is good.
  • If your Model testing results are good, you can verify the raw sensor values during inferencing and compare them to the data you have already collected. To do that, just set the debug flag to true:
// run classifier
EI_IMPULSE_ERROR res = run_classifier(&signal, &result, true);

Aurelien

@aurel Thanks for the suggestion. I will try that and report back.

Do you think it could work to have one dataset with "right" (the specific gesture I want to detect) and one with "idle" (which I want to use later for an auto-off function), and additionally use the anomaly score as a result?

@jens1185 My preference is to use anomaly detection only for real anomalies, and to get enough data on the negative classes into the same model.

Thanks Jan. I will try that, but this is really tough for the model.

@janjongboom I added some more data now, including a random class. I generated new features for all 3 DSP blocks; there are 1445 features in each block. When I start training (anomaly or model training) I get this error message. What is the problem?
(screenshot of the error message)