Non-continuous gesture recognition

I’m working on a project with my daughter, a light-up LED skirt. When she spins, jumps or wiggles, it should trigger different effects.

We have it kind of working, but not very reliably. I want to improve it, but I’m not sure what to do.

Repo is here: GitHub - mattvenn/laura_skirt
Project here: laura skirt v2 - Dashboard - Edge Impulse

I’ve been following the continuous tutorial, but is this still useful for non-continuous gestures like a jump or a spin?

I only have about 4 mins of data collected, which I know isn’t enough, but I want to get on the right track before recording a lot more as it’s a pain to do.

All my data are single shots. Record one jump. Record one spin etc.

Should I leave the windowing on? I would have thought the data would be better with me selecting the pertinent part of each sample rather than windowing through the whole thing, which includes waiting at the start and end.

I can see the features roughly in groups.

I can pan & zoom but I’m unable to rotate the graph as shown in the video. Do I have to hold a button? I’ve tried Chrome and Firefox.

Is there a way to remove bad data? I can see some features that are definitely wrong. I can click on them and see the sample, but there doesn’t seem to be a way to remove the feature. I would have thought the training would go better if I could remove bad features beforehand.

I’d love to collect data wirelessly; it’s a total pain holding the laptop over my daughter with a wire dangling and then having to untwist after every spin. Is it possible? It doesn’t seem so with the nano33ble that I have. Does the ESP32 support data capture via wifi? That would be cool.

Finally, I thought I got better features with gyros instead of accelerometers. So I used gyrx, gyry and gyrz. That breaks the firmware like this: Problems With Creating/Deploying a model For Arduino - #6 by MMarcial

Is the solution to use the fusion example and delete all the sensors I’m not using?

Anyway, enough questions! Thanks for any help and I’m having fun learning about all this stuff. Thanks for the great tool!



That’s fantastic, @mattvenn! It’s always great to hear about engaging and creative projects like the LED skirt for your daughter. It’s projects like these that really showcase the practical and fun side of machine learning and IoT.

I’m currently working on some documentation that could be very helpful for your continuous learning project.

Here’s an outline of what you will need to add to collect data from your ESP32:

ESP32 integration

  1. First, get your API key from the dashboard of your project.

  2. Include the necessary libraries

Add the HTTP client library to handle the HTTP requests to Edge Impulse:

#include <WiFi.h>
#include <HTTPClient.h>
  3. Edge Impulse API information

Define the constants needed to communicate with the Edge Impulse API:

const char* EDGE_IMPULSE_API_KEY = "your_api_key";
  4. Function to send data to your Edge Impulse project

Create a function that sends the collected sensor data to Edge Impulse:

void sendDataToEdgeImpulse(float* buffer, size_t buffer_size) {
    HTTPClient http;
    // begin() must be called with the target URL before adding headers
    http.begin("https://ingestion.edgeimpulse.com/api/training/data");
    http.addHeader("Content-Type", "application/json");
    http.addHeader("x-api-key", EDGE_IMPULSE_API_KEY);

    // Prepare the JSON payload
    String payload = "{\"values\": [";
    for (size_t i = 0; i < buffer_size; i++) {
        payload += String(buffer[i]);
        if (i < buffer_size - 1) {
            payload += ", ";
        }
    }
    payload += "]}";

    // POST the data
    int httpResponseCode = http.POST(payload);
    if (httpResponseCode > 0) {
        String response = http.getString();
        Serial.println("Edge Impulse Response: " + response);
    } else {
        // httpResponseCode is an int, so convert it before concatenating
        Serial.println("Error sending data: " + String(httpResponseCode));
    }
    http.end(); // free resources
}

  5. Modify the loop() function

In the loop() function, you need to decide when to call sendDataToEdgeImpulse(). This depends on how frequently you want to send data to Edge Impulse. For example, you might want to send data every few seconds or after detecting a specific gesture.

void loop() {
    // Existing code...

    // Example of sending data every 10 seconds
    static unsigned long lastSendTime = 0;
    if (millis() - lastSendTime > 10000) { // 10 seconds
        lastSendTime = millis();
        sendDataToEdgeImpulse(buffer, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE);
    }
}

OK, that will get data flowing to your project. Now, on to your questions: the continuous tutorial is still relevant because it teaches you how to handle streaming data.

For non-continuous gestures like a jump or a spin, you might want to experiment with different window sizes in your model training to better capture these discrete events.

8 minutes of data is our minimum; generally, the more data the better the results, especially if it includes a variety of spins, jumps, and other movements you want to recognize.

Keeping windowing on is generally a good idea. It allows the model to learn from shorter segments of data and might capture gestures that don’t align perfectly with your recording start and end times. However, manually selecting pertinent parts of each sample can also be effective, especially if the non-gesture data is not representative or is too noisy.

You can actually use both the accelerometer and gyroscope to get a more robust model via sensor fusion; please see this tutorial:

It’s possible to modify the example to include only the sensors you need. If you are facing specific errors, please share them for more targeted advice.
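One thing that trips people up when trimming the fusion example: the classifier expects every frame’s axes interleaved in the exact order listed under your impulse design in Studio. As a sketch (the helper name and the 6-axis acc + gyr layout are my assumptions, not code from the example), filling the features buffer looks like this:

```cpp
#include <cstddef>

// Axes per frame: accX, accY, accZ, gyrX, gyrY, gyrZ, in the same
// order as the impulse design in Studio. If your impulse uses fewer
// sensors, reduce this and drop the corresponding writes.
const size_t AXES_PER_FRAME = 6;

// Hypothetical helper: pack one 6-axis reading into the flat
// features buffer at the given frame index.
void packFrame(float* buffer, size_t frame,
               float ax, float ay, float az,
               float gx, float gy, float gz) {
    size_t base = frame * AXES_PER_FRAME;
    buffer[base + 0] = ax;
    buffer[base + 1] = ay;
    buffer[base + 2] = az;
    buffer[base + 3] = gx;
    buffer[base + 4] = gy;
    buffer[base + 5] = gz;
}
```

On-device you would fill the buffer this way for every sample in the window before handing it to the classifier. If the order here does not match the axis order in Studio, the model is fed scrambled data and the predictions become meaningless, even though everything compiles.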

For the rest of your queries, please see Impulse design - Edge Impulse Documentation and Increasing model performance - Edge Impulse Documentation for more info.

Hopefully you will start to see data appearing in your project; if not, let me know.



Hi Eoin, thanks for the reply!

RE the ESP32, I’m going to stick with the nano33 for now because I have to hit the deadline of Laura’s Christmas party! I mainly want to move to ESP32 for FastLED support (easier light effects).

You say ‘multiple gestures occur naturally’, but I thought to train the model we want to have the data labelled? I don’t know how to label data apart from when I upload it. So my data collection at the moment is to choose the gesture and then ask Laura to do that move.

What I’m not understanding is what happens to all that silence in the recordings. Won’t the model learn from that too?

Thanks for the video - I’m a fan of Shawn - looking forward to watching.



Ah, sorry, yes, that was not clear @mattvenn.

I mean that for collecting your data on-device, you can add buttons to adjust the label. Say, three buttons, one for each class, and then use that to capture the labelled data from the ESP32.

For the “silence”, you should have another label, e.g. IDLE, for when the device is not moving. You can go a step further and use multi-label to label the areas of silence.



Thanks Eoin.

I’ve watched the sensor fusion video, which is helpful, but one thing it leaves out is how to deploy to a device. I’ve run into this issue before: if I use anything except acc x y z, then the program fails to compile.

I get the same issue as this post: Problems With Creating/Deploying a model For Arduino

Is there a guide for how to add the extra sensors to the script?


I’m just guessing, but does this look right as a diff on the example continuous firmware, to add the gyr x y z?

I also tried putting the acc in 6,5,4 and the gyr in 3,2,1. While it compiles, the output is always uncertain or anomalous.