Recognizing Gestures on our Smart Insoles

Topic:
On-Device Gesture Recognition on our Smart Insoles

Project ID:
378589

Context/Use case:
Recognizing gestures on our smart insoles

Details:
Our smart insoles can run TFLite models on-device: we pipe raw sensor data (linear acceleration, gyroscope, magnetometer, and/or pressure data) into the model and send the inference results back via Bluetooth. Running inference on-device saves battery life compared with streaming the raw sensor data over Bluetooth and running the model on a smartphone or desktop.
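To make the data flow concrete, here is a rough sketch of that on-device loop using the standard TensorFlow Lite Micro C++ API (recent TFLM versions, which no longer require an error reporter). The model data (`g_model_data`), window size, op set, and the `ble_send_result()` helper are placeholders, not the insoles' actual firmware:

```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Placeholders for the insole firmware: the compiled-in model and a BLE helper.
extern const unsigned char g_model_data[];
void ble_send_result(const float *scores, int count);

constexpr int kWindowValues = 6 * 100;    // e.g. 6 IMU axes x 100 samples (assumed)
constexpr int kArenaSize    = 32 * 1024;  // tensor arena size, tuned per model
static uint8_t tensor_arena[kArenaSize];

static tflite::MicroInterpreter *interpreter = nullptr;

void gesture_model_init() {
    const tflite::Model *model = tflite::GetModel(g_model_data);

    // Register only the ops the model actually uses (assumed set here).
    static tflite::MicroMutableOpResolver<3> resolver;
    resolver.AddFullyConnected();
    resolver.AddSoftmax();
    resolver.AddRelu();

    static tflite::MicroInterpreter static_interpreter(
        model, resolver, tensor_arena, kArenaSize);
    interpreter = &static_interpreter;
    interpreter->AllocateTensors();
}

// Called once per sensor window: copy raw (pre-scaled) values in, run
// inference, and send only the class scores back over Bluetooth.
void classify_window(const float *window /* kWindowValues values */) {
    TfLiteTensor *input = interpreter->input(0);
    for (int i = 0; i < kWindowValues; i++) {
        input->data.f[i] = window[i];
    }

    interpreter->Invoke();

    TfLiteTensor *output = interpreter->output(0);
    ble_send_result(output->data.f,
                    output->dims->data[output->dims->size - 1]);
}
```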


Hi @zack.qattan

Welcome to the forum, and thanks for sharing your project. That is a fascinating activity-tracking SDK and product, and a great example of Sensor fusion | Documentation :star_struck: Have you already been in touch with our solutions engineers or sales team? They'd be happy to explore potential ways to support your project further, whether through technical guidance or discussing more advanced features that Edge Impulse can offer: Contact Sales

Best

Eoin

I haven’t reached out (the documentation is good enough for me to figure things out)

I guess the only issue is that I'd like to use spectral analysis for better models, but our firmware just pipes the raw sensor data into the TFLite model, and I think models created using spectral features require a different set of inputs (pre-computed spectral features, which would require more firmware configuration to accommodate).

I'm not sure if it's possible to make a TFLite model that takes raw inputs but still takes advantage of spectral analysis (and anomaly detection based on spectral features), but that would be awesome. If not, then maybe later on I can add configuration settings to our SDK that allow the developer to specify the input type (e.g. raw IMU data or particular pre-computed features).
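To illustrate the "more configuration in the firmware" point above, computing spectral features on-device might look roughly like the sketch below: a naive DFT magnitude spectrum plus RMS per axis, concatenated into a feature vector before it reaches the model. This is only a hedged sketch; the window length, feature set, and function names (`compute_magnitude_spectrum`, `build_feature_vector`) are made up for illustration and would have to match whatever the training pipeline actually produced.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

constexpr float kPi = 3.14159265358979f;

// Naive DFT magnitude spectrum of one sensor axis (illustrative only; real
// firmware would use an optimized FFT and mirror the training-side DSP).
std::vector<float> compute_magnitude_spectrum(const std::vector<float>& window) {
    const std::size_t n = window.size();
    std::vector<float> mags(n / 2 + 1);
    for (std::size_t k = 0; k < mags.size(); ++k) {
        float re = 0.0f, im = 0.0f;
        for (std::size_t t = 0; t < n; ++t) {
            const float angle = 2.0f * kPi * k * t / n;
            re += window[t] * std::cos(angle);
            im -= window[t] * std::sin(angle);
        }
        mags[k] = std::sqrt(re * re + im * im) / n;
    }
    return mags;
}

// Hypothetical feature vector: RMS plus the first few spectral bins of one axis.
std::vector<float> build_feature_vector(const std::vector<float>& axis_window) {
    float sum_sq = 0.0f;
    for (float v : axis_window) sum_sq += v * v;
    const float rms = std::sqrt(sum_sq / axis_window.size());

    std::vector<float> features{rms};
    const auto spectrum = compute_magnitude_spectrum(axis_window);
    for (std::size_t k = 1; k < spectrum.size() && k <= 8; ++k) {
        features.push_back(spectrum[k]);
    }
    return features;  // concatenated across axes and handed to the model
}
```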

Oh, and our firmware pre-scales the raw IMU data before piping it into the TFLite model so it's generally in the [-1, 1] range, so we also pre-scale it when adding it to Edge Impulse. I'm not sure if that affects model quality, because the values aren't in the proper IMU units.
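For what it's worth, a fixed rescaling like that generally shouldn't hurt model quality as long as exactly the same scaling is applied to the data uploaded to Edge Impulse and to the live data on-device. A minimal sketch of the idea, with assumed full-scale values rather than the insoles' actual sensor configuration:

```cpp
// Illustrative pre-scaling to roughly [-1, 1] by dividing by the sensor's
// configured full-scale range. The ranges below (±16 g, ±2000 dps) are
// example values only; what matters is that the identical scaling is used
// both when collecting training data and when running inference on-device.
constexpr float kAccelFullScaleG  = 16.0f;    // assumed accelerometer range
constexpr float kGyroFullScaleDps = 2000.0f;  // assumed gyroscope range

inline float scale_accel(float accel_g)  { return accel_g / kAccelFullScaleG; }
inline float scale_gyro(float gyro_dps)  { return gyro_dps / kGyroFullScaleDps; }
```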


That's an awesome use of our WebSocket, and I'm glad to hear the positive note on our docs! :smile:

Running multiple models is indeed possible; please refer to our Multi-Impulse docs. Of course, you will need to move to a deployment such as C++.

If you move to our C++ SDK, you can use the run_classifier function with raw sensor values. You can check out the docs here: Input to the run_classifier function.
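For reference, here is a minimal sketch of calling run_classifier with raw sensor values, following the standard static-buffer pattern from the Edge Impulse docs. The `features` buffer is a placeholder that the insole firmware would fill from its (pre-scaled) sensor window; the surrounding integration is assumed:

```cpp
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Raw sensor window, interleaved per the impulse's input axes
// (e.g. ax, ay, az, gx, gy, gz repeated per sample). Placeholder here;
// in firmware this would be filled from the insole's sensor buffer.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback that hands slices of the raw buffer to the SDK.
static int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int run_inference() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &raw_feature_get_data;

    ei_impulse_result_t result = { 0 };

    // The SDK runs the impulse's DSP block (e.g. spectral analysis) and the
    // learn block(s), so the firmware only has to supply raw values.
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false /* debug */);
    if (err != EI_IMPULSE_OK) {
        return -1;
    }

    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        ei_printf("%s: %.5f\n", result.classification[i].label,
                  result.classification[i].value);
    }
    return 0;
}
```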

We have a dedicated DSP team and solutions engineers with experience in product development who can assist if you need any further help. I noticed someone from our team already replied to your post on LinkedIn, so I'll leave it with them to discuss :sunglasses:

Best

Eoin