How to classify motion using a thermal sensor

Project ID: 290646

Hi,

I am trying to build a gesture recognition system using the AMG8834 thermal sensor, which is a 64-pixel (8x8) array.

Two of the main gestures should be circling your hand in front of the sensor clockwise and counter-clockwise. After recording the raw data I can visually identify the phase shift and tell the gestures apart, but I have not been able to train a model that can differentiate them.
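(For illustration, a minimal sketch of how that phase shift could be made explicit, assuming NumPy and a (T, 8, 8) stack of background-subtracted frames; the helper name is made up. It tracks the per-frame heat centroid, whose angular motion around its mean has opposite sign for clockwise vs. counter-clockwise circling.)

```python
import numpy as np

def rotation_direction(frames):
    """Sign of the hand's rotation from a (T, 8, 8) stack of
    background-subtracted thermal frames (hypothetical helper)."""
    xs, ys = np.meshgrid(np.arange(8), np.arange(8))
    heat = np.clip(frames, 0, None)                # keep only above-background heat
    total = heat.sum(axis=(1, 2)) + 1e-9           # per-frame total heat (T,)
    cx = (heat * xs).sum(axis=(1, 2)) / total      # x centroid per frame
    cy = (heat * ys).sum(axis=(1, 2)) / total      # y centroid per frame
    # Angular momentum about the mean centroid: positive for one rotation
    # direction, negative for the other.
    return np.sign(np.mean((cx - cx.mean()) * np.gradient(cy)
                           - (cy - cy.mean()) * np.gradient(cx)))
```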

I think part of the problem is that the “background” pixels have a lot of noise.
I have tried using the raw data block and the spectral analysis block with FFT, which both land me at around 80% accuracy right now, but with live classification it seems not to work at all.
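(For illustration, one way to suppress that background noise is a per-pixel baseline subtraction; a minimal NumPy sketch, assuming the first frames of each recording are hand-free, with the function name and `k` threshold chosen only for the example:)

```python
import numpy as np

def suppress_background(frames, calib_frames=15, k=2.0):
    """Zero out pixels that stay within k standard deviations of their
    per-pixel baseline. `frames` is a (T, 8, 8) array of AMG8834
    temperatures; assumes the first `calib_frames` frames are hand-free.
    """
    baseline = frames[:calib_frames].mean(axis=0)   # per-pixel mean (8, 8)
    noise = frames[:calib_frames].std(axis=0)       # per-pixel noise floor (8, 8)
    cleaned = frames - baseline
    cleaned[np.abs(cleaned) < k * noise] = 0.0      # drop in-noise fluctuations
    return cleaned
```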

How could I make this work?

Hello @rwahidi,

I have been working with similar sensors (8x8 TOF sensors, 64 axes) where we applied some gesture recognition.

The input here is an 8x8 grid (64 axes) sampled at 15 Hz.
Here is an example of the raw data:

We sampled a window of frames and applied a custom DSP block to preprocess the data so it looks like an image (or heatmap) while keeping some information about the changes over time.

We applied a Gaussian weighting along the time axis: the Gaussian weights are used to create the RGB values from the samples in the window, so the most recent distance measurement shows up more blue.
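(For illustration, a minimal NumPy sketch of that idea outside of Edge Impulse: three Gaussian windows along the time axis, centered on the oldest, middle, and newest frames, collapse a (T, 8, 8) window into one (8, 8, 3) RGB image so recent samples dominate the blue channel. The exact weighting used in the original block is not shown here, so treat the centers and sigma as assumptions.)

```python
import numpy as np

def gaussian_weights(length, center, sigma):
    """Normalized Gaussian weights over `length` time steps."""
    t = np.arange(length)
    w = np.exp(-0.5 * ((t - center) / sigma) ** 2)
    return w / w.sum()

def window_to_rgb(window, sigma=None):
    """Collapse a (T, 8, 8) window into an (8, 8, 3) uint8 image.

    Assumption: R weights the oldest frames, G the middle, B the newest,
    so the most recent measurements show up "more blue".
    """
    T = window.shape[0]
    sigma = sigma or T / 6.0
    centers = (0, (T - 1) / 2, T - 1)              # old -> R, mid -> G, new -> B
    channels = [np.tensordot(gaussian_weights(T, c, sigma), window, axes=(0, 0))
                for c in centers]                  # each (8, 8)
    img = np.stack(channels, axis=-1)              # (8, 8, 3)
    img = (img - img.min()) / (np.ptp(img) + 1e-9) # scale per window
    return (img * 255).astype(np.uint8)

# Example: two seconds of 8x8 frames at 15 Hz
rgb = window_to_rgb(np.random.rand(30, 8, 8))
print(rgb.shape)  # (8, 8, 3)
```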

Please note that this requires creating a custom DSP block and a matching C++ implementation in our inferencing SDK.

I believe in your case, instead of a distance, each pixel will be a color (or three values between 0 and 255 per pixel). That does not change the logic much: you will already have a heat map, but the Gaussian weighting applied along the window remains valid.
If this kind of custom implementation interests you, I'd be happy to put you in contact with someone on the sales/solutions team.

Best,

Louis

Hi! I’m working on the gesture recognition project using the VL53L5CX 8x8 15Hz TOF sensor, following the exact setup from the tutorial.
Unfortunately, my self-trained model isn't performing well; accuracy is inconsistent and unreliable. I suspect the issue lies with my data collection (either insufficient samples or inconsistent gesture execution). Would anyone be kind enough to help me out? It would be greatly appreciated if you could share model files, datasets, or Edge Impulse project links.

Just curious, how many samples have you collected? I went looking for that project, but it looks like the project was deleted at some point. :frowning:

I might remove that Tutorial just so folks don’t stumble upon it.

What is the Project ID for this? (It is a number, found in the URL).