I am trying to build a gesture recognition system using the AMG8834 thermal sensor, which is a 64-pixel (8x8) array.
Two of the main gestures should be circling your hand in front of the sensor clockwise and counter-clockwise. After recording the raw data I can visually identify the phase shift and tell the gestures apart, but I have not been able to train a model that can differentiate them.
I think part of the problem is that the “background” pixels have a lot of noise.
I have tried both the raw data block and the spectral analysis block with FFT; each lands me at around 80% accuracy right now, but with live classification it does not seem to work at all.
We sampled over a fixed window and applied a custom DSP block to preprocess the data so it looks like an image (or heatmap) while keeping some information about the changes over time:
We applied a Gaussian along the time axis, using the Gaussian weights to build the RGB values from the samples in the window. The most recent distance measurement contributes more to the blue channel.
Please note that this requires creating a custom DSP block and its C++ implementation in our inferencing SDK.
I believe that in your case, instead of a distance, each pixel will be a color (3 values between 0 and 255). That does not change the logic much: you already have a heatmap per frame, and the Gaussian weighting applied along the window remains valid.
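To make the idea concrete, here is a minimal sketch of that weighting for your sensor, assuming a window of T raw 8x8 thermal frames. The helper name `frames_to_rgb`, the channel centres, and the sigma default are my own choices for illustration, not Edge Impulse's actual implementation:

```python
import numpy as np

def frames_to_rgb(window, sigma=None):
    """Collapse a (T, 8, 8) window of thermal frames into one (8, 8, 3) image.

    Three Gaussian bumps along the time axis weight the frames into the
    R, G and B channels: older frames lean red, mid-window frames green,
    and the newest frames blue (matching "latest sample is more blue").
    """
    window = np.asarray(window, dtype=np.float32)
    T = window.shape[0]
    t = np.arange(T)
    sigma = sigma if sigma is not None else T / 6.0  # arbitrary default width
    centres = [0.0, (T - 1) / 2.0, T - 1.0]          # R, G, B bump positions
    rgb = np.empty(window.shape[1:] + (3,), dtype=np.float32)
    for c, mu in enumerate(centres):
        w = np.exp(-0.5 * ((t - mu) / sigma) ** 2)
        w /= w.sum()
        # weighted sum over the time axis -> one 8x8 plane per channel
        rgb[..., c] = np.tensordot(w, window, axes=(0, 0))
    # normalise the whole image to 0..255 for the image-input model
    lo, hi = rgb.min(), rgb.max()
    rgb = (rgb - lo) / (hi - lo + 1e-9) * 255.0
    return rgb.astype(np.uint8)

# Motion that happens late in the window shows up mostly in the blue channel:
win = np.zeros((10, 8, 8), dtype=np.float32)
win[-3:, 2, 2] = 30.0  # a pixel gets hot only in the newest frames
img = frames_to_rgb(win)
```

With this encoding, a clockwise and a counter-clockwise circle produce mirror-image colour gradients across the 8x8 plane, which is exactly the phase information you can see by eye in the raw data.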
If this kind of custom implementation interests you, I'd be happy to put you in contact with someone on the sales/solutions team.