Right now, the Edge Impulse studio only supports anomaly detection for simple, one-dimensional data (e.g. accelerometer readings). Detecting anomalies in audio and image data is something we hope to support in the future. Do you have sounds of the unloaded machine in your dataset? If so, adding an “unloaded” class may be the best approach right now.
I took a peek at your project. I also recommend setting the noise floor of the MFE block to something like -100 instead of -52, as your data is fairly quiet; the noise floor setting removes any sounds below that dB level.
For classifier post-processing, I recommend checking out the last section of my course (https://www.coursera.org/learn/introduction-to-embedded-machine-learning) where I talk about a few ways to do such post-processing. It’s free to sign up, and you can just skip to the videos you need.
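As one concrete illustration of that kind of post-processing (not taken from the course, just a common approach): averaging the classifier’s per-window probabilities over the last few inferences suppresses one-off misclassifications. The class count, window size, and function name below are all placeholders for the sketch, not anything from your project.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <deque>

// Placeholder values -- set these to match your own model.
constexpr int kNumClasses = 3;   // number of output classes
constexpr size_t kWindow = 4;    // number of recent inferences to average

// Rolling history of the last kWindow probability vectors.
std::deque<std::array<float, kNumClasses>> history;

// Feed one inference result (per-class probabilities); returns the
// index of the winning class after averaging, or -1 until enough
// windows have accumulated.
int smooth_and_classify(const std::array<float, kNumClasses> &probs) {
    history.push_back(probs);
    if (history.size() > kWindow) history.pop_front();
    if (history.size() < kWindow) return -1;

    // Average each class's probability across the stored windows.
    std::array<float, kNumClasses> avg{};
    for (const auto &p : history)
        for (int i = 0; i < kNumClasses; i++) avg[i] += p[i] / kWindow;

    // Pick the class with the highest averaged probability.
    return std::max_element(avg.begin(), avg.end()) - avg.begin();
}
```

You would call smooth_and_classify() once per inference window and only act on the result when it is non-negative; the videos cover a few other schemes (e.g. debouncing) as well.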
To find the features (e.g. spectrograms), I recommend downloading the C++ library from your project (after training a model). In there, you can find the run_classifier() function in the edge-impulse-sdk/classifier/ei_run_classifier.h file. The features are placed in a features_matrix buffer. It’s fairly buried in the library’s source code, but it is possible to access that data.
run_classifier() is called if you’re not doing a continuous sliding window across recorded data for inference. If you are, then run_classifier_continuous() is likely called instead; that function stores the features in static_features_matrix instead.
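To make the idea concrete, here is a minimal, self-contained sketch of copying the feature matrix out once you’ve located it in the library. The feature_matrix_t struct and copy_features() helper are stand-ins I made up for this sketch; the real matrix type and its field names live in the edge-impulse-sdk headers, so check the actual definitions there before relying on any of these names.

```cpp
#include <cstddef>
#include <vector>

// Stand-in for the SDK's internal matrix type; the real one is
// defined in the edge-impulse-sdk, and its field names may differ.
struct feature_matrix_t {
    size_t rows;    // e.g. spectrogram frames
    size_t cols;    // e.g. frequency bins per frame
    float *buffer;  // rows * cols feature values, row-major
};

// Hypothetical helper you could drop into ei_run_classifier.h right
// after the features are computed: copy the buffer out so your
// application code can inspect the spectrogram.
std::vector<float> copy_features(const feature_matrix_t &m) {
    return std::vector<float>(m.buffer, m.buffer + m.rows * m.cols);
}
```

The key point is just that the buffer is a flat float array of rows × cols values, so a single copy (or even a printf loop at that spot in the source) is enough to get the spectrogram out for debugging.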
Hope that helps!