The above shows an example of a footstep audio clip. I would like to run the MFCC process excluding the lower frequencies. For example, with the audio sample above, since the noise occurs up to around 300 Hz, I'd want to exclude audio at 300 Hz and below.
Ideally, the model would then be trained on the results of MFCC from those frequencies, and inference would only run on audio in those frequencies.
Is it possible with edge impulse to specify which audio frequencies to include/exclude?
Hi @ovedan, good idea actually. We already have these options internally but don't expose them at the moment (see here in the inferencing SDK for example). Let me see if we can add them to the UI without too much work.
edit: It's actually not entirely clear whether this would solve your problem, or whether you'd rather have a bandpass filter on the raw data. We're releasing custom DSP blocks to the world sometime this month, which would let you plug arbitrary filters in beforehand as well.
Yes, a bandpass filter would be great, and it's awesome that you already have something like that working! It would also be great if the MFCC page had some documentation explaining what each parameter does.
Are you saying that it already filters out frequencies below 300 Hz, or is that something you'll do in the next release?
Also, regarding using the filters in the inference SDK: does this mean filters are only applied at inference time? What happens then if the model is trained on the entire spectrum?
Are you saying that it already filters out frequencies below 300 Hz, or is that something you'll do in the next release?
The current behavior is that the band edges for the Mel filters are 300 Hz to samplerate / 2 (so 8000 Hz at a 16 kHz sample rate), so my feeling is that noise under 300 Hz won't show up. This is the default behavior in both the studio and the inferencing SDK. In the next release (update: released now) we make these edges configurable in the UI.
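To make the band-edge behavior concrete, here's a small sketch (not Edge Impulse code; just the standard HTK mel-scale formula) of how the triangular Mel filter edge frequencies would be derived from a configurable low edge and high edge. The function names are my own for illustration:

```python
import numpy as np

def hz_to_mel(hz):
    # HTK mel formula: mel = 2595 * log10(1 + f / 700)
    return 2595.0 * np.log10(1.0 + np.asarray(hz, dtype=float) / 700.0)

def mel_to_hz(mel):
    # Inverse of the HTK mel formula.
    return 700.0 * (10.0 ** (np.asarray(mel, dtype=float) / 2595.0) - 1.0)

def mel_band_edges(n_filters, f_min, f_max):
    # n_filters triangular filters need n_filters + 2 edge points,
    # spaced evenly on the mel scale between f_min and f_max.
    mels = np.linspace(hz_to_mel(f_min), hz_to_mel(f_max), n_filters + 2)
    return mel_to_hz(mels)

# With the defaults described above (300 Hz to samplerate / 2 at 16 kHz),
# the first filter edge sits at 300 Hz, so energy below that contributes
# almost nothing to the MFCC output.
edges = mel_band_edges(40, 300.0, 8000.0)
```

Raising `f_min` is how the configurable band edge excludes low-frequency noise from the features, for both training and inference.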
Also, regarding using the filters in the inference SDK: does this mean filters are only applied at inference time? What happens then if the model is trained on the entire spectrum?
Unfortunate word choice on my end! The inferencing SDK and the studio behave exactly the same, so every window that we process in the studio has the same filter applied.
It's not really a bandpass filter there, but rather the start and end frequencies of the MFCC buckets, and I think these need more of the signal to work with than 500 Hz. We're releasing custom blocks sometime this week, which would let you plug in a proper bandpass filter quickly.
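For reference, a "proper" bandpass on the raw audio before feature extraction could look something like the sketch below. This is not Edge Impulse code; it's a generic SciPy Butterworth filter, and the function name and cutoff choices are mine (the upper cutoff must stay below the Nyquist frequency, sr / 2):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x, sr, low_hz=300.0, high_hz=7000.0, order=4):
    # 4th-order Butterworth band-pass, applied forward and backward
    # (sosfiltfilt) for zero phase distortion. Frequencies below
    # low_hz and above high_hz are strongly attenuated.
    sos = butter(order, (low_hz, high_hz), btype="bandpass",
                 fs=sr, output="sos")
    return sosfiltfilt(sos, x)
```

Run as a custom DSP step on each raw window before MFCC, this removes the sub-300 Hz noise from both the training data and the inference input, keeping the two pipelines consistent.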