I am working on event detection in time series, where the specific event I am looking for occurs for approximately 2 seconds per hour of data. I have recorded some initial data offline and sliced these events from the time series with the correct labels. I also randomly picked time slices of the same length that do not contain the event. I was able to train a model/classifier and got some interesting results. To deploy the model I would like to know how to handle the live data (buffers) on an MCU. I have created a CMSIS package. Do I need to fill 2-second buffers and run inference on each one? I was thinking of shifting through these buffers with some overlap, since I don't know when the event will pop up. Does this overlap have something to do with the window increase in the impulse?
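For example, something like this is what I had in mind. Just a sketch, assuming the plain `run_classifier()` path from the C++ SDK; `read_samples()` is a placeholder for however I'd pull data off the ADC/DMA, and the 75% overlap is an arbitrary choice:

```cpp
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Full model window (my 2 s of samples) and a hop of 1/4 window,
// i.e. 75% overlap between consecutive inferences.
#define WINDOW_SAMPLES EI_CLASSIFIER_RAW_SAMPLE_COUNT
#define HOP_SAMPLES    (WINDOW_SAMPLES / 4)

static float window_buf[WINDOW_SAMPLES];

// Placeholder: blocks until `n` fresh samples have been read from the ADC/DMA.
extern void read_samples(float *dst, size_t n);

static int get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, window_buf + offset, length * sizeof(float));
    return 0;
}

void detect_forever(void) {
    read_samples(window_buf, WINDOW_SAMPLES);  // prime the first full window

    while (true) {
        signal_t signal;
        signal.total_length = WINDOW_SAMPLES;
        signal.get_data = &get_data;

        ei_impulse_result_t result;
        if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
            // result.classification[ix].value: check the "event" class here.
        }

        // Slide: keep the newest (WINDOW - HOP) samples, append HOP new ones.
        memmove(window_buf, window_buf + HOP_SAMPLES,
                (WINDOW_SAMPLES - HOP_SAMPLES) * sizeof(float));
        read_samples(window_buf + WINDOW_SAMPLES - HOP_SAMPLES, HOP_SAMPLES);
    }
}
```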
Hope to get some advice. Thanks!
Thank you for your reply. It is interesting to read that the continuous classifier saves some RAM; how exactly does this work? The features have to be calculated for each window anyway, right?
Just curious
Apologies for the delay. If you are using the Edge Impulse C++ library for “continuous classification,” the MFE or MFCC spectrogram is treated as a queue: the features for the oldest slice (e.g. 0.25 sec for 4 classifications per second) are dropped, and the features for the newest slice (the latest 0.25 sec of recording) are appended. Classification is then performed on the full MFE/MFCC spectrogram window. That way, you’re not recomputing the MFE/MFCC features for the entire window on every inference, and it saves RAM because you don’t have to store the entire raw audio window, only the latest slice plus the feature queue.
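A minimal sketch of that loop, based on the public Edge Impulse C++ SDK (exact signatures can differ between SDK versions, and `read_slice()` is a placeholder for your audio driver):

```cpp
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// One slice of raw audio:
// EI_CLASSIFIER_SLICE_SIZE =
//   EI_CLASSIFIER_RAW_SAMPLE_COUNT / EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW.
static float slice_buf[EI_CLASSIFIER_SLICE_SIZE];

// Placeholder: blocks until one slice of fresh samples is available.
extern void read_slice(float *dst, size_t n);

static int get_slice(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, slice_buf + offset, length * sizeof(float));
    return 0;
}

void classify_forever(void) {
    run_classifier_init();  // reset the internal feature (spectrogram) queue

    signal_t signal;
    signal.total_length = EI_CLASSIFIER_SLICE_SIZE;  // one slice, not a window
    signal.get_data = &get_slice;

    while (true) {
        read_slice(slice_buf, EI_CLASSIFIER_SLICE_SIZE);

        ei_impulse_result_t result;
        // Features are computed for this slice only, pushed onto the queue,
        // and the oldest slice's features are dropped before inference runs
        // over the full spectrogram window.
        if (run_classifier_continuous(&signal, &result, false) != EI_IMPULSE_OK) {
            // handle error
        }
    }
}
```

So to answer your question: the DSP features are only computed for the new slice; the rest of the window's features are reused from the queue.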