Understanding how to implement ei_classifier_smooth_t

Hello everyone, I would like to know how I can use ei_classifier_smooth.h in my KWS code (nano ble33 sense microphone continuous).

I have seen the usage example in the nano ble33 sense accelerometer continuous sketch, but I don't understand it well enough to apply it to continuous microphone inference.

thanks

Hi @gasadinolfi,

You should not need to implement or call anything from ei_classifier_smooth.h directly. The only functions you should need to call from the Edge Impulse C++ library are listed in this API reference guide.

Continuous keyword spotting is not an easy task. You very quickly run into timing constraints, as you must constantly fill a buffer with raw audio data while performing feature extraction (e.g. converting time slices to MFCCs) and inference. There are a few ways to approach this:

  • Use an RTOS where a low-priority task performs feature extraction and inference on a rolling window of audio data. You’ll still need to use timer interrupts to sample raw audio data.
  • Use a dual-core processor where one core samples data and the other core performs feature extraction and inference
  • Use timer interrupts to sample the microphone at the correct interval and fill a buffer using DMA so that you don’t need to rely on the CPU (a rough sketch of this pattern follows the list).
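
As a rough illustration of that last approach on the Nano 33 BLE Sense, the Arduino PDM library can push samples into one half of a double buffer from its data-ready callback while the main loop classifies the other half. This is only a minimal sketch modeled on the official microphone-continuous example: the `inference_t` struct, `start_sampling()`, and the buffer sizes are assumptions for illustration, not SDK API.

```cpp
#include <PDM.h>

// Hypothetical double-buffer state (names are assumptions, not part of the SDK).
typedef struct {
    int16_t  *buffers[2];   // two slices: one being filled, one being classified
    uint8_t   buf_select;   // which buffer the callback is currently filling
    uint8_t   buf_ready;    // set when a slice is full and ready for inference
    uint32_t  buf_count;    // samples written into the active buffer
    uint32_t  n_samples;    // samples per slice
} inference_t;

static inference_t inference;
static int16_t sampleBuffer[2048];

// Called by the PDM peripheral whenever new audio data is available:
// copy it into the active slice and flip buffers when the slice is full,
// so the main loop never blocks on sampling.
static void pdm_data_ready_callback(void) {
    int bytesAvailable = PDM.available();
    int bytesRead = PDM.read((char *)&sampleBuffer[0], bytesAvailable);

    for (int i = 0; i < (bytesRead >> 1); i++) {
        inference.buffers[inference.buf_select][inference.buf_count++] = sampleBuffer[i];

        if (inference.buf_count >= inference.n_samples) {
            inference.buf_select ^= 1;   // start filling the other buffer
            inference.buf_count = 0;
            inference.buf_ready = 1;     // main loop can classify the full slice
        }
    }
}

// Allocate both slices and start the microphone at the model's sample rate.
void start_sampling(uint32_t n_samples, uint32_t frequency_hz) {
    inference.buffers[0] = (int16_t *)malloc(n_samples * sizeof(int16_t));
    inference.buffers[1] = (int16_t *)malloc(n_samples * sizeof(int16_t));
    inference.buf_select = 0;
    inference.buf_count  = 0;
    inference.buf_ready  = 0;
    inference.n_samples  = n_samples;

    PDM.onReceive(&pdm_data_ready_callback);
    PDM.begin(1, frequency_hz);   // 1 channel (mono)
}
```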

You can use the run_classifier_continuous() function to perform much of the heavy lifting for you. However, you still need to write your own callback functions to fill the raw data buffer (e.g. using threads or DMA). I have a few demos here that may work as a decent starting point.
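
To show how a double buffer like the one above could feed the classifier, here is a hedged sketch of the inference side. `run_classifier_init()`, `run_classifier_continuous()`, `signal_t`, `ei_impulse_result_t`, and `ei::numpy::int16_to_float()` come from the Edge Impulse C++ SDK; `EI_CLASSIFIER_SLICE_SIZE` and `EI_CLASSIFIER_FREQUENCY` are the macros used in the continuous examples (depending on SDK version you may need to define the slice size yourself). The `inference` struct and `start_sampling()` are the assumed helpers from the previous sketch.

```cpp
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Callback run_classifier_continuous() uses to pull audio: convert the slice
// that was just completed (the buffer the PDM callback is NOT filling)
// from int16 samples to floats.
static int microphone_audio_signal_get_data(size_t offset, size_t length, float *out_ptr) {
    ei::numpy::int16_to_float(&inference.buffers[inference.buf_select ^ 1][offset], out_ptr, length);
    return 0;
}

void setup_inference(void) {
    run_classifier_init();   // required before using continuous mode
    start_sampling(EI_CLASSIFIER_SLICE_SIZE, EI_CLASSIFIER_FREQUENCY);
}

void loop_inference(void) {
    if (!inference.buf_ready) {   // wait until the callback finishes a slice
        return;
    }
    inference.buf_ready = 0;

    signal_t signal;
    signal.total_length = EI_CLASSIFIER_SLICE_SIZE;
    signal.get_data = &microphone_audio_signal_get_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier_continuous(&signal, &result, false);
    if (err != EI_IMPULSE_OK) {
        return;   // handle the error as needed
    }

    // result.classification[] holds the scores for the most recent window
    // (smoothed across slices when the moving-average filter is enabled),
    // which is why you don't need ei_classifier_smooth.h here.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        ei_printf("%s: %.3f\n", result.classification[i].label, result.classification[i].value);
    }
}
```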


What a great explanation! I really appreciate your answer.

Thank you so much for helping me, @shawn_edgeimpulse! 🙌 👏