Support for analog microphones

I was wondering if there is or will be any support for analog microphones. Everything that I’ve seen so far is for PDM microphones. I am using a Wio Terminal which uses an analog microphone on board.

I don’t think this would be too hard to implement. You need to fill the signal buffer with raw audio data, and the EI library takes care of feature extraction and inference, regardless of where the data comes from. The hard part is doing it quickly enough so you don’t overrun the audio buffer.
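To make the "fill the buffer while inference runs" idea concrete, here is a minimal, hardware-agnostic sketch of a double buffer: a producer (timer ISR or DMA callback) fills one half while the main loop runs feature extraction and inference on the other half. All the names here are illustrative, not from the EI SDK:

```cpp
#include <cstdint>
#include <cstddef>

// Illustrative double buffer: the ISR/DMA "producer" fills one half while
// the main loop consumes the other half, so neither side blocks the other.
constexpr size_t kWindow = 16000;          // 1 s of audio at 16 kHz
static int16_t buf[2][kWindow];
static volatile size_t write_idx = 0;
static volatile int active = 0;            // half currently being filled
static volatile bool half_ready = false;   // consumer may process buf[active ^ 1]

// Called once per sample (e.g. from a timer ISR or a DMA complete callback)
void on_sample(int16_t s) {
    buf[active][write_idx++] = s;
    if (write_idx >= kWindow) {            // this half is full
        write_idx = 0;
        active ^= 1;                       // producer switches halves
        half_ready = true;                 // signal the main loop
    }
}
```

In the main loop you would wait for `half_ready`, then run inference on `buf[active ^ 1]` while the ISR keeps filling `buf[active]`. Inference just has to finish within one window length, or you overrun.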

There are a few ways to handle this that I know of. The first is to use DMA (assuming your controller supports DMA on an ADC pin) to fill the signal buffer while your processor does feature extraction and inference. I used I2S with DMA in my STM32 demo to fill the buffer.

The other option is to use a preemptive RTOS to interrupt the processor every few dozen microseconds to read a sample from the ADC. I think this is what the EI PDM example is doing (I’m sure an EI person will correct me if I’m wrong on this 🙂).

Edit: I’ve also dabbled with the Wio Terminal, and it is lacking a way to fill an audio buffer in parallel with inferencing. I wish I knew the SAMD51 better, as I can’t imagine it would be too hard to set up DMA if I did 😕

@ShawnHymel, you’re correct. DMA is preferred, but you can also have a higher-priority thread sample every X ms, depending on the size of the buffer.


Haven’t tried DMA yet but it seems that’s the right approach. I’m also new to the SAMD51 but it does DMA with the ADC. I think Adafruit may have extended their Zero DMA library to work with the SAMD51 in addition to the SAMD21. That’s probably what I’ll try first when I get a chance.

@ralphjy Please let me know if it works! I’d love to get my Wio Terminal doing speech recognition with the built-in mic.

Adafruit does have code and examples showing how to read from an analog microphone on the SAMD51. It uses a timer to take samples from the ADC at 16 kHz. I got this approach working with the EI Arduino libraries on one of the Adafruit SAMD51 boards (the PyBadge). The ADC is set to 12-bit resolution, but things seemed to work fine with a model that was trained on 16-bit data. I have it working with the Adafruit electret mic, which has a built-in pre-amp on the board and powers the mic. It is just hooked to A1.
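On the 12-bit vs. 16-bit point: centering the unsigned ADC reading gives you a signed sample directly, and if you ever wanted to match full-scale int16 training data, scaling up by 16 is one option. The function names below are illustrative; the scaling variant is a possible tweak, not something the post above says it used:

```cpp
#include <cstdint>

// 12-bit unsigned ADC reading (0..4095) -> signed, centered at 0 (-2048..2047)
int16_t adc12_to_signed(uint16_t raw) {
    return (int16_t)((int32_t)raw - 2048);
}

// Optional: scale the centered 12-bit value into the full int16 range
// (multiply by 16), if a model was trained on full-scale 16-bit audio.
int16_t adc12_to_int16_fullscale(uint16_t raw) {
    return (int16_t)(((int32_t)raw - 2048) * 16);   // -32768..32752
}
```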

This approach uses the Adafruit Arcada library, which in turn calls the Adafruit ZeroTimer library. I think this could be adapted to work with any SAMD51-based board… or anything that has good timers. Here is the timerCallback function, which does the heavy lifting and feeds the analog audio into the EI inference struct:


    void timerCallback() {
        int32_t sample = 0;

        if (inference.buf_ready == 0) {
            sample = analogRead(USE_EXTERNAL_MIC);
            sample -= 2048; // 12-bit unsigned audio (0-4095) to signed (-2048 to 2047)

    #if defined(AUDIO_OUT)
            analogWrite(AUDIO_OUT, sample + 2048); // optional monitor output on a DAC pin
    #endif

            inference.buffer[inference.buf_count++] = (int16_t) sample;

            if (inference.buf_count >= inference.n_samples) {
                inference.buf_count = 0;
                inference.buf_ready = 1;
            }
        }
    }

You set up the timer with this call:

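The original setup snippet didn't make it into the post; as a hedged sketch, the Adafruit Arcada library exposes a timerCallback() helper that configures a ZeroTimer at a given rate. The exact signature is an assumption here, so check the library before relying on it:

```cpp
// Hedged sketch, assuming the Adafruit Arcada API:
// fire timerCallback() at 16 kHz so each tick reads one ADC sample.
arcada.timerCallback(16000, timerCallback);
```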

All of this is based on this example:


Good find, thank you!

I managed to get DMA working on the Wio Terminal to sample from the ADC at 16 kHz. I created a demo that does continuous keyword spotting on the Wio Terminal:

I also made a video of it working:

I tried porting it to the PyBadge, but I wasn’t having much luck, so I put it aside for the time being. I think the PyBadge is using the same timers or DMA channel that I chose for something else, and I haven’t been able to fully figure out where the conflicts are occurring. Time permitting, I may look through that example you provided to see if I can get DMA/ADC working with my SAMD51 code (from the Wio Terminal).



I would like to classify sound from an IMP23ABSU microphone (see datasheet) into 3 categories for escalator condition monitoring. I would like to use an STM32L476 microcontroller. Do you think a similar approach would work here?

Here is a sample Impulse I created:

Yes, as long as you can capture the sound data and feed it to your neural network, I think that approach will work. I’ve had good luck using the STM32L476 for sound classification.

The IMP23ABSU is meant for ultrasound (up to 80 kHz) applications. Do you need to sample at such a high frequency? 80 kHz might be too taxing for the L476. I used a SPH0645LM4H (32 kHz) and downsampled to 16 kHz for keyword spotting.


Hi, I don’t think I need 80 kHz, but since my application is escalator maintenance I would like to keep the sampling rate fairly high. Do you think the SPH0645LM4H at 32 kHz, without downsampling, would be feasible on the STM32L476? Thanks

@iarakis It should probably be fine, but I see little information in the high frequencies in your sample project, so you might want to downsample anyway.

I have a Thunderboard Sense 2 board and would like to do sound classification with it on Edge Impulse. I would like to use , which is a huge audio dataset. I don’t need such a huge dataset; I can use a reduced training set. What would be a reasonable dataset/category size to use with Edge Impulse?

Hi @iarakis,

If you extract 1 to 2 hours of data, that will be a good start.

FYI, I worked with this dataset a while ago to import it into an Edge Impulse project; you can have a look at this code:

There are probably some changes to make but you can play with the dataset size and the foreground/background split of samples. It also resamples wav files to 16 kHz.


I am using an analog mic with a MAX9814 amplifier on an STM32F4 via the ADC.

Note that the Sony Spresense also uses analog microphones, so we know this works too:


Hey guys, great discussion. I’ve designed an ultrasound mic with pre-amps for an automatic bat-detection IoT device. I’m looking to ingest the data into an impulse, and it would be nice to have the full spectrum. Has anyone found a proven method for this?

I assume you are referring to the animal.
It is all time-series data, so you can do this in EI.

Take the Recognize sounds from audio example as a starting point. It is a similar approach, but for your application you need to look at a different frequency range in the spectrogram. If I am correct (but you know this better), your frequency range will be between 18 kHz and 120 kHz, depending on the species.
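One practical check: to capture content up to ~120 kHz you need a sample rate of at least twice that (Nyquist), and the spectrogram's frequency resolution follows from the sample rate and FFT length. A quick sanity-check calculation (illustrative numbers):

```cpp
// Quick Nyquist / spectrogram-resolution arithmetic (illustrative).
double min_sample_rate(double max_freq_hz) {
    return 2.0 * max_freq_hz;          // Nyquist: sample at >= 2x the highest frequency
}

double fft_bin_width(double sample_rate_hz, int fft_len) {
    return sample_rate_hz / fft_len;   // Hz per spectrogram frequency bin
}
// e.g. calls up to 120 kHz need >= 240 kHz sampling;
// at 256 kHz with a 512-point FFT, each bin spans 500 Hz.
```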

Maybe check these: Detection and Multi-label Classification of Bats, and batML

You could also try transfer learning, where you change the number of classes, though I am not sure whether it can be combined with a spectrogram directly without modification; you would probably need to tweak the input shape. @louis Is this possible?

I hope this will help to get you started.
