Raspberry Pi Pico and Adafruit MAX9814 Audio classification

    while(1) {

        printf("Debug 1\n");

        printf("Inferencing...\n");
        
        // Buffer for one full window of raw samples
        float buffer[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = { 0 };

        // Fill the buffer with one averaged ADC reading every EI_CLASSIFIER_INTERVAL_MS
        for(size_t ix = 0; ix < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; ix += EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME) {

            uint64_t next_tick = ei_read_timer_us() + (EI_CLASSIFIER_INTERVAL_MS * 1000);

            buffer[ix] = get_axis(ADC_NUM);
            printf("%.5f\n", buffer[ix]);
            sleep_us(next_tick - ei_read_timer_us());
        }

        // Wrap the raw buffer in a signal_t so the classifier can read from it
        signal_t signal;
        int err = numpy::signal_from_buffer(buffer, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

        if(err != 0) {
            printf("Error: Failed to create signal from buffer");
            return 1;
        }

        // Run the DSP block and the classifier on the captured window
        ei_impulse_result_t result = { 0 };

        err = run_classifier(&signal, &result, false);
        if(err != EI_IMPULSE_OK) {
            printf("Error: Failed to run classifier");
            return 1;
        }

        // Print the score for each label
        for(size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            printf("%s: %.5f\n", result.classification[ix].label, result.classification[ix].value);
        }
    }

The get_axis function:

float get_axis(int adc_n) {
    unsigned int axis_raw = 0;

    // Average NSAMP consecutive ADC readings to reduce noise.
    // Note: adc_n is not used here; adc_read() samples whichever ADC input
    // was last selected with adc_select_input().
    for(int i = 0; i < NSAMP; i++) {
        axis_raw = axis_raw + adc_read();
    }

    axis_raw = axis_raw / NSAMP;

    // Scale the averaged raw value with ADC_CONVERT and return it as a float
    return axis_raw * ADC_CONVERT;
}

It always returns the same classification, even though the web model classifies correctly. Is there a problem with my code?

Hi @zloberto,

The code looks right at first glance. I recommend doing the following to start debugging:

  1. Print the raw ADC values (as you're already doing with printf) in comma-separated value (CSV) format to the console. Do this for only one captured frame.
  2. Copy the full raw CSV output and paste it into a simple Python program as a NumPy array.
  3. Convert the NumPy array to an audio file (see, for example, the Stack Overflow question "How to generate audio from a numpy array?").
  4. Listen to the recording: is it what you expect (i.e. does it sound like a good sample)?
  5. If the sample sounds OK, create a static inference program on the Pico where you paste the sample into a const array and call run_classifier() on just that array to see whether the output is what you expect (a sketch of this is shown after the list).
  6. You can also upload the converted WAV file to Edge Impulse as a test sample to make sure the inference results match those from step 5.
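
For step 5, a minimal sketch of such a static-inference program is shown below. It only reuses the SDK calls already present in your main loop (numpy::signal_from_buffer and run_classifier); the features[] array is a hypothetical placeholder that you would fill with the values pasted from your CSV dump, and the include paths assume the usual Edge Impulse C++ SDK and Pico SDK project layout.

#include <stdio.h>
#include "pico/stdlib.h"
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Paste one captured frame here (the values from your CSV dump); placeholder shown.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = {
    // 0.01234, 0.01250, ...
};

int main() {
    stdio_init_all();

    // Wrap the pasted buffer in a signal_t, exactly as in the live-sampling code
    signal_t signal;
    int err = numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
    if(err != 0) {
        printf("Error: Failed to create signal from buffer\n");
        return 1;
    }

    // Run the classifier once on the static frame and print the scores
    ei_impulse_result_t result = { 0 };
    err = run_classifier(&signal, &result, false);
    if(err != EI_IMPULSE_OK) {
        printf("Error: Failed to run classifier\n");
        return 1;
    }

    for(size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        printf("%s: %.5f\n", result.classification[ix].label, result.classification[ix].value);
    }

    return 0;
}

If the static frame classifies correctly here but the live loop does not, that would point to the sampling/scaling path rather than the model itself.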

So if I understood correctly, I need to run
for(size_t ix = 0; ix < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; ix += EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME)
once, and then copy the printed output into a NumPy array?