C++ Library input datatype (int16/float)

I am working on a keyword spotting application for educational purposes and we are trying to implement a trained edge impulse on a Nordic nRF52832.

What I have so far:

  • Trained, tested and downloaded the quantized model
  • Implemented the PDM microphone on the SoC
  • Compiled the SoC code incl. the edge impulse, using the Edge Impulse Wrapper

We are very low on RAM, but there is just enough space for the model and an int16 buffer that holds 600 ms of audio samples. As far as I can see, the run_classifier(signal, result, debug) function expects a pointer to floats, which are 32 bits on the nRF and hence take up too much memory. Is there anything that can be done to use int16 values instead?
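To make the memory pressure concrete, here is a quick sizing sketch. The 16 kHz sample rate is an assumption on my part (a common choice for keyword spotting; adjust to your PDM configuration):

```cpp
#include <cstddef>
#include <cstdint>

// Assumed: 16 kHz sample rate (hypothetical -- use your actual PDM config).
constexpr size_t kSampleRateHz = 16000;
constexpr size_t kWindowMs     = 600;
constexpr size_t kNumSamples   = kSampleRateHz * kWindowMs / 1000;  // 9600 samples

// RAM needed for the 600 ms audio window in each representation:
constexpr size_t kInt16Bytes = kNumSamples * sizeof(int16_t);  // 19200 bytes
constexpr size_t kFloatBytes = kNumSamples * sizeof(float);    // 38400 bytes
```

Under these assumptions a float32 window needs twice the RAM of the int16 window, which matches the concern above.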

Thank you very much in advance.

Welcome to the Edge Impulse community, @LL42!

Oh nice, I will need to try out that wrapper!

Why not work with int8 quantization? Model quantization: ensure your model is fully quantized. Quantized models use int8 operations instead of float32, significantly reducing the memory footprint. You mentioned you have a quantized model, so double-check that it is actually being used.
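For the model weights themselves, the footprint reduction from int8 quantization is roughly 4x versus float32. A back-of-the-envelope sketch (the parameter count is purely hypothetical):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical parameter count for a small keyword-spotting network.
constexpr size_t kNumParams = 25000;

// Weight storage: float32 vs. int8-quantized.
constexpr size_t kFloat32WeightBytes = kNumParams * sizeof(float);   // 100000 bytes
constexpr size_t kInt8WeightBytes    = kNumParams * sizeof(int8_t);  //  25000 bytes
```

Note this concerns the model's internal storage; the question of what datatype the *input signal* callback hands over is separate, as the follow-up below points out.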



There is probably a misunderstanding on my side as I do want to use the quantized NN.

The problem is just that all the examples I could find use floats. For example, the get_signal_data() callback function explained here expects the output pointer to be of type float:

// Callback: fill a section of the out_ptr buffer when requested
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
    for (size_t i = 0; i < length; i++) {
        out_ptr[i] = (input_buf + offset)[i];
    }
    return EIDSP_OK;
}
The same applies to the standalone example provided here on GitHub.

I already have my audio samples in memory as int16_t, so providing int8_t would be no problem. I suppose I only have to find and adapt the right functions. Any recommendations are highly appreciated!

Best regards