Need help capturing audio from X-NUCLEO-CCA02M2 on STM32N657 (Edge Impulse working with raw features only)

Hi Edge Impulse Team and Community,

I’m working on deploying an audio model on an STM32N6 using the X-NUCLEO-CCA02M2 microphone expansion board with the NUCLEO-N657X0-Q, and I’m currently stuck at the audio capture stage.

I was able to successfully run inference using manually provided raw features from this repository:

So the Edge Impulse model and inference pipeline are working correctly on the STM32N6.
However, I’m having trouble capturing real audio data from the CCA02M2 board, as there is a lack of clear examples or documentation for STM32N6-based Nucleo boards.

My setup:

• X-NUCLEO-CCA02M2 connected pin-to-pin via ST-Morpho to NUCLEO-N657X0-Q
• Power supplied through CN9 (1-2)
• On-board microphones selected (J2 = 1-2)
• J1 open

Current issue:

The audio data I read from the microphones never changes, which suggests that the clock, pin mapping, or peripheral configuration may not be correct. Unfortunately, I couldn’t find any working examples for STM32N6 + CCA02M2 in STM32Cube or in the Edge Impulse integrations.

Could anyone please help with:

• A minimal working example for capturing audio from CCA02M2 on STM32N6
• Recommended approach for integrating microphone input with Edge Impulse on STM32N6
• Confirmation if this setup is currently supported or tested

Any guidance or pointers would be greatly appreciated.

Thank you very much!

Hi @danielferreira17 ,

I’m sorry, but I have no experience with the CCA02M2 :frowning:
Have you searched for examples using this shield? With a quick search I found this one: STM32CubeU3/Projects/NUCLEO-U385RG-Q/Examples/ADF/ADF_AudioRecorder at 453c7fafaeb4774b4b0f6e4f8486e7c183fc254e · STMicroelectronics/STM32CubeU3 · GitHub

The drivers seem to be these: fp-sns-allmems1/Drivers/BSP/CCA02M2 at 4133d00ee448e2f145fd72a609891fc4da3a8642 · STMicroelectronics/fp-sns-allmems1 · GitHub

Are you going to use an OS, or run bare metal?

For running inference on audio data, my suggestion is to check the examples on our GitHub showing how we do it (the board doesn’t matter, as long as you can provide a function that collects the correct amount of data).
Here are examples of the interface you call to run inference, for two different boards:

The structure is a bit different, but in the end they do the same thing: an init phase, then recording plus running the classifier.
You can start by setting up this “run inference” step and passing fake or all-zero data to test that the inference is running properly.
Then, for collecting audio data, again check the BSP from ST and their examples.
Once you are familiar with handling the mic, check how we have done it here:

and

just the functions that deal with the inference.

Hope all this helps!

Let me know how it goes.

regards,
fv