Binary output for Nordic nRF9161 DK + IKS02A1 is not working as expected

Hi All,

I am trying to build a fire alarm detection system that works by analysing sound.
For this, I am using the Nordic nRF9161 DK with the IKS02A1 shield.
I created a public project (585872), used Audio (MFE) as the processing block, and Classification as the learning block.

When I run the model in a web browser, it works fine. But when I deploy the impulse to the Nordic nRF9161 DK + IKS02A1 (binary output), it does not work; it always shows the same result:

Starting inferencing in 2 seconds…
Sampling…
Timing: DSP 140.000000 ms, inference 16.000000 ms, anomaly 0 ms
#Classification results:
FireAlarm: 0.000000
Noise: 0.996094
Starting inferencing in 2 seconds…
Sampling…
Timing: DSP 141.000000 ms, inference 15.000000 ms, anomaly 0 ms
#Classification results:
FireAlarm: 0.000000
Noise: 0.996094

Any suggestions/input would be of great help.

Thanks in advance.

Hi @saroopvs

Welcome to the forum! Are you using our firmware to test with, and are you testing via the impulse runner? If so, it is published on GitHub here -> GitHub - edgeimpulse/firmware-nordic-nrf91x1: Official Edge Impulse firmware for nRF9161DK and nRF9151DK

If this is how you are running it, I can give you some debugging steps. But if I'm correct, the IKS02A1 is an additional shield you have connected to the devkit? If so, you may need Nordic's help here; did you ask on their forum?

If you are using an alternative microphone, make sure it is also compatible with the sampling rate required by the project / firmware, which is 16 kHz in the firmware -> firmware-nordic-nrf91x1/src/sensors/ei_microphone.cpp at afb65a1285a4876e434fc0c5d535ebcd3cb9cec4 · edgeimpulse/firmware-nordic-nrf91x1 · GitHub
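For reference, a PDM microphone on a Zephyr-based firmware is typically configured through the DMIC API, and the sample rate is set there. The sketch below follows the pattern of Zephyr's DMIC sample code; the device handle, block size, PDM clock limits, and channel settings are assumptions for illustration, not values taken from the Edge Impulse firmware.

```c
/* Hedged sketch: configuring a Zephyr DMIC (PDM) stream at 16 kHz.
 * All concrete values here (block size, PDM clock range, channel count)
 * are illustrative assumptions -- check the mic's datasheet (e.g. the
 * IMP34DT05 on the IKS02A1) and the actual firmware source. */
#include <zephyr/audio/dmic.h>

#define SAMPLE_RATE_HZ 16000  /* must match the project's 16 kHz */
#define SAMPLE_BITS    16

static int configure_mic(const struct device *mic_dev, struct k_mem_slab *slab)
{
    struct pcm_stream_cfg stream = {
        .pcm_rate = SAMPLE_RATE_HZ,
        .pcm_width = SAMPLE_BITS,
        .block_size = 3200,          /* 100 ms of 16-bit mono audio */
        .mem_slab = slab,
    };
    struct dmic_cfg cfg = {
        .io = {
            /* PDM clock limits depend on the specific microphone. */
            .min_pdm_clk_freq = 1000000,
            .max_pdm_clk_freq = 3250000,
        },
        .streams = &stream,
        .channel = {
            .req_num_streams = 1,
            .req_num_chan = 1,
        },
    };
    return dmic_configure(mic_dev, &cfg);
}
```

If the rate configured here and the rate the impulse was trained at disagree, the DSP block will see stretched or compressed audio and classification results will be unreliable.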

cc @mateusz @vojislav

Best

Eoin

Hi @Eoin,

Thanks for your reply.

I am not using the 'edge-impulse-run-impulse' command to run the impulse; instead, I am connected to the devkit through minicom using the command 'minicom -C communication.log -b 115200 -D /dev/ttyACM0'.

I used the GitHub repo to get the result, but the result was the same as described above.
After flashing this code, I tried acquiring sound data to confirm that the code and hardware setup worked, which rules out a problem with the hardware setup. The same setup was used to acquire the sound data for training.

In the GitHub repo code, I believe the function 'inference_sampling_work_handler' is used to get the samples for classification, and 'dmic_read' (dmic_read(mic_dev, 0, &buffer, &size, READ_TIMEOUT)) is the function that captures the sound data from the mic. While inspecting the buffer, after a few cycles of dmic_read the data is: Buffer Data (3200 bytes): 07 00 07 00 07 00 07 00 07 00 07 00…