A custom response/handler after keyword spotting

Thanks for the awesome Edge Impulse and for making embedded machine learning relatively easy to implement!
I have a question about implementing a custom response after keyword spotting. In the Deployment section, instead of receiving a binary firmware image, is it possible to obtain the source code of the project so that I can modify it and add a custom response to keyword recognition? For instance, when I follow your "Responding to your voice" tutorial for the nRF52840 DK with the X-NUCLEO-IKS02A1, I want to, say, blink LEDs when "Hello World" is recognized, not merely write to the debug output. It seems easy to modify the firmware, but I can't find the source code of the project that is built in the "Build firmware" step when I choose a specific board. Do you provide the source code of this project so that I can modify and build it myself? The final goal is to trigger a voice recording after keyword spotting and send it onward. Could you clarify a bit how to write a custom handler after keyword spotting?

Hi @RAlexeev,

Thanks for your feedback!

You have 2 ways to customize your application code:

  • Follow our "Running your impulse locally" tutorial. You'll also need to manage data acquisition from your microphone yourself.
  • Retrieve the full firmware from our GitHub, e.g. Nordic nRF DK. Export the C++ library from your project and replace the corresponding folders in the firmware directory. You'll then have access to the source code and can modify any part of it with your own custom code. This works well if you use a devboard fully supported by Edge Impulse.

Aurelien

@RAlexeev adding to @aurel:

  1. Use firmware-nrf52840-5340-dk as a base.
  2. Remove edge-impulse-sdk, model-parameters and tflite-model, and replace them with the folders from your project (after the C++ library export).
  3. Then you can modify the code to, e.g., toggle an LED (see the sketch below).
  4. If you want to make the impulse run on startup of your board, add the following to main.cpp:
#include "ei_run_impulse.h"

And then after ei_init(), add:

run_nn_continuous_normal();

That's it.
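
To make step 3 concrete, here is a minimal sketch of an LED toggle. It assumes a recent Zephyr GPIO API (include paths differ between Zephyr versions), the board's led0 devicetree alias, and a hypothetical on_classification() hook that you would call from wherever the firmware handles classification results; adapt the names to your tree:

#include <zephyr/drivers/gpio.h>  // <drivers/gpio.h> on older Zephyr versions
#include <string.h>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// LED0 as defined in the board's devicetree
static const struct gpio_dt_spec led0 = GPIO_DT_SPEC_GET(DT_ALIAS(led0), gpios);

// Call once at startup, e.g. right after ei_init()
void led_init(void) {
    gpio_pin_configure_dt(&led0, GPIO_OUTPUT_INACTIVE);
}

// Hypothetical hook: call from the spot where the firmware prints its results
void on_classification(const ei_impulse_result_t *result) {
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        if (strcmp(result->classification[ix].label, "hello world") == 0 &&  // your keyword label
            result->classification[ix].value > 0.8f) {                      // confidence threshold
            gpio_pin_toggle_dt(&led0);
        }
    }
}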

It's amazing! :partying_face: Many thanks for your quick response, @aurel and @janjongboom. Everything works after following your advice. I asked the same question at the Nordic Semiconductor webinar "Powering the next generation of IoT with Embedded Machine Learning" on February 24, but you had already answered me in detail. And you can see in the video that it works with my own Russian words after training a model in Edge Impulse - https://yadi.sk/i/iYfp4VvTltyirA Great! :+1: I wish your project every success!

Hi, I'm also trying to include the library in my project. I'm working on an nRF52840 with the Keil IDE. I included the library, but when compiling I got a lot of errors.
I'm probably missing some necessary steps, but I don't know how to solve them.

Just to give you a better overview, I'm working with the latest Nordic SDK with the whole BLE stack.

thank you so much

Hi @Fede99, can you compile https://github.com/edgeimpulse/firmware-nrf52840-5340-dk with Zephyr successfully? Zephyr with west is the only build toolchain we support for the nRF targets, but I'm always happy to assist if you give some more context on the errors you see.

Hi @janjongboom, and thank you so much for your answer and your time. I'm working on an nRF52840 project based on a simple scheduler OS. When building through Zephyr and west, is it also possible to customize the project, or does the output need to be flashed directly onto the board?

thank you so much

@Fede99 I'd start with https://docs.edgeimpulse.com/docs/running-your-impulse-locally-1 - the 'on your desktop computer' section has a complete Makefile with minimal requirements and zero external dependencies, and it compiles on most MCUs out of the box. This just verifies that you can run a trained model on your hardware (no sensors hooked up yet).
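
For reference, the standalone runner from that tutorial boils down to something like this sketch. It assumes the exported C++ library is on the include path and that raw_features is filled with exactly one window of data, e.g. copied from the raw features shown in Live classification:

#include <stdio.h>
#include <string.h>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// One full model window of raw data; fill from "Live classification" raw features
static float raw_features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to pull data out of the buffer
static int get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, raw_features + offset, length * sizeof(float));
    return 0;
}

int main() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_data;

    ei_impulse_result_t result;
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
    if (err != EI_IMPULSE_OK) {
        printf("run_classifier failed (%d)\n", err);
        return 1;
    }

    // Print the score for each label
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        printf("%s: %.5f\n", result.classification[ix].label,
               result.classification[ix].value);
    }
    return 0;
}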

Once you have that, you can look at the reference code for the nRF52840 DK for continuous audio classification (how to feed in DMA buffers, etc.).

Hey, I got this somewhat working, but I'm getting much lower scores on the device than during live classification through the site (e.g. 0.2-0.5 on device vs. > 0.9 on site). All my training samples were recorded with the same PDM mic via the nRF52840 DK. It feels like some of the samples are falling on window edges during inference, but that shouldn't be the case with continuous inference, correct?

EDIT: I will mention that I changed the PDM clock frequency from NRF_PDM_FREQ_1280K to the default of NRF_PDM_FREQ_1032K, although I can't imagine why that would matter.

EDIT 2: I feel like I'm missing something and that the double-buffering approach is not implemented on the nRF. I thought that was the meaning of continuous inference, but perhaps not.

@jefffhaynes I thought I replied to this as well (maybe in another thread), but yes, continuous inferencing is implemented on the nRF52840 DK. However, the moving average filter that we apply over the results might skew your scores. See https://docs.edgeimpulse.com/docs/responding-to-your-voice#poor-performance-due-to-unbalanced-dataset for how to disable it.
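
For reference, a sketch of disabling it in code, assuming an SDK version where run_classifier_continuous() takes a trailing enable_maf flag (check the signature in your ei_run_classifier.h):

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Classify one slice of audio with the moving-average filter disabled.
// `signal` wraps EI_CLASSIFIER_SLICE_SIZE freshly captured samples.
EI_IMPULSE_ERROR classify_slice(signal_t *signal, ei_impulse_result_t *result) {
    return run_classifier_continuous(signal, result,
                                     false /* debug */,
                                     false /* enable_maf */);
}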

You did! And that did help quite a bit. Increasing my training set did as well. One other question I had - where does the EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW value come from? I realize it is defined in model_metadata.h but why, for example, is it 4? Is that just a default or is it computed from something during model definition on the site?

It's a default that we know works well. If you have time to spare (or need to decrease power usage) you can increase or decrease it.
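
For context on what the value controls: the slice size in model_metadata.h is derived from it, so it sets how often a new slice is fed to the classifier. The numbers below are illustrative, assuming a 1-second window at 16 kHz:

// From model_metadata.h in the exported C++ library:
// #define EI_CLASSIFIER_SLICE_SIZE (EI_CLASSIFIER_RAW_SAMPLE_COUNT / EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)
//
// Illustrative numbers for a 1 s window at 16 kHz with 4 slices:
//   EI_CLASSIFIER_RAW_SAMPLE_COUNT        = 16000 samples (one model window)
//   EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW = 4
//   EI_CLASSIFIER_SLICE_SIZE              = 16000 / 4 = 4000 samples = 250 ms
//
// More slices per window means more frequent (overlapping) inferences and more
// compute; fewer slices means lower power but coarser detection timing.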
