Training resources for more advanced interactions with Nicla Voice?

Question/Issue:
I am looking for guidance on ways to further my learning of creative ways of using the Nicla Voice. I have trained an audio classifier on Edge Impulse Studio that has only two classes. I would like to run the classifier on the Nicla Voice, and I would like to be able to programmatically interact with inferences coming off of the device. The tutorials that I have watched and read, like this official tutorial “[Keyword Spotting with the NDP120-Powered Arduino Nicla Voice”] and “Audio Analysis with Machine Learning and the Nicla Voice” both lead to an unsatisfying endpoint, where the trained model is used simply spit out classifications / matches to the Terminal, using edge-impulse-run-impulse or the serial monitor, using the Arduino IDE.

That is nice and all, but what if I want to do something a little more interesting, like, say, logging each time a particular keyword is matched? Or triggering some event when a keyword is matched? What I am looking for is a tutorial or training roadmap that is accurate to the particulars of the Nicla Voice and shows me how to write software that does useful and creative things with the output of the neural network as it runs on the device. Please help point me in that direction. Here are some things I’d like to eventually be able to do:

  1. Each time a particular hotword is detected, log it to a file that I can access somehow (e.g. via Bluetooth or a wired connection).
  2. When the model detects a particular keyword like “GPS”, trigger an action such as logging a waypoint with another Arduino module that I would connect to the Nicla.

I was hoping to be able to build software like this using the Arduino IDE.
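To make that concrete, here is roughly the kind of sketch I imagine writing, based on my reading of the AlertEvent-style NDP examples that come with the Nicla Voice board package (I may well be misreading the API). The .synpkg file names, the label format passed to the match callback, and the triggerWaypoint() placeholder are my own assumptions:

```cpp
#include "NDP.h"
#include "Nicla_System.h"

// Placeholder for item 2: ask another module to log a GPS waypoint.
// Purely hypothetical -- maybe a GPIO pulse or a Serial1 message?
void triggerWaypoint() {
  nicla::leds.setColor(blue);
}

// Callback that (I believe) the NDP library fires whenever the NDP120
// reports a match; I assume `label` contains the class name from my impulse.
void onNdpMatch(char* label) {
  // Item 1: log every match (just to the serial port for now; eventually
  // to a file or over Bluetooth).
  Serial.print("Matched: ");
  Serial.println(label);

  // Item 2: react to one specific keyword. I am guessing at the exact
  // label string here, so I match loosely with strstr().
  if (strstr(label, "cough") != NULL) {
    triggerWaypoint();
  }
}

void setup() {
  Serial.begin(115200);
  nicla::begin();
  nicla::leds.begin();

  NDP.onMatch(onNdpMatch);

  // These file names are copied from the stock examples; I am not sure
  // whether my Edge Impulse deployment produces exactly these.
  NDP.begin("mcu_fw_120_v91.synpkg");
  NDP.load("dsp_firmware_v91.synpkg");
  NDP.load("ei_model.synpkg");
  NDP.turnOnMicrophone();
}

void loop() {
  // Nothing to do here; matches arrive via the callback.
}
```

Is that roughly the right structure for getting at the classifier output from my own sketch, or is there a different recommended path?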

To interact programmatically with the output of the NN, I would need to know which deployment configuration to select and how to set up the IDE, of course, plus lots of other things that I don’t even know about yet! But I am looking for some guidance to further my learning.
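For item 1 in particular, instead of writing to a file on the board, I wondered whether I could push each match out as a BLE notification and collect the log on my Mac. Something like the sketch below is what I have in mind, assuming the ArduinoBLE library is usable on the Nicla Voice; the service/characteristic UUIDs and names are placeholders I made up:

```cpp
#include <ArduinoBLE.h>

// Placeholder UUIDs for a custom "keyword match log" service/characteristic.
BLEService matchService("19B10000-E8F2-537E-4F6C-D104768A1214");
BLEStringCharacteristic matchChar("19B10001-E8F2-537E-4F6C-D104768A1214",
                                  BLERead | BLENotify, 32);

// In the combined sketch I would call this from the NDP match callback.
void publishMatch(const char* label) {
  matchChar.writeValue(String(label));  // notifies any subscribed central (my Mac)
}

void setup() {
  Serial.begin(115200);

  if (!BLE.begin()) {
    Serial.println("Starting BLE failed!");
    while (1);
  }
  BLE.setLocalName("NiclaVoiceLog");
  BLE.setAdvertisedService(matchService);
  matchService.addCharacteristic(matchChar);
  BLE.addService(matchService);
  BLE.advertise();
}

void loop() {
  BLE.poll();  // keep the BLE stack serviced so notifications go out

  // Stand-in for a real detection; in the combined sketch this would
  // happen inside the NDP match callback instead.
  // publishMatch("cough");
}
```

On the Mac side I assume a generic BLE app could subscribe to that characteristic and I could save the notifications to a file, but if there is a more standard way to log from the Nicla Voice I’d love to hear it.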

Context/Use case:

  • For the deployment option in Edge Impulse Studio, I have so far selected “Arduino Nicla Voice”.
  • I use a Mac.
  • My connected Nicla Voice shows up at these serial locations:
    /dev/cu.usbmodemEF14F8FF3
    /dev/tty.usbmodemEF14F8FF3
  • My trained model is called cough-detector-nicla-voice-v3.

Environment:

  • Platform: Arduino Nicla Voice
  • Build Environment Details: Arduino IDE 2.3.6 (2025-04-09T11:22:51.016Z), CLI Version 1.2.0
  • OS Version: macOS 15.4.1 (24E263)
  • Edge Impulse Version (Firmware): Edge Impulse impulse runner v1.36.0
  • Custom Blocks / Impulse Configuration: Audio (Syntiant) processing block, Classification learning block, with two output features.