Nicla Voice IMU Impulse Not Working

Hello guys! This is José, again :slight_smile:

I am seeing strange behavior with my Nicla Voice and a model I developed using Edge Impulse. I made a simple motion classification impulse using the built-in IMU of the Nicla Voice (three different movements: moving the board up and down, moving it left and right, and keeping it still). I had trouble uploading the generated firmware to the board, but that issue is already resolved. When I test the impulse using the Live Classification tool of Edge Impulse, everything seems to be OK; every movement is detected without any problem. But when I deploy the impulse to the board and run it using the following command:


Inferencing starts on the board, but it can't detect anything correctly; it only shows a match on an incorrect movement. Any guess as to what could be happening with the impulse deployment on the board? The Project ID is 230837. Any help with this is appreciated. Thank you!

Hi @josea.bagur

I will take a look at your project.




Thank you @Eoin. The easiest way to test the project and see the issue I described is to leave the Nicla Voice still (idle). You should see that the Live Classification tool in Edge Impulse Studio correctly recognizes that the board is not moving, but that the idle state is not recognized once the model is deployed to the board.

@josea.bagur, one thing I noticed (haven’t tested it on my board yet) is that you don’t have a so-called negative class in your model. I checked our doc and indeed we explain this concept (specific to Syntiant) for audio but not in the IMU tutorial - we’ll fix that.
When deploying on the Syntiant target, the last class (alphabetical order) is automatically assigned as a negative one - meaning it will not trigger a prediction. You can notice this in the ph_params.json file in the exported zip file:

{"label": "up-down", "phwin": 0, "phth": 0.5, "phbackoff": 0, "phaction": 0, "phaction_arg": 2, "smoothing_queue_size": 1}, 
{"label": "up-down", "phwin": 255, "phth": 0.998, "phbackoff": 0, "phaction": 2, "phaction_arg": 3, "smoothing_queue_size": 1}]}]}

I'm not sure how the NDP chip reacts to this config, but it probably has some side effects. What I'd suggest is renaming your idle class to z_idle, retraining the model, and unselecting z_idle when generating the posterior parameters.
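The alphabetical rule above can be illustrated with a short sketch. Note this is only an illustration of the "last class in alphabetical order becomes the negative class" behavior described here, not the actual logic inside the Edge Impulse exporter; the label names mirror the project's classes:

```python
def negative_class(labels):
    """Return the class the Syntiant deployment would treat as negative
    (i.e., the one that never triggers a prediction)."""
    return sorted(labels)[-1]

# With the original labels, a real movement class ("up-down") is
# silently assigned as the negative class:
print(negative_class(["idle", "left-right", "up-down"]))    # up-down

# After renaming "idle" to "z_idle", the idle class sorts last, so both
# movement classes can still trigger predictions:
print(negative_class(["left-right", "up-down", "z_idle"]))  # z_idle
```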



Oh, that makes sense. Sorry, I hadn't seen the audio documentation for Syntiant; I jumped straight into the IMU tutorial when I got the board :slight_smile:. I am going to read about the concept in the audio documentation, test your suggestions, and keep posting my findings here. Thank you guys for your prompt responses, BTW! :smiley:


The doc has been updated: Motion recognition - Syntiant - Edge Impulse Documentation



@aurel I did what you suggested: I renamed the idle class to z_idle, retrained the model, unselected the z_idle class when generating the posterior parameters, and rebuilt the firmware. I am getting the same results. When I use the Live Classification tool in Edge Impulse Studio, motion is detected as expected, but only one of the classes is detected when the model is deployed to the board.

Hi @josea.bagur

I've just landed a fix for this problem; it should be live soon.



Thank you @ei_francesco, looking forward to the fix! :smiley: