How do I run inference with a multiple input network deployed on a MCU?

I am working through the continuous motion recognition tutorial. I would like to integrate the resulting neural network into an application running on a Nucleo STM32 board with a sensor shield. The application should periodically print the recognized gesture on the UART, nothing more, nothing less. I successfully used the ingestion service to acquire accelerometer training/validation/test data from the board + shield. However, I am not completely sure how the run_classifier and run_classifier_continuous functions work. Here are my questions (I will refer to the “Running your impulse locally” page for the Cube.MX CMSIS pack).

1- Should I feed the run_classifier_* functions the raw accelerometer data, i.e., what the continuous motion recognition tutorial calls accX, accY, accZ?
2- In what order should I fill the features array? Is it something like {accX_t0, accY_t0, accZ_t0, accX_t1, accY_t1, accZ_t1, … }?

Pietro

Hi @pietrobraione ,

  1. Basically yes. You can check some of our public repos to see how it is done, e.g. firmware-arduino-nano-33-ble-sense/src/ingestion-sdk-c/ei_run_impulse.cpp at master · edgeimpulse/firmware-arduino-nano-33-ble-sense · GitHub, firmware-renesas-ck-ra6m5/src/inference/ei_run_audio_impulse.cpp at main · edgeimpulse/firmware-renesas-ck-ra6m5 · GitHub, or, for a more basic Nucleo board example, example-standalone-inferencing-st-nucleo-f466re/ei_main.cpp at main · edgeimpulse/example-standalone-inferencing-st-nucleo-f466re · GitHub. The last one uses the Open CMSIS pack, which is more or less the same as the Cube.MX CMSIS pack.
  2. This depends on your model_metadata.h; you can check the axis order there, e.g. #define EI_CLASSIFIER_FUSION_AXES_STRING "accX + accY + accZ"

regards,
fv