Firmware-xg24 silabs-efr32mg24-data-acq-proj-1: no TinyML inference output after deployment from EI

Question/Issue:
I’m using this existing project to train, test, validate, and deploy models onto the Silabs EFR32MG24 wireless SoC (BRD2601B A01), but I’m not getting any TinyML inference output from the board after installing the binary HEX file onto my Silabs dev kit.

Project ID:
silabs-efr32mg24-data-acq-proj-1

Context/Use case:
Missing TinyML inference output is encountered every time after downloading the binaries for the Silabs EFR32MG24 dev board from Edge Impulse and installing them.

Steps Taken:

  1. I cleared the training data and restarted the data collection process from scratch
  2. I then retrained the model on the new data and compiled it for deployment onto the BRD2601B A01 dev kit
  3. I then used the Silabs IDE Flash Programmer to program the dev board successfully
  4. When I connect with the Silabs EFR Connect app, no inference output is produced in inference mode (HEX 01). This symptom occurs consistently on this project whenever I use Edge Impulse to produce the binary/HEX file output. (A sketch of the equivalent BLE write follows this list.)
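For reference, this is roughly what the EFR Connect interaction amounts to programmatically: writing 0x01 to the firmware's mode characteristic over BLE, then listening for notifications. This is only a minimal sketch, the device address and both UUIDs are placeholders rather than the actual identifiers exposed by the Edge Impulse xG24 firmware; check the GATT table in EFR Connect for the real values.

```python
# Minimal sketch of the BLE write that the EFR Connect app performs manually.
# DEVICE_ADDRESS and both UUIDs are PLACEHOLDERS -- substitute the values
# shown in EFR Connect's GATT browser for your BRD2601B.
import asyncio
from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"                        # placeholder: dev kit BLE address
MODE_CHAR_UUID = "00000000-0000-0000-0000-000000000001"     # placeholder: mode characteristic
OUTPUT_CHAR_UUID = "00000000-0000-0000-0000-000000000002"   # placeholder: inference-output characteristic

def on_inference(_, data: bytearray):
    # Print whatever the firmware pushes back once inference mode is active.
    print("inference output:", data.hex())

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(OUTPUT_CHAR_UUID, on_inference)
        # Writing 0x01 is the programmatic equivalent of typing "01" in the
        # app's HEX text field to select inference mode.
        await client.write_gatt_char(MODE_CHAR_UUID, bytes([0x01]), response=True)
        await asyncio.sleep(10)  # listen for a few inference results

asyncio.run(main())
```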

Expected Outcome:

The deployed firmware should run the model and report inference output when inference mode is selected, giving a usable product.

Actual Outcome:
The binary/HEX file is generated, but model inference isn’t performed when inference mode is selected.

Reproducibility:

  • [x] Always
  • [ ] Sometimes
  • [ ] Rarely

Environment:
BRD2601B A01

  • Platform: [e.g., Raspberry Pi, nRF9160 DK, etc.]
  • Build Environment Details: [e.g., Arduino IDE 1.8.19 ESP32 Core for Arduino 2.0.4]
  • OS Version: [e.g., Ubuntu 20.04, Windows 10]
  • Edge Impulse Version (Firmware): [e.g., 1.2.3]
  • To find out the Edge Impulse version:
      • If you have pre-compiled firmware: run edge-impulse-run-impulse --raw and type AT+INFO. Look for the Edge Impulse version in the output.
      • If you have a library deployment: inside the unarchived deployment, open model-parameters/model_metadata.h and look for EI_STUDIO_VERSION_MAJOR, EI_STUDIO_VERSION_MINOR, and EI_STUDIO_VERSION_PATCH (see the sketch after this list).
  • Edge Impulse CLI Version: [e.g., 1.5.0]
  • Project Version: [e.g., 1.0.0]
  • Custom Blocks / Impulse Configuration: [Describe custom blocks used or impulse configuration]
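For the library-deployment case, here is a minimal sketch that pulls those version macros out of model_metadata.h; the header path is taken from the instructions above, and the script only assumes a standard library deployment layout:

```python
# Minimal sketch: read the Edge Impulse Studio version out of a library
# deployment's model_metadata.h (path relative to the unarchived deployment).
import re
from pathlib import Path

header = Path("model-parameters/model_metadata.h").read_text()

version = {}
for field in ("MAJOR", "MINOR", "PATCH"):
    match = re.search(rf"#define\s+EI_STUDIO_VERSION_{field}\s+(\d+)", header)
    if match:
        version[field] = match.group(1)

print("Edge Impulse Studio version: "
      f"{version.get('MAJOR', '?')}.{version.get('MINOR', '?')}.{version.get('PATCH', '?')}")
```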

Logs/Attachments:
The issue always comes back after a normal compile.

Additional Information:
No additional information at this time.

I was able to resolve the issue by doing the following:

  1. Downloaded all of the training and test data with easy upload enabled
  2. Deleted all of the data from the private project
  3. Re-uploaded the training and test data
  4. Walked through the MLOps process and built the binary for the EFR32MG24 Silabs BRD2601B dev kit at the end
  5. Flashed the dev kit with the Silabs binary (a serial sanity-check sketch follows this list)
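Before re-testing over BLE, a quick serial sanity check can confirm that the flashed firmware and model respond at all; the same information is available interactively via edge-impulse-run-impulse --raw and AT+INFO, as noted in the template above. This is a minimal sketch that assumes the dev kit enumerates as a standard virtual COM port; the port name and baud rate are assumptions to adjust for your setup.

```python
# Quick sanity check over the dev kit's virtual COM port after flashing:
# send AT+INFO and print the firmware's reply (should include the Edge
# Impulse version and model details).
# Port name and baud rate are assumptions -- adjust for your setup.
import serial  # pyserial

PORT = "/dev/ttyACM0"   # e.g. "COM3" on Windows
BAUD = 115200           # typical default; adjust if your firmware differs

with serial.Serial(PORT, BAUD, timeout=2) as ser:
    ser.write(b"AT+INFO\r\n")
    reply = ser.read(4096).decode(errors="replace")
    print(reply)
```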

I was then able to see TinyML inference output by setting the HEX value to 01 (inference mode), and also in my Android app. It works perfectly!

Problem resolved.

Mark D.


Hi @markdheilong

I think you may have the wrong forum, unless you are using Edge Impulse in conjunction with this app?

Best

Eoin

Hi Eoin

Thanks for your response.

I was using Edge Impulse with the Silabs Connect app to validate inference output using the Edge Impulse firmware running on the EFR32MG24 BRD2601B dev kit.

I went through all the steps mentioned to compile the model with the firmware in the Edge Impulse cloud and produce an inference output. I don’t have access to any log information to validate it, but I thought that starting from scratch by deleting the project data, without deleting the project itself, would work, and it did.

I hope that helps.

Mark D.


Hi @markdheilong

Coming back to your post again and wondering if there is something we can do to improve your MLOps lifecycle with a better integration?

@jbuckEI and Mark, our lead sales engineer in your region, can be in contact to discuss some options for your workflow: integrating with our APIs via Git actions for your custom-built firmware, or additional hardware/software support.

@mateusz fyi

Best

Eoin

Hi Eoin

Thank you for revisiting my post on Edge Impulse and for sharing some options that could help me improve my MLOps lifecycle so I can complete development of my wireless edge AI smart sensor. Any assistance would be greatly appreciated. My email address may already be on file, but you can also reach me on LinkedIn and I can share it with you there via a DM.

Thanks.

Mark D.

Hi @Eoin

Looks like the Edge Impulse binary build isn’t working on the server side again. I’ve completed all the steps necessary to compile a binary using the MLOps toolchain process in Edge Impulse, but when I install the firmware on the BRD2601B xG24 Dev Kit, I cannot enable inference using “01” in the HEX text field. I’ve tried it several times and it’s still not working. I’d hate to have to delete all my data and start over again just to produce a .hex file to flash to the MCU.

Is there anything on the server side in the logs that may indicate the cause of this repeated issue?

Thanks in advance.

Mark D.