Firmware-xg24 silabs-efr32mg24-data-acq-proj-1: no TinyML inference output after deployment on EI

Question/Issue:
I’m using this existing project to train, test, validate, and deploy models onto the Silabs EFR32MG24 wireless SoC (BRD2601B A01), but I can’t get any TinyML inference output from the board after installing the binary HEX file onto my Silabs dev kit.

Project ID:
silabs-efr32mg24-data-acq-proj-1

Context/Use case:
The missing TinyML inference output occurs every time I download and install the binaries for the Silabs EFR32MG24 dev board from Edge Impulse.

Steps Taken:

  1. I cleared the training data and restarted the data collection process from scratch
  2. I then retrained the model on the new data and compiled it for deployment onto the BRD2601B A01 dev kit
  3. I then used the Silabs IDE Flash Programmer to program the dev board successfully.
  4. When I connect with the Silabs EFR Connect app and select inference mode (HEX 01), no inference output is produced. This symptom is consistent for this project whenever I use Edge Impulse to produce a binary/HEX file.
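One way to narrow this down is to bypass BLE entirely and talk to the firmware over USB serial with the Edge Impulse CLI. A minimal sketch; the commands are printed rather than executed here, since they need the dev kit attached (the CLI installs via `npm install -g edge-impulse-cli`):

```shell
# Serial sanity checks, bypassing the EFR Connect app entirely.
# Printed rather than run; drop the echoes to execute with the board on USB.
echo "edge-impulse-run-impulse --raw"   # raw serial session; then type AT+INFO
echo "edge-impulse-run-impulse"         # normal live inference from the CLI
```

If `AT+INFO` answers but inference never starts, the problem is in the model/firmware build rather than the BLE link.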

Expected Outcome:

The board should run on-device inference and produce output when inference mode (HEX 01) is selected.

Actual Outcome:
The binary/hex file is generated, but model inference isn’t performed when inference mode is selected.

Reproducibility:

  • [x] Always
  • [ ] Sometimes
  • [ ] Rarely

Environment:

  • Platform: BRD2601B A01
  • Build Environment Details: [e.g., Arduino IDE 1.8.19 ESP32 Core for Arduino 2.0.4]
  • OS Version: [e.g., Ubuntu 20.04, Windows 10]
  • Edge Impulse Version (Firmware): [e.g., 1.2.3]
  • To find out Edge Impulse Version:
  • if you have pre-compiled firmware: run edge-impulse-run-impulse --raw and type AT+INFO. Look for Edge Impulse version in the output.
  • if you have a library deployment: inside the unarchived deployment, open model-parameters/model_metadata.h and look for EI_STUDIO_VERSION_MAJOR, EI_STUDIO_VERSION_MINOR, EI_STUDIO_VERSION_PATCH
  • Edge Impulse CLI Version: [e.g., 1.5.0]
  • Project Version: [e.g., 1.0.0]
  • Custom Blocks / Impulse Configuration: [Describe custom blocks used or impulse configuration]
    Logs/Attachments:
    The issue always comes back after compiling, with only a normal exception in the output.

Additional Information:
No additional information at this time.

I was able to resolve the issue by doing the following:

  1. Downloaded all of the training and test data with easy upload enabled
  2. Deleted all the data in the private project
  3. Re-uploaded the training and test data
  4. Walked through the MLOps process and built the binary for the EFR32MG24 Silabs BRD2601B dev kit at the end
  5. Flashed the dev kit with the Silabs binary
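For reference, the download/delete/re-upload steps above can also be scripted with the Edge Impulse CLI’s uploader. A sketch, assuming the exported data landed in `dataset/training` and `dataset/testing` (hypothetical paths) and a project API key is configured; the commands are only printed here:

```shell
# Re-upload exported data into the project, split by category.
# Printed rather than executed; remove the 'echo' to actually upload
# (requires: npm install -g edge-impulse-cli, plus a project API key).
for split in training testing; do
  echo edge-impulse-uploader --category "$split" "dataset/$split/*"
done
```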

I was then able to see TinyML inference output by setting the HEX value to 01 (inference mode), and also in my Android app. It works perfectly!

Problem resolved.

Mark D.


Hi @markdheilong

I think you have the wrong forum, unless you are using Edge Impulse in conjunction with this app?

Best

Eoin

Hi Eoin

Thanks for your response.

I was using Edge Impulse with the Silabs Connect app to validate inference output using the Edge Impulse firmware running on the EFR32MG24 BRD2601B dev kit.

I went through all the steps mentioned to compile the model with the firmware on the Edge Impulse cloud and produce an inference output. I don’t have access to any log information to validate it, but I thought that starting from scratch by deleting the project data (without deleting the project) would work, and it did.

I hope that helps.

Mark D.


Hi @markdheilong

Coming back to your post again and wondering if there is something we can do to improve your MLOps lifecycle with a better integration?

@jbuckEI and Mark, our lead sales engineer in your region, can be in contact to discuss some options for your workflow: integrating with our APIs via GitHub Actions for your custom-built firmware, or additional hardware/software support.

@mateusz fyi

Best

Eoin

Hi Eoin

Thank you for revisiting my post on Edge Impulse and for sharing some possible options that could help me improve my MLOps lifecycle so I can complete my product development for my wireless edge AI smart sensor. Any assistance would be greatly appreciated. My email address may already be on file but you can reach me on LinkedIn and I can share it there with you via a DM.

Thanks.

Mark D.

Hi @Eoin

Looks like the Edge Impulse binary compile isn’t working again on the server side. I’ve completed all the steps needed to compile a binary using the MLOps toolchain in Edge Impulse. When I install the firmware on the BRD2601B xG24 Dev Kit, I can’t enable inference by writing “01” in the HEX text field. I’ve tried it several times, but it’s not working. I’d hate to have to delete all my data and start all over again just to produce a .hex file to flash to the MCU.

Is there anything on the server side in the logs that may indicate the cause of this repeated issue?

Thanks in advance.

Mark D.

Hi Eoin

I’ve revisited this issue and it remains persistent for compiling binaries for the Silabs EFR32MG24 dev board.

I’m sharing additional information down below.

Question/Issue:
I’m unable to compile Edge Impulse binaries for the Silabs EFR32MG24 dev board. Once compiled in EI and flashed to my EFR32MG24 dev board, I’m unable to get inference to activate on the dev board for the deployed model.

Project ID:
213663

Context/Use case:
When I flash the EI-compiled binary with the ML model, the board can’t perform inference when the 01 HEX command is sent to the EFR32MG24 dev board. I was able to get this to work in the past, but sometime in December it stopped working.

I’ve downloaded all of my data, deleted it, reuploaded it, and gone through the entire training process. I did this before and it resolved the issue, but this time it doesn’t produce a binary that can perform on-device inference with the deployed model.

Steps Taken:

  1. Went through the MLOps process
  2. Built the binary for the Silabs EFR32MG24 dev board
  3. Deployed the binary using the Simplicity Studio flash tool
  4. Used the Silabs Connect Android app to connect to the EFR32MG24 dev board. Tried to write 01 HEX to the dev board to start live inference, but that didn’t work. I tried erasing the memory and reflashing it. No success.
  5. I then downloaded all of the EI data collected
  6. I then deleted all EI project data
  7. I then reuploaded the EI project data and verified from a screenshot that it was valid
  8. I then went through the MLOps process again to generate a new binary for the EFR32MG24
  9. I then flashed the binary using the Simplicity Studio flash tool
  10. I then connected to the dev board from my Android phone and wrote 01 HEX, but it didn’t start live inference. The LED status did not change for inference.
  11. I then ran edge-impulse-run-impulse from the Windows command prompt, and that failed too. I will include the output after running the command for live inference.
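As a cross-check on the flashing steps (steps 3 and 9), the same flash can be done from the command line with Silabs’ Simplicity Commander, which is bundled with Simplicity Studio. A sketch; the .hex filename is a placeholder, and the commands are printed rather than run since they need the kit attached:

```shell
# Flash the EI-built image and reboot the kit via Simplicity Commander.
# Printed, not executed; 'firmware-xg24.hex' is a placeholder filename.
echo "commander flash firmware-xg24.hex"   # program the application image
echo "commander device reset"              # reset so the new firmware boots
```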

Expected Outcome:
The EFR32MG24 binary running on the dev board should perform live inference with the compiled model. This did not work: I was unable to start inference from the Android Simplicity Connect app, or from the command line with edge-impulse-run-impulse.

Actual Outcome:
Using the Android app, sending 01 HEX to the EFR32MG24 dev board did not start inference.

Using edge-impulse-run-impulse from the command line, inference also failed to start; the command exited with an error at the prompt.

Reproducibility:

  • [x] Always
  • [ ] Sometimes
  • [ ] Rarely

Environment:

  • Platform: Silabs EFR32MG24 BRD2601B dev board
  • Build Environment Details: Edge Impulse cloud MLOps toolchain and binary compile for MCU
  • OS Version: Windows 11
  • Edge Impulse Version (Firmware): v1.66.12
  • Edge Impulse CLI Version: I don’t know where to find this info.
  • Project Version: [e.g., 1.0.0]
  • Custom Blocks / Impulse Configuration: [Describe custom blocks used or impulse configuration]
    Logs/Attachments:
    I changed screens so I’m unable to download logs on the cloud server side for reference.
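As a side note on the version fields above: for a library deployment, the Studio version can be read from model-parameters/model_metadata.h, and the lookup can be scripted. A sketch that parses a sample header; the #define values below are stand-ins, not this project’s real version:

```shell
# Rebuild a vX.Y.Z string from the EI_STUDIO_VERSION_* defines.
# The header written below is a sample stand-in for
# model-parameters/model_metadata.h from a real library deployment.
cat > /tmp/model_metadata.h <<'EOF'
#define EI_STUDIO_VERSION_MAJOR 1
#define EI_STUDIO_VERSION_MINOR 66
#define EI_STUDIO_VERSION_PATCH 12
EOF

major=$(awk '/EI_STUDIO_VERSION_MAJOR/ {print $3}' /tmp/model_metadata.h)
minor=$(awk '/EI_STUDIO_VERSION_MINOR/ {print $3}' /tmp/model_metadata.h)
patch=$(awk '/EI_STUDIO_VERSION_PATCH/ {print $3}' /tmp/model_metadata.h)
echo "v${major}.${minor}.${patch}"   # → v1.66.12
```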

Additional Information:
Do I need to delete my entire project and start from scratch?

Is there information on EI logs that identify any build or compile errors for the binary to be deployed to the Silabs EFR32MG24 BRD2601B dev board?

Any assistance would be great, as I’m unable to resolve this issue the way I did in the past (see the steps above, where I resolved it by deleting all project data, reuploading it, retraining the model, and rebuilding the binary in EI). This issue is now 100% repeatable.

Thanks.

Mark D.

Update and resolution:

  • I deleted my EI project and started a new project.
  • I imported the saved EI dataset into the new project and went through the MLOps steps to download and deploy the compiled model + firmware onto the EFR32MG24 dev kit. It worked.
  • I was able to start inference on the device successfully without disconnection and failures.

Problem solved.

Mark D.


Excellent, thanks for the update @markdheilong; just catching up now.

That sounds like something funky is happening in the firmware. Usually when that happens, you can either reboot the hardware or run with --debug to make sure you are getting some contact with the device.
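For completeness, that check looks like this with the Edge Impulse CLI; the command is printed rather than executed here, since it needs the board attached:

```shell
# Verbose run to confirm the host actually has contact with the device.
# Printed, not executed; drop the echo with the dev kit on USB.
echo "edge-impulse-run-impulse --debug"
```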

Best

Eoin