Could Someone Give Me Guidance on Integrating Edge Impulse with Custom Hardware?

Hello there,

I am working on a project that involves integrating Edge Impulse with custom hardware, and I could really use some guidance from the community here.

I have developed a custom hardware device that collects sensor data (accelerometer and gyroscope) in real time. My goal is to process this sensor data using Edge Impulse for real-time classification of certain activities.

What is the recommended approach for streaming real-time sensor data from my custom hardware to Edge Impulse? Are there specific protocols or interfaces that are best suited for this purpose?

Once I have trained a suitable model in Edge Impulse, how can I deploy this model onto my custom hardware effectively? Are there any considerations I should keep in mind regarding the hardware specifications or compatibility issues?

Based on your experience, what are some common challenges or pitfalls I might encounter during the integration process? Any tips or best practices would be greatly appreciated.

Could you point me to any relevant documentation, tutorials, or community posts that cover similar projects? I am eager to learn from others who have tackled similar integration tasks.

Also, I have gone through this post: https://forum.edgeimpulse.com/t/using-edge-impulse-in-teaching-ai-for-university-students-uipath which definitely helped me out a lot.

I am new to Edge Impulse but excited about its potential. Any insights or advice from those who have experience with similar projects would be immensely valuable to me.

Thank you all in advance for your time and assistance. I look forward to your responses and learning from the community here.

Hi @leni,

For deploying a model trained on Edge Impulse to new hardware, I highly recommend using the C++ SDK (deployment option in your project) and linking to it from your application code. To understand how to use the C++ SDK, you should first work through this example: As a generic C++ library | Edge Impulse Documentation. That tutorial walks you through building a simple static inference C++ application on Linux and provides an example Makefile that demonstrates how to link everything properly.

Once you are familiar with how to link to the C++ SDK and build an application around it, you can then bring it into your custom hardware build system. For example, if you’re using STM32, I’ve got an example here that links everything in STM32CubeIDE to build a demo properly using the C++ SDK: ei-keyword-spotting/embedded-demos/stm32cubeide/nucleo-l476-keyword-spotting at master · ShawnHymel/ei-keyword-spotting · GitHub.

If you happen to be using the Keil IDE, you can deploy a Keil MDK CMSIS-PACK from Edge Impulse to make linking much easier: Arm Keil MDK CMSIS-PACK | Edge Impulse Documentation. Similarly, you can also deploy an IAR Library (IAR Library | Edge Impulse Documentation) if you’re working with the IAR IDE.

Note: I’m assuming you are using C++ on an embedded system. If you are using Python or JavaScript to perform inference on your custom hardware (i.e. with embedded Linux), then you probably want to see our inferencing tutorials here: Edge Impulse for Linux | Edge Impulse Documentation

Hope that helps!

Hi Shawn,

I’m wondering whether the Open-CMSIS-Pack solution is compatible with Keil MDK using Arm Compiler 5?
The target I’m using only supports Arm Compiler 5…