I am working on a project that involves integrating Edge Impulse with custom hardware, and I could really use some guidance from the community here.
I have developed a custom hardware device that collects accelerometer and gyroscope data in real time. My goal is to process this sensor data with Edge Impulse for real-time classification of certain activities.
What is the recommended approach for streaming real-time sensor data from my custom hardware to Edge Impulse? Are there specific protocols or interfaces that are best suited for this purpose?
Once I have trained a suitable model in Edge Impulse, how can I deploy this model onto my custom hardware effectively? Are there any considerations I should keep in mind regarding hardware specifications or compatibility?
Based on your experience, what are some common challenges or pitfalls I might encounter during the integration process? Any tips or best practices would be greatly appreciated.
Could you point me to any relevant documentation, tutorials, or community posts that cover similar projects? I am eager to learn from others who have tackled similar integration tasks.
I am new to Edge Impulse but excited about its potential. Any insights or advice from those who have experience with similar projects would be immensely valuable to me.
Thank you all in advance for your time and assistance. I look forward to your responses and learning from the community here.
For deploying a model trained on Edge Impulse to new hardware, I highly recommend using the C++ SDK (deployment option in your project) and linking to it from your application code. To understand how to use the C++ SDK, you should first work through this example: As a generic C++ library | Edge Impulse Documentation. That tutorial walks you through building a simple static inference C++ application on Linux and provides an example Makefile that demonstrates how to link everything properly.
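To give you a feel for what that tutorial builds, here is a rough sketch of the static inference application it walks you through. The header path, macros, and function names come from the exported C++ library; the features array is just a placeholder that you would fill with one window of raw accelerometer/gyroscope readings (e.g. copied from the "Live classification" tab in the Studio), so treat this as a sketch rather than a drop-in file:

```cpp
#include <cstdio>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// One window of raw sensor values, in the same order/scale the impulse was
// trained on. Placeholder only -- paste real values or fill from your driver.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = { /* ... */ };

int main() {
    // Wrap the raw buffer in a signal_t so the classifier can page through it
    signal_t signal;
    int err = numpy::signal_from_buffer(features,
                                        EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE,
                                        &signal);
    if (err != 0) {
        printf("signal_from_buffer failed (%d)\n", err);
        return 1;
    }

    // Run the full impulse (DSP block + model) over the window
    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false /* debug */);
    if (res != EI_IMPULSE_OK) {
        printf("run_classifier failed (%d)\n", res);
        return 1;
    }

    // Print the score for each class in the impulse
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        printf("%s: %.5f\n", result.classification[ix].label,
               result.classification[ix].value);
    }
    return 0;
}
```

On your real device you would replace the static buffer with samples pulled from your sensor driver and call run_classifier each time a full window has been collected; the example Makefile in the tutorial shows which SDK sources need to be compiled and linked in.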
Note: I’m assuming you are using C++ on an embedded system. If you are using Python or JavaScript to perform inference on your custom hardware (i.e. on embedded Linux), then you probably want to see our inferencing tutorials here: Edge Impulse for Linux | Edge Impulse Documentation