Assistance Required for Developing a Sorting Machine Using Python and Edge Impulse

Hello Edge Impulse Team,

I am in the process of developing a sorting machine that relies on real-time object detection to determine the position of objects and sort them accordingly. For this project, I am using a webcam, Python, and an object detection model trained through Edge Impulse. The goal is to replicate the functionality seen on your web platform — where objects are identified and their coordinates pinpointed in real-time — but implemented locally on my computer using Python.

However, I have encountered challenges integrating the exported Edge Impulse model into a Python environment that processes the webcam feed. I would appreciate a more detailed guide or tutorial specifically tailored to the following needs:

  1. Detailed instructions on exporting an object detection model from Edge Impulse for use with Python.
  2. Guidance on setting up Python to process a continuous video feed from a webcam, detect objects, determine their exact positions, and use these coordinates to trigger sorting mechanisms.
  3. Recommendations for Python libraries or frameworks best suited for interfacing with hardware components that might be involved in the sorting process.
  4. Example code snippets or a sample project that demonstrates real-time object detection using a webcam in Python.

Thank you very much for your support, and I look forward to your expert guidance.

Best regards,

Hi @Kurdi,

Thank you for the detailed questions. Let me see if I can help with each of them:

1. The easiest way to use an Edge Impulse model with Python is to use the CLI on your local machine to deploy the .eim executable (Linux EIM Executable | Edge Impulse Documentation) to that machine. You can then use the Linux Python SDK (Linux Python SDK | Edge Impulse Documentation) to interact with the .eim impulse.
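To make that concrete, here is a minimal sketch of classifying a single image with a deployed .eim model via the Linux Python SDK (`pip install edge-impulse-linux`). The model path, the image path, and the `bbox_centers` helper are illustrative assumptions, not part of the SDK; the `ImageImpulseRunner` calls follow the pattern used in the SDK's own examples.

```python
def bbox_centers(result):
    """Extract (label, center_x, center_y) tuples from an Edge Impulse
    object-detection result dict."""
    boxes = result.get("result", {}).get("bounding_boxes", [])
    return [
        (b["label"], b["x"] + b["width"] // 2, b["y"] + b["height"] // 2)
        for b in boxes
    ]


def detect_image(model_path, image_path):
    """Run one image through a deployed .eim model and print detections.

    Requires the edge-impulse-linux and opencv-python packages, and a
    model deployed as a Linux .eim executable.
    """
    import cv2
    from edge_impulse_linux.image import ImageImpulseRunner

    with ImageImpulseRunner(model_path) as runner:
        runner.init()  # loads the model and returns its metadata
        # The SDK expects RGB input; OpenCV loads images as BGR
        img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
        # The SDK crops/scales the frame to the impulse's input size
        features, _cropped = runner.get_features_from_image(img)
        result = runner.classify(features)
        for label, cx, cy in bbox_centers(result):
            print(f"{label} at ({cx}, {cy})")


# Example usage (paths are placeholders for your own files):
# detect_image("modelfile.eim", "frame.jpg")
```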
2 and 4. Once you deploy your .eim impulse, you should try running this example (linux-sdk-python/examples/image/ at master · edgeimpulse/linux-sdk-python · GitHub) to perform object detection. If you look at line 112, you can see how bounding box information is read from the impulse results. While not all webcams are supported (support is highly dependent on available Linux drivers), this code demonstrates how to capture images and use them to perform object detection on a trained model.
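For the sorting side specifically, one common approach is to divide the frame into vertical strips and map each detection's x-coordinate to a bin. The sketch below assumes that layout; `sort_bin` and `run_sorter` are hypothetical names, and the runner calls mirror the SDK example linked above. Note that bounding boxes are reported in the coordinates of the cropped/scaled model input, so the cropped frame's width is used, not the raw camera resolution.

```python
def sort_bin(center_x, frame_width, n_bins=3):
    """Map a detection's x-coordinate to a sorting bin index (0..n_bins-1)
    by splitting the frame into n_bins equal vertical strips."""
    if frame_width <= 0 or n_bins <= 0:
        raise ValueError("frame_width and n_bins must be positive")
    return min(int(center_x * n_bins / frame_width), n_bins - 1)


def run_sorter(model_path, camera_index=0):
    """Continuously classify webcam frames and pick a bin per detection.

    Requires opencv-python and edge-impulse-linux; camera_index selects
    which webcam OpenCV opens.
    """
    import cv2
    from edge_impulse_linux.image import ImageImpulseRunner

    cap = cv2.VideoCapture(camera_index)
    with ImageImpulseRunner(model_path) as runner:
        runner.init()
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # camera disconnected or no frame available
            img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            features, cropped = runner.get_features_from_image(img)
            result = runner.classify(features)
            for box in result.get("result", {}).get("bounding_boxes", []):
                cx = box["x"] + box["width"] // 2
                # cropped.shape[1] is the width of the model-input frame,
                # the coordinate space the bounding boxes live in
                bin_idx = sort_bin(cx, cropped.shape[1])
                print(f"{box['label']} -> bin {bin_idx}")
    cap.release()
```

The bin decision is kept in a pure function so you can tune or test it without a camera or model attached.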
3. Such hardware libraries are outside the scope of Edge Impulse. The right choice depends on your computing platform and the hardware you plan to connect. For example, if you're using a Raspberry Pi, you might use something like RPi.GPIO or gpiod. Your best bet would be to ask on the forums or support channel for your particular compute platform for library recommendations.
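If you do end up on a Raspberry Pi, an actuator trigger might look like the sketch below. The pin numbers, `BIN_PINS` mapping, and function names are hypothetical placeholders for your own wiring; the RPi.GPIO calls themselves (`setmode`, `setup`, `output`) are the library's standard API, and the code only runs on a Pi with that package installed.

```python
# Hypothetical pin assignments: one output pin per sorting bin.
# Adjust these BCM pin numbers to match your actual wiring.
BIN_PINS = {0: 17, 1: 27, 2: 22}


def pin_for_bin(bin_idx, pins=BIN_PINS):
    """Look up the GPIO pin driving the actuator for a given bin."""
    if bin_idx not in pins:
        raise KeyError(f"no actuator wired for bin {bin_idx}")
    return pins[bin_idx]


def fire_actuator(bin_idx, pulse_s=0.5):
    """Pulse the actuator pin for a bin high, then low (Raspberry Pi only)."""
    import time
    import RPi.GPIO as GPIO

    pin = pin_for_bin(bin_idx)
    GPIO.setmode(GPIO.BCM)       # use Broadcom pin numbering
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.HIGH)  # energize the actuator
    time.sleep(pulse_s)
    GPIO.output(pin, GPIO.LOW)   # release it
```

Keeping the bin-to-pin lookup separate from the GPIO calls lets you test the sorting logic on any machine before moving to the Pi.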