I plan to use a Raspberry Pi 4B to detect and classify speed limit signs. I earlier tried to solve this with Edge Impulse and a single detector, but I never managed to get the accuracy high enough.
The solution seems to be a two-stage system. Stage 1: detect any speed limit sign; stage 2: classify the detected bounding box.
I trained both stages and tested them separately on my Pi. They both work like a charm, with very good accuracy. Now I need to link the two stages together, but I could not find out how to do this.
Can anyone guide me on this?
Here are my projects:
Detector: Dashboard - stage 1 detect speed limit signs - Edge Impulse
Classifier: Dashboard - stage 2 classify speed limit - Edge Impulse
Hi @robhazes ,
You might want to have a look at this video tutorial on multi-stage inference on a Raspberry Pi with Edge Impulse by @AIWintermuteAI .
Great, exactly what I was looking for, will try!
So here is how I made this work.
The description accompanying the video above is helpful.
However, the tutorial was made for Raspbian Buster, not for Bullseye. Bullseye handles the camera differently from Buster, so you will get errors when trying to run this example as-is. A workaround is to enable legacy camera support.
In short, some notes in addition to the tutorial on how to use your own trained models:
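On Bullseye you can re-enable the legacy (Buster-style) camera stack through `sudo raspi-config` under Interface Options → Legacy Camera, or non-interactively as sketched below. Note that the `do_legacy` option name is an assumption based on Bullseye-era builds of raspi-config; older or newer releases may not have it, in which case use the interactive menu instead:

```shell
# Enable the legacy camera stack on Raspberry Pi OS Bullseye.
# 'do_legacy' follows raspi-config's convention where 0 = enable.
sudo raspi-config nonint do_legacy 0

# A reboot is required before the legacy camera interface is active.
sudo reboot
```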
1. From the Deployment section in Edge Impulse, build and download the .eim model files for your trained models. Do this for both the detector and the classifier.
2. Place your .eim files in a separate directory, together with this multi_stage.py file.
3. Open a terminal in the directory where you have saved these 3 files and run this command:
python3 multi_stage.py detector.eim classifier.eim
where detector.eim and classifier.eim are replaced with the names of your own .eim files.
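To give an idea of what the linking boils down to, here is a minimal sketch of the core two-stage logic in plain Python. It assumes the result format the Edge Impulse Linux SDK returns: the detector yields a `bounding_boxes` list of dicts with pixel `x`/`y`/`width`/`height`, and the classifier yields a label-to-score dict. The `classify` callable here is an illustrative stand-in; in the actual multi_stage.py each stage is backed by its own `ImageImpulseRunner` loaded from one of the .eim files.

```python
import numpy as np


def crop_box(frame, box):
    """Cut the detector's bounding box out of the full frame.

    `box` is one entry from the detector's 'bounding_boxes' list,
    with 'x', 'y', 'width', 'height' given in pixels.
    """
    x, y = box['x'], box['y']
    return frame[y:y + box['height'], x:x + box['width']]


def run_two_stage(frame, detections, classify):
    """Stage 2 on top of stage 1: classify every detected sign.

    `detections` is the detector's 'bounding_boxes' list;
    `classify` is any callable mapping an image crop to a
    {label: score} dict (e.g. a wrapper around the classifier .eim).
    Returns a list of (best_label, score, box) tuples.
    """
    results = []
    for box in detections:
        crop = crop_box(frame, box)        # stage 1 output -> stage 2 input
        scores = classify(crop)            # run the classifier on the crop
        best = max(scores, key=scores.get) # pick the highest-scoring label
        results.append((best, scores[best], box))
    return results
```

The key point is simply that the classifier never sees the whole frame: each detector box is cropped out and classified on its own, which is why the two models can stay small and accurate.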