Question/Issue:
I’m building an object detection model to recognize the olive fruit fly (Bactrocera oleae). The model detects the fly, but it also misclassifies other insects as olive fruit flies, even when they differ noticeably in shape and color. I’d like to understand how the underlying algorithm makes its classification decisions, and what steps I can take to improve accuracy and reduce false positives.
Project ID:
778693
Context/Use case:
The goal is to use a small, low-power device for automated olive pest monitoring in the field.
Steps Taken:
- Collected images of olive fruit flies and trained an object detection model in Edge Impulse Studio.
- Deployed the model to an ESP32 with an OV2640 camera.
- Tested the system in real-world conditions, observing both correct detections and many false positives.
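For context, my understanding of the on-device pipeline is that the impulse outputs candidate detections with per-class confidence scores, and anything above a fixed threshold is reported as a hit. A minimal sketch of that post-processing step, assuming names of my own invention (`Detection`, `filterDetections`) rather than the actual Edge Impulse SDK structs:

```cpp
#include <string>
#include <vector>

// Hypothetical detection record; the real SDK struct differs.
struct Detection {
    std::string label;   // predicted class
    float confidence;    // score in [0, 1]
};

// Keep only detections at or above the confidence threshold.
// Raising minConfidence trades some recall for fewer false positives.
std::vector<Detection> filterDetections(const std::vector<Detection>& raw,
                                        float minConfidence) {
    std::vector<Detection> kept;
    for (const auto& d : raw) {
        if (d.confidence >= minConfidence) {
            kept.push_back(d);
        }
    }
    return kept;
}
```

If this mental model is right, one knob I can turn immediately is that threshold, though I suspect the real fix is in the training data.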
Expected Outcome:
- The model should reliably detect olive fruit flies.
- Other insects should not be misclassified as olive fruit flies.
Actual Outcome:
- The model does detect olive fruit flies.
- However, it frequently classifies unrelated insects as olive fruit flies, leading to many false positives.
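To put a number on "many false positives", I've started scoring test sessions with precision, i.e. the fraction of reported detections that are truly olive fruit flies (TP / (TP + FP)). A small helper I use for that bookkeeping (the counts below are made up for illustration):

```cpp
// Precision: fraction of reported detections that are truly olive fruit flies.
// truePositives: correct detections; falsePositives: other insects flagged as flies.
float precision(int truePositives, int falsePositives) {
    int total = truePositives + falsePositives;
    return total == 0 ? 0.0f : static_cast<float>(truePositives) / total;
}
```

I can share per-session counts if that helps with diagnosis.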
Reproducibility:
- Always
Environment:
- Platform: ESP32 with OV2640 camera module
- Build Environment Details: Arduino IDE 1.8.19, ESP32 Core for Arduino 2.0.4
- OS Version: [Your development PC OS, e.g., Ubuntu 20.04 / Windows 10]
- Edge Impulse CLI Version: [Insert version if needed]
- Project Version: 1.0.0
- Custom Blocks / Impulse Configuration: Standard object detection workflow
Logs/Attachments:
[Attach any logs, images of detections, or sample misclassified examples if possible]
Additional Information:
I’d really appreciate any insights into:
- How the model makes its classification decisions (what features it prioritizes).
- Best practices for improving model robustness and reducing false positives on similar-looking insects.
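On the first bullet, my working mental model is that the network reduces each region to raw class scores, a softmax turns those into probabilities, and the top class wins. The mitigation I'm considering is adding an explicit negative class (e.g. "other_insect") trained on non-target insects, plus a decision threshold so low-confidence wins fall back to the negative class. A sketch of that decision step, where the class names, threshold, and function names are my assumptions, not Edge Impulse internals:

```cpp
#include <algorithm>
#include <cmath>
#include <string>
#include <vector>

// Convert raw class scores (logits) to probabilities with a softmax.
std::vector<float> softmax(const std::vector<float>& logits) {
    float maxLogit = logits[0];
    for (float v : logits) maxLogit = std::max(maxLogit, v);
    float sum = 0.0f;
    std::vector<float> probs;
    for (float v : logits) {
        probs.push_back(std::exp(v - maxLogit));  // shift for numeric stability
        sum += probs.back();
    }
    for (float& p : probs) p /= sum;
    return probs;
}

// Pick the top class, but fall back to "other_insect" when the winning
// probability is below the decision threshold.
std::string decide(const std::vector<std::string>& labels,
                   const std::vector<float>& logits,
                   float threshold) {
    std::vector<float> probs = softmax(logits);
    size_t best = 0;
    for (size_t i = 1; i < probs.size(); ++i) {
        if (probs[i] > probs[best]) best = i;
    }
    return probs[best] >= threshold ? labels[best] : "other_insect";
}
```

Would this kind of negative-class-plus-threshold approach be the recommended way to handle similar-looking insects, or is there a better pattern in the standard workflow?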
Thanks in advance!