As a beekeeper I have no real software development skills, so I need machine learning to build a simple embedded system that analyzes, counts, and classifies the types of pollen brought back to the beehive.
I've searched for a library but found nothing specific to honey bees.
My goal is to build this system on open-source foundations as best I can. I will need help. I can use my beehives and my cameras to collect images. I run two https://wiki.elphel.com/wiki/10393_manual cameras.
This is a great application, and there are plenty of examples out there to help you on your way. This tutorial will be a great start: https://docs.edgeimpulse.com/docs/image-classification.
Thank you @yodaimpulse, exactly what I need! I am trying to figure out whether it is necessary to train the model to distinguish the bee's general anatomy from the pollen baskets on the bee's third pair of legs, or just to train it to detect grains of pollen, which are roughly oval to round in most cases.
Here is the problem
** this picture comes from the web **
Here is the kind of picture I expect the camera will record. There is no pollen in this one, but there are a lot of bees.
So I face two issues: identifying each bee separately, and detecting the pollen grains on each one.
Wow, this is a great project.
I would start a project using Object Detection and train a model to recognize the bees first.
Once you have a model that gives you good accuracy, I would then add another object class: the pollen grain.
Note that to run the object detection model, you will probably need a Raspberry Pi (or compatible Linux-based hardware). The only drawback is that these boards are not very power efficient, so they won't last long on a battery, but if you have a power supply nearby that's fine.
Let us know how your project goes!
Hey Louis, I didn't expect so much interest in this project so quickly. Good for the bees!
This will help me get started with my Raspberry Pi for the very first step. I had a feeling it was a good idea to train the model to recognize bees first. I was thinking of pollen on a bee's body like eyes on a human face: almost the same shape, different colors. Maybe something already exists.
But as you mentioned, beehives are most of the time far away in the forest or countryside. I can run a 3.3 V / 0.5 A battery charged from a small solar panel, and I was wondering whether an ESP32-CAM could be good enough for image classification.
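To compare the two boards on battery, a quick back-of-envelope runtime calculation helps. All the figures below are illustrative assumptions, not measured values from this thread: a hypothetical 37 Wh battery (10 Ah at 3.7 V nominal), roughly 4 W average draw for a Raspberry Pi 4 running inference, and roughly 0.8 W for an ESP32-CAM.

```python
# Rough runtime comparison between a Raspberry Pi 4 and an ESP32-CAM on battery.
# All numbers are assumed for illustration; measure your own setup.

def runtime_hours(battery_wh: float, avg_draw_w: float) -> float:
    """Hours of operation from a battery, ignoring conversion losses."""
    return battery_wh / avg_draw_w

BATTERY_WH = 10.0 * 3.7   # hypothetical 10 Ah pack at 3.7 V nominal = 37 Wh
PI4_DRAW_W = 4.0          # assumed average draw, Pi 4 + camera, under load
ESP32_DRAW_W = 0.8        # assumed average draw, ESP32-CAM while capturing

print(f"Pi 4:      {runtime_hours(BATTERY_WH, PI4_DRAW_W):.2f} h")
print(f"ESP32-CAM: {runtime_hours(BATTERY_WH, ESP32_DRAW_W):.2f} h")
```

Under these assumptions the ESP32-CAM runs about five times longer per charge; a solar panel shifts the break-even point further, since it only needs to replace the average draw during daylight.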
So for now, I will weigh the benefit/cost ratio of two kinds of systems: one with a Raspberry Pi for object detection, and one with an ESP32-CAM for image classification.
Seeing as bees are so important to our existence this could be quite beneficial to mankind in general.
You provided links to cameras you already have; I am not familiar with the Elphel brand. The datasheet you linked to is the 10393, and having looked up the specs, that camera looks like a much better option for this application than the ESP32-CAM or the older Raspberry Pi camera! Can you confirm which specific configuration you have and which image sensor yours use?
These cameras run Linux and have a Xilinx FPGA on board, so they were designed for heavy lifting. I can see two approaches for you. The first is to host an application on a Raspberry Pi or Jetson and capitalize on Edge Impulse's built-in support for hardware acceleration on those two platforms, accessing the video feed via RTSP. The second is to try to deploy the entire application on the camera itself and spend a lot of time optimizing inference for that hardware configuration. I think you will get quicker results for now by hosting on the Raspberry Pi or Jetson, and you can also manage both cameras from one place.
Since you are not a developer, we will help you with this.
You are confirming my thoughts about the setup. I will start with the Raspberry Pi. My first action will be to install a camera on top of a dedicated beehive and start collecting pictures in the same configuration the final system will use.
So I have decided to start with a dataset from Kaggle ("bee or wasp?") by George Rey (UK) (CC0 Public Domain). I rebuilt the sets into two categories, "Bee" and "Bee with pollen", with 100 pictures each. I have also kept two other sets, "Insects that are not bees" and "Other things". My question is: how should I train the model to be efficient? Is there an order to train it in? For example, should I start with bees, then bees with pollen, then other insects, and finally other things? Or does it just not matter? Sorry if this sounds stupid to you guys; I've just jumped into it.
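On the ordering question: a neural network is trained on all classes together in every pass over the data, so there is no sequence in which to "train one class first"; what matters is the overall class balance and a clean train/test split. The sketch below shows one way to split per-class image folders with a fixed seed before uploading. The folder layout and names are hypothetical; adapt them to however you store the Kaggle images.

```python
# A minimal sketch of splitting one class folder (e.g. "bee/") into
# train/test file lists with a reproducible shuffle. Run it once per class
# so every class keeps the same train/test ratio.
import random
from pathlib import Path

def split_dataset(class_dir: Path, test_fraction: float = 0.2, seed: int = 42):
    """Return (train_files, test_files) for one class folder."""
    files = sorted(p for p in class_dir.iterdir() if p.is_file())
    rng = random.Random(seed)       # fixed seed -> same split every run
    rng.shuffle(files)
    n_test = int(len(files) * test_fraction)
    return files[n_test:], files[:n_test]
```

With 100 pictures per class and the default 20% test fraction, each class contributes 80 training and 20 test images.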
Next I will add a set of my own pictures, taken at the entrance of the beehive just as the final system will see it, and I will train and test the model on those pictures.
Next I will connect my brand-new Raspberry Pi 4 (still waiting for it!) and try the model on live prediction.
Does that seem like a correct progression?
Once more thank you for your support.
Sounds good! For a first try, I'd keep the model as simple as possible. You can start with only three classes: bees, bees with pollen, and other. See how it goes, and then try adding another class or splitting "other" into "other insects" and "other things".
Well, I have started training the model with only two classes: bee and bee with pollen. Do you think I can add other insects later? The main thing is that 99.9% of the time there will only be bees in front of the camera. And just to let you know, the project is public, so you can follow it as I work on it.
I am training the model with 626 "bee" images and 101 "bee with pollen" images.
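Those counts are imbalanced roughly 6:1, which can bias the model toward the majority class. One common remedy, if your training pipeline accepts per-class loss weights (an assumption to check for your setup), is the "balanced" heuristic: weight each class by `n_samples / (n_classes * n_class_samples)`.

```python
# Sketch of the "balanced" class-weight heuristic for an imbalanced dataset.
# The class names mirror the counts mentioned above; whether and how the
# weights can be fed into training depends on the pipeline being used.

def balanced_class_weights(counts: dict) -> dict:
    """Weight each class inversely to its frequency."""
    total = sum(counts.values())
    n_classes = len(counts)
    return {name: total / (n_classes * n) for name, n in counts.items()}

weights = balanced_class_weights({"bee": 626, "bee_with_pollen": 101})
print(weights)  # the rarer "bee_with_pollen" class gets the larger weight
```

Another option that needs no pipeline support at all is to collect or augment more "bee with pollen" images until the classes are closer in size.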
I set the number of training cycles to 50 and kept the learning rate and score threshold at their defaults.
I got this error message after training the model. I will try again with fewer cycles (25).
Training model...
Training on 456 inputs, validating on 115 inputs
Building model and restoring weights for fine-tuning...
Finished restoring weights
Fine tuning...
Attached to job 810058...
Application exited with code 137 (OOMKilled)
Job failed (see above)
It seems that you are running into an out-of-memory issue.
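Exit code 137 (OOMKilled) usually means the job exceeded its memory allowance, and the raw feature size grows with the square of the input resolution. A back-of-envelope estimate, assuming RGB features stored as 4-byte floats (an assumption; quantized pipelines use less) and the 571 train+validation images from the log (456 + 115):

```python
# Back-of-envelope estimate of raw feature memory versus input resolution,
# assuming RGB images as 4-byte floats. 571 = 456 training + 115 validation
# inputs from the log above.

def feature_bytes(width: int, height: int, channels: int = 3,
                  bytes_per_value: int = 4, n_images: int = 571) -> int:
    return width * height * channels * bytes_per_value * n_images

for side in (320, 160, 96):
    mb = feature_bytes(side, side) / (1024 ** 2)
    print(f"{side}x{side}: ~{mb:.0f} MB of raw features")
```

Going from 320x320 down to 96x96 shrinks the footprint by roughly a factor of 11, which is why lowering the image width/height in the impulse is the usual first fix for this error.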
I will try again. I have just changed the image resize mode to "squash" before training again. Note that I have no images in my test set yet; could that be a problem?
Here is the message I've just got:
Creating job... OK (ID: 810334)
Copying features from processing blocks...
Copying features from DSP block...
Still copying 3%...
[progress messages up to 98% omitted]
Copying features from DSP block OK
Copying features from processing blocks OK
Job started
Splitting data into training and validation sets...
Splitting data into training and validation sets OK
Training model...
Training on 456 inputs, validating on 115 inputs
Building model and restoring weights for fine-tuning...
Finished restoring weights
Fine tuning...
Attached to job 810334...
Application exited with code 137 (OOMKilled)
Job failed (see above)
Here is an example of a picture I get directly from a GoPro-style camera. Should I crop and resize the image before uploading it to my test set?
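Action-camera frames are wide (4:3 or 16:9) while image models typically expect square inputs, so uploads are usually either squashed or center-cropped. Pre-cropping is optional, since the resize mode can handle it, but if you do want to crop first, the sketch below computes a centered square crop box in the `(left, top, right, bottom)` convention used by e.g. Pillow's `Image.crop`:

```python
# A minimal sketch of computing a centered square crop box for a wide frame
# before resizing. Box format is (left, top, right, bottom).

def center_square_box(width: int, height: int):
    """Largest centered square that fits inside a width x height frame."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

print(center_square_box(4000, 3000))  # -> (500, 0, 3500, 3000)
```

Cropping keeps the bees' proportions intact, whereas squashing a wide frame to a square distorts them slightly; whichever you choose, use the same mode for training and test images.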
I'm new here.
I also love bees; I watch them fly, and I collect pollen.
I think it's simple: just count the pollen.
Counting the bees would lead to confusion, because bees go into the hive to store the pollen and then fly out again to find more.
Or where am I wrong?
@alcab I've asked our infra team to look at the OOM. My first guess is that your DSP output is huge for such a small dataset. What are your image width and height set to in the Create impulse block?