@louis thank you, Louis, for passing the message.
@janjongboom thank you so much.
Pictures in my trained dataset are between 280 and 320 pixels wide and roughly the same height…
I will resize the new pictures I am taking right now at my beehive for the test dataset.
Here are the pictures resized to 320 x 320.
Hi @alcab,
This is such a great project! While determining the root cause of the issue I also looked into the accuracy of your model on the current dataset. I have a few thoughts:
Bounding boxes for object detection
To train a good object detection model, it’s super important that the bounding boxes are consistent between training samples. I noticed in your dataset that some of the “bee” boxes are around the entire bee, while some of them end at the bee’s thorax. I would recommend making sure that the “bee” boxes are drawn as tightly as possible around the entire body of the bee in all of your samples, and the same for the “pollen” boxes.
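One way to catch the inconsistency described above before retraining is to compare box areas per label and flag outliers. This is only an illustrative sketch, assuming bounding boxes are available as `(label, x, y, w, h)` tuples (not any particular Edge Impulse export format):

```python
from collections import defaultdict
from statistics import mean, pstdev

def flag_inconsistent_boxes(boxes, z=2.0):
    """Given (label, x, y, w, h) tuples, return indices whose area is more
    than z standard deviations from the mean area for that label -- a quick
    heuristic for spotting boxes drawn too loosely or cut off at the thorax."""
    by_label = defaultdict(list)
    for i, (label, _, _, w, h) in enumerate(boxes):
        by_label[label].append((i, w * h))
    outliers = []
    for label, items in by_label.items():
        areas = [a for _, a in items]
        if len(areas) < 2:
            continue
        mu, sd = mean(areas), pstdev(areas)
        if sd == 0:
            continue
        outliers.extend(i for i, a in items if abs(a - mu) > z * sd)
    return sorted(outliers)
```

Flagged indices are candidates for re-labelling, not necessarily mistakes; a genuinely close-up bee will also stand out.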
Choice of model type
Your project right now is using object detection. This is great if you are trying to count the exact number of pollen baskets in each photo, since it will tell you how many are present. However, perhaps this isn’t necessary and it’s sufficient just to know that a given photo contains a bee with pollen, a bee without pollen, or no bee at all. In this case, an image classification model might be a better choice—it will run on a microcontroller, while the object detection model will require an embedded Linux board.
If it sounds like image classification might work, you can try the transfer learning block shown in this tutorial.
Thanks for using Edge Impulse for your awesome project—let me know if you have any questions or ideas!
Warmly,
Dan
@alcab I noticed that your training job failed with a ‘DeadlineExceeded’ error. I increased the deadline for your project to 2 hours now, so if you restart the training job it should be able to complete.
@louis and @janjongboom
Sorry to say I received the same error message:
Copying features from processing blocks...
Copying features from DSP block...
Still copying 3%...
Still copying 6%...
...
Still copying 95%...
Still copying 98%...
Copying features from DSP block OK
Copying features from processing blocks OK
Job started
Splitting data into training and validation sets...
Splitting data into training and validation sets OK
Training model...
Training on 456 inputs, validating on 115 inputs
Building model and restoring weights for fine-tuning...
Finished restoring weights
Fine tuning...
Attached to job 811799...
Epoch 1 of 50, loss=0.9245942, val_loss=0.9195528
Epoch 2 of 50, loss=0.7341106, val_loss=0.76637304
Epoch 3 of 50, loss=0.6610549, val_loss=0.71438926
Epoch 4 of 50, loss=0.6255293, val_loss=0.69823074
Epoch 5 of 50, loss=0.6041342, val_loss=0.689483
Epoch 6 of 50, loss=0.58702695, val_loss=0.681731
Epoch 7 of 50, loss=0.57149875, val_loss=0.6754334
Epoch 8 of 50, loss=0.558188, val_loss=0.6701353
Epoch 9 of 50, loss=0.5468838, val_loss=0.6655755
Epoch 10 of 50, loss=0.5370417, val_loss=0.66157746
Epoch 11 of 50, loss=0.52828056, val_loss=0.6579229
Epoch 12 of 50, loss=0.520264, val_loss=0.65451205
Epoch 13 of 50, loss=0.5127766, val_loss=0.65129876
Epoch 14 of 50, loss=0.50567704, val_loss=0.6482221
Epoch 15 of 50, loss=0.4988686, val_loss=0.6452371
Epoch 16 of 50, loss=0.49228242, val_loss=0.64231
Epoch 17 of 50, loss=0.4858687, val_loss=0.63941616
Epoch 18 of 50, loss=0.47959238, val_loss=0.63654006
Epoch 19 of 50, loss=0.47342986, val_loss=0.6336842
Epoch 20 of 50, loss=0.46737903, val_loss=0.6308648
Epoch 21 of 50, loss=0.46144402, val_loss=0.6281112
Epoch 22 of 50, loss=0.4556454, val_loss=0.6254617
Epoch 23 of 50, loss=0.4500056, val_loss=0.6229534
ERR: DeadlineExceeded - Job was active longer than specified deadline Try decreasing the number of windows or reducing the number of training cycles. If the error persists then you can contact support at hello@edgeimpulse.com to increase this time limit.
Terminated by user
Job failed (see above)
Thank you @dansitu for your interest and the time you took to look at the dataset.
When you write
[quote=“dansitu, post:23, topic:1730”]
“bee” boxes are around the entire bee,
[/quote] Do you think I should draw around the whole body, including the wings?
I will work on each picture.
OK, I will train an image classification model if that could be “lighter”, and afterwards train the object detection model.
I would suggest drawing the boxes around just the body, since that would end up including less “non-bee” areas of the photograph, which will make things slightly easier for the model. Either way, the key thing is to fit the boxes as tightly as possible around the object you want to detect.
Good luck and let us know how it goes!
@alcab I’ve upped the job limit even further. Note that ‘Squash’ is probably not the right setting for your resizing. Fit longest axis probably is…
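The difference between the two resize modes mentioned above can be sketched in a few lines. This is an illustration of the geometry only, not Edge Impulse's implementation:

```python
def fit_longest_axis(width, height, target=320):
    """Scale an image so its longest side becomes `target`, preserving the
    aspect ratio -- unlike 'Squash', which stretches both sides to
    target x target and distorts the bees."""
    scale = target / max(width, height)
    return round(width * scale), round(height * scale)
```

For the 280-320 pixel images in this dataset the distortion from Squash is small, but Fit longest axis keeps the bee proportions the model sees at training time consistent with the camera's output.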
Creating job... OK (ID: 812130)
Job started
Splitting data into training and validation sets...
Splitting data into training and validation sets OK
Training model...
Training on 456 inputs, validating on 115 inputs
Building model and restoring weights for fine-tuning...
Finished restoring weights
Fine tuning...
Attached to job 812130...
Epoch 1 of 20, loss=0.9245942, val_loss=0.9195528
Epoch 2 of 20, loss=0.7341106, val_loss=0.76637304
Epoch 3 of 20, loss=0.6610549, val_loss=0.71438926
Epoch 4 of 20, loss=0.6255293, val_loss=0.69823074
Epoch 5 of 20, loss=0.6041342, val_loss=0.689483
Epoch 6 of 20, loss=0.58702695, val_loss=0.681731
Epoch 7 of 20, loss=0.57149875, val_loss=0.6754334
Epoch 8 of 20, loss=0.558188, val_loss=0.6701353
Epoch 9 of 20, loss=0.5468838, val_loss=0.6655755
Epoch 10 of 20, loss=0.5370417, val_loss=0.66157746
Epoch 11 of 20, loss=0.52828056, val_loss=0.6579229
Epoch 12 of 20, loss=0.520264, val_loss=0.65451205
Epoch 13 of 20, loss=0.5127766, val_loss=0.65129876
Epoch 14 of 20, loss=0.50567704, val_loss=0.6482221
Epoch 15 of 20, loss=0.4988686, val_loss=0.6452371
Epoch 16 of 20, loss=0.49228242, val_loss=0.64231
Epoch 17 of 20, loss=0.4858687, val_loss=0.63941616
Epoch 18 of 20, loss=0.47959238, val_loss=0.63654006
Epoch 19 of 20, loss=0.47342986, val_loss=0.6336842
Epoch 20 of 20, loss=0.46737903, val_loss=0.6308648
Finished fine tuning
Checkpoint saved
Finished training
Creating SavedModel for conversion...
Finished creating SavedModel
Loading for conversion...
Converting TensorFlow Lite float32 model...
Converting TensorFlow Lite int8 quantized model with float32 input and output...
Converting TensorFlow Lite int8 quantized model with int8 input and float32 output...
Calculating performance metrics...
Profiling float32 model...
Profiling int8 model...
Profiling 3% done
Profiling 7% done
...
Profiling 94% done
Profiling 99% done
Model training complete
Job completed
You helped me think through this new process:
1- image classification model for bees vs. bees with pollen
2- record bees with pollen
3- object detection model on recorded bees for color recognition
Does that sound correct? Maybe part 3 can be done with OpenCV, no need to ask for too much.
Please have a look at this picture from the training dataset and the box I have drawn around it.
Does this look correct to you?
36610956464_0af824d834_w.jpg
Yep, that box looks perfect. I think your process of doing the initial classification with a deep learning model and then using OpenCV to detect the pollen colors is a good idea—always good to use the simplest tools for the job!
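For the color-recognition step, the usual OpenCV approach is HSV thresholding with `cv2.inRange`. As a library-free illustration of the same idea, here is a rough sketch that buckets pixels of a cropped pollen-basket region by color; the RGB thresholds are made-up placeholders that would need calibrating against real photos:

```python
def dominant_pollen_color(pixels):
    """Classify the dominant color of a pollen-basket crop.
    `pixels` is a list of (r, g, b) tuples from the cropped region.
    The threshold values below are illustrative guesses only."""
    buckets = {"yellow": 0, "orange": 0, "white": 0, "other": 0}
    for r, g, b in pixels:
        if r > 180 and g > 180 and b > 180:
            buckets["white"] += 1      # bright, low-saturation pixel
        elif r > 150 and g > 120 and b < 100:
            buckets["yellow"] += 1     # strong red+green, little blue
        elif r > 150 and 60 <= g <= 120 and b < 80:
            buckets["orange"] += 1     # strong red, moderate green
        else:
            buckets["other"] += 1
    return max(buckets, key=buckets.get)
```

In practice you would convert to HSV first, since hue is far more robust to lighting changes than raw RGB; the bucketing logic stays the same.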
@mathijs Thank you. Yesterday it finally worked fine, but today the same problem again!
Epoch 20 of 20, loss=0.51556426, val_loss=0.6717124
Finished fine tuning
Checkpoint saved
Finished training
Creating SavedModel for conversion…
Finished creating SavedModel
Loading for conversion…
Attached to job 814549…
Converting TensorFlow Lite float32 model…
Converting TensorFlow Lite int8 quantized model with float32 input and output…
Failed to check job status: [object Object]
Job failed (see above)
So I’ve modified the boxes and labels according to @dansitu’s advice, regenerated the features, and trained again this morning. Then I discovered I can clone a version of the training model. That will be very helpful for improving the efficiency of future models.
If I want to run a classification model in parallel, should I publish a new version of the entire project and restart it with these basic parameters, or can I run it inside the same project? My feeling is to create a new version. In that case, can I import the whole dataset with boxes and labels?
I am learning a lot right now and trying to follow all the tutorials I need. There are some shortcuts I am asking about directly, and I feel very happy to receive such good support from the developers here.
Thank you everyone.
And it works!
But look at this performance: 10%!
I have a lot of things to improve.
I ran a model testing session, but got this error message.
Will try again later.
Creating job... OK (ID: 814692)
Generating features for Image...
Scheduling job in cluster...
Job started
Creating windows from 165 files...
[ 1/165] Creating windows from files...
[ 65/165] Creating windows from files...
[165/165] Creating windows from files...
[165/165] Creating windows from files...
Error windowing Reduce of empty array with no initial value
TypeError: Reduce of empty array with no initial value
at Array.reduce (<anonymous>)
at createWindows (/app/node/windowing/build/window-images.js:87:71)
at async /app/node/windowing/build/window-images.js:21:13
Application exited with code 1 (Error)
Job failed (see above)
Hello @alcab,
It seems that you do not have any labels for your testing data; the issue might come from that.
When testing your model, we compare the expected output with the predicted output to give you an accuracy score.
Regards
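The comparison described above boils down to a per-sample match rate, which is also why it cannot run on an unlabelled test set. A minimal sketch (the function name and list-based interface are just for illustration):

```python
def accuracy(predicted, expected):
    """Fraction of test samples whose predicted label matches the ground
    truth. With no labels there is nothing to compare against, hence the
    'Reduce of empty array' style failure on an unlabelled test set."""
    if not expected:
        raise ValueError("test set has no labels")
    correct = sum(p == e for p, e in zip(predicted, expected))
    return correct / len(expected)
```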
Did I read Alcab’s output correctly, though, that the 0.1 precision score is against validation? (And validation is originally split from training at the start?)
From post #30 it appears the model is overfitting (training loss keeps dropping while validation loss flattens out)…
I did some work a few years back with bee images and initially framed it as object detection. One big win for me was keeping the bounding boxes the same size (centered on the bee); this can make the problem much simpler for training when the objects don’t vary much in size (especially with a low number of classes). But since the object count was high (i.e. loads of bees) I switched to framing it as per-pixel segmentation, though that’s something that isn’t quite supported (as I understand it)… yet!
See http://matpalm.com/blog/counting_bees/ for some more ideas
Mat
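The overfitting trend can be read straight off the training logs quoted earlier in the thread. A small sketch that parses the epoch lines and measures the train/validation gap (the regex is an assumption matching the log format shown above):

```python
import re

# Matches lines like: "Epoch 23 of 50, loss=0.4500056, val_loss=0.6229534"
LINE = re.compile(r"Epoch (\d+) of \d+, loss=([\d.]+), val_loss=([\d.]+)")

def parse_losses(log_text):
    """Extract (epoch, loss, val_loss) tuples from a training log."""
    return [(int(e), float(l), float(v)) for e, l, v in LINE.findall(log_text)]

def overfitting_gap(history):
    """Gap between val_loss and training loss at the last epoch. A gap that
    keeps growing while val_loss flattens is the overfitting signature
    discussed above."""
    _, loss, val_loss = history[-1]
    return val_loss - loss
```

On the run above, the gap grows from roughly 0.00 at epoch 1 to about 0.17 at epoch 23 while val_loss barely moves after epoch 10.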
It’s the quantized score, though; float32 is 25%. Still not great, but it does indeed look like overfitting. I wonder if setting the learning rate lower and training longer would work, @dansitu?
@louis Cool, makes sense. I will work on it tomorrow, as I have to spend some time with some of my bees in a beautiful blueberry field farmed biodynamically.
@mat_kelcey I’m not sure I understand all of your message, but the work you have done sounds great. Bounding boxes appear more and more to be the key; I will work on them again.
@janjongboom I also support the idea that training longer is the best thing to do. I will gather some more material this afternoon. The new Raspberry Pi should arrive tomorrow. Do you think I can run a live test soon? Is there a minimum accuracy needed to start live prediction?
Interesting results, and big thanks to @mat_kelcey for bringing your experience! Mat, you are right in saying that the precision score provided is based on the validation set, which is a 20% split from the training dataset.
So @alcab—before spending more time with object detection, I would definitely still do some thinking about whether classification vs. object detection is the right approach. Do you really need to know the location of the bees and pollen grains within each photograph, or is it fine just to know that a particular photo contains a bee that is carrying pollen grains?
Classification requires much less effort in terms of getting the labels right, and the resulting model will be a lot smaller.
I agree with @mat_kelcey and @janjongboom that it appears the model is overfitting. Since object detection is such a new feature we haven’t yet added some of our usual features that help with this, like regularization and data augmentation. But with that in mind, here are some ideas that might be worth exploring to try and get better results:
- Train with a higher learning rate (perhaps try 0.35), which can have the side effect of reducing overfitting. But keep an eye on the “loss” and “val_loss” values, since you don’t want “loss” to go significantly lower than “val_loss”. If that happens, run training again but reduce the number of epochs to the point before the loss numbers significantly diverged.
- Your training dataset appears to have many more images containing just bees than images containing bees with pollen. You might get better results if you “balance” these classes: try removing images that have just bees until you have approximately the same number of both types of images. If most of your images have just bees, the model might learn during training that the pollen boxes don’t matter.
- Since you are training a model to count and classify pollen, does your model even need to know about the bees? You could try removing the bee labels entirely from your dataset and just training a model to detect the pollen grains.
- Failing that, and if you still want to proceed with object detection, I’d definitely recommend trying @mat_kelcey’s approach of re-labelling the bees (again) with a small, regularly sized box located at the center of each bee. @janjongboom, maybe we should make it easier in the UI to create a box with specific dimensions?
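The fixed-size relabelling in that last suggestion is mechanical enough to script rather than redo by hand. A sketch, assuming boxes as `(x, y, w, h)` in pixel coordinates and an illustrative 32-pixel box size (not a recommended value):

```python
def center_fixed_box(x, y, w, h, size=32, img_w=320, img_h=320):
    """Replace a variable-size box with a fixed size x size box at the same
    center, clamped so it stays inside the image -- the regularly sized,
    bee-centered boxes suggested by @mat_kelcey."""
    cx, cy = x + w / 2, y + h / 2
    nx = min(max(cx - size / 2, 0), img_w - size)
    ny = min(max(cy - size / 2, 0), img_h - size)
    return round(nx), round(ny), size, size
```

Running this over an exported label file would produce uniform boxes in one pass, so only the box centers (which the existing labels already provide) need to be right.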