ML Plant Growth

I would like to learn how to use a camera to chart plant growth. I am familiar with some ML terminology, but would definitely be considered a beginner.

I have an existing LoRaWAN gateway that I am already using with other IoT devices, so I should be able to publish data to the cloud, but I would like some guidance on how to process the data once it has been sent.

Can someone point me in the right direction?

Hi @nwagner,

My suggestion would be to look into object detection to figure out how many pixels tall your plant is. You can then do simple math to convert pixels to height (e.g. cm). Edge Impulse does offer a few object detection models that you can use out of the box.
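
To make that pixel-to-height math concrete, here is a minimal sketch. It assumes a fixed camera position and a one-time calibration against a reference object of known height in the frame; the reference values and the 280 px example are hypothetical.

```python
# Minimal sketch: convert a bounding-box height in pixels to centimeters.
# Assumes the camera position is fixed and you calibrate once with a
# reference object of known height placed at the same distance as the plant.

REFERENCE_HEIGHT_CM = 30.0    # hypothetical: a 30 cm ruler in the frame
REFERENCE_HEIGHT_PX = 420.0   # hypothetical: its measured height in pixels

CM_PER_PIXEL = REFERENCE_HEIGHT_CM / REFERENCE_HEIGHT_PX

def plant_height_cm(bbox_height_px: float) -> float:
    """Convert the detected plant bounding-box height (pixels) to cm."""
    return bbox_height_px * CM_PER_PIXEL

# Example: a detection whose bounding box is 280 px tall
print(f"{plant_height_cm(280):.1f} cm")  # ~20.0 cm with the values above
```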

That does sound interesting.

My task doesn’t seem that complex, but I don’t know what I am doing, so it could be. I have an indoor lab where I am growing plants. I want to point the camera at the plants, which will be at most 3 ft away, and measure them roughly once an hour.

I don’t even know where to start, though. I have been told to use a SenseCAP A1101 camera, which I am happy to do, but I don’t even know what the end result will look like or where I would find it. Is there any way I could view a reference project that has been set up to monitor and chart plant growth?

Hi @nwagner,

The SenseCAP A1101 is great if you need the waterproof housing because you are placing it outdoors. You could also use a Raspberry Pi and Pi cam if you are just using it indoors. Either will likely work.
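
If you go the Raspberry Pi route, a simple hourly capture loop is a reasonable starting point. Here is a minimal sketch assuming the Picamera2 library that ships with recent Raspberry Pi OS images; the output folder is just an example.

```python
# Minimal sketch: capture one image per hour with a Raspberry Pi camera.
# Assumes the Picamera2 library (preinstalled on recent Raspberry Pi OS).
import time
from datetime import datetime
from pathlib import Path

from picamera2 import Picamera2

OUTPUT_DIR = Path("plant_images")  # example location; change as needed
OUTPUT_DIR.mkdir(exist_ok=True)

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()
time.sleep(2)  # let exposure and white balance settle

while True:
    filename = OUTPUT_DIR / f"plant_{datetime.now():%Y%m%d_%H%M%S}.jpg"
    picam2.capture_file(str(filename))
    print(f"Saved {filename}")
    time.sleep(60 * 60)  # wait one hour
```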

I do not know of any tutorials that walk you through exactly what you are describing: measuring plant growth.

Because the SenseCAP is built around a microcontroller and AI accelerator, you will be limited in the object detection models you can use. This person detection model might run on the SenseCAP: GitHub - HimaxWiseEyePlus/Yolo-Fastest, an ultra-lightweight YOLO variant (around 250 MFLOPs, with an ncnn model of only about 666 KB). I’m not positive, because Seeed Studio does not say exactly which chip the SenseCAP uses. Note that the model there is trained for person detection, so you would need to re-train it to identify plants.

In my experience, it is much more difficult to get a full object detection model running on a microcontroller-based board, as those models are often larger and slower. You can try FOMO, but FOMO outputs object centroids rather than full bounding boxes, so it won’t give you a pixel-precise height measurement.
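
If you do end up with a model that returns full bounding boxes, turning its output into a growth chart can be as simple as appending the plant box’s height (converted to cm) to a CSV. Below is a minimal sketch; the detection format (dicts with a label and pixel height) and the calibration factor are assumptions for illustration, not any particular SDK’s output.

```python
# Minimal sketch: log the detected plant height over time so it can be charted.
# Assumes detections are dicts with 'label' and pixel 'height'; the detection
# format and CM_PER_PIXEL calibration are assumptions for illustration.
import csv
from datetime import datetime

CM_PER_PIXEL = 0.07  # hypothetical calibration (see the earlier sketch)

def log_plant_height(detections, csv_path="plant_growth.csv"):
    """Append the tallest 'plant' detection (in cm) to a CSV for charting."""
    plant_boxes = [d for d in detections if d["label"] == "plant"]
    if not plant_boxes:
        return None
    height_cm = max(d["height"] for d in plant_boxes) * CM_PER_PIXEL
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), round(height_cm, 1)])
    return height_cm

# Example with a fake detection
print(log_plant_height([{"label": "plant", "x": 10, "y": 5, "width": 120, "height": 300}]))
```

From there you can plot the CSV with whatever you like (a spreadsheet, matplotlib, Grafana, etc.).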

Here are a few academic papers that might help:

These projects are close and might offer a good starting point: