Question/Issue:
I’m interested in creating an array of smart hummingbird feeders. I’d use an ESP module to gather environmental information to predict the presence of mold in the nectar (i.e., 4:1 simple syrup). How would I set up the project in Edge Impulse?
Any additional help would be appreciated!
Project ID:
182325 - haven’t created anything yet; wondering where to start.
Context/Use case:
The “nectar” is very sensitive to temperature.
My goal is to use an array of sensors to predict (and offset) the decline in nectar health by factoring in the ambient temperature outside the feeder, the temperature of the nectar, and the clarity of the nectar, all while tracking how long the batch has been in the feeder. Eventually, using the rule of thumb above, I’d like to get a week’s worth of “feeding” from a single batch at +90 °F.
I hope to increase the “shelf life” of the nectar by using Thermoelectric Coolers around the reservoir and shielding each batch from light, using the model I create to actively monitor, predict and alert about “nectar cycles.”
…so maintaining a feeder isn’t such a pain in the butt.
The one thing that was missing from my data was a measure of “time.” You could feed time into the model as one of the inputs to see if that helps. An RNN might perform very well, but RNNs are currently not supported in Edge Impulse’s microcontroller deployments.
I’m assuming you are familiar with Edge Impulse, yes? If not, I might recommend making a few simple classification systems from our getting started docs: Getting Started - Edge Impulse Documentation
Once you grasp the basics of building classification systems, you will likely need to use Expert Mode when defining a neural network in Edge Impulse. This will allow you to build a custom regression model that can predict “time until bad” for your syrup (which is exactly how I constructed the model for detecting toast “doneness”).
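To give you an idea, here is a rough sketch of what such an Expert Mode regression model boils down to: an ordinary Keras definition with a single linear output instead of a softmax. The layer sizes and the 180-feature input below are placeholder assumptions, not something from your project:

# Minimal Keras regression sketch (hypothetical sizes; adapt to your own feature count).
# In Edge Impulse Expert Mode you edit a similar Keras definition directly.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

input_length = 180   # e.g., 9 sensor channels x 20 samples per window (assumption)

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(input_length,)))
model.add(Dropout(0.25))
model.add(Dense(32, activation='relu'))
# Single linear output: a continuous "time until bad" value instead of class probabilities
model.add(Dense(1, name='y_pred', activation='linear'))

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='mean_squared_error',
              metrics=['mae'])

You train it with a mean-squared-error (or MAE) loss rather than categorical cross-entropy, which is the main change from the classification tutorials.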
I have not looked at ptm_dataset_curation.ipynb or ptm_feature_selection_and_dataset_curation.ipynb yet, but can you give an overview of how time_to_burnt relates to the Generated Features?
You stated, “The one thing that was missing from my data was a measure of ‘time,’” and yet you are predicting time_to_burnt. What are the units of time_to_burnt?
I see in your Impulse Keras code:
# Final model (for regression)
model.add(Dense(classes, name='y_pred', activation='linear'))
Is this statement what produces the predicted value for time_to_burnt?
@CerebralDad make me a collaborator on your project and I will help with the Impulse design.
Please post any error codes, etc. so we can see what’s going on. This post may help you.
Since the ESP32-CAM (AI Thinker) (see this doc for compatibility solutions) is not yet an officially supported board, what method of connecting to EI Studio are you using? Currently only the Espressif ESP-EYE (ESP32) is fully supported.
You have at least 3 options to get data into EI Studio:
Please take note that you can use the filename to automagically label your data.
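For example, the CLI uploader takes everything before the first dot in the filename as the label, so a tiny renaming script is often all you need. The filenames below are made up, and it’s worth double-checking the uploader docs for the exact convention:

# Hypothetical sketch: derive labels from filenames before uploading with the
# edge-impulse-uploader CLI, which infers the label from the text before the
# first '.' in the filename.
import os, shutil

label = "mold"                               # or "fresh", "nectar_ok", etc. (placeholders)
src = "sample_0001.json"                     # raw data file exported from your ESP logger (assumption)
dst = f"{label}.{os.path.basename(src)}"     # e.g. "mold.sample_0001.json"
shutil.copyfile(src, dst)
# then: edge-impulse-uploader --category training *.json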
@louis Do you recommend using ESP-IDF, or can we use the Arduino IDE for integrating the EI C++ library with our custom code? I remember some recent posts saying the ESP32 variants are better handled in ESP-IDF, since the Arduino IDE isn’t quite there yet in its support for the ESP32s.
@MMarcial Great questions! I’ll see if I can answer them:
The whole idea behind the model is to perform regression: the input features are sensor data (temperature, humidity, CO2, VOC1, VOC2, etc.) sampled at 2 Hz for 10 s. I use a raw block, so the features are just a window of these standardized sensor readings taken over time. The input to the NN is the raw data: 9 channels × 20 samples (a 10 s window @ 2 Hz) = 180 features for the input layer.
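If it helps to see the arithmetic, here is a toy sketch of how one window flattens into that 180-value input vector; the channel count and names are placeholders:

# Sketch: a 10 s window at 2 Hz across 9 channels flattens to 180 input features.
import numpy as np

num_channels = 9           # temp, humidity, CO2, VOC1, VOC2, ... (placeholders)
sample_rate_hz = 2
window_s = 10
samples_per_window = sample_rate_hz * window_s       # 20 readings per channel

window = np.random.randn(samples_per_window, num_channels)   # stand-in for standardized readings
features = window.flatten()                                   # shape (180,), fed to the input layer
print(features.shape)                                          # (180,)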
Using that window as input, the NN attempts to predict the “time until burnt” in milliseconds. The ground truth values are computed in that .ipynb notebook, so I recommend taking a look in there.
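Without reproducing the notebook here, the idea is roughly that each window’s label is the gap between the end of that window and the recorded “burnt” event. The function name and inputs below are purely illustrative, not the actual notebook code:

# Illustrative only: how a "time until burnt" label could be derived for each window.
# The real logic lives in ptm_dataset_curation.ipynb; these names are assumptions.
def time_to_burnt_ms(window_end_ms: int, burnt_event_ms: int) -> int:
    """Milliseconds between the end of a sample window and the burnt event."""
    return max(burnt_event_ms - window_end_ms, 0)

print(time_to_burnt_ms(window_end_ms=120_000, burnt_event_ms=300_000))  # 180000 ms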
So, my input samples have some small amount of “sense of time” in that the input is a window over 10 sec. For measuring time until syrup goes bad, you may or may not want to have some sense of time in your inputs. Maybe that’s a “minutes since refreshed” channel or just a “rolling window” like I used. You could also explore other model architectures like RNNs (specifically, an LSTM) or transformers, which take time into account by design. However, note that EI doesn’t support those architectures right now, so deploying the model to an MCU might be tough at the moment.
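If you do want to experiment with a time-aware architecture outside of Edge Impulse, a toy LSTM regressor over the same windows is only a few lines of Keras. The layer sizes here are arbitrary, and you would have to handle MCU conversion and deployment yourself:

# Toy LSTM regressor sketch (not deployable through Edge Impulse today).
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

timesteps, channels = 20, 9      # 10 s window @ 2 Hz, 9 sensor channels (assumptions)

model = Sequential([
    LSTM(32, input_shape=(timesteps, channels)),
    Dense(16, activation='relu'),
    Dense(1, activation='linear'),   # continuous "time until bad" output
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])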
Now, the Arduino examples for the ESP32 also leverage the ESP-NN hardware acceleration, so both paths should give similar performance.
You might find more advanced example projects using ESP-IDF, but the Arduino IDE is probably easier for beginners.
So it’s up to you which IDE to choose.