When you build a camera-based impulse, the exported Arduino library includes an example sketch called static_buffer, which is your starting point for image classification on Arduino.
Once things work (you have pasted an image's raw data from your impulse into your Arduino features array), then two items become fairly important.
signal_t and out_ptr:
To understand how my data moves from the camera to the classifier, I need to understand both of these.
signal_t features_signal;
is used to feed your image data to the classifier; it has a member .total_length and an assignable callback .get_data.
out_ptr
is a pointer that is sent to the classifier.
Where can I find more information about these items? I would like to know whether signal_t has more members and methods, and where it is defined. I would also like to know where out_ptr is defined, as it is barely mentioned in the static_buffer sketch.
The following lines do not even mention out_ptr, although it is the main chunk of data being sent to the classifier.
features_signal.get_data = &raw_feature_get_data;
// invoke the impulse
EI_IMPULSE_ERROR res = run_classifier(&features_signal, &result, false /* debug */);
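For context, the only place out_ptr actually appears in the sketch is as a parameter of the callback that get_data points to. From my copy of static_buffer it looks roughly like this:

int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    // the classifier asks for `length` floats starting at `offset`,
    // and we copy them from the features array into out_ptr
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

So it seems out_ptr is a buffer the classifier hands us to fill, rather than something we create ourselves.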
Looking through the generated library, I don't see any methods like result.classification.max() or result.classification.sorted(). Not that either is hard to write, but it might be a good idea for Edge Impulse to publish a small API reference of what is possible with Edge Impulse objects, so programmers know what they have to code themselves and what the generated library already provides.
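For example, here is a rough, untested sketch of the max-finder I mean (EI_CLASSIFIER_LABEL_COUNT and the result.classification array come from the generated library; the rest is my own):

// after run_classifier() has filled `result`, find the top-scoring class
size_t best_ix = 0;
for (size_t ix = 1; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    if (result.classification[ix].value > result.classification[best_ix].value) {
        best_ix = ix;
    }
}
ei_printf("Best: %s (%.2f)\n", result.classification[best_ix].label,
          result.classification[best_ix].value);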
Just a suggestion, not really important, but I am finding the coding part of my Edge Impulse experience much harder than the model-making part.
On a positive note, I did solve a coding issue today that I am very happy about.
Thanks for your valuable feedback @Rocksetta. We'll keep improving our documentation on the coding side; a user guide on how to connect your own sensors and manipulate the features array and classification results would be a good addition.
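To answer the signal_t part: it is defined in the SDK, in edge-impulse-sdk/dsp/numpy_types.h. Trimmed down (the exact type of get_data varies per build; this is the std::function variant), it looks like this:

typedef struct ei_signal_t {
    // callback the classifier uses to pull `length` float values starting
    // at `offset` into out_ptr; return 0 on success
    std::function<int(size_t offset, size_t length, float *out_ptr)> get_data;
    // total number of values in the signal
    size_t total_length;
} signal_t;

So total_length and get_data are the only members, and out_ptr is not a global at all, just the name of the callback's output parameter.

On working with classification results over multiple readings, the Arduino library also ships a small smoothing helper, declared in ei_classifier_smooth.h: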
/**
* Initialize a smooth structure. This is useful if you don't want to trust
* single readings, but rather want consensus
* (e.g. 7 / 10 readings should be the same before I draw any ML conclusions).
* This allocates memory on the heap!
* @param smooth Pointer to an uninitialized ei_classifier_smooth_t struct
* @param n_readings Number of readings you want to store
* @param min_readings_same Minimum readings that need to be the same before concluding (needs to be lower than n_readings)
* @param classifier_confidence Minimum confidence in a class (default 0.8)
* @param anomaly_confidence Maximum error for anomalies (default 0.3)
*/
void ei_classifier_smooth_init(ei_classifier_smooth_t *smooth, size_t n_readings,
                               uint8_t min_readings_same, float classifier_confidence = 0.8,
                               float anomaly_confidence = 0.3)
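A minimal usage sketch, pairing it with ei_classifier_smooth_update and ei_classifier_smooth_free from the same header:

ei_classifier_smooth_t smooth;
// keep the last 10 readings; 7 of them must agree before we trust a label
ei_classifier_smooth_init(&smooth, 10 /* n_readings */, 7 /* min_readings_same */);

// after every run_classifier() call:
const char *label = ei_classifier_smooth_update(&smooth, &result);
ei_printf("Smoothed prediction: %s\n", label);

// when you're done with it (frees the heap allocation):
ei_classifier_smooth_free(&smooth);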
The issue I had in my WORDS project, which identified the accelerometer-drawn letters W-O-R-D-S, was that "R" always worked well, but when drawing "S" we often got "R" with a slightly higher confidence, say "S": 40%, "R": 43%. My students thought we could test for "R" and, if "S" was also high, write "S" to the screen.
A sorted classifier array would have been useful for this kind of model hacking.
Our solution, however, was just to train "S" a few more times, and things worked much better.
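For anyone curious, the check the students proposed would have looked roughly like this (R_IX, S_IX, and the 0.35 threshold are made up; you would look the real indices up by label):

float r = result.classification[R_IX].value; // R_IX: made-up index of "R"
float s = result.classification[S_IX].value; // S_IX: made-up index of "S"
// if "R" wins but "S" is close behind, report "S" instead
const char *shown = (r > s && s > 0.35f) ? "S" : "R";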