Camera example static_buffer uses "signal_t" and "out_ptr"

When making a camera-based Impulse, the final Arduino library includes an example called static_buffer, which is your starting point for image classification on Arduino.

Once things work (you have pasted an image's raw data from your Impulse into your Arduino features array), two items become fairly important.

signal_t and out_ptr:

To understand how my data moves from the camera to the classifier, I need to understand both of these.

signal_t features_signal;

is used to process your image data; it has a property .total_length and a method .get_data that you override.

out_ptr

is a pointer that is sent to the classifier.

Where can I find more information about these items? I would like to know whether signal_t has more properties and methods, and where it is defined. I would also like to know where out_ptr is defined, as it is barely mentioned in the static_buffer sketch.

The following lines do not even mention out_ptr, even though it is the main chunk of data being sent to the classifier.

    features_signal.get_data = &raw_feature_get_data;

    // invoke the impulse
    EI_IMPULSE_ERROR res = run_classifier(&features_signal, &result, false /* debug */);

Searching for answers at https://github.com/edgeimpulse gets too much information. Any suggestions?

These questions are connected to the thread about getting the OV7670 camera working with the Nano 33 BLE Sense: Ov7670 Cam with Nano33BLE(Sense)

Hi @Rocksetta,

You can find the definition of signal_t in the SDK, such as: https://github.com/edgeimpulse/firmware-arduino-nano-33-ble-sense/blob/2c1e8f559a12151fcf73b86d3dfc12ccea4771b5/src/edge-impulse-sdk/dsp/numpy_types.h#L314.

out_ptr is the pointer to copy the raw features to. It is being filled by this function in the static_buffer sketch:

int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

That’s the function you need to rewrite in your case to get the frame buffer from the camera. I’ll look at the other thread.

Aurelien


@aurel thanks. Along the same line of questioning: is there an API for the returned result?


result.classification

I am writing code to pick the top classification, but it occurs to me that this is probably already a property of the result object.

Yes, it is an array with all classification results. Each item contains the label and its prediction. You can then sort them or just pick the item with the highest probability. It is defined here: https://github.com/edgeimpulse/firmware-arduino-nano-33-ble-sense/blob/39fa92ae1b28e318e90654eec09bd2c6d0b4f708/src/edge-impulse-sdk/classifier/ei_classifier_types.h#L42


Looking at the above code, I don't see any methods like result.classification.max() or result.classification.sorted(). Neither is hard to write, but it might be a good idea for Edge Impulse to publish a small API reference of what is possible with Edge Impulse objects, so programmers know what they have to code themselves and what the generated library already provides.

Just a suggestion, not really important, but I am finding the coding part of my Edge Impulse experience much harder than the model making part.

On a positive note, I did solve a coding issue today that I am very happy about.

Thanks for your valuable feedback @Rocksetta. We'll keep improving our documentation on the coding side; a user guide on connecting your own sensors and manipulating the features array and classification results could help.

Aurelien

@Rocksetta, there is https://github.com/edgeimpulse/inferencing-sdk-cpp/blob/master/classifier/ei_classifier_smooth.h which lets you set up these rules. E.g. out of the last 5 readings, at least 3 should be the same with min. confidence 70%, unless anomaly is >0.3. And then it just returns the class.

The continuous accelerometer example in the Arduino library shows how to use it.


Nice, really good to know:

/**
 * Initialize a smooth structure. This is useful if you don't want to trust
 * single readings, but rather want consensus
 * (e.g. 7 / 10 readings should be the same before I draw any ML conclusions).
 * This allocates memory on the heap!
 * @param smooth Pointer to an uninitialized ei_classifier_smooth_t struct
 * @param n_readings Number of readings you want to store
 * @param min_readings_same Minimum readings that need to be the same before concluding (needs to be lower than n_readings)
 * @param classifier_confidence Minimum confidence in a class (default 0.8)
 * @param anomaly_confidence Maximum error for anomalies (default 0.3)
 */
void ei_classifier_smooth_init(ei_classifier_smooth_t *smooth, size_t n_readings,
                               uint8_t min_readings_same, float classifier_confidence = 0.8,
                               float anomaly_confidence = 0.3)

The issue I had in my WORDS project, which identified the accelerometer-drawn letters W-O-R-D-S, was that "R" always worked well, but when drawing "S" we often got "R" with a slightly higher confidence, say "S": 40%, "R": 43%. My students thought we could test for "R" and, if "S" was also high, write "S" to the screen.

A sorted classifier array would have been useful, for this kind of model hacking.

Our solution, however, was just to train "S" a few more times, and things worked much better.