Prediction always returns [nan, nan, nan] on the Arduino board

Hi,
I’m having problems deploying my model onto my Arduino Nano 33 BLE Sense. I am using the static buffer example. I collect data from 6 analog pins of the Arduino, put the readings together, and loop 10 times to get a feature vector of size 60, then start the prediction. In non-prediction mode I instead print the data to the serial port so the data forwarder CLI can send training data to Edge Impulse.

In data forwarder mode everything works perfectly fine and I can see the predictions in the “live classification” tab. However, when I download the library and switch to prediction mode (I change the boolean to `bool flag_prediction_mode = true;`), the output of the algorithm is always nan:

run_classifier returned: 0
Predictions (DSP: 0 ms., Classification: 1 ms., Anomaly: 0 ms.): 
[nan, nan, nan]
    position_bonne: nan
    position_mauvaise: nan
    siege_vide: nan

I used all standard parameters while training the model, with a flatten block, a raw data block, and a classification NN. The full script is posted below. Project number: 51781

The script:

/* Includes ---------------------------------------------------------------- */
#include <SPI.h>  // SPI for the digital potentiometer
#include <Wire.h> // I2C for the display
#include <LiquidCrystal_I2C.h>
#include <siege-intelligent-v2_inferencing.h>

// set pin 10 as the slave select for the digital pot:
const int slaveSelectPin = 10;
const int pinsArray[] = {A0, A1, A2, A3, A6, A7}; // A4 and A5 are reserved for I2C
const int numberOfSensors = 6;
float analogValuesArray[numberOfSensors];
int commonResistanceValue = 5; // 5 * 50000 / 256 ≈ 1 kohm
LiquidCrystal_I2C lcd(0x27, 16, 2); // set the LCD address to 0x27 for a 16-char, 2-line display
String display_string = "";
float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE]; // should be 60: 6 pins * 10 loops
// keep track of where we are in the feature array
size_t feature_ix = 0;
// FLAG: prediction_mode False to send raw data into the data forwarder for training on EI
// prediction_mode true to skip printing raw data and run the algorithm instead.
bool flag_prediction_mode = true;

/**
 * @brief      Copy raw feature data in out_ptr
 *             Function called by inference library
 *
 * @param[in]  offset   The offset
 * @param[in]  length   The length
 * @param      out_ptr  The out pointer
 *
 * @return     0
 */
int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

////////////////////////////////////////
////////////////////////////////////////
//     
//            S E T U P
//     
////////////////////////////////////////
////////////////////////////////////////

void setup() {
  // set the slaveSelectPin as an output:
  pinMode(slaveSelectPin, OUTPUT);
  // initialize SPI:
  SPI.begin();
  Serial.begin(115200);
  // setup once resistance
  for (int i = 0; i < numberOfSensors; i++)
  {
    digitalPotWrite(i, commonResistanceValue);
  }
  delay(100);
  // setup the oled screen
  lcd.init();  //initialize the lcd
  lcd.backlight();  //open the backlight
  delay(200);
}

////////////////////////////////////////
////////////////////////////////////////
//     
//              L O O P
//     
////////////////////////////////////////
////////////////////////////////////////

// Loop until feature array is filled to 60 values.
void loop() {
  // clear the LCD line by printing blanks
  lcd.print("                ");
  // Read and do 3 things: fill the 6-value measurement array, append to the 60-value feature vector, and build the string to display
  for (int i = 0; i < numberOfSensors; i++)
  {
    analogValuesArray[i] = analogRead(pinsArray[i]);
    features[feature_ix++] = (float) analogValuesArray[i];
    display_string = display_string + int(analogValuesArray[i]) + " ";
  }

  // DISPLAY RAW DATA ON SERIAL PORT (in training mode only)
  if (!flag_prediction_mode){
    for (int i = 0; i < numberOfSensors; i++)
    {
      if (i < (numberOfSensors - 1)) {
        Serial.print(analogValuesArray[i]); Serial.print("\t");
      }
      else {
        Serial.println(analogValuesArray[i]);
      }
    }
  }

  // If prediction mode
  if (flag_prediction_mode) {
    // check that all 60 values are there:
    if (feature_ix == EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE) {
      // additional sanity check
      if (sizeof(features) / sizeof(float) != EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE) {
        ei_printf("The size of your 'features' array is not correct. Expected %lu items, but had %lu\n",
            (unsigned long)EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE,
            (unsigned long)(sizeof(features) / sizeof(float)));
        delay(1000);
        // there was a problem, so reset the index:
        feature_ix = 0;
        return;
      }

      ei_impulse_result_t result = { 0 };

      // wrap the feature buffer in a signal_t so the classifier can page through it
      signal_t features_signal;
      features_signal.total_length = sizeof(features) / sizeof(features[0]);
      features_signal.get_data = &raw_feature_get_data;

      // invoke the impulse
      EI_IMPULSE_ERROR res = run_classifier(&features_signal, &result, false /* debug */);
      ei_printf("run_classifier returned: %d\n", res);

      if (res != 0) return;

      // print the predictions
      ei_printf("Predictions ");
      ei_printf("(DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)",
          result.timing.dsp, result.timing.classification, result.timing.anomaly);
      ei_printf(": \n");
      ei_printf("[");
      for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("%.5f", result.classification[ix].value);
#if EI_CLASSIFIER_HAS_ANOMALY == 1
        ei_printf(", ");
#else
        if (ix != EI_CLASSIFIER_LABEL_COUNT - 1) {
          ei_printf(", ");
        }
#endif
      }
#if EI_CLASSIFIER_HAS_ANOMALY == 1
      ei_printf("%.3f", result.anomaly);
#endif
      ei_printf("]\n");

      // human-readable predictions
      for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("    %s: %.5f\n", result.classification[ix].label, result.classification[ix].value);
      }
#if EI_CLASSIFIER_HAS_ANOMALY == 1
      ei_printf("    anomaly score: %.3f\n", result.anomaly);
#endif

      delay(1000);

      // reset the index once the prediction is done
      feature_ix = 0;
    }
  }
   
  // wait a little until new measurement is triggered
  delay(63); // ~63 ms between measurements (plus loop overhead) targets roughly 10 Hz
  lcd.setCursor(1, 0); // set the cursor to column 1, line 0
  lcd.print(display_string);  // Print a message to the LCD
  display_string = "";

  // reset the index if the feature buffer is full (e.g. in data forwarder mode)
  if (feature_ix >= EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE) {
    feature_ix = 0;
  }
}

////////////////////////////////////////
////////////////////////////////////////
//     
//   H E L P E R    F U N C T I O N S
//     
////////////////////////////////////////
////////////////////////////////////////

// Set the resistance of the digital potentiometer
void digitalPotWrite(int address, int value) {
  // take the SS pin low to select the chip:
  digitalWrite(slaveSelectPin, LOW);
  delay(100);
  //  send in the address and value via SPI:
  SPI.transfer(address);
  SPI.transfer(value);
  delay(100);
  // take the SS pin high to de-select the chip:
  digitalWrite(slaveSelectPin, HIGH);
}

/**
 * @brief      Printf function uses vsnprintf and output using Arduino Serial
 *
 * @param[in]  format     Variable argument list
 */
void ei_printf(const char *format, ...) {
    static char print_buf[1024] = { 0 };
    va_list args;
    va_start(args, format);
    int r = vsnprintf(print_buf, sizeof(print_buf), format, args);
    va_end(args);
    if (r > 0) {
        Serial.write(print_buf);
    }
}

Hello @fleurda,

Before answering you on the Arduino part, I had a quick look at your project and it seems that you put most of your data samples in your test set. Is there a particular reason for that?
Usually I put 80% of my data samples in my training set and 20% in my test set.

When I move to the NN classifier tab, the model always predicts the same class. See the confusion matrix of your project:

Back on the embedded code, I’ll check whether I get the same error using the standalone example: https://github.com/edgeimpulse/example-standalone-inferencing

I’ll let you know

Regards,

Louis


Thanks Louis, I’m aware of that. For now I’m trying to make the code work, and then of course I will upload a better training dataset (I have a background in data science :wink:)

Hello @fleurda,

So I have been able to reproduce the issue using the raw features from one of your test samples.
I’ve created an internal ticket so our embedded engineers can investigate.

I’ll let you know ASAP.

Regards,

Louis


Hello @fleurda,

I just tried one last thing before handing this over to the embedded team.
When I use the quantized (int8) model, I can see the inference results.
While we look into why it doesn’t work as expected with the float32 version of your model, you can select which version you want to deploy at the bottom of the deployment tab:

Here are my results when running the inference following this guide: https://docs.edgeimpulse.com/docs/running-your-impulse-locally:

./build/edge-impulse-standalone "0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 3, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0"
Features (0 ms.): 0.300000 0.000000 3.000000 0.948683 0.900000 2.666666 5.111110 0.100000 0.000000 1.000000 0.316228 0.300000 2.666667 5.111113 0.100000 0.000000 1.000000 0.316228 0.300000 2.666667 5.111114 0.400000 0.000000 4.000000 1.264911 1.200000 2.666667 5.111111 0.000000 0.000000 0.000000 0.000000 0.000000 nan nan 0.200000 0.000000 1.000000 0.447214 0.400000 1.500000 0.250001 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 3.000000 1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 4.000000 0.000000 0.000000 
Running neural network...
Predictions (time: 0 ms.):
position_bonne:	0.000000
position_mauvaise:	0.000000
siege_vide:	0.996094
run_classifier returned: 0
Begin output
[0.00000, 0.00000, 0.99609]
End output

Regards,

Louis


Hi @fleurda,

This issue has been resolved and the fix is live. You can re-export your (float32) model.


(To add to @rjames: this naturally also works for quantized models :slight_smile: )
