I can't always choose Arduino as a build target

I trained three classification models: one using input features from the Raw Data block, the second using input features from the IMU (Syntiant) block, and the last using input features from the Flatten block.

Why does only the last one, which uses Flatten input features, let me choose Arduino when I try to deploy? If I want to use the first one (Raw Data) or the second one (IMU Syntiant), do I have to choose C++ and then integrate the library into Arduino myself?

PS: I’m using an Adafruit nRF52840 Feather Express dev board.

The IMU Syntiant block is specific to Syntiant hardware only.
However, the Raw Data block should allow you to deploy the Arduino library. Do you have the project ID associated with it?

Aurelien

Hi

Project ID: 122069

This project currently has a classification model that uses IMU Syntiant features and shows 93% accuracy in model testing, but I can’t deploy it to Arduino. Is it possible to deploy the C++ library in an Arduino project? I also tried a classification model using Raw Data features and I still can’t deploy to Arduino.

Any idea why?

The IMU Syntiant block can only be used with Syntiant hardware, which is why the Arduino deployment is disabled; you won’t be able to run it on Arduino. The Raw Data block is actually very similar to the IMU Syntiant one; you can read more about our processing blocks here.

Aurelien

What do you mean by Syntiant hardware?

Hello @gabriel.brito,

@aurel meant this hardware, which has a specific architecture. This is why we provide specific blocks (DSP / NN) that work only on this target.

Best,

Louis

Thanks for your answer.

So my project (ID: 122069) has a classification model using Raw Data features with 91.95% accuracy and 25 ms latency when tested with test data (Arm Cortex-M4 80 MHz). It then lets me export an optimized model with 91.28% accuracy and 59 ms latency for Arduino, and I built and ran the static_buffer example on my nRF52840 Feather Express devboard with its Arm Cortex-M4 at 64 MHz.

I’m getting the following results:
Predictions (DSP: 0ms, Classification: 13ms, Anomaly: 0ms)

How can I get 13 ms at 64 MHz when the platform estimates 59 ms on an 80 MHz CPU? Or is the 13 ms classification time not the same latency that the platform indicates?
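
For reference, the static_buffer example I’m running boils down to something like the sketch below (a minimal sketch; the generated header name is project-specific, and the feature values are pasted from a raw sample in the Studio):

```cpp
// Minimal sketch based on the Edge Impulse static_buffer example.
// "your_project_inferencing.h" is a placeholder: the Arduino library export
// generates a header named after the project.
#include <your_project_inferencing.h>

// Paste the raw feature values of one test sample here (copied from the Studio).
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = { 0 };

// Callback that hands slices of the feature buffer to the classifier.
static int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

void setup() {
    Serial.begin(115200);
}

void loop() {
    // Wrap the static feature buffer in a signal_t the classifier can read.
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &raw_feature_get_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false /* debug */);
    if (res != EI_IMPULSE_OK) {
        ei_printf("run_classifier failed (%d)\n", (int)res);
        return;
    }

    // This is where the "DSP / Classification / Anomaly" timings come from.
    ei_printf("Predictions (DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)\n",
              result.timing.dsp, result.timing.classification, result.timing.anomaly);

    delay(1000);
}
```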

Hello @gabriel.brito,

What we provide is an estimate based on a few different things.

We use simulation tools to estimate latency and memory usage while you create a model. On top of that, we regularly run benchmark models on all of the devices to verify those estimates.

Note that these calculations only apply to the model and don’t take into account the latency caused by capturing the data or the memory needed to store / run the rest of the application. So it’s not 100% accurate, but it is usually a good estimate and a useful comparison between devices.

Please also see Inference performance metrics - Edge Impulse Documentation

Just to be sure, did you run the quantized model or the float32 one?

Best,

Louis

I ran the Quantized (int8) model. The estimate on the EI platform is about 59 ms latency (Arm Cortex-M4 64 MHz), while the static_buffer example on the Adafruit nRF52840 Feather Express devboard (Arm Cortex-M4, 64 MHz) outputs (DSP: 0ms, Classification: 13ms, Anomaly: 0ms). Is that normal? Or should this value be slightly higher than the EI platform estimate?

Thanks for the help!

The value that you get from your device is the real value measured during inference. The one provided in the Studio is an estimate, so something might be wrong with our estimation in the Studio. I’ll ask our core engineers to have a look.
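
If you want to cross-check on the device itself, one option is to time run_classifier() with micros() and compare that against the per-stage timings the SDK reports. A minimal fragment, assuming the same static_buffer setup as above (the feature buffer and raw_feature_get_data callback):

```cpp
void loop() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &raw_feature_get_data;

    ei_impulse_result_t result = { 0 };

    // Measure wall-clock time around the whole call (DSP + inference),
    // then compare it with the classification timing reported in whole ms.
    unsigned long t0 = micros();
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false);
    unsigned long t1 = micros();

    if (res == EI_IMPULSE_OK) {
        ei_printf("measured: %lu us total, reported classification: %d ms\n",
                  (unsigned long)(t1 - t0), result.timing.classification);
    }
    delay(1000);
}
```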

Best,

Louis

Thank you very much for your kindness and support. I’ll be waiting for some updates on the studio’s estimate.

Best regards,
GB