Inference performance metrics


I would like to track the inference performance metrics using MLflow (optionally wandb).

So far, the EI API seems to be the way to go for obtaining these performance metrics. Is that correct?

The API call requires some body parameters. Are these related to an audio application? Could you provide some background on the meaning of these body parameters?

Case study.

The case study I am working on is a “motion” application using IMU data, and it is a regression problem.

The goal is to perform a study similar to Inference performance metrics, but for different datasets, models, signal processing approaches, and embedded devices (unoptimized vs. quantized).

One approach is to use the results of the EON Tuner. However, I am currently setting up a data/ML pipeline (MLflow for metrics/artifact tracking) on a local machine, where I will also train some models. For comparison, it would be of interest to obtain metrics from these models as well.

Tips & ideas are welcome.

Note: under Deployment, the EI platform reports RAM, latency, flash, and accuracy. Currently I am using the EI API to obtain the data needed to calculate MSE. For regression, it could be of interest to add Mean Squared Error (MSE) or Mean Absolute Error (MAE) to that table instead of accuracy.
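For reference, this is roughly how I compute MSE and MAE from paired predictions and ground-truth values. The sketch below is plain Python; `y_true` and `y_pred` are illustrative placeholders for the values extracted from the API response, not actual API fields. The results could then be logged with `mlflow.log_metric`.

```python
# Minimal sketch: MSE and MAE for a regression model's outputs.
# `y_true` / `y_pred` stand in for ground-truth and predicted values
# pulled from the model testing results (names are hypothetical).

def mse(y_true, y_pred):
    """Mean Squared Error: average of squared prediction errors."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean Absolute Error: average of absolute prediction errors."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Tiny worked example with made-up values:
y_true = [0.0, 1.0, 2.0, 3.0]
y_pred = [0.1, 0.9, 2.2, 2.8]
print(round(mse(y_true, y_pred), 6))  # 0.025
print(round(mae(y_true, y_pred), 6))  # 0.15
```

The same two numbers could be computed for the unoptimized and quantized deployments separately, which would make the per-model comparison in the table directly meaningful for regression.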


Hello @Joeri,

Currently, the application testing API call is under development; access to this API call will be available when we release the Application Testing feature in the coming months. Apologies for the confusion!

I will forward your feedback regarding MSE/MAE to our engineering team!

– Jenny

@jenny Thanks for the update. Keep me posted.

@Joeri We will publish a blog post when we fully release Application Testing to the public, so keep an eye on the blog! :slight_smile:

– Jenny