MLPerf Tiny Inference Is a New Benchmarking Suite for tinyML Devices

When you’re optimizing a neural network for on-device efficiency, how can you be sure which changes will move your model in the right direction? And if you’re selecting embedded hardware for your machine learning application, how do you know which devices are best suited to your workload? Making the right choices is crucial in tinyML, where every byte and milliwatt counts.

This is a companion discussion topic for the original entry at