How are image classification models generated in the EON Tuner?

Hi team,

I have a doubt regarding how the EON Tuner works, specifically around model selection.
I could see 22 models under the 2D convolution network type.
Are these 22 models predefined? If so, how were they selected?

Ramson Jehu K

Hi @Ramson,

The EON tuner selects models by varying the following parameters based on your selected target device and latency:

  • Model type (transfer learning or 2D convolutional)
  • Transfer learning base model
  • Dropout rate for dropout layers
  • Color channels (RGB or grayscale)
  • Model height (number of 2D convolutional layers)
  • Model width (number of convolutions in 2D convolutional layers)
  • Data augmentation (enabled or disabled)
  • Number of neurons in final dense layer
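To make the idea concrete, here is a minimal sketch of what a search space over parameters like those above could look like. The parameter names and value ranges are illustrative assumptions, not the EON Tuner's actual internal configuration:

```python
# Illustrative only: parameter names and values are assumptions,
# not the real EON Tuner search space.
from itertools import product

search_space = {
    "model_type": ["transfer_learning", "conv2d"],
    "color_channels": ["rgb", "grayscale"],
    "dropout_rate": [0.1, 0.25, 0.5],
    "num_conv_layers": [2, 3],          # "model height"
    "filters_per_layer": [8, 16, 32],   # "model width"
    "data_augmentation": [True, False],
    "dense_neurons": [8, 16],
}

# Enumerate every candidate configuration in the grid.
candidates = [
    dict(zip(search_space, values))
    for values in product(*search_space.values())
]
print(len(candidates))  # 2*2*3*2*3*2*2 = 288 candidate models
```

A tuner would then train and profile a subset of these candidates that fit the selected target device and latency budget.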

In the near future we’ll be releasing an API that will let you customize the EON Tuner search space; using this API you could, for example, exclude all ‘RGB’ models from the search space.
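As a rough sketch of what such a constraint could look like (the configuration format here is a guess for illustration, not the announced API):

```python
# Hypothetical candidate configurations; the keys are assumptions.
candidates = [
    {"color_channels": "rgb", "dropout_rate": 0.25},
    {"color_channels": "grayscale", "dropout_rate": 0.25},
    {"color_channels": "rgb", "dropout_rate": 0.5},
]

# Excluding all RGB models from the search space:
grayscale_only = [c for c in candidates if c["color_channels"] != "rgb"]
print(len(grayscale_only))  # 1
```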

I hope this answers your question. Please let me know if anything is unclear.

Best regards,


Thanks @mathijs, now I get it.

This would be very helpful. I really like the EON Tuner as it makes trial and error much faster, but it would be really good to filter its generated models not only by inference time and target device but also by settings like window size (for audio classification), frame length, etc. Glad you are working on it!