K-fold cross validation

Hello,
Have you thought about adding automated k-fold cross validation to Edge Impulse?

Explanation:
A Gentle Introduction to k-fold Cross-Validation - MachineLearningMastery.com

If you have a machine learning model and some data, you want to know how well the model generalizes. You can split your data into a training set and a test set, train the model on the training set and evaluate it on the test set. But then you have evaluated the model only once, and you cannot be sure whether a good result is down to luck. You want to evaluate the model multiple times so you can be more confident about the model design.

This approach involves randomly dividing the set of observations into k groups, or folds, of approximately equal size. The first fold is treated as a validation set, and the method is fit on the remaining k − 1 folds. This is repeated k times so that each fold serves as the validation set exactly once, and the k scores are averaged.
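As a minimal sketch of that procedure (not Edge Impulse specific, using scikit-learn's KFold on toy data and a simple classifier just to illustrate the loop):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

# Toy data standing in for real features/labels.
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, 100)

# Split into k folds; each fold is used once as the validation set.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train_idx, val_idx in kf.split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[val_idx], y[val_idx]))

# Average the k scores to get a less luck-dependent estimate.
print("mean accuracy:", np.mean(scores), "+/-", np.std(scores))
```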

I know that Edge Impulse metadata nicely lets you control the train/validation split, but it only ensures that samples sharing a metadata value stay on the same side of the split. It cannot be used to force samples with a particular metadata value into the validation set:

"Given a metadata key, samples with the same value for that key will always be on the same side of the validation split."

In my use case I have quite limited training and testing data, and whether I get good performance depends on luck in how the samples are divided.

When I am lucky, my training set contains a lot of variation and the test set has easy samples. With k-fold cross validation I could avoid this luck factor.

This is of course possible to do outside Edge Impulse, but it requires manually shuffling training and test set samples around.
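For example, the fold rotation itself could be scripted locally along these lines (a hypothetical helper assuming samples are stored as local files; the file layout and helper name are my assumptions, and the resulting train/test lists would still have to be uploaded to the right categories by hand or via the uploader):

```python
from pathlib import Path
from sklearn.model_selection import KFold

def fold_split(sample_dir: str, k: int, fold_index: int, seed: int = 0):
    """Return (train_files, test_files) for one fold of a k-fold split."""
    files = sorted(Path(sample_dir).glob("*.json"))  # assumed sample format
    kf = KFold(n_splits=k, shuffle=True, random_state=seed)
    train_idx, test_idx = list(kf.split(files))[fold_index]
    return [files[i] for i in train_idx], [files[i] for i in test_idx]

# Example: third of five folds.
train_files, test_files = fold_split("samples/", k=5, fold_index=2)
```

Having this automated inside Edge Impulse would remove the need for that manual shuffling entirely.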