Splitting a model between multiple devices for distributed inference using Edge Impulse

I am curious whether Edge Impulse provides enough fine-grained control over a model that I could split a trained and optimized model across two devices. I am doing research on distributed ML inference techniques and have been running into issues with the ESP-DL toolset. Is it possible to do this type of work on Edge Impulse?
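Concretely, the kind of split I have in mind looks like the sketch below. This is plain NumPy, not ESP-DL or Edge Impulse code, and the layer shapes and random weights are arbitrary placeholders; it only illustrates cutting a forward pass at a layer boundary so that one device computes the early layers and ships the intermediate activation to a second device.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer MLP standing in for a trained model (random placeholder weights).
W1, b1 = rng.standard_normal((16, 32)), rng.standard_normal(32)
W2, b2 = rng.standard_normal((32, 4)), rng.standard_normal(4)

def relu(x):
    return np.maximum(x, 0.0)

def full_inference(x):
    # Reference: the whole model on one device.
    return relu(x @ W1 + b1) @ W2 + b2

def device_1(x):
    # First device runs the early layers and transmits the activation.
    return relu(x @ W1 + b1)

def device_2(h):
    # Second device receives the activation and finishes the forward pass.
    return h @ W2 + b2

x = rng.standard_normal((1, 16))
h = device_1(x)                       # this (1, 32) tensor would cross the wire
assert np.allclose(full_inference(x), device_2(h))
```

The split point determines the size of the tensor that has to move between the two devices, which is the main knob I want control over.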

Hello @KetAvery,

Unfortunately, this is not something we support today.
For federated learning, you can have a look at https://flower.ai/. I had some contact with them in the past but have not had time to investigate further.