Does this platform support deploying two models to the same MCU at the same time, and switching between them at inference time?