Eye-on-Hand Cobot position error estimation

I wonder if EI could be used for an application that mimics visual servoing. The basic idea: we have a camera mounted on a cobot end effector and a photo of our target taken from the “ideal” cobot position. We would then create a dataset of translational and rotational deviations (x, y, z, Rx, Ry, Rz) to train the network. Does that make sense? The goal is to estimate the 6-DOF error just by looking at a random photo and then send a correction command to the cobot.

Hi @Behi,

At this time, Edge Impulse cannot be used for visual servoing, because that requires a multi-dimensional regression output, and Edge Impulse currently supports only a single regression output. You would need to use another ML tool to train such a model. Note that we do offer BYOM (Bring Your Own Model) if you want to optimize your pre-trained model to run on an edge device.
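
For reference, here is a minimal sketch (in plain Keras, outside Edge Impulse) of what a six-output regression model for this task could look like. The input resolution, MobileNetV2 backbone, and layer sizes are illustrative assumptions, not a tested design:

```python
# Minimal sketch: a CNN that regresses all six pose-error components at once.
# Assumes 160x160 RGB images and labels shaped (N, 6) = (x, y, z, Rx, Ry, Rz).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_pose_error_model(input_shape=(160, 160, 3)):
    # A small backbone keeps the model plausible for edge deployment
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet"
    )
    x = layers.GlobalAveragePooling2D()(backbone.output)
    x = layers.Dense(128, activation="relu")(x)
    # Six linear outputs: translational (x, y, z) and rotational (Rx, Ry, Rz) error
    outputs = layers.Dense(6, activation="linear")(x)
    return models.Model(backbone.input, outputs)

model = build_pose_error_model()
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# model.fit(images, pose_errors, epochs=50)  # pose_errors: (N, 6) array
```

One practical note: since translations and rotations are in different units, it usually helps to normalize each label component (or weight the loss per output) so no single axis dominates training.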