I teach an after-school group and we are connecting RC cars to WiFi Arduinos. The original program three years ago had the Arduino controlling the car and connecting over WiFi to a cell phone through a WebSocket; the phone's browser then used TensorFlow.js PoseNet to read a person's limbs and control what the car did, video here.
Now I have the Portenta with the Vision Shield kind of working, thread here, and could potentially skip the whole WebSocket/cell-phone-browser/TensorFlow.js setup. We can already do object recognition on a few objects, or analyze one object for size (volume). The x, y location of an object would be great, as we could translate that into where the car should go, or into commands for the car, but presently multi-object detection is too slow; a fix is in the works, see thread here.
Does anyone know of anything else we could do with the cars while waiting for faster multi-object detection? The students have worked on symbols that represent car commands, with the idea of trying to keep the car on a track.
For example: “L” = go left, “R” = go right, no symbol = stop, “S” = speed up. Not really sure if that will work well.
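The symbol-to-command idea could be sketched roughly like this. This is just a hypothetical outline, not working Portenta code: the label names ("L", "R", "S"), the confidence threshold, and the `Command` enum are all my own assumptions about how the classifier output might be wired up.

```cpp
#include <cassert>
#include <cstring>

// Hypothetical sketch: map the classifier's top label to a car command.
// A low score (or no label at all) is treated as "no symbol seen" = stop.
enum Command { STOP, LEFT, RIGHT, SPEED_UP };

Command commandForLabel(const char* label, float score, float threshold = 0.6f) {
    if (label == nullptr || score < threshold) return STOP;  // not confident: stop
    if (strcmp(label, "L") == 0) return LEFT;
    if (strcmp(label, "R") == 0) return RIGHT;
    if (strcmp(label, "S") == 0) return SPEED_UP;
    return STOP;  // unknown symbol: stop as the safe default
}
```

Treating "no confident detection" as stop would also act as a safety fallback if the car drives off the track and loses sight of the symbols.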
It would be good to have the car turn towards a stronger classification score, so it could follow a symbol, but that response doesn’t seem to be very linear. Any other ideas? Is there some machine-learning technique I am not aware of that might apply here with TinyML and RC cars?
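One way the "turn towards the stronger score" idea is sometimes handled is a simple proportional controller on the score difference, which at least makes the steering response smooth rather than bang-bang. A minimal sketch, assuming the classifier can be run separately on the left and right halves of the frame (that split, and the function name, are my assumptions):

```cpp
#include <cassert>

// Hypothetical proportional-steering sketch: steer toward whichever half of
// the frame has the stronger classification score for the followed symbol.
// Returns a steering value in [-1, 1]: negative = turn left, positive = right.
float steerTowardScore(float scoreLeft, float scoreRight) {
    float total = scoreLeft + scoreRight;
    if (total < 1e-6f) return 0.0f;           // symbol not seen: go straight
    return (scoreRight - scoreLeft) / total;  // normalized score difference
}
```

Because the output is normalized, equal scores give 0 (straight ahead) and the turn sharpens as the imbalance grows, which might help with the non-linearity you noticed when thresholding a single score.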