Character recognition. How to start?

I’m just getting acquainted with Edge and ML…
I was wondering if I can fix a meter’s character recognition.
Is it reasonable to think of using Edge Impulse to achieve this goal?
Comments welcome…
Thanks

Hi @FerV, what do you mean by ‘fix a meter’s character recognition’? Are you doing something like OCR on meter readings?

Thanks for answering, and sorry for my English :(.
Yes, OCR to get the data from a meter. For bandwidth reasons it is difficult to send an image to the cloud for processing.

So you’ll need a CV pipeline to extract the characters from the image first, e.g. some way of detecting the region of interest (in this example the license plate) and then cutting out the individual characters: https://stackoverflow.com/questions/58802279/extract-numbers-and-letters-from-license-plate-image-with-python-opencv Then you’ll want to label those individual character images (0-9 and A-Z) and upload them to Edge Impulse. From there you can train a model. There are probably datasets for this available already (maybe https://figshare.com/articles/dataset/Character_classification_data_for_license_plates/3113449, I haven’t looked at it).
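Just to make the segmentation step concrete, here’s a rough, untested sketch of what that OpenCV stage could look like. The file names and the size/threshold values are placeholders you’d have to tune for your meter’s display, not anything specific to Edge Impulse:

```python
# Rough sketch: cut candidate characters out of a meter photo with OpenCV.
# "meter.jpg" and the size filters below are assumptions -- tune for your images.
import cv2

img = cv2.imread("meter.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)            # smooth sensor noise

# Binarize; Otsu picks the threshold, inverted so the digits come out white
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Each white blob is a candidate character
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

chars = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    # Heuristic size filter to drop specks and the display frame
    if 10 < w < 100 and 20 < h < 150:
        chars.append((x, cv2.resize(thresh[y:y + h, x:x + w], (32, 32))))

# Sort left-to-right so crops come out in reading order, then save them
chars.sort(key=lambda item: item[0])
for i, (_, crop) in enumerate(chars):
    cv2.imwrite(f"char_{i}.png", crop)              # these crops get labeled and uploaded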

The downside is that while running the model on-device is easy, you’ll also need the CV pipeline to run on the device. The OpenMV board is probably the easiest way to prototype this, as you can probably use much the same code as in OpenCV, and you can already export your model from Edge Impulse for use with OpenMV. Once you have it working you could port the CV code to a lower-class device if needed.
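For reference, running the exported model on the OpenMV side looks roughly like the sketch below. This assumes the older OpenMV `tf` API (newer firmware moved to an `ml` module) and that you’ve copied `trained.tflite` and `labels.txt` from the Edge Impulse OpenMV export onto the board; the example script that export generates is the authoritative version:

```python
# Minimal sketch of classifying frames on an OpenMV board with an exported model.
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

labels = [line.rstrip("\n") for line in open("labels.txt")]
net = tf.load("trained.tflite", load_to_fb=True)    # keep the model in the frame buffer

while True:
    img = sensor.snapshot()
    # In practice you'd run your character-segmentation code first and classify
    # each character crop (via the roi argument) instead of the whole frame.
    for obj in net.classify(img):
        scores = obj.output()
        best = scores.index(max(scores))
        print(labels[best], scores[best])
```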