Just trying what I believe is a simple test.
I’ve generated the same simple black-and-white 2D object, but at different scales and different rotations.
I have labelled the images and am trying to train the system to recognise
whether the object is ‘pointing’ (rotated) toward the north-west, north, or north-east segment.
I have set this up with what appears to be the standard flow: Image Data -> Image -> Object Detection.
The feature generator on the image view suggests there should be very good separation between the classes.
However, I get poor results: 50% accuracy on 100 test units after training on 300 training units, which is not far above the ~33% chance level for three classes.
I was expecting much higher accuracy for what I thought would be a simple test case.
The data has been checked and is labelled correctly.
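For anyone wanting to reproduce or sanity-check the setup: this is a rough sketch (my own illustration, not the platform's API) of the kind of angle-to-segment mapping I used when labelling, assuming 0° means pointing due north and positive angles are clockwise.

```python
def heading_segment(angle_deg):
    """Map a rotation angle (degrees, 0 = north, positive = clockwise)
    to one of the three label segments. Angles outside the three
    northern segments return None (not used in this test set)."""
    # Normalise to the range (-180, 180]
    a = ((angle_deg + 180) % 360) - 180
    if -67.5 <= a < -22.5:
        return "north-west"
    if -22.5 <= a <= 22.5:
        return "north"
    if 22.5 < a <= 67.5:
        return "north-east"
    return None

# Quick self-check of the boundaries
assert heading_segment(0) == "north"
assert heading_segment(-45) == "north-west"
assert heading_segment(45) == "north-east"
assert heading_segment(120) is None
```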
Just wondering if I have missed an obvious step or made some fundamental mistake.
An example of a training image is here.
projectid is 28319