Help with Object Recognition Accuracy

Just trying what I believe is a simple test.

I’ve generated the same simple black-and-white 2D object, but at different scales and different rotations.

I have labelled the data and am trying to train so that the system will recognise
whether the object is ‘pointing’ (rotated) towards the north-west segment, north, or north-east.
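For illustration, the labelling scheme above can be sketched as a function from rotation angle to class. This is a minimal sketch with assumed conventions (0° = north, positive = clockwise, ±22.5° segment boundaries); the actual labelling in the project may differ.

```python
# Hypothetical helper: map a rotation angle (degrees, 0 = north,
# positive = clockwise) to the three target classes.
# The +/-22.5 degree segment boundaries are an assumption, not from the post.
def segment_label(angle_deg: float) -> str:
    a = ((angle_deg + 180) % 360) - 180  # normalise to (-180, 180]
    if -22.5 <= a <= 22.5:
        return "north"
    if 22.5 < a <= 67.5:
        return "north_east"
    if -67.5 <= a < -22.5:
        return "north_west"
    return "other"

print(segment_label(0))    # north
print(segment_label(45))   # north_east
print(segment_label(-30))  # north_west
```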

I have set up with what appears to be the standard flow. Image Data -> Image -> Object Detection

The feature generator on the image view yields what appears to be very good separation.

However, I get poor results: 50% accuracy on 100 test units after training on 300 training units.

I was expecting much higher accuracy for what I thought would be a simple test case.

Data has been checked and is labelled correctly.

Just wondering if I have missed an obvious step or have made some fundamental mistake.

An example of a training image is here.
rotate_left.293

projectid is 28319

@gjsmith My initial thought was that this would not be a good fit for any of the transfer learning blocks, as they are trained on a larger dataset of photos, and that does not translate well to these abstract shapes. So my suggestion would be to use a non-object-detection flow, and select a ‘Neural network’ block instead of the transfer learning block.

However, that model does not converge either (whatever architecture I try), so I’m wondering if there’s something else here. Pinging @dansitu and @matkelcey from our ML team. E.g. the result from the normal image block:

[image: training result from the normal image block]
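For reference, a from-scratch classifier of the kind the ‘Neural network’ block builds could be sketched in Keras roughly as follows. This is a minimal sketch, assuming 96×96 greyscale inputs and three output classes (NW/N/NE); the layer sizes are assumptions, not Edge Impulse’s actual architecture.

```python
# Minimal from-scratch CNN sketch for the 3-class "pointing direction" task.
# Input size (96x96x1) and layer widths are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(96, 96, 1)),       # greyscale input
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(3, activation="softmax"),  # NW / N / NE
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```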

Ahh, facepalm. Thanks. I thought the transfer learning would work because I assumed the pre-trained layers were capturing things like edges, curves and segments etc., and so would transfer to my simple black-and-white test case. However, what you are saying makes sense.

I’ll wait on comments from @dansitu and @matkelcey

Just an update:

Outside of the Edge Impulse framework, I have managed to transfer-learn on the same images using Keras/TensorFlow with a VGG16 model and ImageNet weights.

The results are excellent.
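The Keras/TensorFlow setup described above might look roughly like this. It is a sketch under assumptions: the post does not give the input size or the classification head, so the 224×224 input, the frozen base, and the dense head are illustrative choices (VGG16 expects 3-channel input, so black-and-white images would be stacked to RGB).

```python
# Sketch of VGG16 transfer learning with ImageNet weights, as described above.
# Head layers and input size are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
base.trainable = False  # freeze the ImageNet-trained convolutional layers

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),  # NW / N / NE classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=..., validation_data=...)
```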

So I’m not sure why I can’t get the same results using the SSD model that Edge Impulse uses for this task. Could it be something to do with the SSD model?