I have created a model of a part I want my Raspberry Pi to recognize. If I take an image from my camera and upload it into the Edge Impulse Studio Model Testing feature, I get a result of 100%, but when I ask my Raspberry Pi to analyze the same image, it comes back with a completely different result, in some cases 0%. My question is: why does the Edge Impulse Studio give different results to a deployment when I am using the same image to test both systems and both systems are using the same AI model?
I think I have now found how to switch between the int8 and float32 models on the Deployment page. I would just say that the option is not very well displayed; I didn't realize that the text "Or, show all Linux deployment options on this page" was a clickable link, as it is not highlighted and does not look like an option button. I have attached a screenshot.
I will now test my Python program against the Studio and see how they compare.
I have now run a test on my project (project number 194170), which has approx. 5000 images in the data set and approx. 1000 in the test set. The Edge Impulse Studio Transfer Learning page states that the model is 100% accurate, and all the test data passes with a result of 100%. However, when I run the same model and test images on a Raspberry Pi using the Python "runner.classify" function, I get an accuracy of approx. 10%.
I have worked out that the code originally listed on your Edge Impulse Linux SDK for Python page, which I was using, has since been superseded by your new code for classifying a still image. Once I switched to the new still-image code, my Raspberry Pi's classification results match the Studio's.