When running my FOMO project on the Raspberry Pi 3 A+ I get noticeably different results than on the website. To debug this, I tried printing the extracted features, and the features extracted from the photo on the Raspberry Pi differ hugely from the ones on the website.
On the website the values are in the range 0 to 1, but the Raspberry Pi outputs values up to 16777215, which I believe corresponds to a 1 (a white pixel).
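If I'm reading it right, 16777215 is 0xFFFFFF, i.e. a 24-bit packed RGB value rather than a normalized float. A quick sanity check I did (my own sketch, not Edge Impulse code):

```python
# 16777215 == 0xFFFFFF: a packed 24-bit RGB pixel with all channels at 255
packed = 16777215

# Unpack the individual 8-bit channels
r = (packed >> 16) & 0xFF
g = (packed >> 8) & 0xFF
b = packed & 0xFF

print(r, g, b)    # 255 255 255, i.e. a white pixel
print(r / 255.0)  # 1.0, matching the 0..1 range shown on the website
```

So the Pi seems to be emitting raw packed pixels where the website shows normalized channel values.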
Is that a problem on my side?
The code I wrote to classify a 256x256 image in BGR format (it seems the grayscale pictures have to be in BGR, see Raspberry Pi camera script using grayscale - #2 by shawn_edgeimpulse) is here:
```python
import os
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

model = "modelfile.eim"
dir_path = os.path.dirname(os.path.realpath(__file__))
modelfile = os.path.join(dir_path, model)

runner = ImageImpulseRunner(modelfile)
model_info = runner.init()
print('Loaded runner for "' + model_info['project']['owner'] + ' / ' + model_info['project']['name'] + '"')
labels = model_info['model_parameters']['labels']

# img is the 256x256 BGR image loaded earlier
features, cropped = runner.get_features_from_image(img)
print(len(features))
results = runner.classify(features)

img2 = img.copy()
classification_result = []
bounding_boxes = results["result"]["bounding_boxes"]
for bb in bounding_boxes:
    img2 = cv2.rectangle(img2, (bb['x'], bb['y']),
                         (bb['x'] + bb['width'], bb['y'] + bb['height']),
                         (255, 0, 0), 1)
    print(bb)
    classification_result.append(bb)

if runner:
    runner.stop()
```