Hello,
I am getting different accuracy results between the Python image classification function and the Edge Impulse web console. I have fed the same image into both systems: on the web I get 100% accuracy, but the Python function reports only around 46%–60%. Can anyone help?
Thanks.
Hi @ross6699,
Could you share your project ID so we can try to replicate the issue? Also, could you share the code you are using to test the linux_runner, along with the test image(s) you tried?
Hello,
My project ID is 105256, and the test image I am using is called Test1.jpg; I have also tried the file Bad17.jpg. Since I last emailed you I have added more images to the model, and the web and Python results are starting to converge. My question is why they produce different results at all. I understand that I do not have many images in my project and the model may be overfitting, but I would expect both the Web Studio and the Python code to report the same results.
Thanks.
Ross.
The code I am using is:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import cv2
import os
import sys, getopt
import numpy as np

def evaluate_image(cv2, runner, model_info, labels, camera_image):
    show_camera = True
    img = cv2.imread("/mnt/ramdisk/cropped_image.jpg")
    features = []

    EI_CLASSIFIER_INPUT_WIDTH = runner.dim[0]
    EI_CLASSIFIER_INPUT_HEIGHT = runner.dim[1]

    in_frame_cols = img.shape[1]
    in_frame_rows = img.shape[0]

    # Scale so the image fully covers the model input, then centre-crop.
    factor_w = EI_CLASSIFIER_INPUT_WIDTH / in_frame_cols
    factor_h = EI_CLASSIFIER_INPUT_HEIGHT / in_frame_rows
    largest_factor = factor_w if factor_w > factor_h else factor_h
    resize_size_w = int(largest_factor * in_frame_cols)
    resize_size_h = int(largest_factor * in_frame_rows)
    resize_size = (resize_size_w, resize_size_h)
    resized = cv2.resize(img, resize_size, interpolation=cv2.INTER_AREA)

    crop_x = int((resize_size_w - resize_size_h) / 2) if resize_size_w > resize_size_h else 0
    crop_y = int((resize_size_h - resize_size_w) / 2) if resize_size_h > resize_size_w else 0
    crop_region = (crop_x, crop_y, EI_CLASSIFIER_INPUT_WIDTH, EI_CLASSIFIER_INPUT_HEIGHT)
    cropped = resized[crop_region[1]:crop_region[1] + crop_region[3],
                      crop_region[0]:crop_region[0] + crop_region[2]]

    if runner.isGrayscale:
        cropped = cv2.cvtColor(cropped, cv2.COLOR_BGR2GRAY)
        pixels = np.array(cropped).flatten().tolist()
        for p in pixels:
            features.append((p << 16) + (p << 8) + p)
    else:
        pixels = np.array(cropped).flatten().tolist()
        # OpenCV stores pixels as BGR; pack each pixel as 0xRRGGBB.
        for ix in range(0, len(pixels), 3):
            b = pixels[ix + 0]
            g = pixels[ix + 1]
            r = pixels[ix + 2]
            features.append((r << 16) + (g << 8) + b)

    res = runner.classify(features)

    scores = [0] * len(labels)
    if "classification" in res["result"].keys():
        print('Result (%d ms.) ' % (res['timing']['dsp'] + res['timing']['classification']), end='')
        for loop, label in enumerate(labels):
            score = res['result']['classification'][label]
            print('%s: %.2f\t' % (label, score), end='')
            scores[loop] = score
        print('', flush=True)

    return labels, scores
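As an aside, the per-pixel packing loop in the RGB branch can be vectorised with NumPy, which is noticeably faster on larger inputs. This is just a sketch of the same 0xRRGGBB packing, assuming a H x W x 3 BGR uint8 array as returned by cv2.imread; it is not an official Edge Impulse helper. Small confidence differences between Studio and a local script can also come from the resize step (e.g. the interpolation method used), so it is worth keeping the preprocessing identical on both sides.

```python
import numpy as np

def pack_bgr_to_rgb_ints(cropped):
    """Pack an H x W x 3 BGR uint8 image into a flat list of 0xRRGGBB ints."""
    px = cropped.astype(np.uint32)
    b, g, r = px[:, :, 0], px[:, :, 1], px[:, :, 2]
    return ((r << 16) | (g << 8) | b).flatten().tolist()

# Quick check on a tiny 1x2 "image": a pure-blue and a pure-red pixel (BGR order).
sample = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
print(pack_bgr_to_rgb_ints(sample))  # [255, 16711680] i.e. 0x0000FF, 0xFF0000
```

This produces the same feature list as the `for ix in range(0, len(pixels), 3)` loop, so it can be dropped in without changing the classifier input.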