How to prepare data for inference on the device

Hi,

I’ve used this example

and it works fine:

My project on EI:
https://studio.edgeimpulse.com/studio/319646

But now I have no clue how to integrate my camera into it.
Currently, during debugging, I send 8-bit grayscale data from the PC (to emulate the camera).

I can’t find a way to convert grayscale to the out_ptr format; all the examples are for RGB. Should I use RGB?

How can I put my grayscale data into

    signal_t signal;
    signal.total_length = sizeof(features) / sizeof(features[0]);
    signal.get_data = &get_feature_data;

from

#define FRAME_WIDTH (80)
#define FRAME_HEIGHT (60)

static volatile uint8_t camera_buff[FRAME_WIDTH*FRAME_HEIGHT];

My current flow: 80x60 grayscale buffer -> ei-inference.

How can I extract features from the image?

Thanks,
Enko


Now it looks like I’ve found the way to feed the ML network:

int get_camera_data(size_t offset, size_t length, float *out_ptr)
{
    size_t out_ptr_ix = 0;
    uint8_t r, g, b;

    // copy `length` pixels into out_ptr, starting at `offset` into the frame
    while (out_ptr_ix < length)
    {
        // grab the grayscale value (note: offset + out_ptr_ix, not just offset)
        uint8_t pixel = camera_buff[offset + out_ptr_ix];

        // expand grayscale to r/g/b
        mono_to_rgb(pixel, &r, &g, &b);

        // pack r/g/b into the float format the classifier expects
        out_ptr[out_ptr_ix] = (float)((r << 16) + (g << 8) + b);

        // and go to the next pixel
        out_ptr_ix++;
    }
    return 0;
}

signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
signal.get_data = &get_camera_data;
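The classifier is then called with that signal. Just as a sketch, following the standard run_classifier() flow from the Edge Impulse examples (the include path depends on how you exported the library, and error handling is trimmed to the essentials):

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

void classify_frame(void)
{
    ei_impulse_result_t result = { 0 };

    signal_t signal;
    signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
    signal.get_data = &get_camera_data;

    // run_classifier() pulls the pixels through get_camera_data() on demand
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
    if (err != EI_IMPULSE_OK) {
        ei_printf("run_classifier failed (%d)\n", err);
        return;
    }

    // print the confidence for each class
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("%s: %.5f\n", result.classification[ix].label,
                  result.classification[ix].value);
    }
}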

But my prediction is incorrect; I always see a confidence of about 0.98 for class 2.

I used a 28x28 dataset for model training, and it works fine if I feed in raw data.

But when I feed in 80x60 images, I get incorrect results.

My Image data settings: [screenshot of the Image block settings]

What am I doing wrong?

Hello @enko,

I’d suggest you use a square image ratio to train your model, especially if you used a pre-trained transfer learning model.
On the device, you can then resize the image to match the input shape of your impulse.
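For example, a rough on-device nearest-neighbor resize could look like the sketch below (resize_grayscale_nn and model_buff are hypothetical names, not part of the SDK, and this simple approach does not preserve the aspect ratio, so cropping to a square first may work better):

#include <stdint.h>
#include <stddef.h>

// Nearest-neighbor resize of a grayscale image (sketch only)
static void resize_grayscale_nn(const uint8_t *src, size_t src_w, size_t src_h,
                                uint8_t *dst, size_t dst_w, size_t dst_h)
{
    for (size_t y = 0; y < dst_h; y++) {
        size_t sy = y * src_h / dst_h;          // nearest source row
        for (size_t x = 0; x < dst_w; x++) {
            size_t sx = x * src_w / dst_w;      // nearest source column
            dst[y * dst_w + x] = src[sy * src_w + sx];
        }
    }
}

// e.g. resize the 80x60 frame into a buffer that matches the impulse input
// (model_buff is assumed to hold EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT bytes):
// resize_grayscale_nn((const uint8_t *)camera_buff, FRAME_WIDTH, FRAME_HEIGHT,
//                     model_buff, EI_CLASSIFIER_INPUT_WIDTH, EI_CLASSIFIER_INPUT_HEIGHT);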

Best,

Louis

Hello @louis,

Thanks for the suggestion. I’ll try to retrain my model.
Unfortunately, I can’t use transfer learning, as the training process takes longer than the free EI tier allows.

Sorry, but I don’t get why I need to resize my image, as I set the input width and height to match the image.

Also, I’m trying to use the code example from

// Callback: fill a section of the out_ptr buffer when requested
static int get_signal_data(size_t offset, size_t length, float *out_ptr) 
{
  uint8_t c;
  float pixel_f;

  // Loop through requested pixels, copy grayscale to RGB channels
  for (size_t i = 0; i < length; i++) 
  {
    c = (image_buf + offset)[i];
    pixel_f = (c << 16) + (c << 8) + c;
    out_ptr[i] = pixel_f;
  }

  return EIDSP_OK;
}
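Note that this callback assumes image_buf already holds a grayscale frame at the impulse's input resolution, so the 80x60 camera frame has to be resized (or cropped) into it first. As a sketch, reusing the names above (the image_buf declaration is an assumption; it may live elsewhere in your project):

static uint8_t image_buf[EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT];

// resize/crop the 80x60 camera frame into image_buf here, then:
signal_t signal;
signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
signal.get_data = &get_signal_data;
// ...and pass &signal to run_classifier() as in the earlier snippet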