Raspberry Pi 4 Object Detection - A Camera Question

Hello Edge Impulse folks!

I’ve been playing about with the audio side of EI for a bit, and now I’m moving on to image/video-based projects. From what I can see, the Raspberry Pi 4 is a supported development board for object detection, which is great, especially since the camera module is also supported.

I’d like to ask whether it’s possible to take a photo with the camera module during object detection, using the standard "raspistill" command.

I’m currently working on a volunteer project for the RSPB to help detect birds from a camera feed using ML, so Edge Impulse and the Raspberry Pi 4 are perfect for this. Ideally when a bird is detected I would like to save an image of what the camera module is seeing.

Also, whilst I’m here, what’s the size limit (MB/GB) for training and testing data in Edge Impulse? Is there a way to increase it?

Thanks in advance for any help that you can offer me!

PS: I enjoyed the talks Daniel Situnayake did over at Wildlabs; that’s how I found out about Edge Impulse. The tutorials by Jan Jongboom are spot on too, just what I needed to get started on the right track. Kudos for the attention to detail!

Hi @TechDevTom

There are examples of classifying images - all of which work on the Pi - in Go, Python, Node and C++ here: https://docs.edgeimpulse.com/docs/edge-impulse-for-linux. You could also snap a photo and then classify it - see the camera example for Python, for instance; you can just load the image from disk instead of grabbing frames.
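
On the raspistill side: a rough, untested sketch of that flow (the filename here is just illustrative) would be to shell out for the capture and then read the file back in, e.g.:

import subprocess
import cv2

def capture_still(path='capture.jpg'):
    # -t 1 takes the photo almost immediately, -o sets the output file
    subprocess.run(['raspistill', '-t', '1', '-o', path], check=True)
    return cv2.imread(path)

img = capture_still()
# ...then build the feature array and call runner.classify() the same way the Python camera example does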

Also, whilst I’m here, what’s the size limit (MB/GB) for training and testing data in Edge Impulse? Is there a way to increase it?

The limit is 4GB on the intermediate file produced by a processing block. If you run into it, just reach out to hello@edgeimpulse.com and they can raise it for you.

Hey @janjongboom! Thanks for replying.

I’ve taken a look at the Linux docs and they seem to be what I’m looking for. I’m taking it you mean this camera example? https://github.com/edgeimpulse/linux-sdk-python/blob/master/examples/image/classify.py

If so, am I right in thinking I’d need to modify the code from line 94 down in order to feed the classifier a still image rather than the video feed? My Python isn’t too bad, but I’m still learning about ML. The loop looks like it’s taking images from the camera feed and then processing them/waiting. Could I take this out and, like you said, load an image from disk? How might that call to the classifier look?

And ok, noted about the size limit, I don’t think I’ll reach 4GB worth of image files for this initial test I’m doing, but if the project scales up in the future, then I’ll most likely get in touch about it!

Hi @TechDevTom yep, correct. You can use cv2.imread to read a file from disk, then just feed it into the classifier. Here’s an end-to-end example (a bit convoluted, as it does the image resizing/cropping so you can feed in arbitrarily sized images):

#!/usr/bin/env python

import cv2
import os
import sys, getopt
import numpy as np
from edge_impulse_linux.image import ImageImpulseRunner

runner = None
show_camera = True

def help():
    print('python classify.py <path_to_model.eim>')

def main(argv):
    try:
        opts, args = getopt.getopt(argv, "h", ["help"])
    except getopt.GetoptError:
        help()
        sys.exit(2)

    for opt, arg in opts:
        if opt in ('-h', '--help'):
            help()
            sys.exit()

    if len(args) == 0:
        help()
        sys.exit(2)

    model = args[0]

    dir_path = os.path.dirname(os.path.realpath(__file__))
    modelfile = os.path.join(dir_path, model)

    print('MODEL: ' + modelfile)

    with ImageImpulseRunner(modelfile) as runner:
        try:
            model_info = runner.init()
            print('Loaded runner for "' + model_info['project']['owner'] + ' / ' + model_info['project']['name'] + '"')
            labels = model_info['model_parameters']['labels']

            img = cv2.imread('/Users/janjongboom/Desktop/jan.jpg')  # replace with the path to your own image

            features = []

            EI_CLASSIFIER_INPUT_WIDTH = runner.dim[0]
            EI_CLASSIFIER_INPUT_HEIGHT = runner.dim[1]

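            # Scale the image so it covers the model's input size (using the larger scale factor),
            # then centre-crop to exactly EI_CLASSIFIER_INPUT_WIDTH x EI_CLASSIFIER_INPUT_HEIGHT.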
            in_frame_cols = img.shape[1]
            in_frame_rows = img.shape[0]

            factor_w = EI_CLASSIFIER_INPUT_WIDTH / in_frame_cols
            factor_h = EI_CLASSIFIER_INPUT_HEIGHT / in_frame_rows

            largest_factor = factor_w if factor_w > factor_h else factor_h

            resize_size_w = int(largest_factor * in_frame_cols)
            resize_size_h = int(largest_factor * in_frame_rows)
            resize_size = (resize_size_w, resize_size_h)

            resized = cv2.resize(img, resize_size, interpolation = cv2.INTER_AREA)

            crop_x = int((resize_size_w - resize_size_h) / 2) if resize_size_w > resize_size_h else 0
            crop_y = int((resize_size_h - resize_size_w) / 2) if resize_size_h > resize_size_w else 0

            crop_region = (crop_x, crop_y, EI_CLASSIFIER_INPUT_WIDTH, EI_CLASSIFIER_INPUT_HEIGHT)

            cropped = resized[crop_region[1]:crop_region[1]+crop_region[3], crop_region[0]:crop_region[0]+crop_region[2]]

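            # The classifier expects each pixel as a packed 24-bit RGB integer (0xRRGGBB);
            # for grayscale models the same intensity is repeated in all three channels.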
            if runner.isGrayscale:
                cropped = cv2.cvtColor(cropped, cv2.COLOR_BGR2GRAY)
                pixels = np.array(cropped).flatten().tolist()

                for p in pixels:
                    features.append((p << 16) + (p << 8) + p)
            else:
                cv2.imwrite('test.jpg', cropped)  # optional: save the cropped frame so you can inspect what the model sees

                pixels = np.array(cropped).flatten().tolist()

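                # OpenCV stores images in BGR order, so reorder to RGB when packing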
                for ix in range(0, len(pixels), 3):
                    b = pixels[ix + 0]
                    g = pixels[ix + 1]
                    r = pixels[ix + 2]
                    features.append((r << 16) + (g << 8) + b)

            res = runner.classify(features)

            if "classification" in res["result"].keys():
                print('Result (%d ms.) ' % (res['timing']['dsp'] + res['timing']['classification']), end='')
                for label in labels:
                    score = res['result']['classification'][label]
                    print('%s: %.2f\t' % (label, score), end='')
                print('', flush=True)

                if (show_camera):
                    cv2.imshow('edgeimpulse', img)
                    if cv2.waitKey(1) == ord('q'):
                        return

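            # object detection models return bounding boxes instead of a single classification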
            elif "bounding_boxes" in res["result"].keys():
                print('Found %d bounding boxes (%d ms.)' % (len(res["result"]["bounding_boxes"]), res['timing']['dsp'] + res['timing']['classification']))
                for bb in res["result"]["bounding_boxes"]:
                    print('\t%s (%.2f): x=%d y=%d w=%d h=%d' % (bb['label'], bb['value'], bb['x'], bb['y'], bb['width'], bb['height']))
        finally:
            if (runner):
                runner.stop()

if __name__ == "__main__":
    main(sys.argv[1:])

Wow @janjongboom, thanks! I wasn’t expecting a full example; that’s awesome. You can tell people really love what they do when they give you full examples like this, haha! On other forums I usually have to dig around for a few hours to figure something out.

I’ll give this a try when I start getting my image data together and let you know how I get on. Need to get the CAD design done for the case too!

Now also added a full example to the Python SDK here: https://github.com/edgeimpulse/linux-sdk-python/blob/master/examples/image/classify-image.py

Cheers @janjongboom! Nice to have a proper reference :smiley:

Hey @janjongboom, I’m thinking about implementing the code above soon on my RPi4. Am I right in thinking that I first need to install the model from Edge Impulse onto my device, then run that code on my RPi4?

Is there a particular order that things should be done in? If so, could you list what I need to be aware of please? I’ve got my model running at about 85.5%-90% accuracy for something vs nothing now, so I need to go field test it :smiley:

So, I’ve tried to run a mix of the above code, but I’m getting an error right at the start of the example when trying to import ImageImpulseRunner; see below for the error:

Traceback (most recent call last):
  File "processor.py", line 7, in <module>
    from edge_impulse_linux.image import ImageImpulseRunner
ImportError: No module named edge_impulse_linux.image

Any ideas on why it’s not finding the module at all? Am I missing something from my installation, perhaps? Any advice would be appreciated.

Hope you had a nice weekend :slight_smile:

Hi @TechDevTom,

Have you installed the Python package first?

pip3 install edge_impulse_linux

Aurelien

Hey @aurel, I thought I had, but just to make sure I ran that command again. It looks like it installed a couple of associated packages, but when I try to run the code again I’m having no luck; I still get the same error.

If it were a line of code further into the example I could probably figure out how to fix it myself (probably), but since this relates directly to the edge_impulse_linux package I don’t know where to start!

Hello @TechDevTom,

Just to make sure, which Python version are you using?

Regards,

Louis

Yeah @TechDevTom, could you share the output of:

$ python3 --version
$ pip3 --version

Hey @louis @janjongboom, I’m using Python version 3.7.3 and pip version 18.1

Are there specific versions that I should be using? Do I need to place the Python code in a specific location for it to work, or can it be placed anywhere on the SD card that I’m using?

Thanks for the assist so far!

@TechDevTom,

Could you provide the full output of the commands mentioned by Jan?
If you have multiple Python 3 versions installed, pip3 may be pointing to a different version than 3.7.3.

Thanks,
Aurelien

@aurel @janjongboom Here you go. Interestingly, the pip3 command mentions Python 3.7 at the end, rather than Python 3.7.3. Could this be an issue, or is it just not printing the full version?

Sorry to pester you, @janjongboom / @louis: do you have any more input on the above issue, please?

Hey @TechDevTom,

Do you also have Python 2 installed on your RPi? Can you make sure you run your Python code using Python 3? Otherwise, I am kind of running out of suggestions :confused:

Maybe try removing python & pip and do a clean install again to see if that fixes it.

Let us know how it goes.

Louis

Hi @TechDevTom you can try:

python3 -m pip install edge_impulse_linux

This makes sure the package goes into the matching Python version (for some f*n reason I have 8 different Pythons on my work laptop, all with their own list of packages, so I feel your pain).
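
If you want to double-check that the package actually ended up in the interpreter you run your script with, this should list it:

$ python3 -m pip show edge_impulse_linux

If that reports the package isn’t found, the install went into a different Python.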

@louis @janjongboom Apparently I DID have Python 2 installed on my RPi! Annoying! I followed the advice on this link by Seamus and the code runs now: https://raspberrypi.stackexchange.com/questions/118760/uninstall-python2-in-raspberry-pi

Should I be feeding arguments into the command when I run it?