Uploading a dataset for the ESP32-CAM

Hello,
I’ve been working with my ESP32-CAM, but I am new and don’t know how the platform works. I want to upload a dataset so that the ESP32 can recognize multiple objects such as people, cars, and chairs. The problem is that when I upload the dataset I can only label the objects by hand or with YOLO. I read that there is a way to do it automatically with a JSON file, but I have no idea how. Any recommendations?

Question/Issue: How to upload a pre-labeled dataset automatically

Project ID:

Context/Use case: Detect multiple objects with the ESP32-CAM

I am running a workshop that uses an ESP32-S3 with an OV2640. You can prepare the dataset in one of two ways:

  1. Manual way: use the HTTP example code to export images via a web page, then save the images and upload them through the Edge Impulse Studio UI (a Python sketch for pulling frames off the board follows this list).
  2. My way (automated): write Python code that uses the Ingestion API. I use OpenMV’s RPC library to transfer JPEG images from the ESP32 to the PC, then use OpenCV to label each image and upload it via the Ingestion API.
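
For the manual route, something like the minimal Python sketch below can pull frames off the board instead of saving them by hand. This is just a sketch: it assumes the ESP32 is running the stock CameraWebServer example (or similar) with its usual /capture endpoint, and the IP address is a placeholder for whatever your board prints on boot.

    import time
    import requests

    # Placeholder IP: use whatever address the ESP32 reports on boot.
    # Assumes a camera web server that returns a JPEG from /capture.
    ESP32_URL = 'http://192.168.1.50/capture'

    for i in range(20):
        res = requests.get(ESP32_URL, timeout=5)
        res.raise_for_status()
        with open(f'frame_{i:03d}.jpg', 'wb') as f:
            f.write(res.content)  # raw JPEG bytes from the camera
        time.sleep(1)             # roughly one frame per second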

Thanks for answering. I tried a Python script with OpenCV, but I decided to discard it since I don’t want the PC to have to run the code; I want the ESP32 to detect objects without the PC being on. I don’t know if that’s possible…

The good news is that the Ingestion API is quite simple. You have two choices:

  1. The option with the most example code available is to open a web server on the ESP32 that serves JPEG images, then write server-side code to grab them. This is somewhat awkward, though, since you have to make sure the ESP32’s IP address is reachable, and the server side still has to forward the images to Edge Impulse.
  2. Send images directly to the Ingestion API from the ESP32. You have to work out how to encode the binary JPEG into a multipart HTTP body; this is better than the first choice, but you will need to hunt for example code (the sketch after this list shows what that body looks like).
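
To see exactly how a binary JPEG gets encoded into the HTTP body, here is a hedged sketch that uses Python’s requests library to build the multipart request and inspect it before sending. The API key, label, and filename are placeholders; the Content-Type header and prepared body are what an ESP32-side HTTP client would have to reproduce byte for byte.

    import requests

    with open('esp32.jpg', 'rb') as f:  # placeholder filename
        jpeg_bytes = f.read()

    req = requests.Request(
        'POST',
        'https://ingestion.edgeimpulse.com/api/training/files',
        headers={'x-api-key': 'ei_...', 'x-label': 'car'},  # placeholders
        files=[('data', ('esp32.jpg', jpeg_bytes, 'image/jpeg'))],
    )
    prepared = req.prepare()

    # The multipart boundary and body below are what the ESP32 would
    # need to reconstruct in its own HTTP code.
    print(prepared.headers['Content-Type'])  # multipart/form-data; boundary=...
    print(len(prepared.body), 'bytes in body')

    res = requests.Session().send(prepared)
    print(res.status_code, res.content)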

By the way, why not save the JPEG files to an SD card and upload them later? (See the batch-upload sketch below.)
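
If you go the SD-card route, uploading later is just a short loop. This is a sketch under assumptions: a folder of JPEGs copied off the card, a placeholder API key, and a placeholder label. The official edge-impulse-uploader CLI does the same job if you prefer a ready-made tool.

    import os
    import glob
    import requests

    API_KEY = 'ei_...'  # placeholder: your project's API key

    # Upload every JPEG copied off the SD card, one request per file.
    for path in glob.glob('sdcard/*.jpg'):
        with open(path, 'rb') as f:
            res = requests.post(
                'https://ingestion.edgeimpulse.com/api/training/files',
                headers={'x-api-key': API_KEY, 'x-label': 'car'},  # placeholder label
                files=[('data', (os.path.basename(path), f, 'image/jpeg'))],
            )
        print(path, res.status_code)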


I already have code that opens a web server on an IP address and takes a JPEG photo. Do you have any tutorial for the API you mention? I really can’t find anything for what I’m looking for, which is for the ESP32 to take photos, detect what object is in them, and print it to the Arduino IDE serial monitor.

I don’t want to save my photos to SD because I want it to be real-time: when I take the photo, it automatically tells me what’s in it, without having to go to Edge Impulse and label the object.

This is the part of my Python UI code that sends data to the Ingestion API.

    import json      # for serializing the label file
    import requests  # for the POST to the Ingestion API

    # Method of the UI class; self.buf holds the JPEG bytes and
    # self.bbox holds (x, y, width, height) from the labeling step.
    def upload_data(self):
        # Label file in Edge Impulse's bounding-box format, keyed by
        # the filename of the image it describes.
        bbox = {
            "version": 1,
            "type": "bounding-box-labels",
            "boundingBoxes": {
                "tmp.jpg": [
                    {
                        "label": self.label_txt.text(),
                        "x": self.bbox[0],
                        "y": self.bbox[1],
                        "width": self.bbox[2],
                        "height": self.bbox[3]
                    }
                ]
            }
        }
        bbox_label = json.dumps(bbox, separators=(',', ':'))
        headers = {
            'x-api-key': self.ei_api.text(),   # project API key from the UI
            'x-label': self.label_txt.text(),  # fallback label
            'x-add-date-id': '1',              # server appends a date/ID to the filename
        }
        print(headers)
        # Multipart payload: the JPEG bytes plus the matching label file.
        payload = (('data', ('tmp.jpg', self.buf, 'image/jpeg')),
                   ('data', ('bounding_boxes.labels', bbox_label)))
        res = requests.post('https://ingestion.edgeimpulse.com/api/training/files',
                            headers=headers,
                            files=payload)
        print('Uploaded file(s) to Edge Impulse\n', res.status_code, res.content)

Forgive me, I’m new. Should I run that Python directly in Edge Impulse? How do I do that? And in the code I upload to the ESP32, do I set it up to send the photo to Edge Impulse? Does the code you gave me identify the object? Thank you very much for answering.

I use this code to transfer labeled images to Edge Impulse. If your focus is live object detection on the ESP32, you can deploy the trained model as an Arduino library and then just run the example code included with that library. But please be aware that a dataset collected with a different device may not perform well, due to the quality of images from the ESP camera.

My approach is as follows:

  1. Prepare ESP32 code to take snapshots and transfer the images to a PC.
  2. Use the PC to label the images and upload them to Edge Impulse via the Ingestion API.
  3. Train the model on Edge Impulse.
  4. Build the model as an Arduino library.
  5. Study the example code and adapt it.

My ESP32-S3 achieves around 1.5 Hz with a 120x120 input resolution.

Top work, well done! I take your point about “the quality of images from ESP camera” when capturing/training for a model that will later be pushed back to an edge device. I discovered that a lot of the OV… cameras are cheaply available as -to-USB equivalents, so I’m trying an OV3660-to-USB alongside the M5Stack Timer Camera F (OV3660).

Did pushing the bounding-box labels the way you show in your Python code work?

I tried sending it in the payload just like you did, but even though I get a 200 back, the image doesn’t show up in my Edge Impulse project.

Got any advice or reference to documentation?