ESP32 Image Classification conv.cpp initializer problem

Question/Issue:
The compiler is complaining about missing initializer clauses in several places within the code related to convolutional layers.

Project ID:
NA

Context/Use case:
I used Edge Impulse to create an image classification model for my ESP32-S CAM connected via an MB board. My camera works perfectly, but as soon as I include the header file from Edge Impulse I get the following error:
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/conv.cpp: In function ‘TfLiteStatus tflite::{anonymous}::Prepare(TfLiteContext*, TfLiteNode*)’:
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/conv.cpp:1789:67: error: either all initializer clauses should be designated or none of them should be
 1789 |                                 .channels = input->dims->data[3], 1
      |                                                                   ^
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/conv.cpp:1793:68: error: either all initializer clauses should be designated or none of them should be
 1793 |                                 .channels = output->dims->data[3], 1
      |                                                                    ^
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/conv.cpp:1795:80: error: either all initializer clauses should be designated or none of them should be
 1795 |     data_dims_t filter_dims = {.width = filter_width, .height = filter_height, 0, 0};
      |                                                                                ^
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/conv.cpp: In function ‘void tflite::{anonymous}::EvalQuantizedPerChannel(TfLiteContext*, TfLiteNode*, const TfLiteConvParams&, const NodeData&, const TfLiteEvalTensor*, const TfLiteEvalTensor*, const TfLiteEvalTensor*, TfLiteEvalTensor*)’:
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/conv.cpp:1883:58: error: either all initializer clauses should be designated or none of them should be
 1883 |                                 .channels = input_depth, 1
      |                                                          ^
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/conv.cpp:1887:59: error: either all initializer clauses should be designated or none of them should be
 1887 |                                 .channels = output_depth, 1
      |                                                           ^
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/conv.cpp:1889:80: error: either all initializer clauses should be designated or none of them should be
 1889 |     data_dims_t filter_dims = {.width = filter_width, .height = filter_height, 0, 0};
      |                                                                                ^
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/depthwise_conv.cpp: In function ‘void tflite::{anonymous}::EvalQuantizedPerChannel(TfLiteContext*, TfLiteNode*, const TfLiteDepthwiseConvParams&, const NodeData&, const TfLiteEvalTensor*, const TfLiteEvalTensor*, const TfLiteEvalTensor*, TfLiteEvalTensor*)’:
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/depthwise_conv.cpp:1727:58: error: either all initializer clauses should be designated or none of them should be
 1727 |                                 .channels = input_depth, 1
      |                                                          ^
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/depthwise_conv.cpp:1731:59: error: either all initializer clauses should be designated or none of them should be
 1731 |                                 .channels = output_depth, 1
      |                                                           ^
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/depthwise_conv.cpp:1733:80: error: either all initializer clauses should be designated or none of them should be
 1733 |     data_dims_t filter_dims = {.width = filter_width, .height = filter_height, 0, 0};
      |                                                                                ^
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/depthwise_conv.cpp: In function ‘TfLiteStatus tflite::{anonymous}::Prepare(TfLiteContext*, TfLiteNode*)’:
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/depthwise_conv.cpp:1836:67: error: either all initializer clauses should be designated or none of them should be
 1836 |                                 .channels = input->dims->data[3], 1
      |                                                                   ^
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/depthwise_conv.cpp:1840:68: error: either all initializer clauses should be designated or none of them should be
 1840 |                                 .channels = output->dims->data[3], 1
      |                                                                    ^
/Users/yam/Documents/Arduino/libraries/Smart-Waste-Segregation_inferencing/src/edge-impulse-sdk/tensorflow/lite/micro/kernels/depthwise_conv.cpp:1842:80: error: either all initializer clauses should be designated or none of them should be
 1842 |     data_dims_t filter_dims = {.width = filter_width, .height = filter_height, 0, 0};
      |                                                                                ^

exit status 1

Compilation error: exit status 1


Hi @Yamm,

Please try the solution here to see if it fixes the problem: Error compiling Arduino Library for XIAO ESP32S3 Sense

@shawn_edgeimpulse I tried the solution, but it seems that my code exceeds the available space on the board. Could you suggest a solution to this issue?

error message:
Sketch uses 1977725 bytes (150%) of program storage space. Maximum is 1310720 bytes.
Global variables use 66096 bytes (20%) of dynamic memory, leaving 261584 bytes for local variables. Maximum is 327680 bytes.
Sketch too big; see https://support.arduino.cc/hc/en-us/articles/360013825179 for tips on reducing it.
text section exceeds available space in board

Compilation error: text section exceeds available space in board

I am using an ESP32-S CAM with an MB module board (micro USB B) to connect.

Since I’m using a pre-trained, optimized (int8) model from Edge Impulse and can’t modify it further, storing it on the microSD card and loading it at runtime seems like a suitable approach for my ESP32-S CAM with MB module board. I would be grateful for guidance on the code required for this approach.
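Something along these lines is what I have in mind — a rough, untested sketch that reads the .tflite flatbuffer from the card into PSRAM using the generic TensorFlow Lite Micro API (the file name, include path, and helper function are just placeholders; I understand the Edge Impulse Arduino library normally compiles the model into flash, so this would bypass its wrapper):

#include <Arduino.h>
#include "SD_MMC.h"   // ESP32-CAM onboard microSD slot
#include "edge-impulse-sdk/tensorflow/lite/schema/schema_generated.h"

// Read a .tflite flatbuffer from the card into PSRAM and parse it in place.
// Returns nullptr on any failure.
const tflite::Model *load_model_from_sd(const char *path) {
    if (!SD_MMC.begin()) return nullptr;      // mount the card
    File f = SD_MMC.open(path);
    if (!f) return nullptr;
    size_t len = f.size();
    uint8_t *buf = (uint8_t *)ps_malloc(len); // allocate in external PSRAM
    if (buf == nullptr) { f.close(); return nullptr; }
    f.read(buf, len);
    f.close();
    return tflite::GetModel(buf);             // e.g. load_model_from_sd("/model.tflite")
}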

Hi @Yamm,

I am using an ESP32-S CAM

Which ESP32-S? There’s the S2 and S3.

pre-trained, optimized (int8) model

Which model are you using? Can you provide your project ID number so we can take a look?

My guess is that you are using an object detection model (e.g. YOLO), which is too large for the ESP32-S3. You will need to use another board with more flash storage or find a different way to do object detection, such as FOMO (FOMO: Object detection for constrained devices | Edge Impulse Documentation).

Thank you for your response @shawn_edgeimpulse
Project ID: 428129
Image Classification using MobileNetV2 160x160 0.5
Device: https://ghumtipasal.com.np/wp-content/uploads/2024/04/1647569906_esp20cam2032-36.jpeg

Hi @Yamm,

Thank you for the info. In my experience, MobileNet is quite large and will struggle to run on most microcontrollers. It looks like you’ve set your input dimensions to 48x48 but chose the 160x160 version of MobileNetV2 (“Uses around 700.7K RAM and 982.4K ROM”). I recommend choosing something like MobileNetV2 96x96 0.1 (“Uses around 270.2K RAM and 212.3K ROM”) and changing your input dimensions on the impulse to 96x96 to match the expected input of MobileNet.

You might also see if something like a basic CNN (using the “Classification” block) would work instead.

I used the MobileNetV2 96x96 0.1 model, but I’m not able to run inference with my ESP32 CAM. How do I do it?

Additionally, I am trying to connect my device to Edge Impulse for live classification. I found out that we need to run the edge-impulse-daemon command. However, I encounter this error:
zsh: command not found: edge-impulse-data-forwarder

@shawn_edgeimpulse could you help me with this.
Additionally, I’m trying to use the result of this classification to turn a servomotor to specific angles. I’m thinking of using an Arduino Uno to drive the servos based on the classification result from the ESP32. However, I’m confused about the data pipeline and how I should write the Arduino IDE code to synchronize these processes.
I learned from this video (https://youtu.be/rszoQsMIIAI?si=jhot8iG8hYb5kBfE) that there are three different methods to connect a device to Edge Impulse. I’m not sure how to use the ESP32 CAM to continuously classify images in real time and use the result to move some servos.

Hi @Yamm,

I used the MobileNetV2 96x96 0.1 model, but I’m not able to run inference with my ESP32 CAM. How do I do it?

What error are you seeing when you try to compile or run inference on the ESP32?

zsh: command not found: edge-impulse-data-forwarder

You need to install the Edge Impulse CLI (Installation | Edge Impulse Documentation) to have access to these commands.
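For reference, the CLI tools are installed through npm (e.g. npm install -g edge-impulse-cli), so you will need Node.js first; the installation page above covers the details.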

I’m not sure how to use the ESP32 CAM to continuously classify images in real time and use the result to move some servos.

It sounds like you need to use the bounding box information that comes from performing object detection. Please see this thread on how to use that information to move servos.
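If you stay with an image classification model instead, the pipeline can be simpler: pick the top label on the ESP32 after run_classifier() and send it to the Uno over a serial link. Here is a minimal, untested sketch of the ESP32 side (the helper name, the one-byte protocol, and the 0.6 threshold are illustrative choices, not from your project):

// Hypothetical helper: call after run_classifier() succeeds in loop().
// Sends the index of the highest-scoring class as a single byte.
static void send_top_label(const ei_impulse_result_t &result) {
    int best_ix = 0;
    float best_val = 0.0f;
    for (uint16_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        if (result.classification[i].value > best_val) {
            best_val = result.classification[i].value;
            best_ix = i;
        }
    }
    if (best_val > 0.6f) {                // arbitrary confidence threshold
        Serial.write((uint8_t)best_ix);   // note: shares UART0 with ei_printf logs;
                                          // a second UART would be cleaner
    }
}

On the Uno side, read that byte from its serial port and map it to an angle with the standard Servo library, e.g. myservo.write(map(ix, 0, num_classes - 1, 0, 180)).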

I have the same issue I guess

You need to change the value of that constant from 1 to 0 in the classifier config file under the SDK directory.
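Assuming it’s the same conv.cpp/depthwise_conv.cpp errors as above, the file in question (at least in the library versions I’ve seen) is src/edge-impulse-sdk/classifier/ei_classifier_config.h inside your exported Arduino library, and the edit looks like this:

// edge-impulse-sdk/classifier/ei_classifier_config.h
// Disabling the ESP-NN optimized kernels avoids the
// "either all initializer clauses should be designated" errors:
#define EI_CLASSIFIER_TFLITE_ENABLE_ESP_NN 0   // was 1

The model then falls back to the portable TFLite Micro kernels, so inference will be slower but should compile.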

How do you do that? I was just using the default code and it suddenly stopped working.

If you’re running the example code as-is, there shouldn’t be any problem. On the other hand, when you try to do inferencing through the ESP32-CAM you need to tweak the underlying code.

Can you send me your error message again, along with the entire code?

/* Edge Impulse Arduino examples
 * Copyright (c) 2022 EdgeImpulse Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

// These sketches are tested with 2.0.4 ESP32 Arduino Core
// https://github.com/espressif/arduino-esp32/releases/tag/2.0.4

/* Includes ---------------------------------------------------------------- */
#include <ESP32_inferencing.h>
#include "edge-impulse-sdk/dsp/image/image.hpp"

#include "esp_camera.h"

// Select camera model - find more camera models in camera_pins.h file here
// https://github.com/espressif/arduino-esp32/blob/master/libraries/ESP32/examples/Camera/CameraWebServer/camera_pins.h

#define CAMERA_MODEL_ESP_EYE // Has PSRAM
//#define CAMERA_MODEL_AI_THINKER // Has PSRAM

#if defined(CAMERA_MODEL_ESP_EYE)
#define PWDN_GPIO_NUM    -1
#define RESET_GPIO_NUM   -1
#define XCLK_GPIO_NUM    4
#define SIOD_GPIO_NUM    18
#define SIOC_GPIO_NUM    23

#define Y9_GPIO_NUM      36
#define Y8_GPIO_NUM      37
#define Y7_GPIO_NUM      38
#define Y6_GPIO_NUM      39
#define Y5_GPIO_NUM      35
#define Y4_GPIO_NUM      14
#define Y3_GPIO_NUM      13
#define Y2_GPIO_NUM      34
#define VSYNC_GPIO_NUM   5
#define HREF_GPIO_NUM    27
#define PCLK_GPIO_NUM    25

#elif defined(CAMERA_MODEL_AI_THINKER)
#define PWDN_GPIO_NUM     32
#define RESET_GPIO_NUM    -1
#define XCLK_GPIO_NUM      0
#define SIOD_GPIO_NUM     26
#define SIOC_GPIO_NUM     27

#define Y9_GPIO_NUM       35
#define Y8_GPIO_NUM       34
#define Y7_GPIO_NUM       39
#define Y6_GPIO_NUM       36
#define Y5_GPIO_NUM       21
#define Y4_GPIO_NUM       19
#define Y3_GPIO_NUM       18
#define Y2_GPIO_NUM        5
#define VSYNC_GPIO_NUM    25
#define HREF_GPIO_NUM     23
#define PCLK_GPIO_NUM     22

#else
#error "Camera model not selected"
#endif

/* Constant defines -------------------------------------------------------- */
#define EI_CAMERA_RAW_FRAME_BUFFER_COLS           320
#define EI_CAMERA_RAW_FRAME_BUFFER_ROWS           240
#define EI_CAMERA_FRAME_BYTE_SIZE                 3

/* Private variables ------------------------------------------------------- */
static bool debug_nn = false; // Set this to true to see e.g. features generated from the raw signal
static bool is_initialised = false;
uint8_t *snapshot_buf; //points to the output of the capture

static camera_config_t camera_config = {
    .pin_pwdn = PWDN_GPIO_NUM,
    .pin_reset = RESET_GPIO_NUM,
    .pin_xclk = XCLK_GPIO_NUM,
    .pin_sscb_sda = SIOD_GPIO_NUM,
    .pin_sscb_scl = SIOC_GPIO_NUM,

    .pin_d7 = Y9_GPIO_NUM,
    .pin_d6 = Y8_GPIO_NUM,
    .pin_d5 = Y7_GPIO_NUM,
    .pin_d4 = Y6_GPIO_NUM,
    .pin_d3 = Y5_GPIO_NUM,
    .pin_d2 = Y4_GPIO_NUM,
    .pin_d1 = Y3_GPIO_NUM,
    .pin_d0 = Y2_GPIO_NUM,
    .pin_vsync = VSYNC_GPIO_NUM,
    .pin_href = HREF_GPIO_NUM,
    .pin_pclk = PCLK_GPIO_NUM,

    //XCLK 20MHz or 10MHz for OV2640 double FPS (Experimental)
    .xclk_freq_hz = 20000000,
    .ledc_timer = LEDC_TIMER_0,
    .ledc_channel = LEDC_CHANNEL_0,

    .pixel_format = PIXFORMAT_JPEG, //YUV422,GRAYSCALE,RGB565,JPEG
    .frame_size = FRAMESIZE_QVGA,    //QQVGA-UXGA Do not use sizes above QVGA when not JPEG

    .jpeg_quality = 12, //0-63 lower number means higher quality
    .fb_count = 1,       //if more than one, i2s runs in continuous mode. Use only with JPEG
    .fb_location = CAMERA_FB_IN_PSRAM,
    .grab_mode = CAMERA_GRAB_WHEN_EMPTY,
};

/* Function definitions ------------------------------------------------------- */
bool ei_camera_init(void);
void ei_camera_deinit(void);
bool ei_camera_capture(uint32_t img_width, uint32_t img_height, uint8_t *out_buf) ;

/**
* @brief      Arduino setup function
*/
void setup()
{
    // put your setup code here, to run once:
    Serial.begin(115200);
    //comment out the below line to start inference immediately after upload
    while (!Serial);
    Serial.println("Edge Impulse Inferencing Demo");
    if (ei_camera_init() == false) {
        ei_printf("Failed to initialize Camera!\r\n");
    }
    else {
        ei_printf("Camera initialized\r\n");
    }

    ei_printf("\nStarting continious inference in 2 seconds...\n");
    ei_sleep(2000);
}

/**
* @brief      Get data and run inferencing
*
* @param[in]  debug  Get debug info if true
*/
void loop()
{

    // instead of wait_ms, we'll wait on the signal, this allows threads to cancel us...
    if (ei_sleep(5) != EI_IMPULSE_OK) {
        return;
    }

    snapshot_buf = (uint8_t*)malloc(EI_CAMERA_RAW_FRAME_BUFFER_COLS * EI_CAMERA_RAW_FRAME_BUFFER_ROWS * EI_CAMERA_FRAME_BYTE_SIZE);

    // check if allocation was successful
    if(snapshot_buf == nullptr) {
        ei_printf("ERR: Failed to allocate snapshot buffer!\n");
        return;
    }

    ei::signal_t signal;
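    // total_length is counted in pixels: get_data fills one packed-RGB float per pixel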
    signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
    signal.get_data = &ei_camera_get_data;

    if (ei_camera_capture((size_t)EI_CLASSIFIER_INPUT_WIDTH, (size_t)EI_CLASSIFIER_INPUT_HEIGHT, snapshot_buf) == false) {
        ei_printf("Failed to capture image\r\n");
        free(snapshot_buf);
        return;
    }

    // Run the classifier
    ei_impulse_result_t result = { 0 };

    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, debug_nn);
    if (err != EI_IMPULSE_OK) {
        ei_printf("ERR: Failed to run classifier (%d)\n", err);
        return;
    }

    // print the predictions
    ei_printf("Predictions (DSP: %d ms., Classification: %d ms., Anomaly: %d ms.): \n",
                result.timing.dsp, result.timing.classification, result.timing.anomaly);

#if EI_CLASSIFIER_OBJECT_DETECTION == 1
    ei_printf("Object detection bounding boxes:\r\n");
    for (uint32_t i = 0; i < result.bounding_boxes_count; i++) {
        ei_impulse_result_bounding_box_t bb = result.bounding_boxes[i];
        if (bb.value == 0) {
            continue;
        }
        ei_printf("  %s (%f) [ x: %u, y: %u, width: %u, height: %u ]\r\n",
                bb.label,
                bb.value,
                bb.x,
                bb.y,
                bb.width,
                bb.height);
    }

    // Print the prediction results (classification)
#else
    ei_printf("Predictions:\r\n");
    for (uint16_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        ei_printf("  %s: ", ei_classifier_inferencing_categories[i]);
        ei_printf("%.5f\r\n", result.classification[i].value);
    }
#endif

    // Print anomaly result (if it exists)
#if EI_CLASSIFIER_HAS_ANOMALY
    ei_printf("Anomaly prediction: %.3f\r\n", result.anomaly);
#endif

#if EI_CLASSIFIER_HAS_VISUAL_ANOMALY
    ei_printf("Visual anomalies:\r\n");
    for (uint32_t i = 0; i < result.visual_ad_count; i++) {
        ei_impulse_result_bounding_box_t bb = result.visual_ad_grid_cells[i];
        if (bb.value == 0) {
            continue;
        }
        ei_printf("  %s (%f) [ x: %u, y: %u, width: %u, height: %u ]\r\n",
                bb.label,
                bb.value,
                bb.x,
                bb.y,
                bb.width,
                bb.height);
    }
#endif


    free(snapshot_buf);

}

/**
 * @brief   Setup image sensor & start streaming
 *
 * @retval  false if initialisation failed
 */
bool ei_camera_init(void) {

    if (is_initialised) return true;

#if defined(CAMERA_MODEL_ESP_EYE)
  pinMode(13, INPUT_PULLUP);
  pinMode(14, INPUT_PULLUP);
#endif

    //initialize the camera
    esp_err_t err = esp_camera_init(&camera_config);
    if (err != ESP_OK) {
      Serial.printf("Camera init failed with error 0x%x\n", err);
      return false;
    }

    sensor_t * s = esp_camera_sensor_get();
    // initial sensors are flipped vertically and colors are a bit saturated
    if (s->id.PID == OV3660_PID) {
      s->set_vflip(s, 1); // flip it back
      s->set_brightness(s, 1); // up the brightness just a bit
      s->set_saturation(s, 0); // lower the saturation
    }

#if defined(CAMERA_MODEL_M5STACK_WIDE)
    s->set_vflip(s, 1);
    s->set_hmirror(s, 1);
#elif defined(CAMERA_MODEL_ESP_EYE)
    s->set_vflip(s, 1);
    s->set_hmirror(s, 1);
    s->set_awb_gain(s, 1);
#endif

    is_initialised = true;
    return true;
}

/**
 * @brief      Stop streaming of sensor data
 */
void ei_camera_deinit(void) {

    //deinitialize the camera
    esp_err_t err = esp_camera_deinit();

    if (err != ESP_OK)
    {
        ei_printf("Camera deinit failed\n");
        return;
    }

    is_initialised = false;
    return;
}


/**
 * @brief      Capture, rescale and crop image
 *
 * @param[in]  img_width     width of output image
 * @param[in]  img_height    height of output image
 * @param[in]  out_buf       pointer to store output image, NULL may be used
 *                           if ei_camera_frame_buffer is to be used for capture and resize/cropping.
 *
 * @retval     false if not initialised, image captured, rescaled or cropped failed
 *
 */
bool ei_camera_capture(uint32_t img_width, uint32_t img_height, uint8_t *out_buf) {
    bool do_resize = false;

    if (!is_initialised) {
        ei_printf("ERR: Camera is not initialized\r\n");
        return false;
    }

    camera_fb_t *fb = esp_camera_fb_get();

    if (!fb) {
        ei_printf("Camera capture failed\n");
        return false;
    }

    bool converted = fmt2rgb888(fb->buf, fb->len, PIXFORMAT_JPEG, snapshot_buf);

    esp_camera_fb_return(fb);

    if (!converted) {
        ei_printf("Conversion failed\n");
        return false;
    }

    if ((img_width != EI_CAMERA_RAW_FRAME_BUFFER_COLS)
        || (img_height != EI_CAMERA_RAW_FRAME_BUFFER_ROWS)) {
        do_resize = true;
    }

    if (do_resize) {
        ei::image::processing::crop_and_interpolate_rgb888(
        out_buf,
        EI_CAMERA_RAW_FRAME_BUFFER_COLS,
        EI_CAMERA_RAW_FRAME_BUFFER_ROWS,
        out_buf,
        img_width,
        img_height);
    }


    return true;
}

static int ei_camera_get_data(size_t offset, size_t length, float *out_ptr)
{
    // we already have a RGB888 buffer, so recalculate offset into pixel index
    size_t pixel_ix = offset * 3;
    size_t pixels_left = length;
    size_t out_ptr_ix = 0;

    while (pixels_left != 0) {
        // Swap BGR to RGB here
        // due to https://github.com/espressif/esp32-camera/issues/379
        out_ptr[out_ptr_ix] = (snapshot_buf[pixel_ix + 2] << 16) + (snapshot_buf[pixel_ix + 1] << 8) + snapshot_buf[pixel_ix];

        // go to the next pixel
        out_ptr_ix++;
        pixel_ix+=3;
        pixels_left--;
    }
    // and done!
    return 0;
}

#if !defined(EI_CLASSIFIER_SENSOR) || EI_CLASSIFIER_SENSOR != EI_CLASSIFIER_SENSOR_CAMERA
#error "Invalid model for current sensor"
#endif

c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\conv.cpp: In function 'TfLiteStatus tflite::{anonymous}::Prepare(TfLiteContext*, TfLiteNode*)':
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\conv.cpp:1789:67: error: either all initializer clauses should be designated or none of them should be
 1789 |                                 .channels = input->dims->data[3], 1
      |                                                                   ^
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\conv.cpp:1793:68: error: either all initializer clauses should be designated or none of them should be
 1793 |                                 .channels = output->dims->data[3], 1
      |                                                                    ^
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\conv.cpp:1795:80: error: either all initializer clauses should be designated or none of them should be
 1795 |     data_dims_t filter_dims = {.width = filter_width, .height = filter_height, 0, 0};
      |                                                                                ^
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\conv.cpp: In function 'void tflite::{anonymous}::EvalQuantizedPerChannel(TfLiteContext*, TfLiteNode*, const TfLiteConvParams&, const NodeData&, const TfLiteEvalTensor*, const TfLiteEvalTensor*, const TfLiteEvalTensor*, TfLiteEvalTensor*)':
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\conv.cpp:1883:58: error: either all initializer clauses should be designated or none of them should be
 1883 |                                 .channels = input_depth, 1
      |                                                          ^
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\conv.cpp:1887:59: error: either all initializer clauses should be designated or none of them should be
 1887 |                                 .channels = output_depth, 1
      |                                                           ^
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\conv.cpp:1889:80: error: either all initializer clauses should be designated or none of them should be
 1889 |     data_dims_t filter_dims = {.width = filter_width, .height = filter_height, 0, 0};
      |                                                                                ^
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\depthwise_conv.cpp: In function 'void tflite::{anonymous}::EvalQuantizedPerChannel(TfLiteContext*, TfLiteNode*, const TfLiteDepthwiseConvParams&, const NodeData&, const TfLiteEvalTensor*, const TfLiteEvalTensor*, const TfLiteEvalTensor*, TfLiteEvalTensor*)':
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\depthwise_conv.cpp:1727:58: error: either all initializer clauses should be designated or none of them should be
 1727 |                                 .channels = input_depth, 1
      |                                                          ^
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\depthwise_conv.cpp:1731:59: error: either all initializer clauses should be designated or none of them should be
 1731 |                                 .channels = output_depth, 1
      |                                                           ^
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\depthwise_conv.cpp:1733:80: error: either all initializer clauses should be designated or none of them should be
 1733 |     data_dims_t filter_dims = {.width = filter_width, .height = filter_height, 0, 0};
      |                                                                                ^
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\depthwise_conv.cpp: In function 'TfLiteStatus tflite::{anonymous}::Prepare(TfLiteContext*, TfLiteNode*)':
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\depthwise_conv.cpp:1836:67: error: either all initializer clauses should be designated or none of them should be
 1836 |                                 .channels = input->dims->data[3], 1
      |                                                                   ^
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\depthwise_conv.cpp:1840:68: error: either all initializer clauses should be designated or none of them should be
 1840 |                                 .channels = output->dims->data[3], 1
      |                                                                    ^
c:\Users\Tiger\Documents\Arduino\libraries\ESP32_inferencing\src\edge-impulse-sdk\tensorflow\lite\micro\kernels\depthwise_conv.cpp:1842:80: error: either all initializer clauses should be designated or none of them should be
 1842 |     data_dims_t filter_dims = {.width = filter_width, .height = filter_height, 0, 0};
      |                                                                                ^

exit status 1

Compilation error: exit status 1