TensorRT Jetson Nano flash error

Hi,

I can’t build the image classification model with TensorRT on the Jetson Nano board.

What I did

Following the documentation, I built the project.
How can I flash the model after building it?

Hi, looks like building succeeded. You can run the binary via:

./build/custom

(you probably want to build the camera one for image classification, but then you’ll also find the output in the build folder).

Thanks for your reply @janjongboom

nota@nota-desktop:~/edgeimpulse$ ./build/custom
Requires one parameter (a comma-separated list of raw features, or a file pointing at raw features)

I guess the model code needs to be modified.
Can you give me some advice on that?

Regards,

@SunBeenMoon You’re running the custom classifier and that requires features like you get from https://docs.edgeimpulse.com/docs/running-your-impulse-locally#running-the-impulse - just copy some from live classification, put them in a features.txt file and invoke via ./custom features.txt.
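For example, features.txt would just contain the comma-separated values copied from live classification (the numbers below are placeholders):

0.1172, 0.2344, 0.1250, ...

and then:

./build/custom features.txt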

If you have an image model and want to run classification on it, build the camera example.

Thank you for your reply @janjongboom.
I tried again and succeeded.

I want to run this model in real time, just like the object detection example!
I think we could modify the code before building the model so it runs continuously.
I’d like to know if that’s possible, and how.

Also, I have a question about the output of the model using TensorRT.

I’ve got the output.
[screenshot of the classifier output]
By the way, I can’t tell whether the GPU is being used just by looking at the screen above.
Classification only got about 10 ms faster, so I’d like to know how to confirm that the GPU is actually running.

Regards,

@SunBeenMoon You can see it through:

$ tegrastats

and looking at GR3D_FREQ.

E.g. here the GPU is 98% active while running a model on my Jetson Nano:

RAM 2661/3964MB (lfb 75x4MB) SWAP 36/1982MB (cached 0MB) CPU [38%@1479,23%@1479,17%@1479,48%@1479] EMC_FREQ 0% GR3D_FREQ 98% PLL@32C CPU@35C PMIC@100C GPU@34C AO@43C thermal@34.5C

To see this properly you can wrap the run_classifier in a while (1) loop so you keep seeing it running.
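A minimal sketch of that loop (assuming the raw_features vector is already filled, as in the example’s custom.cpp):

signal_t signal;
numpy::signal_from_buffer(&raw_features[0], raw_features.size(), &signal);

while (1) { // loop forever so the GPU load stays visible in tegrastats
    ei_impulse_result_t result;
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false);
    printf("run_classifier returned: %d (DSP %d ms., Classification %d ms.)\n",
        res, result.timing.dsp, result.timing.classification);
}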

Thank you for your reply! @janjongboom

I’ve checked GR3D_FREQ, and now I can see the GPU is working!!

I’m still stuck on running the model in real time.
I changed the edge-impulse-sdk/classifier/ei_run_classifier_c.cpp file:
I put the code in a while (1) loop.
Then I built the model again.
But it still doesn’t work.

Hey @SunBeenMoon I meant in custom.cpp, not in the SDK.

What do you mean by:

But it still doesn’t work.

My English phrasing was clumsy.
I wanted to say: “I tried to modify the code, but it didn’t run in real time”.
Earlier I modified the .cpp file. Is there any other way to run it in real time?

@SunBeenMoon This is what we use for performance tests:

custom.cpp

/* Edge Impulse Linux SDK
 * Copyright (c) 2021 EdgeImpulse Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include <stdio.h>
#include <cstring>
#include <iostream>
#include <sstream>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

std::string trim(const std::string& str) {
    size_t first = str.find_first_not_of(' ');
    if (std::string::npos == first)
    {
        return str;
    }
    size_t last = str.find_last_not_of(' ');
    return str.substr(first, (last - first + 1));
}

std::string read_file(const char *filename) {
    FILE *f = (FILE*)fopen(filename, "r");
    if (!f) {
        printf("Cannot open file %s\n", filename);
        return "";
    }
    fseek(f, 0, SEEK_END);
    size_t size = ftell(f);
    std::string ss;
    ss.resize(size);
    rewind(f);
    fread(&ss[0], 1, size, f);
    fclose(f);
    return ss;
}

int main(int argc, char **argv) {
    if (argc != 2) {
        printf("Requires one parameter (a comma-separated list of raw features, or a file pointing at raw features)\n");
        return 1;
    }

    std::string input = argv[1];
    if (!strchr(argv[1], ' ') && strchr(argv[1], '.')) { // looks like a filename
        input = read_file(argv[1]);
    }

    std::istringstream ss(input);
    std::string token;

    std::vector<float> raw_features;

    while (std::getline(ss, token, ',')) {
        raw_features.push_back(std::stof(trim(token)));
    }

    if (raw_features.size() != EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE) {
        printf("The size of your 'features' array is not correct. Expected %d items, but had %lu\n",
            EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, raw_features.size());
        return 1;
    }

    // Classify the same buffer forever; watch GPU load with tegrastats
    // in another terminal while this runs.
    while (1) {
        ei_impulse_result_t result;

        signal_t signal;
        numpy::signal_from_buffer(&raw_features[0], raw_features.size(), &signal);

        EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false);
        printf("run_classifier returned: %d (DSP %d ms., Classification %d ms., Anomaly %d ms.)\n", res,
            result.timing.dsp, result.timing.classification, result.timing.anomaly);

        printf("Begin output\n");

    #if EI_CLASSIFIER_OBJECT_DETECTION == 1
        for (size_t ix = 0; ix < EI_CLASSIFIER_OBJECT_DETECTION_COUNT; ix++) {
            auto bb = result.bounding_boxes[ix];
            if (bb.value == 0) {
                continue;
            }

            printf("%s (%f) [ x: %u, y: %u, width: %u, height: %u ]\n", bb.label, bb.value, bb.x, bb.y, bb.width, bb.height);
        }
    #else
        // print the predictions
        printf("[");
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            printf("%.5f", result.classification[ix].value);
    #if EI_CLASSIFIER_HAS_ANOMALY == 1
            printf(", ");
    #else
            if (ix != EI_CLASSIFIER_LABEL_COUNT - 1) {
                printf(", ");
            }
    #endif
        }
    #if EI_CLASSIFIER_HAS_ANOMALY == 1
        printf("%.3f", result.anomaly);
    #endif
        printf("]\n");
    #endif

        printf("End output\n");
    }
}
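Build it the same way you built the project before and run it with your features file, e.g.:

./build/custom features.txt

It loops forever, so watch tegrastats in a second terminal and stop it with Ctrl+C.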

Thank you for your reply @janjongboom :blush:

I’ve confirmed the model runs in real time!
However, the GPU is not being used.

What I want to confirm is:

  1. The camera model running in real time
  2. The GPU being used

I ran the command below to check whether the camera model runs in real time and the GPU is used, but an error message came up.

command

nota@nota-desktop:~/example-standalone-inferencing-linux$ APP_CAMERA=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 CC=clang CXX=clang++ make -j

error message

source/camera.o: In function `run_inference':
/home/nota/example-standalone-inferencing-linux/./edge-impulse-sdk/classifier/ei_run_classifier.h:886: undefined reference to `libeitrt::create_EiTrt(char const*, bool)'
/home/nota/example-standalone-inferencing-linux/./edge-impulse-sdk/classifier/ei_run_classifier.h:891: undefined reference to `libeitrt::infer(EiTrt*, float*, float*, int)'
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Makefile:95: recipe for target 'runner' failed
make: *** [runner] Error 1

I’ve already installed clang via:

sudo apt install -y clang

Can you tell me why this problem occurs?

Hi,
I’ve just tried the custom.cpp file you gave me on my Jetson Nano board.

command (just like it says on the site)
[screenshot of the build command]

error code
[screenshot of the error output]

I don’t think the above command is working in the first place. It doesn’t run on the GPU.
I wonder if there is a solution.

Regards,


@SunBeenMoon You need to build with these flags: https://github.com/edgeimpulse/example-standalone-inferencing-linux#tensorrt

So:

APP_CUSTOM=1 TARGET_JETSON_NANO=1 make -j
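If you built with other flags earlier, note that stale object files can linger; it may help to start from a fresh checkout (or delete the old *.o files, e.g. with find . -name '*.o' -delete) so everything gets recompiled with the right flags.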

Hi, @janjongboom

I always appreciate your kind reply.

After listening to you, I deleted all the files and started all over again.

$ git clone https://github.com/edgeimpulse/example-standalone-inferencing-linux
$ cd example-standalone-inferencing-linux && git submodule update --init --recursive
$ sudo apt install libasound2
$ sh build-opencv-linux.sh

Then I ran the commands exactly as given in the TensorRT docs.

But I got this error:

nota@nota-desktop:~/example-standalone-inferencing-linux$ APP_CUSTOM=1 TARGET_JETSON_NANO=1 make -j
mkdir -p build
g++ edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/arm_cfft_radix2_q15.o edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/arm_rfft_fast_init_f32.o edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/arm_cfft_radix2_f32.o edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/arm_rfft_q15.o edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/arm_rfft_init_q31.o edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/arm_rfft_fast_f64.o edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/arm_dct4_init_q31.o edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/arm_cfft_radix2_q31.o edge-impu

optional_debug_tools.o edge-impulse-sdk/tensorflow/lite/micro/all_ops_resolver.o edge-impulse-sdk/tensorflow/lite/micro/micro_utils.o edge-impulse-sdk/tensorflow/lite/micro/micro_interpreter.o edge-impulse-sdk/tensorflow/lite/micro/micro_allocator.o edge-impulse-sdk/tensorflow/lite/micro/memory_planner/linear_memory_planner.o edge-impulse-sdk/tensorflow/lite/micro/memory_planner/greedy_memory_planner.o edge-impulse-sdk/tensorflow/lite/core/api/flatbuffer_conversions.o edge-impulse-sdk/tensorflow/lite/core/api/tensor_utils.o edge-impulse-sdk/tensorflow/lite/core/api/error_reporter.o edge-impulse-sdk/tensorflow/lite/core/api/op_resolver.o -o build/custom -lm -lstdc++ tflite/linux-jetson-nano/libei_debug.a -Ltflite/linux-jetson-nano -lcudart -lnvinfer -lnvonnxparser -Wl,--warn-unresolved-symbols,--unresolved-symbols=ignore-in-shared-libs
/usr/bin/ld: source/custom.o: relocation R_AARCH64_ADR_PREL_PG_HI21 against symbol `_ZTTNSt7__cxx1119basic_istringstreamIcSt11char_traitsIcESaIcEEE@@GLIBCXX_3.4.21' which may bind externally can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: source/custom.o(.text+0x27b4): unresolvable R_AARCH64_ADR_PREL_PG_HI21 relocation against symbol `_ZTTNSt7__cxx1119basic_istringstreamIcSt11char_traitsIcESaIcEEE@@GLIBCXX_3.4.21'
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
Makefile:94: recipe for target 'runner' failed
make: *** [runner] Error 1

Also

APP_CAMERA=1 make -j

has the same error.

I’m wondering how to solve this problem.

Hi @SunBeenMoon I’m on an offsite this week without access to a Jetson Nano, so will come back to you next week…

I see. Thank you for your concern.
I enjoyed the TinyML Conference and TinyML Summit videos. Edge Impulse is a very promising company and is in the spotlight. I fully understand that you are busy. Nevertheless, thank you for your kind reply.
I look forward to your solution sometime next week.

@SunBeenMoon one more thing to try, which may or may not make a difference: when I was developing on the Jetson, I started seeing some weird behavior from make, which turned out to be an out-of-memory issue (but it was not obvious that this was happening!)

Can you rerun make, but leave off -j (-j uses all your processor cores, which is fine if you have plenty of memory and virtual memory/swap space, which you don’t on the Nano!)

so

APP_CUSTOM=1 TARGET_JETSON_NANO=1 make
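And if memory does turn out to be the problem, you can also add a swap file before building. A standard way on Linux (4 GB is just an example size):

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile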

Thank you for your reply @AlexE :blush:

First, I tried without ‘-j’.
Second, I increased the swap memory.
But both failed.

Hi,
I’m still stuck on the same problem.
Do you have any other suggestions, or did I miss anything?

Regards,