AWS Greengrass Deployment Package

I have a small fleet of Jetson Orin Nano devices that I manage with AWS Greengrass (V2) as Greengrass Core Devices. I’m trying to understand which deployment method would be best for using my (or any) impulse as a Greengrass Component. I already use Components heavily to manage the software that goes out to my edge/field devices, and it would be efficient to continue this method as I develop more models. I think the Docker container for Jetson with the HTTP server is the route to go; I can then package that as a Greengrass Component. However, I wanted to ask the community for their thoughts. If you are also using Greengrass/AWS, what is your methodology for exports?

I understand EI has a documented set of instructions for operating/integrating within Azure IoT. Are there any plans to expand this to the AWS IoT suite as well? Thanks!

Great question @pc3975!

Welcome to the forum!

I’m not that familiar with the AWS IoT suite, but I’d be happy to discuss it on this thread.

Did you see our Lifecycle Management section? Lifecycle Management | Edge Impulse Documentation

It would fit here for sure; if you want to review it, we can see what workflow would fit best.

My first guess would be to use our Docker Deploy method; you will find that it also has a Jetson deployment option.

  1. Export the model as a Docker container.

  2. Integrate with AWS Greengrass:

  • Once you have the Docker container containing your machine learning model, you can package it as a Greengrass Component. I’m not sure how that will look for you; do you use Docker Compose for that?

  3. Run the Docker container on the Jetson Orin Nano:

  • Once deployed, run the Docker container on each Jetson Orin Nano device using a command similar to the following:
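On the Greengrass side, one option is a component recipe that wraps the docker run in its Run lifecycle. This is just a sketch, not a drop-in file: the component name, publisher, and container image below are placeholders you’d replace with your own values.

```yaml
RecipeFormatVersion: "2020-01-25"
ComponentName: com.example.EdgeImpulseModel   # placeholder name
ComponentVersion: "1.0.0"
ComponentDescription: Runs the Edge Impulse model container as an HTTP inference server.
ComponentPublisher: Example                   # placeholder publisher
Manifests:
  - Platform:
      os: linux
    Lifecycle:
      # Greengrass runs this when the component starts; the container image
      # name and API key are placeholders from your Docker deployment page.
      Run: |
        docker run --rm --runtime=nvidia --gpus all \
          -p 1337:1337 \
          <your-container-image> \
            --api-key <your-api-key> \
            --run-http-server 1337
```

This assumes Docker is already installed on the core device and the Greengrass service user can run it.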

What you want to run on your Orins will look something like this (substitute the container image name shown on your Docker deployment page):

docker run --rm -it --runtime=nvidia --gpus all \
    -p 1337:1337 \
    <your-container-image> \
        --api-key ei_6964d445ef2xxxxxxxxxxx3514109a \
        --run-http-server 1337

This command will start your model’s HTTP server, allowing it to receive inference requests.
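Once the server is up, you can hit it from any language. Here’s a rough Python sketch; note that the `/api/features` path and the `{"features": [...]}` body shape are assumptions on my part, so double-check them against your container’s startup output.

```python
import json
import urllib.request


def build_inference_request(features, host="localhost", port=1337):
    """Build a POST request for the container's HTTP inference server.

    The /api/features endpoint and JSON body shape are assumptions;
    verify them against the endpoint your container prints on startup.
    """
    body = json.dumps({"features": features}).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}:{port}/api/features",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# To actually run inference against a running container:
#   with urllib.request.urlopen(build_inference_request([0.1, 0.2, 0.3])) as resp:
#       result = json.load(resp)
```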

Does this make sense? Let me know how that would fit into your workflow…
