Question/Issue:
I have a small fleet of Jetson Orin Nano devices that I manage with AWS Greengrass (V2) as Greengrass Core Devices. I'm trying to understand which deployment method would be best for using my (or any) impulse as a Greengrass Component. I already use Components heavily to manage the software that goes out to my edge/field devices, so it would be efficient to stick with this approach as I develop more models. I think the Docker container for Jetson with HTTP is the route to go; I can then package that as a Greengrass Component. However, I wanted to ask the community for their thoughts. If you are also using Greengrass/AWS, what is your methodology for exports?
I understand EI has a documented set of instructions for integrating with Azure IoT. Are there any plans to expand this to the AWS IoT suite as well? Thanks!
Once you have the Docker container containing your machine learning model, you can package it as a Greengrass Component. I'm not sure how that will look for you; do you use Docker Compose for that?
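For the packaging step, a Greengrass V2 component recipe that wraps the container could look roughly like the sketch below. This is a minimal, untested sketch rather than a drop-in recipe: the component name, version, and publisher are placeholders I made up, the image URI is the redacted one from the run command further down, and it assumes Docker plus the NVIDIA container runtime are already installed on each core device. The interactive flags (-it) are dropped because Greengrass lifecycle scripts don't run in a TTY.

RecipeFormatVersion: '2020-01-25'
ComponentName: com.example.EdgeImpulseInference   # hypothetical name
ComponentVersion: '1.0.0'
ComponentPublisher: Example                       # placeholder
ComponentDescription: Runs an Edge Impulse inference container as an HTTP server.
ComponentConfiguration:
  DefaultConfiguration:
    ApiKey: ''   # inject per deployment rather than hard-coding the key in the recipe
Manifests:
  - Platform:
      os: linux
    Lifecycle:
      Run: |
        docker run --rm --runtime=nvidia --gpus all \
          -p 1337:1337 \
          public.ecr.aws/z9xxxxxxxt5/inference-container-jetson-orin:d7eexxxxxfe9253 \
          --api-key '{configuration:/ApiKey}' \
          --run-http-server 1337

You'd probably also want a Shutdown lifecycle step (e.g. docker stop on a named container) so redeployments stop the old container cleanly.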
Run the Docker Container on Jetson Orin Nano:
Once deployed, run the Docker container on each Jetson Orin Nano device. What you run on your Orins will look something like this:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9xxxxxxxt5/inference-container-jetson-orin:d7eexxxxxfe9253 \
--api-key ei_6964d445ef2xxxxxxxxxxx3514109a \
--run-http-server 1337
This command will start your model’s HTTP server, allowing it to receive inference requests.
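As a quick smoke test once the server is up, you could POST a feature vector to it. One caveat: the endpoint path below is the one the Edge Impulse Linux runner's HTTP server documents (/api/features), so I'd double-check it against the docs for your container version; the three zeros are just placeholders, since the features array has to match your impulse's expected input length.

curl -X POST http://localhost:1337/api/features \
  -H "Content-Type: application/json" \
  -d '{"features": [0, 0, 0]}'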
Does this make sense? Let me know how that would fit in your workflow…