Don't get how to actually run EI in production

I read the tutorial, recorded some data, and trained a model. But I can’t seem to find pointers on the last step:
How do I now deploy the model and have it run continuously while streaming the predictions somewhere else in the cloud?

Not expecting full answer. Appreciate any guidance.

From the Studio Deployment page for your project you build a model that you then load onto your microcontroller, Linux machine, etc. If you build a library instead, you incorporate that library into your custom code, compile it, and flash the firmware onto your device. Note that Edge Impulse also supports WebAssembly, so you can run your model on any device that can render web pages.

I have built and flashed the firmware with EI. I can perform inference on device as shown in this log:

(pyenv) user@hostname:~$ edge-impulse-run-impulse 
Edge Impulse impulse runner v1.17.3
[SER] Connecting to /dev/ttyACM0
[SER] Serial is connected, trying to read config...
[SER] Retrieved configuration
[SER] Device is running AT command version 1.8.0
[SER] Started inferencing, press CTRL+C to stop...
LSE
Inferencing settings:
	Interval: 50.0000ms.
	Frame size: 600
	Sample length: 10000.00 ms.
	No. of classes: 2
Starting inferencing, press 'b' to break
Starting inferencing in 2 seconds...
Predictions (DSP: 44 ms., Classification: 0 ms., Anomaly: 0 ms.): 
    active: 	0.304688
    inactive: 	0.695312

My question is: how do I read the inference from some other local piece of software?
Am I supposed to read from the serial device directly? Or am I supposed to launch an edge-impulse-run-impulse process and read from its stdout?

I see a few possibilities, assuming you want to run the whole solution on Linux.

  • Pipe the output of edge-impulse-run-impulse to a file, set up a monitor that alerts on new data in that file, and then act on it.
  • Use the Runner.
    • Assuming you are doing audio, run this Python script. In the for-loop near line 55, add your code to send the results to another application running locally or remotely, assuming you have LoRa, Amazon Sidewalk, Bluetooth, a WiFi network, or Internet access.
  • Deploy a WASM solution.

Thank you for your answer. I investigated the suggestions thoroughly and ended up going with your first suggestion.

I was hopeful about the Runner idea, but found that while the audio Runner example can both pull data from the device and classify it, the custom runner example only works with data already on disk.

I subsequently realized that I can talk to the device over serial and get the data that way, but at that point I might as well go back to suggestion 1.
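For reference, the serial route can be sketched like this. It assumes pyserial is installed, that the device enumerates as /dev/ttyACM0 at 115200 baud, and that the firmware is already streaming prediction lines like those in my log (all assumptions; check your setup):

```python
# Sketch only: read prediction lines straight from the serial device instead of
# going through edge-impulse-run-impulse. Assumes pyserial is installed and
# that /dev/ttyACM0 at 115200 baud matches your device -- adjust as needed.
import re

# Matches prediction lines such as "    inactive: \t0.695312" from the log above.
PREDICTION_RE = re.compile(r"^\s*(\w+):\s*([0-9.]+)\s*$")

def parse_prediction_line(line):
    """Return (label, score) for a prediction line, or None for anything else."""
    m = PREDICTION_RE.match(line)
    return (m.group(1), float(m.group(2))) if m else None

def read_predictions(port="/dev/ttyACM0", baud=115200):
    import serial  # pyserial; imported here so the parser is usable without it
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            line = ser.readline().decode(errors="replace")
            parsed = parse_prediction_line(line)
            if parsed:
                yield parsed
```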

The WASM suggestion is the one I understand least.
