As I’m working on Edge AI and TinyML content, I’m seeing that there’s a lot of interest in deploying TensorFlow models to single board computers (e.g. Raspberry Pi). If it’s not in the works already, I would like to request a feature that allows users to download a starter Python library/code that performs feature extraction and inference for an impulse project (just like you have for C++). I think this would help a lot of people looking to experiment with or deploy quickly to SBCs.
Hey! Would this work: https://github.com/wasmerio/python-ext-wasm with the webassembly package you get from the deploy tab?
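For reference, a minimal sketch of loading that WebAssembly export with wasmer-python (assuming `pip install wasmer wasmer_compiler_cranelift`; the file name and export symbols below are placeholders, not the real ones from the deploy tab):

```python
import os

def load_wasm_module(path):
    """Load a WebAssembly module with wasmer-python.

    Returns a wasmer Instance, or None when the file or the wasmer
    package is missing, so this sketch degrades gracefully.
    """
    if not os.path.exists(path):
        return None
    try:
        from wasmer import Store, Module, Instance
    except ImportError:
        return None  # pip install wasmer wasmer_compiler_cranelift
    with open(path, "rb") as f:
        wasm_bytes = f.read()
    store = Store()
    module = Module(store, wasm_bytes)
    instance = Instance(module)
    # The exported entry points depend on how the impulse was compiled;
    # inspect them with: print(dir(instance.exports))
    return instance

# Hypothetical file name for the deploy-tab export
instance = load_wasm_module("edge-impulse-standalone.wasm")
```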
I like the webassembly approach as it’s self contained, fast, and runs everywhere without dependency hell.
Ah! Interesting, I’ll have to check that out. Does the webassembly export from Edge Impulse include feature extraction or just the model and inference engine?
It contains the whole impulse! Both the signal processing blocks and any machine learning models.
Hi! This feature would also be very interesting to me since I’m working with several other Python ML approaches in parallel. I’m not familiar with webassembly & wasmer yet, but will find out how this works. @ShawnHymel Have you already tried this out?
- Install wasienv
- Retrieve the standalone inferencing C++ example
- Update the Makefile with:
CC ?= wasicc
CXX ?= wasic++
CXXFLAGS += -DEI_PORTING_POSIX
CXXFLAGS += -fno-exceptions
- Build the C++ library for your project in the Edge Impulse Studio and unzip the content into the standalone inferencing example directory
- Run make, and it should generate a .wasm file in the build/ folder
- You can then run it directly with the wasmer command line (% wasmer edge-impulse-standalone.wasm), or invoke it from your Python app using wasmer-python.
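To script that last step from Python with only the standard library, one could shell out to the wasmer CLI. A sketch, assuming the standalone example takes the raw feature window as a single comma-separated argument (the exact argument format depends on the example's main()):

```python
import shutil
import subprocess

def build_wasmer_cmd(wasm_path, features):
    """Build the wasmer CLI invocation. `features` is the raw window of
    floats, passed as one comma-separated argument (an assumption about
    the standalone example's main())."""
    return ["wasmer", wasm_path, "--", ",".join(str(f) for f in features)]

def run_impulse(wasm_path, features):
    """Run the impulse via the wasmer CLI and return its stdout."""
    if shutil.which("wasmer") is None:
        raise RuntimeError("wasmer CLI not found on PATH")
    cmd = build_wasmer_cmd(wasm_path, features)
    return subprocess.run(cmd, capture_output=True, text=True).stdout
```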
That’s excellent help, thank you!
FYI we now have full SDKs for Python, Node, Go and C++: https://docs.edgeimpulse.com/docs/edge-impulse-for-linux
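With the Linux Python SDK, running inference against a downloaded .eim model looks roughly like this (a sketch assuming `pip install edge_impulse_linux`; `modelfile.eim` and the feature window are placeholders):

```python
import os

def classify_window(model_path, features):
    """Classify one window of raw features with the Edge Impulse Linux
    Python SDK. Returns None if the model file or the SDK package is
    missing, so this sketch degrades gracefully."""
    if not os.path.exists(model_path):
        return None
    try:
        from edge_impulse_linux.runner import ImpulseRunner
    except ImportError:
        return None  # pip install edge_impulse_linux
    runner = ImpulseRunner(model_path)
    try:
        runner.init()          # loads the model and returns its metadata
        return runner.classify(features)
    finally:
        runner.stop()

# Placeholder model path and raw feature window
result = classify_window("modelfile.eim", [0.0] * 3)
```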