We are developing a camera-trap-style device that identifies animals for conservation projects. Different customers will be interested in different species, and we want to avoid recompiling and flashing the microcontroller with a different model for each customer, so we are looking for a simpler way to let customers install their own model onto a common platform.
I recognise this is not an AI question but a generic embedded-programming question; I ask it here in case someone can point me in a useful direction.
For example, I am thinking the customer might bring their model as a compiled library on an SD card; the device could copy it to a region of flash set aside for it, and some kind of wrapper layer on the core device would know how to interact with the dynamically loaded library…
Indeed, this is more of a fleet-management problem.
If you are using Linux-based systems (such as a Raspberry Pi), you can have a look at Balena.io. @mithundotdas, @arijit_das_student, and @aurel wrote a tutorial on how to do this, and you can create different versions that can even be deployed with a one-click button:
Hi @acutetech, another option (if you’re on a Linux-based device) is to use the Edge Impulse for Linux model files. These are completely self-contained models that you can put anywhere (including an SD card) and just pass to your application. Done.
On microcontrollers we can do something similar by disabling the EON Compiler. This gives you a TensorFlow Lite model file (just the neural network, so you’re limited to changing only that part), which is normally placed in internal flash on the MCU, but you can easily change this to load it from the SD card instead.