Need your advice here:
I need to demonstrate a face recognition model that can be quickly retrained (transfer learning) to identify new faces and transported over a low-data-rate (1 Mbps) wireless network to a Raspberry Pi 4 device in real time. The whole process of retraining and transporting should take no more than 3 minutes.
The objective of this exercise is to show that face recognition models can be transported over low-data-rate wireless networks such as IEEE 802.15.4 and run on tiny, resource-constrained devices.
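A quick back-of-envelope check of the transport half of the budget, assuming (for illustration only) a 1 MB int8-quantized model and ~70% usable link throughput after protocol overhead:

```python
# Back-of-envelope: how long does model transport take over the wireless link?
# The model size and the 0.7 link-efficiency factor are assumptions for
# illustration; substitute your actual quantized model size and measured
# throughput.

def transfer_time_s(model_bytes: int, link_bps: float, efficiency: float = 0.7) -> float:
    """Seconds to move model_bytes over a link, allowing for protocol overhead."""
    return (model_bytes * 8) / (link_bps * efficiency)

# e.g. a 1 MB quantized face-embedding model over a 1 Mbps link
t = transfer_time_s(1_000_000, 1_000_000)
print(f"{t:.0f} s")  # ~11 s of the 180 s budget, leaving ~169 s for retraining
```

This suggests the transport itself is cheap at 1 Mbps as long as the model is aggressively quantized; the retraining step dominates the 3-minute window.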
Please advise how I can go about this. Thank you!
Proposed Alternate Solution
Instead of forcing a square through a round hole, go with hardware that supports your mission. Revolutionize your machine learning capabilities with Edge Impulse! Our cutting-edge platform fully supports BrainChip MetaTF and its powerful on-device retraining capabilities. Don’t settle for slow, outdated training methods. Upgrade to Edge Impulse today and see the difference for yourself. Learn more about the powerful combination of BrainChip and Edge Impulse at the following links: here and here.
Yes, I can change the edge hardware. But my question is about efficiently transporting models over a low-data-rate wireless network.
Yes, I understand you want to retrain a model with a round trip to the cloud. But hardware now exists that eliminates the need for a round trip (you’re chasing a soon-to-be-obsolete solution). Retraining can now be done on the deployed device, eliminating the round trip entirely. On-device retraining allows almost any problem to be solved (imagine an undersea, disconnected scenario). If a round trip is required, the range of problems you can solve is limited.
The solution I see for you is to:
- Set up an AWS Lambda function to trigger on new S3 bucket events, i.e., when a new or unrecognized face is detected and uploaded to S3.
- Use the Edge Impulse API to initiate a new model training session with the updated data.
- Use AWS IoT Greengrass to reprogram the edge device, i.e., download an updated model to it.
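The Lambda hook in the first two steps might look something like the sketch below. Treat it as a sketch, not a drop-in implementation: `EI_PROJECT_ID` and `EI_API_KEY` are hypothetical environment variables, and the `/jobs/retrain` path reflects the Edge Impulse API as I understand it, so verify it against the current API reference.

```python
import json
import os
import urllib.request

# Hypothetical AWS Lambda handler: fires on an S3 upload event and kicks
# off an Edge Impulse retrain job via the EI API.

def extract_s3_key(event: dict) -> str:
    """Pull the uploaded object's key out of an S3 trigger event."""
    return event["Records"][0]["s3"]["object"]["key"]

def lambda_handler(event, context):
    project_id = os.environ["EI_PROJECT_ID"]  # assumption: set in Lambda config
    api_key = os.environ["EI_API_KEY"]        # assumption: set in Lambda config

    # The S3 event tells us which object (new face image) arrived.
    key = extract_s3_key(event)
    print(f"New sample uploaded: {key}")

    # Kick off retraining; endpoint path is an assumption to verify.
    req = urllib.request.Request(
        f"https://studio.edgeimpulse.com/v1/api/{project_id}/jobs/retrain",
        method="POST",
        headers={"x-api-key": api_key},
        data=b"",
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return {"statusCode": 200, "body": json.dumps(body)}
```

In practice you would also upload the new sample to the EI project (via the ingestion service) before triggering the retrain, so the job actually sees the new face.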
@louis, can you find someone on the EI team who can show us how to use the Edge Impulse API to initiate a new model training session? I mentioned S3 above since EI does support S3.
The Edge Impulse (EI) API will do what you want as far as the ML is concerned. Just read the top paragraph here.
To add more EI implementation details to my previous post, the workflow might look like:
- Detect that new data has come in.
- Use the EI Ingestion API to upload the new data to your EI project.
- Use the EI API retrain feature to train a new model with the new data. (To meet your 3-minute budget, you’ll probably need to train on a multi-GPU setup with some form of parallelism.)
- Use the EI API’s Deploy Pretrained Model feature to push the model to a network server.
- When the network server detects that a new library is available, it performs an over-the-air update to your device.
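The retrain-and-wait portion of the workflow above could be sketched as follows. The endpoint paths and the `finished` status field are assumptions based on the public EI API, so check them against the current reference; the ingestion upload and OTA hand-off are left out of this sketch.

```python
import json
import time
import urllib.request

STUDIO = "https://studio.edgeimpulse.com/v1/api"

def remaining_budget(start: float, now: float, budget_s: float = 180.0) -> float:
    """Seconds left in the 3-minute retrain-and-transport window."""
    return max(0.0, budget_s - (now - start))

def _call(path: str, api_key: str, method: str = "GET") -> dict:
    # Thin wrapper over the EI Studio API; paths are assumptions to verify.
    req = urllib.request.Request(
        f"{STUDIO}{path}", method=method,
        headers={"x-api-key": api_key},
        data=b"" if method == "POST" else None)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def retrain_within_budget(project_id: str, api_key: str,
                          budget_s: float = 180.0, poll_s: float = 5.0) -> int:
    """Start a retrain job and poll it, failing fast if the budget is blown."""
    job = _call(f"/{project_id}/jobs/retrain", api_key, method="POST")
    job_id = job["id"]  # assumption: response carries the job id here
    start = time.monotonic()
    while remaining_budget(start, time.monotonic(), budget_s) > 0:
        status = _call(f"/{project_id}/jobs/{job_id}/status", api_key)
        if status.get("job", {}).get("finished"):
            return job_id
        time.sleep(poll_s)
    raise TimeoutError(f"retrain job {job_id} exceeded {budget_s:.0f} s budget")
```

Polling with a hard deadline matters here: if training overruns the 3-minute window, you want to fail fast and fall back rather than push a late model over the 802.15.4 link.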