So my https://www.gearbots.org/ after-school group is enjoying Edge Impulse much more than my from-scratch method of doing Machine Learning for high school students. As one grade 9 student says, “Edge Impulse is like what you teach, Mr. Ellis, just so much easier.”
I get students to make Machine Learning models using TensorFlow.js, convert them to TensorFlow Keras .pb files, then convert these to TFLite and finally to C header files to load on microcontrollers.
One of these students exported a Keras TensorFlow .pb file from Edge Impulse and asked if it could be loaded on a webpage using TensorFlow.js. So we converted the .pb model file into a TensorFlow.js .json model file with a supporting .bin weights file. Not really sure how to load it on a webpage, but reasonably sure we can do it.
Wondering if anyone has tried this before and has any suggestions on getting the files working on a webpage.
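For reference, this is roughly what we are planning to try, assuming the converter produced a graph-style model.json next to the page; the file name, input shape and function name below are just placeholders, not our actual export:

// Minimal sketch of loading the converted model in the browser with TensorFlow.js.
// Assumes the TensorFlow.js library is already loaded on the page (e.g. via a script tag)
// and that model.json plus its .bin weight shard(s) sit next to this page on the server.
async function runConvertedModel() {
  // A converted .pb (frozen / saved model) is normally loaded as a graph model;
  // a converted Keras .h5 would use tf.loadLayersModel() instead.
  const model = await tf.loadGraphModel('./model.json');

  // Placeholder input: replace the shape and values with the features
  // the Edge Impulse model actually expects.
  const input = tf.zeros([1, 33]);

  // For a single-output model, predict() returns one tensor of class scores.
  const output = model.predict(input);
  output.print();
}

runConvertedModel();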
I would like to get both TensorFlow.js and WebAssembly working with Edge Impulse. Presently I haven’t got WebAssembly working with https://gitpod.io, my browser Docker cloud of choice ($100 free hours a month).
I think I can do both, I just need to find time for it. I will research your links and see if there is something I have been missing.
I had to look up DSP, but I still don’t really understand what it is (audio processing?). Is the WebAssembly a server running a backend? Now I am even more curious whether TensorFlow.js can do most of the processing on the client side.
@Rocksetta, the WebAssembly package already runs in the browser. No need for anything else. It contains all signal processing (DSP) code to extract features from audio or vibration data, and the complete neural networks and other ML blocks. See the guide here: https://docs.edgeimpulse.com/docs/through-webassembly-browser
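Roughly, the flow in a webpage looks like the sketch below. It is only an outline: EdgeImpulseClassifier stands in for the small wrapper class from that guide, and the raw feature array has to match whatever your impulse expects.

// Sketch of classifying in the browser with the WebAssembly export.
// Assumes edge-impulse-standalone.js (and its .wasm file) from the WebAssembly
// deployment are already loaded on the page, and that EdgeImpulseClassifier is
// the wrapper class from the guide linked above.
async function classifySample(rawFeatures) {
  const classifier = new EdgeImpulseClassifier();
  await classifier.init(); // waits for the WASM runtime to be ready

  // rawFeatures is a plain array of numbers, like the "Raw features"
  // you can copy from a sample in the studio.
  const result = classifier.classify(rawFeatures);

  // result.results is expected to be a list of { label, value } pairs.
  for (const r of result.results) {
    console.log(`${r.label}: ${r.value.toFixed(4)}`);
  }
}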
Getting better with the WebAssembly. Yes, it runs as a regular webpage, but only from a true online webserver, unfortunately not from GitHub’s special website, GitHub Pages, which would have been nice. I have the shower sounds demo working on my Gitpod, so that is good, and I will share it when I tidy it up.
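For anyone trying the same thing, a minimal Node static server along these lines should be enough to serve the exported files from a workspace like Gitpod (the port and file-type table below are just example choices):

// Minimal sketch of a Node.js static file server for trying the WebAssembly
// export locally or in a cloud workspace. Serves the current folder on port 8080.
const http = require('http');
const fs = require('fs');
const path = require('path');

const types = {
  '.html': 'text/html',
  '.js': 'text/javascript',
  '.wasm': 'application/wasm', // the .wasm file should be served with this MIME type
  '.json': 'application/json',
  '.bin': 'application/octet-stream',
};

http.createServer((req, res) => {
  const file = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
  fs.readFile(file, (err, data) => {
    if (err) {
      res.writeHead(404);
      res.end('Not found');
      return;
    }
    const type = types[path.extname(file)] || 'application/octet-stream';
    res.writeHead(200, { 'Content-Type': type });
    res.end(data);
  });
}).listen(8080, () => console.log('Serving on http://localhost:8080'));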
Not finding the supporting .js files for running any of the WebAssembly camera projects. I will keep looking at the docs.
Calling the model should be the same for any model - whether it’s vision, audio or vibration - e.g. that’s what we do for classifying images in the mobile client (see https://github.com/edgeimpulse/mobile-client). We pull down the WebAssembly build through the API, then feed images in that we capture from the camera.
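As an illustration only (not a copy of the mobile-client code), grabbing a frame from a video element and handing it to the classifier could look like this; how the pixels get packed into the feature array depends on your impulse, so check the mobile-client source for the exact format:

// Sketch: capture a frame from a <video> element and classify it.
// Assumes a running camera stream and an already-initialized classifier
// object with a classify(features) method (see the WebAssembly guide).
function classifyVideoFrame(videoEl, classifier, width, height) {
  // Draw the current frame onto an off-screen canvas at the model's input size.
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(videoEl, 0, 0, width, height);

  // Read back the RGBA pixels.
  const { data } = ctx.getImageData(0, 0, width, height);

  // Pack pixels into the raw feature array. Packing each pixel as a single
  // 0xRRGGBB number is an assumption here; verify against the mobile-client
  // source for the format your model was trained with.
  const features = [];
  for (let i = 0; i < data.length; i += 4) {
    features.push((data[i] << 16) + (data[i + 1] << 8) + data[i + 2]);
  }

  return classifier.classify(features);
}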
name: gh-pages
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Setup .NET Core
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: 3.1.301
      - name: Publish with dotnet
        run: dotnet publish --configuration Release --output build
      - name: Deploy to Github Pages
        uses: JamesIves/github-pages-deploy-action@releases/v3
        with:
          ACCESS_TOKEN: ${{ secrets.ACCESS_TOKEN }}
          BASE_BRANCH: development # The branch the action should deploy from.
          BRANCH: master # The branch the action should deploy to.
          FOLDER: build/wwwroot # The folder the action should deploy.
          SINGLE_COMMIT: true
Which hopefully just works, but it is above my abilities. It looks really interesting; I was wondering how GitHub Pages ran specific actions. I used to just make the GitHub repo private and run it on https://www.heroku.com/
I will dig into the camera part; your link has given me some ideas.
I know what the problem is and have run into it before, but I can’t remember my solution. Here is a link to my forked version of your shower tracker GitHub repo.
If you open the console you can see that the asset links are looking in my root GitHub site instead of in my repository subfolder. GitHub Pages does not handle relative links very well. It should be an easy fix to just change all the links.
So I put the relative reference “./” in front of most of the links in the index.html file and it now sort of works. A few fonts are not loading properly.
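For anyone else hitting this, the change was along these lines (the file name is just an example, not the exact one from the demo):

<!-- Before: a leading "/" makes the browser look at the root of username.github.io -->
<script src="/edge-impulse-standalone.js"></script>

<!-- After: "./" keeps the path relative to the repository subfolder the page is served from -->
<script src="./edge-impulse-standalone.js"></script>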