ML Model Compression Conversion

So my https://www.gearbots.org/ after-school group is enjoying Edge Impulse much more than my from-scratch method of teaching Machine Learning to high school students. As one grade 9 student says, “Edge Impulse is like what you teach, Mr. Ellis, just so much easier.”

I get students to make Machine Learning models using TensorflowJS, convert them to TensorFlow Keras .pb files, then convert those to TensorFlow Lite and finally to C header files to load onto microcontrollers.

One of these students exported a Keras TensorFlow .pb file from Edge Impulse and asked if it could be loaded on a webpage using TensorflowJS. So we converted the .pb model file into a TensorFlow.js model.json file with a supporting .bin weights file. Not really sure how to load it on a webpage, but reasonably sure we can do it.
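For anyone trying the same conversion, loading the model.json output in the browser might look roughly like this. This is just a sketch: the model URL, the input shape, and the `classify()` name are placeholders, not what Edge Impulse exports.

```javascript
// Sketch of loading a tfjs-converter output (model.json + .bin shards) in the
// browser. Assumes TensorFlow.js is loaded first, e.g. via
// <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>.

// Pure helper: index of the highest score in a plain array of class scores.
function argMax(scores) {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return best;
}

async function classify() {
  // 'model.json' and the [1, 33] input shape are placeholders; use whatever
  // your exported impulse actually expects.
  const model = await tf.loadGraphModel('model.json');
  const input = tf.zeros([1, 33]); // stand-in for real feature values
  const scores = await model.predict(input).data();
  console.log('top class index:', argMax(Array.from(scores)));
}

// Only run where the global `tf` exists (i.e. in the browser page).
if (typeof tf !== 'undefined') classify();
```

If the export was a Keras layers model rather than a frozen graph, `tf.loadLayersModel` would be the call instead of `tf.loadGraphModel`.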

Wondering if anyone has tried this before and has any suggestions on getting the files working on a webpage.

Hoping to get something like this running with our impulse https://www.rocksetta.com/tensorflowjs/tfjs-models/blazeface/index.html

Hi @Rocksetta,

Thanks for the positive feedback! Really glad to hear that your students enjoy using Edge Impulse.

I haven’t used TensorFlow JS but you can also look at our WebAssembly deploy option: https://docs.edgeimpulse.com/docs/through-webassembly-browser

Also, this tutorial with Balena might be helpful: https://github.com/edgeimpulse/balena-cam-tinyml. Inference runs within NodeJS and results are sent back to a Python webapp.

Aurelien

Yeah I second Aurelien, the WebAssembly package will have all DSP code + ML code in one convenient package that runs in the browser.

And:

“Edge Impulse is like what you teach Mr. Ellis, just so much easier.”

I’m going to put this in a presentation :wink:


I would like to get both TensorflowJS and WebAssembly working with Edge Impulse. Presently I haven’t got WebAssembly working with https://gitpod.io, my browser Docker cloud of choice ($100 free hours a month).

I think I can do both, I just need to find time for it. I will research your links and see if there is something I am missing.

@Rocksetta, note that you might be able to get the ML model running in the browser with TF.js but you’ll still miss the DSP part of the code.

I had to look up DSP, but I still don’t really understand what it is (audio processing?). Is the WebAssembly a server running a backend? Now I am even more curious whether TensorflowJS can do most of the processing on the client side.

@Rocksetta, the WebAssembly package already runs in the browser. No need for anything else. It contains all signal processing (DSP) code to extract features from audio or vibration data, and the complete neural networks and other ML blocks. See the guide here: https://docs.edgeimpulse.com/docs/through-webassembly-browser
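For others following along, the browser side of an Emscripten-built WebAssembly package tends to follow one pattern: define a `Module` object, then load the glue script, and your code runs once the .wasm has been fetched and compiled. The file name `edge-impulse-standalone.js`, the result shape, and the `formatResults` helper below are assumptions for illustration; check the files in your own export.

```javascript
// Sketch of hooking into an Emscripten-built WebAssembly module in a page.
// Emscripten glue code looks for a global Module object and fires its
// onRuntimeInitialized callback once the .wasm file has loaded and compiled.
var Module = {
  onRuntimeInitialized: function () {
    console.log('WASM runtime ready; the classifier can be called now');
    // ...call the classifier entry points shipped with your export here.
  }
};
// In the page, include the glue script AFTER defining Module:
// <script src="edge-impulse-standalone.js"></script>

// Pure helper (hypothetical result shape): turn { label: probability } pairs
// into one readable line for showing on the page.
function formatResults(results) {
  return Object.entries(results)
    .map(([label, p]) => label + ': ' + p.toFixed(2))
    .join(', ');
}
```

So "runs in the browser" really does mean client-side only: the .wasm and .js files are static assets, and no backend server is involved beyond whatever serves the files.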


Getting better with the WebAssembly. Yes, it runs as a regular webpage, but only from a true online webserver; unfortunately not from GitHub’s special website feature called GitHub Pages, which would have been nice. I have the shower-sounds demo working on my Gitpod, so that is good, and I will share it when I tidy it up.

Not finding the supporting .js files for running any of the WebAssembly camera projects. I will keep looking at the docs.

but only from a true online webserver

Why would this not work on GH Pages? There seem to be WebAssembly projects hosted there (e.g. https://mizrael.github.io/BlazorOnGitHubPages/)

Not finding the supporting .js files for running any of the WebAssembly camera projects. I will keep looking at the docs.

Calling the model should be the same for any model - whether it’s vision, audio or vibration - e.g. that’s what we do for classifying images in the mobile client (see https://github.com/edgeimpulse/mobile-client). We pull down the WebAssembly build through the API, then feed images in that we capture from the camera.
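For the camera case specifically, the extra work is mostly turning pixels into a flat feature array before calling the classifier. A hedged sketch of that step follows; the 0xRRGGBB packing mirrors what the mobile-client repo appears to do for image models, and the canvas id, 96x96 size, and `classifier.classify` call are hypothetical, so verify against your own export.

```javascript
// Sketch: turn canvas RGBA pixel bytes into a flat feature array of packed
// 0xRRGGBB integers (one number per pixel, alpha channel dropped).
function imageDataToFeatures(data) {
  // `data` is the RGBA byte array from ctx.getImageData(...).data
  const features = [];
  for (let i = 0; i < data.length; i += 4) {
    const r = data[i], g = data[i + 1], b = data[i + 2]; // skip data[i + 3] (alpha)
    features.push((r << 16) | (g << 8) | b);
  }
  return features;
}

// In the browser (hypothetical element id and input size):
// const ctx = document.getElementById('camera').getContext('2d');
// const pixels = ctx.getImageData(0, 0, 96, 96).data;
// classifier.classify(imageDataToFeatures(pixels));
```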

You are right, but it does seem a bit complex. You need to add this .yml file: https://github.com/mizrael/BlazorOnGitHubPages/blob/development/.github/workflows/gh-pages.yml

name: gh-pages

on: [push]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.301
    - name: Publish with dotnet
      run: dotnet publish --configuration Release --output build
    - name: Deploy to Github Pages
      uses: JamesIves/github-pages-deploy-action@releases/v3
      with:
        ACCESS_TOKEN: ${{ secrets.ACCESS_TOKEN }}
        BASE_BRANCH: development # The branch the action should deploy from.
        BRANCH: master # The branch the action should deploy to.
        FOLDER: build/wwwroot # The folder the action should deploy.
        SINGLE_COMMIT: true


Which hopefully just works, but it is above my abilities. It looks really interesting; I was wondering how GitHub Pages ran specific actions. I used to just make the GitHub repository private and run it on https://www.heroku.com/

I will dig into the Camera part, your link has given me some ideas.

I think that’s just what that person needed for their .NET deploy; the underlying WebAssembly project does not seem to be affected by it.

Do you see an error when deploying on GH Pages? Would be happy to take a look if you give me a link.

I know what the problem is and have run into it before, but I can’t remember my solution. Here is a link to my forked version of your shower-tracker GitHub:

https://hpssjellis.github.io/demo-shower-timer/webapp/index.html

If you open the console you can see that the asset links are looking in my root GitHub site instead of in my repository subfolder. GitHub Pages does not handle root-absolute links very well. It should be an easy fix to just change all the links.

So I put the relative reference “./” in front of most of the links in the index.html file and it now sort of works. A few fonts are not loading properly.
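For anyone hitting the same thing: GitHub Pages project sites live under /username.github.io/repo-name/, so root-absolute paths like /main.js resolve to the account root instead of the repo subfolder. The fix above is mechanical, and can be sketched as a tiny pure function (the example paths are made up):

```javascript
// Rewrite root-absolute asset paths ("/main.js") to relative ones ("./main.js")
// so they resolve inside a GitHub Pages project subfolder rather than the
// account root. Protocol-relative URLs ("//cdn.example.com/x.js") are left alone.
function toRelative(href) {
  if (href.startsWith('/') && !href.startsWith('//')) {
    return '.' + href;
  }
  return href;
}
```

For example, `toRelative('/edge-impulse-standalone.wasm')` gives `'./edge-impulse-standalone.wasm'`, while already-relative and protocol-relative links pass through unchanged.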

https://hpssjellis.github.io/my-examples-of-edge-impulse/public/index.html


Great, WebAssembly is working for me there as well. :white_check_mark:
