I used the platform to generate a model and loved how intuitive it all was. The hot-word detection worked a treat in the in-browser test, and I can’t wait to use it.
However, once I downloaded the WASM package, I was a little confused as to how to replicate the microphone/audio piece - the packaged HTML file seemed to handle only the classification step, not the audio capture.
I attempted to build my own audio listener to pass the data through to the classifier, but I got wildly different results from the Edge Impulse classifier. I can see the repo for the classifier is open source and linked at the bottom, and I found the Microphone class, so I could possibly extract that. I just wondered whether that would be the correct way to do it, and how I should collate the data before passing it to classify or classifyContinuous (I’m not sure which to use).
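For context, here’s roughly what I was trying - this is just my sketch, and the `classifier.classifyContinuous` call, the 16 kHz sample rate, and the int16 sample format are my assumptions based on the packaged example, not anything confirmed from the repo:

```javascript
// Sketch of my approach: convert mic audio to 16-bit PCM and collate it
// into fixed-size windows before handing it to the classifier.
// NOTE: the classifier API name, sample rate, and int16 format are
// assumptions on my part -- the Microphone class in the repo may differ.

// Convert Web Audio float samples (-1..1) to signed 16-bit PCM values,
// which I assumed matches the format the model was trained on.
function floatTo16BitPCM(float32Samples) {
  const out = new Int16Array(float32Samples.length);
  for (let i = 0; i < float32Samples.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}

// Accumulate incoming audio chunks and emit fixed-size overlapping
// windows (e.g. one second at 16 kHz = 16000 samples, sliding by a
// stride) so each window can be classified in turn.
class WindowCollator {
  constructor(windowSize, strideSize) {
    this.windowSize = windowSize;
    this.strideSize = strideSize;
    this.buffer = [];
  }
  push(chunk) {
    const windows = [];
    for (const sample of chunk) this.buffer.push(sample);
    while (this.buffer.length >= this.windowSize) {
      windows.push(this.buffer.slice(0, this.windowSize));
      this.buffer = this.buffer.slice(this.strideSize);
    }
    return windows;
  }
}

// In the browser I wired it up roughly like this (again, hypothetical):
//   const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
//   const ctx = new AudioContext({ sampleRate: 16000 });
//   // ...in the audio callback:
//   //   const pcm = floatTo16BitPCM(inputChannelData);
//   //   for (const win of collator.push(pcm)) {
//   //     classifier.classifyContinuous(win);  // assumed API
//   //   }
```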
Going to play with it some more to see if I can get to the bottom of it, but I thought it might be worth asking here first in case I’m going down the wrong path. Any help anyone can give would be tremendous. Thank you!