Generate Raw Features from Audio using API

I want to create a website which will record audio. Then I want to generate raw features for the audio file, test it with my model, and show the result on the website itself. How can I do so?

Hi @BoundLess,

I’ve got some very simple examples of websites used to collect audio samples here: GitHub - ShawnHymel/simple-recorderjs-projects. They are based on the addpipe demo (GitHub - addpipe/simple-recorderjs-demo: a simple HTML5/JS demo that uses Recorder.js to record audio as uncompressed PCM (WAV) and POST it to a server-side script), and I’m sure you can find more out there.

You can likely ingest a new sample (Ingestion API - Edge Impulse API) and then fetch the features for that sample (Features For Sample - Edge Impulse API) to display the MFCC/MFE features. If that doesn’t work, you could use something like the python_speech_features library (see the python_speech_features 0.1.0 documentation) to generate MFCCs on the back end that come very close to what you’re seeing in Edge Impulse.
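If you go the back-end route, the MFCC pipeline itself is small enough to sketch with NumPy alone: frame the signal, take the power spectrum, apply a mel filterbank, take the log, and apply a DCT. This is a simplified illustration of what libraries like python_speech_features compute; the frame sizes, filter count, and coefficient count below are common defaults, not values taken from Edge Impulse:

```python
import numpy as np

def mfcc_features(signal, rate, frame_len=0.025, frame_step=0.01,
                  nfilt=26, numcep=13, nfft=512):
    """Minimal MFCC pipeline: framing -> power spectrum -> mel
    filterbank -> log -> DCT-II. Parameter defaults are assumptions."""
    flen = int(round(frame_len * rate))
    fstep = int(round(frame_step * rate))
    nframes = max(1, 1 + (len(signal) - flen) // fstep)
    # Slice the signal into overlapping, windowed frames
    frames = np.stack([signal[i * fstep : i * fstep + flen]
                       for i in range(nframes)]) * np.hamming(flen)
    # Periodogram estimate of the power spectrum
    pspec = (np.abs(np.fft.rfft(frames, nfft)) ** 2) / nfft
    # Triangular mel-spaced filterbank
    hz2mel = lambda hz: 2595 * np.log10(1 + hz / 700.0)
    mel2hz = lambda m: 700 * (10 ** (m / 2595.0) - 1)
    mel_pts = np.linspace(hz2mel(0), hz2mel(rate / 2), nfilt + 2)
    bins = np.floor((nfft + 1) * mel2hz(mel_pts) / rate).astype(int)
    fbank = np.zeros((nfilt, nfft // 2 + 1))
    for i in range(nfilt):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fbank[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[i, k] = (r - k) / max(r - c, 1)
    feats = pspec @ fbank.T
    feats = np.log(np.where(feats == 0, np.finfo(float).eps, feats))
    # Orthonormal DCT-II to decorrelate the log filterbank energies
    N = feats.shape[1]
    n = np.arange(N)
    basis = np.cos(np.pi * np.outer(np.arange(N), n + 0.5) / N)
    ceps = feats @ basis.T
    ceps[:, 0] *= np.sqrt(1.0 / N)
    ceps[:, 1:] *= np.sqrt(2.0 / N)
    return ceps[:, :numcep]
```

Note that if you want the features to match your Edge Impulse block exactly, you will need to mirror its frame length, stride, and filter settings rather than these defaults.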

To run inference, you can use the classify API call (https://docs.edgeimpulse.com/reference/edge-impulse-api/classify/classify-sample) or deploy the whole model as WebAssembly to your site.
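As a sketch of the API route, the classify call is a plain authenticated GET against the project's REST endpoint. The exact path below is inferred from the docs link above, and the project ID, sample ID, and API key are placeholders; verify them against the API reference for your account:

```python
# Build (but do not send) the Edge Impulse "classify sample" request.
# Base URL and path are assumptions based on the public API docs;
# IDs and the key below are placeholders.
from urllib.request import Request

API_BASE = "https://studio.edgeimpulse.com/v1/api"

def build_classify_request(project_id: int, sample_id: int,
                           api_key: str) -> Request:
    url = f"{API_BASE}/{project_id}/classify/{sample_id}"
    return Request(url, headers={"x-api-key": api_key}, method="GET")

req = build_classify_request(1, 100, "ei_xxx")
# To actually send it (requires a valid key and network access):
# import urllib.request; urllib.request.urlopen(req)
```

The WebAssembly deployment avoids this round trip entirely, since the model runs in the browser, so which route to pick mostly depends on whether you want your API key and model on the server or the model in the client.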

Hope that helps!