Live classification in WASM (browser)?

My attempts to asynchronously classify single images (rather than a webcam feed) don't produce the expected results, nor do they throw any errors in the console. I have to assume I'm not ingesting the images correctly.
I haven't found any examples other than the raw-features testing.

I have successfully created a dataset consisting of images.
I created an impulse: Transfer Learning (Images) with 2 output features.

I deployed via the WASM (browser) option.
I followed the documentation and tested by pasting in the raw processed features. The console output matches the expected results.

Project ID:

Context/Use case:
Documentation notes:

These deployment options let you turn your impulse into a fully optimized source code that can be further customized and integrated with your application.

What method can be used, or is there an example available, that would classify single image files (JPG) hosted via URL on the same server?
(WASM with JavaScript)

Do those images need to match the 96x96 dimensions used to train the model, or do they otherwise need preprocessing to maintain accuracy?
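In case it helps clarify what I'm attempting, here is a minimal sketch of my current approach. It assumes the impulse's input is 96x96 RGB, that raw image features are packed as one 0xRRGGBB integer per pixel (as shown in the Studio's raw features view), and that the WASM export's wrapper exposes an async `classify(features)` function like the raw-features demo does — the wrapper name and signature are my assumptions, not confirmed API:

```javascript
// Pack one RGB pixel into the 0xRRGGBB integer format that the raw image
// features appear to use (assumption based on the Studio's raw features view).
function packPixel(r, g, b) {
  return (r << 16) | (g << 8) | b;
}

// Convert a canvas ImageData (flat RGBA byte array) into a feature array,
// dropping the alpha channel.
function imageDataToFeatures(imageData) {
  const features = [];
  const d = imageData.data;
  for (let i = 0; i < d.length; i += 4) {
    features.push(packPixel(d[i], d[i + 1], d[i + 2]));
  }
  return features;
}

// Browser-only: fetch the JPG from a URL, resize it onto a canvas at the
// model's input dimensions, extract features, and run the classifier.
// `classify` is the (assumed) async function from the WASM export wrapper.
async function classifyImageUrl(url, classify, width = 96, height = 96) {
  const img = new Image();
  img.crossOrigin = 'anonymous'; // needed if the image is served cross-origin
  await new Promise((resolve, reject) => {
    img.onload = resolve;
    img.onerror = reject;
    img.src = url;
  });

  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0, width, height); // scales image to model input size

  const features = imageDataToFeatures(ctx.getImageData(0, 0, width, height));
  return classify(features);
}
```

If the pixel-packing or the resize step is wrong, that would explain getting silent, unexpected results rather than errors.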

NOTE: This forum appears to be the best option for gaining insight into using Edge Impulse. However, let me know if these kinds of questions are beyond the scope of support. I've made it this far and have hit a brick wall. Thank you.


Hi @Gigcity

WASM is not something we see much of on the forum. You may benefit from speaking to our sales / solutions teams about getting more advanced support with your work, although the documentation should cover most of it:

They can also recommend service companies for your work if this is for a production device or commercial usage.

If you share some details about your project and your timezone with me over PM, I can try to get you in contact with the type of support you need.

Or feel free to reach out yourself via:



I understand there are commercial options, but I've typically seen a clear 'contact sales' note when one wanders into such territory. And I hope you're getting lots of traction in that area.

I'm an individual hobbyist who just discovered Edge Impulse, and I've been impressed with its approach to making ML more accessible. It allows for what-if scenarios without building everything from scratch.

My use case is for my own learning and to potentially provide a public safety feature.

I don't know what I don't know, so I may ask questions that seem simple but aren't.
Given the multiple examples and the integration of live classification from streaming video, one might assume that feeding in a static image would not be an advanced request. Perhaps someone else has done this and I'll find their example.

At this point, I've spent more time than intended. I've learned a lot from this effort and will pursue other options.

Thank you.
