Run Your Model Locally with the Edge Impulse C++ Library

Did you know that you can deploy your impulse to almost any platform using our C++ Inferencing SDK?
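
As a rough illustration of what running an impulse locally looks like, here is a minimal sketch in the style of the SDK's static-buffer examples. It assumes you have downloaded the C++ library export of your impulse (which provides `ei_run_classifier.h`, `signal_t`, `run_classifier`, and the `EI_CLASSIFIER_*` macros); the zeroed feature buffer is a placeholder for real sensor data.

```cpp
// Sketch: run an exported impulse on a raw feature buffer.
// Requires the C++ library export of your Edge Impulse project on the include path.
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"
#include <cstdio>
#include <cstring>

// Placeholder input, sized to whatever your impulse expects.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = { 0 };

// Callback the SDK uses to pull slices of the signal on demand.
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_feature_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false /* debug */);
    if (err != EI_OK) {
        printf("run_classifier failed (%d)\n", err);
        return 1;
    }

    // Print the classifier's confidence for each label.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        printf("%s: %.5f\n", result.classification[i].label,
               result.classification[i].value);
    }
    return 0;
}
```

Because the SDK is plain C++11 with no dynamic allocation requirements, the same code compiles on desktop Linux/macOS/Windows as well as on embedded targets.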

This is a companion discussion topic for the original entry at https://www.edgeimpulse.com/blog/run-your-model-locally-with-the-edge-impulse-c-library