Local Image Inference

Hey there,

first of all, thank you for reading this :wink: Since I just discovered this place and haven’t helped anybody here so far, I feel a bit bad asking for help straight away …

I managed to compile the C++ application for local inference on Windows.

Now, the next step is to not provide the raw features in an array but to hand over an image.
The goal is to make the methods and classes inside the example kinda ‘public’ and then access them from C# .NET.

If anybody has suggestions or ideas, I would greatly appreciate them :wink: Stay healthy!

Hi @MartinM, welcome to the Edge Impulse community!

From my past experience using a C++ library within a C# application, you will need to build the library as a DLL; see Microsoft’s documentation here: https://docs.microsoft.com/en-us/cpp/build/dlls-in-visual-cpp?view=msvc-160

And then from your C# application you can invoke the DLL the following way: https://docs.microsoft.com/en-us/cpp/dotnet/calling-native-functions-from-managed-code?view=msvc-160

And this should allow you to access the Edge Impulse functions within your C# application.
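To sketch what the exported side of such a DLL could look like: the function name `ei_classify_buffer` and its dummy max-score body below are just placeholders for illustration, not the actual Edge Impulse SDK API (a real version would copy the buffer into a `signal_t` and call the SDK’s `run_classifier()`):

```cpp
#include <cstddef>

// On Windows, exported symbols need __declspec(dllexport); elsewhere plain
// extern "C" gives the C-style linkage that C#'s DllImport can bind to.
#ifdef _WIN32
  #define EI_EXPORT extern "C" __declspec(dllexport)
#else
  #define EI_EXPORT extern "C"
#endif

// Hypothetical wrapper: takes a flat feature array and its length and
// returns the index of the "winning" class. The body is a stub that just
// picks the largest feature value -- in a real build this is where you
// would fill a signal_t and call run_classifier() from the SDK.
EI_EXPORT int ei_classify_buffer(const float* features, size_t n) {
    if (features == nullptr || n == 0) return -1;  // nothing to classify
    size_t best = 0;
    for (size_t i = 1; i < n; ++i) {
        if (features[i] > features[best]) best = i;
    }
    return static_cast<int>(best);
}
```

On the C# side you would then bind this function with `[DllImport]` as described in the second link.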

Please let me know if you run into any issues (I have not tried this personally with the Edge Impulse SDK, so I am curious how it goes)!

Jenny

If you are not completely attached to C#, you can also utilize our WASM library, and even our Python SDK as well: https://github.com/edgeimpulse/linux-sdk-python & https://github.com/edgeimpulse/example-standalone-inferencing-wasm

Hi Jenny,

well, sorry for replying so late - I have been very busy over the last few days …

Anyways, thank you a lot for those really detailed answers - they already helped :wink:
That being said, since I’m a developer, these kinds of things aren’t really big issues for me. I’m fairly new to the whole AI stuff, and I’m really overwhelmed by how well your service actually works :wink:

In the last weeks I tinkered a lot with ML.NET, Darknet and so on,
up to the point where there were examples: just put an image in and see how objects are detected.
Just like it works on your web app - after an object detection model has been trained, you can run images against it. That’s the kind of thing I want the C++ library to do. But I don’t really get it :sweat_smile:

Regards,
Martin

Hi @MartinM,

Using the example standalone inferencing app, you need to fill the raw_features vector with your image pixels in RGB format. To test it quickly, just use the “Live classification” page in your Edge Impulse project and copy/paste the raw features into a txt file as shown here:

Your image needs to have the resolution defined in your impulse (usually 320x320 for object detection).
You can then call the standalone application and check the predictions: ./edge-impulse-standalone features.txt
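For reference, parsing that features.txt is straightforward. This is a sketch of the kind of parsing the standalone example does (the function name is mine); it assumes the comma-separated format you get from the “Raw features” field, where image pixels show up as hex values such as 0x383838:

```cpp
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

// Parse a "raw features" string (as copied from Live classification) into
// the float vector the classifier expects.
std::vector<float> parse_raw_features(const std::string& text) {
    std::vector<float> features;
    std::stringstream ss(text);
    std::string token;
    while (std::getline(ss, token, ',')) {
        // strtof understands both plain decimals ("0.5") and the
        // 0x-prefixed hex pixel values ("0x383838") of image impulses.
        features.push_back(std::strtof(token.c_str(), nullptr));
    }
    return features;
}
```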

Once this is working, you can work on retrieving raw pixels from images directly in your C++ application, there should be some existing libraries for it.
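For example, once a library such as stb_image or OpenCV has decoded the file into an interleaved RGB byte buffer, a quick nearest-neighbour resize to the impulse’s resolution could look like this (a sketch; the function and parameter names are mine, not part of the SDK):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Nearest-neighbour resize of an interleaved RGB888 buffer (3 bytes per
// pixel) to the width/height the impulse expects, e.g. 320x320.
std::vector<uint8_t> resize_rgb_nearest(const std::vector<uint8_t>& src,
                                        int src_w, int src_h,
                                        int dst_w, int dst_h) {
    std::vector<uint8_t> dst(static_cast<size_t>(dst_w) * dst_h * 3);
    for (int y = 0; y < dst_h; ++y) {
        int sy = y * src_h / dst_h;      // nearest source row
        for (int x = 0; x < dst_w; ++x) {
            int sx = x * src_w / dst_w;  // nearest source column
            const uint8_t* s = &src[(static_cast<size_t>(sy) * src_w + sx) * 3];
            uint8_t* d = &dst[(static_cast<size_t>(y) * dst_w + x) * 3];
            d[0] = s[0]; d[1] = s[1]; d[2] = s[2];  // copy R, G, B
        }
    }
    return dst;
}
```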

Quick note on object detection: our Linux SDK supports full hardware acceleration in case you can switch your OS from Windows to Linux.

Aurelien

Yeah my suggestion would be to:

  1. Build https://github.com/edgeimpulse/example-standalone-inferencing .
  2. Just invoke that binary from C#.

Hey guys :wink:

Thank you a lot for all of your replies :+1:
Okay, so I guess my last - and biggest - obstacle is effective yet fast feature extraction. Since I’m fairly new I actually even don’t know what these features actually are?
Are those just the extracted pixels or are they mixed with some algorithms - how do you guys do it :blush:

Hi @MartinM, it’s just an array of pixels; see https://docs.edgeimpulse.com/docs/running-your-impulse-locally-1#signal-layout-for-image-data
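In code, producing that array from RGB components is just bit-packing each pixel into a single 0xRRGGBB value, as the signal-layout doc describes (a sketch; the function name is mine):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Pack interleaved RGB888 bytes into one feature value per pixel, in the
// 0xRRGGBB layout from the docs (red in the high byte, blue in the low).
std::vector<float> rgb_to_features(const std::vector<uint8_t>& rgb) {
    std::vector<float> features;
    features.reserve(rgb.size() / 3);
    for (size_t i = 0; i + 2 < rgb.size(); i += 3) {
        uint32_t packed = (static_cast<uint32_t>(rgb[i])     << 16) |
                          (static_cast<uint32_t>(rgb[i + 1]) << 8)  |
                           static_cast<uint32_t>(rgb[i + 2]);
        features.push_back(static_cast<float>(packed));
    }
    return features;
}
```

A mid-gray pixel (R = G = B = 0x38) becomes the single value 0x383838, which is exactly what the “Raw features” field shows for image data.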

To quickly get good sample data go to Live classification and under ‘Raw features’ you’ll get the right input:

(screenshot of the ‘Raw features’ field in Live classification)