Raw features eval - different results in Node.js and Live Classification

Hello,

Why does my project return different results when I compare the raw features eval on my PC (Node.js) with the Live Classification option in the Edge Impulse console?

Hello @jcanais,

It seems that your raw features are somehow different. How did you get your features.txt values?

Regards,

Louis

Hello @louis,

The notepad window is just not scrolled to the top.
The features.txt file is simply the raw features, copied, pasted, and saved.

I have included another example.


Best regards,
João Canais

Oh, I did not notice that, thanks for the clarification.

I’ll try to reproduce your issue today. I haven’t used the WebAssembly deployment much.
I'll come back to you when I have more info.

Regards,

Louis

Indeed, I can reproduce your error.

@janjongboom, do you know if we need to apply some sort of processing (a resize?) to the images for WebAssembly inferencing?

@jcanais,

OK, I understand the issue now.

There is a big difference between the quantized version of the model and the float32 one.

When downloading the model, make sure to select the float32 version. On the Live Classification page, we use the float32 version for the inference.
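As a toy illustration (this is not Edge Impulse code, and the scale and zero point below are made up), int8 quantization maps every value to one of 256 discrete levels. In a real model this rounding is applied to every weight and activation, so the errors can compound and the final scores of the quantized model can end up noticeably different from the float32 ones:

```python
import numpy as np

# Hypothetical quantization parameters; real models calibrate these per tensor.
scale, zero_point = 1 / 256, -128

scores_f32 = np.array([0.9137, 0.0821, 0.0042], dtype=np.float32)

# Quantize to int8 and dequantize back: each value is rounded to the nearest
# representable level. Applied throughout a whole network, these rounding
# errors accumulate and shift the per-class scores.
q = np.clip(np.round(scores_f32 / scale + zero_point), -128, 127).astype(np.int8)
scores_int8 = (q.astype(np.float32) - zero_point) * scale

print(scores_f32)   # [0.9137 0.0821 0.0042]
print(scores_int8)  # slightly different values after rounding
```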

I just tried the WebAssembly float32 export and I am getting matching results:


Thanks @aurel for the hint.

@jcanais, I hope this solves your issue. Let me know if you have further questions.

Regards,

Louis

Hello @louis, yes it worked very well. Now the values are equal and accurate! :grin:

Can you give me some ideas for putting this model to work in real time?
On a PC, how can I generate the raw features in real time? Is there some kind of example?
Can I use Arduino or OpenMV hardware to run this type of model in real time?

Best regards,
João Canais

Hello @jcanais,

If you are using a Linux system, I invite you to check our Python SDK, which contains several examples.

You can download your model with the Edge Impulse Linux CLI (available via npm) and then integrate it with your custom code.
Here are some examples we prepared so you can start from there (classify images, video, or directly from the camera):

Alternatively, you can also run:
edge-impulse-linux-runner --clean and this will create a web interface that uses your camera to classify images. The source code is here: https://github.com/edgeimpulse/edge-impulse-linux-cli
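
For reference, here is a minimal sketch of what classifying a single image with the Linux Python SDK looks like. It assumes the edge_impulse_linux package and OpenCV are installed; the modelfile.eim path and test.jpg are placeholders for your own model and image:

```python
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = 'modelfile.eim'  # .eim model downloaded with the Edge Impulse Linux CLI

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()
    labels = model_info['model_parameters']['labels']

    # Load a test image and convert BGR (OpenCV default) to RGB for the runner.
    img = cv2.cvtColor(cv2.imread('test.jpg'), cv2.COLOR_BGR2RGB)

    # The runner extracts the raw features (resize/crop to the model's input size).
    features, cropped = runner.get_features_from_image(img)

    # Run the impulse and print the score for each class.
    res = runner.classify(features)
    for label in labels:
        print(label, res['result']['classification'][label])
```

The same runner can be fed frames from a camera in a loop for real-time classification, which is what the camera examples in the repository do.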

Regards,

Louis

Hi @louis, the Python SDK is great!

But can I run this model on more lightweight hardware like the Arduino BLE or the Portenta, or must I migrate to a Nano or a Raspberry Pi (2 GB or 4 GB of memory)?
Can I run the Python SDK on MS Windows?

Many thanks,
João Canais

Hello @jcanais,

With the model you selected (MobileNetV2 0.35), you probably won't be able to use this one on a microcontroller, especially given that the quantized version of your model does not perform well.
You can try to choose a different model version that is less greedy in resources and retrain your model to see if the accuracy and the performance suit you:

No, at the moment you won't be able to use the Linux Python SDK on Windows, as we rely on the Edge Impulse model (.eim) binary, which is compiled for a specific architecture.
You might be able to run your application in a Docker container, but I have no idea how to access the camera or the microphone from a Docker container… It will probably require some extra work.

And the Raspberry Pi 4 is fully supported by Edge Impulse if you have one and want to try it (I also tried with an RPi 3 Model B+ some time ago and it worked fine): https://docs.edgeimpulse.com/docs/raspberry-pi-4

Regards,