Hi guys, I have a question about the .eim model format. I have read this doc, but I still don't understand it. Can you please explain, in the simplest terms, what an .eim file is?
Is it the same as other model formats, such as .tflite or .lite? Can we see what is inside the file?
If I have trained a model on my machine and saved it in .h5 or .tflite format, can we convert it to .eim? How do we convert it?
I'm working on a sound classification project and running the .eim model with the Linux SDK, and it's amazing: it can classify sounds in real time with very low latency.
My questions are:
What affects the inference time? Is it the window size, the frame size, or the FFT size?
Do we record every detected signal?
The .eim files are Linux executables that package everything you need to run your model (.eim stands for Edge Impulse Model). They are compiled for your specific kernel architecture.
No, it's not exactly the same, because it is already compiled. You cannot see directly what is inside, but you can see the source code; see below:
Thanks! Many things can affect the inference speed: the window size and FFT size have an impact, but so does your NN architecture. We try to output the inference time (and RAM/ROM usage) in both the DSP block and the learning block so you can see the impact as you go. You can also find the on-device performance estimates on the deployment page.
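To build intuition for why the window and FFT sizes matter, here is a rough back-of-the-envelope sketch. The function name and parameters are illustrative (not part of the SDK): a real spectrogram/MFE block also computes filterbanks and normalization, but the scaling of DSP work with window size and FFT size is the same.

```python
import math

def mfe_workload(window_ms, frame_length_ms, frame_stride_ms, fft_size):
    """Rough count of FFT frames and operations for one window.

    Illustrative only: real DSP blocks do more work per frame, but the
    scaling with window size and FFT size is the same.
    """
    n_frames = int((window_ms - frame_length_ms) / frame_stride_ms) + 1
    # An N-point radix-2 FFT costs on the order of N*log2(N) operations.
    ops_per_frame = fft_size * math.log2(fft_size)
    return n_frames, int(n_frames * ops_per_frame)

# Doubling the window roughly doubles the number of frames to process:
print(mfe_workload(1000, 32, 16, 256))  # 1 s window -> 61 frames
print(mfe_workload(2000, 32, 16, 256))  # 2 s window -> 124 frames
```

On top of this DSP cost, the NN forward pass adds its own latency, which is why two projects with identical DSP settings can still have very different inference times.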
hi @louis thanks for response.
I have tried running with the Linux SDK and with edge-impulse-linux-runner from the terminal on a Raspberry Pi, and the inference times are very different, by a factor of two or more. Here is the snapshot:
from linux sdk:
from linux-runner:
Why is it so different, max 6 ms vs. 37 ms? Since I need to use the Linux SDK, can you help me?
@dexvils There should be no difference in speed; you can build your own EIM model from the downloaded SDK, as @louis mentioned in the link above.
Either you downloaded an EIM model that was compiled with the EON Compiler while your SDK build does not have EON Compiler support, or you have added some code that takes extra execution time in between.
I would be really interested in seeing how you are calculating your inference time with the SDK.
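A common cause of inflated numbers is timing more than the inference call itself (e.g. including audio capture or feature extraction). Below is a minimal sketch of how one might time only the classify call; `fake_classify` is a hypothetical stand-in, not part of the Edge Impulse Linux SDK, and would be replaced by the SDK's actual classify call in a real project.

```python
import statistics
import time

def fake_classify(features):
    """Hypothetical stand-in for the SDK's classify call.

    In a real project, replace this with the classify call from the
    Edge Impulse Linux SDK.
    """
    return {"result": {"classification": {"noise": 0.9}}}

def time_inference(classify, features, runs=50):
    """Time only the classify call, averaged over several runs.

    Wrapping perf_counter tightly around the call avoids counting
    audio capture or feature extraction in the reported latency.
    """
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        classify(features)
        samples.append((time.perf_counter() - t0) * 1000.0)  # ms
    return statistics.mean(samples), max(samples)

mean_ms, max_ms = time_inference(fake_classify, [0.0] * 16000)
print(f"mean {mean_ms:.3f} ms, max {max_ms:.3f} ms")
```

Averaging over multiple runs also smooths out scheduler jitter, which on a Raspberry Pi can easily distort a single-shot measurement.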
Thanks for the reply. Sorry, maybe I need to install Ubuntu? But I want to package our Edge Impulse model with other C++ demos. Do I need to use the C++ SDK and cross-compile? How could I run it as a simple binary?
Hi Omar, thanks for the reply.
Maybe I don't have a package manager right now; all my packages are built by cross-compiling. Could you please give me some advice on how to run the Edge Impulse demo easily on my embedded hardware? Here is my embedded hardware:
This is a Linux device, so it should be supported out of the box as long as the sensor connection is detected.
Either you use the CLI tooling, as described above, or you use the SDK. If you decide to go with the SDK, you need to compile the source code on the device, so you will need to install the compiler toolchain locally using your package manager. If that is impossible, you will need to cross-compile on the host machine and then transfer the binary to the Rockchip target.
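Roughly, cross-compiling looks like the commands below. This is only a sketch: the toolchain prefix, the make invocation, and the paths all depend on your board's BSP and on the Makefile that comes with your C++ export, so treat every name here as an example to adapt.

```shell
# On the host machine (assuming an aarch64 target and a GNU cross toolchain
# already installed; names and paths are examples, not exact values):
export CC=aarch64-linux-gnu-gcc
export CXX=aarch64-linux-gnu-g++

# Build the exported C++ project with the cross toolchain
# (the exact make target/flags depend on the export's Makefile):
make -j4

# Copy the resulting binary to the device and run it there:
scp build/app root@<target-ip>:/home/root/
# then, on the target:  ./app
```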
If you need more info, have a look at our docs for the Raspberry Pi; they should apply similarly to your device.