Multiple models in same device

Hi,

I would like to run multiple models on the same microcontroller (an ST part) and decide at runtime which model to execute.
The problem that comes to my mind is that when calling a function for a certain model, the program may get confused, since each model's library contains the same defines and function names. Is there a way to make this work? Maybe by renaming the functions and defines in every model? (This would be a pain, but could it work like that?)

I would really appreciate any advice.
Thanks,
Julen

Hello @Julen,

No, it's not possible at the moment, for the reasons you mentioned. We are still evaluating the best way to do that, as it would require plenty of changes in our SDK. I cannot provide an ETA at the moment.

See also:

Best,

Louis

Hello @louis,

Would it work if I create a file to run each model, where each file has an #include for just its corresponding model, and if I modify the #includes inside each SDK so that they only reference that model's folder?


In the project properties of STM32CubeIDE, by including the following path:
Middlewares/Third_Party/ (without the EdgeImpulse_Model1/edgeimpulse/ part)

And in the run_model1.h file, including the following:
#include "EdgeImpulse_Model1/edgeimpulse/edge-impulse-sdk/classifier/ei_run_classifier.h"

And I would change the SDK libraries' #includes from this:
#include "edge-impulse-sdk/classifier/ei_model_types.h"
to this:
#include "EdgeImpulse_Model1/edgeimpulse/edge-impulse-sdk/classifier/ei_model_types.h"

Would that make Model1's SDK functions use only their corresponding library?
Also, are #defines global, or do they require a #include of the file where they are defined, thus allowing multiple defines with the same name?

I am not sure if I have explained myself properly…
Thanks,
Julen

@Julen, as @louis mentioned, this is not supported out of the box. Your solution is a hack, however, and you might run into various errors, so you need to be sure that the functions being called belong to their respective models.
I would recommend isolating these two models into different namespaces if you can and then continue with your proposal.
Omar

Hi @OmarShrit,
Thanks for your response.
I am not familiar with namespaces. Could you explain how I can isolate an entire model into a single namespace?
Thank you,
Julen

You will need to define a namespace for each function that you need to use (the exposed functions).
More details can be found here:

Hi @Julen,

Did you make it work with the namespace solution?

Thanks,

Thang.Do

Hi @thangdhz,

Not completely. The problem was the #defines that contain information for the preprocessing of the samples. The only way I found to solve that was to rename, throughout the entire library, every define declared in model-parameters or tflite-model, so I gave up there.

In my case, as I am using an ST microcontroller, STM32Cube.AI allows combining models. So if your neural network takes raw data as input (without preprocessing), or you are able to do the preprocessing on your own, you may be able to export the trained model from Edge Impulse as a .h5 file and import it directly. If you have another type of microcontroller, you could try using TensorFlow to import the .h5.

I hope you make it work. If you achieve it, please let me know.
Good luck!!
Julen


Hello @Julen,

We have made some changes in our inferencing C++ SDK and put this tutorial together: Multi-impulse - Edge Impulse Documentation

The easiest approach is to use the deployment block to make the changes automatically and combine both C++ libraries.

Best,

Louis

Hello @louis,
Awesome, thank you for letting me know,
Julen