BYOM - "Check model behavior" keeps loading forever

Question/Issue:
Hi,
I have imported my own model and tried to test it with “Check model behavior”. I tried a 1-second file and a 25-second WAV file, but it keeps saying “Processing sample…”
Is there something I am doing wrong, or is it a bug on the EI side?

I have tried it on the Brave and Edge browsers with the same results.

Related question: Is “Live classification” the same as “Check model behavior”?

Project ID:
186377

Context/Use case:
The project under this ID is only for testing the BYOM feature.

Hello @edsteve,

Could you provide your model and a test file so I can reproduce on my end please?

Best,

Louis

Hi Louis,

I used the model in the link below, but have also tried different ones. This model was downloaded from Edge Impulse project 214986 (Dashboard → Classifier model (TensorFlow SavedModel)),
and then uploaded to a new project (ID: 186377).

The model and two test files:

Hello @edsteve,

Thanks, I’ll try to reproduce later today, but if you just uploaded the saved model, you’re missing the DSP part, thus your model is expecting the extracted features while you’re passing raw audio as input.
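To illustrate what “extracted features vs. raw audio” means, here is a generic sketch of an audio DSP step (this is NOT Edge Impulse’s actual MFE/MFCC block — the real parameters live in the original impulse’s processing block — just a toy spectrogram to show that the model’s input is a feature matrix, not raw samples):

```python
import numpy as np

def extract_features(audio, frame_len=256, hop=128):
    """Toy DSP step: turn raw audio into log-magnitude spectrogram frames.
    Edge Impulse's real MFE/MFCC blocks are more involved, but the idea is
    the same: the model sees these features, never the raw waveform."""
    frames = []
    for start in range(0, len(audio) - frame_len + 1, hop):
        window = audio[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(window))
        frames.append(np.log(spectrum + 1e-6))
    return np.stack(frames)

# 1 second of fake 16 kHz audio -> a (frames, bins) feature matrix
audio = np.random.randn(16000).astype(np.float32)
features = extract_features(audio)
print(features.shape)  # (124, 129)
```

A model trained on such features will misbehave (or hang a classifier UI) if it is fed the raw 16,000-sample waveform instead.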

Best,

Louis

Thanks for checking.

The DSP part is still unclear to me. I was hoping the DSP part was inside the saved_model.zip.
How can I add the DSP part to Edge Impulse?

Hi @louis,

Somehow I can’t reproduce the loading issue anymore. Not sure why that is, but I guess that’s good :slight_smile:
I still need to come back to the second question:

How can I export a model from an Edge Impulse project in a way that allows it to be imported into a new Edge Impulse project, including the sound processing part?

My developer asked me to send him an example model which can be imported to EI using BYOM, and I haven’t found one yet.

It would be great to get some info about it. I am a bit stuck with that.
Thanks

Hello @edsteve,

The easiest way is to version your project and clone it in a new project to retrieve it.
BYOM only supports bringing a model, not the preprocessing part.

If you want to see how you can use preprocessing with BYOM see this page (the “preprocessing your data…” section): Bring your own model (BYOM) | Edge Impulse Documentation

If you want examples of bringing your model using BYOM, you can have a look at our Python SDK examples; the profile and deploy functions upload models to your project (using BYOM).
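For reference, a minimal sketch of what those SDK calls look like (function names follow the `edgeimpulse` Python package docs; the API key, model path, labels, and target device below are placeholders, so verify the exact signatures against the SDK documentation):

```python
def upload_model_to_edge_impulse(api_key, model_path, labels):
    """Sketch of pushing a pretrained model into an Edge Impulse project
    via BYOM using the Python SDK. api_key / model_path / labels are
    placeholders for your own values."""
    import edgeimpulse as ei  # pip install edgeimpulse

    ei.API_KEY = api_key  # project API key, found on the project dashboard

    # Estimate on-device performance (RAM, ROM, latency) for the model
    profile = ei.model.profile(model=model_path, device="cortex-m4f-80mhz")
    print(profile.summary())

    # Upload the model to the project (the BYOM step) and request a
    # deployable artifact, e.g. a C++ library as a zip
    ei.model.deploy(
        model=model_path,
        model_output_type=ei.model.output_type.Classification(labels=labels),
        deploy_target="zip",
    )
```

Note that this uploads only the model; any feature extraction still has to happen before data reaches it.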

Best,

Louis

Hi @louis,

Thanks for your reply.

BYOM only supports bringing a model, not the preprocessing part.

That means I can bring my own model (without sound processing), but I won’t be able to test it on the Edge Impulse platform, because I can’t bring the sound processing part? That’s not really clear during the BYOM process. For me at least :slight_smile:

If you want to see how you can use preprocessing with BYOM see this page (the “preprocessing your data…” section): Bring your own model (BYOM) | Edge Impulse Documentation

It doesn’t look like an easy task. I have already looked at that a few times, but I can’t make sense of it. It brings me more questions than answers.

In Short:
It seems there is no easy way right now to BYOM for sound classification including the preprocessing part?

Hello @edsteve,

Let me double check the correct workflow with our core engineering team.
I have never used a BYOM model with audio projects (mostly because the preprocessing part in Edge Impulse is very efficient and makes my life easier :stuck_out_tongue:).

I’ll let you know when I know more.

Best,

Louis

Thanks @louis

I will wait for your response once you have more info about it.

…mostly because the preprocessing part in Edge Impulse is very efficient and makes my life easier

I agree. And I love how easy it is to make a model without coding. Our AI developer, on the other hand, prefers to make his own model without Edge Impulse. So he wants to see an example model which can be imported into Edge Impulse. That’s why I am trying to find/make a model that can be imported.
After that, he can make his model, which we will bring to Edge Impulse so we can deploy it to our edge device.

Hello @edsteve,

In Short:
It seems there is no easy way right now to BYOM for sound classification including the preprocessing part?

I got the confirmation that this workflow is currently not supported.
The original design was that a pretrained model replaces our impulse.

My initial thought was to bring the extracted features to your project instead of the raw data, but we don’t support that out of the box unless you define a sampling rate (even if you don’t use it), which does not make sense for already extracted features.

Your question raised some interest internally; we’ll probably work on something using the Python SDK first, but I cannot guarantee it will be prioritized, nor give any ETA.

Another option is to bring your model architecture using custom learning blocks. That way you can train your custom architecture in Edge Impulse and leverage our feature preprocessing, or bring your own using custom processing blocks.
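As a rough idea of what a custom learning block looks like: Edge Impulse runs your training script in a container and hands it the DSP block’s output as .npy files plus a few command-line flags. The sketch below shows the general shape of such a script; the exact flag and file names are assumptions to be checked against the custom learning blocks documentation:

```python
import argparse

def build_argparser():
    """Arguments Edge Impulse passes to a custom learning block's train
    script (sketch; verify flags against the custom learning blocks docs)."""
    parser = argparse.ArgumentParser(description="Custom learning block")
    parser.add_argument("--data-directory", type=str, required=True,
                        help="contains e.g. X_train_features.npy / y_train.npy")
    parser.add_argument("--out-directory", type=str, required=True,
                        help="where the trained model must be written")
    parser.add_argument("--epochs", type=int, default=30)
    parser.add_argument("--learning-rate", type=float, default=0.001)
    return parser

def train(args):
    # Hypothetical training body: load the preprocessed features from
    # args.data_directory, fit your own architecture (e.g. Keras), then
    # save a SavedModel / model.tflite into args.out_directory so Edge
    # Impulse can pick it up and deploy it.
    ...
```

The key point for this thread: the features arriving in `--data-directory` are already preprocessed by the impulse’s DSP block, so your custom architecture gets Edge Impulse’s audio preprocessing for free.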

Best,

Louis


Thanks for your reply @louis

It would be great if you could also explain HOW it is supported and how to achieve it.

Sorry, but I can’t make much sense of your answer. It would be great if you could explain more. Unless I define a sampling frequency? Where? How? Can you explain more about this solution?

That seems to be a very complex solution. So if I understand your answer correctly, it is not easily possible to BYOM for sound classification to EI including preprocessing.

Hi @edsteve,

I got the confirmation that this workflow is currently supported.

Sorry, I meant NOT supported; let me edit my original message in case anyone else has the same question.

It is not easily possible to BYOM for sound classification to EI including preprocessing.

Correct.

Supporting this workflow is currently under investigation, but I can’t confirm that it will get prioritized, nor give any ETA on this.

Best,

Louis