Arduino build not available

I have read here (Arduino Deployment Not Visible) that downloading an Arduino model won’t be an option for models that don’t support it, but I’m unclear how to make sure that my model will. I chose the Nordic nRF52840 when building the classifier, so I was expecting that would be what I would get (I’m using a XIAO BLE Sense, which you guys should definitely officially support!). The EON optimizer then seemed to allow choosing even smaller models. But in the end, I’m only given the option of a C++ model, which, I suppose, could maybe be added through the Arduino IDE? BTW, that thread suggests a FOMO model, but I don’t recall seeing that option anywhere. I probably missed it.

I’m finding a lot of the user experience in this app confusing, but I thought I would push through and see if I could get to the end. I did, but not with the expected result.

Any tips there? I’ll make a separate post with general observations on the UI: areas that could probably be improved with just a little bit of tooltippery, I suspect.

The site seems very promising; thanks for building it!

Hi @braddo,

Because the XIAO BLE Sense is not officially supported, you will not see an option to deploy directly to that board.

Your best option is to search for and select “Arduino library” on the deployment page.

Here is a guide on how to import and use the Arduino library: Arduino library - Edge Impulse Documentation
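For reference, the usage pattern in that guide boils down to wrapping your feature buffer in a callback and handing it to `run_classifier()`. Below is a minimal, self-contained sketch of that calling pattern; the `signal_t` struct and callback shape mirror the Edge Impulse SDK, but `mock_run_classifier` and the other names here are simplified stand-ins so the example compiles without the generated library:

```cpp
// Illustrative mock of the Edge Impulse Arduino/C++ SDK calling pattern.
// The real generated library provides signal_t, run_classifier(), and the
// EI_CLASSIFIER_* constants; the simplified stand-ins below exist only so
// this sketch compiles on its own.
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Stand-in for the SDK's signal_t: a buffer exposed through a callback.
struct signal_t {
    size_t total_length;
    int (*get_data)(size_t offset, size_t length, float *out_ptr);
};

static std::vector<float> g_features;  // raw feature window (mock storage)

// Callback the classifier uses to pull feature data in chunks.
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    std::memcpy(out_ptr, g_features.data() + offset, length * sizeof(float));
    return 0;
}

// Stand-in for run_classifier(): here it just averages the window instead
// of running DSP + a neural network.
static float mock_run_classifier(signal_t *signal) {
    std::vector<float> buf(signal->total_length);
    signal->get_data(0, signal->total_length, buf.data());
    float sum = 0.0f;
    for (float v : buf) sum += v;
    return sum / static_cast<float>(signal->total_length);
}

// Wrap a feature window in a signal_t and "classify" it.
float classify_window(const std::vector<float> &features) {
    g_features = features;
    signal_t signal;
    signal.total_length = g_features.size();
    signal.get_data = &get_feature_data;
    return mock_run_classifier(&signal);
}
```

In a real sketch you would include the `.h` file from your exported library, fill the buffer with sensor readings, and check the returned `result.classification` scores instead.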

The EON Compiler optimizes models to be smaller, but it is only available to enterprise customers.

Hey Shawn, thanks for checking in; I’ve seen a bunch of your videos, great work. The deployment page did not have any option other than C++, unless I’m not looking in the right place. I did actually use the EON tuner, although the outcomes weren’t wonderful. Maybe that was my problem: once I chose an EON-tuned model, was the path to an Arduino deployment removed?

I had read that the XIAO is not supported, but I have seen some successes. I would like to suggest that it’s a great target to support, along with similar Adafruit boards like the QT Py. The best thing about the XIAO is that it has a built-in LiPo charger in a very small form factor compared to other similarly spec’d boards.

Hi @braddo,

Thanks! If you see the C++ option, then the Arduino library option should be there as well (since it just wraps the C++ library). If you share your project ID number, I can take a look to see what’s going on.

Thanks for the offer! The project is 352524

It might be in a weird state because I went back and disabled quite a few samples (I have a huge number of noise samples, 3 sec each) and probably just didn’t need those. Although I figured out how to disable a bunch of them, I couldn’t tell whether the downstream feature/model-building steps would all need to be rerun, and I wasn’t sure how to confirm they were only running on the enabled samples, since it seemed like that designation was being ignored.

Then there are the processing and classification blocks: it’s not clear whether they are computed as alternatives or whether they are “additive”.

Also, when the compute is done, I’m only seeing a chart that looks like one principal component. I was expecting at least a 3D chart and a scree plot, plus some tools to do something about it. The first component did not differentiate the classes at all, but at least my signal classes should be very different from the noise.

I know it’s hard to build an interface that is both powerful and easy to use, but somehow I feel like EI is not clear enough about what is actually happening, and when, so the user can make informed choices.

Hi @braddo,

I cloned your project and got rid of the “Transfer Learning” block (as you can only have one ML model per project). From there, I trained your classifier. I can now go to the deployment page and search for “Arduino library,” so it looks like everything is working as intended. If that’s not what happens for you, could you tell us what error messages you see (or post screenshots)?

To answer your question, the “processing” block(s) perform some sort of data transformation (such as doing an FFT or creating a spectrogram). The output of that block (i.e. the FFT or spectrogram) is then fed into the ML model (e.g. neural network) as the input for that model (that way, you’re not feeding raw data directly into the model). Hope that helps!
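If it helps to see that split concretely, here is a toy sketch of a processing block feeding a learning block. This is not Edge Impulse code: the RMS/zero-crossing features and the fixed threshold are made-up stand-ins for a real DSP block and neural network, chosen only to show that the model sees features, never raw samples:

```cpp
// Toy illustration of the processing-block -> learning-block pipeline.
// A real Edge Impulse processing block would compute an FFT or spectrogram,
// and the learning block would be a trained neural network.
#include <cassert>
#include <cmath>
#include <vector>

// "Processing block": turn a raw sample window into a small feature vector
// (here: RMS energy and a zero-crossing count).
std::vector<float> extract_features(const std::vector<float> &raw) {
    float sum_sq = 0.0f;
    int zero_crossings = 0;
    for (size_t i = 0; i < raw.size(); ++i) {
        sum_sq += raw[i] * raw[i];
        if (i > 0 && (raw[i - 1] < 0.0f) != (raw[i] < 0.0f)) ++zero_crossings;
    }
    float rms = std::sqrt(sum_sq / raw.size());
    return {rms, static_cast<float>(zero_crossings)};
}

// "Learning block": classify from the features, never from raw samples.
// A real model would be trained; this is a hard-coded threshold.
int classify(const std::vector<float> &features) {
    return features[0] > 0.5f ? 1 : 0;  // 1 = "signal", 0 = "noise"
}
```

So the pipeline is always `raw window -> extract_features() -> classify()`; swapping the processing block changes what the model is trained on, not just how the data looks.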

Hi, thanks. Yes, I added a transfer learning classifier because I wondered if the system would compare the two approaches to classification with ROC curves or something. Similarly, one can choose more than one processing block; it’s not clear whether that is allowed and the system will create a longer feature list, or whether it will compare the effectiveness of the two processing approaches. Is it also true that only one processing block is allowed? Why does the system allow choosing more than one if only one is allowed? It’s strange that the UI suggests to “Add a Learning Block” if only one is allowed. I must be misunderstanding you.

How can one “zero out” what has been done and start again? Most ML tools show a workflow and give the user the ability to invalidate a portion, which then invalidates all following steps on a given branch. The EI UX seems to be linear, which is fine, but how can I tell which calculations are in effect and available versus which are not?

Hi @braddo,

You can select more than one “processing” block, but you can only have one “learning” block (for now).

Edit: yes, the tool allows you to select multiple learning blocks, but you are limited to one when you deploy.

I can see now that there is a “search deployment options” box at the top, which has an Arduino choice. From the screenshots, I was expecting to see a list of deployment options, not only the default on the page. Thanks for the help. It seems obvious in retrospect, but even though the box is at the top, it wasn’t clear that someone would need to search before seeing more than one choice. Awesome, ready to try it out.
