Non-vision model with OpenMV


I have a model which is not machine vision. Would it be possible to deploy it as an OpenMV library?
I do not see the option at the moment.
I’m asking because it seems the Nano 33 BLE Sense (and also the RP2040) are compatible with the OpenMV IDE.

Hi @Keja, not through the MicroPython interface. All models depend on DSP and normalization code that needs to be loaded [1]. For image models this is included in the OpenMV base firmware, but not for other model types (e.g. audio). Not sure if there’s a way to run Arduino sketches on the OpenMV, but that could be a way around it.

[1] This code ships as C++ code, and from MicroPython you cannot load native code.

Hi @janjongboom,
That makes sense, so it would be up to OpenMV to modify their firmware to support other types of models.
Thanks for the answer.


@Keja I know the OpenMV people were thinking about audio data as well - maybe a post on their forum will help that process :slight_smile:

@janjongboom, the OpenMV folks told me they will update the TensorFlow library later this year, so that’s good.

For my particular use case, thanks to the very good course “Computer Vision with Embedded Machine Learning” by Shawn Hymel, I found a workaround. I downloaded the TensorFlow Lite model from the dashboard and, as my model is a simple FCNN, passed my features as a 1D image.
I was then able to run inference on my Portenta H7 using the OpenMV MicroPython library, so I’m classifying something which is not an image using MicroPython.
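A rough sketch of that trick (the function name, value range, and shapes here are illustrative assumptions, not the actual project code): the float feature vector is quantized to 0–255 so it can be laid out as a 1-pixel-high grayscale image, which the OpenMV firmware can then hand to its TFLite interpreter like any other image input.

```python
# Sketch of the "features as a 1D image" workaround (illustrative only).
# A float feature vector is clamped and scaled to 0-255, producing a row
# of grayscale pixel values; on the Portenta, those bytes can be wrapped
# as a 1 x N image before being passed to the on-device classifier.

def features_to_pixels(features, lo, hi):
    """Quantize floats in [lo, hi] to a row of 0-255 grayscale pixels."""
    span = hi - lo
    pixels = bytearray()
    for f in features:
        f = min(max(f, lo), hi)  # clamp out-of-range values
        pixels.append(int(round((f - lo) / span * 255)))
    return pixels  # interpreted downstream as a 1 x len(features) image

row = features_to_pixels([0.0, 0.5, 1.0], lo=0.0, hi=1.0)
print(list(row))  # -> [0, 128, 255]
```

The min/max bounds should match whatever normalization the model was trained with; with mismatched bounds, the quantized pixels will not line up with the features the network expects.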

It doesn’t work on the Arduino RP2040 yet; it seems the TensorFlow library is not completely implemented on the RP2040. I’m checking with the OpenMV folks.

I was wondering if anyone had managed to run TensorFlow on the RP2040 or Nano 33 BLE via an OpenMV MicroPython build?


@Keja Not with the OpenMV MicroPython builds, but the Arduino library export works for audio on the RP2040 (as per @jenny), and it also lets you hook in other sensors (in C++).


The Arduino Nano 33 BLE Sense example sketches for audio & motion work on the Arduino Nano RP2040 Connect via the Arduino deployment export option in the Studio.

For audio models, you will need to train a smaller model with a reduced MFCC and NN configuration.

For motion models, all you need to do to flash onto the RP2040 is swap the IMU driver header file at the top of the Edge Impulse example sketch to #include <Arduino_LSM6DSOX.h> (and install the corresponding library from the Arduino IDE)

Blog post about this coming soon :slight_smile:
