Image Recognition on Arduino Nano BLE 33 Sense

Hi there,

I just want to know if it is possible to deploy image recognition to the Arduino Nano BLE 33 Sense.
I am able to compile "static_buffer" in the Arduino IDE and upload it to the board. Yet, when I want to have a look at the serial port, the IDE gets stuck at first. After more than 10 minutes the serial monitor finally opens, but nothing is shown, and there is a message saying that something went wrong with the serial port settings (as you can see below).

My image recognition model is still within the capacity of the board, so I am not sure about the reason behind it (maybe related to RAM?).

The image size in the model above is 96*96 and I am using MobileNetV1 0.2 as the NN model.

I have also tried a smaller model (see below), with an image size of 48*48 and MobileNetV1 0.1 (no final dense layer, 0.1 dropout) as the NN model. Yet, after uploading to the Arduino, the board cannot be found on any COM port until I press the reset button to trigger bootloader mode.


I am using Arduino Mbed OS Nano Boards 2.1.0 with platform.local.txt.
Looking forward to any suggestions!

Cheers,
Morris

Hello @Morrist,

This is because you are passing the raw data from the image and not the processed features (after the 96x96 resize).

Also, note that the Arduino Nano BLE has no camera.
If you find a camera that can be used with the Arduino Nano BLE, you would need to implement a resize to get a 96x96 image in the RGB565 format and pass that value to your run_inference function.
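Just to sketch the idea (the frame size, buffer names, and the assumption of an RGB565 camera frame are mine for illustration, not something taken from your project), a naive nearest-neighbour resize to 96x96 could look like this:

// Sketch only: nearest-neighbour resize of an RGB565 camera frame to 96x96.
// SRC_W/SRC_H and the buffer layout are assumptions for illustration.
#include <stdint.h>

#define SRC_W 160
#define SRC_H 120
#define DST_W 96
#define DST_H 96

void resize_rgb565(const uint16_t *src, uint16_t *dst) {
  for (int y = 0; y < DST_H; y++) {
    int sy = y * SRC_H / DST_H;     // nearest source row
    for (int x = 0; x < DST_W; x++) {
      int sx = x * SRC_W / DST_W;   // nearest source column
      dst[y * DST_W + x] = src[sy * SRC_W + sx];
    }
  }
}

The 96x96 output buffer is then what you would feed to the classifier.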

Regards,

Louis

FYI: Ov7670 Cam with Nano33BLE(Sense) <-- here people are using a camera with the Nano 33 BLE Sense.


Hi @louis,
Thanks for your reminder!
I am now using a smaller input of 48*48 (2304), but when I copy the processed features to the static buffer and run the program, it tells me that the number of features is 6912, which does not match what the model expects (2304). I have chosen the RGB format, but the expected count seems to correspond to grayscale, which is 1/3 of RGB. Am I missing something? Below are my settings.
Cheers,
Morris



@Morrist,

Indeed this is weird; with RGB you are supposed to have 48 * 48 * 3 = 6912.
Could you please try regenerating your features to see if that fixes your issue?

Regards,

@louis,
I have generated the whole model and the features again, but it is still telling me the model only accepts 2304 features. 6912 is accepted in Live classification, but when I build the model and put the processed features into the static buffer, it shows that 2304 is needed.

However, if I change the color depth from RGB to grayscale and build the model again, the number of processed features matches the expected one (2304) and the model can run on the Arduino. But the accuracy is different from live classification even though I am using the same set of features.
Cheers,
Morris

@dansitu, could this be because of an unknown bug in the MobileNetV1 0.2 model?

@Morrist, is your project ID 32950 or 33201?
On one I see 96x96 RGB and on the other 48x48 greyscale.
Also, can you make sure you deleted the old libraries in your Arduino/libraries folder before importing your new version? Otherwise it can confuse the Arduino IDE and it might not use the right one.

@louis,
You can try ID 33469; this is the latest one I built today. It uses a 96*96 image, and it shows that the expected count is now 9216 (see below). The previous one was 2304, so the message should be coming from ID 33469. I am still using MobileNetV1 0.2 as the NN model.


Cheers,
Morris

OK, I just uploaded your Arduino firmware onto my Arduino Nano 33 BLE and I am having the same issue.

I’m asking the team internally to see if they have any ideas.

Regards,

Louis

OK, I think I understand now! @janjongboom, please correct me if I’m wrong.

Sorry, I think I guided you in the wrong direction at the beginning of this thread.

The static const float features[] = {} buffer actually expects the raw data, but in the image classification case that means a 96x96 image (or the size you defined in the studio) in RGB565, so 9216 values. However, we do not show the “raw data” after the resize in the studio.

The generated features we were trying to copy are the values that get passed to the NN, where each RGB565 pixel has been split into three normalised values.

Unfortunately, this means that you have to implement the resize function to get a 96x96-pixel image from your original image.
I will try to provide an example in the following days that works on the Arduino Nano 33 BLE.
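In the meantime, the per-pixel packing itself is straightforward; something like this rough sketch (assuming 8-bit R/G/B values for each pixel; the names are just for illustration):

// Sketch: pack an 8-bit-per-channel pixel into a single RGB565 value,
// which is what the features[] buffer expects (one value per pixel,
// so 96 * 96 = 9216 values in total).
#include <stdint.h>

uint16_t rgb_to_rgb565(uint8_t r, uint8_t g, uint8_t b) {
  return ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3);
}

// filling the buffer: features[y * 96 + x] = (float)rgb_to_rgb565(r, g, b);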

Regards,

Louis

And @Morrist,

You can also have a look at a code sample @janjongboom wrote:

Regards,

Louis

Hi @louis,
I find that the size of the raw features is actually the same as the size I defined in the studio (96*96), which is why I assumed the image did not need to be resized. I have transformed the raw features into RGB565 and put them into static const float features[] directly. Yet, the result from the Arduino is not the same as Live classification. I have looked at the code sample, but if I want to test my model using the raw features provided by the studio at this early stage, do I need to convert RGB565 back to RGB as mentioned in the code sample? Or am I still missing something that keeps my model from working?
Cheers,
Morris

Hi @louis,

I have done 3 tests and it seems that only MobileNetV2 can run on my Arduino Nano BLE 33 Sense. I just took the raw features generated by the studio and put them into static const float features[] directly (without any resize or RGB format transformation). Below are the details of the 3 tests.

1) ID 32950, MobileNetV1 0.2 (no final dense layer, 0.1 dropout), image size 48*48: the Arduino gets stuck and the serial port cannot be opened.
2) ID 33201, MobileNetV2 0.05 (final layer: 8 neurons, 0.1 dropout), image size 48*48: the model can run on the Arduino, but the result is slightly different from Live classification.
3) ID 33469, MobileNetV1 0.2 (no final dense layer, 0.1 dropout), image size 96*96: the Arduino gets stuck and the serial port cannot be opened.

Cheers,
Morris

Hi @Morrist, there’s an issue with an unsupported operation in the MobileNetV1 neural network that we generate, which leads to errors 1 & 3. We’ll push a hotfix later today (we had a fix already, but it was sitting in the review queue).

Re: 2) Live classification uses the unoptimized model to produce its output, whereas you run the quantized one on your Arduino. On small images and small models the quantization error is worse than on larger models, hence the slight difference.
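To give a feel for where that error comes from, here is a tiny illustration of an int8 quantize/dequantize round trip (the scale and zero point are made-up numbers, not the ones from your model):

// Illustration only: int8 quantization snaps a float to an integer grid,
// so small differences versus the float model are expected.
// scale / zero_point here are made up, not taken from the actual model.
#include <math.h>

float quantize_roundtrip(float x, float scale, int zero_point) {
  int q = (int)roundf(x / scale) + zero_point;  // quantize
  if (q > 127) q = 127;                         // clamp to the int8 range
  if (q < -128) q = -128;
  return (q - zero_point) * scale;              // dequantize
}

// e.g. quantize_roundtrip(0.30f, 0.0039f, 0) gives ~0.3003 instead of 0.30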


Hi @janjongboom,

Thanks for your help and the detailed explanation of 2).
I am really excited about how the studio makes running ML applications on MCUs easier!

Cheers,
Morris

@Morrist,

I just tried your MobileNetV1 0.2 96x96 project with Jan’s fix and it works. I will let you know when the fix has been deployed.

Regards,

Hi @Morrist this has now been deployed!


Hi @louis @janjongboom ,

The model works great on Arduino!
Thank you very much, you guys made my day!

Cheers,
Morris
