OV7670 Camera with Nano 33 BLE (Sense)

Does anyone in the community have an OV7670 Camera module for use with the Nano 33 BLE (Sense)?

Having a few minor issues.

  1. The new Arduino MBED board package 1.3.0 does not seem to work with the OV7670 camera; the old version 1.1.6 works fine. Anyone got any ideas why this might be?

  2. My Static Buffer impulse loads but I lose the use of the Serial Port. I am thinking of just making my model smaller and trying again.

  3. Out of curiosity, which version of the MBED board package is the Impulse generated against? I think I once read 1.1.4. Is there a plan to upgrade the Impulse to MBED version 1.3.0?

Hey @Rocksetta

Re: 1, what error do you see? I’ve just compiled a continuous gestures example for the Nano 33 BLE Sense and that works against 1.3.0. In general we try to support the latest board package, but we trail a bit because releases are unpredictable. We ran our integration tests against 1.1.6, but we’ll update to 1.3.x.

Double-check that you have the right Mbed package. There are two, and they both have version 1.3, which is fantastically confusing.

I will try the OV7670 camera again with MBED 1.3.0.

Yes, the two Arduino board packages are really confusing, but after bugging Arduino on the Facebook page https://www.facebook.com/groups/portenta/ they finally added the (Deprecated) label.

So my Impulse, which works fine on the Portenta Vision Shield, does not compile for the Nano 33 BLE Sense when exported and run through the static_buffer example (with raw data from the impulse), on either MBED 1.3.0 or 1.1.6. My OV7670 camera does work using MBED 1.1.6, but I can’t get proof of it working on MBED 1.3.0 (it compiles and might work, but the processed view does not prove that it works).
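For reference, the static_buffer example exported from Edge Impulse looks roughly like this. The include name is a placeholder for the exported library header, and this is a trimmed sketch of the standard example rather than my exact failing sketch:

```cpp
#include <your_project_inferencing.h>  // placeholder for the exported EI library header

// Raw feature values pasted from the impulse's raw data tab.
static const float features[] = {
    0  // paste raw feature values here
};

// Callback that hands chunks of the features array to the classifier.
int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

void setup() {
    Serial.begin(115200);
}

void loop() {
    if (sizeof(features) / sizeof(float) != EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE) {
        ei_printf("The size of the features array does not match the impulse.\n");
        delay(1000);
        return;
    }

    signal_t features_signal;
    features_signal.total_length = sizeof(features) / sizeof(features[0]);
    features_signal.get_data = &raw_feature_get_data;

    ei_impulse_result_t result = { 0 };
    if (run_classifier(&features_signal, &result, false) != EI_IMPULSE_OK) {
        return;
    }

    // Print one score per label over the serial port.
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("    %s: %.5f\n", result.classification[ix].label,
                  result.classification[ix].value);
    }
    delay(1000);
}
```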

I am thinking about trying the OV7670 camera on the Portenta, as I am a bit stuck with the Nano 33 BLE Sense.

@janjongboom any suggestions for how to change camera data from 160 x 120 RGB565 to 96 x 96 RGB565? If I were using TensorFlow.js, there is a method called slice that works great.

https://js.tensorflow.org/api/latest/#slice
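For the record, a tf.slice from 160 x 120 down to 96 x 96 is essentially a crop, so the equivalent on the Arduino side is just copying the wanted rows out of the RGB565 frame buffer. A minimal sketch of a centre crop (the buffer names are made up for illustration):

```cpp
#include <stdint.h>
#include <stddef.h>
#include <string.h>

// Centre-crop a 160x120 RGB565 frame down to 96x96 RGB565.
// Each RGB565 pixel is a single uint16_t, so whole rows can be memcpy'd.
// 'src' and 'dst' are hypothetical buffers; dst must hold 96 * 96 uint16_t.
void crop_rgb565_center(const uint16_t *src, uint16_t *dst) {
    const size_t src_w = 160, dst_w = 96, dst_h = 96;
    const size_t x_off = (160 - 96) / 2;  // 32 pixels in from the left edge
    const size_t y_off = (120 - 96) / 2;  // 12 rows down from the top

    for (size_t y = 0; y < dst_h; y++) {
        memcpy(dst + y * dst_w,
               src + (y + y_off) * src_w + x_off,
               dst_w * sizeof(uint16_t));
    }
}
```

A crop like this discards the borders; keeping the full field of view would need a downscale instead.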

What error do you see?

any suggestions for how to change camera data from 160 x 120 RGB565 to 96 x 96 RGB565? If I were using TensorFlow.js, there is a method called slice that works great.

Here’s an example: https://github.com/edgeimpulse/example-signal-from-rgb565-frame-buffer
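Roughly, the idea there is to wrap the RGB565 frame buffer in a signal_t whose get_data callback expands each pixel into the packed 0xRRGGBB float value the image block expects. A simplified sketch of that idea (the buffer name and 96 x 96 size are placeholders; see the repo for the full version):

```cpp
#include <stdint.h>
#include <stddef.h>

// Hypothetical frame buffer filled by the camera driver: 96x96 RGB565 pixels.
static uint16_t frame_buffer[96 * 96];

// get_data callback: image impulses expect one float per pixel holding packed
// 0xRRGGBB, so each RGB565 pixel is expanded to 8-bit channels before packing.
static int rgb565_get_data(size_t offset, size_t length, float *out_ptr) {
    for (size_t i = 0; i < length; i++) {
        uint16_t px = frame_buffer[offset + i];
        uint32_t r = (px >> 11) & 0x1f;
        uint32_t g = (px >> 5)  & 0x3f;
        uint32_t b =  px        & 0x1f;
        // Scale the 5/6-bit channels back up to 8 bits.
        uint32_t r8 = (r << 3) | (r >> 2);
        uint32_t g8 = (g << 2) | (g >> 4);
        uint32_t b8 = (b << 3) | (b >> 2);
        out_ptr[i] = (float)((r8 << 16) | (g8 << 8) | b8);
    }
    return 0;
}

// Then hook it up to the classifier from the exported inferencing library:
//   signal_t signal;
//   signal.total_length = 96 * 96;
//   signal.get_data = &rgb565_get_data;
//   ei_impulse_result_t result = { 0 };
//   run_classifier(&signal, &result, false);
```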
