Custom dataset error OSError: Currently, only float32 input type is supported

I posted this topic on the OpenMV forum, since the OpenMV IDE is the environment that produced the error, but they said it wasn’t in their code and was something inside TensorFlow.

I’m using Edge Impulse to try to create a custom dataset.

After following the 15-minute tutorial on creating a custom dataset, I quickly realized it may only be intended for the H7 Plus. Upon running the example .py they provide, I immediately ran into the memory size problem, but got that handled.

The next error is one that I cannot seem to get past. As described in the title of this post, it is “OSError: Currently, only float32 input type is supported”.

It only occurs after the program runs and tries to take the first capture.

Hardware/software:
OpenMV Cam H7 R1 (non-Plus) with the OV7725 sensor. Firmware 3.9.4 (latest).

I have found some information relating to this in older posts (2018 or so) about plain TensorFlow, but nothing related to Edge Impulse.

I noticed that on some of the deployments in Edge Impulse it is possible to change the output from uint8 to float32, but this is not possible for the OpenMV build.

There was a firmware build for OpenMV from 2019 whose notes say it supports int8, uint8, or float32. Given the context of the thread it was in, that seems like it should have taken care of this issue.

Is it currently possible to use Edge Impulse with the non-Plus H7? The Edge Impulse site does say it supports the H7 Plus, with no mention of the original H7.

Thank you for taking the time to look at this.

Hi @Trainadana,

I tried with the latest firmware on my OpenMV H7 Plus and it is working fine.

The OpenMV H7 has less RAM, so you need to make a few changes to your impulse and MobileNet model:

  • Crop images to 48x48 px in the Impulse Design
  • Select Grayscale in the Image block
  • Select MobileNetV2 0.05 in the Training block
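On the camera side, a minimal OpenMV script matching those settings might look like the sketch below. This is an untested illustration, assuming firmware with the `tf` module, and assuming the model and labels were copied to the camera’s flash drive as "trained.tflite" and "labels.txt" (both names are illustrative):

```python
def best_label(labels, scores):
    # Pure-Python helper: pick the label with the highest confidence score.
    i = max(range(len(scores)), key=lambda k: scores[k])
    return labels[i], scores[i]

try:
    import sensor, tf  # OpenMV firmware modules; only present on the camera
    ON_CAMERA = True
except ImportError:
    ON_CAMERA = False  # lets you syntax-check the script off-device

if ON_CAMERA:
    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)  # matches the Grayscale image block
    sensor.set_framesize(sensor.QQVGA)
    sensor.set_windowing((48, 48))          # matches the 48x48 impulse input
    sensor.skip_frames(time=2000)

    # load_to_fb=True keeps the model out of the small H7 heap
    net = tf.load("trained.tflite", load_to_fb=True)
    labels = [line.rstrip() for line in open("labels.txt")]

    while True:
        img = sensor.snapshot()
        for obj in net.classify(img):
            print(best_label(labels, obj.output()))
```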

You can also check this tutorial on the H7 camera: https://www.hackster.io/ishotjr/mega-or-mini-image-classification-on-the-1mb-openmv-cam-h7-be57ac

Your error seems to point to another type of issue, but it’s worth trying those changes first.

Aurelien

@aurel thanks for the reply and information.

On the standard H7 I did run into the memory issue, and that was resolved by the changes you outlined above.

I’ve tried every build version with no luck on the float32 error.

I will try out the link you supplied to see if that takes care of it.

I’m in Africa for a little while; I do have H7 Pluses coming, but they’re still a couple of weeks out. It just seems odd that I can’t get the standard H7 to work at all.

@aurel Just following up on this based on your advice.

I followed the tutorial you mentioned exactly, and it still produced the same error.

Not sure what else to try at this point; I will probably just wait until I get the H7 Plus. The only other thing I can think of (and I have seen this before) is that my work PC causes issues due to the profile IT has on it. It seems odd, but if something is interfering in the background locally, I guess it’s possible.

I might try doing this on my Surface just to rule out any PC issues.

Any other ideas are welcome. Thanks

Just to confirm, you’re using the OpenMV deploy right?

We have a number of TensorFlow Lite models available if you go to Dashboard in your project; you could try downloading this one:

@janjongboom Thanks for the reply.

I am using the OpenMV deployment.

The screen you are showing for the dashboard is something I have not seen before. I was pretty sure I had tried opening every option on every tab. What you are describing looks like it would solve my problem.

I will try to get to those downloads a little later; however, if you have a screenshot of what to click to open those downloads, it would help speed things up quite a bit.

Thanks again

Just the download button on the right - but this should already be the selected one in the OpenMV export… I’ve pinged the OpenMV team about this.

@janjongboom circling back so this isn’t left open-ended.

I did find what you were talking about; I hadn’t realized those were available once training was complete.

First: I tried the model that was quantized to int8 but had float32 input and output, like you depicted. It resulted in the same error.

Second: the float32-only model is built at 174K, but for some reason my H7 only shows 111K of total memory.

Aside from that, am I correct that the models on the dashboard can be renamed and used as the loaded .tflite model?

Thanks again; at this point I am just going to wait for the Plus modules to arrive.

@Trainadana Yeah, just rename it and you should be good to go. The full float32 model will indeed not run. I’ve pinged the OpenMV folks again; hopefully they can resolve this somewhere in the TFLite library running on the OpenMV board.
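One side note on renaming (a generic TFLite observation, not Edge Impulse specific): renaming doesn’t change the file’s contents, and a valid TFLite flatbuffer carries the "TFL3" file identifier at byte offset 4, so you can sanity-check a downloaded/renamed model before copying it to the camera:

```python
def looks_like_tflite(data: bytes) -> bool:
    # TFLite flatbuffer files carry the file identifier "TFL3" at bytes 4-8.
    return len(data) >= 8 and data[4:8] == b"TFL3"

# Usage: looks_like_tflite(open("trained.tflite", "rb").read(8))
```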