I wanted to build a system that uses the RGB sensor and the microphone simultaneously on the Arduino Nano 33 BLE Sense. I tried adding separate blocks for each purpose, and the training results were pretty good, with an accuracy of about 98%.
But when I tried to test it using live classification, it started giving me errors. I had set the frequency to 16000 Hz for both the microphone and the colour sensor.
The error was: "Cannot process file, impulse requires axis "audio", but not present in this file (found: "R", "G", "B") - note: axes are case sensitive." I pretty much understand the error, but I can't find a solution.
If anyone has a solution to this, I'd appreciate it.
@Darsheeltripathy For this to work, all your data needs to have the audio, R, G and B axes, so four axes per data sample. Make sure every sample has four sensor axes (the audio recording and the color-sensor recording, all at 16 kHz).
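As a rough illustration of what that could look like, here is a minimal Python sketch that merges an audio recording with colour-sensor readings into one four-axis CSV before uploading. The file names, the RGB sample rate, and the CSV layout (a timestamp column plus one column per axis) are assumptions for the example, not something from your project:

```python
# Sketch: merge a 16 kHz audio recording with colour-sensor readings
# into a single four-axis CSV (audio, R, G, B) for upload.
# File names and formats below are placeholders.
import csv
import wave
import numpy as np

AUDIO_WAV = "sample.wav"   # hypothetical: 16 kHz, 16-bit mono recording
RGB_CSV = "rgb.csv"        # hypothetical: rows of R,G,B at a much lower rate
OUT_CSV = "combined.csv"
SAMPLE_RATE = 16000        # both streams must end up at the same frequency

# Load the raw audio samples
with wave.open(AUDIO_WAV, "rb") as w:
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

# Load the RGB readings and repeat each row so the colour stream matches
# the audio length (the colour sensor can't physically sample at 16 kHz,
# so each reading is simply held until the next one).
rgb = np.loadtxt(RGB_CSV, delimiter=",")   # shape: (n_readings, 3)
rgb = np.repeat(rgb, int(np.ceil(len(audio) / len(rgb))), axis=0)[:len(audio)]

# Write one CSV with a timestamp column and four sensor axes
with open(OUT_CSV, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "audio", "R", "G", "B"])
    for i, a in enumerate(audio):
        t_ms = i * 1000.0 / SAMPLE_RATE
        writer.writerow([t_ms, int(a), *rgb[i]])
```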
How will I do that for audio samples?
Can I rename all the axes to “audio”? Will that work?
I have set all the axis names to "audio", and the output is given below.
By the way, I am getting this error:
Cannot set tensor: Dimension mismatch. Got 11960 but expected 3960 for dimension 1 of input 0.
If you'd rather have two completely separate models, then I'd create two projects and add the audio to one and the RGB data to the other. Note that we currently only support deploying two completely separate models on Linux, though.
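For reference, a minimal sketch of what running two separately deployed models side by side on Linux could look like with the Python SDK (edge_impulse_linux); the model file names and the dummy feature arrays here are placeholders:

```python
# Sketch: run two separately deployed .eim models side by side
# using the Edge Impulse Linux Python SDK (pip install edge_impulse_linux).
from edge_impulse_linux.runner import ImpulseRunner

audio_runner = ImpulseRunner("audio-model.eim")  # hypothetical path
rgb_runner = ImpulseRunner("rgb-model.eim")      # hypothetical path

try:
    audio_info = audio_runner.init()
    rgb_info = rgb_runner.init()

    # Placeholder feature arrays sized to each model's expected input
    audio_features = [0.0] * audio_info["model_parameters"]["input_features_count"]
    rgb_features = [0.0] * rgb_info["model_parameters"]["input_features_count"]

    print(audio_runner.classify(audio_features)["result"])
    print(rgb_runner.classify(rgb_features)["result"])
finally:
    audio_runner.stop()
    rgb_runner.stop()
```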