Hi,
I have seen that you currently support a limited number of boards. Please add support for some popular boards like the Arduino MKR and the Nano 33 IoT if possible. In the meantime, I would like to use the Ingestion API, but I am facing some difficulties as I am working with audio.
Could you please make a tutorial on how to use the Ingestion API and develop a simple model for both audio and accelerometer data? That would be great, and I am sure a lot of developers would switch to this platform after such a tutorial.
Thank You.
Hi @dhairyaparikh1998, for low-dimensional data like accelerometers and most other sensors you can use the data forwarder - it makes it very easy to transport data off any Arduino board.
The thing with audio is that it’s very hard to do in a generic way on these boards, as you cannot write to the serial interface fast enough. So you need to buffer the data somewhere (and audio takes up a lot of RAM, so preferably in flash), then have a way of retrieving the data later on. We’ve done this work on the ST IoT Discovery Kit, the Arduino Nano 33 BLE Sense and the Eta Compute AI Sensor, but it requires a bunch of custom work on our side for every board.
An alternative would be to use your mobile phone to capture the audio, then export your project as an Arduino library, and look at the static_buffer example. It shows how to call the impulse with a fixed buffer. You can then use a library for your microphone (like PDM) to fill that buffer with raw audio data and classify. The buffer expects (in most cases) 16-bit PCM values, which is the format the PDM library already produces, so you can throw them straight in.
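To illustrate the buffer-filling step, here is a hedged sketch. The buffer size constant and function name below are assumptions for illustration (in the real exported library the size comes from a generated constant such as the raw sample count); the point is just that the 16-bit PCM samples from PDM can be copied into the float feature buffer unchanged.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical buffer; in the static_buffer example the size is a
// constant generated by the exported Arduino library, not this value.
static const size_t kSampleCount = 16000; // e.g. 1 s of 16 kHz audio (assumption)
static float features[kSampleCount];

// Copy raw 16-bit PCM from the microphone (e.g. the PDM library's
// receive buffer) straight into the float feature buffer - no scaling
// needed, the impulse expects the raw PCM values.
void fill_features(const int16_t *pcm, size_t n) {
    for (size_t i = 0; i < n && i < kSampleCount; i++) {
        features[i] = (float)pcm[i];
    }
}
```

In the static_buffer example this `features` array is then wrapped in a signal object whose data callback reads from it, and that signal is handed to the classifier function to run the impulse.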
Hi @janjongboom,
Thank you for the quick reply, got your point. I will try the method you suggested and post any questions I have on this forum.
Great to know that this is an active community.