I’m using the Apollo3 Artemis module and trying to get speech recognition working on it. The repo does provide micro speech examples, but training a new model seems complicated and I like Edge Impulse much better. So far I have an Artemis module with a PDM microphone, sampling audio at 16 kHz, and I can generate a WAV file directly from the samples taken from the module. I have just two classes for now: one wake word and noise (window size: 1000 ms, window increase: 250 ms). I have been training my model on these recordings, and both training and testing have gone great.
I deploy it as an Arduino project (the Nano 33 BLE Sense deployment), but only add the generated library to my own project.
On the Artemis module it does not perform as well as it did in online testing. I think the problem is the difference between the BLE Sense PDM library and the PDM library used for the Artemis, or how the PDM readings are collected (but I’m not sure).
I know that the Artemis PDM library places the data in a uint16_t buffer with a buffer size of 4096 (as in the PDM examples). I thought maybe I could make it work anyway, since “inference.buffer” is an int16_t. I attached a code snippet; in it, the PDM data is placed in “pdmData”.
Sorry for my lack of understanding, and thank you!