Question/Issue:
Hello, I am working on a project that involves acoustic data and environmental parameters (temperature, humidity, and carbon dioxide). I collected the acoustic and environmental data under the same conditions, and I want to use multi-modal sensing (sensor fusion) to build a single robust model that recognizes both acoustic and environmental signatures. The goal is that when environmental parameters change, the model can cross-check against the acoustic signature instead of relying on any single parameter, to confirm that the ongoing activity is the correct one.
The environmental data is in a .csv file with four columns: three environmental inputs and one output label. The acoustic datasets are in .wav format. I got completely lost while following the Edge Impulse Docs tutorials on Data Fusion and Sensor Fusion with Embeddings. If you require any additional information, I can provide it. Any suggestions or help would be greatly appreciated.
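To make the fusion idea concrete, here is a minimal sketch (not Edge Impulse code; all function names are my own illustrations) of one common approach: extract simple features from a window of audio and concatenate them with the matching row of environmental readings, so the classifier sees one combined feature vector per window.

```python
import numpy as np

def acoustic_features(samples):
    # Simple time-domain features from one audio window
    rms = np.sqrt(np.mean(samples ** 2))                   # signal energy
    zcr = np.mean(np.abs(np.diff(np.sign(samples)))) / 2   # zero-crossing rate
    return np.array([rms, zcr])

def fuse_window(samples, env_row):
    # env_row: [temperature, humidity, co2] from the CSV
    # Concatenate acoustic features with environmental readings
    return np.concatenate([acoustic_features(samples), np.asarray(env_row)])

# Synthetic example: one second of a 440 Hz tone at 16 kHz,
# plus one hypothetical CSV row (temperature °C, humidity %, CO2 ppm)
audio = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
env = [22.5, 48.0, 415.0]
features = fuse_window(audio, env)
print(features.shape)  # one 5-element fused feature vector
```

The key practical step is aligning the two streams: each audio window must be paired with the environmental readings taken at the same time before the vectors are concatenated.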