Listen to the hornet

We are trying to develop an orientation detector based on the sound of a hornet in flight.
Thanks to a thesis that Michel found, we know that the wing-beat frequency is 100 Hz and that this is characteristic of hornets. Using two microphones spaced 1.2 m apart (half a wavelength, hence in phase opposition), we hope to capture a maximum offset (flight parallel to the microphone axis) or a minimum (flight perpendicular to it). Ambitious, isn't it?
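To give an order of magnitude of what we expect to measure, here is a quick numeric check of the setup; the speed of sound (~343 m/s) and the 16 kHz sample rate are assumptions of mine, not measured values:

```python
# Rough numbers for the two-microphone setup.
# Assumed values (not measured): speed of sound ~343 m/s, 16 kHz sample rate.
C = 343.0        # speed of sound in air, m/s
F_WING = 100.0   # hornet wing-beat frequency, Hz
D_MICS = 1.2     # microphone spacing, m
FS = 16_000      # assumed I2S sample rate, Hz

wavelength = C / F_WING               # wavelength of the 100 Hz wing-beat tone
max_delay_s = D_MICS / C              # largest possible inter-microphone delay
max_delay_samples = max_delay_s * FS  # the same delay expressed in samples

print(f"wavelength at {F_WING:.0f} Hz: {wavelength:.2f} m")
print(f"max delay between microphones: {max_delay_s * 1e3:.2f} ms "
      f"= {max_delay_samples:.1f} samples at {FS} Hz")
```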

My question: how do we go about developing an AI, and starting from what?

We want to observe the difference in sound between the two sources. Do we need to:

  • build an AI that listens for the frequency of the hornet's flight and
    separates the two sound arrivals, i.e. use stereo recordings
    and measure the gap between the two sound tracks (the kind of measurement sketched after this list)?

  • build an AI that listens directly to the sound of the two microphones combined on a single channel,
    and give it, as a training basis, the different shifts observed while rotating the device?
    (attachment: decalage_son_2_I2S_10ms)
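
To illustrate what I mean by measuring the gap between the two tracks, here is the kind of computation I have in mind, on synthetic signals; the 16 kHz sample rate and the simulated 23-sample offset are placeholders I invented for the illustration:

```python
import numpy as np

# Sketch of "measuring the gap between the two sound tracks" with plain
# cross-correlation (no AI). The signals here are synthetic stand-ins for
# the two I2S channels; sample rate and delay are invented for the demo.
FS = 16_000          # assumed sample rate, Hz
F_WING = 100.0       # hornet wing-beat frequency, Hz
TRUE_DELAY = 23      # simulated offset between the channels, in samples

t = np.arange(0, 0.5, 1 / FS)
buzz = np.sin(2 * np.pi * F_WING * t) + 0.3 * np.random.randn(t.size)
left = buzz
right = np.roll(buzz, TRUE_DELAY)    # second channel: same sound, delayed

# Cross-correlate the two channels; the lag of the peak is the offset
# (in samples) of `right` relative to `left`.
corr = np.correlate(right - right.mean(), left - left.mean(), mode="full")
lags = np.arange(-len(left) + 1, len(right))
estimated_delay = lags[np.argmax(corr)]

print(f"true delay: {TRUE_DELAY} samples, estimated: {estimated_delay} samples")
```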

I don’t know if I’m expressing the problem correctly.

Thank you for enlightening me.

Hi @BARROIS,

Based on our previous conversation in other threads, I do not think AI is the best tool for your project. From what I understand, you are trying to identify the location of a hornet based on the sound arriving at two different microphones. What you want is "sound localization." Please see this academic paper to get a start on how that might be accomplished: "Sound Localization Based on Acoustic Source Using Multiple Microphone Array in an Indoor Environment" (Electronics, open access).
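
For what it is worth, once a delay between the two channels has been estimated (for example by cross-correlation), a classical far-field formula converts it into a bearing. This is only a sketch of the general idea, not the method of the paper; the 343 m/s speed of sound and the 16 kHz sample rate are assumptions on my part:

```python
import numpy as np

# Turn an estimated inter-microphone delay into an angle of arrival.
# Far-field assumption; 343 m/s and 16 kHz are assumed, not taken from the paper.
C = 343.0        # speed of sound, m/s
D_MICS = 1.2     # microphone spacing, m
FS = 16_000      # assumed sample rate, Hz

def delay_to_angle(delay_samples: float) -> float:
    """Angle of arrival in degrees, measured from the broadside of the mic pair."""
    delay_s = delay_samples / FS
    # Clamp to the physically possible range before taking the arcsine.
    sin_theta = np.clip(C * delay_s / D_MICS, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

print(delay_to_angle(23))   # a 23-sample delay maps to roughly 24 degrees
print(delay_to_angle(0))    # zero delay = source broadside to the pair (0 degrees)
```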

I am surprised that AI, advertised as omnipotent, cannot solve this problem.
Is it because we have no idea how to ask the question?
Or is the notion of time not easy to include in an AI?
I am indeed looking for a temporal shift. Are there AIs that specialize in temporality, in chronology?
I have vaguely heard about "recurrent" AI; is this a path worth exploring?
Thank you for this reference; it seems a little beyond my knowledge.