Question/Issue: The deployed model behaves differently from the model in Edge Impulse. I have tried changing the impulse design and deploying an unoptimized version, but to no avail.
Project ID: 215622
I created a simple project (215622) to classify two sounds: the impact of a ball on the table (BALL) and silence (NOISE). The peculiarity of this project is that the microphone is pressed against the table, so the table acts like a microphone membrane; because of this, each impact sounds like a knock directly on the microphone.
Live classification in Edge Impulse gives a good result, but when I deploy the Arduino library on an ESP32-WROOM, I get a terrible result. The program classifies the ball only when I interact very strongly with the microphone (hitting or rubbing directly on the microphone).
I tried deploying a non-optimized version. I also ran another similar project (198795) that I created earlier, and it works fine, which is why I conclude the problem is not in my sketch but in the library itself. I tried to copy that project's impulse design settings exactly, but the result is the same. Also, I once managed to deploy this project (211436), and it classified the ball hitting the table, though not every hit. Unfortunately, that was the only successful attempt.
Projects 211436 and 215622 are the same; project 215622 just has a simpler dataset, created specifically for testing.
I just checked the accuracy on your testing dataset, and it is 1.56%.
Also, your training dataset has samples that are a few milliseconds long and one that is 36 s long (which I think contains both classes you want to predict). The same goes for your testing dataset: your only sample contains occurrences of both classes but has only one label.
I’d suggest starting by cleaning up your dataset. Make sure your “Noise” label does not contain any occurrence of what you’re trying to classify; otherwise it will “confuse” the model.
Louis, thank you for the answer and the time spent!
Firstly, the 1.56% accuracy on my dataset is due to the fact that I did not mark the ball samples in the audio files. That is, more than 90% of each file carries the NOISE label, but despite this you can clearly see that most of the hits are classified as BALL.
I should have created a new label for this audio so as not to confuse you.
About my training data: my task, in addition to classifying the impact of the ball on the table, is to distinguish that impact from impacts on other surfaces. Therefore, the NOISE label also covers ball impacts, but they are quieter and sound different; I hope this is enough to distinguish them from hits on the table.
You also noticed that a 36 s stretch of my audio is labeled NOISE, even though it consists of the ball hitting a table. That is because those hits occur on a table to which the microphone is not attached; I planned to classify such hits as NOISE as well.
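As an aside, long recordings like that 36 s one are easier to label cleanly once they are cut into short fixed-length windows (Edge Impulse Studio's sample-splitting tool does essentially this). Below is a minimal Python sketch of that windowing step; the function name, the 16 kHz sample rate, and the 1 s window size are my own assumptions, not settings taken from the project:

```python
# Hypothetical sketch: cut a long recording into equal, fixed-length
# windows so each window can carry a single label (BALL or NOISE).

def split_into_windows(samples, sample_rate, window_seconds=1.0):
    """Return a list of equal-length windows; a trailing remainder
    shorter than one window is dropped."""
    window_len = int(sample_rate * window_seconds)
    return [samples[i:i + window_len]
            for i in range(0, len(samples) - window_len + 1, window_len)]

# Example: a 36 s recording at an assumed 16 kHz becomes 36 one-second windows.
recording = [0] * (36 * 16000)   # stand-in for real audio samples
windows = split_into_windows(recording, sample_rate=16000)
print(len(windows))
```

Each resulting window can then be labeled on its own, so a single long file never carries one label for mixed content.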
I shouldn’t have written that I was trying to classify silence and the impact of the ball…
This time I will try to leave only noise under the NOISE label, to make sure the problem really is in my dataset. Thanks again for the answer.
I have improved my dataset and clearly divided the boundaries between BALL and NOISE. You were right that there was an error in my dataset, but the problem has not completely disappeared. When tested in Edge Impulse, the model shows 99% accuracy. The deployed model shows 75% accuracy; that is, out of 100 clear ball hits, only 75 are classified as BALL.
It’s frustrating, because even under ideal conditions I get such a poor result. I don’t understand how I can improve it.
@Vylix I’d collect 10,000 samples to get a better picture of the model’s accuracy. At 100 samples, your measured accuracy may be perhaps 75% ± 15%; at 10,000 samples the margin of error will decrease. Also, analyze the temporal variation of the data to see whether the electronics or microphone are being affected by ambient noise level (dBA), temperature, humidity, light, or something else.
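To put a number on that margin-of-error point: a quick sketch using the normal approximation to the binomial (a simplifying assumption; the function name and the 95% z-value of 1.96 are my choices, and the exact figure depends on the confidence level you pick):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed accuracy p
    over n independent trials (normal approximation to the binomial)."""
    return z * math.sqrt(p * (1 - p) / n)

# At 75% observed accuracy, the 95% margin is roughly +/- 8.5 points
# with 100 samples, shrinking to roughly +/- 0.85 points with 10,000.
print(round(margin_of_error(0.75, 100), 3))
print(round(margin_of_error(0.75, 10_000), 4))
```

So the gap between 99% in Studio and 75% on-device could partly be measurement noise at small sample counts, but a difference that large with 100 trials is unlikely to be noise alone.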