Hi guys, I have several questions.
I read the article written by @janjongboom. It mentions using MFCC for human speech/voice and MFE for non-human sound. Why is MFCC better suited to human speech and MFE to non-human sound? Is there any research or paper that explains this?
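For context, my current understanding is that MFE gives the log Mel-filterbank energies, while MFCC applies a DCT on top of those energies. Here is a minimal sketch of that difference using librosa purely as an illustration (the parameter values such as n_mels=40, n_mfcc=13, and the file name are my assumptions, not Edge Impulse's exact settings):

```python
import librosa

# Load audio (hypothetical file, resampled to 16 kHz)
y, sr = librosa.load("sample.wav", sr=16000)

# MFE: log Mel-filterbank energies (spectrogram-like features)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40)
mfe = librosa.power_to_db(mel)

# MFCC: apply a DCT on top of the log-mel energies to decorrelate them
mfcc = librosa.feature.mfcc(S=mfe, sr=sr, n_mfcc=13)

print(mfe.shape, mfcc.shape)
```

Is the intuition that the DCT step compacts the vocal-tract (formant) information that matters for speech, while the raw filterbank energies keep more detail that helps with general sounds? A pointer to a paper would be appreciated.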
I ran NN training, but got the result 'Job failed'.
I decreased the window size from 500 ms to 250 ms, but still got the same 'job failed' error. This happened after I ran the EON Tuner and retrained the suggested NN architecture. Previously, with the same data and a larger window size of up to 1000 ms, training ran smoothly. Could running the EON Tuner be related to this error?
In the EON Tuner results I ran, why are there only 2D Conv architectures? There is no 1D Conv network type as shown in the documentation (see the sketch below for what I mean by 1D vs 2D Conv).
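To be clear about what I mean by the two network types, here is a rough sketch in plain Keras, not the exact architecture the EON Tuner generates; the feature shape (49 frames x 40 mel bands) and layer sizes are just assumptions for illustration:

```python
import tensorflow as tf

frames, bands = 49, 40  # assumed MFE output shape (time frames x mel bands)

# 1D Conv: convolve along the time axis, treating mel bands as channels
conv1d_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(frames, bands)),
    tf.keras.layers.Conv1D(8, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# 2D Conv: treat the spectrogram as an image (time x frequency x 1 channel)
conv2d_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(frames, bands, 1)),
    tf.keras.layers.Conv2D(8, kernel_size=(3, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
```

I expected both types to show up among the tuner's candidates, so I'd like to understand why only the 2D variant appears for my project.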