I have a specific question about the audio data that is visualized in the Edge Impulse project.
The amplitude of the original data ranges between -1 and 1, but in the visualization of the uploaded audio data, the values are in the thousands/tens of thousands.
How are these normalized?
How are these varying?
Also, can I know what range it is scaled to, because the input in live classification or inference is also in the thousands/tens of thousands?
A scaling factor is applied either during feature extraction or as a post-processing step to ensure that the feature values are within a suitable range for the learn block used for classification or inference (which is why you also see this on device).
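For illustration, audio is commonly stored as 16-bit PCM, which would explain values in the thousands/tens of thousands. Here is a minimal sketch of that convention; this is an assumption about the storage format, not a confirmed description of Edge Impulse's exact pipeline:

```python
import numpy as np

# Hypothetical float audio samples in the original [-1.0, 1.0] range
float_samples = np.array([-1.0, -0.25, 0.0, 0.33, 1.0])

# Scale to 16-bit PCM (a common convention, assumed here): multiply by
# 32767 and clip to the int16 range of -32768..32767
int16_samples = np.clip(float_samples * 32767, -32768, 32767).astype(np.int16)

print(int16_samples)  # values now span the thousands/tens-of-thousands range

# Dividing by the same factor recovers the original [-1, 1] amplitudes
recovered = int16_samples / 32767.0
```

Under this assumption, a sample shown as ~10000 in the visualization corresponds to an original amplitude of roughly 0.3.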
There are a number of parameters that can influence the scaling.
Hopefully that answers your question; if you have any further questions, I may need to loop in the DSP team.