Classifier not working on canned sample

Do you mean on device, as in using a microphone, or an example that runs on a preset input array?

Using a microphone, ideally on some nRF variant, but I would take anything.

Here is another Nordic example. It uses the correct conversion function, which simply casts the int16 inputs to floats: firmware-nordic-nrf9160dk/src/sensors/ei_microphone.cpp at 0a5c87bd7c09d0232121a17169e9935f03b1f1ab · edgeimpulse/firmware-nordic-nrf9160dk · GitHub
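For illustration, here is a minimal sketch of that conversion, with hypothetical buffer and callback names (not the exact ones from the repo): the key point is that each int16 sample is cast straight to float, with no rescaling.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical capture buffer filled by the microphone driver (int16 PCM).
static int16_t sample_buffer[16000];

// get_data callback handed to the Edge Impulse signal_t. Each int16
// sample is cast directly to float, so the values keep the original
// int16 range (roughly -32768..32767) rather than being scaled to +/-1.0.
static int microphone_audio_signal_get_data(size_t offset, size_t length, float *out_ptr) {
    for (size_t i = 0; i < length; i++) {
        out_ptr[i] = (float)sample_buffer[offset + i]; // plain cast, no division
    }
    return 0;
}
```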

I’m looking for a complete example. That model is for continuous motion recognition, not audio.

Not trying to be difficult but I get the distinct impression that the pre-emphasis stage has never been tested with audio and I’m wondering why you’re so confident that it works.

For every commit to our codebase, we build the C++ source for each and every DSP block, across a variety of settings, and run the compiled binaries to make sure the output matches the results in Studio. So no, your distinct impression is incorrect, and this is why I am confident it works.

As with any software company, test coverage can never be 100%, so there will occasionally be escapes. These are easily found when raw data is logged, so that questionable samples can be run again and any issues reproduced. But I know there are cases that exercise the pre-emphasis code, and this is why I'm "so confident," barring a sample I can run that reproduces a suspected issue.

To address your question about a complete example: you are correct that this repo ships with a continuous motion recognition model, but all the code needed to run an audio model is there as well. See the instructions here for how you can update this repo with your own model. It's as simple as replacing one folder with the folder exported from Studio.

I'm sorry I don't have a specific audio example, but as far as showing the interaction with hardware drivers goes, all of our example repos contain code to exercise all the sensors on the demonstration board, not just the microphone.

Well, this has been an incredibly frustrating exchange that makes me not want to re-up our subscription. I'm asking perfectly reasonable questions that boil down to this: the model runs fine in Studio but requires magical gain values on our platform.

When I've asked for a reference example, your answer has essentially been "it works because I say so."

Good luck running a platform treating your customers this way.

@jefffhaynes I'm sorry the examples I've tried to provide were not helpful. I've been trying either to help you debug an on-device issue, or to get a sample array from you that doesn't match the results in Studio, so that I have something to reproduce and fix.

Perhaps I misunderstood what you wanted when you asked for an example? Here is a zip archive of another reference, using our example-standalone repo, which lets you build and test against a static array on any target. I've loaded it with the model deployment from your project, updated to the latest SDK (I just regenerated the deployment). If you build and run this locally, you will see that the results for this sample (ID 2096 in your project) match Studio in both the classification result and the processed features. (Your uploaded sample is embedded in main.cpp.) I changed our vanilla example to use run_classifier_continuous, since that is what I saw in your code, and set the debug flag to true so we can confirm that we're getting the correct processed features.
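Roughly, the pattern in that archive looks like the sketch below, assuming the model folder exported from Studio is on the include path, with a placeholder where the real raw features of sample 2096 are pasted:

```cpp
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Placeholder: in the real file this is the "Raw features" array of the
// sample under test, pasted from Studio -- one slice's worth of samples.
static const float features[EI_CLASSIFIER_SLICE_SIZE] = { 0 };

static int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    run_classifier_init(); // required once before feeding continuous slices

    signal_t signal;
    signal.total_length = EI_CLASSIFIER_SLICE_SIZE;
    signal.get_data = &raw_feature_get_data;

    ei_impulse_result_t result = { 0 };
    // debug=true prints the processed (DSP) features, so they can be
    // compared line by line with the processed features shown in Studio.
    EI_IMPULSE_ERROR err = run_classifier_continuous(&signal, &result, true);
    if (err != EI_IMPULSE_OK) {
        ei_printf("run_classifier_continuous failed (%d)\n", (int)err);
        return 1;
    }

    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("%s: %.5f\n", result.classification[ix].label,
                  result.classification[ix].value);
    }
    return 0;
}
```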

As for "magic gain," there's no magic. The range of your samples at inference time must roughly match (within an order of magnitude) the range used in training, i.e., the range of your samples in Studio. This is true for any supervised learning model; it's nothing specific to our platform. Your samples in Studio are in the int16 range, roughly ±32767. So at inference time your samples, though cast to float, should stay in that same range (i.e., not ±1.0).
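To make the range point concrete, a toy comparison (the divide-by-32768 is the usual full-scale normalization for audio, shown here only as the thing not to do when the training data was int16-range):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    int16_t pcm = -21000; // example microphone sample

    float kept  = (float)pcm;            // -21000.0: the int16-range values training saw
    float wrong = (float)pcm / 32768.0f; // about -0.64: +/-1.0 range, ~4 orders of magnitude off

    printf("cast: %.1f  normalized: %.4f\n", kept, wrong);
    return 0;
}
```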

Sorry for the delay in responding, and thank you for the example. I am able to run the samples statically on our platform, including the pre-emphasis stage. However, when I switch to continuous processing from the microphone, it completely stops working unless I disable pre-emphasis.
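For concreteness, the pre-emphasis I'm referring to is the usual first-order high-pass filter, y[n] = x[n] − α·x[n−1]. A minimal sketch (the α = 0.98 coefficient is an assumed, typical value, not necessarily what the SDK uses) of the per-slice state involved in continuous processing:

```cpp
#include <cstddef>

// First-order pre-emphasis: y[n] = x[n] - alpha * x[n-1].
// In sliced/continuous processing, `prev` must persist between slices;
// resetting it on every slice injects a step at each slice boundary.
struct PreEmphasis {
    float alpha = 0.98f; // assumed typical coefficient
    float prev  = 0.0f;  // last input sample of the previous slice

    void process(float *buf, size_t n) {
        for (size_t i = 0; i < n; i++) {
            float x = buf[i];
            buf[i]  = x - alpha * prev;
            prev    = x;
        }
    }
};
```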

I still feel as though there must be some difference between processing the samples as they appear in the portal and processing raw data from the microphone, although I am unable to tell what that difference could be. Are you doing any processing on the data during ingestion?