I am trying to build a simple motion classification application on an nRF52833 SoC using the nRF5 SDK. I am able to generate a model and deploy it, but when I run the classifier function ‘ei_run_classifier’ it returns ‘EI_IMPULSE_DSP_ERROR’.
I am using the ‘spectrogram’ preprocessing block. Are there any other settings that need to be taken care of when using preprocessing blocks? I did not have this issue when I used ‘Raw data’.
Hi Louis, I tested it by adding a spectrogram to each of the axes, and the issue is still present. Let me know what else I can test, or if you need any more information from me.
Actually we would like to add an STFT block, but since a standard block is not available we tried the spectrogram, which gives similar accuracy results. We are sampling data at 50 Hz, and the window size is 1000 ms with a 200 ms hop. What parameters do you suggest we choose when generating the spectrogram? Currently we use: Frame Length: 1, Frame Stride: 0.2, Frequency Bands: 50, Noise Floor (dB): -150.
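As a quick sanity check on those numbers (a sketch using the standard STFT framing formula, not necessarily Edge Impulse's exact implementation): with a 1000 ms window at 50 Hz, a frame length of 1 s fills the entire window, so only a single frame fits and the spectrogram has no time resolution at all.

```python
# Hypothetical framing arithmetic for the parameters above.
sampling_hz = 50
window_ms = 1000
frame_length_s = 1.0
frame_stride_s = 0.2

window_samples = int(sampling_hz * window_ms / 1000)  # 50 samples per window
frame_samples = int(sampling_hz * frame_length_s)     # 50 samples per frame
stride_samples = int(sampling_hz * frame_stride_s)    # 10 samples per hop

# standard STFT frame count: 1 + floor((window - frame) / stride)
n_frames = 1 + (window_samples - frame_samples) // stride_samples
print(n_frames)  # -> 1: only one frame fits the window
```

A much smaller frame length (for example 0.1 or 0.2 s) would let several frames fit inside the window and give the spectrogram an actual time axis.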
That’s a million dollar question
It will very much depend on your data, your project, and your target. This is why we released the EON Tuner: it tests many parameter combinations in parallel to help you choose the right DSP parameters and the right NN architecture for your specific application.
From what I see in your project, you are getting an error from the EON Tuner saying: An error occurred during the last EON Tuner run: Every label in the training dataset requires at least 5 data samples
It seems that you have long data samples that could be split into several smaller samples.
I wrote a Python script a few weeks ago to split a long CSV into smaller CSVs; let me know if you need it, I'd be happy to share it.
Please do share, but we are already using only 1/5 of the data we have for training. Still, what can we do about the spectrogram EI_IMPULSE_DSP_ERROR? That model's accuracy is fine for us, but it gives this error, whereas a model built from raw data does not.
import csv

def write_new_csv(csv_file_write, column_headers):
    # attach a DictWriter to the freshly opened file and emit the header row
    writer = csv.DictWriter(csv_file_write, fieldnames=column_headers)
    writer.writeheader()
    return writer

def loop_through_csv(csv_reader, split_threshold, column_headers, output_name):
    row_counter = 0
    split_number = 0
    write_csv = None
    writer = None
    for row in csv_reader:
        if row_counter == 0 or row_counter == split_threshold:
            # close the previous file before writing to a new csv file
            if write_csv:
                write_csv.close()
            split_number += 1
            write_csv = open(f'{output_name}.{split_number}.csv', 'w', newline='')
            writer = write_new_csv(write_csv, column_headers)
            writer.writerow(row)
            row_counter = 1
        else:
            writer.writerow(row)
            row_counter += 1
    # close the last output file
    if write_csv:
        write_csv.close()

def main():
    # grab inputs from user
    csv_file_path = input('Enter csv file path: ')
    split_threshold = int(input('Enter how many rows per CSV file you would like: '))
    output_name = input('Name prefix of the output files: ')
    with open(csv_file_path, newline='') as csv_file:
        csv_reader = csv.DictReader(csv_file)
        column_headers = csv_reader.fieldnames
        loop_through_csv(csv_reader, split_threshold, column_headers, output_name)

if __name__ == "__main__":
    main()
And to execute it:
$> python3 csv-splitter.py
Enter csv file path: your-file.csv
Enter how many rows per CSV file you would like: 200
Name prefix of the output files: your-label
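To sanity-check the splitter's arithmetic without touching any files, the same chunking can be reproduced in memory (a standalone sketch, not part of the script above; the column names are made up):

```python
import csv
import io
import math

# a tiny in-memory CSV standing in for one long recording (5 data rows)
source = io.StringIO(
    "timestamp,accX\n" + "".join(f"{i},{i * 0.1}\n" for i in range(5))
)
rows = list(csv.DictReader(source))

threshold = 2  # rows per output file, as entered at the prompt
chunks = [rows[i:i + threshold] for i in range(0, len(rows), threshold)]

# 5 rows at 2 per file -> 3 files holding 2, 2 and 1 rows
print([len(c) for c in chunks])  # -> [2, 2, 1]
assert len(chunks) == math.ceil(len(rows) / threshold)
```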
For the DSP error, unfortunately, I don’t have an nRF52833 SoC with me at the moment.
I’ll try to check internally if someone has more info.
Is the version called 150db noise drop spectrogram on your project the one with the error?
I’d like to run the standalone example by downloading the C++ library, but I see that you have modified your project.
The DSP error could be related to resource limits on your board, or to another issue within our SDK's spectrogram code; I'd like to pin down where the error comes from.
Let me know if you want me to put you in contact with our business developers so they can present this offer to you.
Also, I tested your project with the standalone inferencing example and I don't get the DSP error, but the classification results differ from the Studio Live Classification. I am asking the Core Engineering team if they can have a look.
@rjames found the issue, a fix is being pushed. It will be merged soon.
To give you more context on where the issue came from: the spectrogram implementation in the C/C++ SDK differed from the Python one (spectrogram/dsp.py).
The Python implementation always scales the signal, while the C/C++ implementation only does so if its values are outside the range [-1, 1].
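A minimal sketch of that discrepancy (my own illustration, not the SDK code; I'm assuming "scaling" means normalizing by the peak absolute value):

```python
def scale_always(signal):
    # Python-side behaviour: always normalize by the peak absolute value
    peak = max(abs(v) for v in signal)
    return [v / peak for v in signal] if peak else list(signal)

def scale_if_out_of_range(signal):
    # C/C++-side behaviour before the fix: scale only when the
    # signal leaves [-1, 1]
    peak = max(abs(v) for v in signal)
    if peak <= 1.0:
        return list(signal)
    return [v / peak for v in signal]

low = [0.5, -0.25]                   # already inside [-1, 1]
print(scale_always(low))             # -> [1.0, -0.5]
print(scale_if_out_of_range(low))    # -> [0.5, -0.25]: the inputs to the
                                     #    DSP pipeline diverge here
```

For signals that already exceed [-1, 1] both paths agree, which is why the mismatch only shows up on some datasets.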