Performance calibration error/exception - Scores must sum to 1

Hi,
I'm seeing some odd behavior with the performance calibration feature. After training my model I tried to run the calibration, but it keeps failing: after initialization, the job throws an exception.

Here you can find the log:

Creating job… OK (ID: 4270803)
Generating features for MFCC…
Not generating new features: features already generated and no options or files have changed.
No new features, skipping…
Generating features for MFCC OK
Running performance calibration on synthetic data…
Copying features from processing blocks…
Copying features from DSP block…
Copying features from DSP block OK
Copying features from processing blocks OK
Copying features from processing blocks…
Copying features from DSP block…
Scheduling job in cluster…
Job started
Impulse details:
Sample rate 16000 Hz
Window size 1000 ms
Learn block latency 14 ms
DSP block latency 116 ms
Total latency 130 ms
Using minimum latency 250 ms
Match tolerance 500 ms
Target test sample length is 10 minutes, including 180 test data items.
Using 32 threads for inference.
Processing 2396 windows…
Progress: 20%
Progress: 41%
Progress: 62%
Progress: 83%
DSP took 0:00:13, inference took 0:00:01
Complete
Searching for optimal post-processing configurations.
Performing maximum 50 steps or 5 minutes (1 step minimum)…
Traceback (most recent call last):
  File "/home/testing.py", line 10, in <module>
    ei_testing.testing.run_application_testing(dir_path, options)
  File "/app/./resources/libraries/ei_testing/testing.py", line 117, in run_application_testing
    X, F, best_idx, problem = optimization.optimize_kws_params(
  File "/app/./resources/libraries/ei_testing/optimization.py", line 141, in optimize_kws_params
    raise err
  File "/app/./resources/libraries/ei_testing/optimization.py", line 128, in optimize_kws_params
    algorithm.next()
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/algorithm.py", line 233, in next
    self.evaluator.eval(self.problem, infills, algorithm=self)
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/evaluator.py", line 95, in eval
    self._eval(problem, pop[I], evaluate_values_of=evaluate_values_of, **kwargs)
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/evaluator.py", line 112, in _eval
    out = problem.evaluate(pop.get("X"),
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/problem.py", line 124, in evaluate
    self.do(X, out, *args, **kwargs)
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/problem.py", line 305, in do
    ret = self.func_eval(self.func_elementwise_eval, self, X, out, *args, **kwargs)
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/problem.py", line 264, in looped_eval
    return [func_elementwise_eval(problem, x, dict(out), args, kwargs) for x in X]
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/problem.py", line 264, in <listcomp>
    return [func_elementwise_eval(problem, x, dict(out), args, kwargs) for x in X]
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/problem.py", line 257, in elementwise_eval
    problem._evaluate(x, out, *args, **kwargs)
  File "/app/./resources/libraries/ei_testing/optimization.py", line 207, in _evaluate
    detections = detector.process_results(self._raw_result)
  File "/app/./resources/libraries/ei_testing/streaming_stats.py", line 285, in process_results
    return list(results)
  File "/app/./resources/libraries/ei_testing/streaming_stats.py", line 276, in process_result
    is_new_command, label_index = self._recognizer.trigger(scores, result['start'])
  File "/app/./resources/libraries/ei_testing/streaming_stats.py", line 199, in trigger
    raise Exception(
Exception: Scores must sum to approximately 1 but summed to 1.01171875
Application exited with code 1
Job failed (see above)
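A side note for anyone debugging the same message: the deviations in both logs (1.01171875 = 1 + 3/256 and 0.98828125 = 1 - 3/256) are exact multiples of 1/256, which is consistent with scores coming from an int8-quantized model output. This is a minimal sketch of that effect, assuming a typical int8 softmax output with scale 1/256 and zero point -128; the numbers and the tolerance check are illustrative, not the actual Edge Impulse code:

```python
import numpy as np

# Dequantization for an int8 output: score = scale * (q - zero_point).
# With a softmax output scale of 1/256, every score is a multiple of
# 1/256, so per-class rounding can leave the total a few steps off 1.0.
scale = 1.0 / 256.0
zero_point = -128

# Hypothetical int8 outputs whose dequantized values sum to 259/256,
# reproducing the 1.01171875 seen in the first log.
q = np.array([3, -64, -88, -104], dtype=np.int8)
scores = scale * (q.astype(np.int32) - zero_point)
total = scores.sum()
print(total)  # 1.01171875

# A tolerance of one quantization step per class accepts such outputs
# instead of raising, unlike a strict equality check against 1.0.
assert abs(total - 1.0) <= len(scores) * scale
```

In other words, a strict "scores must sum to 1" check will intermittently trip on perfectly valid quantized models, which would explain why the job fails only during post-processing search.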

Project ID: 145253

Context/Use case: Speech recognition

Thank you in advance,
Valerio

Thanks for the report, and sorry for the error. We're investigating!

Warmly,
Dan

@dansitu
Hi,
I'm having exactly the same issue, also with speech recognition.
Project: 151388
Speech Recognition on ESP with MFCC DSP
Also, EOS does not work and stops with an error. It makes me think something is wrong with my project?

Is there any timeline for resolution?

Error message:

Creating job... OK (ID: 4557424)

Generating features for MFCC...
Not generating new features: features already generated and no options or files have changed.

No new features, skipping...

Generating features for MFCC OK

Running performance calibration on synthetic data...
Copying features from processing blocks...
Copying features from DSP block...
Copying features from DSP block OK
Copying features from processing blocks OK

Copying features from processing blocks...
Copying features from DSP block...
Scheduling job in cluster...
Container image pulled!
Job started
Impulse details:
Sample rate             16000 Hz
Window size             700 ms
Learn block latency     5 ms
DSP block latency       136 ms
Total latency           141 ms
Using minimum latency   175 ms
Match tolerance         500 ms

Target test sample length is 10 minutes, including 257 test data items.

Using 32 threads for inference.
Processing 3425 windows...
Progress: 16%
Progress: 32%
Progress: 48%
Progress: 65%
Progress: 81%
Progress: 97%
DSP took 0:00:16, inference took 0:00:01
Complete

Searching for optimal post-processing configurations.
Performing maximum 50 steps or 5 minutes (1 step minimum)...
Traceback (most recent call last):
  File "/home/testing.py", line 10, in <module>
    ei_testing.testing.run_application_testing(dir_path, options)
  File "/app/./resources/libraries/ei_testing/testing.py", line 117, in run_application_testing
    X, F, best_idx, problem = optimization.optimize_kws_params(
  File "/app/./resources/libraries/ei_testing/optimization.py", line 141, in optimize_kws_params
    raise err
  File "/app/./resources/libraries/ei_testing/optimization.py", line 128, in optimize_kws_params
    algorithm.next()
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/algorithm.py", line 233, in next
    self.evaluator.eval(self.problem, infills, algorithm=self)
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/evaluator.py", line 95, in eval
    self._eval(problem, pop[I], evaluate_values_of=evaluate_values_of, **kwargs)
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/evaluator.py", line 112, in _eval
    out = problem.evaluate(pop.get("X"),
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/problem.py", line 124, in evaluate
    self.do(X, out, *args, **kwargs)
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/problem.py", line 305, in do
    ret = self.func_eval(self.func_elementwise_eval, self, X, out, *args, **kwargs)
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/problem.py", line 264, in looped_eval
    return [func_elementwise_eval(problem, x, dict(out), args, kwargs) for x in X]
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/problem.py", line 264, in <listcomp>
    return [func_elementwise_eval(problem, x, dict(out), args, kwargs) for x in X]
  File "/app/perfcal/.venv/lib/python3.8/site-packages/pymoo/core/problem.py", line 257, in elementwise_eval
    problem._evaluate(x, out, *args, **kwargs)
  File "/app/./resources/libraries/ei_testing/optimization.py", line 207, in _evaluate
    detections = detector.process_results(self._raw_result)
  File "/app/./resources/libraries/ei_testing/streaming_stats.py", line 285, in process_results
    return list(results)
  File "/app/./resources/libraries/ei_testing/streaming_stats.py", line 276, in process_result
    is_new_command, label_index = self._recognizer.trigger(scores, result['start'])
  File "/app/./resources/libraries/ei_testing/streaming_stats.py", line 199, in trigger
    raise Exception(
Exception: Scores must sum to approximately 1 but summed to 0.98828125
Application exited with code 1

Thank you for the report! This is a bug in the Performance Calibration feature and we’ll have a fix early next week.

@dansitu : You are really super fast :slight_smile:
Have a good weekend,
Christian

Thank you, have a great weekend too! :slight_smile: