Question/Issue:
I've searched this forum for similar issues but haven't found a working solution. My training job completes, but it then fails at the "Profiling akida model" step with a mapping warning ("Weight bits should be between 1 and 4 inclusive") and a RuntimeError ("The maximum input event value is 15, got 71"). The full job log follows:
Creating job... OK (ID: 7900684)
Scheduling job in cluster...
Job started
Copying features from processing blocks...
Copying features from DSP block...
Copying features from DSP block OK
Copying features from processing blocks OK
Scheduling job in cluster...
Container image pulled!
Job started
Splitting data into training and validation sets...
Splitting data into training and validation sets OK
Training model...
Training on 30 inputs, validating on 8 inputs
Epoch 1/30
1/1 - 1s - loss: 1.2333 - accuracy: 0.5667 - val_loss: 1.5324 - val_accuracy: 0.2500 - 1s/epoch - 1s/step
Epoch 2/30
1/1 - 1s - loss: 1.2111 - accuracy: 0.5667 - val_loss: 1.5035 - val_accuracy: 0.2500 - 718ms/epoch - 718ms/step
Epoch 3/30
1/1 - 1s - loss: 1.1895 - accuracy: 0.5667 - val_loss: 1.4749 - val_accuracy: 0.2500 - 706ms/epoch - 706ms/step
Epoch 4/30
1/1 - 1s - loss: 1.1689 - accuracy: 0.5667 - val_loss: 1.4467 - val_accuracy: 0.2500 - 880ms/epoch - 880ms/step
Epoch 5/30
1/1 - 1s - loss: 1.1491 - accuracy: 0.5667 - val_loss: 1.4190 - val_accuracy: 0.2500 - 699ms/epoch - 699ms/step
Epoch 6/30
1/1 - 1s - loss: 1.1297 - accuracy: 0.5667 - val_loss: 1.3918 - val_accuracy: 0.2500 - 698ms/epoch - 698ms/step
Epoch 7/30
1/1 - 1s - loss: 1.1106 - accuracy: 0.5667 - val_loss: 1.3650 - val_accuracy: 0.2500 - 697ms/epoch - 697ms/step
Epoch 8/30
1/1 - 1s - loss: 1.0918 - accuracy: 0.5667 - val_loss: 1.3385 - val_accuracy: 0.2500 - 703ms/epoch - 703ms/step
Epoch 9/30
1/1 - 1s - loss: 1.0731 - accuracy: 0.5667 - val_loss: 1.3120 - val_accuracy: 0.2500 - 929ms/epoch - 929ms/step
Epoch 10/30
1/1 - 1s - loss: 1.0543 - accuracy: 0.5667 - val_loss: 1.2860 - val_accuracy: 0.2500 - 728ms/epoch - 728ms/step
Epoch 11/30
1/1 - 1s - loss: 1.0355 - accuracy: 0.5667 - val_loss: 1.2606 - val_accuracy: 0.2500 - 725ms/epoch - 725ms/step
Epoch 12/30
1/1 - 1s - loss: 1.0169 - accuracy: 0.5667 - val_loss: 1.2357 - val_accuracy: 0.2500 - 732ms/epoch - 732ms/step
Epoch 13/30
1/1 - 1s - loss: 0.9992 - accuracy: 0.5667 - val_loss: 1.2114 - val_accuracy: 0.2500 - 708ms/epoch - 708ms/step
Epoch 14/30
1/1 - 1s - loss: 0.9817 - accuracy: 0.5667 - val_loss: 1.1877 - val_accuracy: 0.2500 - 704ms/epoch - 704ms/step
Epoch 15/30
1/1 - 1s - loss: 0.9644 - accuracy: 0.5667 - val_loss: 1.1647 - val_accuracy: 0.2500 - 928ms/epoch - 928ms/step
Epoch 16/30
1/1 - 1s - loss: 0.9475 - accuracy: 0.5667 - val_loss: 1.1423 - val_accuracy: 0.2500 - 696ms/epoch - 696ms/step
Epoch 17/30
1/1 - 1s - loss: 0.9306 - accuracy: 0.5667 - val_loss: 1.1203 - val_accuracy: 0.2500 - 692ms/epoch - 692ms/step
Epoch 18/30
1/1 - 1s - loss: 0.9141 - accuracy: 0.5667 - val_loss: 1.0987 - val_accuracy: 0.2500 - 703ms/epoch - 703ms/step
Epoch 19/30
1/1 - 1s - loss: 0.8985 - accuracy: 0.5667 - val_loss: 1.0776 - val_accuracy: 0.2500 - 697ms/epoch - 697ms/step
Epoch 20/30
1/1 - 1s - loss: 0.8833 - accuracy: 0.5667 - val_loss: 1.0567 - val_accuracy: 0.2500 - 693ms/epoch - 693ms/step
Epoch 21/30
1/1 - 1s - loss: 0.8683 - accuracy: 0.5667 - val_loss: 1.0363 - val_accuracy: 0.2500 - 893ms/epoch - 893ms/step
Epoch 22/30
1/1 - 1s - loss: 0.8532 - accuracy: 0.5667 - val_loss: 1.0170 - val_accuracy: 0.2500 - 684ms/epoch - 684ms/step
Epoch 23/30
1/1 - 1s - loss: 0.8383 - accuracy: 0.5667 - val_loss: 0.9988 - val_accuracy: 0.2500 - 689ms/epoch - 689ms/step
Epoch 24/30
1/1 - 1s - loss: 0.8232 - accuracy: 0.5667 - val_loss: 0.9814 - val_accuracy: 0.2500 - 697ms/epoch - 697ms/step
Epoch 25/30
1/1 - 1s - loss: 0.8083 - accuracy: 0.5667 - val_loss: 0.9654 - val_accuracy: 0.2500 - 693ms/epoch - 693ms/step
Epoch 26/30
1/1 - 1s - loss: 0.7938 - accuracy: 0.5667 - val_loss: 0.9516 - val_accuracy: 0.2500 - 701ms/epoch - 701ms/step
Epoch 27/30
1/1 - 1s - loss: 0.7796 - accuracy: 0.5667 - val_loss: 0.9369 - val_accuracy: 0.2500 - 905ms/epoch - 905ms/step
Epoch 28/30
1/1 - 1s - loss: 0.7661 - accuracy: 0.5667 - val_loss: 0.9226 - val_accuracy: 0.2500 - 717ms/epoch - 717ms/step
Epoch 29/30
1/1 - 1s - loss: 0.7523 - accuracy: 0.5667 - val_loss: 0.9085 - val_accuracy: 0.2500 - 736ms/epoch - 736ms/step
Epoch 30/30
1/1 - 1s - loss: 0.7389 - accuracy: 0.5667 - val_loss: 0.8941 - val_accuracy: 0.2500 - 727ms/epoch - 727ms/step
Performing post-training quantization...
Performing post-training quantization OK
Running quantization-aware training...
Epoch 1/30
1/1 - 1s - loss: 0.6641 - accuracy: 0.5667 - val_loss: 0.9103 - val_accuracy: 0.3750 - 1s/epoch - 1s/step
Epoch 2/30
1/1 - 0s - loss: 0.6640 - accuracy: 0.5667 - val_loss: 0.9100 - val_accuracy: 0.3750 - 42ms/epoch - 42ms/step
Epoch 3/30
1/1 - 0s - loss: 0.6656 - accuracy: 0.5667 - val_loss: 0.9096 - val_accuracy: 0.3750 - 22ms/epoch - 22ms/step
Epoch 4/30
1/1 - 2s - loss: 0.6592 - accuracy: 0.5667 - val_loss: 0.8454 - val_accuracy: 0.3750 - 2s/epoch - 2s/step
Epoch 5/30
1/1 - 1s - loss: 0.6694 - accuracy: 0.5667 - val_loss: 0.7280 - val_accuracy: 0.5000 - 1s/epoch - 1s/step
Epoch 6/30
1/1 - 1s - loss: 0.6876 - accuracy: 0.5000 - val_loss: 0.7274 - val_accuracy: 0.5000 - 1s/epoch - 1s/step
Epoch 7/30
1/1 - 2s - loss: 0.6670 - accuracy: 0.5000 - val_loss: 0.7268 - val_accuracy: 0.5000 - 2s/epoch - 2s/step
Epoch 8/30
1/1 - 1s - loss: 0.6705 - accuracy: 0.5000 - val_loss: 0.6815 - val_accuracy: 0.6250 - 1s/epoch - 1s/step
Epoch 9/30
1/1 - 1s - loss: 0.6636 - accuracy: 0.5000 - val_loss: 0.6734 - val_accuracy: 0.6250 - 1s/epoch - 1s/step
Epoch 10/30
1/1 - 0s - loss: 0.6246 - accuracy: 0.5333 - val_loss: 0.6769 - val_accuracy: 0.5000 - 21ms/epoch - 21ms/step
Epoch 11/30
1/1 - 0s - loss: 0.6303 - accuracy: 0.5333 - val_loss: 0.6763 - val_accuracy: 0.5000 - 26ms/epoch - 26ms/step
Epoch 12/30
1/1 - 0s - loss: 0.6294 - accuracy: 0.5333 - val_loss: 0.6756 - val_accuracy: 0.5000 - 62ms/epoch - 62ms/step
Epoch 13/30
1/1 - 0s - loss: 0.6387 - accuracy: 0.5000 - val_loss: 0.6750 - val_accuracy: 0.5000 - 32ms/epoch - 32ms/step
Epoch 14/30
1/1 - 0s - loss: 0.6283 - accuracy: 0.5000 - val_loss: 0.6743 - val_accuracy: 0.5000 - 20ms/epoch - 20ms/step
Epoch 15/30
1/1 - 0s - loss: 0.6172 - accuracy: 0.5000 - val_loss: 0.6737 - val_accuracy: 0.5000 - 70ms/epoch - 70ms/step
Epoch 16/30
1/1 - 2s - loss: 0.6146 - accuracy: 0.5000 - val_loss: 0.6602 - val_accuracy: 0.5000 - 2s/epoch - 2s/step
Epoch 17/30
1/1 - 1s - loss: 0.5834 - accuracy: 0.5000 - val_loss: 0.5703 - val_accuracy: 0.7500 - 1s/epoch - 1s/step
Epoch 18/30
1/1 - 0s - loss: 0.5507 - accuracy: 0.6000 - val_loss: 0.5980 - val_accuracy: 0.7500 - 39ms/epoch - 39ms/step
Epoch 19/30
1/1 - 0s - loss: 0.5409 - accuracy: 0.6000 - val_loss: 0.5936 - val_accuracy: 0.7500 - 67ms/epoch - 67ms/step
Epoch 20/30
1/1 - 0s - loss: 0.5093 - accuracy: 0.6333 - val_loss: 0.5889 - val_accuracy: 0.7500 - 20ms/epoch - 20ms/step
Epoch 21/30
1/1 - 0s - loss: 0.4945 - accuracy: 0.6333 - val_loss: 0.5885 - val_accuracy: 0.7500 - 62ms/epoch - 62ms/step
Epoch 22/30
1/1 - 0s - loss: 0.4837 - accuracy: 0.6667 - val_loss: 0.5870 - val_accuracy: 0.7500 - 22ms/epoch - 22ms/step
Epoch 23/30
1/1 - 1s - loss: 0.4803 - accuracy: 0.6667 - val_loss: 0.5071 - val_accuracy: 1.0000 - 1s/epoch - 1s/step
Epoch 24/30
1/1 - 2s - loss: 0.4643 - accuracy: 0.7333 - val_loss: 0.5067 - val_accuracy: 1.0000 - 2s/epoch - 2s/step
Epoch 25/30
1/1 - 0s - loss: 0.4567 - accuracy: 0.8333 - val_loss: 0.5122 - val_accuracy: 1.0000 - 21ms/epoch - 21ms/step
Epoch 26/30
1/1 - 0s - loss: 0.4378 - accuracy: 0.8000 - val_loss: 0.5118 - val_accuracy: 1.0000 - 59ms/epoch - 59ms/step
Epoch 27/30
1/1 - 0s - loss: 0.4322 - accuracy: 0.8000 - val_loss: 0.5114 - val_accuracy: 1.0000 - 61ms/epoch - 61ms/step
Epoch 28/30
1/1 - 0s - loss: 0.4300 - accuracy: 0.8000 - val_loss: 0.5110 - val_accuracy: 1.0000 - 26ms/epoch - 26ms/step
Epoch 29/30
1/1 - 0s - loss: 0.4266 - accuracy: 0.8000 - val_loss: 0.5107 - val_accuracy: 1.0000 - 68ms/epoch - 68ms/step
Epoch 30/30
1/1 - 1s - loss: 0.4152 - accuracy: 0.8333 - val_loss: 0.5036 - val_accuracy: 1.0000 - 1s/epoch - 1s/step
Running quantization-aware training OK
Finished training
Saving best performing model...
Saving best performing model OK
Converting TensorFlow Lite float32 model...
Converting TensorFlow Lite int8 quantized model...
Converting to Akida model...
Converting to Akida model OK
Model Summary
______________________________________________
Input shape Output shape Sequences Layers
==============================================
[1, 1, 39] [1, 1, 2] 2 4
______________________________________________
SW/dense (Software)
______________________________________________
Layer (type) Output shape Kernel shape
==============================================
dense (Fully.) [1, 1, 20] (1, 1, 39, 20)
______________________________________________
HW/dense_1-y_pred (Hardware) - size: 1336 bytes
_____________________________________________________
Layer (type) Output shape Kernel shape NPs
=====================================================
dense_1 (Fully.) [1, 1, 10] (1, 1, 20, 10) 1
_____________________________________________________
y_pred (Fully.) [1, 1, 2] (1, 1, 10, 2) 1
_____________________________________________________
Saving Akida model...
Saving Akida model OK...
Loading data for profiling...
Loading data for profiling OK
Creating embeddings...
[ 0/38] Creating embeddings...
[38/38] Creating embeddings...
Creating embeddings OK (took 1 second)
Calculating performance metrics...
Calculating inferencing time...
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Calculating inferencing time OK
Profiling float32 model...
Profiling float32 model (TensorFlow Lite Micro)...
Profiling float32 model (EON)...
Attached to job 7900684...
Attached to job 7900684...
Profiling int8 model...
Profiling int8 model (TensorFlow Lite Micro)...
Profiling int8 model (EON)...
Attached to job 7900684...
Attached to job 7900684...
Profiling akida model...
WARNING: Requested model can't be fully mapped to hardware. Reason:
WARNING: Error when mapping layer 'dense': Weight bits should be between 1 and 4 inclusive.
WARNING: Reported program size, number of NPs and nodes may not be accurate!
Traceback (most recent call last):
  File "/home/profile.py", line 361, in <module>
    main_function()
  File "/home/profile.py", line 339, in main_function
    metadata = ei_tensorflow.profiling.get_model_metadata(model, validation_dataset, Y_test, samples_dataset, Y_samples, has_samples,
  File "/app/./resources/libraries/ei_tensorflow/profiling.py", line 815, in get_model_metadata
    akida_perf = profile_model(model_type, None, None, validation_dataset, Y_test, X_samples,
  File "/app/./resources/libraries/ei_tensorflow/profiling.py", line 292, in profile_model
    prediction, prediction_train, prediction_test = make_predictions(mode, model, validation_dataset, Y_test,
  File "/app/./resources/libraries/ei_tensorflow/profiling.py", line 251, in make_predictions
    return ei_tensorflow.brainchip.model.make_predictions(mode, akida_model_path, validation_dataset,
  File "/app/./resources/libraries/ei_tensorflow/brainchip/model.py", line 59, in make_predictions
    prediction = predict(model_path, validation_dataset, len(Y_test))
  File "/app/./resources/libraries/ei_tensorflow/brainchip/model.py", line 101, in predict
    output = model.predict(item)
RuntimeError: The maximum input event value is 15, got 71
Application exited with code 1
Job failed (see above)
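The mapping warning above says the Akida hardware only accepts weight bit-widths from 1 to 4, so a layer quantized to wider (e.g. int8) weights cannot be fully mapped. A minimal sketch of that constraint check (pure Python; the helper name and error text are illustrative, mirroring the mapper's message — this is not the actual Akida or Edge Impulse API):

```python
def check_akida_weight_bits(bits: int, layer: str) -> None:
    """Reject bit-widths outside Akida's supported 1-4 bit weight range.

    Hypothetical helper that mirrors the mapper's warning text.
    """
    if not 1 <= bits <= 4:
        raise ValueError(
            f"Error when mapping layer '{layer}': "
            f"Weight bits should be between 1 and 4 inclusive (got {bits})."
        )

check_akida_weight_bits(4, "dense")  # 4-bit weights are accepted
try:
    check_akida_weight_bits(8, "dense")  # int8 weights trigger the warning
except ValueError as e:
    print(e)
```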
Project ID: 210079
Context/Use case:
Analyze Accelerometer Data of MMA7660 on RP2040 with Edge Impulse
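The final RuntimeError suggests the Akida runtime expects 4-bit input events, i.e. values in [0, 15], while one of the DSP features reaches 71. A minimal NumPy sketch of linearly rescaling non-negative features into that range (the feature values and scaling scheme are illustrative only — not how Edge Impulse actually preprocesses Akida inputs):

```python
import numpy as np

MAX_EVENT = 15  # 4-bit input events: the runtime rejects values above 15

def rescale_to_events(x: np.ndarray) -> np.ndarray:
    """Linearly map non-negative features into the 0..15 event range."""
    peak = x.max()
    if peak <= MAX_EVENT:
        return np.round(x).astype(np.uint8)  # already within range
    return np.round(x / peak * MAX_EVENT).astype(np.uint8)

# Hypothetical feature vector; 71 mirrors the offending value in the log.
features = np.array([0.0, 12.5, 71.0, 33.2])
events = rescale_to_events(features)
assert events.max() <= MAX_EVENT
```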