Akida FOMO model ending training early?

My questions are about creating a model using the BrainChip Akida model format. Firstly, the only option with Akida appears to be FOMO, which returns a centroid rather than a bounding box. Is there another option for Akida?

Secondly, under the OBJECT DETECTION interface where you train the model, there are three performance outputs after training: 1) Quantized (Akida), 2) Unoptimized (float32), and 3) Quantized (int8) F1 scores. These give me 1) 17.4%, 2) 92.3%, and 3) 92.1% respectively. As can be seen, the Quantized (Akida) F1 score is remarkably lower than the other two. Does this have anything to do with the quantization-aware training stopping early after 11 epochs? My main training runs for 300 epochs. I am trying to compare my Akida output against the YOLOv5 community block, but given the Akida score is so low, I'm not sure what the problem is; it should be nearly the same value. I have included an extract below showing how the training stops short. If this is the issue, is there a way to prevent it so that the quantization training also runs for 300 cycles?

Thanks
Allan

Extract from training the Akida model (object detection, using Akida FOMO):
Running quantization-aware training…

Epoch   Train Loss   Val. Loss   Val. Precision   Val. Recall   Val. F1
00      0.72353      0.65686     0.00             0.00          0.00
01      0.66386      0.66752     0.00             0.00          0.00
02      0.67104      0.66791     0.00             0.00          0.00
03      0.66963      0.66371     0.00             0.00          0.00
04      0.66489      0.65825     0.00             0.00          0.00
05      0.65937      0.65278     0.00             0.00          0.00
06      0.65402      0.64775     0.00             0.00          0.00
07      0.64915      0.64327     0.00             0.00          0.00
08      0.64483      0.63932     0.00             0.00          0.00
09      0.64104      0.63586     0.00             0.00          0.00
10      0.63771      0.63282     0.00             0.00          0.00
Restoring model weights from the end of the best epoch: 1.
Epoch 11: early stopping
Running quantization-aware training OK

Finished training

Converting TensorFlow Lite float32 model…
Converting TensorFlow Lite int8 quantized model…
Converting to Akida model…

Converting to Akida model OK

Model Summary

Input shape      Output shape   Sequences   Layers   NPs
[512, 512, 3]    [64, 64, 3]    3           8        96

Layer (type)              Output shape      Kernel shape       NPs
===================== SW/conv_0 (Software) =====================
conv_0 (InputConv.)       [256, 256, 16]    (3, 3, 3, 16)      N/A
======= HW/conv_1-conv_3 (Hardware) - size: 890872 bytes =======
conv_1 (Conv.)            [256, 256, 32]    (3, 3, 16, 32)     15
conv_2 (Conv.)            [128, 128, 64]    (3, 3, 32, 64)     32
conv_3 (Conv.)            [128, 128, 64]    (3, 3, 64, 64)     16
==== HW/separable_4-conv2d_1 (Hardware) - size: 353984 bytes ===
separable_4 (Sep.Conv.)   [64, 64, 128]     (3, 3, 64, 1)      16
                                            (1, 1, 64, 128)
separable_5 (Sep.Conv.)   [64, 64, 128]     (3, 3, 128, 1)     8
                                            (1, 1, 128, 128)
conv2d (Conv.)            [64, 64, 32]      (1, 1, 128, 32)    7
conv2d_1 (Conv.)          [64, 64, 3]       (1, 1, 32, 3)      2

Saving Akida model…
Saving Akida model OK
Loading data for profiling…
Loading data for profiling OK

Hi @Bourko1

A. FOMO Only
Yes, FOMO is currently the only object-detection option for Akida. In our BrainChip Akida docs you can see some classification examples here:

B. Accuracy / Performance
Quantization can affect accuracy. You can read more on increasing performance here: Increasing model performance - Edge Impulse Documentation
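Part of the gap is expected from bit width alone: the Akida conversion uses lower bit-widths than the int8 TFLite model (4-bit weights on Akida 1.0, as I understand it). Here is a rough illustration only, using random stand-in weights rather than the actual Akida toolchain:

import numpy as np

def fake_quantize(w, bits):
    # Symmetric uniform quantization: map to signed integers of the
    # given width, then dequantize back to float for comparison.
    qmax = 2 ** (bits - 1) - 1          # 127 for int8, 7 for int4
    scale = np.abs(w).max() / qmax      # per-tensor scale factor
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(3, 3, 64, 64)).astype(np.float32)

for bits in (8, 4):
    err = np.abs(fake_quantize(w, bits) - w).mean()
    print(f"{bits}-bit mean abs error: {err:.6f}")

The 4-bit error comes out roughly 16x the 8-bit error, which is why quantization-aware training (and letting it run long enough, see below) matters so much for the Akida conversion.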

I am not sure why retraining would stop at 11 epochs if 300 was configured. Let me check with someone on the tech team who works on that platform.

Best

Eoin

No worries, thanks for the response Eoin.

Cheers

Hi

FOMO for BrainChip has the following early-stopping callback in the QAT portion of the code, which explains the early stop: your validation F1 never rises above 0.00, so the monitored metric shows no improvement after the best epoch (epoch 1), and training halts once the patience of 10 epochs is exhausted, at epoch 11. Please remove it if we think it is misbehaving.

import tensorflow as tf

# stopping_metric is defined elsewhere in the training script: the
# validation metric being maximized (likely the F1 score here).
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor=stopping_metric,
    mode="max",              # stop when the metric stops increasing
    verbose=1,
    min_delta=0,
    patience=10,             # halt after 10 epochs with no improvement
    restore_best_weights=True,
)
The number of QAT cycles is set to 30 by default, but it is changeable in Expert Mode; a sketch of the kind of tweaks involved is below.
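For example, here is a minimal sketch of the changes you could make there. Everything beyond early_stopping and stopping_metric from the snippet above (qat_epochs and the fit() arguments) is a hypothetical placeholder; the actual Expert Mode script in your project will differ:

# Option 1: keep early stopping but tolerate longer plateaus.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor=stopping_metric,
    mode="max",
    verbose=1,
    min_delta=0,
    patience=50,             # was 10
    restore_best_weights=True,
)

# Option 2: drop the callback so QAT runs for every configured epoch.
qat_epochs = 300             # was 30
model.fit(
    train_dataset,
    validation_data=validation_dataset,
    epochs=qat_epochs,
    callbacks=[],            # no early_stopping here
)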

YOLOv5 is incompatible with BrainChip’s AKD1000; the community block will not run on the device.

Can you please let me know your use case for FOMO vs another object detector with bounding boxes?