Different results with FOMO-AD

Hi

I am getting different results when running FOMO-AD on the same image: online testing on the website versus offline on a Raspberry Pi 3.

When I run model testing or live classification on the Edge Impulse website, the result is correct: the problem areas get bounding boxes with high scores, and the other regions, like the background, get low scores. When I download the .eim file and run it in a Python script, I get different results than on the website: the algorithm scores both the problem parts and the background with high scores.

Another thing is that the result from the runner.classify method shows anomaly = 0.0, but visual_anomaly_grid has a lot of high scores. The size of the grid shown in the runner.classify result is 8x8; should it be 16x16?
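
For reference, here is roughly how I am calling the model in my script (a minimal sketch; the model and image paths are placeholders for my actual files):

import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = '/home/pi/model.eim'   # placeholder path to the downloaded .eim
IMAGE_PATH = 'test.jpg'             # placeholder path to a test image

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()  # loads the model and returns its metadata
    img = cv2.imread(IMAGE_PATH)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # the SDK expects RGB

    # Resize the image and extract features the way the SDK expects
    features, cropped = runner.get_features_from_image(img)
    result = runner.classify(features)
    print(result['result'])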

When I run the old block with GMM, it works well.

Does anyone have the same issue?

Project ID: 531418

Hello @Adilson.Salmazo,

Let me try to reproduce your issue and come back to you.

Best,

Louis


Hello @Adilson.Salmazo,

I tried to run your model using the Python SDK on my MacBook rather than on a Raspberry Pi; I can try on a Pi later if needed.

Here are the results that I obtain for your sample testing/anomaly.jpg.53kg0k93.ingestion-74f769ff6b-ljk4b.jpg.59ld26rd.ingestion-6fcc6c867d-lrw7b.jpg:

{
    "result": {
        "anomaly": 0.0,
        "visual_anomaly_grid": [
            {
                "height": 8,
                "label": "anomaly",
                "value": 9.590187072753906,
                "width": 8,
                "x": 32,
                "y": 0
            },
            {
                "height": 8,
                "label": "anomaly",
                "value": 10.44467830657959,
                "width": 8,
                "x": 32,
                "y": 8
            },
            {
                "height": 8,
                "label": "anomaly",
                "value": 9.662215232849121,
                "width": 8,
                "x": 128,
                "y": 8
            },
            ... // Other cells of the grid
        ],
        "visual_anomaly_max": 44.683528900146484,
        "visual_anomaly_mean": 19.170963287353516
    },
    "timing": {
        "anomaly": 1801,
        "classification": 74,
        "dsp": 0,
        "json": 2,
        "stdin": 6
    }
}

A few comments on that:

  • Anomaly: the anomaly key is used for non-visual anomaly blocks (K-Means or GMM). It is confusing, I agree; I’ll check with the team how we can add clarity on that. As you have not set one of those blocks in your project, it will output 0.
    In your case, I suspect you want to look at visual_anomaly_max or visual_anomaly_mean to identify your anomalies (see the post-processing sketch after this list).
  • Height and Width: these are the pixel dimensions of each grid cell, which are fixed at 8x8; they are not the grid dimensions. By default, the grid size is your input size divided by 8. In your case, 288 / 8 = 36, so you get a 36x36 grid of cells (8x8 pixels each).
  • Difference between Studio and on-device inference: we have identified the issue, and it comes from a difference between the resize implementation in the Python Inferencing SDK and the method we use in the Studio. Although the cell scores don’t exactly match, they are pretty similar.
    I am sending you a private message (for confidentiality) with an image from your test dataset that shows the differences between the Linux Python Inferencing SDK and the Studio.
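
To make the grid scores actionable, here is a minimal sketch of how you could post-process the classify result in Python: it derives the grid dimensions from the model input size, flags cells above a threshold, and makes an image-level decision from visual_anomaly_max. The THRESHOLD value is a placeholder you would tune on your own data:

# 'result' is runner.classify(features)['result'] from the Python SDK
CELL_SIZE = 8        # each grid cell covers 8x8 input pixels
INPUT_SIZE = 288     # model input width/height
THRESHOLD = 20.0     # placeholder; tune on normal vs. anomalous samples

cols = rows = INPUT_SIZE // CELL_SIZE   # 288 / 8 = 36 cells per side

# Cells whose score exceeds the threshold are candidate anomaly regions
for cell in result['visual_anomaly_grid']:
    if cell['value'] > THRESHOLD:
        print(f"anomaly at x={cell['x']}, y={cell['y']}, score={cell['value']:.1f}")

# Image-level decision: use the max (or mean) cell score, not the 'anomaly' key
is_anomalous = result['visual_anomaly_max'] > THRESHOLD
print(f"grid is {cols}x{rows} cells; image anomalous: {is_anomalous}")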

We are working on reviewing the resizing method to avoid this difference. I don’t have an ETA yet; we’ll let you know when it’s ready.

I hope this answers your question. Let me know if you need more information.

Best,

Louis

And for the resize issue, one solution would be to import your data already resized, as PNG.
That way, your model will learn from data that is processed the same way during your on-device inference.
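
If you want to resize locally before uploading, a sketch like the one below could work. Note the assumption: it uses OpenCV’s default bilinear interpolation, which may not exactly match the SDK; the transformation block linked below implements the exact method. The folder paths and TARGET_SIZE are placeholders:

import os
import cv2

SRC_DIR = 'dataset/raw'        # placeholder: folder with your original images
DST_DIR = 'dataset/resized'    # placeholder: output folder for resized PNGs
TARGET_SIZE = (288, 288)       # placeholder: your impulse input size

os.makedirs(DST_DIR, exist_ok=True)
for name in os.listdir(SRC_DIR):
    img = cv2.imread(os.path.join(SRC_DIR, name))
    if img is None:
        continue  # skip files OpenCV cannot read
    resized = cv2.resize(img, TARGET_SIZE)  # assumption: bilinear, may differ from the SDK
    out_name = os.path.splitext(name)[0] + '.png'
    cv2.imwrite(os.path.join(DST_DIR, out_name), resized)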

Here is a transformation block my colleague @jimbruges created to match the resize method of the Linux Python Inferencing SDK: GitHub - edgeimpulse/image-resize-transformation-block: This transformation block resizes all images in an Edge Impulse project using the same methods as in edge_impulse_linux.

Best,

Louis