Determine if Quantized (int8) or Unoptimized (float32) Model

Given the following folders from a deployed EI model, is it possible to interrogate one or more files to determine whether this model is a quantized (int8) model or an unoptimized (float32) model?

|-- edge_impulse/
|   |-- edge-impulse-sdk/
|   |-- model-parameters/
|   |-- tflite-model/
Hello @MMarcial,

You can check the model-parameters/model_metadata.h file; it should contain all the metadata of your impulse.

Best,

Louis

Ah indeed, thanks Louis!

Checking model-parameters/model_metadata.h, the value of

  • #define EI_CLASSIFIER_TFLITE_INPUT_DATATYPE

will be either

  • EI_CLASSIFIER_DATATYPE_INT8 or
  • EI_CLASSIFIER_DATATYPE_FLOAT32