Could not deploy the target for nordic-nrf52840-dk

Question/Issue:
I’m using the Edge Impulse Python SDK in a Jupyter notebook.
Deploying to Arduino succeeds, but deploying to the nRF52840-DK and nRF5340-DK fails.
Here is the Python code that deploys the model:

import edgeimpulse as ei  # Edge Impulse Python SDK; `model` is the trained Keras model from an earlier cell

try:
    ei.model.deploy(model=model,
                    model_output_type=ei.model.output_type.Classification(),
                    deploy_target='nordic-nrf52840-dk',
                    output_directory=".")
except Exception as e:
    print(f"Could not deploy: {e}")

The following is the error output:

WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op while saving (showing 3 of 3). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: C:\Users\ADMINI~1\AppData\Local\Temp\tmpj_cp9jyc\saved_model\assets
INFO:tensorflow:Assets written to: C:\Users\ADMINI~1\AppData\Local\Temp\tmpj_cp9jyc\saved_model\assets
Could not deploy: deploy_target: [nordic-nrf52840-dk] not in ['zip', 'arduino', 'cubemx', 'wasm', 'wasm-browser-simd', 'tensorrt', 'ethos', 'synaptics-tensaiflow-lib', 'meta-tf', 'memryx-dfp', 'tidl-lib-am62a', 'tidl-lib-am68a', 'slcc', 'arduino-nano-33-ble-sense', 'arduino-nicla-vision', 'espressif-esp32', 'raspberry-pi-rp2040', 'silabs-xg24', 'infineon-cy8ckit-062s2', 'infineon-cy8ckit-062-ble', 'nordic-thingy53', 'nordic-thingy53-nrf7002eb', 'sony-spresense-commonsense', 'renesas-ck-ra6m5', 'brickml', 'brickml-module', 'runner-linux-aarch64', 'runner-linux-armv7', 'runner-linux-x86_64', 'runner-linux-aarch64-akd1000', 'runner-linux-x86_64-akd1000', 'runner-mac-x86_64', 'runner-linux-aarch64-tda4vm', 'runner-linux-aarch64-am62a', 'particle', 'iar', 'runner-linux-aarch64-am68a']

It seems the board is not supported. However, when I check the available devices with this code:

ei.model.list_deployment_targets()

the output lists many devices, including the nRF52840-DK and nRF5340-DK.
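
For instance, I checked it with a quick filter like this (a minimal sketch; it assumes the targets come back as plain strings, or at least as objects whose string form contains the target name):

# Sketch: filter the returned deployment targets for Nordic entries.
# Assumes list_deployment_targets() returns target names as strings
# (or objects whose string form contains the name).
targets = ei.model.list_deployment_targets()
nordic_targets = [t for t in targets if "nordic" in str(t).lower() or "nrf" in str(t).lower()]
print(nordic_targets)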

BTW, the board is PCA10056/2.0.2/2021.41

Hi @microa,

Can you run the profiling as shown below to estimate the RAM/ROM usage?

profile = ei.model.profile(model=model, device='nordic-nrf52840-dk')
print(profile.summary())

Aurelien

Hi @aurel ,

Thanks for your help! Here is the output:

WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op while saving (showing 3 of 3). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: C:\Users\ADMINI~1\AppData\Local\Temp\tmps932isbn\saved_model\assets
INFO:tensorflow:Assets written to: C:\Users\ADMINI~1\AppData\Local\Temp\tmps932isbn\saved_model\assets
Target results for float32:
{
    "device": "nordic-nrf52840-dk",
    "tfliteFileSizeBytes": 230912,
    "isSupportedOnMcu": true,
    "memory": {
        "tflite": {
            "ram": 109256,
            "rom": 278400,
            "arenaSize": 108928
        },
        "eon": {
            "ram": 88624,
            "rom": 248768
        }
    },
    "timePerInferenceMs": 1437
}

Performance on device types:

{
    "variant": "float32",
    "lowEndMcu": {
        "description": "Estimate for a Cortex-M0+ or similar, running at 40MHz",
        "timePerInferenceMs": 3801,
        "memory": {
            "tflite": {
                "ram": 109121,
                "rom": 266608
            },
            "eon": {
                "ram": 88512,
                "rom": 245488
            }
        },
        "supported": true
    },
    "highEndMcu": {
        "description": "Estimate for a Cortex-M7 or other high-end MCU/DSP, running at 240MHz",
        "timePerInferenceMs": 52,
        "memory": {
            "tflite": {
                "ram": 109256,
                "rom": 278400
            },
            "eon": {
                "ram": 88624,
                "rom": 248768
            }
        },
        "supported": true
    },
    "highEndMcuPlusAccelerator": {
        "description": "Most accelerators only accelerate quantized models.",
        "timePerInferenceMs": 52,
        "memory": {
            "tflite": {
                "ram": 109256,
                "rom": 278400
            },
            "eon": {
                "ram": 88624,
                "rom": 248768
            }
        },
        "supported": true
    },
    "mpu": {
        "description": "Estimate for a Cortex-A72, x86 or other mid-range microprocessor running at 1.5GHz",
        "timePerInferenceMs": 1,
        "rom": 230912.0,
        "supported": true
    },
    "gpuOrMpuAccelerator": {
        "description": "Estimate for a GPU or high-end neural network accelerator",
        "timePerInferenceMs": 1,
        "rom": 230912.0,
        "supported": true
    }
}
None

@microa,

What is the input of your model? Our firmware for the nRF52840 DK supports only audio and accelerometer data, so only those input types are supported. You can check the details on how to set the Input_type here
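
For accelerometer-style time-series data, the call would look roughly like this (a sketch only; the class and parameter names are taken from the Input_type docs linked above and should be double-checked there):

# Sketch: declaring the model input type when deploying.
# TimeSeriesInput and its frequency_hz argument are assumptions based on
# the Input_type documentation; verify the exact names there.
ei.model.deploy(model=model,
                model_output_type=ei.model.output_type.Classification(),
                model_input_type=ei.model.input_type.TimeSeriesInput(frequency_hz=100),
                deploy_target='nordic-nrf52840-dk',
                output_directory=".")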

If you have some other type of input, select the 'zip' target instead to download the C++ library and build the application locally.
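
For example, using the same call you posted but with the generic target (a minimal sketch):

# Export the generic C++ library as a .zip instead of a board-specific build.
try:
    ei.model.deploy(model=model,
                    model_output_type=ei.model.output_type.Classification(),
                    deploy_target='zip',
                    output_directory=".")
except Exception as e:
    print(f"Could not deploy: {e}")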

Aurelien

@aurel
My input is sequence data. Here is a simple test model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, AveragePooling1D, Flatten, Dense, BatchNormalization, Dropout, MaxPooling1D

model = Sequential()

model.add(Conv1D(filters=128, kernel_size=10, strides=3, activation='relu', input_shape=(260, 1)))
model.add(MaxPooling1D(pool_size=2, strides=2))

model.add(Conv1D(filters=32, kernel_size=3, strides=1, activation='relu'))
model.add(MaxPooling1D(pool_size=2, strides=1))

model.add(Conv1D(filters=32, kernel_size=7, strides=2, activation='relu'))
model.add(Flatten())

model.add(Dense(64, activation='relu'))
model.add(Dense(5, activation='softmax')) 

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

I have tried the 'zip' target, but I am not very familiar with how to integrate the exported C++ library into the nRF52840-DK's template code, so I was hoping Edge Impulse could generate the deployment code directly.