I followed the “Continuous motion recognition” tutorial, and after deploying back to the device, running on the board fails with ERR: -1002 (EIDSP_OUT_OF_MEM). The full log file is here.
Hi @yilintung, thanks for the report. Let me investigate where we’re leaking memory! Was this with the uTensor or TFLite inferencing engine?
Hi @janjongboom, I’m following the “Continuous motion recognition” tutorial with:
- Inferencing engine: uTensor
- Output format: Binary (DISCO-L475VG-IOT01A)
@yilintung, OK, thanks a lot for the update. We’ll check it as soon as possible. I’m pretty sure the TFLite export does not have this problem, so perhaps try that for now.
@janjongboom I switched the inferencing engine to TensorFlow Lite, but other errors occur:
Failed to allocate TFLite arena (16384 bytes)
Failed to run impulse (-6)
I suspect the binary’s startup procedure repeatedly initializes the interpreter, which would explain the out-of-memory error. The full log file is here.
This should not be the case, we clear the memory area every time. Will take a look tomorrow!
Hi @yilintung, found the issue! We introduced a regression where the file handle to the classification file was not closed, leaking about 2K per inference. We only run a few inferences in our integration tests, so they didn’t catch the out-of-memory condition. Verifying the fix right now, and we expect it to be deployed later today.
This has now been fixed and released. Just re-export the project to get the latest binary!
I rebuilt the binary from the original project and still had problems, so I recreated the project; the binary built from the new project runs correctly.