Memory increasing when using WebAssembly float32 build (memory leak?)

I have deployed the WebAssembly build in my container running Node.js (Node-RED).


Using Node-RED, every 100 ms I am sending a 100 ms audio fragment to the WebAssembly Edge Impulse classifier. This works fine, but after some time I get the error:

"RuntimeError: abort(OOM). Build with -s ASSERTIONS=1 for more info."

Once this error occurs, it occurs for every subsequent audio fragment.
The only way to recover is to restart the Node-RED container.

My Node-RED project uses the following code to trigger the WebAssembly Edge Impulse code:
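For readers unfamiliar with the calling convention: invoking an Emscripten-compiled classifier from Node.js typically means allocating a buffer on the WASM heap, copying the samples in, running the classifier, and freeing the buffer. The sketch below uses a mock `Module` purely for illustration; the binding names (`_malloc`, `HEAPF32`, `run_classifier`, `_free`) follow the common Emscripten pattern and are not taken from the actual project code.

```javascript
// Minimal mock of an Emscripten Module so the calling pattern can be shown
// end-to-end; a real Edge Impulse deployment provides these bindings itself.
const heap = new ArrayBuffer(1024 * 1024);
const Module = {
    HEAPF32: new Float32Array(heap),
    _nextPtr: 0,
    _malloc(bytes) { const p = this._nextPtr; this._nextPtr += bytes; return p; },
    _free(_ptr) { /* the mock does not actually reclaim memory */ },
    run_classifier(ptr, len) {
        // A real classifier reads `len` floats starting at byte offset `ptr`.
        return { anomaly: 0, results: [{ label: 'noise', value: 1.0 }] };
    },
};

// The pattern used per audio fragment: allocate on the WASM heap,
// copy the samples in, classify, then free the buffer again.
function classify(samples) {
    const ptr = Module._malloc(samples.length * 4);
    Module.HEAPF32.set(samples, ptr / 4);
    try {
        return Module.run_classifier(ptr, samples.length);
    } finally {
        Module._free(ptr); // forgetting this would leak on every call
    }
}

const result = classify(new Float32Array(4410)); // one 100 ms fragment @ 44.1 kHz
console.log(result.results[0].label);
```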

Hi @janvda, interesting case, and it goes deeper than I can currently trace. When looking at this with a debugger, the JavaScript application seems to run fine, but at some point (for me around inference 16,777) something corrupts in the WebAssembly library. I'm going to set a higher memory allocation and allow memory growth there, which should hopefully resolve the issue.


This will be live tomorrow morning!


Here are some memory stats from my Node-RED container.
The good news is that I haven't seen the abort(OOM) error anymore.
The bad news is that there is still a memory leak (not necessarily in the deployed Edge Impulse code).

@janjongboom, I have isolated the memory leak to the edge impulse classify node.

The code for this node:

Some notes:

  1. The memory increased within an hour from 134.5 MB to 149.3 MB => around 4000 bytes/sec.
  2. I am chopping 44.1 kHz audio (16-bit) into chunks of 100 ms and passing these on as input to the Edge Impulse classify node => so the input for this node is 88200 bytes/sec.
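The rates in these notes can be double-checked with a quick back-of-the-envelope calculation; all figures are taken directly from the notes above:

```javascript
// Leak rate: 134.5 MB -> 149.3 MB over one hour (3600 s).
const leakBytesPerSec = (149.3 - 134.5) * 1e6 / 3600;
console.log(Math.round(leakBytesPerSec)); // 4111, i.e. "around 4000 bytes/sec"

// Input rate: 44.1 kHz, 16-bit (2-byte) mono samples.
const inputBytesPerSec = 44100 * 2;
console.log(inputBytesPerSec); // 88200 bytes/sec, matching note 2
```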

So I looked at it last time with the debugger on, and calling the garbage collector manually worked, so I suspect something with the new Uint8Arrays that are being created. Your graph shows that memory is eventually cleaned up, though, or is that a restart of the app?
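For anyone wanting to reproduce the manual garbage-collection test: Node.js only exposes `global.gc()` when started with the real V8 flag `--expose-gc`, so the call has to be guarded. A small illustrative sketch (the function name is made up):

```javascript
// Run with: node --expose-gc leak-check.js
// Without the flag, global.gc is undefined, so guard the call.
function forceGcAndReport(label) {
    if (typeof global.gc === 'function') {
        global.gc(); // full synchronous collection
    }
    const { heapUsed, rss } = process.memoryUsage();
    console.log(`${label}: heapUsed=${(heapUsed / 1e6).toFixed(1)} MB, ` +
                `rss=${(rss / 1e6).toFixed(1)} MB`);
}

forceGcAndReport('after warm-up');
```

If heapUsed drops after a forced collection but RSS keeps climbing, the leak is likely outside the JS heap (e.g. in WASM memory).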

Yes, that is a restart of the container. So memory is only cleaned up when restarting the app.

In other words, we do 10 classifications per second, which gives a memory leak of about 400 bytes per classification.

So I am wondering whether the memory leak is in the WebAssembly module itself: each time the classifier method is executed, it may allocate about 400 bytes without freeing them.

Here below I have highlighted the statement in

that potentially has a memory leak.

Yeah, the underlying library does not leak any memory (we have integration tests for that), but maybe it's somewhere in the bindings. Will do another deep dive next week.


Feel free to test it with the Unoptimized (float32) WebAssembly build of my project (ID 8755).

Hi @janjongboom,

FYI, I have renamed the title of this forum topic to make the problem clearer, as the initial problem regarding "RuntimeError: abort(OOM). Build with -s ASSERTIONS=1 for more info." is resolved.

I also created a small GitHub project that reproduces the problem:

The details of this little project are documented in its README.
As you can see in the README, when I execute the following command:

. ./ 500000

it logs the following:

Jans-MBP:doorbell-mel-wasm-v9 jan$ node -v
Jans-MBP:doorbell-mel-wasm-v9 jan$ . ./ 500000

Ran classifier 0 times and memory usage is :
rss 28.2 MB
heapTotal 5.3 MB
heapUsed 3.12 MB
external 128.86 MB

Ran classifier 5000 times and memory usage is :
rss 37.43 MB
heapTotal 8.1 MB
heapUsed 4.65 MB
external 129.6 MB

Ran classifier 10000 times and memory usage is :
rss 39.34 MB
heapTotal 8.1 MB
heapUsed 2.99 MB
external 128.93 MB

Ran classifier 15000 times and memory usage is :
rss 41.64 MB
heapTotal 8.1 MB
heapUsed 3.35 MB
external 129.06 MB

Ran classifier 20000 times and memory usage is :
rss 43.51 MB
heapTotal 8.1 MB
heapUsed 3.69 MB
external 129.2 MB

Ran classifier 25000 times and memory usage is :
rss 45.39 MB
heapTotal 8.1 MB
heapUsed 4.03 MB
external 129.33 MB

Ran classifier 30000 times and memory usage is :
rss 47.25 MB
heapTotal 8.1 MB
heapUsed 4.36 MB
external 129.47 MB

Ran classifier 35000 times and memory usage is :
rss 49.13 MB
heapTotal 8.1 MB
heapUsed 4.7 MB
external 129.6 MB

So you see that the heap is not increasing, but the RSS is gradually increasing (a bit less than 2 MB per 5000 classifications).

I think this indicates that memory is being allocated somewhere and never freed.
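The reproduction in the GitHub project presumably boils down to a loop like the sketch below. The `classify` stub stands in for the real WASM call (which is not shown here), and the reporting interval of 5000 matches the log above; `process.memoryUsage()` is the standard Node.js API behind those rss/heapTotal/heapUsed/external numbers.

```javascript
// Call the classifier in a tight loop and print process.memoryUsage()
// every 5000 iterations, mirroring the log output above.
function toMB(bytes) {
    return `${(bytes / 1024 / 1024).toFixed(2)} MB`;
}

function reportMemory(iteration) {
    const m = process.memoryUsage();
    console.log(`Ran classifier ${iteration} times and memory usage is:`);
    console.log(`  rss       ${toMB(m.rss)}`);
    console.log(`  heapTotal ${toMB(m.heapTotal)}`);
    console.log(`  heapUsed  ${toMB(m.heapUsed)}`);
    console.log(`  external  ${toMB(m.external)}`);
}

// Stand-in for the real WASM classify call.
function classify(samples) { return samples.length; }

const samples = new Float32Array(4410); // one 100 ms fragment @ 44.1 kHz
const total = 20000;
for (let i = 0; i <= total; i++) {
    if (i % 5000 === 0) reportMemory(i);
    classify(samples);
}
```

With the real classifier plugged in, a steadily growing rss against a flat heapTotal is the signature seen in the log above.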

@janvda I’ve been digging through this today, and there are indeed 332 bytes (depending on the size of your model) leaking in the Emscripten bindings. Internally we use a vector structure for the classifications that Emscripten loses track of when returning to the JS context. I’m trying to rewrite this today to get rid of the leak…
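For context: Emscripten's embind exposes bound C++ objects (such as a returned `std::vector` wrapper) as JS handles whose backing C++ memory must be released explicitly with `.delete()`; if a binding returns such a handle and the JS side never deletes it, the WASM-side memory leaks exactly as described. The mock below illustrates only the pattern; `ClassificationResult` and the byte accounting are made up for illustration (332 bytes is the figure from the post above).

```javascript
// Mock of an embind-style handle: the C++ object stays alive on the
// WASM heap until .delete() is called from JavaScript.
let liveWasmBytes = 0;

class ClassificationResult {
    constructor() { liveWasmBytes += 332; } // allocate on the (mock) WASM heap
    delete() { liveWasmBytes -= 332; }      // embind's explicit destructor call
}

function runClassifier() { return new ClassificationResult(); }

// Leaky usage: the handle is dropped without .delete(), so the C++ side leaks.
runClassifier();
console.log(liveWasmBytes); // 332

// Correct usage: copy what you need to plain JS values, then delete the handle.
const handle = runClassifier();
handle.delete();
console.log(liveWasmBytes); // 332 (only the first, leaked handle remains)
```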

edit: we have a fix now, will see if it passes CI and hopefully land early next week.

Hi @janjongboom, that is very good news. Thanks for looking into it and delivering the fix sometime next week (there is no hurry from my side).