Flatten fails with core dump (exit code 134)

Hi,
I added some new data, but can’t retrain job 44494.
This is XL data sampled at 75 Hz; I’m trying to build 10-second windows with a 1 s window increase.
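
For scale, a back-of-the-envelope sketch of one window (assuming 3-axis XL data; the pipeline may count values differently):

    const sampleRateHz = 75;  // sampling frequency
    const windowSec = 10;     // window size
    const axes = 3;           // assumed 3-axis accelerometer
    // raw readings held by a single window, across all axes
    console.log(windowSec * sampleRateHz * axes); // 2250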

The error isn’t very descriptive, so I’m not sure what to try. Any ideas?

Thanks!

Creating job... OK (ID: 1263375)

Retraining Flatten...
Scheduling job in cluster...
Job started
Creating windows from 62 files...
[0/1] Pre-caching files...
[0/1] Pre-caching files...
[1/1] Pre-caching files...
Pre-caching files OK


<--- Last few GCs --->

[6:0x33d6e90]    26157 ms: Mark-sweep 2026.5 (2078.1) -> 2017.6 (2081.6) MB, 798.5 / 0.0 ms  (average mu = 0.099, current mu = 0.015) allocation failure scavenge might not succeed
[6:0x33d6e90]    26964 ms: Mark-sweep 2030.0 (2081.6) -> 2021.1 (2085.3) MB, 795.2 / 0.0 ms  (average mu = 0.059, current mu = 0.015) allocation failure scavenge might not succeed


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x140dc19]
    1: StubFrame [pc: 0x139477c]
Security context: 0x1fdb783808d1 <JSObject>
    2: createWindowsFromSampleInternal(aka createWindowsFromSampleInternal) [0x3289f164fe31] [/app/node/windowing/build/window-time-series.js:~245] [pc=0x122b07625783](this=0x3e4620f404b1 <undefined>,0x1c004d455409 <NpySerializer map = 0x38154fd34239>,0x2238eccffe01 <Object map = 0x1c7ffe07fd69>,10000,80,0x3e4620f406e9 <false>,0x1...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0xa1a640 node::Abort() [node]
 2: 0xa1aa4c node::OnFatalError(char const*, char const*) [node]
 3: 0xb9a62e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
 4: 0xb9a9a9 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
 5: 0xd57c25  [node]
 6: 0xd582b6 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [node]
 7: 0xd64b75 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
 8: 0xd65a25 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 9: 0xd684dc v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
10: 0xd2eefb v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
11: 0x10714ce v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
12: 0x140dc19  [node]
/home/create_features.sh: line 3:     6 Aborted                 (core dumped) node "/app/node/windowing/build/window-time-series.js" "/home/input.json" "/home/output.json"

Application exited with code 134 (Error)

Job failed (see above)

Hello @Panometric,

I’ve just increased your DSP performance and time limit. Could you try again, please?

Regards,

Louis

Same result…

Creating job... OK (ID: 1265224)

Retraining Flatten...
Scheduling job in cluster...
Creating windows from 62 files...
Job started
[0/1] Pre-caching files...
[0/1] Pre-caching files...
[1/1] Pre-caching files...
Pre-caching files OK


<--- Last few GCs --->

[6:0x481de90]    27409 ms: Mark-sweep 2025.7 (2077.6) -> 2017.1 (2081.1) MB, 786.4 / 0.0 ms  (average mu = 0.100, current mu = 0.015) allocation failure scavenge might not succeed
[6:0x481de90]    28218 ms: Mark-sweep 2029.7 (2081.1) -> 2020.7 (2084.3) MB, 798.1 / 0.0 ms  (average mu = 0.058, current mu = 0.014) allocation failure scavenge might not succeed


<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x140dc19]
    1: StubFrame [pc: 0x139477c]
Security context: 0x2388a77c08d1 <JSObject>
    2: createWindowsFromSampleInternal(aka createWindowsFromSampleInternal) [0x18dc58acf521] [/app/node/windowing/build/window-time-series.js:~245] [pc=0x29b862e657c3](this=0x30e633f004b1 <undefined>,0x36c9d8c19c21 <NpySerializer map = 0x1f33b87f4289>,0x2a44b65ff761 <Object map = 0x4677acbfd69>,10000,80,0x30e633f006e9 <false>,0x36...

 1: 0xa1a640 node::Abort() [node]
 2: 0xa1aa4c node::OnFatalError(char const*, char const*) [node]
 3: 0xb9a62e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
 4: 0xb9a9a9 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
 5: 0xd57c25  [node]
 6: 0xd582b6 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [node]
 7: 0xd64b75 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
 8: 0xd65a25 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 9: 0xd684dc v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
10: 0xd2eefb v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
11: 0x10714ce v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
12: 0x140dc19  [node]
/home/create_features.sh: line 3:     6 Aborted                 (core dumped) node "/app/node/windowing/build/window-time-series.js" "/home/input.json" "/home/output.json"

Application exited with code 134 (Error)

Job failed (see above)

@Panometric Up your window increase: you have samples that are >20 minutes long, and with an 80 ms increase you’re creating ~15,000 windows per sample (and even ~100K windows for the one-hour sample that you have). I’d set the increase to 10 seconds as well.
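
Rough math, as a sketch (the exact logic in window-time-series.js may differ):

    // Approximate number of windows generated from one sample.
    function windowCount(sampleMs: number, windowMs: number, increaseMs: number): number {
      if (sampleMs < windowMs) return 0;
      return Math.floor((sampleMs - windowMs) / increaseMs) + 1;
    }

    // 20-minute sample, 10 s window, 80 ms increase:
    console.log(windowCount(20 * 60 * 1000, 10_000, 80));     // 14876
    // Same sample with a 10 s increase:
    console.log(windowCount(20 * 60 * 1000, 10_000, 10_000)); // 120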


Thanks, I got it to run. The increase was wrong, and splitting the samples improved it a lot. But that error message gave me no clues.

A few suggestions:

  1. I had set the increase to one second, but forgot to save it. The UI could have warned me that I had unsaved changes.
  2. The error message could have said there were too many windows, and at least pointed to the problematic file.

Thanks for your help.


@Panometric Yep, I’ll make sure we add these to the backlog.