numpy::log() implementation in numpy.hpp fails on Portenta H7 (Cortex-M7)

Question/Issue:
The numpy::log() function provided in the SDK headers (numpy.hpp) returns incorrect values on the Portenta H7 (ARM Cortex-M7) platform, leading to incorrect feature vector generation.

Context/Use case:
This issue is encountered when running any deployed impulse that utilizes an MFE block (and likely all other DSP blocks with logarithm-dependent feature scaling) on a Portenta H7. The local inference results do not match the expected results from the Studio.

Summary:
The inline function "static inline float log(float a)" in numpy.hpp uses a bit-manipulation/type-punning optimization that violates the C++ strict aliasing rule. On the Portenta H7 platform this produces mathematically wrong logarithm values, and this core error corrupts the DSP feature vectors computed from them.
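
For illustration only (this is not the exact SDK code): fast-log routines of this kind reinterpret the float's bit pattern as an integer, and doing that through a pointer cast is precisely the strict-aliasing violation, while the same trick expressed with memcpy is well-defined. The constants below are the well-known "fasterlog2"-style approximation, used here only to show the pattern:

    #include <stdint.h>
    #include <string.h>

    // Undefined behaviour: reading a float's bytes through a uint32_t pointer
    // violates the strict aliasing rule and may be miscompiled.
    static inline float fast_log_punned(float x)
    {
        uint32_t bits = *(uint32_t *)&x;                     // pointer-cast type punning (UB)
        float log2x = (float)bits * 1.1920928955078125e-7f   // bits * 2^-23
                      - 126.94269504f;                       // crude log2(x) approximation
        return log2x * 0.69314718f;                          // log2(x) -> ln(x)
    }

    // Well-defined alternative: copy the bytes with memcpy instead of casting pointers.
    static inline float fast_log_memcpy(float x)
    {
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);                      // defined way to reinterpret the bits
        float log2x = (float)bits * 1.1920928955078125e-7f
                      - 126.94269504f;
        return log2x * 0.69314718f;
    }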

Steps to Reproduce:

  1. Deploy any impulse containing an audio DSP block (MFE) as a C++ Library.
  2. Implement the inference code on a Portenta H7.
  3. Compare the resulting feature vector (output of the DSP block) with the expected vector from the Studio (see the sketch below).
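
For step 3, one way to cross-check the deployed pipeline against the Studio is the SDK's static-buffer pattern: paste the raw features of a test sample from the Studio into a buffer and run the classifier on it. This checks the end-to-end result rather than the raw DSP output, but it quickly shows whether the deployment matches the Studio. A minimal sketch, where features[] is a placeholder you would fill with values copied from the Studio:

    #include <stdio.h>
    #include "edge-impulse-sdk/classifier/ei_run_classifier.h"

    // Raw features copied from a test sample in the Studio (placeholder values)
    static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = { /* ... */ };

    void compare_with_studio(void)
    {
        signal_t signal;
        numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

        ei_impulse_result_t result = { 0 };
        EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false /* set true for verbose debug output */);
        if (err != EI_IMPULSE_OK) {
            printf("run_classifier failed (%d)\r\n", err);
            return;
        }

        // These probabilities should match the Studio's classification of the same sample
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            printf("%s: %.5f\r\n", result.classification[ix].label, result.classification[ix].value);
        }
    }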

Minimal Reproducible Example (isolates the bug):
Run this code on an Arduino Portenta H7:
printf("log(2.7)=%f\r\n", numpy::log(2.7));
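
A self-contained sketch of the same test (the include path and the using-directive for the ei namespace are assumptions based on the default layout of the exported C++ library):

    #include <stdio.h>
    #include "edge-impulse-sdk/dsp/numpy.hpp"   // assumed location of numpy.hpp in the exported library

    using namespace ei;

    int main(void)
    {
        // ln(2.7) should be approximately 0.993252
        printf("log(2.7)=%f\r\n", numpy::log(2.7f));
        return 0;
    }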

Expected Results:
log(2.7)=0.993252 (The natural logarithm of 2.7)

Actual Results:
log(2.7)=3.386294 (An incorrect value)

Reproducibility:

  • [x] Always

Environment:

  • Platform: Portenta H7 (STM32H7)
  • Build Environment Details: GCC/CubeIDE 1.19.0 with optimization -O0
  • OS Version: Windows 10
  • Edge Impulse Version (Firmware): 1.77.1

Wow, thanks @swalderi for highlighting this. We are investigating the STM32 deployment at the moment; your analysis will be provided to our embedded team to help with reproducing the mathematical flaw you have pointed out.

Edit: Actually, I’m opening a separate issue for this one @swalderi. We will update here once it is complete, but if you need to enquire again, the number is: 14386

This should be something we can add to the existing issue being worked on!

fyi @ei_francesco @Arjan

Best

Eoin

@swalderi

Our firmware is also publicly available for your review. I believe this is the section you have highlighted; you can review the code here if you need to:

Best

Eoin

@swalderi

Which toolchain are you using?

See this topic:

regards,
fv

I am using the default toolchain bundled with STM32CubeIDE 1.19.0.
The specific version of the GNU Tools for STM32 (arm-none-eabi-gcc) is 13.3.rel1.

I had similar problems to those described in that thread. I actually used the Arduino Nano 33 BLE Sense too, where the model worked correctly. When switching to the Portenta, the inference always returned EI_IMPULSE_OK, but the classifier result was consistently the background class with a probability of 100%. It took me quite a while to trace the issue back to the faulty log() function, as I was looking everywhere else first.

To get correct inference results I just replaced the log function with this:

    // Replace the bit-manipulation approximation with the standard C library
    // logf(), which returns the correct natural logarithm on this target.
    __attribute__((always_inline)) static inline float log(float a)
    {
        return logf(a);
    }
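
Note: this replaces the fast approximation with the standard logf() from the toolchain's C library, which is accurate but likely a bit slower per call; if the speed of the original bit-manipulation version matters, the same trick can in principle be rewritten with memcpy (as sketched under the Summary above) so that it no longer relies on pointer-cast type punning.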