Is there an option to choose fixed-point processing? Our project sorely needs this.

We are sampling 8 kHz audio and aiming to run continuous classification, which requires our combined DSP and inference latency to be under one second (a limit imposed by the maximum buffer size our memory budget allows).

We have some hardware that is significantly more constrained than the boards/platforms offered when, for example, choosing a target platform for the EON Tuner.

Perhaps our project is ambitious, but our memory budget for the ML model is about 200-220 KB of flash and 50-55 KB of RAM. We’re also running on a core without an FPU or any dedicated floating-point hardware.

Simply put, we need a way to perform fixed-point computations on our audio samples. This would help with both the DSP latency and our memory budget.

We noticed that, when using the MFE block, the feature data appears as integers when the sample rate is high (16 kHz), as shown here:

But the features are floats when the sample rate is set to 8 kHz, as shown here:

Why is this the case and is there any way to specify our desired processing precision?

We are not using an SBC, and we don’t have any external flash or MPU that would let us page in the buffer as needed. I’m not even sure whether keeping our buffers in flash would be fast enough to meet our <1 second constraint, and our RAM budget is not large enough to hold 32-bit, 8 kHz samples. As far as I know, it would only work if our samples remained 8-bit through the entire pipeline.
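For context, the buffer-size arithmetic behind that concern is straightforward; a quick sketch (sample widths are illustrative):

```python
def buffer_bytes(sample_rate_hz, seconds, bits_per_sample):
    """RAM needed to hold a raw audio buffer, in bytes."""
    return sample_rate_hz * seconds * bits_per_sample // 8

# One second of 8 kHz audio at different sample widths:
print(buffer_bytes(8000, 1, 32))  # 32000 bytes (~31 KB) as float32/int32
print(buffer_bytes(8000, 1, 16))  # 16000 bytes as int16
print(buffer_bytes(8000, 1, 8))   #  8000 bytes as int8
```

So a 32-bit buffer alone would consume well over half of a 50-55 KB RAM budget, before counting the model's own working memory.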

How can we achieve this using the Edge Impulse service? Or are there any other ideas or suggestions for other directions to take?

Hello @Markrubianes,

I am not sure why the features obtained at 8 kHz are floats… I’ll ask internally whether there is any particular reason.
In the meantime, you might be interested in creating your own Custom Processing Blocks, where you can rewrite or adapt our default blocks to match your specific needs.
All the default processing blocks are listed here:



Thank you @louis for this helpful information. We’re still hoping for an answer regarding the built-in blocks.

If I understand correctly, this statement from the Custom Processing Blocks tutorial, “However, we cannot automatically generate optimized native code for the block”, refers to native, embedded C code, correct? That would mean that, if we invested the time to write our own custom blocks, we couldn’t guarantee that our unoptimized DSP block wouldn’t introduce a drop in performance in our impulse. Am I reasoning about this correctly?

@janjongboom @Louis Can you kindly provide additional support and guidance to Markrubianes on his audio processing project? Mark is part of an NCSU engineering team we’re sponsoring to develop the feasibility model for an audio sensor node to integrate into our facility-based sensor-monitoring application. The preferred hardware does not contain a floating-point processor, so the team is trying to maximize the ML audio characterization using only integer audio data. The project is part of a senior design course, so timely support and guidance is greatly appreciated as the team ramps up their learning curve with Edge Impulse tools and ML in general. I am happy to discuss in real time if helpful, or we can set up a direct support session with Mark, even sharing what the team has created in Edge Impulse thus far. Thank you! Jim M

@Markrubianes @Jim_Newcaretec Is this downsampled data? We just use floats internally, which might lead to this; you can simply truncate the non-integer part when passing data into the model. E.g. see here on using an int16 buffer (the same trick works with int8, I’ve done it on the micro:bit).
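The int16-buffer trick amounts to storing samples as 16-bit integers and converting to float only for the small window the DSP asks for, so no full float copy of the buffer ever exists. A minimal Python sketch of the idea (the SDK does this in C++ via a signal callback; the names here are illustrative):

```python
# Keep samples in compact int16 storage and convert to float lazily,
# chunk by chunk, as the DSP requests them.
import array

raw = array.array("h", [0, 16384, -16384, 32767])  # int16 sample storage

def get_data(offset, length):
    """Return `length` samples starting at `offset`, as floats.
    Mirrors the role of the SDK's signal get_data callback."""
    return [float(s) for s in raw[offset:offset + length]]

print(get_data(1, 2))  # [16384.0, -16384.0]
```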

However… in the end this all needs to be floating point, because our DSP code uses floating-point math underneath. There’s no way around that. The good news is that by taking the MFE block and tweaking the number of operations it does (upping the frame size / frame stride, lowering the FFT length / filter count), it can be fast enough to run with emulated floating-point math. One of our commercial customers is running this on a Cortex-M0+.

Hello @Markrubianes,

That would mean that, if we invested the time to write our own custom blocks, we couldn’t guarantee that our unoptimized DSP block wouldn’t introduce a drop in performance in our impulse. Am I reasoning about this correctly?

You can use Python, MATLAB, C++, or any other language to create your custom DSP block.
However, your embedded target will probably only support C++.
This means that you will need to reimplement the feature-extraction (pre-processing) function in C++ in the Edge Impulse SDK.

When you create a custom DSP block, the parameters.json file gives you a way to name the function that you will re-implement.

As long as you extract the features the same way in both your custom DSP block and the function implemented in the SDK, the performance will be the same.
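One practical way to check that both implementations really do extract features the same way is to feed identical input to each and compare the outputs within a small tolerance. A hedged sketch with a trivial placeholder feature (your real block and its C++ port would replace both `extract_features_*` functions; the names are illustrative):

```python
# Compare a reference feature extractor against its reimplementation.
# `extract_features_ref` stands in for the original custom DSP block,
# `extract_features_port` for the re-implemented version.

def extract_features_ref(samples):
    # toy feature: mean absolute amplitude
    return [sum(abs(s) for s in samples) / len(samples)]

def extract_features_port(samples):
    # the reimplementation must compute the same feature
    return [sum(abs(s) for s in samples) / len(samples)]

samples = [0.0, 0.5, -0.5, 1.0]
ref = extract_features_ref(samples)
port = extract_features_port(samples)
assert all(abs(a - b) < 1e-6 for a, b in zip(ref, port))
print("features match")
```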



@janjongboom thank you for confirming the answer to our question about the use of floats in the Edge Impulse SDK. We are currently using the MFE block with some of your parameter suggestions, and we are getting promising results. Implementing our own pre-processing block may turn out to be unnecessary; however, it’s good to know how to proceed should we choose to go that route. Thanks to you and @louis for your support.
