# Unsigned Integers to Float Conversion

I am using an ADC that produces 8-bit unsigned integers via DMA, and these need to be converted to floats for Edge Impulse to run. I see there are int8_to_float and int16_to_float functions for this purpose. Is there a similar function for unsigned integers? If not, what would be the best way to modify the existing functions for this purpose?

Hi @jwcuddeb,

Are your uint8 ADC readings raw ADC values? If so, the easiest route would be to convert them to whatever units you want from your sensor and use those float values.

For example, a raw ADC reading of “109” (uint8) might translate to 24.331 deg C (refer to the sensor datasheet for the math on performing this conversion). In this case, you would use the Celsius values as your floating point numbers for training and inference.

Hope that helps!

Thanks Shawn! Our application uses 8-bit audio. Our ADC data register is uint32_t, but we want 8-bit resolution. The problem is that we need to declare our buffer as uint8_t to preserve the unsigned values; otherwise, every value above 127 is interpreted as negative, and we think the signal gets clipped above the zero crossing of the waveform. We played it back and confirmed the signal is completely distorted.

So far we've gotten around this by typecasting the values to int8_t, which assumes a two's complement conversion, but now that we're using DMA, we're trying not to read the samples until right when the DSP/inferencing begins.

We understand that the Edge Impulse SDK processes data as floats, so we're trying to preserve the full 0–255 range, expressed as a non-distorted waveform with peaks at -128 and 127 and the zero crossing at 0.

The int8_to_float function requires a signed type to be passed in. We suppose we could make our own pass over the buffer to perform the conversion, but we were hoping to modify the existing helper function to minimize processing and keep inferencing under one second.

I recommend writing your own conversion from your 8-bit integers to floats. In the STM32 example here (ei-keyword-spotting/main.cpp at master · ShawnHymel/ei-keyword-spotting · GitHub), the samples are 16-bit integers from I2S/DMA stored in a buffer. The following callback is defined, which converts them to floats (using the numpy.hpp function in the SDK) before storing them to the internal SDK buffer:

```cpp
/**
 * Get raw audio signal data
 */
static int get_audio_signal_data(size_t offset, size_t length, float *out_ptr)
{
    numpy::int16_to_float(&inference.buffers[inference.buf_select ^ 1][offset], out_ptr, length);

    return 0;
}
```

I recommend looking at the int8_to_float or int16_to_float function as a starting point for writing your own. You can then call it in the definition of the callback function that feeds samples to the SDK's run_classifier() function.

Hope that helps!