Arm Mbed and Pelion device management support forum

Problem with CMSIS DSP floating point FFT (?)

Hi All

We are using the CMSIS DSP library's arm_cfft_f32() to perform a transform, but we haven't been able to get a result without a large spurious frequency component. The result is the same on various Cortex-M parts (with and without an FPU).

Test code:

float fft_input_buffer[512];
float fft_magnitude_buffer[256];
fnInjectFloat(fft_input_buffer, 512);
arm_cfft_f32(&arm_cfft_sR_f32_len256, fft_input_buffer, 0, 1); // perform complex fast Fourier transform (the result is in the input buffer)
arm_cmplx_mag_f32(fft_input_buffer, fft_magnitude_buffer, 256); // calculate the magnitude of each frequency component (256 bins are now available) - a 48kHz sampling rate gives a 0…24kHz spectrum with 93.75Hz resolution per bin

The function fnInjectFloat() fills the input buffer with a high-resolution test signal; in this specific reference case, a 7 kHz sine wave.

Before calling the FFT, the input buffer can be displayed to confirm it contains the expected input signal (75 cycles in the 512-sample buffer):

After performing the FFT and calculating the magnitude of the output bins, one expects to find the energy at 7 kHz. This, however, is the result in the magnitude buffer:

There is a nice peak at 7kHz but there is a second peak at 17kHz (24kHz - 7kHz) with three times the energy.

Tested with various FFT lengths, with the same result.
Tested on various chips (Cortex-M0+, Cortex-M4, with and without FPU), and in each case with the same result.

If the test frequency is increased, e.g. to 8kHz, the second component is at 16kHz (24kHz - 8kHz). If a frequency greater than 12kHz is injected, the unexpected bin energy appears below 12kHz (folded around the mid-frequency).

Finally, if real sampled input is used, rather than a signal prepared in the input buffer, the same results are obtained in each case.
Almost the only code involved is the CMSIS library code.

Can anyone explain why correct results can’t be obtained? (Using CMSIS DSP 5.1)



Update: I would like to add that if the input signal from arm_fft_bin_data.c (the arm_fft_bin_example) is used, the ARM test passes for the 1024-point FFT (max. bin at 213).
When this file is changed to contain the reference signal instead, the same problem results.

This is the test signal case passed through the same code.


Passing a 16kHz signal (assuming 48kHz sampling and thus 24kHz bandwidth) instead gives


showing the maximum at 8kHz and the actual signal rather small at 16kHz (this is a 256-point graph, but the result is identical for 1024 points or whatever length is used).

Therefore the ARM test passes for the 1024-point case with its own test signal, but the results are very wrong for other signals???