Abstract
A recently developed linear convolution filter based on Hirschman theory
has been shown to reduce computation compared with other convolution
filters. In this paper, we improve the Hirschman convolution filter by
applying the split-radix algorithm and, for the first time, explore its
latency-reduction advantage. We present a comparison of FPGA hardware
resource usage for the proposed Hirschman-based filter and other
convolution filters. Simulation results indicate that the split-radix
Hirschman convolution filter achieves a promising latency reduction of
18.15% on average, with an acceptable rise in power consumption,
compared with its main competitor based on the extended SRFFT. When
device capacity is limited, the proposed Hirschman convolution filter
remains computationally attractive, as it performs a small-size
originator function instead of the larger Fourier transform required by
other convolution filters.