Many neural network models have been mathematically demonstrated to be universal approximators. For accurate function approximation, the training data set must contain enough samples to cover the entire input space, but this number grows exponentially with the input dimension, increasing the space and time complexity of the learning process. Neural function approximation is therefore a difficult task for problems with a high-dimensional input space, such as those based on signal spectral analysis. In this paper, some aspects of the neural estimation of signal spectral components are discussed. The goal is to find a feed-forward neural network (FFNN) model for estimating the spectral components of a signal, with a computational complexity comparable to that of the Fast Fourier Transform (FFT) algorithm but easier to implement in hardware. Different FFNN architectures, with different data sets and training conditions, are analyzed. A butterfly-like FFNN (BFFNN) is proposed, which has far fewer weight connections and better performance than a fully connected FFNN.
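To illustrate the parameter savings behind the butterfly structure, the following is a minimal sketch assuming the BFFNN mirrors the radix-2 FFT dataflow: each of the log2(N) layers applies a learned 2x2 weight block to each input pair, rather than a dense N-by-N weight matrix. The function names (`butterfly_layer`, `bffnn_forward`) and the real-valued weight parameterization are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def butterfly_layer(x, weights, stride):
    """Apply one butterfly stage: a learned 2x2 block per input pair.

    x       : (n,) input vector
    weights : (n // 2, 2, 2) array, one 2x2 block per butterfly pair
    stride  : distance between paired indices (1, 2, 4, ... as in radix-2 FFT)
    """
    n = x.shape[0]
    y = np.empty_like(x)
    pair = 0
    for start in range(0, n, 2 * stride):
        for k in range(stride):
            i, j = start + k, start + k + stride
            w = weights[pair]
            # Each output depends on only two inputs (sparse connectivity).
            y[i] = w[0, 0] * x[i] + w[0, 1] * x[j]
            y[j] = w[1, 0] * x[i] + w[1, 1] * x[j]
            pair += 1
    return y

def bffnn_forward(x, stages):
    """Chain log2(n) butterfly stages; stages is a list of (weights, stride)."""
    for weights, stride in stages:
        x = butterfly_layer(x, weights, stride)
    return x

# Example: random weights for an 8-point input (3 stages, strides 1, 2, 4).
n = 8
rng = np.random.default_rng(0)
stages = [(rng.standard_normal((n // 2, 2, 2)), 1 << s)
          for s in range(int(np.log2(n)))]
y = bffnn_forward(rng.standard_normal(n), stages)
```

With this connectivity, each layer uses only 2N trainable weights (N/2 blocks of 4), or roughly 2N log2 N weights in total, versus N^2 per layer in a fully connected FFNN; this is the source of the reduction in weight connections noted above.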