Image redundancy reduction for neural network classification using discrete cosine transforms
High information redundancy and strong correlations in face images result in inefficiencies when such images are used directly in recognition tasks. In this paper, discrete cosine transforms (DCT) are used to reduce image information redundancy, because only a subset of the transform coefficients is needed to preserve the most important facial features, such as hair outline, eyes, and mouth. We demonstrate experimentally that when DCT coefficients are fed into a backpropagation neural network for classification, high recognition rates can be achieved using only a small proportion (0.19%) of the available transform components. This makes DCT-based face recognition more than two orders of magnitude faster than other approaches.
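The feature-extraction step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it computes an orthonormal 2-D DCT-II of a grayscale image via separable matrix products and keeps only the top-left low-frequency block of coefficients as the feature vector (the block size `keep` is an assumed parameter; the 0.19% figure corresponds to keeping a comparably small block for the image sizes used in the paper).

```python
import numpy as np

def dct2_matrix(n):
    # Orthonormal DCT-II basis: C[k, m] = s_k * sqrt(2/n) * cos(pi*(m+0.5)*k/n),
    # with s_0 = 1/sqrt(2) so that C @ C.T = I.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.cos(np.pi * (m + 0.5) * k / n) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def dct_features(img, keep=6):
    # 2-D DCT computed separably (rows then columns); the top-left
    # keep x keep block holds the low-frequency coefficients that
    # capture most of the facial structure, and becomes the NN input.
    h, w = img.shape
    coeffs = dct2_matrix(h) @ img @ dct2_matrix(w).T
    return coeffs[:keep, :keep].ravel()
```

For a 128x128 image, `keep=6` retains 36 of 16384 coefficients (about 0.22% of the total), and this short vector, rather than the raw pixels, would be presented to the backpropagation network.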