Elimination of Redundant Input Information and Parameters during Neural Network Training


The Transactions of the Korea Information Processing Society (1994 ~ 2000), Vol. 3, No. 3, pp. 439-448, Apr. 1996
DOI: 10.3745/KIPSTE.1996.3.3.439

Abstract

Extraction and selection of informative features play a central role in pattern recognition. This paper describes a modified back-propagation algorithm that selects informative features and trains a neural network simultaneously. The algorithm mainly consists of three iterated steps: training, connection pruning, and input-unit elimination. After initial training, connections with small magnitudes are pruned first. Any input unit that retains only a small number of connections to the hidden units is then deleted, which is equivalent to excluding the feature corresponding to that unit. If the error increases, the network is retrained and then subjected to connection pruning and input-unit elimination again. As a result, the algorithm selects the most important features in the measurement space without a transformation to another space. Moreover, because feature selection is tightly coupled with classification performance, the selected features are the most informative ones for classification. The algorithm thus helps avoid measuring redundant or less informative features, which may be expensive. Furthermore, the final network contains no redundant parameters, i.e., weights and biases, that might degrade classification performance. In applications, the algorithm preserved the most informative features and significantly reduced the dimension of the feature vectors without performance degradation.
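The connection-pruning and input-unit-elimination steps from the abstract can be sketched as operations on the input-to-hidden weight matrix. The sketch below is illustrative only: the magnitude threshold, the minimum-connection count, and the function names are assumptions, not values or identifiers from the paper, and the training/retraining loop that surrounds these steps is omitted.

```python
import numpy as np

def prune_connections(W1, magnitude_threshold=0.05):
    """Zero out input-to-hidden connections whose magnitude is small.

    W1 has shape (n_inputs, n_hidden). The threshold is an illustrative
    choice; the paper does not specify one here.
    """
    mask = np.abs(W1) >= magnitude_threshold
    return W1 * mask, mask

def eliminate_input_units(W1, mask, min_connections=1):
    """Delete input units with fewer than `min_connections` surviving
    connections to the hidden layer, i.e. discard those features."""
    keep = mask.sum(axis=1) >= min_connections
    return W1[keep], keep

# Example: the third input unit has only tiny weights, so after pruning
# it has no surviving connections and its feature is eliminated.
W1 = np.array([[0.80, -0.60,  0.30],
               [0.40,  0.90, -0.70],
               [0.01, -0.02,  0.03]])   # nearly disconnected input unit
W_pruned, mask = prune_connections(W1)
W_reduced, keep = eliminate_input_units(W_pruned, mask)
print(keep)             # [ True  True False] -> third feature removed
print(W_reduced.shape)  # (2, 3)
```

In the full algorithm these two steps alternate with retraining whenever the classification error rises, so the surviving features are exactly those the classifier still needs.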




Cite this article
[IEEE Style]
W. Y. Gwan and P. K. Kyu, "Elimination of Redundant Input Information and Parameters during Neural Network Training," The Transactions of the Korea Information Processing Society (1994 ~ 2000), vol. 3, no. 3, pp. 439-448, 1996. DOI: 10.3745/KIPSTE.1996.3.3.439.

[ACM Style]
Won Yong Gwan and Park Kwang Kyu. 1996. Elimination of Redundant Input Information and Parameters during Neural Network Training. The Transactions of the Korea Information Processing Society (1994 ~ 2000), 3, 3, (1996), 439-448. DOI: 10.3745/KIPSTE.1996.3.3.439.