Comparisons between minimum error entropy and minimum mean squared error based on PCA neural networks

Zhenhua Guo, Hong Yue, Hong Wang

Research output: Contribution to journal › Article

Abstract

Combining principal component analysis (PCA) with neural networks, PCA neural networks provide an adaptive, parallel technique for extracting principal components and performing principal subspace analysis. For non-Gaussian stochastic systems, however, the principal directions extracted by PCA based on the minimum mean squared reconstruction error are not necessarily the directions that maximize information. This paper first presents the auto-associative PCA neural network based on minimum mean squared reconstruction error and analyses its optimal-reconstruction and variance-maximization properties as well as its non-information-maximizing character. It then proposes a PCA neural network that takes the minimum residual information entropy as its learning objective, and gives an approximate method for computing the entropy of the network's output residual together with the corresponding network learning method. Finally, it is shown that for Gaussian stochastic systems, minimum residual entropy and minimum mean squared reconstruction error give consistent results.
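
The contrast described in the abstract can be illustrated with a short numerical sketch. The code below is a minimal, hypothetical illustration and not the authors' implementation: a single-unit linear PCA network reconstructs 2-D data along one direction, and that direction is chosen either by minimising the mean squared reconstruction error or by minimising an approximate residual entropy, here taken to be a Parzen-window estimate of Renyi's quadratic entropy. The toy data, the kernel bandwidth and all variable names are assumptions made for illustration only; the paper's own approximation of the residual entropy may differ.

    # Illustrative sketch only (not the authors' code): compares the direction a
    # single-unit linear PCA network would select under (a) minimum mean squared
    # reconstruction error and (b) minimum residual entropy, the latter
    # approximated by a Parzen-window estimate of Renyi's quadratic entropy.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 1000

    # Toy 2-D data (assumed for illustration): a high-variance but non-Gaussian
    # (bimodal) coordinate x1 and a lower-variance Gaussian coordinate x2.
    x1 = rng.choice([-2.0, 2.0], size=N) + 0.15 * rng.standard_normal(N)
    x2 = rng.standard_normal(N)
    X = np.column_stack([x1, x2])
    X -= X.mean(axis=0)

    def residual(X, theta):
        # Rank-1 reconstruction along w = [cos(theta), sin(theta)]; the residual
        # x - (w.x) w equals the projection of x onto the orthogonal unit vector u.
        u = np.array([-np.sin(theta), np.cos(theta)])
        return X @ u

    def mse(e):
        # Mean squared reconstruction error of the residual.
        return np.mean(e ** 2)

    def renyi_quadratic_entropy(e, sigma=0.3):
        # Parzen-window estimate H2(e) = -log V(e), where the information potential
        # V(e) = (1/N^2) * sum_ij G(e_i - e_j; 2*sigma^2) uses Gaussian kernels.
        d = e[:, None] - e[None, :]
        v = np.mean(np.exp(-d ** 2 / (4.0 * sigma ** 2))) / np.sqrt(4.0 * np.pi * sigma ** 2)
        return -np.log(v)

    thetas = np.linspace(0.0, np.pi, 180, endpoint=False)
    mse_vals = [mse(residual(X, t)) for t in thetas]
    ent_vals = [renyi_quadratic_entropy(residual(X, t)) for t in thetas]

    print("direction minimising reconstruction MSE (deg):", np.degrees(thetas[np.argmin(mse_vals)]))
    print("direction minimising residual entropy  (deg):", np.degrees(thetas[np.argmin(ent_vals)]))

For a zero-mean Gaussian residual the differential entropy is (1/2) ln(2*pi*e*sigma^2), a monotonically increasing function of the residual variance, so minimising residual entropy and minimising mean squared reconstruction error select the same direction in the Gaussian case, consistent with the abstract's final statement; for non-Gaussian data such as the bimodal coordinate above, the two criteria may select different directions.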
Translated title of the contribution: Comparisons between minimum error entropy and minimum mean squared error based on PCA neural networks
Language: Chinese
Pages: 96-102
Number of pages: 7
Journal: Pattern Recognition and Artificial Intelligence
Volume: 18
Issue number: 1
DOIs: 10.3969/j.issn.1003-6059.2005.01.016
Publication status: Published - Feb 2005

Fingerprint

Mean square error
Entropy
Neural networks
Stochastic systems
Electric network analysis
Analysis of variance (ANOVA)
Principal component analysis

Keywords

  • PCA neural networks
  • minimum residual entropy
  • minimum mean squared error

Cite this

@article{4ed54f0727ec481180eaa42fb4ea3ece,
title = "最小残差熵与最小均方差的主元网络及其比较",
abstract = "Combining principal component analysis (PCA) with neural networks, PCA neural networks provide an adaptive, parallel technique for extracting principal components and performing principal subspace analysis. For non-Gaussian stochastic systems, however, the principal directions extracted by PCA based on the minimum mean squared reconstruction error are not necessarily the directions that maximize information. This paper first presents the auto-associative PCA neural network based on minimum mean squared reconstruction error and analyses its optimal-reconstruction and variance-maximization properties as well as its non-information-maximizing character. It then proposes a PCA neural network that takes the minimum residual information entropy as its learning objective, and gives an approximate method for computing the entropy of the network's output residual together with the corresponding network learning method. Finally, it is shown that for Gaussian stochastic systems, minimum residual entropy and minimum mean squared reconstruction error give consistent results.",
keywords = "PCA neural networks, minimum residual entropy, minimum mean squared error",
author = "Zhenhua Guo and Hong Yue and Hong Wang",
year = "2005",
month = "2",
doi = "10.3969/j.issn.1003-6059.2005.01.016",
language = "Chinese",
volume = "18",
pages = "96--102",
journal = "Pattern Recognition and Artificial Intelligence",
issn = "1003-6059",
publisher = "Journal of Pattern Recognition and Artificial Intelligence",
number = "1",

}

最小残差熵与最小均方差的主元网络及其比较. / Guo, Zhenhua; Yue, Hong; Wang, Hong.

In: Pattern Recognition and Artificial Intelligence, Vol. 18, No. 1, 02.2005, p. 96-102.

Research output: Contribution to journal › Article

TY - JOUR

T1 - 最小残差熵与最小均方差的主元网络及其比较

AU - Guo, Zhenhua

AU - Yue, Hong

AU - Wang, Hong

PY - 2005/2

Y1 - 2005/2

N2 - Combining principal component analysis (PCA) with neural networks, PCA neural networks provide an adaptive, parallel technique for extracting principal components and performing principal subspace analysis. For non-Gaussian stochastic systems, however, the principal directions extracted by PCA based on the minimum mean squared reconstruction error are not necessarily the directions that maximize information. This paper first presents the auto-associative PCA neural network based on minimum mean squared reconstruction error and analyses its optimal-reconstruction and variance-maximization properties as well as its non-information-maximizing character. It then proposes a PCA neural network that takes the minimum residual information entropy as its learning objective, and gives an approximate method for computing the entropy of the network's output residual together with the corresponding network learning method. Finally, it is shown that for Gaussian stochastic systems, minimum residual entropy and minimum mean squared reconstruction error give consistent results.

AB - Combining principal component analysis (PCA) with neural networks, PCA neural networks provide an adaptive, parallel technique for extracting principal components and performing principal subspace analysis. For non-Gaussian stochastic systems, however, the principal directions extracted by PCA based on the minimum mean squared reconstruction error are not necessarily the directions that maximize information. This paper first presents the auto-associative PCA neural network based on minimum mean squared reconstruction error and analyses its optimal-reconstruction and variance-maximization properties as well as its non-information-maximizing character. It then proposes a PCA neural network that takes the minimum residual information entropy as its learning objective, and gives an approximate method for computing the entropy of the network's output residual together with the corresponding network learning method. Finally, it is shown that for Gaussian stochastic systems, minimum residual entropy and minimum mean squared reconstruction error give consistent results.

KW - PCA neural networks

KW - minimum residual entropy

KW - minimum mean squared error

U2 - 10.3969/j.issn.1003-6059.2005.01.016

DO - 10.3969/j.issn.1003-6059.2005.01.016

M3 - Article

VL - 18

SP - 96

EP - 102

JO - Pattern Recognition and Artificial Intelligence

T2 - Pattern Recognition and Artificial Intelligence

JF - Pattern Recognition and Artificial Intelligence

SN - 1003-6059

IS - 1

ER -