Some Refinements of the Standard Autoassociative Neural Network
Journal article, 2012

Improving the training algorithm, determining a near-optimal number of nonlinear principal components (NLPCs), extracting meaningful NLPCs, and increasing the nonlinear, dynamic, and selective processing capability of the standard autoassociative neural network are the objectives of this article; each is achieved independently through new refinements of the network structure and the training algorithm. In addition, three different topologies of the network are presented, which make it possible to perform local nonlinear principal component analysis. The performance of all methods is evaluated on a stock price database, demonstrating their efficiency in different situations. Finally, as illustrated in the last section, the proposed structures can easily be combined, which makes them efficient tools in a wide range of signal processing applications.
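
To make the abstract's subject concrete, the following is a minimal sketch of the standard autoassociative network it builds on: a bottleneck autoencoder in the style of Kramer's five-layer architecture, whose bottleneck activation serves as the nonlinear principal component. The toy 2-D data set, layer sizes, learning rate, and iteration count are all illustrative assumptions, not details from the article, and plain NumPy gradient descent stands in for whatever training algorithm the paper refines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy points on a 1-D nonlinear curve embedded in 2-D.
t = rng.uniform(-1, 1, size=(200, 1))
X = np.hstack([t, t ** 2]) + 0.01 * rng.normal(size=(200, 2))

# Standard autoassociative net: 2 -> 8 (tanh) -> 1 -> 8 (tanh) -> 2.
# The 1-unit bottleneck carries the nonlinear principal component.
sizes = [2, 8, 1, 8, 2]
W = [rng.normal(scale=0.5, size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]
TANH_LAYERS = (0, 2)  # mapping and demapping layers are nonlinear

def forward(X):
    """Return the list of layer activations, input first, output last."""
    acts = [X]
    for i, (Wi, bi) in enumerate(zip(W, b)):
        z = acts[-1] @ Wi + bi
        acts.append(np.tanh(z) if i in TANH_LAYERS else z)
    return acts

def mse(X):
    """Mean-squared reconstruction error of the autoassociative mapping."""
    return np.mean((forward(X)[-1] - X) ** 2)

lr = 0.05
err0 = mse(X)
for _ in range(2000):
    acts = forward(X)
    # Backpropagate the mean-squared reconstruction error.
    delta = 2 * (acts[-1] - X) / X.size
    for i in range(len(W) - 1, -1, -1):
        gW = acts[i].T @ delta
        gb = delta.sum(axis=0)
        delta = delta @ W[i].T
        if i - 1 in TANH_LAYERS:      # pass back through a tanh layer
            delta *= 1 - acts[i] ** 2
        W[i] -= lr * gW
        b[i] -= lr * gb

# Bottleneck activations = the extracted nonlinear principal component.
nlpc = forward(X)[2]
print("reconstruction MSE:", err0, "->", mse(X))
```

Training reduces the reconstruction error, and the single bottleneck unit summarizes each 2-D sample with one nonlinear score; the article's refinements address, among other things, how many such bottleneck units to use and how to make them meaningful.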

Keywords
Reinforcement learning

Genetic algorithm

Recurrent neural networks

Constructive and destructive neural networks

Autoassociative neural network

Nonlinear principal component analysis

Author

Behrooz Makki

Chalmers, Signals and Systems, Communication Systems, Information Theory and Antennas

Mona Noori-Hosseini

Chalmers, Signals and Systems

Journal
Neural Computing and Applications

0941-0643 (ISSN) 1433-3058 (eISSN)

Vol. 22, No. 7-8, pp. 1461-1475

Areas of Advance

Information and Communication Technology

Subject Categories

Electrical Engineering, Electronic Engineering, Information Engineering

DOI

10.1007/s00521-012-0825-5
