Some Refinements of the Standard Autoassociative Neural Network
Journal article, 2013

Improving the training algorithm, determining a near-optimal number of nonlinear principal components (NLPCs), extracting meaningful NLPCs, and increasing the nonlinear, dynamic, and selective processing capability of the standard autoassociative neural network are the objectives of this article, each achieved independently through new refinements of the network structure and the training algorithm. In addition, three different topologies of the network are presented that make it possible to perform local nonlinear principal component analysis. The performance of all methods is evaluated on a stock price database, demonstrating their efficiency in different situations. Finally, as illustrated in the last section, the proposed structures can easily be combined, making them efficient tools in a wide range of signal processing applications.
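As background for readers unfamiliar with the baseline model, the standard autoassociative neural network is an autoencoder with a narrow bottleneck layer whose activations serve as the nonlinear principal components. The sketch below is a minimal NumPy illustration of that standard architecture, not the refined methods proposed in the article; the toy data, layer sizes, and learning rate are all assumptions chosen for demonstration.

```python
import numpy as np

# Standard autoassociative network (Kramer-style autoencoder):
# input -> encoding layer (tanh) -> bottleneck (linear, the NLPC scores)
#       -> decoding layer (tanh) -> reconstruction (linear).
rng = np.random.default_rng(0)

# Toy data: 2-D points near a 1-D curve (illustrative stand-in for real data).
t = rng.uniform(-1.0, 1.0, size=(200, 1))
X = np.hstack([t, t**2]) + 0.01 * rng.standard_normal((200, 2))

n_in, n_hid, n_bottle = 2, 8, 1
W1 = 0.5 * rng.standard_normal((n_in, n_hid));     b1 = np.zeros(n_hid)
W2 = 0.5 * rng.standard_normal((n_hid, n_bottle)); b2 = np.zeros(n_bottle)
W3 = 0.5 * rng.standard_normal((n_bottle, n_hid)); b3 = np.zeros(n_hid)
W4 = 0.5 * rng.standard_normal((n_hid, n_in));     b4 = np.zeros(n_in)

def forward(X):
    H1 = np.tanh(X @ W1 + b1)   # encoding layer
    Z  = H1 @ W2 + b2           # bottleneck: one NLPC per sample
    H2 = np.tanh(Z @ W3 + b3)   # decoding layer
    Y  = H2 @ W4 + b4           # reconstruction of the input
    return H1, Z, H2, Y

# Train by plain gradient descent on the mean-squared reconstruction error.
lr = 0.05
errors = []
for epoch in range(500):
    H1, Z, H2, Y = forward(X)
    E = Y - X
    errors.append(float(np.mean(E**2)))
    dY = 2 * E / X.shape[0]                 # backpropagation
    dW4 = H2.T @ dY;  db4 = dY.sum(0)
    dH2 = (dY @ W4.T) * (1 - H2**2)
    dW3 = Z.T @ dH2;  db3 = dH2.sum(0)
    dZ  = dH2 @ W3.T
    dW2 = H1.T @ dZ;  db2 = dZ.sum(0)
    dH1 = (dZ @ W2.T) * (1 - H1**2)
    dW1 = X.T @ dH1;  db1 = dH1.sum(0)
    for p, g in [(W1, dW1), (b1, db1), (W2, dW2), (b2, db2),
                 (W3, dW3), (b3, db3), (W4, dW4), (b4, db4)]:
        p -= lr * g

nlpc = forward(X)[1]  # extracted nonlinear principal component scores
```

After training, `nlpc` holds one NLPC value per sample; the article's refinements address, among other things, how to choose the bottleneck width and make such components meaningful.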

Genetic algorithm

Recurrent neural networks

Nonlinear principal component analysis

Reinforcement learning

Constructive and destructive neural networks

Autoassociative neural network

Authors

Behrooz Makki

Amirkabir University of Technology

Mona Noori-Hosseini

Amirkabir University of Technology

Neural Computing and Applications

0941-0643 (ISSN) 1433-3058 (eISSN)

Vol. 22, Issue 7-8, pp. 1461-1475

Areas of Advance

Information and Communication Technology

Subject Categories

Electrical Engineering and Electronics

DOI

10.1007/s00521-012-0825-5

More information

Last updated

2021-02-15