Kernel Subspace Learning for Pattern Classification
Book chapter, 2018

Kernel methods are nonparametric feature extraction techniques that boost the learning capability of machine learning algorithms through nonlinear transformations. However, a major challenge in their basic form is that the computational complexity and the memory requirements do not scale well with the size of the training set. Kernel approximation is commonly employed to resolve this issue. Essentially, kernel approximation is equivalent to learning an approximated subspace of the high-dimensional feature vector space induced and characterized by the kernel function. With streaming data acquisition, the approximated subspace can be constructed adaptively. Explicit feature vectors are then extracted by a transformation onto the approximated subspace, and linear learning techniques can subsequently be applied. From a computational point of view, the operations in kernel methods are easily parallelized, so modern computing infrastructures can be exploited for efficient computing. Moreover, the extracted explicit feature vectors can readily be interfaced with other learning techniques.
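To make the subspace view concrete, the following is a minimal sketch of Nyström-style explicit feature extraction, assuming an RBF kernel and uniformly sampled landmark points; the function names (rbf_kernel, nystrom_features) and parameter values are illustrative assumptions, not the chapter's actual implementation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq = (X**2).sum(axis=1)[:, None] + (Y**2).sum(axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def nystrom_features(X, landmarks, gamma=1.0, tol=1e-10):
    """Project X onto the subspace spanned by the landmark points in the
    kernel-induced feature space (Nystrom approximation)."""
    W = rbf_kernel(landmarks, landmarks, gamma)      # m x m landmark Gram matrix
    # Eigendecomposition gives W^{-1/2}; near-zero eigenvalues are
    # truncated for numerical stability.
    vals, vecs = np.linalg.eigh(W)
    keep = vals > tol
    W_inv_sqrt = vecs[:, keep] / np.sqrt(vals[keep])  # m x r
    C = rbf_kernel(X, landmarks, gamma)               # n x m cross-kernel
    return C @ W_inv_sqrt                             # explicit n x r feature vectors

# Usage (synthetic data): sample m landmarks from the training set,
# extract explicit features, then train any linear classifier on them.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((500, 10))
Z = X_train[rng.choice(len(X_train), size=50, replace=False)]
Phi = nystrom_features(X_train, Z, gamma=0.5)
print(Phi.shape)  # (500, r) with r <= 50
```

Under these assumptions, the Gram matrix of the extracted features, Phi @ Phi.T, equals the Nyström approximation of the full kernel matrix, so a linear classifier trained on Phi approximates its kernelized counterpart at a fraction of the cost.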

Keywords

Nyström

Kernel approximation

CUDA

Spark

GPU

Subspace learning

Classification

Authors

Yinan Yu

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Konstantinos I. Diamantaras

T.E.I. of Thessaloniki

Tomas McKelvey

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

S. Y. Kung

Princeton University

Adaptive Learning Methods for Nonlinear System Modeling

Pages: 127-147
ISBN: 9780128129760

Areas of Advance

Information and Communication Technology

Subject Categories

Computational Mathematics

Signal Processing

Computer Science

Roots

Basic sciences

DOI

10.1016/B978-0-12-812976-0.00008-7

More information

Latest update

3/21/2023