Abstract
We extend a family of regularized least squares regression models in a Reproducing Kernel Hilbert Space (RKHS) with the kernel partial least squares (PLS) regression model.
Similar to principal components regression (PCR), PLS is based on the projection of the input (explanatory) variables onto latent variables (components). In contrast to PCR, however, PLS constructs the components by modeling the relationship between the input and output variables while retaining most of the information in the input variables.
PLS is useful when the number of explanatory variables exceeds the number of observations and/or when a high level of multicollinearity among those variables is assumed. Motivated by this, we provide a kernel PLS algorithm for constructing nonlinear regression models in possibly high-dimensional feature spaces.
We give a theoretical description of the kernel PLS algorithm and experimentally compare it with the existing kernel PCR and kernel ridge regression techniques. We demonstrate that, on the data sets employed, kernel PLS achieves the same results as kernel PCR but uses significantly fewer, qualitatively different components.
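For concreteness, the following is a minimal NIPALS-style sketch of the kernel PLS component-extraction step described above, not the paper's exact implementation; the RBF kernel choice, hyperparameters, and function names are illustrative assumptions.

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gram matrix of the Gaussian RBF kernel k(x, z) = exp(-gamma * ||x - z||^2).
    # (Illustrative kernel choice; any positive-definite kernel works.)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def center_kernel(K):
    # Center the kernel matrix, i.e. center the mapped data in feature space.
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ K @ J

def kernel_pls_scores(K, Y, n_components, tol=1e-10, max_iter=500):
    # NIPALS-style extraction of orthonormal score vectors t_1, ..., t_A
    # from a centered kernel matrix K and output matrix Y.
    n = K.shape[0]
    T = np.zeros((n, n_components))
    Kd, Yd = K.copy(), Y.astype(float).copy()
    for a in range(n_components):
        u = Yd[:, [0]]                      # initialize with first output column
        for _ in range(max_iter):
            t = Kd @ u
            t /= np.linalg.norm(t)          # input-space score vector
            c = Yd.T @ t                    # output weights
            u_new = Yd @ c
            u_new /= np.linalg.norm(u_new)  # output-space score vector
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        T[:, [a]] = t
        # Deflate: remove the extracted component from the kernel and outputs.
        P = np.eye(n) - t @ t.T
        Kd = P @ Kd @ P
        Yd = Yd - t @ (t.T @ Yd)
    return T

# Usage on synthetic data: the fitted training responses follow by ordinary
# regression of Y on the orthonormal scores T.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
K = center_kernel(rbf_kernel(X, X, gamma=0.5))
T = kernel_pls_scores(K, y[:, None], n_components=3)
y_fit = T @ (T.T @ y[:, None])  # T has orthonormal columns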