Nonlinear Principal Component Analysis (NLPCA) is a powerful extension of standard Principal Component Analysis (PCA) designed to uncover complex, non-planar patterns in high-dimensional datasets. While classical PCA excels at identifying straight-line dimensions of maximum variance, it often fails when applied to systems where variables interact in inherently curved or nonlinear ways.
Traditional PCA finds the lower-dimensional hyperplane that minimizes the sum of squared orthogonal deviations of the data points. In contrast, NLPCA maps the data onto a lower-dimensional curved surface. To accomplish this, three primary methodologies have emerged over the decades:
1. Autoassociative Neural Networks (Autoencoders)
The network typically utilizes five layers: an input layer, an encoding layer, a narrow "bottleneck" layer, a decoding layer, and an output layer. Because the bottleneck layer contains fewer nodes than the input or output layers, the network is forced to compress the data, and the values extracted at this bottleneck represent the nonlinear principal component scores. Nonlinear transfer functions (such as the hyperbolic tangent) in the hidden layers allow the network to characterize arbitrary continuous curves.
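As a concrete illustration, here is a minimal sketch of such a five-layer network, assuming PyTorch; the layer widths, the single-node bottleneck, and the training settings are illustrative choices rather than part of the method.

```python
# Sketch of a five-layer autoassociative network (assumes PyTorch;
# widths, bottleneck size, and training loop are illustrative).
import torch
import torch.nn as nn

class AutoassociativeNet(nn.Module):
    def __init__(self, n_inputs, n_hidden=8, n_bottleneck=1):
        super().__init__()
        # Input -> encoding -> bottleneck: the nonlinear mapping half.
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, n_hidden),
            nn.Tanh(),                      # nonlinear transfer function
            nn.Linear(n_hidden, n_bottleneck),
        )
        # Bottleneck -> decoding -> output: the demapping half.
        self.decoder = nn.Sequential(
            nn.Linear(n_bottleneck, n_hidden),
            nn.Tanh(),
            nn.Linear(n_hidden, n_inputs),
        )

    def forward(self, x):
        scores = self.encoder(x)            # nonlinear PC scores
        return self.decoder(scores), scores

# Train the network to reproduce its own inputs.
X = torch.randn(256, 5)                     # placeholder data
model = AutoassociativeNet(n_inputs=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    recon, _ = model(X)
    loss = nn.functional.mse_loss(recon, X) # reconstruction error
    loss.backward()
    opt.step()

# The bottleneck activations are the nonlinear component scores.
with torch.no_grad():
    _, scores = model(X)
```

The tanh units in the hidden layers are what let the fitted surface bend; with purely linear activations, an autoassociative network can only recover the same subspace as ordinary PCA.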
2. Principal Curves and Manifolds
Initially proposed by Hastie and Stuetzle, principal curves are smooth, self-consistent curves that pass through the "middle" of a data cloud. Unlike the rigid orthogonal vectors of linear PCA, a principal curve bends and twists to accommodate the global shape of the data.
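Self-consistency (each curve point is the average of the data points that project onto it) suggests a simple alternating fit. The sketch below is a toy illustration in that spirit, not Hastie and Stuetzle's reference algorithm: it assumes NumPy, substitutes a moving-average smoother for their scatterplot smoother, and approximates orthogonal projection by nearest curve point.

```python
# Toy alternating principal-curve fit (illustrative only; assumes NumPy).
import numpy as np

def fit_principal_curve(X, n_iter=10, window=15):
    Xc = X - X.mean(axis=0)
    # Initialize the projection index with the first linear PC score.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    lam = Xc @ Vt[0]
    kernel = np.ones(window) / window
    for _ in range(n_iter):
        order = np.argsort(lam)
        # Averaging step: smooth each coordinate of the data, ordered by
        # the current index, to get a discrete set of curve points.
        curve = np.column_stack([
            np.convolve(Xc[order, j], kernel, mode="same")
            for j in range(Xc.shape[1])
        ])
        # Projection step: reparameterize by arc length and assign each
        # point the parameter of its nearest curve point (O(n^2) memory,
        # acceptable for a toy example).
        steps = np.linalg.norm(np.diff(curve, axis=0), axis=1)
        arclen = np.concatenate([[0.0], np.cumsum(steps)])
        d2 = ((Xc[:, None, :] - curve[None, :, :]) ** 2).sum(axis=2)
        lam = arclen[np.argmin(d2, axis=1)]
    return lam, curve

# Example: a curved 2-D cloud that a single straight PCA axis would miss.
t = np.linspace(-1, 1, 200)
X = np.column_stack([t, t**2])
X += 0.05 * np.random.default_rng(0).normal(size=X.shape)
lam, curve = fit_principal_curve(X)
```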
3. Kernel PCA (kPCA)