Whitening


The goal of whitening is to make the input less redundant; more formally, our desiderata are that our learning algorithm sees a training input where (i) the features are less correlated with each other, and (ii) the features all have the same variance.

 


How can we make our input features uncorrelated with each other? We had already done this when computing \textstyle x_{\rm rot}^{(i)} = U^Tx^{(i)}. Repeating our previous figure, our plot for \textstyle x_{\rm rot} was:

PCA-rotated.png

The covariance matrix of this data is given by:

\begin{align} \begin{bmatrix} 7.29 & 0  \\ 0 & 0.69 \end{bmatrix}. \end{align}

 

It is no accident that the diagonal values are \textstyle \lambda_1 and \textstyle \lambda_2. Further, the off-diagonal entries are zero; thus, \textstyle x_{{\rm rot},1} and \textstyle x_{{\rm rot},2} are uncorrelated, satisfying one of our desiderata for whitened data (that the features be less correlated).
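For concreteness, here is a minimal NumPy sketch of the rotation step (the synthetic data and the variable names X, U, and lam are our own illustrative choices, not from the text): it forms the empirical covariance of zero-mean data, takes its eigendecomposition, and checks that the rotated data has a diagonal covariance with the eigenvalues on the diagonal.

import numpy as np

# Minimal sketch, assuming zero-mean training examples stored as columns of X (n x m).
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], [[5.0, 2.0], [2.0, 3.0]], size=500).T
X -= X.mean(axis=1, keepdims=True)                 # enforce zero mean

Sigma = X @ X.T / X.shape[1]                       # empirical covariance matrix
lam, U = np.linalg.eigh(Sigma)                     # eigenvalues (ascending) and eigenvectors
lam, U = lam[::-1], U[:, ::-1]                     # reorder so that lambda_1 >= lambda_2

X_rot = U.T @ X                                    # rotate into the eigenbasis
print(np.round(X_rot @ X_rot.T / X.shape[1], 2))   # diagonal, with entries approx. lam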

To make each of our input features have unit variance, we can simply rescale each feature \textstyle x_{{\rm rot},i} by \textstyle 1/\sqrt{\lambda_i}. Concretely, we define our whitened data \textstyle x_{{\rm PCAwhite}} \in \Re^n as follows:

\begin{align} x_{{\rm PCAwhite},i} = \frac{x_{{\rm rot},i}}{\sqrt{\lambda_i}}. \end{align}
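In the sketch above, this rescaling is a single broadcasted division, and the covariance of the result should print as (approximately) the identity:

X_pcawhite = X_rot / np.sqrt(lam)[:, None]                   # divide row i by sqrt(lambda_i)
print(np.round(X_pcawhite @ X_pcawhite.T / X.shape[1], 2))   # approx. the identity matrix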

Plotting \textstyle x_{{\rm PCAwhite}}, we get:

PCA-whitened.png

This data now has covariance equal to the identity matrix \textstyle I. We say that \textstyle x_{{\rm PCAwhite}} is our PCA whitened version of the data: the different components of \textstyle x_{{\rm PCAwhite}} are uncorrelated and have unit variance.

 

ZCA Whitening

 

 

Finally, it turns out that this way of getting the data to have covariance equal to the identity \textstyle I isn't unique. Concretely, if \textstyle R is any orthogonal matrix, so that it satisfies \textstyle RR^T = R^TR = I (less formally, if \textstyle R is a rotation/reflection matrix), then \textstyle R \,x_{\rm PCAwhite} will also have identity covariance. In ZCA whitening, we choose \textstyle R = U. We define

\begin{align} x_{\rm ZCAwhite} = U x_{\rm PCAwhite}. \end{align}
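Continuing the same sketch, ZCA whitening is one extra rotation by U; equivalently, one can apply the single matrix \textstyle U \, {\rm diag}(1/\sqrt{\lambda_i}) \, U^T to the original data directly (the names below are again ours):

X_zcawhite = U @ X_pcawhite                        # rotate the PCA-whitened data back
W_zca = U @ np.diag(1.0 / np.sqrt(lam)) @ U.T      # the equivalent single whitening matrix
assert np.allclose(X_zcawhite, W_zca @ X)          # both routes give the same result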

Plotting \textstyle x_{\rm ZCAwhite}, we get:

ZCA-whitened.png

 

It can be shown that out of all possible choices for \textstyle R, this choice of rotation causes \textstyle x_{\rm ZCAwhite} to be as close as possible to the original input data \textstyle x.
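This can be checked numerically in our sketch (a sanity check, not a proof): the Frobenius distance to the original data is smaller for \textstyle R = U than for a random orthogonal matrix.

Q, _ = np.linalg.qr(rng.normal(size=(2, 2)))       # a random rotation/reflection matrix
print(np.linalg.norm(U @ X_pcawhite - X))          # ZCA choice: smallest distance to X
print(np.linalg.norm(Q @ X_pcawhite - X))          # a generic orthogonal Q: larger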

When using ZCA whitening (unlike PCA whitening), we usually keep all \textstyle n dimensions of the data, and do not try to reduce its dimension.

 

Regularization

 

When implementing PCA whitening or ZCA whitening in practice, sometimes some of the eigenvalues \textstyle \lambda_i will be numerically close to 0, and thus the scaling step where we divide by \textstyle \sqrt{\lambda_i} would involve dividing by a value close to zero; this may cause the data to blow up (take on large values) or otherwise be numerically unstable. In practice, we therefore implement this scaling step using a small amount of regularization, adding a small constant \textstyle \epsilon to the eigenvalues before taking their square root and inverse:

\begin{align} x_{{\rm PCAwhite},i} = \frac{x_{{\rm rot},i}}{\sqrt{\lambda_i + \epsilon}}. \end{align}

When \textstyle x takes values around \textstyle [-1,1], a value of \textstyle \epsilon \approx 10^{-5} might be typical.
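In the sketch above, the regularized scaling is the same one-line change (eps is our own name for \textstyle \epsilon):

eps = 1e-5                                            # small regularization constant
X_pcawhite_reg = X_rot / np.sqrt(lam + eps)[:, None]  # regularized PCA whitening
X_zcawhite_reg = U @ X_pcawhite_reg                   # regularized ZCA whitening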

 

For the case of images, adding \textstyle \epsilon here also has the effect of slightly smoothing (or low-pass filtering) the input image. This also has a desirable effect of removing aliasing artifacts caused by the way pixels are laid out in an image, and can improve the features learned (details are beyond the scope of these notes).

ZCA whitening is a form of pre-processing of the data that maps it from \textstyle x to \textstyle x_{\rm ZCAwhite}. It turns out that this is also a rough model of how the biological eye (the retina) processes images. Specifically, as your eye perceives images, most adjacent “pixels” in your eye will perceive very similar values, since adjacent parts of an image tend to be highly correlated in intensity. It is thus wasteful for your eye to have to transmit every pixel separately (via your optic nerve) to your brain. Instead, your retina performs a decorrelation operation (this is done via retinal neurons that compute a function called “on center, off surround/off center, on surround”) which is similar to that performed by ZCA. This results in a less redundant representation of the input image, which is then transmitted to your brain.
