Multivariate Linear Regression (Part 2)

Normal Equation (正規方程)

So far we have been using the gradient descent algorithm to minimize the cost function J(θ). For some linear regression problems, however, we can instead use the normal equation to solve for the optimal θ directly, which likewise minimizes J(θ).

The normal equation finds the value of θ that minimizes the cost function J(θ) by solving the following equations (the partial derivatives of J(θ) set to zero):
  ∂J(θ)/∂θⱼ = 0,  for every j

Suppose we use the following dataset as our training set, from which we can build the data table below:

x0    x1      x2    x3    x4    y
1     2104    5     1     45    460
1     1416    3     2     40    232
1     1534    3     2     30    315
1      852    2     1     36    178

Here, x0 is a feature variable that we add ourselves (equal to 1 for every example). From x0 through x4 we can build the training-set feature matrix X, and from y the training-set target vector Y. The normal equation then gives the parameters θ = (XᵀX)⁻¹XᵀY.

In Octave, the normal equation is written as: pinv(X' * X) * X' * Y.
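For concreteness, here is a minimal Octave sketch that builds X and Y from the data table above and solves for θ with the normal equation (the variable names are just for illustration):

  % Training-set feature matrix X (first column is x0 = 1) and target vector Y,
  % taken from the data table above.
  X = [1 2104 5 1 45;
       1 1416 3 2 40;
       1 1534 3 2 30;
       1  852 2 1 36];
  Y = [460; 232; 315; 178];

  % Normal equation: theta = (X'X)^(-1) X'Y, using pinv for numerical safety.
  theta = pinv(X' * X) * X' * Y;

The resulting theta has five entries, one per column of X (θ0 through θ4).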

Note: when XᵀX is not invertible (usually because some features are linearly dependent, or because there are too many features, i.e. more features than training examples), the normal equation method cannot be applied directly.

Comparison of gradient descent and the normal equation:

Gradient Descent                        Normal Equation
Need to choose the learning rate α      No need to choose α
Needs many iterations                   Solved in a single computation
Works well even when n is large         Typically used when the number of features n ≤ 10,000
Supplementary Notes
Normal Equation

Gradient descent gives one way of minimizing J. Let’s discuss a second way of doing so, this time performing the minimization explicitly and without resorting to an iterative algorithm. In the "Normal Equation" method, we will minimize J by explicitly taking its derivatives with respect to the θⱼ's, and setting them to zero. This allows us to find the optimum θ without iteration. The normal equation formula is given below:
  θ = (XᵀX)⁻¹Xᵀy
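Where this formula comes from, in brief (a sketch, assuming the usual squared-error cost J(θ) = (1/2m)·(Xθ − y)ᵀ(Xθ − y)): setting every partial derivative to zero,

  ∂J(θ)/∂θⱼ = 0 for all j   ⇒   XᵀXθ = Xᵀy   ⇒   θ = (XᵀX)⁻¹Xᵀy,

provided XᵀX is invertible.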

There is no need to do feature scaling with the normal equation.

The following is a comparison of gradient descent and the normal equation:

Gradient Descent                 Normal Equation
Need to choose α                 No need to choose α
Needs many iterations            No need to iterate
O(kn²)                           O(n³), need to calculate inverse of XᵀX
Works well when n is large       Slow if n is very large

With the normal equation, computing the inverse has complexity O(n³). So if we have a very large number of features, the normal equation will be slow. In practice, when n exceeds 10,000 it might be a good time to go from the normal equation to an iterative process.

Normal Equation Noninvertibility

When implementing the normal equation in Octave we want to use the 'pinv' function rather than 'inv'. The 'pinv' function will give you a value of θ even if XᵀX is not invertible.

If XᵀX is noninvertible, the common causes might be:

  • Redundant features, where two features are very closely related (i.e. they are linearly dependent)
  • Too many features (e.g. m ≤ n). In this case, delete some features or use "regularization" (to be explained in a later lesson).

Solutions to the above problems include deleting a feature that is linearly dependent with another or deleting one or more features when there are too many features.
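To make the redundant-feature case concrete, here is a small Octave sketch (with made-up numbers; the third column is an exact copy of the second) showing why 'pinv' is preferred:

  % The duplicated feature column makes X'*X singular (rank-deficient).
  X = [1 2104 2104;
       1 1416 1416;
       1 1534 1534];
  y = [460; 232; 315];

  % inv(X' * X) would warn that the matrix is singular and return Inf entries;
  % pinv still returns a (minimum-norm) theta.
  theta = pinv(X' * X) * X' * y;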
