Deciding What to Try Next
Take housing-price prediction as an example. Suppose we fit a linear regression model by minimizing its cost function J(θ), and then find that the trained model makes unacceptably large errors when predicting prices on new data. What should we do next?
Several options come to mind:
- Get more training examples;
- Try a smaller set of features;
- Try getting additional features;
- Try adding polynomial features;
- Try decreasing the regularization parameter λ;
- Try increasing the regularization parameter λ;
- ...
Any of these might help, or might not, and we should certainly not choose among them at random in practice. Keep in mind that any one of these options can easily turn into a project lasting six months or more. We therefore need machine learning diagnostics to tell us which change is actually worth pursuing.
Question:
Which of the following statements about diagnostics are true? Check all that apply.
A. It's hard to tell what will work to improve a learning algorithm, so the best approach is to go with gut feeling and just see what works.
B. Diagnostics can give guidance as to what might be more fruitful things to try to improve a learning algorithm.
C. Diagnostics can be time-consuming to implement and try, but they can still be a very good use of your time.
D. A diagnostic can sometimes rule out certain courses of action (changes to your learning algorithm) as being unlikely to improve its performance significantly.
The correct answers are B, C, and D.
Evaluating a Hypothesis
Previously we detected underfitting and overfitting by plotting the hypothesis. When the training set has many features, however, the hypothesis can no longer be visualized this way.
Instead, we can split the data into two parts: a training set containing 70% of the examples and a test set containing the remaining 30%. We first minimize the cost function J(θ) on the training set, then measure the model's error on the test set, which tells us whether the model underfits or overfits.
Note: if the data are ordered in some way, the 70% training examples and the 30% test examples should be selected at random.
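As a minimal sketch, such a random split could be implemented in NumPy as follows (the function name, 70/30 ratio, and fixed seed are illustrative assumptions, not part of the course material):

```python
import numpy as np

def split_train_test(X, y, test_ratio=0.3, seed=0):
    """Randomly assign 70% of the examples to the training set
    and the remaining 30% to the test set."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    perm = rng.permutation(m)              # random order breaks any sorting in the data
    n_test = int(m * test_ratio)
    test_idx, train_idx = perm[:n_test], perm[n_test:]
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]
```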
Linear regression
- Minimize the cost function J(θ) on the training set to obtain the parameters θ
- Compute the error on the test set:

$$J_{test}(\theta) = \frac{1}{2m_{test}} \sum_{i=1}^{m_{test}} \left( h_\theta(x_{test}^{(i)}) - y_{test}^{(i)} \right)^2$$
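As a sketch, this test error could be computed as follows, assuming θ has already been fit on the training set and that X_test already includes the intercept column of ones (both hypothetical assumptions for illustration):

```python
import numpy as np

def j_test_linear(theta, X_test, y_test):
    """Squared-error test cost J_test(theta) for linear regression."""
    m_test = X_test.shape[0]
    residuals = X_test @ theta - y_test    # h_theta(x) = X @ theta
    return residuals @ residuals / (2 * m_test)
```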
Logistic regression
- Minimize the cost function J(θ) on the training set to obtain the parameters θ
- Compute the error on the test set:

$$J_{test}(\theta) = -\frac{1}{m_{test}} \sum_{i=1}^{m_{test}} \left[ y_{test}^{(i)} \log h_\theta(x_{test}^{(i)}) + \left(1 - y_{test}^{(i)}\right) \log\left(1 - h_\theta(x_{test}^{(i)})\right) \right]$$

In addition, for logistic regression we can compute the misclassification error (0/1 error), which is often easier to interpret. Define

$$err(h_\theta(x), y) = \begin{cases} 1 & \text{if } h_\theta(x) \geq 0.5 \text{ and } y = 0, \text{ or } h_\theta(x) < 0.5 \text{ and } y = 1 \\ 0 & \text{otherwise} \end{cases}$$

The test error can then be rewritten as:

$$\text{Test Error} = \frac{1}{m_{test}} \sum_{i=1}^{m_{test}} err\left(h_\theta(x_{test}^{(i)}), y_{test}^{(i)}\right)$$
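A minimal sketch of this 0/1 misclassification error (helper names are hypothetical; labels are assumed to be 0/1):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def misclassification_error(theta, X_test, y_test):
    """Average 0/1 error: predict 1 when h_theta(x) >= 0.5, else 0."""
    predictions = (sigmoid(X_test @ theta) >= 0.5).astype(int)
    return np.mean(predictions != y_test)  # fraction of test examples misclassified
```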
Question:
Suppose an implementation of linear regression (without regularization) is badly overfitting the training set. In this case, we would expect:
A. The training error J(θ) to be low and the test error Jtest(θ) to be high
B. The training error J(θ) to be low and the test error Jtest(θ) to be low
C. The training error J(θ) to be high and the test error Jtest(θ) to be low
D. The training error J(θ) to be high and the test error Jtest(θ) to be high
The correct answer is A: an overfit model achieves low training error J(θ) but high test error Jtest(θ).
Supplementary Notes
Evaluating a Hypothesis
Once we have done some troubleshooting for errors in our predictions by:
- Getting more training examples
- Trying smaller sets of features
- Trying additional features
- Trying polynomial features
- Increasing or decreasing λ
We can move on to evaluate our new hypothesis.
A hypothesis may have a low error for the training examples but still be inaccurate (because of overfitting). Thus, to evaluate a hypothesis, given a dataset of training examples, we can split up the data into two sets: a training set and a test set. Typically, the training set consists of 70% of your data and the test set is the remaining 30%.
The new procedure using these two sets is then:
- Learn Θ and minimize Jtrain(Θ) using the training set
- Compute the test set error Jtest(Θ)
The test set error
- For linear regression:

$$J_{test}(\Theta) = \frac{1}{2m_{test}} \sum_{i=1}^{m_{test}} \left( h_\Theta(x_{test}^{(i)}) - y_{test}^{(i)} \right)^2$$

- For classification ~ Misclassification error (aka 0/1 misclassification error):

$$err(h_\Theta(x), y) = \begin{cases} 1 & \text{if } h_\Theta(x) \geq 0.5 \text{ and } y = 0, \text{ or } h_\Theta(x) < 0.5 \text{ and } y = 1 \\ 0 & \text{otherwise} \end{cases}$$

This gives us a binary 0 or 1 error result based on a misclassification. The average test error for the test set is:

$$\text{Test Error} = \frac{1}{m_{test}} \sum_{i=1}^{m_{test}} err\left(h_\Theta(x_{test}^{(i)}), y_{test}^{(i)}\right)$$

This gives us the proportion of the test data that was misclassified.
Model Selection and Splitting into Training, Cross-Validation, and Test Sets
Suppose we want to choose the most suitable model among the following polynomials:
- (d = 1) $h_\theta(x) = \theta_0 + \theta_1 x$
- (d = 2) $h_\theta(x) = \theta_0 + \theta_1 x + \theta_2 x^2$
- (d = 3) $h_\theta(x) = \theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3$
- ...
- (d = 10) $h_\theta(x) = \theta_0 + \theta_1 x + \theta_2 x^2 + \cdots + \theta_{10} x^{10}$
where the parameter d is the degree of the polynomial. Suppose we first try the earlier approach and split the data into just a training set and a test set.
Say the test error turns out to be smallest at d = 5. We have then merely found the model that fits the test set best; because d itself was chosen using the test set, Jtest(θ) is no longer a trustworthy estimate of the generalization error.
Therefore a two-way split is not enough. We introduce a cross-validation set and split the data into three parts: a training set (60%), a cross-validation set (20%), and a test set (20%).
For the example above, we compute a separate error on each of the three sets:

$$J_{train}(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

$$J_{cv}(\theta) = \frac{1}{2m_{cv}} \sum_{i=1}^{m_{cv}} \left( h_\theta(x_{cv}^{(i)}) - y_{cv}^{(i)} \right)^2$$

$$J_{test}(\theta) = \frac{1}{2m_{test}} \sum_{i=1}^{m_{test}} \left( h_\theta(x_{test}^{(i)}) - y_{test}^{(i)} \right)^2$$

Finally, suppose d = 4 gives the smallest cross-validation error; we select that model, and the test set remains available for an unbiased estimate of its generalization error.
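The whole selection loop can be sketched as follows, assuming a 1-D input, an unregularized least-squares fit via the pseudoinverse, and the squared-error cost from above (all helper names are hypothetical):

```python
import numpy as np

def poly_features(x, d):
    """Map a 1-D input array to the columns [1, x, x^2, ..., x^d]."""
    return np.vander(x, d + 1, increasing=True)

def squared_error(theta, X, y):
    """J(theta) = (1/2m) * sum of squared residuals."""
    r = X @ theta - y
    return (r @ r) / (2 * len(y))

def select_degree(x_train, y_train, x_cv, y_cv, max_degree=10):
    """Fit theta^(d) on the training set for each degree d,
    then pick the degree with the lowest cross-validation error."""
    best_d, best_theta, best_err = None, None, np.inf
    for d in range(1, max_degree + 1):
        theta = np.linalg.pinv(poly_features(x_train, d)) @ y_train  # normal-equation fit
        err = squared_error(theta, poly_features(x_cv, d), y_cv)     # J_cv(theta^(d))
        if err < best_err:
            best_d, best_theta, best_err = d, theta, err
    return best_d, best_theta
```

Note that the test set is untouched by this loop, so evaluating squared_error on it afterwards with the chosen degree gives a fair estimate of the generalization error.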
Question:
Consider the model selection procedure where we choose the degree of polynomial using a cross validation set. For the final model (with parameters θ), we might generally expect JCV(θ) to be lower than Jtest(θ) because:
A. An extra parameter (d, the degree of the polynomial) has been fit to the cross validation set.
B. An extra parameter (d, the degree of the polynomial) has been fit to the test set.
C. The cross validation set is usually smaller than the test set.
D. The cross validation set is usually larger than the test set.
As discussed above, the correct answer is A.
Supplementary Notes
Model Selection and Train/Validation/Test Sets
Just because a learning algorithm fits a training set well, that does not mean it is a good hypothesis. It could overfit, and as a result your predictions on the test set would be poor. The error of your hypothesis as measured on the data set with which you trained the parameters will be lower than the error on any other data set.
Given many models with different polynomial degrees, we can use a systematic approach to identify the 'best' function. In order to choose the model of your hypothesis, you can test each degree of polynomial and look at the error result.
One way to break down our dataset into the three sets is:
- Training set: 60%
- Cross validation set: 20%
- Test set: 20%
We can now calculate three separate error values for the three different sets using the following method:
- Optimize the parameters in Θ using the training set for each polynomial degree.
- Find the polynomial degree d with the least error using the cross validation set.
- Estimate the generalization error using the test set with Jtest(Θ(d)), where d is the degree of the polynomial with the lowest cross-validation error.
This way, the degree of the polynomial d has not been trained using the test set.