Univariate Linear Regression (Part 1)

Model Representation

We begin with an example we have seen before: predicting house prices. We will use a dataset of housing prices from a certain area. Suppose a friend of yours wants to sell a house with 1250 square feet of living area and asks you how much it could sell for. You would need to build a model from this dataset; judging from the data, the model might be a straight line, and reading off the green line in the figure you could tell your friend the house should sell for around $220,000.

Figure: housing price dataset

This is an example of supervised learning. It is called supervised learning because we are given the "right answer" for each example in advance; that is, the data tell us the actual price of each house.

To be more concrete, this is a regression problem. The term regression means that we predict a real-valued output from the data we already have. Going further, in supervised learning we always have a dataset, called the training set. From now on we will use the following notation to describe the regression problem:

  • m: the number of training examples
  • x: the "input" variable / features
  • y: the "output" variable / "target" variable

In addition, we use (x, y) to denote a single training example (the data used during training are called "training data", and each item in it is a "training example"), and we use (x(i), y(i)) to denote the i-th training example.

We usually use h (for hypothesis) to denote the learned function. In the house price example, x is the living area and y is the price, and supervised learning lets us construct a hypothesis function h:
  hθ(x) = θ0 + θ1x

A model that fits a linear function in this way is called a linear regression model. Since there is only one feature variable x here, this single-variable model is called univariate linear regression.
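
As a minimal sketch (my own illustration, not from the original post), the hypothesis above can be written as a one-line Python function; the parameter values below are made up purely for demonstration:

  def hypothesis(x, theta0, theta1):
      # Univariate linear hypothesis h_theta(x) = theta0 + theta1 * x
      return theta0 + theta1 * x

  # With illustrative parameters theta0 = 50, theta1 = 0.136 (prices in $1000s),
  # a 1250-square-foot house is predicted at about $220k, as in the example:
  print(hypothesis(1250, 50, 0.136))  # 220.0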

Supplementary Notes
Model Representation

To establish notation for future use, we'll use x(i) to denote the "input" variables (living area in this example), also called input features, and y(i) to denote the "output" or target variable that we are trying to predict (price). A pair (x(i), y(i)) is called a training example, and the dataset that we'll be using to learn -- a list of m training examples (x(i), y(i)); i = 1, ... , m -- is called a training set. Note that the superscript "(i)" in the notation is simply an index into the training set, and has nothing to do with exponentiation. We will also use X to denote the space of input values, and Y to denote the space of output values. In this example, X = Y = ℝ (the set of real numbers).

To describe the supervised learning problem slightly more formally, our goal is, given a training set, to learn a function h: X -> Y so that h(x) is a "good" predictor for the corresponding value of y. For historical reasons, this function h is called a hypothesis. Seen pictorially, the process is therefore like this:

When the target variable that we're trying to predict is continuous, such as in our housing example, we call the learning problem a regression problem. When y can take on only a small number of discrete values (such as if, given the living area, we wanted to predict if a dwelling is a house or an apartment, say), we call it a classification problem.

Cost Function

Earlier, we introduced the following expression:
  hθ(x) = θ0 + θ1x

The θ0 and θ1 in this expression are parameters; collectively (the θi) we call them the model parameters. Choosing different parameter values gives different hypothesis functions. So, in the house price example, we want to choose parameter values that predict y as accurately as possible. To find values of θ0 and θ1 that minimize the gap between h(x) and y, what we need to do is minimize the squared difference between the hypothesis's output and the actual house price, that is, minimize the cost function, which can be written mathematically as:

  J(θ0, θ1) = (1/2m) Σi=1..m (hθ(x(i)) − y(i))²
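
The same cost can be computed in a few lines of NumPy. This is a hedged sketch of my own (the post itself contains no code), and the name compute_cost is mine:

  import numpy as np

  def compute_cost(x, y, theta0, theta1):
      # J(theta0, theta1) = (1/2m) * sum over i of (h(x(i)) - y(i))^2
      m = len(x)                         # number of training examples
      predictions = theta0 + theta1 * x  # h_theta(x) for every example at once
      return np.sum((predictions - y) ** 2) / (2 * m)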

Supplementary Notes
Cost Function

We can measure the accuracy of our hypothesis function by using a cost function. This takes an average difference (actually a fancier version of an average) of all the results of the hypothesis with inputs from x's and the actual output y's.

Cost Function - Intuition Ⅰ

If we try to think of it in visual terms, our training data set is scattered on the x-y plane. We are trying to make a straight line (defined by hθ(x)) which passes through these scattered data points.

Our objective is to get the best possible line. The best possible line will be such that the average squared vertical distances of the scattered points from the line will be the least. Ideally, the line should pass through all the points of our training data set. In such a case, the value of J(θ0, θ1) will be 0. The following example shows the ideal situation where we have a cost function of 0.

When θ1 = 1, we get a slope of 1 which goes through every single data point in our model. Conversely, when θ1 = 0.5, we see the vertical distance from our fit to the data points increase.

This increases our cost function to 0.58. Plotting several other points yields the following graph:

Thus as a goal, we should try to minimize the cost function. In this case, θ1 = 1 is our global minimum.
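
To make the numbers above concrete, here is a quick check using the compute_cost sketch from earlier. I am assuming the training set implied by the figures, the three points (1, 1), (2, 2), (3, 3), which a line of slope 1 through the origin fits exactly:

  x = np.array([1.0, 2.0, 3.0])
  y = np.array([1.0, 2.0, 3.0])

  print(compute_cost(x, y, 0.0, 1.0))  # 0.0    -- theta1 = 1 fits every point
  print(compute_cost(x, y, 0.0, 0.5))  # 0.583… -- matches the 0.58 above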

Cost Function - Intuition Ⅱ

A contour plot is a graph that contains many contour lines. A contour line of a two variable function has a constant value at all points of the same line. An example of such a graph is the one to the right below.

Taking any color and going along the 'circle', one would expect to get the same value of the cost function. For example, the three green points found on the green line above have the same value for J(θ0, θ1) and as a result, they are found along the same line. The circled x displays the value of the cost function for the graph on the left when θ0 = 800 and θ1 = -0.15. Taking another h(x) and plotting its contour plot, one gets the following graphs:

When θ0 = 360 and θ1 = 0, the value of J(θ0, θ1) in the contour plot gets closer to the center thus reducing the cost function error. Now giving our hypothesis function a slightly positive slope results in a better fit of the data.

The graph above minimizes the cost function as much as possible; consequently, the resulting values of θ1 and θ0 tend to be around 0.12 and 250 respectively. Plotting those values on the graph to the right seems to put our point in the center of the innermost 'circle'.
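
A contour plot like the ones described above can be reproduced with Matplotlib. This is my own sketch with synthetic data standing in for the course's housing figures, so the exact contours will differ:

  import matplotlib.pyplot as plt

  # Synthetic (area, price) data, loosely shaped like the housing example.
  x = np.array([1000.0, 1500.0, 2000.0, 2500.0])
  y = np.array([380.0, 430.0, 490.0, 550.0])

  # Evaluate J(theta0, theta1) over a grid of parameter values.
  theta0_vals = np.linspace(0, 500, 100)
  theta1_vals = np.linspace(-0.5, 0.5, 100)
  T0, T1 = np.meshgrid(theta0_vals, theta1_vals)
  J = np.zeros_like(T0)
  for i in range(T0.shape[0]):
      for j in range(T0.shape[1]):
          J[i, j] = compute_cost(x, y, T0[i, j], T1[i, j])

  plt.contour(T0, T1, J, levels=30)  # each contour line has constant J
  plt.xlabel("theta0")
  plt.ylabel("theta1")
  plt.show()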
