Core ideas: the dual problem and the kernel trick.
The original SVM only handles linearly separable data. For data that is not linearly separable, you can use a soft margin SVM (which does not require every instance to be classified correctly); alternatively, you can add extra nonlinear features or similarity features to make the data linearly separable. For example, after adding a second-degree feature below, the data becomes linearly separable.
The upper figure uses high-dimensional polynomial features; the lower figure adds similarity features: computing each instance's similarity to the two red landmarks in the left plot turns the original 1D data into 2D data, which is then linearly separable.
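A minimal sketch of both ideas with scikit-learn (the toy dataset, landmark positions, and gamma value are arbitrary illustrations, not the figure's actual values):

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.svm import LinearSVC

# Toy 1D data: the middle cluster is one class, the outer points the other,
# so it is not linearly separable in the original 1D space.
X = np.array([[-4.0], [-3.0], [-2.0], [-1.0], [0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])

# 1) Explicitly add second-degree polynomial features, then fit a (soft margin) linear SVM.
poly_svm = Pipeline([
    ("poly", PolynomialFeatures(degree=2, include_bias=False)),
    ("scale", StandardScaler()),
    ("svm", LinearSVC(C=10, max_iter=10000)),
])
poly_svm.fit(X, y)

# 2) Explicitly add RBF similarity features to two hand-picked landmarks
#    (playing the role of the two red points), turning the 1D data into 2D data.
landmarks = np.array([[-2.0], [1.0]])
gamma = 0.3
X_sim = np.exp(-gamma * (X - landmarks.T) ** 2)   # shape (9, 2)
sim_svm = LinearSVC(C=10, max_iter=10000).fit(X_sim, y)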
But explicitly adding such high-dimensional features makes the computation expensive. This is where the kernel trick comes in: it makes it possible to get the same result as if you added many polynomial features, even with very high-degree polynomials, without actually having to add them.
The kernels corresponding to the two approaches above are called the Polynomial Kernel and the Gaussian RBF Kernel. So what exactly is the kernel trick, and how is it applied to SVMs?
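For context, a quick sketch of how these two kernels are used in practice with scikit-learn's SVC (the hyperparameter values here are arbitrary examples):

from sklearn.svm import SVC

# Polynomial kernel: behaves as if degree-3 polynomial features had been added,
# without ever computing them explicitly.
poly_kernel_svm = SVC(kernel="poly", degree=3, coef0=1, C=5)

# Gaussian RBF kernel: behaves as if a similarity feature per landmark had been added.
rbf_kernel_svm = SVC(kernel="rbf", gamma=5, C=0.001)

# Both are trained the usual way, e.g. poly_kernel_svm.fit(X, y).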
This brings us to the dual problem, because the SVM primal problem does not support the kernel trick. The SVM classifier is a maximum-margin classifier; for a linear SVM, the prediction is computed as follows:
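The standard linear SVM decision rule (reconstructing the equation that belongs here):

\[
\hat{y} =
\begin{cases}
0 & \text{if } \mathbf{w}^T \mathbf{x} + b < 0 \\
1 & \text{if } \mathbf{w}^T \mathbf{x} + b \ge 0
\end{cases}
\]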
An intuitive way to see it: the smaller the weight vector w, the larger the margin.
So we can train the classifier by minimizing the weight vector w.
Because ∥w∥ is not differentiable at w = 0, we minimize (1/2)∥w∥² instead, subject to the constraint that every instance lies on the correct side of the margin, where t(i) = –1 for negative instances (if y(i) = 0) and t(i) = 1 for positive instances (if y(i) = 1).
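Written out, the hard margin linear SVM primal problem (the standard formulation) is:

\[
\min_{\mathbf{w},\,b}\ \frac{1}{2}\mathbf{w}^T\mathbf{w}
\quad \text{subject to} \quad
t^{(i)}\!\left(\mathbf{w}^T\mathbf{x}^{(i)} + b\right) \ge 1
\quad \text{for } i = 1, 2, \dots, m
\]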
This is a QP problem, and there are many off-the-shelf solvers for it; but since the primal problem cannot use the kernel trick (we will see why later), we instead convert it into its dual problem.
The solution to the dual problem typically gives a lower bound on the solution of the primal problem, but under some conditions it can even have the same solution as the primal problem. Luckily, the SVM problem happens to meet these conditions (the KKT conditions).
The corresponding dual problem takes the following form (TODO: derivation of the dual problem):
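For reference, the standard dual form of the linear SVM objective is:

\[
\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}
\alpha^{(i)}\alpha^{(j)}\, t^{(i)} t^{(j)}\, \mathbf{x}^{(i)T}\mathbf{x}^{(j)}
\;-\; \sum_{i=1}^{m}\alpha^{(i)}
\quad \text{subject to} \quad \alpha^{(i)} \ge 0 \ \text{for } i = 1, \dots, m
\]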
Once the vector α that minimizes the dual is found, the primal solution is recovered as (TODO: why the solution takes this form):
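With n_s denoting the number of support vectors (instances with α̂(i) > 0):

\[
\hat{\mathbf{w}} = \sum_{i=1}^{m} \hat{\alpha}^{(i)} t^{(i)} \mathbf{x}^{(i)},
\qquad
\hat{b} = \frac{1}{n_s}\sum_{\substack{i=1 \\ \hat{\alpha}^{(i)} > 0}}^{m}
\left( t^{(i)} - \hat{\mathbf{w}}^T\mathbf{x}^{(i)} \right)
\]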
Why does this form allow the kernel trick? Let's first look at what the kernel trick actually means. Suppose we want to add some second-degree features; the corresponding transformation is:
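Taking the usual 2D example, the second-degree mapping is:

\[
\phi(\mathbf{x}) = \phi\!\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \begin{pmatrix} x_1^2 \\ \sqrt{2}\, x_1 x_2 \\ x_2^2 \end{pmatrix}
\]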
If we compute the dot product of two transformed feature vectors, we can see that the dot product of the transformed vectors is equal to the square of the dot product of the original vectors:
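Writing out the computation for two 2D vectors a and b:

\[
\phi(\mathbf{a})^T \phi(\mathbf{b})
= a_1^2 b_1^2 + 2\, a_1 b_1 a_2 b_2 + a_2^2 b_2^2
= \left( a_1 b_1 + a_2 b_2 \right)^2
= \left( \mathbf{a}^T \mathbf{b} \right)^2
\]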
This means you don't actually need to transform the training instances at all: computing the dot product of the two original low-dimensional vectors already gives the dot product in the high-dimensional space, which makes the whole process much more computationally efficient. Notice that the SVM dual problem contains exactly such dot products between pairs of instances, so the kernel trick can be applied both when solving the dual and when making predictions.
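In particular, the decision function for a new instance x(n) can be written purely in terms of kernel evaluations against the support vectors:

\[
h_{\hat{\mathbf{w}},\hat{b}}\!\left(\phi(\mathbf{x}^{(n)})\right)
= \sum_{\substack{i=1 \\ \hat{\alpha}^{(i)} > 0}}^{m}
\hat{\alpha}^{(i)} t^{(i)}\, K\!\left(\mathbf{x}^{(i)}, \mathbf{x}^{(n)}\right) + \hat{b}
\]

and the instance is classified as positive when this value is greater than or equal to zero.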
and the bias term b is:
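Again using only kernel evaluations (n_s is the number of support vectors):

\[
\hat{b} = \frac{1}{n_s}\sum_{\substack{i=1 \\ \hat{\alpha}^{(i)} > 0}}^{m}
\left( t^{(i)} -
\sum_{\substack{j=1 \\ \hat{\alpha}^{(j)} > 0}}^{m}
\hat{\alpha}^{(j)} t^{(j)}\, K\!\left(\mathbf{x}^{(i)}, \mathbf{x}^{(j)}\right)
\right)
\]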
In Machine Learning, a kernel is a function capable of computing the dot product φ(a)ᵀφ(b) based only on the original vectors a and b, without having to compute (or even to know about) the transformation φ. Here are some common kernels:
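\[
\begin{aligned}
\text{Linear:} \quad & K(\mathbf{a}, \mathbf{b}) = \mathbf{a}^T \mathbf{b} \\
\text{Polynomial:} \quad & K(\mathbf{a}, \mathbf{b}) = \left( \gamma\, \mathbf{a}^T \mathbf{b} + r \right)^d \\
\text{Gaussian RBF:} \quad & K(\mathbf{a}, \mathbf{b}) = \exp\!\left( -\gamma \left\| \mathbf{a} - \mathbf{b} \right\|^2 \right) \\
\text{Sigmoid:} \quad & K(\mathbf{a}, \mathbf{b}) = \tanh\!\left( \gamma\, \mathbf{a}^T \mathbf{b} + r \right)
\end{aligned}
\]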
Others:
1. SVM regression: SVM Regression tries to fit as many instances as possible on the street while limiting margin violations.
Given an ε (the width of the street), the goal is for the two boundary lines to enclose as many instances as possible; see the sketch below.
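A minimal usage sketch with scikit-learn (the dataset and the epsilon/C/degree values are arbitrary examples):

import numpy as np
from sklearn.svm import LinearSVR, SVR

# synthetic 1D regression data
X = np.random.rand(100, 1)
y = (4 + 3 * X + 0.1 * np.random.randn(100, 1)).ravel()

# linear SVM regression: epsilon controls the width of the street
lin_svr = LinearSVR(epsilon=0.5, max_iter=10000).fit(X, y)

# kernelized SVM regression for nonlinear data
poly_svr = SVR(kernel="poly", degree=2, C=100, epsilon=0.1).fit(X, y)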
2. Linear SVM classifier cost function
The first term pushes w to be as small as possible (giving a larger margin), and the second term computes the total of all margin violations. The function max(0, 1 – t) is called the hinge loss function.
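The cost function referred to here is:

\[
J(\mathbf{w}, b) = \frac{1}{2}\mathbf{w}^T\mathbf{w}
\;+\; C \sum_{i=1}^{m} \max\!\left( 0,\; 1 - t^{(i)}\!\left( \mathbf{w}^T\mathbf{x}^{(i)} + b \right) \right)
\]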
Exercises
1. Why is it important to scale the inputs when using SVMs?
SVMs try to fit the largest possible "street" between the classes (see the first answer), so if the training set is not scaled, the SVM will tend to neglect small features.
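One common way to handle this in scikit-learn (shown as an illustration; X_train and y_train are assumed to exist):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# scaling the inputs first keeps features with small ranges from being neglected
scaled_svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# scaled_svm.fit(X_train, y_train)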
2. Should you use the primal or the dual form of the SVM problem to train a model on a training set with millions of instances and hundreds of features?
This question applies only to linear SVMs, since kernelized SVMs can only use the dual form. The computational complexity of the primal form of the SVM problem is proportional to the number of training instances m, while the computational complexity of the dual form is proportional to a number between m² and m³. So if there are millions of instances, you should definitely use the primal form, because the dual form would be far too slow.