import numpy as np
from sklearn.linear_model import ElasticNet

# alpha sets the overall penalty strength; l1_ratio mixes the L1 and L2 terms
# (the normalize argument was deprecated and removed in scikit-learn 1.2; standardize beforehand if needed)
ENreg = ElasticNet(alpha=1, l1_ratio=0.5)
ENreg.fit(x_train, y_train)
pred_cv = ENreg.predict(x_cv)
#calculating mse
mse = np.mean((pred_cv - y_cv)**2)
mse
1773750.73
ENreg.score(x_cv,y_cv)
0.4504
The R² value here is much smaller than for both lasso and ridge.
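One likely reason for the low score of 0.4504 is that alpha=1 with l1_ratio=0.5 was picked by hand rather than tuned. A minimal sketch of tuning both parameters with scikit-learn's ElasticNetCV, using synthetic data as a stand-in for the tutorial's train/CV split:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV

# Hypothetical data standing in for x_train / y_train from earlier in the tutorial
X, y = make_regression(n_samples=300, n_features=10, noise=15.0, random_state=0)

# ElasticNetCV searches a grid of alpha values (and the given l1_ratio
# candidates) by cross-validation and refits with the best pair
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, random_state=0).fit(X, y)
print("best alpha:", model.alpha_)
print("best l1_ratio:", model.l1_ratio_)
```

The selected `alpha_` is usually far smaller than 1, which is why the hand-picked model above over-shrinks the coefficients.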
Elastic net regression generally works well when we have a big dataset.
The "net" in elastic net works like a fishing net: when a school of fish swims together, casting the net catches the whole group of correlated fish at once. In other words, variables that are not independent of each other get placed into a group. Now if any one variable of this group is a strong predictor (meaning it has a strong relationship with the dependent variable), then we include the entire group in the model building, because omitting the other variables (as lasso does) might lose some information in terms of interpretability, leading to poor model performance.
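This grouping effect can be seen directly on synthetic data (the variables and coefficients below are made up for illustration): lasso tends to put all the weight on one member of a correlated pair, while elastic net spreads weight across the whole group.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

# Hypothetical data: x1 and x2 are almost perfectly correlated
# and both drive y; x3 is pure noise
rng = np.random.RandomState(0)
n = 200
x1 = rng.randn(n)
x2 = x1 + 0.01 * rng.randn(n)   # near-duplicate of x1
x3 = rng.randn(n)
X = np.column_stack([x1, x2, x3])
y = 3 * x1 + 3 * x2 + rng.randn(n)

# Lasso typically keeps one variable from the correlated pair and
# zeroes (or nearly zeroes) the other...
lasso = Lasso(alpha=0.5).fit(X, y)
# ...while elastic net's L2 term splits the weight across the group
enet = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X, y)

print("lasso coefficients:      ", lasso.coef_)
print("elastic net coefficients:", enet.coef_)
```

Dropping x2 as lasso does costs nothing in prediction here, but it hides the fact that x2 carries the same signal as x1, which is exactly the interpretability loss described above.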
alpha = a + b    and    l1_ratio = a / (a + b)
where a and b are the coefficients of the L1 and L2 penalty terms, respectively:
a * (L1 term) + b* (L2 term)
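In scikit-learn's parameterization, substituting a = alpha * l1_ratio and b = alpha * (1 - l1_ratio) gives the objective that `ElasticNet` actually minimizes (note the extra factor of 1/2 scikit-learn puts on the L2 term, and the 1/(2n) scaling on the data-fit term):

```latex
\min_w \; \frac{1}{2n}\,\lVert y - Xw\rVert_2^2
  + \alpha \cdot \mathrm{l1\_ratio} \cdot \lVert w\rVert_1
  + \frac{\alpha}{2}\,(1 - \mathrm{l1\_ratio})\,\lVert w\rVert_2^2
```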
Let alpha (or a + b) = 1. Then:
1. If l1_ratio = 1, then a/1 = 1, so a = 1 and b = 0, which is a pure lasso penalty.
2. If l1_ratio = 0, then a = 0, so b = 1, which is a pure ridge penalty.
3. If l1_ratio is between 0 and 1, the penalty is a combination of ridge and lasso.
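Case 1 can be checked directly in scikit-learn: with l1_ratio=1, ElasticNet's objective reduces exactly to Lasso's, so the two fits agree (the toy dataset below is made up for the check).

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, Lasso

# Toy data just for illustration
X, y = make_regression(n_samples=100, n_features=5, noise=10.0, random_state=42)

# l1_ratio = 1  ->  a = 1, b = 0: the elastic net penalty is pure L1
enet_l1 = ElasticNet(alpha=0.5, l1_ratio=1.0).fit(X, y)
lasso = Lasso(alpha=0.5).fit(X, y)

# The coefficient vectors coincide
print(np.allclose(enet_l1.coef_, lasso.coef_))
```

The l1_ratio = 0 case is similar in spirit, but scikit-learn recommends using `Ridge` directly instead of `ElasticNet(l1_ratio=0)`, since the coordinate-descent solver is unreliable with no L1 term.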