I recently needed a proxy to get online, but the free IPs floating around the web are mostly unreliable, and the odds of randomly landing on a working one are small, so the only real option is to download them all and test them one by one. I scraped Xici Proxy's high-anonymity list (西刺代理-高匿). Flipping through a couple of pages to see how the data was served, I found my least favorite case: the data is rendered straight into the page, wrapped in the plainest td/tr tags, with no class or id to hang a selector on. Regex seemed like the only way, and a look at what others had done confirmed it.
(A post on Jianshu also scrapes Xici this way.)
Then I remembered that pandas' read_html can read tables straight out of a web page, which is practically made for this situation. The official documentation is here.
But pandas is not a dedicated HTTP module: you cannot pass request parameters or add headers to masquerade as a browser. For a page that answers a bare GET with no parameters, pandas.read_html(url)[0] gives you the result directly; read_html returns a list of DataFrames, one per table it finds, and the [0] selects which table on the page you want. Xici, however, answers a bare GET with a 503, so the clean solution is to fetch the body with requests first and then hand it to pandas.
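For a site that does tolerate a bare GET, the whole job is a single call. A minimal sketch, where the URL is just a hypothetical placeholder for any page carrying an HTML table:

import pandas as pd

# Hypothetical page that serves a plain HTML <table> to a bare GET.
url = 'http://example.com/some-table-page'
tables = pd.read_html(url, header=0)  # a list of DataFrames, one per <table>
df = tables[0]                        # [0] selects the first table on the page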
For Xici, the basic skeleton is really just three lines, plus imports:
import requests
import pandas as pd

# Pretend to be a browser; a bare GET gets a 503 from Xici.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
resp = requests.get('http://www.xicidaili.com/nn/', headers=headers)
df = pd.read_html(resp.content, header=0)[0]  # [0]: the first table on the page
print(df)
The output looks like this:
The next step is to scrape a few more pages and stitch them together. Here I merged the first six pages of IPs. You could of course crawl with scrapy instead, but given how quickly these proxies go stale, the first few pages felt like plenty, and scrapy is overkill for this.
The scraped pages are merged into a single pandas DataFrame, and proxies are then built from it in whatever form the validation needs. I used two validation methods. The first is telnetlib.Telnet: no proxy URL needs to be constructed, you call it directly as telnetlib.Telnet(x, port=str(y), timeout=0.5), where x is the IP address and y is the port.
The second is to attach the proxy to a requests call and see whether a target site is reachable through it. The basic format is proxies = {'http': 'http://162.105.30.101:8080'}, so the IP and port have to be joined into that string first.
(Related: single-IP and multi-IP usage with requests.)
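A minimal sketch of both checks, where the example address from above (162.105.30.101:8080) stands in for any scraped IP and port:

import telnetlib
import requests

ip, port = '162.105.30.101', '8080'  # example address from above, likely long dead

# Check 1: just try to open a TCP connection; failure raises an exception.
try:
    telnetlib.Telnet(ip, port=port, timeout=0.5)
    print('port open')
except Exception:
    print('dead')

# Check 2: route a real request through it. Note the scheme prefix inside the
# proxy URL, and the 'http' key telling requests which scheme to proxy.
proxies = {'http': 'http://{}:{}'.format(ip, port)}
try:
    resp = requests.get('http://www.baidu.com/', proxies=proxies, timeout=0.5)
    print(resp.status_code)
except requests.RequestException as e:
    print('proxy failed:', e)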
df1['add'] = list(map(lambda x, y: 'http://' + str(x) + ':' + str(y), df1['IP地址'], df1['端口'])) is the line that joins each IP and port into that format.
The validation URL is Baidu's homepage. The reason for the second method is that I found some IPs that pass the telnet check still don't actually work; if a request through the proxy comes back with the title '百度一下,你就知道', the proxy almost certainly does.
So the whole flow is: fetch the HTML body with requests, pull the full table out with pandas.read_html, then validate.
import random
from pyquery import PyQuery as pq
import telnetlib
import requests
import pandas as pd


def pa():
    # Scrape the first 6 pages of Xici and telnet-check every IP.
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
    con = [None] * 6  # one DataFrame per page
    for i in range(6):
        if i == 0:
            resp = requests.get('http://www.xicidaili.com/nn/', headers=headers)
        else:
            resp = requests.get('http://www.xicidaili.com/nn/{}'.format(i), headers=headers)
        con[i] = pd.read_html(resp.content, header=0)[0]
    df1 = pd.concat(con, ignore_index=True)
    df1['status'] = list(map(lambda x, y: yanz(x, y), df1['IP地址'], df1['端口']))
    df1.to_csv('vpn4.csv')


def yanz(x, y):
    # First check: can a TCP connection be opened to ip x on port y at all?
    try:
        telnetlib.Telnet(x, port=str(y), timeout=0.5)
    except Exception:
        return 'no'
    else:
        return 'yes'


def shuai():
    # Second pass: keep the telnet survivors and verify each through requests.
    df = pd.read_csv('vpn4.csv')
    df1 = df[df['status'] == 'yes'].copy()  # .copy() avoids SettingWithCopyWarning
    df1['add'] = list(map(lambda x, y: 'http://' + str(x) + ':' + str(y), df1['IP地址'], df1['端口']))
    df1['st'] = df1['add'].map(lambda x: yanzheng(x))
    df1.to_csv('vpn.csv')


def yanzheng(pro):
    # Fetch Baidu through the proxy; return the page title, or the error.
    # Plain-HTTP URL so the request actually goes through the 'http' proxy.
    url = 'http://www.baidu.com/'
    user_agent = [
        "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
        "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
        "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
        "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
        "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
        "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
        "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
        "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
        "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
        "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
    ]
    try:
        resp = requests.get(url, proxies={'http': pro}, timeout=0.5,
                            headers={'User-Agent': random.choice(user_agent)})
        resp.encoding = 'gbk'  # decode the page as GBK before parsing
        res = pq(resp.text)
    except requests.RequestException as e:
        return e
    else:
        return res('title').text()
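To drive the two stages end to end, something like the following works when appended to the script above; the final filter is my own sketch, assuming you only want rows whose title check came back as Baidu's:

if __name__ == '__main__':
    pa()     # scrape six pages and telnet-check -> vpn4.csv
    shuai()  # requests-check the survivors -> vpn.csv

    # Keep only the proxies that really returned Baidu's title.
    good = pd.read_csv('vpn.csv')
    good = good[good['st'] == '百度一下,你就知道']
    print(good[['IP地址', '端口', 'add']])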
The results look like this: