Honestly, this lesson was a bit long and a bit hard; it took me several days to get everything working.
The lesson actually has four parts: extracting information from a single real web page; extracting information from a batch of real pages; extracting the information saved in your own account; and simulating a mobile client to extract images that are otherwise hard to get at.
My Results
(screenshot: 14.JPG)
My Code
from bs4 import BeautifulSoup
import requests
import time
url_saves = 'http://www.tripadvisor.cn/Saves?v=full#303955'
url = 'http://www.tripadvisor.cn/Attractions-g664891-Activities-Macau.html'
urls = ['http://www.tripadvisor.cn/Attractions-g664891-Activities-oa{}-Macau.html#ATTRACTION_LIST'.format(i) for i in range(30, 930, 30)]
headers = {
'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36 115Browser/7.0.0',
'Cookie' : 'TASSK=enc%3AEBisFh79eMUzT3MJTUwN%2BMvxieGPPqnq%2BieJmkZ19iLNWHKSWruCw0P2ixRnAMvI0%2BQbDGL4T5c%3D; TAUnique=%1%enc%3A0K2Wy69nxlFpu0sd%2BkOy%2Bg2r1pMQVIUGRHSz2SLqSMkiC9mUUqh3Gg%3D%3D; _jzqckmp=1; __gads=ID=7335946c9dd2b2c6:T=1463646260:S=ALNI_MZKIDpCNf_76WpWe4pteoH0BB7Ldg; TAAuth2=%1%3%3A637ee648eb67e3e41be6f5e2fc85b6b1%3AAFRZN2WL2V59z%2FVe9ajP3C0F%2BLplh7OXkqCsoddosFiI2osT0tQMyVEEFz4%2F8ChdcCJDfMs0M79588stUTZopXo%2FaJ%2Bnf2HLNUpQ2p7v2gAnVTegQC99DChpbFJdRmzZ9tH%2Fux0elV3OGZDaarloitD1sVIsO2ksEcFKn4S4a8FzhqhMzZnslu41MRFq1GtAJoIbifmPstbE0Afw4dZK4oM%3D; _smt_uid=573d80af.bc4ebfa; bdshare_firstime=1463648453166; TATravelInfo=V2*A.2*MG.-1*HP.2*FL.3*RVL.293917_140l503193_140l6427689_140*RS.1; ServerPool=C; ki_t=1463646229531%3B1463646229531%3B1463673603689%3B1%3B13; ki_r=; CM=%1%HanaPersist%2C%2C-1%7Ct4b-pc%2C%2C-1%7CHanaSession%2C%2C-1%7CFtrSess%2C%2C-1%7CRCPers%2C%2C-1%7CHomeAPers%2C%2C-1%7CWShadeSeen%2C%2C-1%7CRCSess%2C%2C-1%7CFtrPers%2C%2C-1%7CHomeASess%2C%2C-1%7Csh%2C%2C-1%7C2016sticksess%2C%2C-1%7CCpmPopunder_1%2C1%2C1463732646%7CCCPers%2C%2C-1%7CCCSess%2C%2C-1%7CWAR_RESTAURANT_FOOTER_SESSION%2C%2C-1%7Csesssticker%2C%2C-1%7C2016stickpers%2C%2C-1%7Ct4b-sc%2C%2C-1%7CMC_IB_UPSELL_IB_LOGOS2%2C%2C-1%7CMC_IB_UPSELL_IB_LOGOS%2C%2C-1%7Csess_rev%2C3%2C-1%7CSaveFtrPers%2C%2C-1%7CSaveFtrSess%2C%2C-1%7Cpers_rev%2C%2C-1%7CRBASess%2C%2C-1%7Cperssticker%2C%2C-1%7CMetaFtrSess%2C%2C-1%7CRBAPers%2C%2C-1%7CWAR_RESTAURANT_FOOTER_PERSISTANT%2C%2C-1%7CMetaFtrPers%2C%2C-1%7C; TAReturnTo=%1%%2FAttractions-g664891-Activities-Macau.html; roybatty=ANQizvIgk9mg7P1ZdpRYlmCT%2BI4ReEi1jLMRBeLume67cwpQ8f1leiD5rFSZ04pJE6VkPaeLa2OW%2Fh5SlRmreKftvPgy0LjweCDRR9iPoWjtTuPxJ3Jbj%2Be1ydCXLbkwBfZLKD4atIa%2BlbIGdwZqcPcQY8I2JZUjzN1tnrhpjh2m%2C1; NPID=; TASession=%1%V2ID.6B0EA5448E35897C60A5476BB4C9E090*SQ.6*LS.UserReviewController*GR.27*TCPAR.2*TBR.6*EXEX.52*ABTR.23*PPRP.70*PHTB.99*FS.79*CPU.53*HS.popularity*ES.popularity*AS.popularity*DS.5*SAS.popularity*FPS.oldFirst*TS.E2EAD86EE045C9196C22C29430AAF1CB*FA.1*DF.0*LP.%2FAttractions-g664891-Activities-Macau%5C.html*FLO.664891*TRA.true*LD.664891; TAUD=LA-1463673628881-1*LG-29309-2.1.F*LD-29311-.....; Hm_lvt_2947ca2c006be346c7a024ce1ad9c24a=1463646225,1463673598; Hm_lpvt_2947ca2c006be346c7a024ce1ad9c24a=1463673626; _qzja=1.652561790.1463646228659.1463648120069.1463673598195.1463673598195.1463673627363..0.0.16.3; _qzjb=1.1463673598194.2.0.0.0; _qzjc=1; _qzjto=1.0.0; _jzqa=1.2281301167737561300.1463646229.1463648120.1463673598.3; _jzqc=1; _jzqb=1.2.10.1463673598.1'
}
def get_attractions(url, data=None):
    # fetch one attraction-list page, then pause so we don't hit the site too fast
    wb_data = requests.get(url)
    time.sleep(4)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    titles = soup.select('div.property_title > a[target="_blank"]')
    imgs = soup.select('img[width="160"]')
    cates = soup.select('div.p13n_reasoning_v2 > a')
    # print(titles, imgs, cates, sep='\n--------------\n')
    if data is None:
        for title, img, cate in zip(titles, imgs, cates):
            data = {
                'title': title.get_text(),
                'img': img.get('src'),
                'cates': list(cate.stripped_strings)
            }
            print(data)
def get_favs(url, data=None):
    # the headers carry the login cookie, so the saves page is fetched as a logged-in user
    wb_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    titles = soup.select('div > a.location-name')
    imgs = soup.select('img.photo_image')
    addresses = soup.select('div > span.format_address')
    if data is None:
        for title, img, address in zip(titles, imgs, addresses):
            data = {
                'title': title.get_text(),
                'img': img.get('src'),
                'address': list(address.stripped_strings)
            }
            print(data)
for single_url in urls:
    get_attractions(single_url)
My Summary
- Real pages differ from local pages in the libraries you need to import: for a real page you must import the requests library.
import requests
- The function used to parse the page also differs. For a local page it is:
with open('file_path') as wb_data:
For a real page:
wb_data = requests.get(url)
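A minimal sketch contrasting the two approaches (local_page.html is a placeholder file name, not from the course):
from bs4 import BeautifulSoup
import requests

# local page: read the HTML straight from disk
with open('local_page.html') as wb_data:   # placeholder path
    local_soup = BeautifulSoup(wb_data.read(), 'lxml')

# real page: fetch the HTML over HTTP first
wb_data = requests.get('http://www.tripadvisor.cn/Attractions-g664891-Activities-Macau.html')
web_soup = BeautifulSoup(wb_data.text, 'lxml')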
- Extracting information from a real page is more involved; simply copying the selector path from the browser is not enough. For example:
titles = soup.select('div.property_title > a[target="_blank"]')
The bare selector matches far too many a tags here, so you have to narrow it down with some distinguishing attribute inside the tag.
Another example:
imgs = soup.select('img[width="160"]')
Here [width="160"] singles out the listing images.
And another:
titles = soup.select('div > a.location-name')
Here you take the a tag as a.location-name, then search the page source for that class and check whether the number of hits matches the number of elements the selector returns. If the counts agree, the path is correct; a quick way to script that check is sketched below.
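A sketch of that count check, using the titles selector from the code above:
from bs4 import BeautifulSoup
import requests

wb_data = requests.get('http://www.tripadvisor.cn/Attractions-g664891-Activities-Macau.html')
soup = BeautifulSoup(wb_data.text, 'lxml')
titles = soup.select('div.property_title > a[target="_blank"]')
# compare this number against the hit count when you search the
# page source for property_title; they should agree
print(len(titles))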
- Putting the browser's "User-Agent" and "Cookie" into headers lets you simulate a logged-in user as well as a mobile client. Both values can be copied from the request shown under the Network tab of the browser's developer tools; a sketch of a logged-in request follows.
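A sketch of such a logged-in request (both header values are placeholders; paste your own from the Network tab):
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36',
    'Cookie': 'TASSK=...; TAUnique=...'   # placeholder: your own cookie string goes here
}
wb_data = requests.get('http://www.tripadvisor.cn/Saves?v=full#303955', headers=headers)
print(wb_data.status_code)   # 200 means the server accepted the request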
- To scrape a whole batch of real pages, you have to spot the pattern in their URLs. For example:
urls = ['http://www.tripadvisor.cn/Attractions-g664891-Activities-oa{}-Macau.html#ATTRACTION_LIST'.format(i) for i in range(30, 930, 30)]
The {} is where the value gets substituted; this site pages in steps of 30, so the offsets run 30, 60, 90, …, 900 (note that range(30, 930, 30) stops before 930). A quick sanity check is sketched below.
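A quick sanity check of that comprehension, printing the first and last generated URLs:
offsets = list(range(30, 930, 30))
print(offsets[0], offsets[-1])   # 30 900
urls = ['http://www.tripadvisor.cn/Attractions-g664891-Activities-oa{}-Macau.html#ATTRACTION_LIST'.format(i)
        for i in offsets]
print(urls[0])    # ...oa30-Macau.html#ATTRACTION_LIST
print(urls[-1])   # ...oa900-Macau.html#ATTRACTION_LIST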
- Out on the real network many sites have anti-scraping measures, so you cannot keep fetching data at full speed. We therefore bring in the time library and fetch once every 4 seconds.
import time
time.sleep(4)
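Dropped into the scraping loop, the delay looks like this (a sketch; it reuses the urls list built above):
import time
import requests

for single_url in urls:
    time.sleep(4)                  # pause 4 seconds before each request
    wb_data = requests.get(single_url)
    print(single_url, wb_data.status_code)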
- Use the def keyword to wrap each task in its own function; that way both scraping routines live in one script and can each be run when needed.
- Some images are driven by js code, which only renders an image gallery on the page. Since what we want is the image files themselves, it is worth changing the headers to impersonate a mobile browser and extract the images that way: mobile pages use very little js for compatibility reasons, so the images come out much more easily.
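A sketch of that mobile trick (the iPhone User-Agent string and the plain img selector are illustrative assumptions, not taken from the course):
import requests
from bs4 import BeautifulSoup

mobile_headers = {
    # an iPhone User-Agent makes the server return the lighter mobile page
    'User-Agent': ('Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) '
                   'AppleWebKit/601.1.46 (KHTML, like Gecko) Version/9.0 Mobile/13B143 Safari/601.1')
}
wb_data = requests.get('http://www.tripadvisor.cn/Attractions-g664891-Activities-Macau.html',
                       headers=mobile_headers)
soup = BeautifulSoup(wb_data.text, 'lxml')
for img in soup.select('img'):
    print(img.get('src'))   # on the mobile page the real image URLs sit in src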
- This lesson boils down to four points: one, parsing a single real web page; two, parsing a batch of consecutive pages; three, simulating a mobile client to extract hard-to-get images; four, simulating a logged-in account to extract the information saved in your own account.