Today's crawler scrapes product listings from a classifieds site (58.com). The tricky part is fetching each listing's view count: the request must carry a forged Referer header, and the count cannot be scraped straight from the page HTML, where it always reads 0, because it is filled in by JavaScript. With Chrome's DevTools open, you can find the request that returns the view count in the Network panel:
http://jst1.58.com/counter?infoid={}
The view count is retrieved via the infoid parameter, which is part of the product URL, so it has to be extracted from the URL first.
The code is as follows:
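As a minimal sketch of that lookup (the product URL below is made up, and the counter response body shown is an assumed example of the plain-text "key=value" reply the endpoint sends back):

```python
# Hypothetical product URL; the infoid is the digits before "x.shtml".
url = 'http://cd.58.com/taishiji/25982864757281x.shtml'
infoid = url.split('/')[-1].split('x.shtml')[0]
api = 'http://jst1.58.com/counter?infoid={}'.format(infoid)
print(api)  # http://jst1.58.com/counter?infoid=25982864757281

# Assumed sample of the counter's plain-text response; the count is
# whatever follows the '=' sign.
resp_text = 'Counter58.userscount=1024'
views = resp_text.split('=')[-1]
print(views)  # 1024
```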
from bs4 import BeautifulSoup
import requests
import time

headers = {
    'User-Agent': 'xxxxx',
    'Referer': 'xxxxx',
    'Cookie': 'xxxxx'
}

# Build the list of listing-page URLs to crawl
def get_pages_num(who_sells, page_num):
    base_urls = ['http://cd.58.com/taishiji/{}/pn{}'.format(who_sells, n)
                 for n in range(1, page_num + 1)]
    return base_urls

# Collect the individual product links from every listing page
def get_links_from(who_sells, page_num):
    base_urls = get_pages_num(who_sells, page_num)
    links = []
    for url in base_urls:
        time.sleep(1)
        r = requests.get(url, headers=headers).text
        soup = BeautifulSoup(r, 'lxml')
        for link in soup.select('td.t > a'):
            href = link.get('href').split('?')[0]
            if len(href) == 46:  # keep only product-detail URLs
                links.append(href)
    return links

# Fetch the view count from the counter API
def get_views(url):
    id_num = url.split('/')[-1].split('x.shtml')[0]
    api = 'http://jst1.58.com/counter?infoid={}'.format(id_num)
    js = requests.get(api, headers=headers)
    views = js.text.split('=')[-1]
    return views

# Scrape the details of every product
def get_item_info(who_sells=0, page_num=1):
    urls = get_links_from(who_sells, page_num)
    for url in urls:
        time.sleep(2)
        r = requests.get(url, headers=headers)
        soup = BeautifulSoup(r.text, 'lxml')
        title = soup.title.text
        price = soup.find_all('span', 'price c_f50')[0].text
        area = list(soup.select('.c_25d')[-1].stripped_strings)
        date = soup.select('li.time')[0].text
        data = {
            'title': title,
            'price': price,
            'date': date,
            'area': ''.join(area) if len(soup.select('.c_25d')) == 2 else None,
            'cate': '個人' if who_sells == 0 else '商家',  # seller type decided by the parameter
            'views': get_views(url)
        }
        print(data)

get_item_info(page_num=3)
The code takes two parameters: who_sells, where 0 means an individual seller and 1 means a merchant, and page_num, which sets how many listing pages to crawl.
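The effect of the two parameters can be sketched without touching the network: who_sells slots into the listing URL's path and page_num drives the pn1, pn2, ... page suffixes. The helper below mirrors the URL-building logic of get_pages_num for illustration:

```python
# Sketch of how the two parameters shape the crawl:
#   who_sells -> seller type in the URL path (0 = individual, 1 = merchant)
#   page_num  -> how many pnN listing pages are generated
def listing_urls(who_sells, page_num):
    return ['http://cd.58.com/taishiji/{}/pn{}'.format(who_sells, n)
            for n in range(1, page_num + 1)]

print(listing_urls(1, 2))
# ['http://cd.58.com/taishiji/1/pn1', 'http://cd.58.com/taishiji/1/pn2']
```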