Week 2 Homework

Assignment 1: Filter listings by price in MongoDB

Code:

import pymongo
from bs4 import BeautifulSoup
import requests

client = pymongo.MongoClient('localhost',27017)
xiaozhu = client['xiaozhu']
info = xiaozhu['info']

def get_info(page=1):
    for p in range(1, page + 1):
        url = 'http://bj.xiaozhu.com/search-duanzufang-p{}-0/'.format(p)
        web_data = requests.get(url)
        soup = BeautifulSoup(web_data.text, 'lxml')
        titles = soup.select('span.result_title')
        prices = soup.select('span.result_price > i')
        for title, price in zip(titles, prices):
            data = {
                'title': title.get_text(),
                'price': int(price.get_text())  # store as int so it can be compared later
            }
            info.insert_one(data)
    print('Data collection complete!')

#get_info(3)

for fang in info.find():
    if fang['price'] >= 500:
        print(fang)
Database contents (screenshot)
Filter results (screenshot)

Summary: Got familiar with installing MongoDB, and solved the auto-start issue along the way by searching online. Learned the basics of MongoDB: inserting data and querying data.
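As a side note, the price filter in the loop above can also be pushed into MongoDB itself with a query document. The sketch below shows the standard `$gte` operator in a comment, then runs the same predicate in plain Python over made-up sample rows so it works without a database connection.

```python
# With a live MongoDB connection, the filtering loop above becomes a
# server-side query using the standard $gte operator:
#     for fang in info.find({'price': {'$gte': 500}}):
#         print(fang)
# The same predicate in plain Python, over made-up sample rows so this
# sketch runs without a database:
rows = [{'title': 'A', 'price': 450}, {'title': 'B', 'price': 600}]
filtered = [r for r in rows if r['price'] >= 500]
print(filtered)
```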

Assignment 2: Scraping phone numbers

Code:

from bs4 import BeautifulSoup
import requests
import pymongo
import time

client = pymongo.MongoClient('localhost',27017)
tel = client['tel']
url_list = tel['url_list']
tel_list = tel['tel_list']

def get_tel(page=1):
    for p in range(1, page + 1):
        url = 'http://wh.58.com/shoujihao/pn{}/'.format(p)
        web_data = requests.get(url)
        soup = BeautifulSoup(web_data.text, 'lxml')
        time.sleep(1)  # throttle requests to be polite to the server
        if soup.find('a', 'next'):  # a "next page" link means this page still has listings
            titles = soup.select('a.t > strong.number')
            links = soup.select('a.t')
            for title, link in zip(titles, links):
                # keep only real listing links on wh.58, skipping promoted links
                if link.get('href').rfind('wh.58') != -1:
                    data = {
                        'title': title.get_text(),
                        'link': link.get('href').split('?')[0],  # drop the query string
                    }
                    url_list.insert_one(data)
        else:
            print('No more pages to crawl')

#get_tel(120)
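The `rfind` check in `get_tel` is what separates real listing links from promoted ones. Here is a standalone illustration of that filter; the hrefs are made-up examples, not real 58.com pages.

```python
# Illustration of the link filter used in get_tel: keep only hrefs that
# contain 'wh.58', and strip the tracking query string with split('?').
# The hrefs below are made-up examples.
hrefs = [
    'http://wh.58.com/shoujihao/123.shtml?from=ad',
    'http://jump.58.com/promo/456',
]
kept = [h.split('?')[0] for h in hrefs if h.rfind('wh.58') != -1]
print(kept)
```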

def get_item_info():
    for item in url_list.find():
        url = item['link']
        web_data = requests.get(url)
        soup = BeautifulSoup(web_data.text, 'lxml')
        data = {
            'number': item['title'],
            'price': soup.select('.price')[0].text.strip(),
        }
        tel_list.insert_one(data)
    print('Done')

get_item_info()
Result of storing the scraped links in the database (screenshot)
Result of the detail-info database (screenshot); this was just for practice, so the detail pages were not fully scraped

Summary: first scrape the links and store them in the database, then scrape the detail pages.
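The two-stage pattern the summary describes can be sketched as follows; `fetch_listing` and `fetch_detail` are hypothetical stand-ins for the real `requests` calls, so the sketch runs offline.

```python
# Stage 1 harvests links from listing pages; stage 2 visits each link
# for details. fetch_listing / fetch_detail are offline stand-ins for
# the real requests + BeautifulSoup calls.
def fetch_listing(pages):
    # stand-in for requesting and parsing each listing page
    return ['http://example.com/item/{}'.format(i) for i in range(pages * 2)]

def fetch_detail(url):
    # stand-in for requesting and parsing one detail page
    return {'url': url, 'price': 'N/A'}

links = fetch_listing(2)                    # stage 1: collect links
details = [fetch_detail(u) for u in links]  # stage 2: fetch details
print(len(details))
```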

Assignment 3: Resume scraping from a breakpoint

A small modification of the previous assignment

def get_item_info():
    db_urls = [item['link'] for item in url_list.find()]    # all links to crawl
    index_urls = [item['url'] for item in tel_list.find()]  # links already crawled
    x = set(db_urls)
    y = set(index_urls)
    rest_urls = x - y  # what remains to be crawled after the interruption
    for url in rest_urls:
        web_data = requests.get(url)
        soup = BeautifulSoup(web_data.text, 'lxml')
        item = url_list.find_one({'link': url})  # look up the stored title for this link
        data = {
            'number': item['title'],
            'price': soup.select('.price')[0].text.strip(),
            'url': url,
        }
        tel_list.insert_one(data)
    print('Done')
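The resume trick boils down to a set difference: subtract the URLs already saved from the full URL list. A minimal sketch with made-up URLs:

```python
# Resume-from-breakpoint: subtract the URLs already crawled from the
# full set, leaving only what still needs to be fetched.
all_urls = {'http://example.com/a', 'http://example.com/b', 'http://example.com/c'}
crawled = {'http://example.com/a', 'http://example.com/b'}
rest_urls = all_urls - crawled
print(sorted(rest_urls))
```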