pyspider source code: the result worker (resultdb)

import logging
from six.moves import queue as Queue

logger = logging.getLogger("result")


class ResultWorker(object):

    """
    do with result
    override this if needed.
    """

    def __init__(self, resultdb, inqueue):
        self.resultdb = resultdb
        self.inqueue = inqueue
        self._quit = False

    def on_result(self, task, result):
        '''Called every result'''
        if not result:
            return
        if 'taskid' in task and 'project' in task and 'url' in task:
            logger.info('result %s:%s %s -> %.30r' % (
                task['project'], task['taskid'], task['url'], result))
            return self.resultdb.save(
                project=task['project'],
                taskid=task['taskid'],
                url=task['url'],
                result=result
            )
        else:
            logger.warning('result UNKNOW -> %.30r' % result)
            return

    def quit(self):
        self._quit = True

    def run(self):
        '''Run loop'''
        logger.info("result_worker starting...")

        while not self._quit:
            try:
                task, result = self.inqueue.get(timeout=1)
                self.on_result(task, result)
            except Queue.Empty as e:
                continue
            except KeyboardInterrupt:
                break
            except AssertionError as e:
                logger.error(e)
                continue
            except Exception as e:
                logger.exception(e)
                continue

        logger.info("result_worker exiting...")

Invocation

@cli.command()
@click.option('--result-cls', default='pyspider.result.ResultWorker', callback=load_cls,
              help='ResultWorker class to be used.')
@click.pass_context
def result_worker(ctx, result_cls, get_object=False):
    """
    Run result worker.
    """
    g = ctx.obj
    ResultWorker = load_cls(None, None, result_cls)

    result_worker = ResultWorker(resultdb=g.resultdb, inqueue=g.processor2result)

    g.instances.append(result_worker)
    if g.get('testing_mode') or get_object:
        return result_worker

    result_worker.run()

This is where ResultWorker's run method gets called.
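
Because the CLI accepts --result-cls, you can swap in your own worker without touching pyspider itself. A minimal sketch, assuming a module my_result_worker.py on the Python path (the module and class names here are mine, not pyspider's):

    # my_result_worker.py -- hypothetical example module
    import json

    from pyspider.result import ResultWorker


    class JsonLinesResultWorker(ResultWorker):
        """Append every result to a JSON-lines file instead of resultdb."""

        def on_result(self, task, result):
            if not result:
                return
            with open('results.jsonl', 'a') as f:
                f.write(json.dumps({'url': task.get('url'),
                                    'result': result}) + '\n')

Started with --result-cls=my_result_worker.JsonLinesResultWorker, the load_cls callback resolves the dotted path and the run loop stays exactly the same; only on_result changes.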

run

In fact, run does just one thing:

    task, result = self.inqueue.get(timeout=1)
    self.on_result(task, result)

run pulls a task and a result off the queue and hands them to on_result().

A sample task payload:
{'type': 1, 'group': None, 'project_updatetime': 1507644612.9016058, 'project': 'baidu', 'process': {'callback': 'detail_page'}, 'lastcrawltime': None, 'taskid': '7e243bf3e4e3950fbc6eb09076c18fe0', 'schedule': {'priority': 2}, 'project_md5sum': '9ddb79b9257976f959bf6a64cb3918b8', 'url': 'http://e.baidu.com/?refer=888', 'fetch': {}, 'track': {}, 'status': 1}

Let's pull out a few of the fields:

  • project, process, url, project_md5sum, lastcrawltime, taskid: these mean exactly what their names say.
  • {'callback': 'detail_page'}: the callback is detail_page every time, which means everything the result module pulls off the queue is a final scraped result; that filtering is presumably done by the scheduler.
And the matching result:
{'url': 'http://e.baidu.com/?refer=888', 'title': '百度推廣官方網(wǎng)站 | 讓有需求的客戶找到你'}

That result is simply whatever the spider's callback returns; in my test project, for example:

    @config(priority=2)
    def detail_page(self, response):
        return {
            "url": response.url,
            "title": response.doc('title').text(),
        }
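
With the sample task and result above, you can exercise the whole path by hand, using a stub in place of a real resultdb connection (StubResultDB is my own illustration):

    from six.moves import queue as Queue

    from pyspider.result import ResultWorker


    class StubResultDB(object):
        """Fake resultdb exposing only the save() method on_result calls."""

        def save(self, project, taskid, url, result):
            print('save %s %s %s -> %r' % (project, taskid, url, result))


    task = {'project': 'baidu',
            'taskid': '7e243bf3e4e3950fbc6eb09076c18fe0',
            'url': 'http://e.baidu.com/?refer=888'}
    result = {'url': 'http://e.baidu.com/?refer=888',
              'title': '百度推廣官方網(wǎng)站 | 讓有需求的客戶找到你'}

    inqueue = Queue.Queue()
    inqueue.put((task, result))          # the processor normally does this

    worker = ResultWorker(resultdb=StubResultDB(), inqueue=inqueue)
    worker.on_result(*inqueue.get())     # one iteration of run(), minus the loop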

on_result
    return self.resultdb.save(
        project=task['project'],
        taskid=task['taskid'],
        url=task['url'],
        result=result
    )

It saves the result into the database. The database connection layer is outside the scope of this walkthrough; feel free to skip its concrete implementation.
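
For completeness: on_result only relies on resultdb exposing a save(project, taskid, url, result) method; pyspider's real backends live under pyspider/database/. A rough sketch of what such a backend boils down to, using sqlite3 (MiniResultDB and its schema are my own illustration, not pyspider's actual implementation):

    import json
    import time
    import sqlite3


    class MiniResultDB(object):
        """Toy resultdb: one table, newest write per taskid wins."""

        def __init__(self, path=':memory:'):
            self.conn = sqlite3.connect(path)
            self.conn.execute(
                'CREATE TABLE IF NOT EXISTS result ('
                ' taskid TEXT PRIMARY KEY, project TEXT, url TEXT,'
                ' result TEXT, updatetime REAL)')

        def save(self, project, taskid, url, result):
            self.conn.execute(
                'REPLACE INTO result VALUES (?, ?, ?, ?, ?)',
                (taskid, project, url, json.dumps(result), time.time()))
            self.conn.commit()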
