Crawling News with Scrapy (Part 1)

Scrapy Item Pipeline study notes

The Item Pipeline is mainly used to collect the Items produced after a spider scrapes a page and to write them to a database or a file.

How it works

After the spider obtains an item, it passes it on to the item pipeline for further data collection.
The path of the item pipeline class is configured in the settings, and the Scrapy framework then invokes that class. For the framework to call it correctly, the pipeline class must implement a few methods in the form the framework expects; as a user you only need to focus on implementing those methods (see the sketch below).
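As a minimal sketch (the class name here is made up; only the hooks Scrapy actually looks for matter), a pipeline boils down to the following. process_item is the only required method, the others are optional:

class MinimalPipeline(object):

    def process_item(self, item, spider):
        # called for every item the spider yields;
        # return the item to keep it, or raise DropItem to discard it
        return item

    def open_spider(self, spider):
        # optional: called once when the spider is opened
        pass

    def close_spider(self, spider):
        # optional: called once when the spider is closed
        pass

    @classmethod
    def from_crawler(cls, crawler):
        # optional: build the pipeline instance from the crawler's settings
        return cls()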

Example

The file below implements a simple item pipeline class that does some further processing on the scraped news data and writes it to a file. The purpose of each method is described in its docstring.
1. File: pipelines.py

Notes:
1. The initializer can be written quite freely, with no restriction on its parameters; it only has to be callable from the from_crawler class method so that an instance can be created.
2. The methods used by the framework must keep the declared, fixed signatures (so that the framework can call them correctly).

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

from scrapy.exceptions import DropItem

class News2FileFor163Pipeline(object):
    """
    pipeline: process items given by spider
    """

    def __init__(self, filepath, filename):
        """
        init for the pipeline class
        """
        self.fullname = filepath + '/' + filename
        self.id = 0
        return

    def process_item(self, item, spider):
        """
        process each items from the spider.
        example: check if item is ok or raise DropItem exception.
        example: do some process before writing into database.
        example: check if item is exist and drop.
        """
        for element in ("url","source","title","editor","time","content"):
            if item[element] is None:
                raise DropItem("invalid items url: %s" % str(item["url"]))
        self.fs.write("news id: %s" % self.id)
        self.fs.write("\n")
        self.id += 1
        self.fs.write("url: %s" % item["url"][0].strip().encode('UTF-8'))
        self.fs.write("\n")
        self.fs.write("source: %s" % item["source"][0].strip().encode('UTF-8'))
        self.fs.write("\n")
        self.fs.write("title: %s" % item["title"][0].strip().encode('UTF-8'))
        self.fs.write("\n")
        self.fs.write("editor: %s" % item["editor"][0].strip().
                      encode('UTF-8').split(':')[1])
        self.fs.write("\n")
        time_string = item["time"][0].strip().split()
        datetime = time_string[0] + ' ' + time_string[1]
        self.fs.write("time: %s" % datetime.encode('UTF-8'))
        self.fs.write("\n")
        content = ""
        for para in item["content"]:
            content += para.strip().replace('\n', '').replace('\t', '')
        self.fs.write("content: %s" % content.encode('UTF-8'))
        self.fs.write("\n")
        return item

    def open_spider(self, spider):
        """
        called when spider is opened.
        do something before pipeline is processing items.
        example: do settings or create connection to the database.
        """
        self.fs = open(self.fullname, 'w+')
        return

    def close_spider(self, spider):
        """
        called when spider is closed.
        do something after pipeline processing all items.
        example: close the database.
        """
        self.fs.flush()
        self.fs.close()
        return

    @classmethod
    def from_crawler(cls, crawler):
        """
        return a pipeline instance.
        example: initialize pipeline object by crawler's setting and components.
        """
        return cls(crawler.settings.get('ITEM_FILE_PATH'),
                   crawler.settings.get('ITEM_FILE_NAME'))

2. File: settings.py (the pipeline configuration for this scraping project)
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'NewsSpiderMan.pipelines.News2FileFor163Pipeline': 300,
}
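
The from_crawler method above also reads two project-specific keys that are not built into Scrapy, so they have to be defined in settings.py as well. The values below are placeholders; adjust them to your environment:

# Custom settings read by News2FileFor163Pipeline.from_crawler
ITEM_FILE_PATH = '/tmp'            # hypothetical output directory
ITEM_FILE_NAME = 'news_163.txt'    # hypothetical output file name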

Going further

If the amount of scraped content is very large, the right approach is to use an item pipeline to process the data and write it into a database; a rough sketch follows.
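
As an illustration of that idea (not part of the original post; the database file, table name, and field handling are all assumptions), a pipeline that stores the same fields in an SQLite database could look like this:

import sqlite3

class News2SQLitePipeline(object):
    """
    hypothetical pipeline: store news items in an SQLite database
    """

    def open_spider(self, spider):
        # open the database file and make sure the target table exists
        self.conn = sqlite3.connect('news.db')
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS news "
            "(url TEXT, source TEXT, title TEXT, editor TEXT, time TEXT, content TEXT)")

    def process_item(self, item, spider):
        # insert one row per item; the fields mirror the file-based pipeline above
        self.conn.execute(
            "INSERT INTO news VALUES (?, ?, ?, ?, ?, ?)",
            (item["url"][0], item["source"][0], item["title"][0],
             item["editor"][0], item["time"][0], "".join(item["content"])))
        return item

    def close_spider(self, spider):
        # commit once at the end and close the connection
        self.conn.commit()
        self.conn.close()

For larger crawls, committing periodically inside process_item (or batching inserts) would reduce the risk of losing data if the crawl is interrupted.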
