How to add parameters to start_urls in a scrapy-redis distributed spider

1. Background

  • The requirement: crawl data under four links, A, B, C, and D, but the amount of data to crawl differs per link:
URL link: number of products to crawl
A:        10
B:        20
C:        5
D:        32
  • First, make sure you are familiar with the basic framework of a scrapy-redis distributed spider.
  • In a non-distributed Scrapy spider this parameter is easy to pass along: just override the spider's start_requests method (a sketch follows at the end of this section).
  • In a scrapy-redis distributed spider, however, the spider is started through the redis_key configured in the spider. For example, we specify in the spider:
redis_key = 'amazonCategory:start_urls'
  • Then, as soon as some URLs are pushed into amazonCategory:start_urls, the spider starts, pops these URLs, and crawls them.
  • However, a scrapy-redis distributed spider only accepts bare URLs in start_urls, so there is no way to associate each URL with its "number of products to crawl" parameter.
  • Of course, in parse, the default callback for start_urls, you could use the URL to look up the corresponding "number of products to crawl". But there is a big problem with that: scrapy-redis builds the start_urls requests entirely with default parameters, which means redirects are followed by default. Any of the four links A, B, C, and D may be redirected to another page during the crawl. In that case, the response.url seen in parse is no longer the original A/B/C/D link but the post-redirect URL, and it can no longer be matched against the "number of products to crawl" parameter.
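For reference, here is what the non-distributed case mentioned above could look like: a minimal sketch of overriding start_requests so that each URL carries its "number of products to crawl" in meta. The URLs, counts, and field names below are placeholders, not this project's real values.

# File: categorySpider.py -- a minimal sketch for plain (non-distributed) Scrapy
import scrapy


class CategorySpider(scrapy.Spider):
    name = 'categorySpider'

    # Placeholder URLs standing in for the A/B/C/D links and their product counts
    start_info = {
        'https://www.amazon.com/category-A': 10,
        'https://www.amazon.com/category-B': 20,
        'https://www.amazon.com/category-C': 5,
        'https://www.amazon.com/category-D': 32,
    }

    def start_requests(self):
        for url, product_count in self.start_info.items():
            # Carry the per-URL parameter along with the request via meta
            yield scrapy.Request(url, meta={'productCount': product_count}, callback=self.parse)

    def parse(self, response):
        # The parameter travels with the request and is available here
        product_count = response.meta['productCount']
        self.logger.info("need to crawl %s products from %s", product_count, response.url)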

2. Environment

  • OS: Windows 7
  • scrapy 1.4.0
  • scrapy-redis 0.6.8
  • redis 3.0.5
  • python 3.6.1

3. Analyzing the startup process of a scrapy-redis distributed spider

(Scrapy architecture diagram)

  • The diagram above shows the latest Scrapy architecture. scrapy-redis simply glues Scrapy and Redis together, so analyzing this framework is enough.
  • The entire startup process can be summarized as follows:

    • Step 1: the spider specifies its redis_key, starts, and waits for start URLs.
    • Step 2: a script is run to push start_urls into the redis_key (see the sketch at the end of this section).
    • Step 3: the spider notices that the redis_key now contains start_urls and begins popping these URLs.
    • Step 4: the spider packs these URLs into requests using the default parameters.
    • Step 5: the requests are sent to the scheduler module and enter the waiting queue, waiting to be scheduled.
    • Step 6: the scheduler schedules the requests, dequeues them, and sends them to the crawler engine.
    • Step 7: the engine passes the requests through the downloader middlewares (there can be several: adding headers, proxies, custom middlewares, and so on).
    • Step 8: after that processing, the requests are sent to the Downloader module for downloading.
    • …… the later stages are not relevant to this topic and are skipped for now.
  • Looking at the whole process, there is only one place to hook in: "Step 4: the spider packs these URLs into requests using the default parameters." We need to add some parameters while the URLs are being turned into requests.
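For reference, a minimal sketch of the script in Step 2, which fills the redis_key with start URLs. The redis host/port/db and the URLs are assumptions; adjust them to the project's REDIS_* settings.

# File: pushStartUrls.py -- a minimal sketch, assuming a local redis and placeholder URLs
import redis

r = redis.StrictRedis(host='127.0.0.1', port=6379, db=0)

start_urls = [
    'https://www.amazon.com/category-A',
    'https://www.amazon.com/category-B',
]

for url in start_urls:
    # scrapy-redis pops with lpop by default (spop if REDIS_START_URLS_AS_SET is True),
    # so push with lpush (or sadd) accordingly
    r.lpush('amazonCategory:start_urls', url)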

4. Source-code analysis of the spider startup process

  • Step 1: our spider inherits from the class RedisSpider:
# File: mySpider.py
class MySpider(RedisSpider):

# File: E:\Miniconda\Lib\site-packages\scrapy_redis\spiders.py
# RedisSpider inherits from RedisMixin and Spider
class RedisSpider(RedisMixin, Spider):
  • Step 2: analyze the class RedisMixin
# File: E:\Miniconda\Lib\site-packages\scrapy_redis\spiders.py

class RedisMixin(object):
    """Mixin class to implement reading urls from a redis queue."""
    redis_key = None
    redis_batch_size = None
    redis_encoding = None

    # Redis client placeholder.
    server = None

    # Note: step 1 of the scrapy-redis spider startup
    def start_requests(self):
        """Returns a batch of start requests from redis."""
        print(f"######1 spiders.py RedisMixin: start_requests")
        return self.next_requests()

    def setup_redis(self, crawler=None):
        """Setup redis connection and idle signal.

        This should be called after the spider has set its crawler object.
        """
        if self.server is not None:
            return

        if crawler is None:
            # We allow optional crawler argument to keep backwards
            # compatibility.
            # XXX: Raise a deprecation warning.
            crawler = getattr(self, 'crawler', None)

        if crawler is None:
            raise ValueError("crawler is required")

        settings = crawler.settings

        if self.redis_key is None:
            self.redis_key = settings.get(
                'REDIS_START_URLS_KEY', defaults.START_URLS_KEY,
            )

        self.redis_key = self.redis_key % {'name': self.name}

        if not self.redis_key.strip():
            raise ValueError("redis_key must not be empty")

        if self.redis_batch_size is None:
            # TODO: Deprecate this setting (REDIS_START_URLS_BATCH_SIZE).
            self.redis_batch_size = settings.getint(
                'REDIS_START_URLS_BATCH_SIZE',
                settings.getint('CONCURRENT_REQUESTS'),
            )

        try:
            self.redis_batch_size = int(self.redis_batch_size)
        except (TypeError, ValueError):
            raise ValueError("redis_batch_size must be an integer")

        if self.redis_encoding is None:
            self.redis_encoding = settings.get('REDIS_ENCODING', defaults.REDIS_ENCODING)

        self.logger.info("Reading start URLs from redis key '%(redis_key)s' "
                         "(batch size: %(redis_batch_size)s, encoding: %(redis_encoding)s",
                         self.__dict__)

        self.server = connection.from_settings(crawler.settings)
        # The idle signal is called when the spider has no requests left,
        # that's when we will schedule new requests from redis queue
        crawler.signals.connect(self.spider_idle, signal=signals.spider_idle)

    # Note: pop data (by default URLs) from the redis_key, then build requests from these URLs
    def next_requests(self):
        """Returns a request to be scheduled or none."""
        print(f"######2 spiders.py RedisMixin: next_requests")
        use_set = self.settings.getbool('REDIS_START_URLS_AS_SET', defaults.START_URLS_AS_SET)
        fetch_one = self.server.spop if use_set else self.server.lpop
        # XXX: Do we need to use a timeout here?
        found = 0
        # TODO: Use redis pipeline execution.
        # Note: redis_batch_size defaults to CONCURRENT_REQUESTS from settings, the number of requests issued per batch
        #       at most this many URLs are popped and turned into requests in one call
        while found < self.redis_batch_size:
            # Note: read data from the redis_key, i.e. the start_urls
            data = fetch_one(self.redis_key)
            if not data:
                # Note: all data in the redis_key has been consumed
                # Queue empty.
                break
            # Note: build a request from the data (a URL) popped from the redis_key. This is the key point
            req = self.make_request_from_data(data)
            if req:
                # Note: hand the built request over for scheduling
                yield req
                found += 1
            else:
                self.logger.debug("Request not made from data: %r", data)

        if found:
            self.logger.debug("Read %s requests from '%s'", found, self.redis_key)


    # Note: build a request from the data (a URL) popped from the redis_key. This is the key point
    def make_request_from_data(self, data):
        """Returns a Request instance from data coming from Redis.

        By default, ``data`` is an encoded URL. You can override this method to
        provide your own message decoding.

        Parameters
        ----------
        data : bytes
            Message from redis.

        """
        url = bytes_to_str(data, self.redis_encoding)
        # return self.make_requests_from_url(url)
        # Note: this is the step where make_requests_from_url is rewritten to add some specified parameters.
        #       Note in particular that self.make_requests_from_url(url) here refers to the method in
        #       E:\Miniconda\Lib\site-packages\scrapy\spiders\__init__.py.
        #       This part only affects how the start_urls requests are built. Follow-up requests are built in
        #       mySpider.py and scheduled by the scheduler module; they never enter this code path.
        print(f"######3 spiders.py RedisMixin: make_request_from_data, url={url}")
        return self.make_requests_from_url_DontRedirect(url)

    def schedule_next_requests(self):
        """Schedules a request if available"""
        # TODO: While there is capacity, schedule a batch of redis requests.
        for req in self.next_requests():
            self.crawler.engine.crawl(req, spider=self)

    def spider_idle(self):
        """Schedules a request if available, otherwise waits."""
        # XXX: Handle a sentinel to close the spider.
        self.schedule_next_requests()
        raise DontCloseSpider
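As a side note, the settings read by setup_redis() above map to keys in the project's settings.py. A minimal sketch follows; the values are examples, not this project's actual configuration, and the defaults noted in the comments match scrapy-redis 0.6.x.

# File: settings.py -- a minimal sketch of the settings read by setup_redis()
CONCURRENT_REQUESTS = 16                      # also the fallback for redis_batch_size
REDIS_START_URLS_KEY = '%(name)s:start_urls'  # default key template; the spider's redis_key attribute takes precedence
REDIS_START_URLS_BATCH_SIZE = 16              # optional; falls back to CONCURRENT_REQUESTS when absent
REDIS_START_URLS_AS_SET = False               # False -> lpop from a list; True -> spop from a set
REDIS_ENCODING = 'utf-8'                      # encoding used by bytes_to_str when decoding popped data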
  • Step 3: analyze the class Spider
# File: E:\Miniconda\Lib\site-packages\scrapy\spiders\__init__.py
class Spider(object_ref):
    """Base class for scrapy spiders. All spiders must inherit from this
    class.
    """

    name = None
    custom_settings = None
    # ……
    def make_requests_from_url(self, url):
        """ This method is deprecated. """
        return Request(url, dont_filter=True)

    def make_requests_from_url_DontRedirect(self, url):
        """ Custom variant that keeps the original URL in the request meta. """
        print(f"######4 __init__.py Spider: make_requests_from_url_DontRedirect, url={url}")
        # Note: this is where the rewrite happens: put the original URL (originalUrl) into meta so it travels with the request.
        #       To forbid redirects, also add 'dont_redirect': True to the meta.
        #       The recommended approach here, however, is to allow redirects but record the original link in originalUrl.
        return Request(url, meta={'originalUrl': url}, dont_filter=True)
  • Tracking the spider's path through the runtime log:
######1 spiders.py RedisMixin: start_requests
###### there is no BloomFilter, used the default redis set to dupefilter. isUseBloomfilter = False
2018-03-27 17:22:52 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-03-27 17:22:52 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
request is None, lostGetRequest = 1, time = 2018-03-27 17:22:52.369602
######2 spiders.py RedisMixin: next_requests
######3 spiders.py RedisMixin: make_request_from_data, url=https://www.amazon.com/s/ref=lp_166220011_nr_n_3/134-0288275-8061547?fst=as%3Aoff&rh=n%3A165793011%2Cn%3A%21165795011%2Cn%3A166220011%2Cn%3A1265807011&bbn=166220011&ie=UTF8&qid=1517900306&rnid=166220011
######4 __init__.py Spider: make_requests_from_url_DontRedirect, url=https://www.amazon.com/s/ref=lp_166220011_nr_n_3/134-0288275-8061547?fst=as%3Aoff&rh=n%3A165793011%2Cn%3A%21165795011%2Cn%3A166220011%2Cn%3A1265807011&bbn=166220011&ie=UTF8&qid=1517900306&rnid=166220011
request is None, lostGetRequest = 1, time = 2018-03-27 17:22:52.386602
######3 spiders.py RedisMixin: make_request_from_data, url=https://www.amazon.com/s/ref=lp_166461011_nr_n_1/133-3783987-2841554?fst=as%3Aoff&rh=n%3A165793011%2Cn%3A%21165795011%2Cn%3A166461011%2Cn%3A11350121011&bbn=166461011&ie=UTF8&qid=1517900305&rnid=166461011
######4 __init__.py Spider: make_requests_from_url_DontRedirect, url=https://www.amazon.com/s/ref=lp_166461011_nr_n_1/133-3783987-2841554?fst=as%3Aoff&rh=n%3A165793011%2Cn%3A%21165795011%2Cn%3A166461011%2Cn%3A11350121011&bbn=166461011&ie=UTF8&qid=1517900305&rnid=166461011
request is None, lostGetRequest = 1, time = 2018-03-27 17:22:52.388602
######3 spiders.py RedisMixin: make_request_from_data, url=https://www.amazon.com/s/ref=lp_495236_nr_n_6/143-2389134-7838338?fst=as%3Aoff&rh=n%3A228013%2Cn%3A%21468240%2Cn%3A495224%2Cn%3A495236%2Cn%3A3742221&bbn=495236&ie=UTF8&qid=1517901748&rnid=495236
######4 __init__.py Spider: make_requests_from_url_DontRedirect, url=https://www.amazon.com/s/ref=lp_495236_nr_n_6/143-2389134-7838338?fst=as%3Aoff&rh=n%3A228013%2Cn%3A%21468240%2Cn%3A495224%2Cn%3A495236%2Cn%3A3742221&bbn=495236&ie=UTF8&qid=1517901748&rnid=495236
request is None, lostGetRequest = 1, time = 2018-03-27 17:22:52.390602
######3 spiders.py RedisMixin: make_request_from_data, url=https://www.amazon.com/s/ref=lp_13980701_nr_n_0/133-0601909-3066127?fst=as%3Aoff&rh=n%3A1055398%2Cn%3A%211063498%2Cn%3A1063278%2Cn%3A3734741%2Cn%3A13980701%2Cn%3A3734771&bbn=13980701&ie=UTF8&qid=1517900284&rnid=13980701
######4 __init__.py Spider: make_requests_from_url_DontRedirect, url=https://www.amazon.com/s/ref=lp_13980701_nr_n_0/133-0601909-3066127?fst=as%3Aoff&rh=n%3A1055398%2Cn%3A%211063498%2Cn%3A1063278%2Cn%3A3734741%2Cn%3A13980701%2Cn%3A3734771&bbn=13980701&ie=UTF8&qid=1517900284&rnid=13980701
  • Print to check whether originalUrl is passed through successfully:
# File: mySpider.py

    # Parse the start_urls pages
    # The first callback must be named parse; the name cannot be changed
    def parse(self, response):
        print(f"parse url = {response.url}, status = {response.status}, meta = {response.meta}")
        theCategoryUrl = response.url       # the category link
        isRedirect = False                  # whether a redirect happened; defaults to no redirect
        redirectUrl = ""                    # the post-redirect page; defaults to empty
        if "originalUrl" in response.meta:
            if response.meta["originalUrl"] == response.url:    # no page redirect happened
                isRedirect = False
            else:
                # A redirect happened: mark it, reset categoryUrl to the original so the categoryInfo can still be found, and record the post-redirect page
                isRedirect = True
                theCategoryUrl = response.meta["originalUrl"]
                redirectUrl = response.url
        else:
            # If this mechanism is removed later, fall back to the original handling and do nothing
            pass

        # First check whether this spider has base info for the specified category: if so, crawl it; if not, skip it
        if theCategoryUrl in self.categoryRecordDict:
            categoryInfo = self.categoryRecordDict[theCategoryUrl]
        else:
            print(f"this url doesn't have mainInfo, no need to follow.")
            return None
        print(f"Follow this url = {theCategoryUrl}")
parse url = https://www.amazon.com/s/ref=lp_723452011_nr_n_7/134-5490946-1974065?fst=as%3Aoff&rh=n%3A3760901%2Cn%3A%213760931%2Cn%3A723418011%2Cn%3A723452011%2Cn%3A723457011&bbn=723452011&ie=UTF8&qid=1517900276&rnid=723452011, status = 200, meta = {'originalUrl': 'https://www.amazon.com/s/ref=lp_723452011_nr_n_7/134-5490946-1974065?fst=as%3Aoff&rh=n%3A3760901%2Cn%3A%213760931%2Cn%3A723418011%2Cn%3A723452011%2Cn%3A723457011&bbn=723452011&ie=UTF8&qid=1517900276&rnid=723452011', 'download_timeout': 20.0, 'proxy': 'http://proxy.abuyun.com:9020', 'download_slot': 'www.amazon.com', 'download_latency': 3.305999994277954, 'depth': 0}
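For context, parse() above looks the per-category info up in self.categoryRecordDict, which the article does not show being built. A hypothetical sketch of what it could look like follows; the URLs and field names are assumptions. It is keyed by the original category URL, so the lookup still works after a redirect thanks to meta['originalUrl'].

# File: mySpider.py -- a hypothetical initialisation of categoryRecordDict
from scrapy_redis.spiders import RedisSpider


class MySpider(RedisSpider):
    name = 'mySpider'
    redis_key = 'amazonCategory:start_urls'

    # Map each original category URL to its crawl settings, mirroring the
    # A/B/C/D table from the background section (placeholder URLs and field names)
    categoryRecordDict = {
        'https://www.amazon.com/category-A': {'productCount': 10},
        'https://www.amazon.com/category-B': {'productCount': 20},
        'https://www.amazon.com/category-C': {'productCount': 5},
        'https://www.amazon.com/category-D': {'productCount': 32},
    }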

5. Summary

  • The source-code analysis above shows that there is very little room to work with in the start_urls handling, and any change there demands caution: this is scrapy-redis source code that other spider projects also rely on. So modify it with great care, keep a record of the change, and make a backup. Try to satisfy the requirement while keeping the impact as small as possible.
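One way to keep that impact to a minimum is to avoid touching the library files at all: make_request_from_data can be overridden in the spider subclass itself. A minimal sketch of this alternative, assuming scrapy-redis 0.6.x (the approach shown in this article patches spiders.py instead):

# File: mySpider.py -- a minimal sketch of keeping the change inside the spider
from scrapy import Request
from scrapy_redis.spiders import RedisSpider
from scrapy_redis.utils import bytes_to_str


class MySpider(RedisSpider):
    name = 'mySpider'
    redis_key = 'amazonCategory:start_urls'

    def make_request_from_data(self, data):
        # data is the raw bytes popped from the redis_key
        url = bytes_to_str(data, self.redis_encoding)
        # Allow redirects, but carry the original URL in meta so parse()
        # can still find the matching "number of products to crawl" entry
        return Request(url, meta={'originalUrl': url}, dont_filter=True)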