Basic Usage of scrapy-splash

1. Create a Scrapy project

scrapy startproject jingdong

2. Create a spider (the spider name must not be the same as the project name)

scrapy genspider jd jd.com
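
This generates jingdong/spiders/jd.py with a JdSpider skeleton, which the later steps fill in.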

3. Start the Splash service

sudo docker run -p 8050:8050 scrapinghub/splash
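
Once the container is up, Splash listens on port 8050. As a quick sanity check, you can request its render.html endpoint directly; a minimal sketch, assuming the requests library is installed (the target URL is just an example):

import requests

# render.html is Splash's HTML-rendering endpoint; 'wait' gives the page
# time to execute JavaScript before the HTML is returned
html = requests.get(
    'http://localhost:8050/render.html',
    params={'url': 'https://book.jd.com/', 'wait': 0.5},
).text
print(html[:200])  # show the first 200 characters of the rendered page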

4. Install the scrapy-splash package

pip install scrapy-splash

5. Configure the settings.py file

ROBOTSTXT_OBEY = False

# avoid storing duplicate Splash arguments in the request queue
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

# route requests through Splash and handle its cookies
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

# address of the Splash instance; change the host if Splash runs elsewhere
SPLASH_URL = 'http://localhost:8050'

# make duplicate filtering and HTTP caching aware of Splash arguments
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

6. Override Scrapy's start_requests method to send requests through Splash

def start_requests(self):
    for url in self.start_urls:
        yield SplashRequest(url,
                            self.parse,
                            args={'wait': 0.5})
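
SplashRequest also accepts other endpoints. As a sketch of a variant meant to live inside the same spider class, rendering through the execute endpoint with a small Lua script (the script here is illustrative, not part of the original example):

from scrapy_splash import SplashRequest

# an illustrative Lua script: load the page, wait, return the HTML
LUA_SCRIPT = """
function main(splash, args)
    splash:go(args.url)
    splash:wait(0.5)
    return splash:html()
end
"""

def start_requests(self):
    for url in self.start_urls:
        # 'execute' runs the Lua script; 'lua_source' carries its text
        yield SplashRequest(url,
                            self.parse,
                            endpoint='execute',
                            args={'lua_source': LUA_SCRIPT})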

Complete example:

import scrapy
from scrapy_splash import SplashRequest


class JdSpider(scrapy.Spider):
    name = 'jd'
    # allowed_domains = ['jd.com', 'book.jd.com']
    start_urls = ['https://book.jd.com/']

    def start_requests(self):
        for url in self.start_urls:
            # render each start URL through Splash, waiting 0.5 s for JS
            yield SplashRequest(url,
                                self.parse,
                                args={'wait': 0.5})

    def parse(self, response):
        div_list = response.xpath('//div[@class="book_nav_body"]/div')
        for div in div_list:
            # .get() extracts the text; without it this prints a Selector object
            title = div.xpath('./div//h3[@class="item_header_title"]/a/text()').get()
            print(title)
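
Run the spider from the project directory:

scrapy crawl jd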