Crawler Series, Part 7 (Introduction to the Scrapy Framework)

Documentation: https://scrapy-chs.readthedocs.io/zh_CN/0.24/topics/signals.html

Introduction to the Scrapy framework

  • Scrapy is an application framework written in pure Python for crawling websites and extracting structured data, and it is used in a wide range of scenarios.
  • The power of the framework is that a user only needs to customize a few modules to build a working crawler that scrapes web pages and images with very little effort.

Scrapy architecture diagram


  • Scrapy Engine: responsible for the communication among the Spider, Item Pipeline, Downloader, and Scheduler, passing signals and data between them.
  • Scheduler: accepts Request objects sent by the engine, organizes and enqueues them in a defined order, and hands them back to the engine when it asks for them.
  • Downloader: downloads all Requests sent by the Scrapy Engine and returns the Responses it obtains to the engine, which passes them on to the Spider for processing.
  • Spider: processes all Responses, extracts the data needed for the Item fields, and submits any URLs that need to be followed back to the engine, which puts them into the Scheduler again.
  • Item Pipeline: the place where Items produced by the Spider are post-processed (detailed analysis, filtering, storage, and so on).
  • Downloader Middlewares: components you can use to customize and extend the download behavior.
  • Spider Middlewares: components you can use to customize and extend the communication between the engine and the Spider (for example, the Responses going into the Spider and the Requests coming out of it).

1. Installing the Scrapy framework

  • pip install scrapy
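
To confirm the installation succeeded, you can check the installed version, for example:

scrapy version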

2. Creating a Scrapy project

scrapy startproject <project_name>

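Running the command creates a project skeleton. For a project named Baidu (the name used in the settings below), the layout typically looks like this, though the exact files may vary slightly between Scrapy versions:

Baidu/
    scrapy.cfg            # deployment configuration
    Baidu/                # the project's Python package
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # directory that holds the spider files
            __init__.py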

3. Creating a spider file

scrapy genspider <spider_name> <domain>

# -*- coding: utf-8 -*-
import scrapy


class BaiduSpider(scrapy.Spider):
    # spider name
    name = 'baidu'
    # domains the spider is allowed to crawl (more than one can be listed)
    allowed_domains = ['www.baidu.com']
    # starting URLs (more than one can be set)
    start_urls = ['http://www.baidu.com/']

    def parse(self, response):
        '''
        Callback method, invoked after a start URL has been requested successfully
        :param response: the response object
        :return:
        '''
        pass

# The parse method is mainly responsible for extracting data, wrapping the extracted data in an item, and passing it on to the pipeline
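
As a minimal sketch of what that usually looks like (the XPath and field names below are just for illustration, not part of the generated template), parse can extract data and yield it as an item:

    def parse(self, response):
        # pull a couple of fields out of the response and hand them to the pipeline
        item = {
            'url': response.url,
            'title': response.xpath('//title/text()').get(),
        }
        yield item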

Creating a spider from the crawl template

scrapy genspider -t crawl <spider_name> <domain>

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class TaobaoSpider(CrawlSpider):
    name = 'taobao'
    allowed_domains = ['www.taobao.com']
    start_urls = ['http://www.taobao.com/']
	
    '''
    Rule mainly extracts links that match the given regular-expression rules
    '''
    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )
    
    '''
        LinkExtractor : defines the rules (regular expressions) for extracting links
        allow=(),            : URL patterns that are allowed to be extracted
        restrict_xpaths=(),  : only extract links from elements matched by the given XPath expressions
        restrict_css=(),     : only extract links from elements matched by the given CSS selectors
        deny=(),             : URL patterns that must not be extracted (takes precedence over allow)
        allow_domains=(),    : domains from which links may be extracted
        deny_domains=(),     : domains from which links must not be extracted (takes precedence over allow_domains)
        unique=True,         : if the same URL is extracted more than once, only one copy is kept
        strip=True           : True by default; leading and trailing whitespace is stripped from URLs
    '''
    '''
    Rule
        link_extractor,          : a LinkExtractor object
        callback=None,           : the callback function to invoke for matched pages
        follow=None,             : whether to follow links found on the pages matched by this rule
        process_links=None,      : an optional callback to filter/intercept every extracted link
        process_request=identity : an optional callback to intercept each Request object
    '''
	
    # Note: never define a parse callback in a CrawlSpider, because it would override the parent class's method
    def parse_item(self, response):
        item = {}
        #item['domain_id'] = response.xpath('//input[@id="sid"]/@value').get()
        #item['name'] = response.xpath('//div[@id="name"]').get()
        #item['description'] = response.xpath('//div[@id="description"]').get()
        return item
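
For illustration only (the URL pattern and XPath below are hypothetical), a rule that is restricted to a pagination block might be written like this:

rules = (
    # follow pagination links found inside a hypothetical pager element
    # and hand every matched page to parse_item
    Rule(
        LinkExtractor(allow=r'/page/\d+', restrict_xpaths='//div[@class="pager"]'),
        callback='parse_item',
        follow=True,
    ),
)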

An item pipeline component is a standalone Python class in which the process_item() method must be implemented:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


class BaiduPipeline(object):
    def __init__(self):
        # initialize parameters here, e.g. set up MySQL or MongoDB connections
        pass

    def process_item(self, item, spider):
        """
        Process the item handed over by the spider
        :param item: the item object
        :param spider: the spider object
        :return:
        """
        return item

    def open_spider(self, spider):
        # optional; called when the spider is opened
        pass

    def close_spider(self, spider):
        # optional; called when the spider is closed, typically used to close MySQL/MongoDB connections
        pass
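
As a concrete sketch (the file name and the JSON Lines format are arbitrary choices, not part of the generated template), a pipeline that stores every item could look like this:

import json


class JsonLinesPipeline(object):
    def open_spider(self, spider):
        # open the output file once, when the spider starts
        self.file = open('items.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # write each item as one JSON object per line
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        # close the file when the spider finishes
        self.file.close()

Like any other pipeline, it only takes effect after being registered in the ITEM_PIPELINES setting.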

settings.py configuration

# -*- coding: utf-8 -*-

# Scrapy settings for Baidu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'Baidu'

SPIDER_MODULES = ['Baidu.spiders']
NEWSPIDER_MODULE = 'Baidu.spiders'

LOG_FILE = "BaiduSpider.log"

LOG_LEVEL = "INFO"

FEED_EXPORT_ENCODING='UTF8'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'Baidu (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'Accept-Language': 'en',
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'Baidu.middlewares.BaiduSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'Baidu.middlewares.BaiduDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'Baidu.pipelines.BaiduPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# During debugging, avoid re-sending requests on every run; read responses from the cache first
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0
HTTPCACHE_DIR = 'httpcache'
HTTPCACHE_IGNORE_HTTP_CODES = []
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
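
Once the project is configured, the spider can be started from the project root directory; for example, to run the baidu spider defined above and export the scraped items to a JSON file:

scrapy crawl baidu -o items.json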

Using Downloader Middleware

1. Setting a random proxy

1.1 Add a list of proxy IPs in settings.py

PROXIES = ['http://183.207.95.27:80', 'http://111.6.100.99:80', 'http://122.72.99.103:80', 
           'http://106.46.132.2:80', 'http://112.16.4.99:81', 'http://123.58.166.113:9000', 
           'http://118.178.124.33:3128', 'http://116.62.11.138:3128', 'http://121.42.176.133:3128', 
           'http://111.13.2.131:80', 'http://111.13.7.117:80', 'http://121.248.112.20:3128', 
           'http://112.5.56.108:3128', 'http://42.51.26.79:3128', 'http://183.232.65.201:3128', 
           'http://118.190.14.150:3128', 'http://123.57.221.41:3128', 'http://183.232.65.203:3128', 
           'http://166.111.77.32:3128', 'http://42.202.130.246:3128', 'http://122.228.25.97:8101', 
           'http://61.136.163.245:3128', 'http://121.40.23.227:3128', 'http://123.96.6.216:808', 
           'http://59.61.72.202:8080', 'http://114.141.166.242:80', 'http://61.136.163.246:3128', 
           'http://60.31.239.166:3128', 'http://114.55.31.115:3128', 'http://202.85.213.220:3128']

1.2 Add the following code to middlewares.py


import scrapy
from scrapy import signals
import random
 
 
class ProxyMiddleware(object):
    '''
    Set a random proxy on each outgoing request
    '''
 
    def __init__(self, ip):
        self.ip = ip
 
    @classmethod
    def from_crawler(cls, crawler):
        return cls(ip=crawler.settings.get('PROXIES'))
 
    def process_request(self, request, spider):
        ip = random.choice(self.ip)
        request.meta['proxy'] = ip

1.3 Finally, register our custom class in the downloader middleware settings, as follows

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.ProxyMiddleware': 543,
}

2. Setting a random User-Agent

2.1 Add the following to settings.py

MY_USER_AGENT = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
    "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    ]

2.2 Add the following code to middlewares.py

import scrapy
from scrapy import signals
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware
import random
 
 
class MyUserAgentMiddleware(UserAgentMiddleware):
    '''
    Set a random User-Agent header on each outgoing request
    '''
 
    def __init__(self, user_agent):
        self.user_agent = user_agent
 
    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            user_agent=crawler.settings.get('MY_USER_AGENT')
        )
 
    def process_request(self, request, spider):
        agent = random.choice(self.user_agent)
        request.headers['User-Agent'] = agent

2.3 The last step is to register our custom MyUserAgentMiddleware class in DOWNLOADER_MIDDLEWARES, disabling the built-in UserAgentMiddleware at the same time

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'myproject.middlewares.MyUserAgentMiddleware': 400,
}

The Crawler object hierarchy

settings        # the crawler's settings manager

crawler.settings.get(name)

    set(name, value, priority='project')
    setdict(values, priority='project')
    setmodule(module, priority='project')
    get(name, default=None)
    getbool(name, default=False)
    getint(name, default=0)
    getfloat(name, default=0.0)
    getlist(name, default=None)
    getdict(name, default=None)
    copy() # deep-copy the current settings
    freeze()
    frozencopy()
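
A minimal sketch of the typed getters (the Settings object here is built by hand just for demonstration; inside a project you would normally use crawler.settings):

from scrapy.settings import Settings

settings = Settings({'DOWNLOAD_DELAY': 1, 'HTTPCACHE_ENABLED': True})
print(settings.getfloat('DOWNLOAD_DELAY'))     # 1.0
print(settings.getbool('HTTPCACHE_ENABLED'))   # True
print(settings.getint('SOME_MISSING_KEY', 0))  # 0, the supplied default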
    
signals        # the crawler's signal manager

crawler.signals.connect(receiver, signal)

    connect(receiver, signal)
    send_catch_log(signal, **kwargs)
    send_catch_log_deferred(signal, **kwargs)
    disconnect(receiver, signal)
    disconnect_all(signal)
    
stats         # the crawler's stats collector

crawler.stats.get_value()

    get_value(key, default=None)
    get_stats()
    set_value(key, value)
    set_stats(stats)
    inc_value(key, count=1, start=0)
    max_value(key, value)
    min_value(key, value)
    clear_stats()
    open_spider(spider)
    close_spider(spider)
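
For example, a pipeline (or spider) can record its own counters through the stats collector; the key name below is arbitrary:

class CountingPipeline(object):
    def process_item(self, item, spider):
        # the running spider exposes the stats collector via its crawler
        spider.crawler.stats.inc_value('custom/items_processed')
        return item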
    
    extensions # the extension manager, which keeps track of all enabled extensions

    engine # the execution engine, which coordinates the crawler's core logic: scheduling, downloading, and the spider

    spider # the spider currently being crawled; the spider class instance is provided when the crawler is created

    crawl(*args, **kwargs) # instantiates the spider class, starts the execution engine, and starts the crawler

Built-in Scrapy signals
    engine_started       # the engine has started
    engine_stopped       # the engine has stopped
    spider_opened        # a spider has been opened
    spider_idle          # a spider has entered the idle state
    spider_closed        # a spider has been closed
    spider_error         # a spider callback raised an error
    request_scheduled    # the engine scheduled a Request
    request_dropped      # the engine dropped a Request
    response_received    # the engine received a new Response from the downloader
    response_downloaded  # an HTTPResponse has been downloaded
    item_scraped         # an item passed through all Item Pipelines without being dropped
    item_dropped         # an item was dropped by raising DropItem
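
As a minimal sketch of how these signals are consumed (this is a custom extension written for illustration, not part of the generated project), callbacks are registered through the signal manager shown above:

from scrapy import signals


class SpiderLogExtension(object):
    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        # register callbacks for the built-in spider_opened / spider_closed signals
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_opened(self, spider):
        spider.logger.info('spider %s opened', spider.name)

    def spider_closed(self, spider):
        spider.logger.info('spider %s closed', spider.name)

To enable it, the class would be added to the EXTENSIONS setting (the module path here is hypothetical), e.g. EXTENSIONS = {'Baidu.extensions.SpiderLogExtension': 500}.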
