14. Learning Distributed Crawling: Scrapy

The Scrapy Framework

Learning objectives

  1. Understand the Scrapy framework.
  2. Learn to write Spider crawlers.
  3. Learn to write CrawlSpider crawlers.
  4. Learn to write middlewares.
  5. Learn to save data with pipelines.
  6. Learn to use Scrapy together with Selenium.
  7. Learn to use IP proxies in Scrapy.
Introduction to the Scrapy framework
Scrapy is an asynchronous crawling framework, built on Twisted, for extracting structured data from websites: you write the extraction logic and Scrapy handles scheduling, downloading and exporting the data.
Installing Scrapy
Install it with pip: pip install scrapy
Scrapy framework architecture
The core components are the Engine, Scheduler, Downloader, Spiders and Item Pipeline, connected through the Downloader and Spider middlewares: the engine takes the requests yielded by the spider, queues them in the scheduler, hands them to the downloader, passes the responses back to the spider, and sends the yielded items to the pipelines.
Creating a Scrapy project
  • Create a project: scrapy startproject <project name>
  • Create a spider: cd into the project, then run scrapy genspider <spider name> <domain> (a concrete example follows the file list below)
Purpose of the project files
  • settings.py: configures the crawler.
  • middlewares.py: defines the middlewares.
  • items.py: declares in advance the data fields to be scraped.
  • pipelines.py: saves the data.
  • scrapy.cfg: the project configuration file.
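
As a concrete example for the gushiwen project below (the project name gsww is an assumption inferred from the GswwItem/GswwPipeline class names; the spider name and domain are taken from the spider code):

scrapy startproject gsww
cd gsww
scrapy genspider gsww_spider gushiwen.org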

Crawling the Gushiwen classical-poetry site (gushiwen.org) with Scrapy

Create the Scrapy project (using the startproject command shown above)
Create the spider (using the genspider command shown above)
settings.py
Disable the robots protocol (ROBOTSTXT_OBEY)
Add default request header information
Enable the pipeline used to save the data (ITEM_PIPELINES)
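
A minimal sketch of the corresponding settings.py changes, assuming the project is named gsww; the User-Agent string is only an illustrative example:

# settings.py
ROBOTSTXT_OBEY = False  # do not obey robots.txt

DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',  # illustrative User-Agent
}

ITEM_PIPELINES = {
    'gsww.pipelines.GswwPipeline': 300,  # assumes the project/module is named gsww
}
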
gsww_spider.py

# -*- coding: utf-8 -*-
import scrapy
from ..items import GswwItem

class GswwSpiderSpider(scrapy.Spider):
    name = 'gsww_spider'
    allowed_domains = ['gushiwen.org']
    start_urls = ['https://www.gushiwen.org/default_1.aspx']

    def myprint(self,value):
        print('='*30)
        print(value)
        print('='*30)

    def parse(self, response):
        gsw_divs = response.xpath("//div[@class='left']/div[@class='sons']")   # returns a SelectorList
        for gsw_div in gsw_divs:
            # title = gsw_div.xpath(".//b/text()").getall()  # getall() extracts all the data from the SelectorList
            title = gsw_div.xpath(".//b/text()").get()  # get() extracts only the first item from the SelectorList
            source = gsw_div.xpath(".//p[@class='source']/a/text()").getall()
            # self.myprint(title)
            dynasty = source[0]
            author = source[1]
            content_list = gsw_div.xpath(".//div[@class='contson']//text()").getall()
            content = "".join(content_list).strip()
            # self.myprint(content)
            item = GswwItem(title=title,dynasty=dynasty,author=author,content=content)  # pack the data for the pipelines
            yield item  # send the items one by one to the pipelines to be saved

        next_href = response.xpath("//a[@id='amore']/@href").get()  # e.g. /default_2.aspx
        if next_href:
            next_url = response.urljoin(next_href)  # join the href with the domain, e.g. https://www.gushiwen.org/default_2.aspx
            request = scrapy.Request(next_url)
            yield request  # hand this Request to the scheduler, which will schedule and send it

items.py
Add the fields to be saved.
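
A sketch of items.py; the four fields match the ones the spider packs into GswwItem above:

# items.py
import scrapy

class GswwItem(scrapy.Item):
    title = scrapy.Field()
    dynasty = scrapy.Field()
    author = scrapy.Field()
    content = scrapy.Field()
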
pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import json

class GswwPipeline(object):
    def open_spider(self,spider):  # called when the spider is opened
        self.fp = open("古詩文.txt",'w',encoding='utf-8')

    def process_item(self, item, spider):
        self.fp.write(json.dumps(dict(item),ensure_ascii=False)+"\n")
        return item

    def close_spider(self,spider): # called when the spider is closed
        self.fp.close()

run.py
To run this Scrapy project, just run this run.py file.

from scrapy import cmdline

cmds = ["scrapy","crawl","gsww_spider"]
cmdline.execute(cmds)

CrawlSpider crawlers

Purpose: you can define rules so that Scrapy automatically follows the links you want, instead of manually yielding Request objects as in a plain Spider class.
Two classes are used to define the extraction:

  • LinkExtractor: defines the rules for which URLs to extract and crawl.
  • Rule: defines how a matched URL is handled, e.g. whether to follow its links and whether to run a callback (a minimal sketch follows).
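
A minimal sketch of how the two classes are combined, under assumed names (ExampleSpider, parse_page) and an illustrative URL pattern:

# crawlspider_sketch.py
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class ExampleSpider(CrawlSpider):
    name = 'example'
    start_urls = ['https://example.com/page/1']

    rules = (
        # allow: regex that extracted URLs must match
        # callback: name of the method that parses the matched responses (omit it to only follow)
        # follow: whether to keep extracting links from those responses
        Rule(LinkExtractor(allow=r'/page/\d+'), callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        pass
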
Scrapy Shell
From the command line, cd into the project directory and run: scrapy shell <url>
There you can develop your extraction rules first; once they work, copy the code into the project, which makes writing the spider much easier.
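
For example, using the gushiwen start URL and the XPath from the spider above, a session might look like this:

scrapy shell https://www.gushiwen.org/default_1.aspx
>>> title = response.xpath("//div[@class='left']/div[@class='sons']//b/text()").get()
>>> title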
Saving data to MySQL asynchronously
Scrapy itself is asynchronous, so blocking database writes should go through a connection pool such as twisted.enterprise.adbapi rather than a synchronous cursor; the Lieyunwang pipeline below does exactly that.

Crawling Lieyunwang (lieyunwang.com)

lyw_spider.py

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from ..items import LywItem


class LywSpiderSpider(CrawlSpider):
    name = 'lyw_spider'
    allowed_domains = ['lieyunwang.com']
    start_urls = ['https://www.lieyunwang.com/latest/p1.html']

    rules = (
        Rule(LinkExtractor(allow=r'/latest/p\d+\.html'), follow=True), # list pages: follow them, no callback needed
        Rule(LinkExtractor(allow=r'/archives/\d+'), callback='parse_detail',follow=False), # detail pages: parse them, do not follow further
    )

    def parse_detail(self, response):
        # print('='*30)
        # print(response.url)  # print the url that was fetched
        # print('=' * 30)
        titlelist = response.xpath("//h1[@class='lyw-article-title']/text()").getall()
        title = "".join(titlelist).strip()
        pub_time_list = response.xpath("//h1[@class='lyw-article-title']/span//text()").getall()
        pub_time = "".join(pub_time_list).strip()
        author = response.xpath("//a[contains(@class,'author-name')]//text()").get()
        content = response.xpath("//div[@class='main-text']").get()
        origin_url = response.url
        item = LywItem(title=title,pub_time=pub_time,author=author,content=content,origin_url=origin_url)
        yield item
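
The LywItem fields referenced by the spider would be declared in items.py roughly like this:

# items.py
import scrapy

class LywItem(scrapy.Item):
    title = scrapy.Field()
    pub_time = scrapy.Field()
    author = scrapy.Field()
    content = scrapy.Field()
    origin_url = scrapy.Field()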

Add the database configuration
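
A sketch of the MYSQL_CONFIG dictionary in settings.py: the key names are the ones the pipeline below reads, while the values, the pymysql driver and the database name are illustrative assumptions. The pipeline also has to be enabled in ITEM_PIPELINES.

# settings.py
MYSQL_CONFIG = {
    'DRIVER': 'pymysql',        # DB-API driver module name (assumed)
    'HOST': '127.0.0.1',        # the values below are illustrative
    'PORT': 3306,
    'USER': 'root',
    'PASSWORD': 'your_password',
    'DATABASE': 'lieyunwang',
}

ITEM_PIPELINES = {
    'lyw.pipelines.LywPipeline': 300,  # assumes the project/module is named lyw
}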
pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
from twisted.enterprise import adbapi

class LywPipeline(object):
    # (2) called with the config extracted in from_crawler
    def __init__(self,mysql_config):
        # create the connection pool
        self.dbpool = adbapi.ConnectionPool(
            mysql_config['DRIVER'],
            host = mysql_config['HOST'],
            port = mysql_config['PORT'],
            user = mysql_config['USER'],
            password = mysql_config['PASSWORD'],
            db = mysql_config['DATABASE'],
            charset = 'utf8'
        )

    # (1) called first to create the pipeline object
    @classmethod
    def from_crawler(cls,crawler):
        # once from_crawler is overridden, Scrapy calls it to create the pipeline instance
        mysql_config = crawler.settings['MYSQL_CONFIG']  # read the config from settings.py
        return cls(mysql_config)  # equivalent to LywPipeline(mysql_config)
        
    # (3) called for every item that is passed in
    def process_item(self, item, spider):
        # runInteraction runs insert_item in a pool thread and passes it a cursor
        result = self.dbpool.runInteraction(self.insert_item,item)
        # error callback: if the insert fails, print the failure
        result.addErrback(self.insert_error)
        return item

    # (4) runs in a worker thread with a cursor and executes the INSERT
    def insert_item(self,cursor,item):
        sql = "insert into lyw_data(id,title,author,pub_time,content,origin_url) values(null,%s,%s,%s,%s,%s)"
        args = (item['title'],item['author'],item['pub_time'],item['content'],item['origin_url'])
        cursor.execute(sql,args)

    def insert_error(self,failure):
        print('='*30)
        print(failure)
        print('='*30)
	
    # (5) called when the spider is closed
    def close_spider(self,spider):
        self.dbpool.close()

Automating GitHub login

github_spider

# -*- coding: utf-8 -*-
import scrapy
import time


class GithubSpiderSpider(scrapy.Spider):
    name = 'github_spider'
    allowed_domains = ['github.com']
    start_urls = ['https://github.com/login']

    def parse(self, response):
        timestamp = str(int(time.time()*1000))
        authenticity_token = response.xpath("//input[@name='authenticity_token']/@value").get()
        timestamp_secret = response.xpath("//input[@name='timestamp_secret']/@value").get()
        form_data = {
            'commit': 'Sign in',
            'utf8':'✓',
            'authenticity_token': authenticity_token,
            'ga_id':'1804287830.1582287555',
            'login':'wu*******',
            'password':'wuy*******',
            'webauthn-support': 'supported',
            'webauthn-iuvpaa-support': 'supported',
            'timestamp': timestamp,
            'timestamp_secret':timestamp_secret,
        }
        # first way to submit the form: build the form data dict manually
        yield scrapy.FormRequest("https://github.com/session",formdata=form_data,callback=self.after_login)

        # second way: FormRequest.from_response fills in the form fields from the login page
        # yield scrapy.FormRequest.from_response(response,formdata={
        #     'login': 'wu******',
        #     'password': 'w*********'
        # },callback=self.after_login)

    def after_login(self,response):
        print('='*30)
        yield scrapy.Request("https://github.com/settings/profile",callback=self.visit_profile)

    def visit_profile(self,response):
        print('-'*30)
        with open('github_profile.html','w',encoding='utf-8') as f:
            f.write(response.text)

Downloading image files

Note: the two item fields image_urls and images are required; they are what ImagesPipeline reads and fills in.
zcool_spider.py

# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders.crawl import CrawlSpider,Rule
from scrapy.linkextractors import LinkExtractor
from ..items import ZcoolItem


class ZcoolSpiderSpider(CrawlSpider):
    name = 'zcool_spider'
    allowed_domains = ['zcool.com.cn']
    start_urls = ['https://www.zcool.com.cn/discover/0!3!0!0!0!!!!2!-1!1']

    rules = (
        # pagination urls: follow them
        Rule(LinkExtractor(allow=r'.+0!3!0!0!0!!!!2!-1!\d+'), follow=True),
        # detail page urls: parse them, do not follow
        Rule(LinkExtractor(allow=r'.+/work/.+html'), follow=False,callback="parse_detail")
    )

    def parse_detail(self, response):
        image_urls = response.xpath("//div[contains(@class,'work-show-box')]//img/@src").getall()
        title_list = response.xpath("//div[@class='details-contitle-box']/h2/text()").getall()
        title = "".join(title_list).strip()
        item = ZcoolItem(title=title,image_urls=image_urls)
        print('='*30)
        yield item

items.py
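A sketch of the zcool items.py: image_urls and images are the two fields ImagesPipeline requires, and title is used to build the save path in the pipeline below:

# items.py
import scrapy

class ZcoolItem(scrapy.Item):
    title = scrapy.Field()
    image_urls = scrapy.Field()  # list of image urls for ImagesPipeline to download
    images = scrapy.Field()      # filled in by ImagesPipeline with the download results
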
settings.py
Add the IMAGES_STORE save path.
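
A sketch of the relevant settings; the project is named zcool (as the pipeline's import shows), while the actual save path is an illustrative assumption:

# settings.py
import os

IMAGES_STORE = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'images')  # where ImagesPipeline stores the files

ITEM_PIPELINES = {
    'zcool.pipelines.ZcoolPipeline': 300,
}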
pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy.pipelines.images import ImagesPipeline
from zcool import settings
import os
import re

class ZcoolPipeline(ImagesPipeline):
    # def process_item(self, item, spider):
    #     return item


    # bind the item onto each media request; called before the images are downloaded
    def get_media_requests(self, item, info):
        media_requests = super(ZcoolPipeline, self).get_media_requests(item,info)
        # print('*'*30,media_requests)
        # print('item',item)
        # print('item_dict',dict(item))
        for media_request in media_requests:
            media_request.item = item
        return media_requests
	
    # This method is called once per downloaded item. It returns the download path of the file originating from the specified response.
    def file_path(self, request, response=None, info=None):
        origin_path = super(ZcoolPipeline, self).file_path(request,response,info)
        # print('origin_path:',origin_path)
        # print('request_item_title:',request.item['title'])
        title = request.item['title']
        title = re.sub(r'[\\/:\*\?"<>\|]',"",title)  # strip characters that are not allowed in file/directory names
        # print('title',title)
        save_path = os.path.join(settings.IMAGES_STORE,title)
        # print(save_path)
        if not os.path.exists(save_path):
            os.mkdir(save_path)
        image_name = origin_path.replace("full/","")  # the default path is "full/<hash>.jpg"; drop the "full/" prefix
        # print('-'*30,image_name)
        # print('save_path:',os.path.join(save_path,image_name))
        return os.path.join(save_path,image_name)