Python Crawler----Basic Usage of Scrapy

         1. Create a new project

                scrapy startproject doubandemo

                Inside the doubandemo/ project directory, initialize a spider:

                scrapy genspider douban_spider movie.douban.com

         2. For convenience, work on the project visually in PyCharm

                The project directory layout:
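                For reference, a freshly generated project (plus the main.py added in the next step; placing it next to scrapy.cfg is just one common convention, not something Scrapy requires) typically looks like this:

doubandemo/
├── scrapy.cfg
├── main.py
└── doubandemo/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── douban_spider.py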

                (1) Create main.py so you don't have to start the crawler from the command line every time

main.py

# Launcher script: saves typing the start command in the terminal every time

from scrapy import cmdline
cmdline.execute('scrapy crawl douban_spider'.split())
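                An equivalent launcher, if you prefer Scrapy's Python API over shelling out to the CLI, is a small sketch using CrawlerProcess (same effect as scrapy crawl douban_spider):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# load settings.py and run the spider by its name
process = CrawlerProcess(get_project_settings())
process.crawl('douban_spider')
process.start()  # blocks until the crawl finishes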

                (2) Define the data model (the item fields) in items.py

items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy

# Declare exactly what to scrape: movie name, star rating, director, description, etc.


class DoubandemoItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    serial_num = scrapy.Field()   # rank
    movie_name = scrapy.Field()   # movie title
    introduce = scrapy.Field()    # introduction
    star = scrapy.Field()         # star rating
    evaluation = scrapy.Field()   # number of ratings
    description = scrapy.Field()  # one-line description (quote)
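                Items behave like dicts restricted to the declared fields, which is what lets the pipeline later call dict(item). A quick illustration with made-up values:

item = DoubandemoItem()
item['serial_num'] = '1'
item['movie_name'] = '肖申克的救赎'
print(dict(item))        # {'serial_num': '1', 'movie_name': '肖申克的救赎'}
# item['year'] = '1994'  # KeyError: 'year' is not a declared field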

                (3) Write the parsing rules in spiders/xxx_spider.py (the custom spider)

douban_spider.py

# -*- coding: utf-8 -*-
import scrapy
from doubandemo.items import DoubandemoItem


# This is where the XPath and regex extraction lives


class DoubanSpiderSpider(scrapy.Spider):
    name = 'douban_spider'  # spider name
    allowed_domains = ['movie.douban.com']  # domains the spider may crawl
    start_urls = ['https://movie.douban.com/top250']  # entry URL, handed to the scheduler

    # Default callback for parsing responses

    def parse(self, response):
        movie_list = response.xpath("//div[@class='article']//ol[@class='grid_view']//li")
        for item in movie_list:
            # instantiate one item per movie
            douban_item = DoubandemoItem()
            # fill each field with a detailed XPath
            douban_item['serial_num'] = item.xpath(".//div[@class='item']//em/text()").extract_first()
            douban_item['movie_name'] = item.xpath(".//div[@class='info']//div[@class='hd']/a/span[1]/text()").extract_first()
            content = item.xpath(".//div[@class='info']//div[@class='bd']/p[1]/text()").extract()
            # the intro spans several text nodes; strip internal whitespace and join them
            # (assigning inside a loop would keep only the last line)
            douban_item['introduce'] = ";".join("".join(part.split()) for part in content)

            douban_item['star'] = item.xpath(".//div[@class='info']//div[@class='star']/span[2]/text()").extract_first()
            douban_item['evaluation'] = item.xpath(".//div[@class='info']//div[@class='star']/span[4]/text()").extract_first()
            douban_item['description'] = item.xpath(".//div[@class='info']//span[@class='inq']/text()").extract_first()
            yield douban_item  # hand the item to the pipeline

        # parse the next page

        next_link = response.xpath("//div[@class='article']//div[@class='paginator']//span[@class='next']/link/@href").extract()  # link to the next page
        if next_link:
            next_link = next_link[0]
            # hand the request back to the scheduler
            yield scrapy.Request("https://movie.douban.com/top250"+next_link, callback=self.parse)
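                The XPath rules above can be tried out interactively with scrapy shell before a full run (Douban may reject Scrapy's default user agent, so this works best once the USER_AGENT from step (4) below is in place):

scrapy shell "https://movie.douban.com/top250"
>>> response.xpath("//ol[@class='grid_view']//li")[0].xpath(".//em/text()").extract_first()  # rank of the first movie
>>> response.xpath("//span[@class='next']/link/@href").extract_first()  # relative next-page link, e.g. '?start=25&filter='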


                (4) You can now try running the crawler to fetch data. It may fail (Douban rejects Scrapy's default user agent), in which case set the USER_AGENT field of the request headers

                         Add this field in settings.py:

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0'
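                         If requests still come back blocked, note that a generated project also obeys robots.txt by default; whether to change that for your own use is a judgment call:

ROBOTSTXT_OBEY = False  # settings.py: skip the robots.txt check (use responsibly)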

                (5) Storing the data

                       1.  The scraped data can be exported to a .json or .csv file:

scrapy crawl douban_spider -o test.json
scrapy crawl douban_spider -o test.csv
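                       One caveat with the JSON export: Scrapy escapes non-ASCII output by default, so Chinese titles appear as \uXXXX sequences. Setting the feed encoding in settings.py keeps them readable:

FEED_EXPORT_ENCODING = 'utf-8'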

                       2.  The scraped data can also be saved to MongoDB

                            A. First define the MongoDB settings in settings.py

mongo_host = '127.0.0.1'  # database IP address
mongo_port = 27017  # database port
mongo_db_name = 'douban'  # database name
mongo_db_collection = 'douban_movie'  # collection (table) name

                            B. Write the storage code in pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import pymongo
from doubandemo.settings import mongo_host, mongo_port, mongo_db_name, mongo_db_collection


class DoubanPipeline(object):
    def __init__(self):  # constructor: connect once when the pipeline is created
        host = mongo_host
        port = mongo_port
        db_name = mongo_db_name
        db_collection = mongo_db_collection
        client = pymongo.MongoClient(host=host, port=port)  # open the MongoDB connection
        my_db = client[db_name]  # select the database
        self.post = my_db[db_collection]  # select the collection; documents can now be inserted

    def process_item(self, item, spider):
        # write one item to MongoDB
        data = dict(item)  # convert the item to a plain dict
        self.post.insert_one(data)  # insert() is deprecated in recent pymongo versions
        return item
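                            A common refinement (a sketch using Scrapy's open_spider/close_spider pipeline hooks, otherwise mirroring the code above) is to tie the connection's lifetime to the spider's, so the client is closed cleanly when the crawl ends:

class DoubanPipeline(object):
    def open_spider(self, spider):
        # called once when the spider starts
        self.client = pymongo.MongoClient(host=mongo_host, port=mongo_port)
        self.post = self.client[mongo_db_name][mongo_db_collection]

    def close_spider(self, spider):
        # called once when the spider finishes
        self.client.close()

    def process_item(self, item, spider):
        self.post.insert_one(dict(item))
        return item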

                            C. Enable the pipeline in settings.py

ITEM_PIPELINES = {
    'doubandemo.pipelines.DoubanPipeline': 300,
}

                (6) Writing IP middleware

                         The goal is to disguise the crawler so the target's firewall doesn't detect it. There are two disguises: a proxy IP and a random USER-AGENT.

 

                         Proxy IP: write the middleware in middlewares.py, then enable it in settings.py:

middlewares.py

import base64


class my_proxy(object):  # proxy-IP middleware
    def process_request(self, request, spider):
        request.meta['proxy'] = 'http://http-cla.abuyun.com:9030'  # proxy server address and port (scheme included)
        proxy_name_pass = b'username:password'  # proxy username and password
        encode_name_pass = base64.b64encode(proxy_name_pass)  # base64-encode the credentials (base64.encode works on file objects, not bytes)
        request.headers['Proxy-Authorization'] = 'Basic ' + encode_name_pass.decode()

settings.py

DOWNLOADER_MIDDLEWARES = {
    'doubandemo.middlewares.my_proxy': 543,
  # 'doubandemo.middlewares.DoubanDownloaderMiddleware': 543,
}

                         Random USER-AGENT: likewise, write it in middlewares.py, then enable it in settings.py:

middlewares.py

import random


class my_useragent(object):  # disguise the crawler with a random user-agent
    def process_request(self, request, spider):
        # pool of user-agent strings; lists like this are easy to find online
        user_agents = [
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
            "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
            "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
            "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
            "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
            "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
            "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
            "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
            "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)",
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E; LBBROWSER)",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 LBBROWSER",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
        ]
        agent = random.choice(user_agents)  # pick one at random
        request.headers['User-Agent'] = agent

settings.py

DOWNLOADER_MIDDLEWARES = {
   #  'doubandemo.middlewares.my_proxy': 543,
   # 'doubandemo.middlewares.DoubanDownloaderMiddleware': 543,
   'doubandemo.middlewares.my_useragent': 543,
}

                         If output like the following appears on startup, the middleware is enabled:

2019-05-04 23:55:12 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'douban.middlewares.my_useragent',

                (7) Notes

                         # After defining a middleware, remember to enable it in settings.py

                         # The spider's file name must not be the same as the spider name, and no two spiders in the spiders directory may share the same name
