Python 3 [Hands-on Crawler Tutorial]: Crawling 5 Million Panduoduo Records with Scrapy and Storing Them in MongoDB

Summary: even though this was my second crawl, I still hit a few pitfalls along the way. The overall result was good, though. Scrapy beats plain multi-threading and multi-processing by a wide margin; the crawl was never interrupted once.

This is the Scrapy version of the Panduoduo crawler. The data volume involved is fairly large: by now it has reached nearly 5 million records.

1. What was crawled

[Screenshot: the crawled content]

The fields crawled are: file name, file link, file type, file size, view count, and indexing date.

I. The item.py code in Scrapy

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class PanduoduoItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # pass
    # file name
    docName = scrapy.Field()
    # file link
    docLink = scrapy.Field()
    # file category
    docType = scrapy.Field()
    # file size
    docSize = scrapy.Field()
    # netdisk type (not filled in by the spider below)
    docPType = scrapy.Field()
    # view count
    docCount = scrapy.Field()
    # indexing date
    docTime = scrapy.Field()

Problems that came up while scraping in the spider: (1) Because no request header was set, Panduoduo returned a 403 error and refused to serve the pages, so we need to set a user-agent (in settings.py):

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'

COOKIES_ENABLED = False

ROBOTSTXT_OBEY = False
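
As a side note, the same three settings can also be scoped to a single spider instead of the whole project through Scrapy's custom_settings class attribute. A minimal sketch (the spider name here is made up purely for illustration):

import scrapy

class HeaderDemoSpider(scrapy.Spider):
    # Hypothetical spider, shown only to illustrate per-spider settings.
    name = 'header_demo'
    # These override the project-wide values in settings.py for this spider only.
    custom_settings = {
        'USER_AGENT': ('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                       '(KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'),
        'COOKIES_ENABLED': False,
        'ROBOTSTXT_OBEY': False,
    }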

(2) Printing response.body directly in def parse(self, response) gives output that is not UTF-8 encoded. (I ended up not handling it specially, and the data still crawled out fine.)
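
If the garbled output ever needs fixing, one option (a sketch, not the code from this post) is to read response.text, which Scrapy decodes using the encoding it detects for the page, instead of the raw bytes in response.body:

import scrapy

class EncodingDemoSpider(scrapy.Spider):
    # Hypothetical spider, shown only to demonstrate the decoding idea.
    name = 'encoding_demo'
    start_urls = ['http://www.panduoduo.net/c/4/1']

    def parse(self, response):
        # response.body is raw bytes in whatever charset the site serves;
        # response.text is the same body decoded with the detected encoding.
        self.logger.info('detected encoding: %s', response.encoding)
        self.logger.info('decoded preview: %s', response.text[:200])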

II. The spider code

(1) There were still a few problems in the spider, such as selecting the tbody tag under the table: the rows look as if they are wrapped in a tbody, but after half an hour of testing it turned out we can simply take the tr tags directly under the table (see the short sketch after this list).
(2) The tr holds several irregular td tags; we can grab the corresponding data directly with a td[index] selector.
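
To make the tbody pitfall concrete, here is a small self-contained sketch (the sample HTML is invented): browser devtools show a tbody because the browser inserts it into the DOM, but the raw HTML Scrapy downloads has none, so an XPath that goes through tbody matches nothing.

from scrapy.selector import Selector

# Invented fragment imitating the raw HTML of a listing page (no <tbody>).
raw_html = """
<table class="list-resource">
  <tr><td class="t1"><a href="/r/1">file one</a></td></tr>
  <tr><td class="t1"><a href="/r/2">file two</a></td></tr>
</table>
"""

sel = Selector(text=raw_html)
print(len(sel.xpath("//table[@class='list-resource']/tbody/tr")))  # 0 -- no tbody in the raw HTML
print(len(sel.xpath("//table[@class='list-resource']/tr")))        # 2 -- rows found directly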


Here is the code:

#encoding=utf8
import scrapy
from PanDuoDuo.items import PanduoduoItem

class Panduoduo(scrapy.Spider):
    name = 'panduoduo'
    allowed_domains = ['panduoduo.net']
    start_urls = ['http://www.panduoduo.net/c/4/{}'.format(n) for n in range(1,86151)]#6151
    # start_urls = ['http://www.panduoduo.net/c/4/1']#6151
    def parse(self, response):
        base_url = 'http://www.panduoduo.net'
        # print(str(response.body).encode('utf-8'))
        node_list = response.xpath("//div[@class='ca-page']/table[@class='list-resource']")
        node_list = response.xpath("//table[@class='list-resource']/tr")
        # print(node_list)
        for node  in node_list:
            duoItem = PanduoduoItem()
            title = node.xpath("./td[@class='t1']/a/text()").extract()
            print(title)
            duoItem['docName'] = ''.join(title)
            link = node.xpath("./td[@class='t1']/a/@href").extract()
            linkUrl = base_url + ''.join(link)
            duoItem['docLink'] = linkUrl
            print(linkUrl)
            docType = node.xpath("./td[2]/a/text()").extract()
            duoItem['docType'] = ''.join(docType)
            print(docType)
            docSize = node.xpath("./td[@class='t2']/text()").extract()
            print(docSize)
            duoItem['docSize'] = ''.join(docSize)
            docCount = node.xpath("./td[5]/text()").extract()
            docTime = node.xpath("./td[6]/text()").extract()
            duoItem['docCount'] = ''.join(docCount)
            duoItem['docTime'] = ''.join(docTime)
            print(docCount)
            print(docTime)
            yield duoItem
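
Once the pipelines below are wired up in settings.py, the crawl is started from the project root with scrapy crawl panduoduo; a quick way to eyeball the output during development is scrapy crawl panduoduo -o sample.json, which writes items to a file through Scrapy's feed export (the file name is just an example).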

(3) The pipelines.py code

Here the items are stored in MongoDB and also written to a JSON file. In hindsight, writing the JSON file is really redundant, because the data volume has simply become too large. (At one point the MongoDB insert raised errors, probably because MongoDB was busy; deleting the existing data and rerunning was enough to fix it.)

The code:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json
import pymongo
from scrapy.conf import settings

class PanduoduoPipeline(object):
    def process_item(self, item, spider):
        return item

class DuoDuoMongo(object):
    def __init__(self):
        self.client = pymongo.MongoClient(host=settings['MONGO_HOST'], port=settings['MONGO_PORT'])
        self.db = self.client[settings['MONGO_DB']]
        self.post = self.db[settings['MONGO_COLL']]

    def process_item(self, item, spider):
        postItem = dict(item)
        self.post.insert(postItem)
        return item

# Write items to a JSON file
class JsonWritePipline(object):
    def __init__(self):
        self.file = open('盤多多.json','w',encoding='utf-8')

    def process_item(self,item,spider):
        line  = json.dumps(dict(item),ensure_ascii=False)+"\n"
        self.file.write(line)
        return item

    def close_spider(self, spider):
        # close_spider is the hook Scrapy calls on item pipelines when the spider finishes.
        self.file.close()
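
For reference, scrapy.conf and pymongo's collection.insert() are both deprecated in newer releases. A rough sketch of the same MongoDB pipeline written against the current APIs, reading the MONGO_* settings through from_crawler and writing with insert_one (the class name is made up):

import pymongo

class DuoDuoMongoModern(object):
    # Hypothetical variant of DuoDuoMongo, shown only as a sketch.
    def __init__(self, host, port, db_name, coll_name):
        self.host, self.port = host, port
        self.db_name, self.coll_name = db_name, coll_name

    @classmethod
    def from_crawler(cls, crawler):
        # Read the same MONGO_* settings without importing scrapy.conf.
        s = crawler.settings
        return cls(s.get('MONGO_HOST'), s.getint('MONGO_PORT'),
                   s.get('MONGO_DB'), s.get('MONGO_COLL'))

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(host=self.host, port=self.port)
        self.coll = self.client[self.db_name][self.coll_name]

    def process_item(self, item, spider):
        self.coll.insert_one(dict(item))
        return item

    def close_spider(self, spider):
        self.client.close()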

Finally, here is the settings.py code. No proxies or browser emulation are used here, so there is no need to touch middlewares.py for now.
The settings.py code:

# -*- coding: utf-8 -*-

# Scrapy settings for PanDuoDuo project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'PanDuoDuo'

SPIDER_MODULES = ['PanDuoDuo.spiders']
NEWSPIDER_MODULE = 'PanDuoDuo.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'PanDuoDuo (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# MongoDB configuration
MONGO_HOST = "127.0.0.1"  # host IP
MONGO_PORT = 27017  # port
MONGO_DB = "PanDuo"  # database name
MONGO_COLL = "pan_duo"  # collection

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'PanDuoDuo.middlewares.PanduoduoSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'PanDuoDuo.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   # 'PanDuoDuo.pipelines.PanduoduoPipeline': 300,
   'PanDuoDuo.pipelines.DuoDuoMongo': 300,
   'PanDuoDuo.pipelines.JsonWritePipline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
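
Since this crawl fires tens of thousands of requests at a single domain, it may also be worth enabling a small delay or AutoThrottle (both are already stubbed out above). One possible, conservative configuration (the values are illustrative, not from the original run):

DOWNLOAD_DELAY = 0.5
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1
AUTOTHROTTLE_MAX_DELAY = 10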

Finally, a look at the data in the database:

[Screenshot: documents stored in the MongoDB collection]

And here is the current total; it is still crawling, having started around 1:00 PM:
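
The running total can also be checked from Python instead of a GUI client. A small sketch using the same MONGO_* values (count_documents is the current pymongo call; older versions used count()):

import pymongo

client = pymongo.MongoClient(host='127.0.0.1', port=27017)
coll = client['PanDuo']['pan_duo']
# Total number of documents stored so far.
print(coll.count_documents({}))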

[Screenshot: the current total record count]

That's it for now. Next learning task: properly figure out simulated login.
