A worked example: scraping free novels from Qidian (起點中文網) with Python and Scrapy

Tools used: Ubuntu, Python, PyCharm
I. Create a project in PyCharm: process omitted
II. Install the Scrapy framework

pip install Scrapy
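To confirm the installation succeeded, check the version:

scrapy version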

III. Create the Scrapy project:

1. Create the crawler project
    scrapy startproject qidian
2. Create the spider; first change into the project directory
cd qidian/
scrapy genspider book book.qidian.com

After creation, the project directory looks like this:

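This is the standard layout that Scrapy generates:

qidian/
├── scrapy.cfg
└── qidian/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── book.py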
book.py under the spiders/ directory is our spider file.

IV. Open book.py and write the spider code

1. Browse to the catalog of the book to be scraped and find the start URL.
Set start_urls:
# table of contents for 鬼吹燈
start_urls = ['https://book.qidian.com/info/53269#Catalog']
2. When the spider was generated, the URL filter was set to:

allowed_domains = ['book.qidian.com']

Opening a chapter page shows that chapter URLs look like this:
# https://read.qidian.com/chapter/PNjTiyCikMo1/FzxWdm35gIE1
These live under read.qidian.com, so that domain has to be added to allowed_domains as well:
allowed_domains = ['book.qidian.com', 'read.qidian.com']
All that is left is to pull the content we need out of each response with XPath.
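The XPath expressions can be tried out interactively in scrapy shell before they go into the spider (the selectors below assume the page structure at the time of writing):

scrapy shell "https://book.qidian.com/info/53269#Catalog"
>>> response.xpath('//div[@class="volume"][2]/ul/li/a/text()').extract_first()
>>> response.xpath('//div[@class="volume"][2]/ul/li/a/@href').extract_first()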
The complete code is as follows:
# -*- coding: utf-8 -*-
import scrapy
import logging

logger = logging.getLogger(__name__)


class BookSpider(scrapy.Spider):
    name = 'book'
    allowed_domains = ['book.qidian.com', 'read.qidian.com']
    start_urls = ['https://book.qidian.com/info/53269#Catalog']

    def parse(self, response):
        # get the chapter list
        li_list = response.xpath('//div[@class="volume"][2]/ul/li')
        # loop over the list and pull out each chapter's name and URL
        for li in li_list:
            item = {}
            # chapter name
            item['chapter_name'] = li.xpath('./a/text()').extract_first()
            # chapter url
            item['chapter_url'] = li.xpath('./a/@href').extract_first()
            # the extracted href is protocol-relative, e.g.
            # //read.qidian.com/chapter/PNjTiyCikMo1/TpiSLsyH5Hc1
            # so the scheme has to be added before it can be requested
            if item['chapter_url'] is not None:
                item['chapter_url'] = 'https:' + item['chapter_url']
                # follow each chapter URL; meta passes the item to the callback
                yield scrapy.Request(item['chapter_url'], callback=self.parse_chapter, meta={'item': item})

    def parse_chapter(self, response):
        item = response.meta['item']
        # get the chapter body text
        item['chapter_content'] = response.xpath('//div[@class="read-content j_readContent"]/p/text()').extract()
        yield item
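The spider can now be run from the project root to check that parsing works:

scrapy crawl book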

V. Save the scraped data to MongoDB

1. Edit settings.py.
Find ITEM_PIPELINES and uncomment it (the module and class names must match the project name, qidian):
ITEM_PIPELINES = {
    'qidian.pipelines.QidianPipeline': 300,
}
2. Add the MongoDB-related settings:
# host address
MONGODB_HOST = '127.0.0.1'
# port
MONGODB_PORT = 27017
# name of the database to save into
MONGODB_DBNAME = 'qidian'
# name of the collection to save into
MONGODB_DOCNAME = 'dmbj'
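Before wiring up the pipeline, it is worth confirming that MongoDB is reachable with these settings. A minimal check with pymongo, assuming a local mongod is running on the default port:

import pymongo

client = pymongo.MongoClient('127.0.0.1', 27017)
# server_info() raises ServerSelectionTimeoutError if the server cannot be reached
print(client.server_info()['version'])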
3. Save the data in pipelines.py; the finished file looks like this:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import pymongo
from scrapy.utils.project import get_project_settings

# scrapy.conf has been removed from modern Scrapy; read the project settings this way
settings = get_project_settings()


class QidianPipeline(object):
    def __init__(self):
        '''set up the MongoDB connection from the project settings'''
        host = settings['MONGODB_HOST']
        port = settings['MONGODB_PORT']
        db_name = settings['MONGODB_DBNAME']
        client = pymongo.MongoClient(host=host, port=port)
        db = client[db_name]
        self.post = db[settings['MONGODB_DOCNAME']]

    def process_item(self, item, spider):
        # each scraped chapter becomes one MongoDB document
        self.post.insert_one(dict(item))
        return item
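Run scrapy crawl book again and the chapters will be written to MongoDB. A quick way to verify from the mongo shell:

mongo
> use qidian
> db.dmbj.find().count()
> db.dmbj.findOne()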