Python 3 Crawler: Scraping High-Resolution League of Legends Desktop Wallpapers with the Scrapy Framework

This post walks through scraping high-resolution League of Legends desktop wallpapers with a Scrapy crawler.

Source code: https://github.com/snowyme/loldesk

Before starting the project you need Python 3 and Scrapy installed; installation itself is not covered in detail here.
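For reference, Scrapy can usually be installed with pip (assuming a working Python 3 environment; adjust the command to your setup):

pip3 install scrapy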

First, create the project:

scrapy startproject loldesk

This generates the project's directory structure.
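The layout produced by startproject follows Scrapy's default skeleton, roughly as sketched below (the spider file loldesk.py is added later under spiders/):

loldesk/
    scrapy.cfg            # deploy configuration
    loldesk/
        __init__.py
        items.py          # item definitions
        middlewares.py
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/
            __init__.py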

First we need to define the fields to scrape. In items.py, this project uses the image name and the image URL.

import scrapy

class LoldeskItem(scrapy.Item):
    name = scrapy.Field()         # wallpaper title, used later as the folder name
    ImgUrl = scrapy.Field()       # list of image URLs for one wallpaper page
    image_paths = scrapy.Field()  # filled in by the pipeline after download

Next, create the spider file in the spiders directory and write the main code, loldesk.py:

import scrapy
from loldesk.items import LoldeskItem

class LoldeskSpider(scrapy.Spider):
    name = "loldesk"
    allowed_domains = ["www.win4000.com"]
    # listing page to start from
    start_urls = [
        'http://www.win4000.com/zt/lol.html'
    ]

    def parse(self, response):
        # each <li> in the listing links to a wallpaper detail page
        for img in response.css(".Left_bar ul li"):
            imgurl = img.css("a::attr(href)").extract_first()
            if imgurl:
                yield scrapy.Request(imgurl, callback=self.content)

        # follow the pagination link once per listing page
        next_url = response.css(".next::attr(href)").extract_first()
        if next_url is not None:
            yield response.follow(next_url, callback=self.parse)

    def content(self, response):
        item = LoldeskItem()
        item['name'] = response.css(".pic-large::attr(title)").extract_first()
        item['ImgUrl'] = response.css(".pic-large::attr(src)").extract()
        yield item
        # compare the current page number with the total number of images in the set
        next_url = response.css(".pic-next-img a::attr(href)").extract_first()
        allnum = response.css(".ptitle em::text").extract_first()
        thisnum = next_url[-6:-5]  # page index embedded in the next-page URL
        if int(allnum) > int(thisnum):
            # follow the next image in the set
            yield response.follow(next_url, callback=self.content)
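Before running the full crawl, the CSS selectors can be sanity-checked interactively with scrapy shell (a quick illustration of the selectors used in parse; output is omitted since it depends on the live page):

scrapy shell "http://www.win4000.com/zt/lol.html"
>>> response.css(".Left_bar ul li a::attr(href)").extract_first()   # first detail-page URL
>>> response.css(".next::attr(href)").extract_first()               # pagination link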

The image URLs and names are now collected; next, use an images pipeline to download the images and save them locally, in pipelines.py:

from scrapy.pipelines.images import ImagesPipeline
from scrapy.exceptions import DropItem
from scrapy.http import Request
import re

class MyImagesPipeline(ImagesPipeline):

    def get_media_requests(self, item, info):
        # issue one download request per image URL, carrying the name along in meta
        for image_url in item['ImgUrl']:
            yield Request(image_url, meta={'item': item['name']})

    def file_path(self, request, response=None, info=None):
        # strip characters that are invalid in file names, plus digits
        name = request.meta['item']
        name = re.sub(r'[?\\*|“<>:/()0123456789]', '', name)
        image_guid = request.url.split('/')[-1]
        filename = u'full/{0}/{1}'.format(name, image_guid)
        return filename

    def item_completed(self, results, item, info):
        # keep only the paths of successfully downloaded images
        image_path = [x['path'] for ok, x in results if ok]
        if not image_path:
            raise DropItem('Item contains no images')
        item['image_paths'] = image_path
        return item
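Note that in Scrapy 2.4 and later, file_path also receives the item as a keyword-only argument. If you are on a recent version, an equivalent override would look roughly like this (a sketch with the same behavior as the method above):

    def file_path(self, request, response=None, info=None, *, item=None):
        # identical logic; the title could also be read from item['name'] directly
        name = re.sub(r'[?\\*|“<>:/()0123456789]', '', request.meta['item'])
        return u'full/{0}/{1}'.format(name, request.url.split('/')[-1])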

Finally, set the storage directory and enable the pipeline in settings.py:

# image storage path
IMAGES_STORE = 'F:/python/loldesk'
# enable the custom images pipeline
ITEM_PIPELINES = {
   'loldesk.pipelines.MyImagesPipeline': 300,
}
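Depending on the target site, two more settings may be relevant; the values below are illustrative additions, not part of the original project:

# the default project template enables robots.txt checking, which can block the crawl
ROBOTSTXT_OBEY = False
# throttle requests slightly to be polite to the server
DOWNLOAD_DELAY = 0.5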

Run the crawler from the project root:

scrapy crawl loldesk
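If you also want a record of the scraped titles and URLs, Scrapy's built-in feed export can write the items to a file alongside the downloads, for example:

scrapy crawl loldesk -o items.json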

All done! A total of 128 folders were scraped.

For a tutorial on scraping glamour photo galleries, see:

https://blog.csdn.net/ziwoods/article/details/84334263
