Scraping Handsome-Guy Photos with Python3

This crawler uses requests, a third-party library built on top of urllib3.
Source site: http://www.shuaia.net/index.html

Install the third-party requests library:

pip install requests

Scraping target links from a single page

    Using Inspect Element, we find that each target address is stored in the href attribute of the <a> tags whose class attribute is "item-img". Following that address is equivalent to clicking the image and entering its own page, from which the next page's address can be found. The <img> tag inside the <a> tag also carries a link, but it points to the homepage thumbnail, not the high-resolution image.


The code is as follows:

from bs4 import BeautifulSoup
import requests

if __name__ == '__main__':
    url = 'http://www.shuaia.net/index.html'
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
    }
    req = requests.get(url=url, headers=headers)
    req.encoding = 'utf-8'
    html = req.text
    bs = BeautifulSoup(html, 'lxml')
    targets_url = bs.find_all(class_="item-img")
    list_url = []
    for each in targets_url:
        list_url.append(each.img.get('alt') + ': ' + each.get('href'))
    print(list_url)

    This collects the image links from the homepage.
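The extraction logic can be checked offline against a minimal HTML snippet. The markup below is a made-up example mimicking the structure described above, not a real page from the site:

```python
from bs4 import BeautifulSoup

# Made-up fragment: the high-res page link sits in the
# <a class="item-img"> href, while the inner <img> only
# holds the thumbnail.
html = '''
<a class="item-img" href="/wenshennan/2017-05-04/1289.html">
    <img alt="some title" src="/thumbs/1289.jpg">
</a>
'''

bs = BeautifulSoup(html, 'html.parser')   # html.parser needs no extra install
list_url = []
for each in bs.find_all(class_='item-img'):
    list_url.append(each.img.get('alt') + ': ' + each.get('href'))

print(list_url)   # ['some title: /wenshennan/2017-05-04/1289.html']
```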

Scraping target links from multiple pages

    Turning to page 2, it is easy to see that the address becomes http://www.shuaia.net/index_2.html, page 3 is http://www.shuaia.net/index_3.html, and so on.

    To collect the links from the first 19 pages, rework the code as follows:

from bs4 import BeautifulSoup
import requests

if __name__ == '__main__':
    list_url = []
    for num in range(1, 20):
        if num == 1:
            url = 'http://www.shuaia.net/index.html'
        else:
            url = 'http://www.shuaia.net/index_%d.html' % num
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
        }
        req = requests.get(url=url, headers=headers)
        req.encoding = 'utf-8'
        html = req.text
        bs = BeautifulSoup(html, 'lxml')
        targets_url = bs.find_all(class_='item-img')
        for each in targets_url:
            list_url.append(each.img.get('alt') + ': ' + each.get('href'))
    print(list_url)
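The if/else over page numbers can be isolated into a small helper. page_url is a hypothetical name, not from the original code:

```python
def page_url(num):
    """Return the listing URL for page `num`; page 1 has no numeric suffix."""
    if num == 1:
        return 'http://www.shuaia.net/index.html'
    return 'http://www.shuaia.net/index_%d.html' % num

print(page_url(1))   # http://www.shuaia.net/index.html
print(page_url(3))   # http://www.shuaia.net/index_3.html
```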

Downloading a single image

Open a target address and use Inspect Element: the image address is stored in the src attribute of the div->p->a->img chain, where the div's class attribute is "wr-single-content-list".

The code is as follows:

from bs4 import BeautifulSoup
import requests
from urllib.request import urlretrieve
import os

target_url = 'http://www.shuaia.net/wenshennan/2017-05-04/1289.html'
filename = '花紋身襯托完美肌肉的歐美男' + '.jpg'
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
}
img_req = requests.get(url=target_url, headers=headers)
img_req.encoding = 'utf-8'
img_html = img_req.text
img_bf = BeautifulSoup(img_html, 'lxml')
img_url = img_bf.find_all('div', class_='wr-single-content-list')

img_bf1 = BeautifulSoup(str(img_url), 'lxml')
img_url = 'http://www.shuaia.net' + img_bf1.div.img.get('src')
os.makedirs('images', exist_ok=True)
urlretrieve(url=img_url, filename='images/' + filename)
print('下載完成')
The image is saved in the images folder under the program's directory.
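The alt text is used directly as the file name, which would break if a title ever contained characters such as / or :. A minimal safeguard could look like this (the sanitize helper is my own, not part of the original code):

```python
import re

def sanitize(name):
    """Replace characters that are illegal in Windows/Unix file names."""
    return re.sub(r'[\\/:*?"<>|]', '_', name)

filename = sanitize('AC/DC: live') + '.jpg'
print(filename)   # AC_DC_ live.jpg
```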

Downloading multiple images (complete code)

This approach is simple but slow. The server runs anti-crawler measures, so we cannot scrape too fast: each image download needs a 1-second delay, otherwise the server will drop the connection.

from bs4 import BeautifulSoup
import requests
from urllib.request import urlretrieve
import os
import time

if __name__ == '__main__':
    list_url = []
    for num in range(1, 10):
        if num == 1:
            url = 'http://www.shuaia.net/index.html'
        else:
            url = 'http://www.shuaia.net/index_%d.html' % num
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
        }
        req = requests.get(url=url, headers=headers)
        req.encoding = 'utf-8'
        html = req.text
        bs = BeautifulSoup(html, 'lxml')
        targets_url = bs.find_all(class_='item-img')
        for each in targets_url:
            list_url.append(each.img.get('alt') + ': ' + each.get('href'))
    print('鏈接採集完成')

    for each_img in list_url:
        img_info = each_img.split(': ')
        targets_url = img_info[1]
        filename = img_info[0] + '.jpg'
        print('正在下載:' + filename)
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
        }
        img_req = requests.get(url=targets_url, headers=headers)
        img_req.encoding = 'utf-8'
        img_html = img_req.text
        img_bf = BeautifulSoup(img_html, 'lxml')
        img_url = img_bf.find_all('div', class_='wr-single-content-list')

        img_bf1 = BeautifulSoup(str(img_url), 'lxml')
        if img_bf1.div is None or img_bf1.div.img is None:   # guard against unexpected page layouts
            continue
        img_url = 'http://www.shuaia.net' + img_bf1.div.img.get('src')
        os.makedirs('images', exist_ok=True)
        urlretrieve(url=img_url, filename='images/' + filename)
        time.sleep(1)   # sometimes it works without the delay, but 1 s keeps the server from dropping us
    print('下載完成')
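A fixed one-second pause is easy for a server to fingerprint. A slightly randomized delay is a common variation; this is only a sketch, and the exact bounds are arbitrary:

```python
import random
import time

def polite_sleep(base=1.0, jitter=0.5):
    """Sleep for `base` seconds plus a random extra of up to `jitter` seconds."""
    delay = base + random.uniform(0, jitter)
    time.sleep(delay)
    return delay

# Tiny values here just so the demonstration finishes quickly;
# in the crawler above you would call polite_sleep() with the defaults.
d = polite_sleep(base=0.01, jitter=0.01)
print(round(d, 3))
```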


Here are the final downloaded images:

