Environment: Python 3.6 + BeautifulSoup4
Scraping target: JD mobile phone images, https://list.jd.com/list.html?cat=9987,653,655
Approach
First open the target page https://list.jd.com/list.html?cat=9987,653,655 and look at the GET request it sends. Compare the first-page URL with the second-page URL:
https://list.jd.com/list.html?cat=9987,653,655
https://list.jd.com/list.html?cat=9987,653,655&page=2&sort=sort_rank_asc&trans=1&JL=6_0_0&ms=6#J_main
The page parameter selects which page is retrieved. To verify, type
https://list.jd.com/list.html?cat=9987,653,655&page=3
into the address bar: the page opens normally, so pagination can be driven by page alone.
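The observation above can be sketched as a small URL builder. The base URL is the one from the article; the helper name page_url and the page range are only illustrative:

```python
# Minimal sketch of building the paginated listing URLs; the base URL
# comes from the article, everything else here is illustrative.
BASE = "https://list.jd.com/list.html?cat=9987,653,655"

def page_url(page):
    """Return the listing URL for a 1-based page number."""
    # Page 1 works without the page parameter; later pages append it.
    return BASE if page == 1 else BASE + "&page=" + str(page)

for p in range(1, 4):
    print(page_url(p))
```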
The second step is to write a function that handles each page. The crawler needs a simple disguise here: add a user-agent header so the request looks like it comes from a browser, then filter the result with BeautifulSoup.
Inspecting the listing shows that the product images all have width and height of 220, which makes a good filter condition:
imglist = soup.find_all("img", width=220, height=220)
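The filtering step can be checked offline on a hand-written snippet (the real JD markup will differ). The stdlib "html.parser" stands in for html5lib here, and the dimensions are quoted because parsed attribute values are strings:

```python
# Offline check of the find_all filter; the HTML below is made up for
# illustration. "html.parser" (stdlib) replaces html5lib, and the width
# and height values are quoted since parsed attribute values are strings.
from bs4 import BeautifulSoup

html = '''
<li><img width="220" height="220" src="//img10.360buyimg.com/n7/a.jpg"></li>
<li><img width="100" height="100" src="//img10.360buyimg.com/n7/b.jpg"></li>
'''

soup = BeautifulSoup(html, "html.parser")
imglist = soup.find_all("img", width="220", height="220")
print(len(imglist))  # only the 220x220 image matches
```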
Some of the img tags have a src attribute and some do not, so extract the image URL with a regular expression:
src = re.compile(r'//img.+\.jpg').search(str(img))
imgurl = "https:" + src.group()
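The regex step can also be tried offline on a sample tag string; the tag markup below is made up, the pattern is the article's:

```python
# Offline check of the regex extraction; the img tag string here is a
# made-up example of what str(img) might look like.
import re

tag = '<img width="220" height="220" src="//img10.360buyimg.com/n7/sample.jpg">'
src = re.compile(r'//img.+\.jpg').search(tag)
if src is not None:
    imgurl = "https:" + src.group()
    print(imgurl)  # https://img10.360buyimg.com/n7/sample.jpg
```

Note that the pattern is anchored to nothing and `.+` is greedy, which is fine for a tag with a single .jpg URL but would over-match if one tag contained several.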
Finally, save the image locally with urlretrieve:
request.urlretrieve(imgurl, filename=imagename)
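The saving step assumes the target directory already exists (the full code below writes straight into D:/img/). A sketch that also creates the directory first, using an illustrative folder name and the article's naming scheme:

```python
# Sketch of building the local save path; os.makedirs ensures the folder
# exists before urlretrieve writes into it. The folder name "img_demo"
# is illustrative; the <page><x>.jpg naming follows the article.
import os

def image_path(folder, page, x):
    """Build the local filename used in the article: <page><x>.jpg."""
    os.makedirs(folder, exist_ok=True)
    return os.path.join(folder, str(page) + str(x) + ".jpg")

print(image_path("img_demo", 2, 3))
```

One caveat with this scheme: concatenating the two counters can collide (page 1, image 12 and page 11, image 2 both yield "112.jpg"), so a separator or zero-padding would be safer.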
Code
import re
from bs4 import BeautifulSoup
from urllib import request
from urllib import error

def craw(url, page):
    # print("=====" + str(page))
    req = request.Request(url)
    # Disguise the crawler as a browser
    req.add_header('user-agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36')
    html = request.urlopen(req).read().decode("utf8")
    soup = BeautifulSoup(html, "html5lib")
    # Filter: product images are 220x220
    imglist = soup.find_all("img", width=220, height=220)
    # print(imglist)
    x = 1
    for img in imglist:
        # print(str(page) + str(x))
        # print(img)
        src = re.compile(r'//img.+\.jpg').search(str(img))
        if src is None:
            continue
        imgurl = "https:" + src.group()
        # Save to the D drive
        imagename = "D:/img/" + str(page) + str(x) + ".jpg"
        try:
            # Download the image to the given path under the new name
            request.urlretrieve(imgurl, filename=imagename)
        except error.URLError as e:
            print(e.reason)
        x += 1

def test():
    for i in range(1, 6):  # the first five pages
        url = "https://list.jd.com/list.html?cat=9987,653,655&page=" + str(i)
        craw(url, i)

if __name__ == "__main__":
    test()