Downloading wallpapers with a crawler, and setting them to switch automatically

Code below (first draft: not yet cleaned up, revised, or encapsulated):

1. The crawler

It started with Baidu's wallpaper search. I'm partial to snow scenes, so I wanted to batch-download them with a crawler, only to find that the Baidu wallpaper page is rendered dynamically. A plain request didn't work either, so in the end I had to fall back on PhantomJS to grab the page source. Once I had the source I parsed out the image URLs, but downloading them with urlretrieve got me a 403 Forbidden from Baidu. I kept looking for a way around it: sending requests with browser headers, no luck; grabbing the response and writing the bytes to a file myself, no luck. Finally I remembered wget on Ubuntu: write the URLs to a file, then wget -i filename to batch-download them. That downloads fine, but wget hits the server too fast, and after a dozen or so images Baidu blocked it again. Checking the wget options, -w (--wait) sleeps between requests, so the plan is to throttle the accesses to something reasonable, as shown below.
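For reference, the throttled batch download ends up looking like this; the 5-second wait is an arbitrary choice, test.txt is the URL list the script below writes out, and -P just sets the output directory:

wget -w 5 -i test.txt -P BDpictures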

2. Automatic background switching (Windows 10)

Personalization - Background - Background - Slideshow; point it at the folder where the images are stored, done.
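If you'd rather switch wallpapers from Python instead of the Settings UI, here is a minimal sketch using the Win32 SystemParametersInfoW call (SPI_SETDESKWALLPAPER = 20); the set_random_wallpaper helper is my own name for it, and you'd still need something like Task Scheduler to run it on an interval:

import ctypes
import os
import random

def set_random_wallpaper(folder):
    # pick a random downloaded image and make it the desktop wallpaper;
    # the final 3 = SPIF_UPDATEINIFILE | SPIF_SENDCHANGE persists the change
    images = [f for f in os.listdir(folder) if f.lower().endswith(".jpg")]
    path = os.path.join(folder, random.choice(images))
    ctypes.windll.user32.SystemParametersInfoW(20, 0, path, 3)

set_random_wallpaper(r"C:\Users\Mr.Guo\Pictures\BDpictures")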

# first attempt: plain urlopen, kept commented out for reference
'''
from urllib import request
from bs4 import BeautifulSoup as bs

url = "http://image.baidu.com/search/index?tn=baiduimage&ct=201326592&lm=-1&cl=2&ie=gbk&word=%D1%A9%BE%B0%D7%C0%C3%E6%B1%DA%D6%BD&fr=ala&ala=1&pos=0&alatpl=wallpaper&oriquery=%E9%9B%AA%E6%99%AF%E6%A1%8C%E9%9D%A2%E5%A3%81%E7%BA%B8"
response = request.urlopen(url).read()
content = str(response,encoding = "utf-8")
bs_obj = bs(content,"html.parser")
print(bs_obj)

'''
#urlopen is the simplest option, but it's useless here: the page is rendered by JS
#next up, the heavy artillery: PhantomJS
from selenium import webdriver

driver = webdriver.PhantomJS()
driver.set_window_size(25600,14400)  # oversized viewport so more lazily-loaded thumbnails render
driver.get("http://image.baidu.com/search/index?tn=baiduimage&ct=201326592&lm=-1&cl=2&ie=gbk&word=%D1%A9%BE%B0%B1%DA%D6%BD&fr=ala&ala=1&pos=0&alatpl=wallpaper&oriquery=%E9%9B%AA%E6%99%AF%E5%A3%81%E7%BA%B8s")
page_source = driver.page_source
#print(page_source)
#next, re to the rescue
#...and re failed. The page source keeps changing under you, elements appear and disappear. Incredible.
'''
import re
pattern = re.compile(r'src="http://.*?\.jpg"', re.I)   # flags belong in compile(), not findall()
img_src_list = pattern.findall(page_source)
url_pattern = re.compile(r'http://.*?\.jpg', re.I)
img_url_list = []
for i in img_src_list:
    img_url_list.append(url_pattern.search(i).group()) # patterns have no .find(); use .search()

for i in img_url_list:
    print(i)
'''
#fall back to the old friend bs4
#init download path
download_path = r"C:\Users\Mr.Guo\Pictures\BDpictures"
import os
from bs4 import BeautifulSoup
import requests             # used only in the commented-out attempt below
from urllib import request  # used only in the commented-out attempt below
bs_obj = BeautifulSoup(page_source,"html.parser")
img_url_list = bs_obj.findAll("img",{"class":"main_img img-hover"})  # the thumbnail <img> tags
final_url_list = []
for i in img_url_list:
    final_url_list.append(i.attrs["src"])
#print(final_url_list)
f = open(os.path.join(download_path,"test.txt"),'a')  # note: "\t" in "\test.txt" is a tab escape, so join paths properly
for i in range(len(final_url_list)):
    print(final_url_list[i])
    try:
        '''
        # attempt 1: urlretrieve with a browser User-Agent: still 403
        opener=request.build_opener()
        opener.addheaders=[('User-Agent','Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.75 Safari/537.36')]
        request.install_opener(opener)
        request.urlretrieve(final_url_list[i], os.path.join(download_path,"%s.jpg"%i))
        '''
        '''
        # attempt 2: requests + writing the bytes manually: also blocked
        r = requests.get(final_url_list[i])
        i_download_path = os.path.join(download_path,"%s.jpg"%i)
        with open(i_download_path, "wb") as code:
             code.write(r.content)
        '''
        f.write(final_url_list[i]+'\n')
    except Exception as e:
        print(e)
f.close()
driver.quit()
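Once the script finishes, test.txt holds one image URL per line; feed it to the wget command from step 1 to complete the download.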