This is Bilibili's completed-anime page, found under Bilibili → Anime → Completed Animation. Today we will scrape the completed series there to look at their play counts, coin counts, and other statistics.
How to scrape:
Bilibili is quite friendly toward crawlers: it exposes dedicated API endpoints.
https://github.com/uupers/BiliSpider/wiki
This wiki lists the endpoints for each of Bilibili's sections. Since we are scraping data from a second-level section, the link [Bilibili API 二级分区视频分页数据(投稿时间逆序)] (second-level-section video pagination data, newest first) on the right side of that page documents the video-data endpoint we need. It returns a JSON file.
Everything we need is in that JSON, so instead of scraping the web pages we fetch the JSON from the API, extract the fields we want, and save them to a CSV file.
The overall plan: fetch the JSON containing the video data -> extract the fields from the JSON -> save the data as CSV.
Implementation:
This is the address of the JSON file we need to fetch:
def get_url():
    url = 'http://api.bilibili.com/x/web-interface/newlist?rid=32&pn='
    for i in range(1, 328):
        urls.append(url + str(i) + '&ps=50')
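To see what addresses this generates, here is the list-comprehension equivalent of get_url(), printing just the first of the 327 page URLs instead of fetching them:

```python
# Build the list of API page addresses exactly as get_url() does:
# pages 1..327, 50 items per page.
base = 'http://api.bilibili.com/x/web-interface/newlist?rid=32&pn='
urls = [base + str(i) + '&ps=50' for i in range(1, 328)]

print(len(urls))  # 327
print(urls[0])    # http://api.bilibili.com/x/web-interface/newlist?rid=32&pn=1&ps=50
```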
Get the information out of the JSON file:
def get_message(url):
    print(url)
    time.sleep(1)  # one request per second per thread; with 4 threads, about 4 requests per second
    try:
        r = requests.get(url, timeout=5)
        data = json.loads(r.text)['data']['archives']
        for item in data:
            content = {}
            content['aid'] = item['aid']
            content['title'] = item['title']
            content['view'] = item['stat']['view']
            content['danmaku'] = item['stat']['danmaku']
            content['reply'] = item['stat']['reply']
            content['favorite'] = item['stat']['favorite']
            content['coin'] = item['stat']['coin']
            comic_list.append(content)
    except Exception as e:
        print(e)
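To check the field-picking logic without hitting the network, we can run it on a hand-made sample payload. The numbers and title below are invented for illustration; only the key names match what the code above reads from the real response:

```python
import json

# A made-up response in the shape the code above expects.
sample = '''{"data": {"archives": [
    {"aid": 1, "title": "demo", "stat": {"view": 100, "danmaku": 5,
     "reply": 2, "favorite": 8, "coin": 3}}
]}}'''

comic_list = []
for item in json.loads(sample)['data']['archives']:
    content = {
        'aid': item['aid'],
        'title': item['title'],
        'view': item['stat']['view'],
        'danmaku': item['stat']['danmaku'],
        'reply': item['stat']['reply'],
        'favorite': item['stat']['favorite'],
        'coin': item['stat']['coin'],
    }
    comic_list.append(content)

print(comic_list[0]['view'])  # 100
```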
Then write the results to a CSV file:
def write_to_file(comic_list):  # write the collected rows to a CSV file
    with open(r'bilibili-comic.csv', 'w', newline='', encoding='utf-8') as f:
        fieldnames = ['aid', 'title', 'view', 'danmaku', 'reply', 'favorite', 'coin']
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        try:
            writer.writerows(comic_list)
        except Exception as e:
            print(e)
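A quick way to verify the CSV step is to write a couple of invented rows (the titles and numbers here are made up, not real scrape results) and read them back:

```python
import csv

fieldnames = ['aid', 'title', 'view', 'danmaku', 'reply', 'favorite', 'coin']
rows = [
    {'aid': 1, 'title': 'demo A', 'view': 100, 'danmaku': 5,
     'reply': 2, 'favorite': 8, 'coin': 3},
    {'aid': 2, 'title': 'demo B', 'view': 200, 'danmaku': 9,
     'reply': 4, 'favorite': 1, 'coin': 7},
]

# Same pattern as write_to_file(), just with a throwaway filename.
with open('demo.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)

with open('demo.csv', newline='', encoding='utf-8') as f:
    back = list(csv.DictReader(f))

print(back[1]['title'])  # demo B
```

Note that DictReader returns every field as a string, so numeric columns come back as '100', '200', and so on.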
My machine has 4 cores, so I create a pool of 4 threads and call map to run get_message over all the URLs:
get_url()
pool = ThreadPool(4)
pool.map(get_message, urls)
pool.close()
write_to_file(comic_list)
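The same pool pattern can be tried on a harmless function first; the doubling function below is just a stand-in for get_message, so no network is involved:

```python
from multiprocessing.dummy import Pool as ThreadPool  # threads, not processes

def work(n):  # stand-in for get_message
    return n * 2

pool = ThreadPool(4)                     # 4 worker threads
results = pool.map(work, [1, 2, 3, 4])   # blocks until every task finishes
pool.close()
pool.join()

print(results)  # [2, 4, 6, 8]
```

Unlike get_message, which appends to the shared comic_list, returning values from map keeps the workers free of shared state, and map preserves the input order in its result list.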
The complete code:
import requests
import json
import csv
from multiprocessing.dummy import Pool as ThreadPool  # thread pool with the multiprocessing API
import time

comic_list = []
urls = []

def get_url():
    url = 'http://api.bilibili.com/x/web-interface/newlist?rid=32&pn='
    for i in range(1, 328):
        urls.append(url + str(i) + '&ps=50')

def get_message(url):
    print(url)
    time.sleep(1)  # one request per second per thread; with 4 threads, about 4 requests per second
    try:
        r = requests.get(url, timeout=5)
        data = json.loads(r.text)['data']['archives']
        for item in data:
            content = {}
            content['aid'] = item['aid']
            content['title'] = item['title']
            content['view'] = item['stat']['view']
            content['danmaku'] = item['stat']['danmaku']
            content['reply'] = item['stat']['reply']
            content['favorite'] = item['stat']['favorite']
            content['coin'] = item['stat']['coin']
            comic_list.append(content)
    except Exception as e:
        print(e)

def write_to_file(comic_list):  # write the collected rows to a CSV file
    with open(r'bilibili-comic.csv', 'w', newline='', encoding='utf-8') as f:
        fieldnames = ['aid', 'title', 'view', 'danmaku', 'reply', 'favorite', 'coin']
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        try:
            writer.writerows(comic_list)
        except Exception as e:
            print(e)

get_url()
pool = ThreadPool(4)
pool.map(get_message, urls)
pool.close()
write_to_file(comic_list)
Many websites expose APIs like this one; looking for an official interface first can make scraping far simpler than parsing HTML pages.