1. Related URLs and Libraries
Reference article: "Python3 Web Crawler (Part 2): The Right Way to Download Novels" (Python3 網絡爬蟲(二):下載小說的正確姿勢)
Target site:
https://www.xsbiquge.com
Required libraries:
requests, beautifulsoup4, tqdm (plus lxml, since the code passes 'lxml' to BeautifulSoup as the parser)
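The dependencies can be pinned in a requirements file. A possible `requirements.txt` (note the package name is "requests", with an s, and lxml is included because the code below uses the 'lxml' parser):

```
requests
beautifulsoup4
tqdm
lxml
```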
2. Code Implementation
```python
import requests
from bs4 import BeautifulSoup
from tqdm import tqdm


class NovelSpider(object):
    def __init__(self):
        self.server = 'https://www.xsbiquge.com'
        self.target_url = 'https://www.xsbiquge.com/15_15338/'
        self.book_name = '詭祕之主.txt'
        # Stores each chapter's title and link
        self.chapter_list = []

    # 1. Send the request
    def get_response(self, url):
        response = requests.get(url)
        data = response.content.decode('utf-8')
        return data

    # 2. Parse the data
    # 2.1 Parse the chapter index page
    def parse_list_data(self, data):
        bs_chapter = BeautifulSoup(data, 'lxml')
        chapters = bs_chapter.find('div', id='list')
        chapters = chapters.find_all('a')
        for chapter in chapters:
            chapter_dict = {}
            # Chapter title
            chapter_dict['chapter'] = chapter.get_text()
            # hrefs on the index page are site-relative, so prepend the server
            chapter_dict['url'] = self.server + chapter.get('href')
            self.chapter_list.append(chapter_dict)

    # 2.2 Parse the chapter content page
    def parse_detail_data(self, data):
        bs_content = BeautifulSoup(data, 'lxml')
        texts = bs_content.find('div', id='content')
        # Strip surrounding whitespace, then split paragraphs on the
        # four non-breaking spaces used as first-line indentation
        content = texts.get_text().strip().split('\xa0' * 4)
        return content

    def run(self):
        data = self.get_response(self.target_url)
        self.parse_list_data(data)
        # print(self.chapter_list)
        for data in tqdm(self.chapter_list):
            content_url = data['url']
            content_data = self.get_response(content_url)
            chapter_content = self.parse_detail_data(content_data)
            # Save the data
            with open(self.book_name, 'a', encoding='utf-8') as f:
                f.write(data['chapter'])
                f.write('\n')
                f.write('\n'.join(chapter_content))
                f.write('\n\n')


if __name__ == '__main__':
    NovelSpider().run()
```
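The clean-up in `parse_detail_data` relies on a quirk of this site's markup: each paragraph's first-line indent is four non-breaking spaces (`\xa0`). A minimal stdlib-only sketch of that split, on made-up sample text:

```python
# Made-up sample text mimicking a chapter body on the site:
# each paragraph is introduced by four non-breaking spaces (\xa0).
raw = '\xa0' * 4 + 'First paragraph.' + '\xa0' * 4 + 'Second paragraph.'

# strip() removes the leading run of \xa0 (it counts as whitespace),
# then splitting on the 4-character marker yields one paragraph per element.
paragraphs = raw.strip().split('\xa0' * 4)
print(paragraphs)  # → ['First paragraph.', 'Second paragraph.']
```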
3. Summary
- Parse the chapter index page and the chapter detail pages separately.
- Use tqdm to display download progress.
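One small robustness note on the URL construction above: the spider concatenates `self.server` with each href, which assumes every href is site-relative. The standard library's `urljoin` handles both relative and already-absolute links; a sketch using a hypothetical chapter path:

```python
from urllib.parse import urljoin

server = 'https://www.xsbiquge.com'
# Hypothetical chapter href as it might appear on the index page
href = '/15_15338/8549128.html'

# urljoin resolves a relative href against the base URL,
# and leaves an already-absolute href untouched.
print(urljoin(server, href))
# → https://www.xsbiquge.com/15_15338/8549128.html
```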