Scraping Qidian's free completed novels with BeautifulSoup and re

http://f.qidian.com/all?size=-1&sign=-1&tag=-1&chanId=-1&subCateId=-1&orderId=&update=-1&page=1&month=-1&style=1&action=-1

This is the first page of the site. Looking at the URL, the only thing that varies is the page parameter, so a for loop can step through the pages by swapping in the page number.
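
As a minimal sketch (the upper bound of 6 below is purely illustrative; the real listing has far more pages), the page URLs can be generated like this:

# Substitute the page number into the otherwise fixed URL
base = 'http://f.qidian.com/all?size=-1&sign=-1&tag=-1&chanId=-1&subCateId=-1&orderId=&update=-1&page=%d&month=-1&style=1&action=-1'
for p in range(1, 6):
    print base % p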

Since the goal is every novel on the site, the plan was to first scrape each novel's name and link, then follow each novel's link to scrape its chapter names and chapter links, and finally the chapter contents. The biggest problem I ran into was not knowing how to scrape the chapter names and chapter links from a novel's page, i.e., how to pull out the text of the chapter list.

After digging through various resources, I found that all it takes is writing out the regular expression and matching against the page; finditer does the work:

reg = re.finditer(r'<a data-cid="(.*?)" data-eid="qd_G55" href=".*?" target="_blank" title="首發時間:.*?章節字數:.*?">(.*?)</a>',tent)
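
To make the matching concrete, here is a self-contained sketch of how finditer walks the matches; the HTML string is a made-up stand-in for the real chapter list:

# -*- coding:utf-8 -*-
import re

# Fabricated sample shaped like one chapter link on the real page
tent = '<a data-cid="//read.qidian.com/chapter/abc" data-eid="qd_G55" href="//read.qidian.com/chapter/abc" target="_blank" title="首發時間:2017-01-01 章節字數:3000">第一章 開端</a>'
reg = re.finditer(r'<a data-cid="(.*?)" data-eid="qd_G55" href=".*?" target="_blank" title="首發時間:.*?章節字數:.*?">(.*?)</a>', tent)
for m in reg:
    print m.group(1)  # //read.qidian.com/chapter/abc
    print m.group(2)  # 第一章 開端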
Of course, there are a few smaller details to get right as well!

1.

        for i in url2:
            book_url = 'http:' + i
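
The 'http:' prefix is needed because the scraped href values are protocol-relative (they begin with //). A slightly more robust alternative, sketched here with the Python 2 standard library (the final script does not use it), is urljoin:

from urlparse import urljoin

href = '//book.qidian.com/info/12345'        # hypothetical protocol-relative link
print urljoin('http://f.qidian.com/', href)  # http://book.qidian.com/info/12345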

2.

            for m in reg:
                # print m.group(1)
                page_url = "http:" + m.group(1)
                page_name = m.group(2)

3.

content7 = soup.find("div", class_="read-content j_readContent").get_text('\n').encode('utf-8')
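
get_text('\n') joins the text of every child node with a newline, which keeps the chapter's paragraph breaks; encode('utf-8') turns the resulting unicode into bytes for writing. A tiny example with made-up HTML imitating the real read-content div:

# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup

html = '<div class="read-content j_readContent"><p>第一段</p><p>第二段</p></div>'
soup = BeautifulSoup(html, 'html.parser')
print soup.find("div", class_="read-content j_readContent").get_text('\n')
# 第一段
# 第二段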

4.

                with open(name + '.txt','a') as f:
                    f.write(page_name + '\n' + content7 + '\n')
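
Opening the file in append mode ('a') adds each chapter to the end of the same <novel name>.txt, so chapters accumulate in crawl order. One caveat the script does not handle: rerunning it appends the same chapters again. A small sketch of truncating first (the file name here is hypothetical):

import os

name = 'some_novel'  # hypothetical novel title
if os.path.exists(name + '.txt'):
    os.remove(name + '.txt')  # start fresh so a rerun does not duplicate chapters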
Finally, the complete code:

# -*- coding:utf-8 -*-

from bs4 import BeautifulSoup
import urllib2
import re
import time

# One User-Agent header, reused for every request below
user_agent = "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:49.0) Gecko/20100101 Firefox/49.0"
headers = {'User-Agent': user_agent}

for p in range(1, 2):  # only the first listing page; widen the range to crawl more
    url = 'http://f.qidian.com/all?size=-1&sign=-1&tag=-1&chanId=-1&subCateId=-1&orderId=&update=-1&page=%d&month=-1&style=1&action=-1' % p
    request = urllib2.Request(url, headers=headers)
    html = urllib2.urlopen(request).read()
    soup = BeautifulSoup(html, 'html.parser', from_encoding='utf-8')
    content1 = soup.find("div", class_="all-book-list").find_all('h4')  # one <h4> per novel

    for k in content1:
        name = k.find('a').get_text(strip=True)  # novel title, used as the output file name
        print name
        k = str(k)
        url2 = re.findall(r'.*?<a.*?href="(.*?)".*?>', k)  # link to the novel's page

        for i in url2:
            book_url = 'http:' + i  # hrefs are protocol-relative, so prepend the scheme
            #print book_url
            request = urllib2.Request(book_url, headers=headers)
            html3 = urllib2.urlopen(request).read()
            soup = BeautifulSoup(html3, 'html.parser')
            tent = str(soup.find_all("ul", class_="cf"))  # chapter lists, flattened for the regex
            reg = re.finditer(r'<a data-cid="(.*?)" data-eid="qd_G55" href=".*?" target="_blank" title="首發時間:.*?章節字數:.*?">(.*?)</a>', tent)

            for m in reg:
                # print m.group(1)
                page_url = "http:" + m.group(1)  # chapter URL, also protocol-relative
                page_name = m.group(2)           # chapter title
                print page_name
                request = urllib2.Request(page_url, headers=headers)
                html4 = urllib2.urlopen(request).read()
                soup = BeautifulSoup(html4, 'html.parser', from_encoding='utf-8')
                content7 = soup.find("div", class_="read-content j_readContent").get_text('\n').encode('utf-8')

                with open(name + '.txt', 'a') as f:  # append each chapter to <novel name>.txt
                    f.write(page_name + '\n' + content7 + '\n')

                print 'OK1'  # simple progress marker

                time.sleep(1)  # throttle requests to be polite to the server



Of course, this only crawls a single page; widen the range in the outer for loop to fetch more.