5. Distributed Crawler Study: BeautifulSoup4

The BeautifulSoup4 Library

Like lxml, BeautifulSoup is an HTML/XML parser whose main job is parsing HTML/XML and extracting data from it.
The difference: lxml only traverses the document locally, while BeautifulSoup is based on the HTML DOM (Document Object Model); it loads the entire document and builds the full DOM tree, so both its time and memory overhead are much larger and its performance is lower than lxml's.
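A minimal sketch of the parser choice (assuming only that bs4 is installed; 'html.parser' is the standard-library fallback for when lxml is unavailable):

```python
from bs4 import BeautifulSoup

html = "<html><body><p class='title'>The Dormouse's story</p></body></html>"

# The second argument picks the underlying parser: 'lxml' is the fast
# third-party option used in the rest of this post; 'html.parser' ships
# with Python and needs nothing extra installed.
soup = BeautifulSoup(html, 'html.parser')
print(soup.p.string)    # The Dormouse's story
print(soup.p['class'])  # ['title']  (class values come back as a list)
```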
Basic usage of BeautifulSoup4

from bs4 import BeautifulSoup  # import the library
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html, 'lxml')  # use the lxml parser
print(soup)

The four common BeautifulSoup4 objects
BeautifulSoup converts a complex HTML document into a tree structure in which every node is a Python object. All of these objects fall into four classes:

  1. Tag
  2. NavigableString
  3. BeautifulSoup
  4. Comment
    1. Tag:
    A Tag is simply one of the tags in the HTML document. You can fetch a tag easily with soup plus the tag name; the resulting objects have type bs4.element.Tag. Note, however, that this lookup returns only the first tag in the entire document that matches.
    2. NavigableString:
    Once you have a tag, you can get the text inside it via tag.string.
from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html, 'lxml')  # use the lxml parser
print(soup.p)        # <p class="title" name="dromouse"><b>The Dormouse's story</b></p>
print(soup.p.string) # The Dormouse's story
print(type(soup.p.string))  # <class 'bs4.element.NavigableString'>
print(type(soup.p))  # <class 'bs4.element.Tag'>
print(soup.p.name)   # tag name: p
print(soup.p.attrs)  # tag attributes: {'class': ['title'], 'name': 'dromouse'}
print(soup.p['class'])     # get an attribute: ['title']
print(soup.p.get('class')) # get an attribute: ['title']
soup.p['class'] = 'new'    # modify an attribute
print(soup.p)

3. BeautifulSoup:
The BeautifulSoup object represents the entire content of a document. Most of the time you can treat it as a Tag object: it supports most of the methods described for traversing and searching the document tree.
4. Comment:
Tag, NavigableString and BeautifulSoup cover almost everything you will encounter in an HTML or XML document, but a few special objects remain. The one most likely to cause surprises is the comment part of a document. The Comment object is a special kind of NavigableString.
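For instance, using the `<a id="link1">` tag from the snippet above, whose only child is a comment, a minimal sketch:

```python
from bs4 import BeautifulSoup, Comment

html = '<a class="sister" id="link1"><!-- Elsie --></a>'
soup = BeautifulSoup(html, 'html.parser')
text = soup.a.string       # the tag's only child is the comment
print(type(text))          # <class 'bs4.element.Comment'>
# Comment subclasses NavigableString, so check the type before
# treating .string as visible page text:
if isinstance(text, Comment):
    print('comment node, not visible text:', text.strip())
```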
Traversing the document tree
contents and children:
contents: returns a list of all child nodes.
children: returns an iterator over all child nodes.
strings and stripped_strings
strings: if a tag contains more than one string, use .strings to loop over them.
stripped_strings: the strings may include a lot of whitespace and blank lines; stripped_strings removes the extra whitespace.

from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html, 'lxml')  # use the lxml parser
head_tag = soup.head
print(head_tag.contents)  # [<title>The Dormouse's story</title>] (a list)
print(head_tag.children)  # <list_iterator object at 0x00B71FD0> (an iterator)
body_tag = soup.body
# for i in body_tag.strings: # print every string inside the body tag
#     print(i)
for i in body_tag.stripped_strings: # print every string inside body, whitespace stripped
    print(i)
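The contents/children difference matters when you need indexing: .children is a lazy iterator, so wrap it in list() first. A tiny sketch:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<head><title>story</title></head>', 'html.parser')
head = soup.head
print(head.contents)        # [<title>story</title>], already a list
print(list(head.children))  # same nodes, materialized from the iterator
```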

Searching the document tree
The find and find_all methods:
find returns as soon as it locates the first tag satisfying the conditions, so it yields a single element. find_all collects every tag that satisfies the conditions and returns them all as a list.

from bs4 import BeautifulSoup
html = """
<table class="tablelist" cellpadding="0" cellspacing="0">
    <tbody>
        <tr class="h">
            <td class="l" width="374">職位名稱</td>
            <td>職位類別</td>
            <td>人數</td>
            <td>地點</td>
            <td>發佈時間</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=33824&keywords=python&tid=87&lid=2218">22989-金融雲區塊鏈高級研發工程師(深圳)</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=29938&keywords=python&tid=87&lid=2218">22989-金融雲高級後臺開發</a></td>
            <td>技術類</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31236&keywords=python&tid=87&lid=2218">SNG16-騰訊音樂運營開發工程師(深圳)</a></td>
            <td>技術類</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31235&keywords=python&tid=87&lid=2218">SNG16-騰訊音樂業務運維工程師(深圳)</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34531&keywords=python&tid=87&lid=2218">TEG03-高級研發工程師(深圳)</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34532&keywords=python&tid=87&lid=2218">TEG03-高級圖像算法研發工程師(深圳)</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31648&keywords=python&tid=87&lid=2218">TEG11-高級AI開發工程師(深圳)</a></td>
            <td>技術類</td>
            <td>4</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32218&keywords=python&tid=87&lid=2218">15851-後臺開發工程師</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32217&keywords=python&tid=87&lid=2218">15851-後臺開發工程師</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a id="test" class="test" target='_blank' href="position_detail.php?id=34511&keywords=python&tid=87&lid=2218">SNG11-高級業務運維工程師(深圳)</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
    </tbody>
</table>
"""
soup = BeautifulSoup(html,'lxml')
#1. get all the tr tags
# trs = soup.find_all('tr')
# for tr in trs:
#     print(tr)
#     print('-'*50)

#2. get the second tr tag
# tr = soup.find_all('tr', limit=2) # first two matches, returned as a list
# print(tr[1])

#3. get every tr tag whose class equals "even"
# trs = soup.find_all('tr',class_='even')
# trs = soup.find_all('tr',attrs={'class':'even'})
# for tr in trs:
#     print(tr)
#     print('-'*50)

#4. extract the a tags whose id equals "test" and class equals "test"
# a = soup.find_all('a',attrs={'id':'test','class':'test'})
# for i in a:
#     print(i)

#5. get the href attribute of every a tag
# alist = soup.find_all('a')
# for a in alist:
#     # print(a['href']) # option 1
#     # print('-'*50)
#     # print(a.get('href')) # option 2
#     href = a.attrs['href'] # option 3
#     print(href)

#6. get all the job information
trs = soup.find_all('tr')[1:]
for tr in trs:
    tds = tr.find_all('td')
    name = tds[0].string
    print(name)
# a second approach
trs = soup.find_all('tr')[1:]
for tr in trs:
    infos = list(tr.stripped_strings)
    print(infos)
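A hypothetical third approach: zip each data row against the header row to get dicts keyed by column name (the toy table and field names below are made up for illustration):

```python
from bs4 import BeautifulSoup

html = """
<table>
  <tr><td>position</td><td>headcount</td></tr>
  <tr><td>engineer</td><td>1</td></tr>
  <tr><td>designer</td><td>2</td></tr>
</table>
"""
soup = BeautifulSoup(html, 'html.parser')
# The first row is the header; the remaining rows are data.
header, *rows = [list(tr.stripped_strings) for tr in soup.find_all('tr')]
records = [dict(zip(header, row)) for row in rows]
print(records)
# [{'position': 'engineer', 'headcount': '1'}, {'position': 'designer', 'headcount': '2'}]
```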

The select method
Sometimes CSS selectors are the more convenient way to search. To use CSS-selector syntax, call the select method. Some commonly used selector patterns:

  1. By tag name: print(soup.select('a'))
  2. By class name: print(soup.select('.sister'))
  3. By id: print(soup.select('#link1'))
  4. Combined lookup: print(soup.select('p #link1'))
  5. By attribute: print(soup.select('a[href="http://*****"]'))
  6. Getting the text content: print(soup.select('title')[0].get_text())
    select exercise
from bs4 import BeautifulSoup
html = """
<table class="tablelist" cellpadding="0" cellspacing="0">
    <tbody>
        <tr class="h">
            <td class="l" width="374">職位名稱</td>
            <td>職位類別</td>
            <td>人數</td>
            <td>地點</td>
            <td>發佈時間</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=33824&keywords=python&tid=87&lid=2218">22989-金融雲區塊鏈高級研發工程師(深圳)</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=29938&keywords=python&tid=87&lid=2218">22989-金融雲高級後臺開發</a></td>
            <td>技術類</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31236&keywords=python&tid=87&lid=2218">SNG16-騰訊音樂運營開發工程師(深圳)</a></td>
            <td>技術類</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31235&keywords=python&tid=87&lid=2218">SNG16-騰訊音樂業務運維工程師(深圳)</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34531&keywords=python&tid=87&lid=2218">TEG03-高級研發工程師(深圳)</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34532&keywords=python&tid=87&lid=2218">TEG03-高級圖像算法研發工程師(深圳)</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31648&keywords=python&tid=87&lid=2218">TEG11-高級AI開發工程師(深圳)</a></td>
            <td>技術類</td>
            <td>4</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32218&keywords=python&tid=87&lid=2218">15851-後臺開發工程師</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32217&keywords=python&tid=87&lid=2218">15851-後臺開發工程師</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a id="test" class="test" target='_blank' href="position_detail.php?id=34511&keywords=python&tid=87&lid=2218">SNG11-高級業務運維工程師(深圳)</a></td>
            <td>技術類</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
    </tbody>
</table>
"""
soup = BeautifulSoup(html,'lxml')
#1. get all the tr tags
# trs = soup.select('tr') # a list
# print(trs)

#2. get the second tr tag
# tr = soup.select('tr')[1]
# print(tr)

#3. get every tr tag whose class equals "even"
# trs = soup.select('.even')
# trs = soup.select('tr[class="even"]')
# print(trs)

#4. get the href attribute of every a tag
# alist = soup.select('a')
# for a in alist:
#     print(a['href'])

#5. get all the job information
trs = soup.select('tr')[1:]
for tr in trs:
    infos = list(tr.stripped_strings)
    print(infos)

Crawling the Douban Top 250

import requests
from bs4 import BeautifulSoup

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36'
}

def get_detail_urls(url):
    res = requests.get(url, headers=headers)
    # print(res)
    html = res.text
    # parse the HTML and collect the detail-page URLs
    soup = BeautifulSoup(html, 'lxml')
    # print(soup)
    lis = soup.find('ol', class_='grid_view').find_all('li')
    detail_urls = []
    for li in lis:
        detail_url = li.find('a')['href']
        # print(detail_url)
        detail_urls.append(detail_url)
    return detail_urls

def parse_detail_url(url,f):
    res = requests.get(url, headers=headers)
    # print(res)
    html = res.text
    # parse the detail-page HTML
    soup = BeautifulSoup(html, 'lxml')
    movie_name = list(soup.find('div', id='content').find('h1').stripped_strings)  # movie title and year
    movie_name = join_list(movie_name)
    # print(movie_name)
    # director
    director = list(soup.find('div', id='info').find('span').find('span', class_='attrs').stripped_strings)
    director = join_list(director)
    # print(director)
    # screenwriter
    screenwriter = list(soup.find('div', id='info').find_all('span')[3].find('span', class_='attrs').stripped_strings)
    screenwriter = join_list(screenwriter)
    # print(screenwriter)
    # starring actors
    actor = list(soup.find('span', class_='actor').find('span', class_='attrs').stripped_strings)
    actor = join_list(actor)
    # print(actor)
    # rating
    score = soup.find('strong', class_='ll rating_num').string
    score = join_list(score)
    # print(score)
    f.write('title:%s,director:%s,screenwriter:%s,starring:%s,rating:%s\n' % (movie_name,director,screenwriter,actor,score))

def join_list(l):
    return ''.join(l)
    
def main():
    base_url = 'https://movie.douban.com/top250?start={}&filter='
    with open('top250.txt','a',encoding='utf-8') as f:
        for x in range(0, 250, 25):  # start=0,25,...,225: ten pages of 25 movies each
            url = base_url.format(x)
            detail_urls = get_detail_urls(url)
            for detail_url in detail_urls:
                # crawl the detail-page content
                parse_detail_url(detail_url,f)
if __name__ == '__main__':
    main()
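A hypothetical tweak if the results should later open in Excel or pandas: write CSV rows instead of hand-formatted lines. In this sketch io.StringIO stands in for the real file, and the sample row is illustrative; parse_detail_url would call writer.writerow([...]) instead of f.write(...).

```python
import csv
import io

buf = io.StringIO()  # replace with open('top250.csv', 'w', newline='', encoding='utf-8')
writer = csv.writer(buf)
writer.writerow(['title', 'director', 'screenwriter', 'starring', 'score'])  # header row
writer.writerow(['The Shawshank Redemption', 'Frank Darabont', 'Frank Darabont', 'Tim Robbins', '9.7'])
print(buf.getvalue())
```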

Crawling free proxy IPs from Kuaidaili

Note: throttle the crawl; requesting too fast may get your local IP banned.

import requests
from bs4 import BeautifulSoup
import time

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36',
}
def get_free_ips(url):
    res = requests.get(url,headers=headers)
    html = res.text
    soup = BeautifulSoup(html,'lxml')
    trs = soup.find('table', attrs={'class':'table table-bordered table-striped'}).find('tbody').find_all('tr') # a list
    ips = []
    for tr in trs:
        free_ips = list(tr.stripped_strings)
        # print(free_ips)
        ips.append(free_ips)
    return ips
def save_ips(ip,f):
    f.write('ip:%s,port:%s,anonymity:%s,type:%s,location:%s,response time:%s,last verified:%s\n' % (ip[0], ip[1], ip[2], ip[3], ip[4], ip[5], ip[6]))
def main():
    base_url = 'https://www.kuaidaili.com/free/inha/{}/'
    with open('free_ip.txt','a',encoding='utf-8') as f:
        for x in range(1,11):
            url = base_url.format(x)
            print(url)
            time.sleep(3)  # throttle: crawling too fast may get your local IP banned
            ips = get_free_ips(url)
            for ip in ips:
                save_ips(ip,f)
if __name__ == '__main__':
    main()
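Once rows are saved, they can feed the proxies parameter of requests. A sketch with a hypothetical helper (to_proxies and the sample row are my own names; real rows would come from get_free_ips above):

```python
def to_proxies(row):
    """Turn a scraped [ip, port, ...] row into the mapping requests expects."""
    proxy = 'http://{}:{}'.format(row[0], row[1])
    return {'http': proxy, 'https': proxy}

row = ['1.2.3.4', '8080', 'high anonymity', 'HTTP', 'Beijing', '2s', '2017-11-24']
print(to_proxies(row))
# {'http': 'http://1.2.3.4:8080', 'https': 'http://1.2.3.4:8080'}
# Then e.g.: requests.get(url, proxies=to_proxies(row), timeout=5)
```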