5. Distributed Crawler Study: BeautifulSoup4

The BeautifulSoup4 library

Like lxml, BeautifulSoup is an HTML/XML parser, and its main job is likewise parsing HTML/XML documents and extracting data from them.
The difference: lxml only traverses the document locally, while BeautifulSoup is based on the HTML DOM (Document Object Model). It loads the entire document and parses the full DOM tree, so its time and memory overhead are much larger, and its performance is therefore lower than lxml's.
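To make the trade-off concrete, here is a minimal sketch (not from the original text) showing that the parser is just a constructor argument: Python's built-in 'html.parser' needs no extra install, while 'lxml' requires the lxml package but is generally faster on large documents:

```python
from bs4 import BeautifulSoup

html = "<p class='title'><b>The Dormouse's story</b></p>"

# Built-in parser: no third-party dependency, but slower on large documents
soup_builtin = BeautifulSoup(html, "html.parser")

# lxml parser: requires `pip install lxml`; usually the fastest choice
soup_lxml = BeautifulSoup(html, "lxml")

# For well-formed input both parsers build the same tree
print(soup_builtin.b.string)  # The Dormouse's story
print(soup_lxml.b.string)     # The Dormouse's story
```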
Basic usage of BeautifulSoup4

from bs4 import BeautifulSoup  # import the library
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html, 'lxml')  # use the lxml parser
print(soup)

The four commonly used BeautifulSoup4 objects
BeautifulSoup converts a complex HTML document into a complex tree structure. Every node is a Python object, and all of these objects fall into four types:

  1. Tag
  2. NavigableString
  3. BeautifulSoup
  4. Comment
    1. Tag:
    A Tag is, put simply, one of the tags in the HTML. We can conveniently fetch a tag's content with soup followed by the tag name; these objects have type bs4.element.Tag. Note, however, that this lookup returns only the first matching tag in the entire document.
    2. NavigableString:
    Once you have a tag, if you also want the text inside it, you can get it through tag.string.
from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html, 'lxml')  # use the lxml parser
print(soup.p)        #<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
print(soup.p.string) #The Dormouse's story
print(type(soup.p.string))  #<class 'bs4.element.NavigableString'>
print(type(soup.p))  #<class 'bs4.element.Tag'>
print(soup.p.name)   #the tag name: p
print(soup.p.attrs)   #the tag's attributes: {'class': ['title'], 'name': 'dromouse'}
print(soup.p['class'])   #get an attribute: ['title']
print(soup.p.get('class'))   #get an attribute: ['title']
soup.p['class'] = 'new'     #modify an attribute
print(soup.p)

3. BeautifulSoup:
The BeautifulSoup object represents the entire content of a document. Most of the time you can treat it as a Tag object, and it supports most of the methods described under traversing and searching the document tree.
4. Comment:
Tag, NavigableString, and BeautifulSoup cover almost everything in an HTML or XML document, but a few special objects remain. The part most likely to cause surprises is a document's comments. A Comment object is a special kind of NavigableString.
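A minimal sketch of the Comment case, reusing the first <a> tag from the "three little sisters" HTML above: its content is an HTML comment, and .string hands it back as a Comment object with the <!-- --> markers already stripped:

```python
from bs4 import BeautifulSoup, Comment

html = '<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>'
soup = BeautifulSoup(html, 'lxml')

comment = soup.a.string
print(comment)        # prints " Elsie " (the <!-- --> markers are gone)
print(type(comment))  # <class 'bs4.element.Comment'>

# Comment subclasses NavigableString, so an isinstance check lets you
# skip comments when extracting visible text
print(isinstance(comment, Comment))  # True
```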
Traversing the document tree
contents and children:
contents: returns all child nodes as a list.
children: returns all child nodes as an iterator.
strings and stripped_strings
strings: if a tag contains multiple strings, you can loop over them with .strings.
stripped_strings: the strings produced above may contain many extra spaces or blank lines; stripped_strings removes that surplus whitespace.

from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html, 'lxml')  # use the lxml parser
head_tag = soup.head
print(head_tag.contents)  #[<title>The Dormouse's story</title>] a list
print(head_tag.children)  #<list_iterator object at 0x00B71FD0> an iterator
body_tag = soup.body
# for i in body_tag.strings: # print every string inside the body tag
#     print(i)
for i in body_tag.stripped_strings: # print every string inside the body tag, whitespace stripped
    print(i)

Searching the document tree
The find and find_all methods:
find returns as soon as it locates the first tag that satisfies the conditions, so it yields a single element. find_all selects every tag that satisfies the conditions and returns them all as a list.

from bs4 import BeautifulSoup
html = """
<table class="tablelist" cellpadding="0" cellspacing="0">
    <tbody>
        <tr class="h">
            <td class="l" width="374">职位名称</td>
            <td>职位类别</td>
            <td>人数</td>
            <td>地点</td>
            <td>发布时间</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=33824&keywords=python&tid=87&lid=2218">22989-金融云区块链高级研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=29938&keywords=python&tid=87&lid=2218">22989-金融云高级后台开发</a></td>
            <td>技术类</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31236&keywords=python&tid=87&lid=2218">SNG16-腾讯音乐运营开发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31235&keywords=python&tid=87&lid=2218">SNG16-腾讯音乐业务运维工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34531&keywords=python&tid=87&lid=2218">TEG03-高级研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34532&keywords=python&tid=87&lid=2218">TEG03-高级图像算法研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31648&keywords=python&tid=87&lid=2218">TEG11-高级AI开发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>4</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32218&keywords=python&tid=87&lid=2218">15851-后台开发工程师</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32217&keywords=python&tid=87&lid=2218">15851-后台开发工程师</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a id="test" class="test" target='_blank' href="position_detail.php?id=34511&keywords=python&tid=87&lid=2218">SNG11-高级业务运维工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
    </tbody>
</table>
"""
soup = BeautifulSoup(html,'lxml')
#1. Get all the tr tags
# trs = soup.find_all('tr')
# for tr in trs:
#     print(tr)
#     print('-'*50)

#2. Get the second tr tag
# tr = soup.find_all('tr',limit=2) #the first two, returned as a list
# print(tr[1])

#3. Get all tr tags whose class equals "even"
# trs = soup.find_all('tr',class_='even')
# trs = soup.find_all('tr',attrs={'class':'even'})
# for tr in trs:
#     print(tr)
#     print('-'*50)

#4. Extract every a tag whose id equals "test" and whose class equals "test"
# a = soup.find_all('a',attrs={'id':'test','class':'test'})
# for i in a:
#     print(i)

#5. Get the href attribute of every a tag
# alist = soup.find_all('a')
# for a in alist:
#     # print(a['href']) #1
#     # print('-'*50)
#     # print(a.get('href')) #2
#     href = a.attrs['href'] #3
#     print(href)

#6. Get all the job information
trs = soup.find_all('tr')[1:]
for tr in trs:
    tds = tr.find_all('td')
    name = tds[0].string
    print(name)
#Second approach
trs = soup.find_all('tr')[1:]
for tr in trs:
    infos = list(tr.stripped_strings)
    print(infos)

The select method
Sometimes it is more convenient to use CSS selectors. To use CSS selector syntax, call the select method. Some commonly used selector patterns:

  1. Find by tag name: print(soup.select('a'))
  2. Find by class name: print(soup.select('.sister'))
  3. Find by id: print(soup.select('#link1'))
  4. Combined lookup: print(soup.select('p #link1'))
  5. Find by attribute: print(soup.select('a[href="http://*****"]'))
  6. Get the text content: print(soup.select('title')[0].get_text())
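The six patterns above can be tried against the "three little sisters" snippet from earlier (slightly modified here so every link has visible text); a minimal sketch:

```python
from bs4 import BeautifulSoup

html = """
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
"""
soup = BeautifulSoup(html, 'lxml')

print(len(soup.select('a')))          # 3, by tag name
print(len(soup.select('.sister')))    # 3, by class name
print(soup.select('#link1')[0].get_text())    # Elsie, by id
print(soup.select('p #link2')[0].get_text())  # Lacie, descendant combinator
print(soup.select('a[href="http://example.com/tillie"]')[0].get_text())  # Tillie, by attribute
```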
    select exercises
from bs4 import BeautifulSoup
html = """
<table class="tablelist" cellpadding="0" cellspacing="0">
    <tbody>
        <tr class="h">
            <td class="l" width="374">职位名称</td>
            <td>职位类别</td>
            <td>人数</td>
            <td>地点</td>
            <td>发布时间</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=33824&keywords=python&tid=87&lid=2218">22989-金融云区块链高级研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=29938&keywords=python&tid=87&lid=2218">22989-金融云高级后台开发</a></td>
            <td>技术类</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31236&keywords=python&tid=87&lid=2218">SNG16-腾讯音乐运营开发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31235&keywords=python&tid=87&lid=2218">SNG16-腾讯音乐业务运维工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34531&keywords=python&tid=87&lid=2218">TEG03-高级研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34532&keywords=python&tid=87&lid=2218">TEG03-高级图像算法研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31648&keywords=python&tid=87&lid=2218">TEG11-高级AI开发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>4</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32218&keywords=python&tid=87&lid=2218">15851-后台开发工程师</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32217&keywords=python&tid=87&lid=2218">15851-后台开发工程师</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a id="test" class="test" target='_blank' href="position_detail.php?id=34511&keywords=python&tid=87&lid=2218">SNG11-高级业务运维工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
    </tbody>
</table>
"""
soup = BeautifulSoup(html,'lxml')
#1. Get all the tr tags
# trs = soup.select('tr') #a list
# print(trs)

#2. Get the second tr tag
# tr = soup.select('tr')[1]
# print(tr)

#3. Get all tr tags whose class equals "even"
# trs = soup.select('.even')
# trs = soup.select('tr[class="even"]')
# print(trs)

#4. Get the href attribute of every a tag
# alist = soup.select('a')
# for a in alist:
#     print(a['href'])

#5. Get all the job information
trs = soup.select('tr')[1:]
for tr in trs:
    infos = list(tr.stripped_strings)
    print(infos)

Crawling the Douban Top 250

import requests
from bs4 import BeautifulSoup

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36'
}

def get_detail_urls(url):
    res = requests.get(url, headers=headers)
    # print(res)
    html = res.text
    # Parse the HTML and collect the detail page urls
    soup = BeautifulSoup(html, 'lxml')
    # print(soup)
    lis = soup.find('ol', class_='grid_view').find_all('li')
    detail_urls = []
    for li in lis:
        detail_url = li.find('a')['href']
        # print(detail_url)
        detail_urls.append(detail_url)
    return detail_urls

def parse_detail_url(url,f):
    res = requests.get(url, headers=headers)
    # print(res)
    html = res.text
    # Parse the detail page HTML
    soup = BeautifulSoup(html, 'lxml')
    movie_name = list(soup.find('div', id='content').find('h1').stripped_strings)  # movie title and year
    movie_name = join_list(movie_name)
    # print(movie_name)
    # director
    director = list(soup.find('div', id='info').find('span').find('span', class_='attrs').stripped_strings)
    director = join_list(director)
    # print(director)
    # screenwriter
    screenwriter = list(soup.find('div', id='info').find_all('span')[3].find('span', class_='attrs').stripped_strings)
    screenwriter = join_list(screenwriter)
    # print(screenwriter)
    # starring actors
    actor = list(soup.find('span', class_='actor').find('span', class_='attrs').stripped_strings)
    actor = join_list(actor)
    # print(actor)
    # rating
    score = soup.find('strong', class_='ll rating_num').string
    score = join_list(score)
    # print(score)
    f.write('Title: %s, Director: %s, Screenwriter: %s, Starring: %s, Rating: %s\n' % (movie_name, director, screenwriter, actor, score))

def join_list(l):
    return ''.join(l)
    
def main():
    base_url = 'https://movie.douban.com/top250?start={}&filter='
    with open('top250.txt','a',encoding='utf-8') as f:
        for x in range(0, 250, 25):  # 10 pages: start = 0, 25, ..., 225
            url = base_url.format(x)
            detail_urls = get_detail_urls(url)
            for detail_url in detail_urls:
                #crawl the content of the detail page
                parse_detail_url(detail_url,f)
if __name__ == '__main__':
    main()

Crawling Kuaidaili free proxy IPs

Note: throttle the crawl. Fetching too fast may get your machine's IP banned from the site.

import requests
from bs4 import BeautifulSoup
import time

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36',
}
def get_free_ips(url):
    res = requests.get(url,headers=headers)
    html = res.text
    soup = BeautifulSoup(html,'lxml')
    trs = soup.find('table', attrs={'class':'table table-bordered table-striped'}).find('tbody').find_all('tr') #a list
    ips = []
    for tr in trs:
        free_ips = list(tr.stripped_strings)
        # print(free_ips)
        ips.append(free_ips)
    return ips
def save_ips(ip,f):
    f.write('ip:%s, port:%s, anonymity:%s, type:%s, location:%s, response time:%s, last verified:%s\n' % (ip[0], ip[1], ip[2], ip[3], ip[4], ip[5], ip[6]))
def main():
    base_url = 'https://www.kuaidaili.com/free/inha/{}/'
    with open('free_ip.txt','a',encoding='utf-8') as f:
        for x in range(1,11):
            url = base_url.format(x)
            print(url)
            time.sleep(3)  #throttle: crawling too fast may get your machine's IP banned
            ips = get_free_ips(url)
            for ip in ips:
                save_ips(ip,f)
if __name__ == '__main__':
    main()