Scraping Memes from Zhihu

Task: scrape the memes posted under the Zhihu question "Which memes can make you happy in a second?"

Fetching the Page

The key idea here is Ajax loading: as you browse, the page offers a "scroll down for more" behavior instead of loading everything at once, and Zhihu answers work this way.
Open the question page in Chrome and press F12 to open the developer tools. Select the Network tab and filter by XHR (Ajax requests show up as the XHR type), then keep scrolling until you capture a request whose name starts with answers?.
Let's examine this request. Click the Headers tab:
You can see it is a GET request with base_url = "https://www.zhihu.com/api/v4/questions/302378021/answers?", followed by several parameters such as include and limit. These parameters are listed under the Query String Parameters section.
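As a quick sanity check, the standard library's urlencode turns such a parameter dict into exactly this kind of query string (a trimmed-down dict is used here for illustration):

```python
from urllib.parse import urlencode

# a trimmed-down parameter dict for illustration
params = {'limit': 5, 'offset': 0, 'sort_by': 'default'}
base_url = "https://www.zhihu.com/api/v4/questions/302378021/answers?"
url = base_url + urlencode(params)
print(url)
# https://www.zhihu.com/api/v4/questions/302378021/answers?limit=5&offset=0&sort_by=default
```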
Now we can write the request code. As usual, set your own referer and user-agent in the headers; both can be copied from the Request Headers section.

import requests
from urllib.parse import urlencode

def get_page(offset):
    params = {
        'include' : "data[*].is_normal,admin_closed_comment,reward_info,is_collapsed,annotation_action,annotation_detail,collapse_reason,is_sticky,collapsed_by,suggest_edit,comment_count,can_comment,content,editable_content,voteup_count,reshipment_settings,comment_permission,created_time,updated_time,review_info,relevant_info,question,excerpt,relationship.is_authorized,is_author,voting,is_thanked,is_nothelp,is_labeled,is_recognized,paid_info,paid_info_content;data[*].mark_infos[*].url;data[*].author.follower_count,badge[*].topics",
        'limit' : 5,
        'offset' : offset,
        'platform': 'desktop',
        'sort_by': 'default',
    }
    base_url = "https://www.zhihu.com/api/v4/questions/302378021/answers?"
    url = base_url + urlencode(params) # build the request URL
    headers = {
        'referer' : "https://www.zhihu.com/question/302378021",
        'user-agent' : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36",
    }
    try:
        response = requests.get(url, headers=headers) # send the GET request
        if response.status_code == 200:
            return response.json() # return the parsed JSON
    except requests.ConnectionError:
        return None
        
get_page(5)

Data Analysis

Select the Preview tab: under data there are 5 entries, each one an answer. Expand one and you can see that the answer body sits in its content field.
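Based on the Preview tab, the response can be sketched as a dict with a data list whose items hold the answer HTML in content (the values below are made up for illustration):

```python
# minimal stand-in for the JSON the answers endpoint returns
# (field names as seen in the Preview tab; values are made up)
resp = {
    "data": [
        {"content": "<p>answer HTML with figure/img tags</p>"},
        {"content": "<p>another answer</p>"},
    ],
}
for item in resp["data"]:
    print(item["content"])
```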
Copy the content value out for a closer look. For brevity, only the first figure element is shown:

<p>更新一下</p><p class="ztext-empty-paragraph"><br/></p>
<figure data-size="normal">
<noscript><img src="https://pic3.zhimg.com/50/v2-3a5f0b335d4b3e55724b78cc7f2fb0b2_hd.gif" data-rawwidth="240" data-rawheight="240" data-size="normal" data-thumbnail="https://pic3.zhimg.com/50/v2-3a5f0b335d4b3e55724b78cc7f2fb0b2_hd.jpg" class="content_image" width="240"/></noscript><img src="data:image/svg+xml;utf8,&lt;svg xmlns=&#39;http://www.w3.org/2000/svg&#39; width=&#39;240&#39; height=&#39;240&#39;&gt;&lt;/svg&gt;" data-rawwidth="240" data-rawheight="240" data-size="normal" data-thumbnail="https://pic3.zhimg.com/50/v2-3a5f0b335d4b3e55724b78cc7f2fb0b2_hd.jpg" class="content_image lazy" width="240" data-actualsrc="https://pic3.zhimg.com/50/v2-3a5f0b335d4b3e55724b78cc7f2fb0b2_hd.gif"/>
</figure>

This figure tag contains four URLs:

img src="https://pic3.zhimg.com/50/v2-3a5f0b335d4b3e55724b78cc7f2fb0b2_hd.gif"
data-thumbnail="https://pic3.zhimg.com/50/v2-3a5f0b335d4b3e55724b78cc7f2fb0b2_hd.jpg"
data-thumbnail="https://pic3.zhimg.com/50/v2-3a5f0b335d4b3e55724b78cc7f2fb0b2_hd.jpg"
data-actualsrc="https://pic3.zhimg.com/50/v2-3a5f0b335d4b3e55724b78cc7f2fb0b2_hd.gif"

data-thumbnail should be the thumbnail (ignore it); we just need the values of the img src or data-actualsrc attributes. This time we'll extract them with a regular expression.
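As a sanity check, the pattern (written as a raw string; the single `.` after `data-actualsrc=` matches the opening quote) can be run against a stripped-down version of the figure snippet above:

```python
import re

# stripped-down img tag from the figure snippet above
html = ('<img class="content_image lazy" width="240" '
        'data-actualsrc="https://pic3.zhimg.com/50/'
        'v2-3a5f0b335d4b3e55724b78cc7f2fb0b2_hd.gif"/>')
# capture everything after the opening quote up to jpg or gif
images = re.findall(r'data-actualsrc=.(https:\S*?(?:jpg|gif))', html)
print(images)
# ['https://pic3.zhimg.com/50/v2-3a5f0b335d4b3e55724b78cc7f2fb0b2_hd.gif']
```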

import requests
from urllib.parse import urlencode
import re

def get_page(offset):
    params = {
        'include' : "data[*].is_normal,admin_closed_comment,reward_info,is_collapsed,annotation_action,annotation_detail,collapse_reason,is_sticky,collapsed_by,suggest_edit,comment_count,can_comment,content,editable_content,voteup_count,reshipment_settings,comment_permission,created_time,updated_time,review_info,relevant_info,question,excerpt,relationship.is_authorized,is_author,voting,is_thanked,is_nothelp,is_labeled,is_recognized,paid_info,paid_info_content;data[*].mark_infos[*].url;data[*].author.follower_count,badge[*].topics",
        'limit' : 5,
        'offset' : offset,
        'platform': 'desktop',
        'sort_by': 'default',
    }
    base_url = "https://www.zhihu.com/api/v4/questions/302378021/answers?"
    url = base_url + urlencode(params)
    headers = {
        'referer' : "https://www.zhihu.com/question/302378021",
        'user-agent' : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36",
    }
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.json()
    except requests.ConnectionError:
        return None

def get_image(json):
    if json and json.get('data'):  # guard against a None response
        for item in json.get('data'):
            content = item.get('content')
            # extract the data-actualsrc URLs (only .jpg here;
            # change jpg to gif if you want the animated versions)
            images = re.findall(r'data-actualsrc=.(https:\S*?jpg)', content)
            for image in images:
                yield image
json = get_page(5)
for image in get_image(json):
    print(image)

Data Storage

Storing the data is straightforward: send a request to each scraped URL and save the response body.

import os

def save_image(image, cnt):
    path = './images'
    os.makedirs(path, exist_ok=True)  # create the folder on first run
    file_path = '{0}/{1}.jpg'.format(path, cnt)
    response = requests.get(image)
    with open(file_path, 'wb') as f:
        f.write(response.content)

So far we have only analyzed a single request, which covers 5 answers. Look at the next and previous fields under paging in the Preview tab: they hold the URLs of the next and previous requests. Only the offset value changes, 10 and 0 respectively, while this request used offset 5. The pattern is obvious: offset grows by 5 each time.
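Instead of hard-coding the step, the next offset can also be read straight out of the paging next URL (the URL below is simplified for illustration):

```python
from urllib.parse import urlparse, parse_qs

# simplified version of the paging 'next' URL from the Preview tab
next_url = ("https://www.zhihu.com/api/v4/questions/302378021/answers"
            "?limit=5&offset=10&sort_by=default")
offset = int(parse_qs(urlparse(next_url).query)['offset'][0])
print(offset)  # 10
```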
Full code:

import requests
from urllib.parse import urlencode
import re
import time
import os

def get_page(offset):
    params = {
        'include' : "data[*].is_normal,admin_closed_comment,reward_info,is_collapsed,annotation_action,annotation_detail,collapse_reason,is_sticky,collapsed_by,suggest_edit,comment_count,can_comment,content,editable_content,voteup_count,reshipment_settings,comment_permission,created_time,updated_time,review_info,relevant_info,question,excerpt,relationship.is_authorized,is_author,voting,is_thanked,is_nothelp,is_labeled,is_recognized,paid_info,paid_info_content;data[*].mark_infos[*].url;data[*].author.follower_count,badge[*].topics",
        'limit' : 5,
        'offset' : offset,
        'platform': 'desktop',
        'sort_by': 'default',
    }
    base_url = "https://www.zhihu.com/api/v4/questions/302378021/answers?"
    url = base_url + urlencode(params) # build the request URL
    headers = {
        'referer' : "https://www.zhihu.com/question/302378021",
        'user-agent' : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36",
    }
    try:
        response = requests.get(url, headers=headers) # send the GET request
        if response.status_code == 200:
            return response.json() # return the parsed JSON
    except requests.ConnectionError:
        return None

def get_image(json):
    if json and json.get('data'):  # guard against a None response
        for item in json.get('data'):
            content = item.get('content')
            # extract the data-actualsrc URLs (only .jpg here;
            # change jpg to gif if you want the animated versions)
            images = re.findall(r'data-actualsrc=.(https:\S*?jpg)', content)
            for image in images:
                yield image

def save_image(image, cnt):
    path = './images'
    os.makedirs(path, exist_ok=True)  # create the folder on first run
    file_path = '{0}/{1}.jpg'.format(path, cnt)
    response = requests.get(image)
    with open(file_path, 'wb') as f:
        f.write(response.content)

if __name__ == "__main__":
    cnt = 0
    for i in range(100):
        json = get_page(5 * i)  # offset grows by 5 per request
        for image in get_image(json):
            save_image(image, cnt)
            cnt += 1
            if cnt % 100 == 0:
                print("Saved %d memes so far" % cnt)
        time.sleep(1)  # be polite: pause between requests