You may want to read the previous post first: Crawler Case 1: Basics

Preface

During autumn recruiting I had the chance to take an online test from an e-commerce company, and I found the problems fairly comprehensive, so I decided to write a tutorial series around them. To be fair, I will not name the company.
The Challenge Site

The Challenge

- Click into the second challenge
- The problem description and the answer-submission page
- The web page provided by the challenge
How to Analyze

- From the previous article we already know how to collect all the numbers on a single page. So how do we collect numbers spread across multiple pages? Click through a few pages and watch what changes.
The URL

- We notice that everything except the page number after page= stays the same, so we only need to generate the URL for every page from this pattern, and then fetch each generated URL the same way we fetched a single page.
Generating the 1000 Page URLs

- Code

```python
# range(1, 1001) yields the integers 1 through 1000 (the stop value is exclusive)
for page in range(1, 1001):
    url = 'http://glidedsky.com/level/web/crawler-basic-2?page=%s' % page
    print(url)
```
- Running output
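As an aside, the same URL list can be built with the standard library's urlencode instead of %-formatting the query string by hand; this is just an equivalent sketch, not what the challenge requires:

```python
from urllib.parse import urlencode

# Build the query string for each page with urlencode rather than %-formatting
base = 'http://glidedsky.com/level/web/crawler-basic-2'
urls = ['%s?%s' % (base, urlencode({'page': page})) for page in range(1, 1001)]

print(urls[0])    # http://glidedsky.com/level/web/crawler-basic-2?page=1
print(len(urls))  # 1000
```

urlencode also takes care of escaping if a parameter ever contains special characters, which hand-formatting does not.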
Wrapping the Single-Page Calculation in a Function

- See the previous article for the analysis of this part; the code is as follows:

```python
import requests
from bs4 import BeautifulSoup

# Request the given URL and return the sum of the numbers on that page
def calculate(url):
    headers = {
        'Host': 'glidedsky.com',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
        'Accept-Encoding': 'gzip, deflate',
        'Referer': 'http://glidedsky.com/level/crawler-basic-1',
        'Connection': 'keep-alive',
        'Cookie': 'XSRF-TOKEN=eyJpdiI6Im5ORlZabitEd1d3UHdMdldQT0s3VUE9PSIsInZhbHVlIjoiMDJsNjhCUUR5clhFU25cL2FJQ21MQWpyd2V2a0RDVUh5UURPZGJ2OXNLdmFMZ0N5amVZWVg0Q3FQYkZIN1RhZDMiLCJtYWMiOiI2YTljYmE4OGJlNTQ2YTFlMGI3Yzk4MTNlNmYyMzZlM2JiMmU2NjJhOThiNGVmNDQxMjVlN2MzNjI1ZDk1Yjg4In0%3D; glidedsky_session=eyJpdiI6InBwTFhZbjZLR1BOUk1tbE1wZDBPMnc9PSIsInZhbHVlIjoiMlZsVjNJbVwvWUpIRlJxWHR2YmUzNUUyUitaQ2V2VDNsSTBaVmthdkhxTDhOMnBHK0hMQk5KNjlXd2JPdEh4eUsiLCJtYWMiOiJjZjgxNGI2YmY5YWJkN2Y5NGFiN2ZhY2U3ZWEyNzBmNjZlODE4YTNkYTFjZWJlNGIzMzc4NjliNjMwN2I4OTc1In0%3D; footprints=eyJpdiI6IkNNUWpGQ3FkWFwvQWQ0bU4yRDVlREpBPT0iLCJ2YWx1ZSI6IlhLeE9DZXgwbHRGYmZYa2pmYlE3YTAwY1hzOW1JNEdjY1lKZU04bUZocTkrb3NQNUppbmU3R1wveTRFejJwbU01IiwibWFjIjoiYzg5MDBjYjllZjM5OTA3Y2UwNmM1MmExYjczZmFhYzJhZjYzN2Y5YjVhOTRiNTIzNzczOGI5YWRkYmUyNmE2MiJ9; Hm_lvt_020fbaad6104bcddd1db12d6b78812f6=1573610190; Hm_lpvt_020fbaad6104bcddd1db12d6b78812f6=1573611039; _ga=GA1.2.157021610.1573610191; _gid=GA1.2.1296302792.1573610191; remember_web_59ba36addc2b2f9401580f014c7f58ea4e30989d=eyJpdiI6IlwvNmJXbTR1K0lENWM2ZGZGdEJBamJnPT0iLCJ2YWx1ZSI6InJkUmE3V2ExXC9JNURpWWZZbElHVXA0aDNxdHcxOFwvR2tMUkpISzdVN1FCRjROOE9iZklcL29Lb3RBY0dmVjNOV0M5K1k4bVp3dTZvb2hiYXVDK0w5YTFFUUpEbW85b3o0UEVsUkN4UHpoSnU1Q2NKYlZCXC9UbkY3cUhqVlg1bW1hXC9wXC96SndtR3NJOVNKN3BYaXFscmhvUzhBMGNwdDNySHZidGJzUUNocFowWT0iLCJtYWMiOiIxMzE5NmE0Y2IzMmI5YmNmZjUwMGYyOGNiYTM5OGIwOTVhMGY4Y2E3MDFlMDg4Y2I5Y2VhOWZjY2Q2YzE0MWQ2In0%3D',
        'Upgrade-Insecure-Requests': '1',
        'Pragma': 'no-cache',
        'Cache-Control': 'no-cache'
    }
    response = requests.get(url, headers=headers)
    # Build the parser object
    soup = BeautifulSoup(response.text, 'lxml')
    # Grab every tag matching the target attributes
    divs = soup.find_all('div', attrs={'class': 'col-md-1'})
    # Convert the tag texts to integers with a list comprehension
    nums = [int(div.text.strip()) for div in divs]
    sum_num = sum(nums)
    return sum_num
```
- Testing with page 2: the returned result matches the manual calculation, so it is correct.
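The extraction step can also be sanity-checked offline. Below is a minimal sketch that runs the same find_all / list-comprehension logic against a made-up HTML snippet shaped like the challenge page (the class name col-md-1 is taken from the code above; the numbers are invented):

```python
from bs4 import BeautifulSoup

# A fabricated snippet mimicking the structure of the challenge page
html = '''
<div class="row">
  <div class="col-md-1">12</div>
  <div class="col-md-1"> 34 </div>
  <div class="col-md-1">5</div>
</div>
'''

# html.parser is used here so the snippet runs without the lxml dependency
soup = BeautifulSoup(html, 'html.parser')
divs = soup.find_all('div', attrs={'class': 'col-md-1'})
nums = [int(div.text.strip()) for div in divs]
print(sum(nums))  # 51
```

Checking the parsing logic on a fixed snippet like this makes it easy to tell a parsing bug apart from a network or login problem.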
Final Code

- We only need to call the wrapped function inside the for loop and accumulate what it returns:

```python
import requests
from bs4 import BeautifulSoup

# Request the given URL and return the sum of the numbers on that page
def calculate(url):
    headers = {
        'Host': 'glidedsky.com',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
        'Accept-Encoding': 'gzip, deflate',
        'Referer': 'http://glidedsky.com/level/crawler-basic-1',
        'Connection': 'keep-alive',
        'Cookie': 'XSRF-TOKEN=eyJpdiI6Im5ORlZabitEd1d3UHdMdldQT0s3VUE9PSIsInZhbHVlIjoiMDJsNjhCUUR5clhFU25cL2FJQ21MQWpyd2V2a0RDVUh5UURPZGJ2OXNLdmFMZ0N5amVZWVg0Q3FQYkZIN1RhZDMiLCJtYWMiOiI2YTljYmE4OGJlNTQ2YTFlMGI3Yzk4MTNlNmYyMzZlM2JiMmU2NjJhOThiNGVmNDQxMjVlN2MzNjI1ZDk1Yjg4In0%3D; glidedsky_session=eyJpdiI6InBwTFhZbjZLR1BOUk1tbE1wZDBPMnc9PSIsInZhbHVlIjoiMlZsVjNJbVwvWUpIRlJxWHR2YmUzNUUyUitaQ2V2VDNsSTBaVmthdkhxTDhOMnBHK0hMQk5KNjlXd2JPdEh4eUsiLCJtYWMiOiJjZjgxNGI2YmY5YWJkN2Y5NGFiN2ZhY2U3ZWEyNzBmNjZlODE4YTNkYTFjZWJlNGIzMzc4NjliNjMwN2I4OTc1In0%3D; footprints=eyJpdiI6IkNNUWpGQ3FkWFwvQWQ0bU4yRDVlREpBPT0iLCJ2YWx1ZSI6IlhLeE9DZXgwbHRGYmZYa2pmYlE3YTAwY1hzOW1JNEdjY1lKZU04bUZocTkrb3NQNUppbmU3R1wveTRFejJwbU01IiwibWFjIjoiYzg5MDBjYjllZjM5OTA3Y2UwNmM1MmExYjczZmFhYzJhZjYzN2Y5YjVhOTRiNTIzNzczOGI5YWRkYmUyNmE2MiJ9; Hm_lvt_020fbaad6104bcddd1db12d6b78812f6=1573610190; Hm_lpvt_020fbaad6104bcddd1db12d6b78812f6=1573611039; _ga=GA1.2.157021610.1573610191; _gid=GA1.2.1296302792.1573610191; remember_web_59ba36addc2b2f9401580f014c7f58ea4e30989d=eyJpdiI6IlwvNmJXbTR1K0lENWM2ZGZGdEJBamJnPT0iLCJ2YWx1ZSI6InJkUmE3V2ExXC9JNURpWWZZbElHVXA0aDNxdHcxOFwvR2tMUkpISzdVN1FCRjROOE9iZklcL29Lb3RBY0dmVjNOV0M5K1k4bVp3dTZvb2hiYXVDK0w5YTFFUUpEbW85b3o0UEVsUkN4UHpoSnU1Q2NKYlZCXC9UbkY3cUhqVlg1bW1hXC9wXC96SndtR3NJOVNKN3BYaXFscmhvUzhBMGNwdDNySHZidGJzUUNocFowWT0iLCJtYWMiOiIxMzE5NmE0Y2IzMmI5YmNmZjUwMGYyOGNiYTM5OGIwOTVhMGY4Y2E3MDFlMDg4Y2I5Y2VhOWZjY2Q2YzE0MWQ2In0%3D',
        'Upgrade-Insecure-Requests': '1',
        'Pragma': 'no-cache',
        'Cache-Control': 'no-cache'
    }
    response = requests.get(url, headers=headers)
    # Build the parser object
    soup = BeautifulSoup(response.text, 'lxml')
    # Grab every tag matching the target attributes
    divs = soup.find_all('div', attrs={'class': 'col-md-1'})
    # Convert the tag texts to integers with a list comprehension
    nums = [int(div.text.strip()) for div in divs]
    sum_num = sum(nums)
    return sum_num

# range(1, 1001) yields the integers 1 through 1000 (the stop value is exclusive)
sum_num = 0
for page in range(1, 1001):
    url = 'http://glidedsky.com/level/web/crawler-basic-2?page=%s' % page
    sum_num += calculate(url)
print('Total across all 1000 pages:', sum_num)
```
- The computed result
- Submitting the result
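The loop over pages can also be refactored so it is testable without touching the site: pass the page-sum function in as a parameter and optionally throttle between requests. This is a hypothetical refactor of the loop above (crawl_all and the delay parameter are my own names, not part of the challenge):

```python
import time

def crawl_all(calculate, pages=1000, delay=0.0):
    """Sum calculate(url) over all page URLs, sleeping `delay` seconds between requests."""
    total = 0
    for page in range(1, pages + 1):
        url = 'http://glidedsky.com/level/web/crawler-basic-2?page=%s' % page
        total += calculate(url)
        time.sleep(delay)  # be polite to the server when delay > 0
    return total

# Offline check with a fake calculate() that just returns the page number:
# pages 1..10 sum to 55
print(crawl_all(lambda url: int(url.split('=')[1]), pages=10))  # 55
```

In real use you would pass the calculate function defined earlier and a small delay such as 0.5; the fake lambda here exists only to show the loop's arithmetic is right.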
GitHub source: https://github.com/coderyyn/spider_example

Related Articles

My personal blog is at www.coderyyn.cn,
where I occasionally share posts on crawlers, algorithms, environment setup, and other interesting topics.
Everyone is welcome to learn and exchange ideas together.
Please credit the source when reposting.