CSDN neither rate-limits the view-count endpoint per IP, nor deduplicates repeated visits from the same IP within a time window when counting views. That is why this program can inflate view counts.
GitHub repo: https://github.com/hailinli/accessCsdn
1. Approach
1) Parse all article links from a list page such as https://blog.csdn.net/linhai1028/article/list/2
2) Visit each of these articles in turn.
CSDN counts another view from the same visitor only after 30-odd seconds have passed, so we run the visits on a timer, hitting each article again every 30-plus seconds. Pushing the view count past ten thousand is then just a matter of time.
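Step 1 can be sketched on its own with lxml, using a tiny HTML fragment shaped like a CSDN list page (the real page markup may differ; this is an illustrative sketch):

```python
from lxml import etree

# A small fragment mimicking the structure of a CSDN article-list page.
html_text = '''
<div class="article-list">
  <a href="https://blog.csdn.net/linhai1028/article/details/1">Post 1</a>
  <a href="https://blog.csdn.net/linhai1028/article/details/2">Post 2</a>
</div>
'''

html = etree.HTML(html_text)
# Same XPath the script uses: every <a href> under the article-list div.
links = html.xpath('//div[@class="article-list"]//a/@href')
print(links)
```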
2. Implementation
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 18/6/24 8:39 PM
# @Author  : Lihailin<[email protected]>
# @Desc    :
# @File    : accessCsdn.py
# @Software: PyCharm
import time

from lxml import etree

import crawBase


class AccessCsdn(crawBase.CrawBase):
    '''
    Visit CSDN article pages to bump their view counts.
    '''

    def getArticles(self, url):
        '''
        Parse all article links from one list page, e.g.
        https://blog.csdn.net/linhai1028/article/list/2
        :param url: list-page URL
        :return: list of article URLs
        '''
        c = self.get(url)
        html = etree.HTML(c)
        return html.xpath('//div[@class="article-list"]//a/@href')

    def getAllArticles(self, urlBase):
        '''
        Walk the numbered list pages (.../article/list/1, /2, ...) until
        an empty page is reached, collecting every article link.
        :param urlBase: blog home URL, e.g. https://blog.csdn.net/linhai1028/
        :return: list of article URLs (may contain duplicates)
        '''
        i = 1
        urls = []
        url = urlBase
        while True:
            t = self.getArticles(url)
            if len(t) == 0:
                break
            urls += t
            i += 1
            url = urlBase + '/article/list/%s' % i
        return urls

    def run(self, url, sec):
        '''
        Repeatedly visit every article under the blog, forever.
        :param url: blog home URL
        :param sec: seconds to sleep between visits
        :return:
        '''
        urls = list(set(self.getAllArticles(url)))  # dedupe
        while True:
            for u in urls:
                self.get(u)
                time.sleep(sec)


if __name__ == '__main__':
    url = "https://blog.csdn.net/linhai1028/"
    accessCsdn = AccessCsdn()
    accessCsdn.run(url, 40)
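The script imports `crawBase` from the repo, which is not shown in the post. A minimal sketch of what that base class might look like, assuming it simply wraps `requests` with a browser-like User-Agent (the version in the repo may differ):

```python
import requests


class CrawBase:
    '''Minimal crawler base: GET a URL and return the page body as text.'''

    def __init__(self):
        self.session = requests.Session()
        # Send a browser-like User-Agent so requests are not rejected outright.
        self.session.headers.update({
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
        })

    def get(self, url, timeout=10):
        '''Fetch url and return the response text, or '' on any failure.'''
        try:
            resp = self.session.get(url, timeout=timeout)
            resp.raise_for_status()
            return resp.text
        except requests.RequestException:
            return ''
```

Subclasses such as `AccessCsdn` then only need to call `self.get(url)` and parse the returned HTML.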
3. Usage
git clone git@github.com:hailinli/accessCsdn.git
cd accessCsdn
python accessCsdn.py
Environment
- Python 3
- requests 2.18
- lxml 4.2
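The two third-party dependencies can be installed with pip (the version pins match the post; newer versions generally work too):

```shell
pip install "requests>=2.18" "lxml>=4.2"
```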