Scraping Free Proxy IPs from Xicidaili

Background

  • Another scraping project of mine needs proxies, so this script grabs some proxy IPs, saves them to a text file, and picks one at random with a simple validity check, making them easy to use from my other code.
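The core idea is simple: pick a random "ip:port" line from the saved file and turn it into the `proxies` dict that requests expects. A minimal sketch (the helper name `pick_proxy` and the sample lines are made up for illustration):

```python
import random

def pick_proxy(lines):
    # Choose one saved "ip:port" line at random and build the
    # proxies mapping that requests takes via its proxies= parameter.
    ip_item = random.choice(lines).strip()
    return {'https': 'https://' + ip_item}

proxy = pick_proxy(['1.2.3.4:8080\n', '5.6.7.8:3128\n'])
```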

Environment

  • Win10, Python 3.6, PyCharm

The Code

import requests
from bs4 import BeautifulSoup
import time
import random

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.146 Safari/537.36'}

def xici_ip(page):
    for num_page in range(1,page+1):
        url_part = "http://www.xicidaili.com/wn/" # Xicidaili's listing of domestic HTTPS proxies
        url = url_part + str(num_page)  # build the URL for each listing page
        r = requests.get(url, headers=headers)
        if r.status_code == 200:
            soup = BeautifulSoup(r.text,'lxml')
            trs = soup.find_all('tr')
            for i in range(1,len(trs)):
                tr = trs[i]
                tds = tr.find_all('td')
                ip_item = tds[1].text + ':' + tds[2].text
                # print('Page ' + str(num_page) + ', entry ' + str(i) + ': ' + ip_item)
                with open(r'path\get_xici_ip.txt', 'a', encoding='utf-8') as f:
                    f.write(ip_item + '\n')
                # time.sleep(1)
    return 'Saved successfully'  # return after the page loop, so every requested page gets scraped

def get_ip():
    with open(r'path\get_xici_ip.txt', 'r', encoding='utf-8') as f:
        lines = f.readlines()
        return random.choice(lines)

def check_ip():
    # requests expects lowercase scheme keys in the proxies dict
    proxies = {'https': 'https://' + get_ip().strip()}
    try:
        r = requests.get('https://httpbin.org/ip', headers=headers, proxies=proxies, timeout=10)  # https URL, so the https proxy is actually used
        if r.status_code == 200:
            return proxies
    except Exception as e:
        print(e)

def main():
    xici_ip(1)  # scrape page 1; each page lists 100 proxies
    proxies = check_ip()  # check_ip already catches its own exceptions and returns None on failure
    if proxies is None:
        proxies = check_ip()  # first pick failed; retry once with another random proxy
    return proxies

if __name__ == '__main__':
    main()
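To see what the tr/td extraction does without hitting the site, here is a self-contained parse of a tiny table in the same shape as Xicidaili's listing (the HTML snippet is made up; `html.parser` is used here so the example needs no lxml install):

```python
from bs4 import BeautifulSoup

# Made-up HTML mimicking the column layout of the Xicidaili table:
# td[1] holds the IP, td[2] holds the port.
html = """
<table>
  <tr><th>#</th><th>IP</th><th>Port</th></tr>
  <tr><td>1</td><td>1.2.3.4</td><td>8080</td></tr>
  <tr><td>2</td><td>5.6.7.8</td><td>3128</td></tr>
</table>
"""
soup = BeautifulSoup(html, 'html.parser')
trs = soup.find_all('tr')
ip_items = []
for tr in trs[1:]:  # skip the header row, just as the scraper does
    tds = tr.find_all('td')
    ip_items.append(tds[1].text + ':' + tds[2].text)
# ip_items is now ['1.2.3.4:8080', '5.6.7.8:3128']
```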


End

If you spot any problems, please don't hesitate to point them out.
