[Python Odds and Ends] [Beginner Exercise] A Simple Scraper: Stay Strong, Wuhan

Target site: https://news.163.com/special/epidemic/
Task: scrape the day's basic epidemic figures for each region
Audience: readers who know basic Python and want a small hands-on project

The code follows.

First, pull in requests, the scraping workhorse, and pandas, the data-handling helper:

import requests
import pandas as pd

The function below fetches the JSON data:

def get_page(url):
    # Fill in your own browser's User-Agent string here.
    headers = {'User-Agent': 'XXXXXXX'}
    r = requests.get(url, headers=headers)
    r.encoding = r.apparent_encoding  # guess the encoding from the response body
    return r.json()
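get_page calls r.json() unconditionally, so an error page from the server raises an exception. A minimal sketch of a more defensive decode; FakeResponse is a hypothetical stand-in for requests.Response so the example runs without a network:

```python
import json

class FakeResponse:
    # Hypothetical stand-in for requests.Response, just for this demo.
    def __init__(self, status_code, text):
        self.status_code = status_code
        self.text = text

    def json(self):
        return json.loads(self.text)

def get_json_or_none(resp):
    # Only try to decode when the server answered 200; bail out on bad bodies.
    if resp.status_code != 200:
        return None
    try:
        return resp.json()
    except json.JSONDecodeError:
        return None

print(get_json_or_none(FakeResponse(200, '{"ok": true}')))  # {'ok': True}
print(get_json_or_none(FakeResponse(503, 'Service Busy')))  # None
```

The same pattern drops straight into get_page by checking r.status_code before returning r.json().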

Next we need to study the response structure a little, since the goal is to extract the useful fields:

def parse_page(data):
    records = []
    # areaTree[0] is China; its children are provinces, each listing its cities.
    china = data['data']['areaTree'][0]['children']
    for province in china:
        for city in province['children']:
            today = city['today']
            records.append({
                'province': province['name'],
                'city': city['name'],
                'confirm': today['confirm'],
                'dead': today['dead'],
                'heal': today['heal'],
                'suspect': today['suspect'],
                'lastUpdateTime': city['lastUpdateTime'],
            })
    return records
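The field names above come from the live API; a hand-made sample in the same shape (the numbers are invented for illustration, not real figures) shows what the flattening produces:

```python
# Invented sample mirroring the assumed data.areaTree layout.
sample = {'data': {'areaTree': [{
    'name': 'China',
    'children': [{
        'name': 'Hubei',
        'children': [{
            'name': 'Wuhan',
            'today': {'confirm': 10, 'dead': 1, 'heal': 2, 'suspect': 0},
            'lastUpdateTime': '2020-02-01 10:00:00',
        }],
    }],
}]}}

rows = []
for province in sample['data']['areaTree'][0]['children']:
    for city in province['children']:
        rows.append({'province': province['name'], 'city': city['name'],
                     **city['today'], 'lastUpdateTime': city['lastUpdateTime']})

print(rows[0]['province'], rows[0]['city'], rows[0]['confirm'])  # Hubei Wuhan 10
```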

Then save the extracted records to a file:

def save_file(records):
    df = pd.DataFrame(records)
    # Fix the column order before writing.
    order = ['province', 'city', 'confirm', 'dead', 'heal', 'suspect', 'lastUpdateTime']
    df = df[order]
    df.to_csv('pachong.csv', index=True, header=True)
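A quick round-trip check confirms the column order survives the CSV write; the row is the invented sample from above, and the file goes to a temporary directory rather than pachong.csv:

```python
import os
import tempfile

import pandas as pd

rows = [{'province': 'Hubei', 'city': 'Wuhan', 'confirm': 10, 'dead': 1,
         'heal': 2, 'suspect': 0, 'lastUpdateTime': '2020-02-01 10:00:00'}]
order = ['province', 'city', 'confirm', 'dead', 'heal', 'suspect', 'lastUpdateTime']

df = pd.DataFrame(rows)[order]
path = os.path.join(tempfile.mkdtemp(), 'demo.csv')
df.to_csv(path, index=True, header=True)

# index_col=0 re-reads the row index that index=True wrote out.
back = pd.read_csv(path, index_col=0)
print(list(back.columns))
```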

Those were the functions; now run them:

url = "https://c.m.163.com/ug/api/wuhan/app/data/list-total?t=316639086783"
dataJson = get_page(url)
allData = parse_page(dataJson)
save_file(allData)
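The t=316639086783 query parameter looks like a cache-busting timestamp. A sketch of generating it fresh on each call; this is an assumption about the API, not documented behavior:

```python
import time

BASE = 'https://c.m.163.com/ug/api/wuhan/app/data/list-total'

def build_url():
    # Millisecond timestamp as a cache-buster; assumed, not documented by the API.
    return f'{BASE}?t={int(time.time() * 1000)}'

print(build_url())
```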

As usual, message me if you have any questions.
