What this covers:
- simulating a form submission with the requests library
- extracting web tables with the pandas library

Analyzing the target

The URL is https://www.ctic.org/crm?tdsourcetag=s_pctim_aiomsg
Opened in a browser it looks like this:
(screenshot: the search form)
After clicking View Summary, the target page appears:
(screenshot: the result table)
The target data lives at https://www.ctic.org/crm/?action=result, and the parameters we just selected are nowhere in that URL! Both the URL and the page changed, so it isn't AJAX either.
Trying to fetch the target page

Let's find out what actually happens when the View Summary button is clicked: right-click View Summary and choose Inspect:
(screenshot: the button's HTML)
The form is submitted as a POST request. Click View Summary, then open DevTools and look at the first entry in the Network tab:
(screenshot: the request in the Network tab)
Never mind the details for now; let's just fire off a POST and see what happens:
import requests

url = 'https://www.ctic.org/crm?tdsourcetag=s_pctim_aiomsg'
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/74.0.3729.131 Safari/537.36',
           'Host': 'www.ctic.org'}
data = {'_csrf': 'SjFKLWxVVkkaSRBYQWYYCA1TMG8iYR8ReUYcSj04Jh4EBzIdBGwmLw==',
        'CRMSearchForm[year]': '2011',
        'CRMSearchForm[format]': 'Acres',
        'CRMSearchForm[area]': 'County',
        'CRMSearchForm[region]': 'Midwest',
        'CRMSearchForm[state]': 'IL',
        'CRMSearchForm[county]': 'Adams',
        'CRMSearchForm[crop_type]': 'All',
        'summary': 'county'}
response = requests.post(url, data=data, headers=headers)
print(response.status_code)
Sure enough: 400.
……
Trying cookies
Honestly I'm not sure what cookies are exactly; I only know they maintain a session and should come from the first GET. Let's dig them out and have a look:
response1 = requests.get(url, headers=headers)
if response1.status_code == 200:
    cookies = response1.cookies
    print(cookies)
Output:
<RequestsCookieJar[<Cookie PHPSESSID=52asgghnqsntitqd7c8dqesgh6 for www.ctic.org/>, <Cookie _csrf=2571c72a4ca9699915ea4037b967827150715252de98ea2173b162fa376bad33s%3A32%3A%22TAhjwgNo5ElZzV55k3DMeFoc5TWrEmXj%22%3B for www.ctic.org/>]>
Drop it straight into the post and try:
response2 = requests.post(url, data=data, headers=headers, cookies=cookies)
print(response2.status_code)
Still 400.
That _csrf in the POST data looked suspicious from the start, and the inscrutable cookies happen to contain a _csrf as well! But the two _csrf values clearly have different structures, and swapping the _csrf in data for the one from cookies did not work either.
Then it clicked: the two _csrf values, though not equal, must be a matched pair. The data above came from the browser, while the cookies came from the Python program, so of course they don't match!
Open the browser's DevTools and Ctrl+F for _csrf, and there it is:
(screenshot: three matches for _csrf)
Three hits. The csrf_token on the line below the first one is obviously the _csrf carried in the POST data. The other two sit inside js functions; I never properly learned js, but it's still easy to tell that both fetch state names and county names via POST requests. Bingo! Two problems solved in one go.
To verify the guess, the plan is: use requests to fetch the HTML and cookies of the page before View Summary is clicked, take the csrf_token extracted from that HTML as the _csrf value in the View Summary POST data, and attach the cookies as well. That way the two _csrf values should form a matched pair:
from lxml import etree
response1 = requests.get(url, headers=headers)
cookies = response1.cookies
html = etree.HTML(response1.text)
csrf_token = html.xpath('/html/head/meta[3]/@content')[0]
data.update({'_csrf': csrf_token})
response2 = requests.post(url, data=data, headers=headers, cookies=cookies)
print(response2.status_code)
The output is 200. Chrome showed 302 because the server redirects to the result page; requests follows the redirect automatically and reports the final status, so this counts as a success too.
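If you want to actually see the 302 that Chrome reports, look at response.history: requests records every redirect hop there while following it. Here is a self-contained sketch with a toy local server standing in for the CRM site (the paths and body are made up):

```python
import http.server
import threading

import requests

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    """Mimics the CRM form: a POST to /start answers 302 -> /result."""
    def do_POST(self):
        self.send_response(302)
        self.send_header('Location', '/result')
        self.end_headers()

    def do_GET(self):
        body = b'result page'
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = http.server.HTTPServer(('127.0.0.1', 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Like a browser, requests turns the 302'd POST into a GET and follows it
r = requests.post(f'http://127.0.0.1:{server.server_port}/start',
                  data={'x': '1'})
print(r.status_code)                       # the final 200
print([h.status_code for h in r.history])  # the 302 hop Chrome shows
server.shutdown()
```

Passing allow_redirects=False to requests.post would instead return the raw 302 response directly.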
Trying pandas to extract the web table
Now that we have the target page's HTML, before collecting all the years, regions, state names, and county names, let's first test the table-extraction feature of pandas.read_html.
import pandas as pd
df = pd.read_html(response2.text)[0]
print(df)
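A note on the [0]: read_html scans the whole document and returns a list of DataFrames, one per &lt;table&gt; it finds, which is why we index into the result. A tiny offline illustration (the table contents are made up):

```python
from io import StringIO

import pandas as pd

html = StringIO('''
<table>
  <tr><th>Crop</th><th>Total_Planted_Acres</th></tr>
  <tr><td>Corn</td><td>1200</td></tr>
  <tr><td>Soybeans</td><td>800</td></tr>
</table>
''')

# read_html returns a list with one DataFrame per <table> in the input;
# header cells and numeric columns are inferred automatically
tables = pd.read_html(html)
df = tables[0]
print(len(tables))
print(df['Total_Planted_Acres'].sum())
```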
Preparing all the parameters
Next up: collecting every year, region, state name, and county name. Years and regions are hard-coded in the HTML, so XPath grabs them directly.
State and county names, according to the two js functions found earlier, have to be fetched with POST requests: states depend on the region and counties depend on the state, so two nested loops do the job.
# url_state and url_county are the two endpoints the js functions found
# earlier POST to; copy them out of DevTools (omitted here)
def new():
    session = requests.Session()
    response = session.get(url=url, headers=headers)
    html = etree.HTML(response.text)
    return session, html

session, html = new()
years = html.xpath('//*[@id="crmsearchform-year"]/option/text()')
regions = html.xpath('//*[@id="crmsearchform-region"]/option/text()')
_csrf = html.xpath('/html/head/meta[3]/@content')[0]
region_state = {}
state_county = {}
for region in regions:
    data = {'region': region, '_csrf': _csrf}
    response = session.post(url_state, data=data)
    # the endpoint returns a JSON-encoded HTML fragment of <option> tags
    html = etree.HTML(response.json())
    region_state[region] = {x: y for x, y in
                            zip(html.xpath('//option/@value'),
                                html.xpath('//option/text()'))}
    for state in region_state[region]:
        data = {'state': state, '_csrf': _csrf}
        response = session.post(url_county, data=data)
        html = etree.HTML(response.json())
        state_county[state] = html.xpath('//option/@value')
With requests.Session there is no need to manage cookies yourself at all. Convenient!
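To make that "no cookie management" point concrete: a Session owns one cookie jar for its whole lifetime, and every request prepared through it carries those cookies automatically. A small offline sketch (the cookie value is made up; normally the server sets it on the first GET):

```python
import requests

session = requests.Session()
# Set a cookie by hand just to show the jar; real values come from responses
session.cookies.set('PHPSESSID', 'example123', domain='www.ctic.org')

# Anything prepared through the session picks the cookie up automatically
prepared = session.prepare_request(
    requests.Request('POST', 'https://www.ctic.org/crm'))
print(prepared.headers['Cookie'])
```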
Then arrange every possible combination of year, region, state, and county into a csv file, so the POST data dicts can later be built straight from the csv:
remain = [[str(year), str(region), str(state), str(county)]
          for year in years for region in regions
          for state in region_state[region] for county in state_county[state]]
remain = pd.DataFrame(remain, columns=['CRMSearchForm[year]',
                                       'CRMSearchForm[region]',
                                       'CRMSearchForm[state]',
                                       'CRMSearchForm[county]'])
remain.to_csv('remain.csv', index=False)
# state names appear both as abbreviations and full names, so save a local copy too
import json
with open('region_state.json', 'w') as json_file:
    json.dump(region_state, json_file, indent=4)
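An aside on the trick used in the next section to skip work already done: concatenating two frames and calling drop_duplicates(keep=False) deletes every row that appears in both, which acts as a set difference when one frame's rows are a subset of the other's. A toy version with made-up rows (pd.concat is the non-deprecated spelling of the DataFrame.append used below):

```python
import pandas as pd

data = pd.DataFrame({'year':   ['2011', '2011', '2012'],
                     'county': ['Adams', 'Bond', 'Adams']})
done = pd.DataFrame({'year':   ['2011'],
                     'county': ['Bond']})

# Rows present in both frames become exact duplicates after concat;
# keep=False then drops every copy, leaving only the unscraped rows
remain = pd.concat([data, done]).drop_duplicates(keep=False)
print(remain['county'].tolist())
```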
The real run

import pyodbc

with open("region_state.json") as json_file:
    region_state = json.load(json_file)
data = pd.read_csv('remain.csv')
# read back the records that have already been scraped
cnxn = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
                      'DBQ=./ctic_crm.accdb')
crsr = cnxn.cursor()
crsr.execute('select Year_, Region, State, County from ctic_crm')
done = crsr.fetchall()
done = pd.DataFrame([list(x) for x in done], columns=['CRMSearchForm[year]',
                                                      'CRMSearchForm[region]',
                                                      'CRMSearchForm[state]',
                                                      'CRMSearchForm[county]'])
done['CRMSearchForm[year]'] = done['CRMSearchForm[year]'].astype('int64')
state2st = {y: x for z in region_state.values() for x, y in z.items()}
done['CRMSearchForm[state]'] = [state2st[x]
                                for x in done['CRMSearchForm[state]']]
# drop the combinations that have already been scraped
remain = data.append(done)
remain = remain.drop_duplicates(keep=False)
total = len(remain)
print(f'{total} left.\n')
del data
# %%
remain['CRMSearchForm[year]'] = remain['CRMSearchForm[year]'].astype('str')
columns = ['Crop',
           'Total_Planted_Acres',
           'Conservation_Tillage_No_Till',
           'Conservation_Tillage_Ridge_Till',
           'Conservation_Tillage_Mulch_Till',
           'Conservation_Tillage_Total',
           'Other_Tillage_Practices_Reduced_Till15_30_Residue',
           'Other_Tillage_Practices_Conventional_Till0_15_Residue']
fields = ['Year_', 'Units', 'Area', 'Region', 'State', 'County'] + columns
data = {'CRMSearchForm[format]': 'Acres',
        'CRMSearchForm[area]': 'County',
        'CRMSearchForm[crop_type]': 'All',
        'summary': 'county'}
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/74.0.3729.131 Safari/537.36',
           'Host': 'www.ctic.org',
           'Upgrade-Insecure-Requests': '1',
           'DNT': '1',
           'Connection': 'keep-alive'}
url = 'https://www.ctic.org/crm?tdsourcetag=s_pctim_aiomsg'
headers2 = headers.copy()
# note: dict.update returns None, so don't assign its result back
headers2.update({'Referer': url,
                 'Origin': 'https://www.ctic.org'})
import time

def new():
    session = requests.Session()
    response = session.get(url=url, headers=headers)
    html = etree.HTML(response.text)
    _csrf = html.xpath('/html/head/meta[3]/@content')[0]
    return session, _csrf

session, _csrf = new()
for _, row in remain.iterrows():
    temp = dict(row)
    data.update(temp)
    data.update({'_csrf': _csrf})
    while True:
        try:
            response = session.post(url, data=data, headers=headers2,
                                    timeout=15)
            break
        except Exception as e:
            # on any network error, rebuild the session (and its _csrf) and retry
            session.close()
            print(e)
            print('\nSleep 30s.\n')
            time.sleep(30)
            session, _csrf = new()
            data.update({'_csrf': _csrf})
    df = pd.read_html(response.text)[0].dropna(how='all')
    df.columns = columns
    df['Year_'] = int(temp['CRMSearchForm[year]'])
    df['Units'] = 'Acres'
    df['Area'] = 'County'
    df['Region'] = temp['CRMSearchForm[region]']
    df['State'] = region_state[temp['CRMSearchForm[region]']][temp['CRMSearchForm[state]']]
    df['County'] = temp['CRMSearchForm[county]']
    df = df.reindex(columns=fields)
    for record in df.itertuples(index=False):
        tuple_record = tuple(record)
        sql_insert = f'INSERT INTO ctic_crm VALUES {tuple_record}'
        sql_insert = sql_insert.replace(', nan,', ', null,')
        crsr.execute(sql_insert)
        crsr.commit()
    print(total, row.to_list())
    total -= 1
else:
    print('Done!')
crsr.close()
cnxn.close()
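One closing note: building the INSERT with an f-string works here, but it is brittle (the nan-to-null string replace, quoting if a value ever contains an apostrophe). The DB-API qmark placeholders that pyodbc supports let the driver bind values instead. A sketch using sqlite3, which shares the same '?' paramstyle (table layout trimmed to three columns for the demo):

```python
import sqlite3

cnxn = sqlite3.connect(':memory:')
crsr = cnxn.cursor()
crsr.execute('CREATE TABLE ctic_crm (Year_ INTEGER, County TEXT, Crop TEXT)')

# The driver handles quoting and None -> NULL; no string surgery needed
record = (2011, "O'Brien", None)  # apostrophe in the name, missing crop
crsr.execute('INSERT INTO ctic_crm VALUES (?, ?, ?)', record)
print(crsr.execute('SELECT * FROM ctic_crm').fetchall())
```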