2.1 Understanding HTTP Requests
2.1.1 What an HTTP Request Is
2.1.2 The Contents of an HTTP Request
1. Request method
2. Request headers
2.2 Crawler Basics: Getting Started with the Requests Library
2.2.1 Installing the Requests Library
2.2.2 Request Methods in the Requests Library
import requests
# GET: retrieve a page
response = requests.get('https://www.douban.com/')
# POST: submit data
requests.post('https://www.douban.com/')
2.2.3 The Response Object in the Requests Library
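As a minimal sketch of this section's topic (reusing the Douban URL and browser User-Agent string that appear later in the chapter), these are the most commonly used attributes of the Response object returned by requests.get():

```python
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
r = requests.get('https://www.douban.com/', headers=headers, timeout=3)

print(r.status_code)              # HTTP status code, e.g. 200
print(r.encoding)                 # text encoding guessed from the response headers
print(r.headers['Content-Type'])  # response headers behave like a dict
print(r.text[:100])               # body decoded to str using r.encoding
print(r.content[:100])            # raw body as bytes
```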
2.2.4 Response Status Codes
418: request rejected as a crawler (anti-scraping response)
200: normal access (OK)
import requests
url = 'https://www.douban.com/search'
r = requests.get(url)
# status code
code = r.status_code
print(code)
Because no custom request headers were set, the request was blocked by anti-scraping measures (Douban returns 418).
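Instead of comparing the code by hand, a common pattern (a sketch, not from the original notes) is to let Requests raise an exception on error codes:

```python
import requests

r = requests.get('https://www.douban.com/search')
if r.status_code == requests.codes.ok:   # requests.codes.ok == 200
    print('request succeeded')
else:
    # raise_for_status() raises HTTPError for 4xx/5xx codes such as 418
    r.raise_for_status()
```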
2.2.5 Customizing Request Headers
import requests
# headers: custom header fields that identify us as a browser
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
# target URL
url = 'https://www.douban.com/search'
# send the GET request with the custom headers
r = requests.get(url, headers=headers)
2.2.6 Redirects and Timeouts
# timeout=3: raise a Timeout exception if the server does not respond within 3 seconds
r = requests.get(url, headers=headers, timeout=3)
# r.history lists the responses for any redirects followed to reach the final page
r.history
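A sketch of handling both features together (the URL and the handling are illustrative; the http:// address of Douban redirects to https://):

```python
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
try:
    r = requests.get('http://www.douban.com/', headers=headers, timeout=3)
    # each intermediate hop (e.g. a 301 from http:// to https://) is kept in r.history
    print([hop.status_code for hop in r.history])
    print(r.url)  # the final URL after all redirects
except requests.exceptions.Timeout:
    print('no response within 3 seconds')
```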
2.2.7 Passing URL Parameters
import requests
# headers: custom header fields that identify us as a browser
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
# target URL
url = 'https://www.douban.com/search'
# query parameters to append to the URL
payload = {'q': 'python', 'cat': '1001'}
# timeout=3: raise a Timeout exception if no response within 3 seconds
r = requests.get(url, headers=headers, timeout=3, params=payload)
# r.url is the final URL with the encoded query string
url = r.url
print(url)
2.2.7.1 Changing the cat Parameter
1. Search all categories (omit cat)
import requests
# headers: custom header fields that identify us as a browser
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
# target URL
url = 'https://www.douban.com/search'
payload = {'q': 'python'}
# timeout=3: raise a Timeout exception if no response within 3 seconds
r = requests.get(url, headers=headers, timeout=3, params=payload)
url = r.url
print(url)
2. Search images only (cat=1025)
import requests
# headers: custom header fields that identify us as a browser
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
# target URL
url = 'https://www.douban.com/search'
payload = {'q': 'python', 'cat': '1025'}
# timeout=3: raise a Timeout exception if no response within 3 seconds
r = requests.get(url, headers=headers, timeout=3, params=payload)
url = r.url
print(url)
2.3 Crawler Basics: The Urllib Library
2.3.1 Introduction to the Urllib Library
2.3.2 Sending a GET Request
Without custom request headers, every request is blocked by anti-scraping measures.
2.3.3 Sending a GET Request that Mimics a Browser
As with the Requests library, custom headers must be defined for the request to succeed.
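A sketch with urllib, reusing the User-Agent string from the Requests examples: wrapping the URL in a Request object lets us attach headers before opening it.

```python
from urllib import request

url = 'https://www.douban.com/'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}

# attach the headers by building a Request object instead of passing a bare URL
req = request.Request(url, headers=headers)
print(req.get_header('User-agent'))  # urllib normalizes header-key capitalization

# opening it then works exactly like a bare urlopen(url) call:
# with request.urlopen(req, timeout=3) as resp:
#     html = resp.read().decode('utf-8')
```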
2.3.4 Sending a POST Request
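A minimal sketch for this section (the form fields reuse the search parameters from Section 2.2.7): urlopen() switches from GET to POST as soon as a bytes `data` argument is attached to the request.

```python
from urllib import parse, request

url = 'https://www.douban.com/search'
# the form fields must be URL-encoded and converted to bytes
form = {'q': 'python', 'cat': '1001'}
data = parse.urlencode(form).encode('utf-8')   # b'q=python&cat=1001'

req = request.Request(url, data=data)
print(req.get_method())  # attaching data switches the method to 'POST'

# sending it:
# with request.urlopen(req, timeout=3) as resp:
#     print(resp.status)
```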
2.3.5 URL Parsing
1. urlparse: splitting a URL into components
from urllib.parse import urlparse
# 1. urlparse: split a URL into its six components
# (use a distinct variable name so the urlparse function is not shadowed)
result = urlparse('https://www.douban.com/search?cat=1001&q=python')
print(result)
2. urlunparse: assembling a URL from components
# 2. urlunparse: assemble a URL from a six-part sequence
from urllib.parse import urlunparse
data = ['https', 'www.douban.com', '/search', '', 'cat=1001&q=python', '']
print(urlunparse(data))
3. urljoin: joining a base URL with a relative URL
# 3. urljoin: join a base URL with a relative URL
# (use a distinct variable name so the urljoin function is not shadowed)
from urllib.parse import urljoin
full_url = urljoin('https://www.douban.com', 'accounts/login')
print(full_url)
4. Complete code
# -*- coding: utf-8 -*-
from urllib.parse import urlparse
# 1. urlparse: split a URL into its six components
result = urlparse('https://www.douban.com/search?cat=1001&q=python')
print(result)
# ParseResult(scheme='https',
#             netloc='www.douban.com',
#             path='/search',
#             params='',
#             query='cat=1001&q=python',
#             fragment='')
# 2. urlunparse: assemble a URL from a six-part sequence
from urllib.parse import urlunparse
data = ['https', 'www.douban.com', '/search', '', 'cat=1001&q=python', '']
print(urlunparse(data))
# 3. urljoin: join a base URL with a relative URL
from urllib.parse import urljoin
full_url = urljoin('https://www.douban.com', 'accounts/login')
print(full_url)
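One more helper worth noting here (not covered in the list above): urllib.parse.urlencode builds a query string from a dict, which is exactly how the params= payloads in Section 2.2.7 end up encoded into the URL.

```python
from urllib.parse import urlencode

payload = {'cat': '1001', 'q': 'python'}
query = urlencode(payload)
print(query)  # cat=1001&q=python

full_url = 'https://www.douban.com/search?' + query
print(full_url)
```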