Max retries exceeded with URL in requests

An error I ran into while web scraping: Max retries exceeded with URL in requests

requests.exceptions.ConnectionError: HTTPSConnectionPool:
 Max retries exceeded with url: ××××××××××××××××××××××××××××××××
  (Caused by <class 'socket.gaierror'>: [Errno -2] Name or service not known)
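The `socket.gaierror` in the traceback means the hostname could not be resolved at the DNS level, before any HTTP traffic was sent at all. A minimal way to check that in isolation (this `host_resolves` helper is my own sketch, not from the post):

```python
import socket

def host_resolves(hostname):
    """Return True if DNS can resolve `hostname`.
    gaierror is exactly the error wrapped by the requests traceback above."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False
```

If this returns False for your target host, the problem is DNS (a typo in the URL, or no network), and no amount of HTTP-level retrying will help.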

My fix:

import requests

try:
    r = requests.get(ap)  # ap is the target URL
except requests.exceptions.ConnectionError as e:
    print(str(e))
except Exception:
    pass

Tested it myself; this works.
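A slightly more deliberate version of the same idea is to retry a few times with a pause before giving up, instead of swallowing the error. This is a sketch of my own (the `get_with_retries` helper is hypothetical, not from the post); it takes the fetch function as a parameter so it works with `requests.get`. Note that `requests.exceptions.ConnectionError` subclasses the built-in `OSError`, which is what the sketch catches:

```python
import time

def get_with_retries(fetch, url, attempts=3, delay=2.0):
    """Call fetch(url) (e.g. requests.get) up to `attempts` times,
    sleeping `delay` seconds after each failure before giving up.
    requests' ConnectionError subclasses OSError, so catching OSError
    covers it without importing requests here."""
    last_err = None
    for _ in range(attempts):
        try:
            return fetch(url)
        except OSError as e:
            last_err = e
            time.sleep(delay)
    raise last_err
```

Usage would be `r = get_with_retries(requests.get, ap)` — if the connection error is transient, a short pause is often all it takes.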

Other people's solutions — I haven't tried these yet, so I can't say how well they work:

import requests
from requests.adapters import HTTPAdapter
# On newer versions of requests, import from urllib3 directly:
# from urllib3.util.retry import Retry
from requests.packages.urllib3.util.retry import Retry

session = requests.Session()
# Retry failed connections up to 3 times, with exponential backoff
retry = Retry(connect=3, backoff_factor=0.5)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)

session.get(url)
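For reference, `backoff_factor` controls how long urllib3 sleeps between retries: per the urllib3 documentation the delay grows roughly as `backoff_factor * (2 ** previous_retries)` (the exact formula varies slightly between urllib3 versions, and some versions skip the sleep before the first retry). A quick sketch of the schedule:

```python
def backoff_schedule(backoff_factor, retries):
    """Approximate per-retry sleep in seconds, per urllib3's documented
    formula. Exact behavior differs slightly across urllib3 versions."""
    return [backoff_factor * (2 ** n) for n in range(retries)]

# backoff_factor=0.5 with 3 connect retries, as in the snippet above
print(backoff_schedule(0.5, 3))  # → [0.5, 1.0, 2.0]
```

So `backoff_factor=0.5` gives a few seconds of total waiting at most — cheap insurance against a momentarily flaky connection.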

Some people say `pip install pyopenssl` helps; I tried it and it did not.


Personally I think this last approach is the best one — as a rule you shouldn't leave a bare `except` with nothing in its body, though that's exactly the kind of code I tend to write. This version sets a 30-second timeout; requests that exceed it are simply skipped.

# This runs inside a scraping loop, hence the `continue` statements.
try:
    res = requests.get(adress, timeout=30)
except requests.ConnectionError as e:
    print("OOPS!! Connection Error. Make sure you are connected to Internet. Technical Details given below.\n")
    print(str(e))
    continue
except requests.Timeout as e:
    print("OOPS!! Timeout Error")
    print(str(e))
    renewIPadress()
    continue
except requests.RequestException as e:
    print("OOPS!! General Error")
    print(str(e))
    renewIPadress()
    continue
except KeyboardInterrupt:
    print("Someone closed the program")

Reference

https://stackoverflow.com/questions/23013220/max-retries-exceeded-with-url-in-requests
