Some pitfalls encountered in Python 3

1. When calling image_to_string from pytesseract, the error below is raised (PyCharm reports it; IDLE does not). Environment: Python 3.5, with Pillow and pytesseract correctly installed (installation instructions are easy to find online; PyCharm's package manager also handles this conveniently).

Traceback (most recent call last):
  File "D:/Chao/PycharmProjects/net.bjxueche.haijia/CoreImage.py", line 82, in <module>
    text = image_to_string(image=image, boxes=True)
  File "D:\mysoft\Python\Python35\lib\site-packages\pytesseract\pytesseract.py", line 162, in image_to_string
    config=config)
  File "D:\mysoft\Python\Python35\lib\site-packages\pytesseract\pytesseract.py", line 95, in run_tesseract
    stderr=subprocess.PIPE)
  File "D:\mysoft\Python\Python35\lib\subprocess.py", line 950, in __init__
    restore_signals, start_new_session)
  File "D:\mysoft\Python\Python35\lib\subprocess.py", line 1220, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified.

Solution:

Open pytesseract.py, find the code below, and change the value of tesseract_cmd to the full path of the executable; after that the call no longer fails.

# CHANGE THIS IF TESSERACT IS NOT IN YOUR PATH, OR IS NAMED DIFFERENTLY
#tesseract_cmd = 'tesseract'
tesseract_cmd = 'D:/Program Files (x86)/Tesseract-OCR/tesseract.exe'

PS: The directory is clearly in my PATH environment variable, and the "tesseract" command works fine in CMD, so I don't know why pytesseract fails to find it; in any case, after this change it runs correctly.

PyCharm reports the error while IDLE does not; after the fix, both work fine.
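
If you prefer not to edit the installed library, pytesseract also exposes tesseract_cmd as a module attribute, so the same fix can be applied from your own script. A minimal sketch (the executable path and image file name are just examples; adjust them to your setup):

import pytesseract
from PIL import Image

# Point pytesseract at the Tesseract executable instead of editing pytesseract.py
pytesseract.pytesseract.tesseract_cmd = 'D:/Program Files (x86)/Tesseract-OCR/tesseract.exe'

image = Image.open('captcha.png')  # example image file
text = pytesseract.image_to_string(image)
print(text)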

2. Differences between fetching web pages in Python 3 and Python 2

2.1 The simplest request

import urllib.request
response = urllib.request.urlopen('http://python.org/')
html = response.read()

2.2 Using Request

import urllib.request

req = urllib.request.Request('http://python.org/')
response = urllib.request.urlopen(req)
the_page = response.read()

2.3 Sending data

#! /usr/bin/env python3

import urllib.parse
import urllib.request

url = 'http://localhost/login.php'
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
values = {
    'act' : 'login',
    'login[email]' : '[email protected]',
    'login[password]' : '123456'
}

# In Python 3 the POST body must be bytes, so encode the urlencoded string
data = urllib.parse.urlencode(values).encode('utf-8')
req = urllib.request.Request(url, data)
req.add_header('Referer', 'http://www.python.org/')
response = urllib.request.urlopen(req)
the_page = response.read()

print(the_page.decode("utf8"))

2.4 Sending data and headers

#! /usr/bin/env python3

import urllib.parse
import urllib.request

url = 'http://localhost/login.php'
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
values = {
    'act' : 'login',
    'login[email]' : '[email protected]',
    'login[password]' : '123456'
}
headers = { 'User-Agent' : user_agent }

# In Python 3 the POST body must be bytes, so encode the urlencoded string
data = urllib.parse.urlencode(values).encode('utf-8')
req = urllib.request.Request(url, data, headers)
response = urllib.request.urlopen(req)
the_page = response.read()

print(the_page.decode("utf8"))

2.5 HTTP errors

#! /usr/bin/env python3

import urllib.error
import urllib.request

req = urllib.request.Request('http://www.111cn.net')
try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    print(e.code)
    print(e.read().decode("utf8"))

2.6 Exception handling (1)

#! /usr/bin/env python3

from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
req = Request("http://www.111cn.net/")
try:
    response = urlopen(req)
except HTTPError as e:
    print("The server couldn't fulfill the request.")
    print('Error code: ', e.code)
except URLError as e:
    print('We failed to reach a server.')
    print('Reason: ', e.reason)
else:
    print("good!")
    print(response.read().decode("utf8"))

2.7 Exception handling (2)

#! /usr/bin/env python3

from urllib.request import Request, urlopen
from urllib.error import URLError
req = Request("http://www.111cn.net/")
try:
    response = urlopen(req)
except URLError as e:
    if hasattr(e, 'reason'):
        print('We failed to reach a server.')
        print('Reason: ', e.reason)
    elif hasattr(e, 'code'):
        print("The server couldn't fulfill the request.")
        print('Error code: ', e.code)
else:
    print("good!")
    print(response.read().decode("utf8"))

2.8 HTTP authentication

#! /usr/bin/env python3

import urllib.request

# create a password manager
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()

# Add the username and password.
# If we knew the realm, we could use it instead of None.
top_level_url = "https://www.111cn.net/"
password_mgr.add_password(None, top_level_url, 'rekfan', 'xxxxxx')

handler = urllib.request.HTTPBasicAuthHandler(password_mgr)

# create "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(handler)

# use the opener to fetch a URL
a_url = "https://www.111cn.net/"
x = opener.open(a_url)
print(x.read())

# Install the opener.
# Now all calls to urllib.request.urlopen use our opener.
urllib.request.install_opener(opener)

a = urllib.request.urlopen(a_url).read().decode('utf8')
print(a)

2.9 Using a proxy

#! /usr/bin/env python3

import urllib.request

# Note: urllib.request only supports HTTP(S) proxies natively; a SOCKS5
# proxy like this needs a third-party handler such as PySocks
proxy_support = urllib.request.ProxyHandler({'socks5': 'localhost:1080'})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)


a = urllib.request.urlopen("http://www.111cn.net").read().decode("utf8")
print(a)
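
Since urllib.request's ProxyHandler only understands HTTP(S) proxies out of the box, the SOCKS5 example above needs that extra handler. For a plain HTTP proxy the same pattern works directly; a minimal sketch (proxy.example.com:8080 is a placeholder address):

#! /usr/bin/env python3

import urllib.request

# Map URL schemes to the proxy that should handle them
proxy_support = urllib.request.ProxyHandler({
    'http': 'http://proxy.example.com:8080',
    'https': 'http://proxy.example.com:8080',
})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)

a = urllib.request.urlopen('http://www.111cn.net/').read().decode('utf8')
print(a)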

2.10 Timeouts

#! /usr/bin/env python3

import socket
import urllib.request

# timeout in seconds
timeout = 2
socket.setdefaulttimeout(timeout)

# this call to urllib.request.urlopen now uses the default timeout
# we have set in the socket module
req = urllib.request.Request('http://www.111cn.net/')
a = urllib.request.urlopen(req).read()
print(a)
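
urlopen also accepts a timeout argument directly, so the timeout can be set per request instead of globally through the socket module. A minimal sketch of the same request:

#! /usr/bin/env python3

import urllib.request

# Per-request timeout in seconds; no need to touch socket.setdefaulttimeout
a = urllib.request.urlopen('http://www.111cn.net/', timeout=2).read()
print(a)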


