Python Learning Summary Notes

These notes come from a fresh pass at Python, prompted by wanting to learn cryptographic programming with it. They mainly cover a number of Python libraries. It will probably take a long time to write because there is quite a bit of material, but rather than splitting it into several posts I'll keep everything in this one.

  • Python file server
# Python 3
python -m http.server [<portNo>]

# Python 2
python -m SimpleHTTPServer [<portNo>]
  • *args and **kwargs
def fun(a,*args,**kwargs):
    print("a = "+str(a))
    for i in args:
        print(i)
    for key,val in kwargs.items():
        print(["key : "+str(key)," val : "+str(val)])
if __name__=="__main__":
    fun(1,'a','b','c',*('t',2,3),**{'c':1,'b':2},s=5,u=6)

# output
a = 1
a
b
c
t
2
3
['key : c', ' val : 1']
['key : b', ' val : 2']
['key : s', ' val : 5']
['key : u', ' val : 6']

This shows that *args collects all the remaining positional arguments (including those unpacked from the tuple), while **kwargs collects all the keyword arguments (both the unpacked dict and the explicit key=value arguments).

  • Converting Python 2 code to Python 3
# pip install 2to3
2to3 -w example.py
  • Formatting code
# pip install autopep8
autopep8.exe --in-place --aggressive --aggressive test.py
  • Asynchronous execution
import asyncio
import time
import concurrent.futures as cf
import requests
from bs4 import BeautifulSoup


def get_title(i):
    url = 'https://movie.douban.com/top250?start={}&filter='.format(i*25)
    headers = {"User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9.1.6) ",
               "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
               "Accept-Language": "en-us",
               "Connection": "keep-alive",
               "Accept-Charset": "GB2312,utf-8;q=0.7,*;q=0.7"}
    r = requests.get(url,headers=headers)
    soup = BeautifulSoup(r.content, "html.parser")
    lis = soup.find('ol', class_='grid_view').find_all('li')
    for li in lis:
        title = li.find("span",class_="title").text
        print(title)

async def title():
    with cf.ThreadPoolExecutor(max_workers = 10) as executor:
        loop = asyncio.get_event_loop()
        futures = (loop.run_in_executor(executor,get_title,i) for i in range(10))
        for result in await asyncio.gather(*futures):
            pass


def myfunc(i):
    print("start {}th".format(i))
    time.sleep(1)
    print("finish {}th".format(i))

async def main():
    with cf.ThreadPoolExecutor(max_workers = 10) as executor:
        loop = asyncio.get_event_loop()
        futures=(loop.run_in_executor(executor,myfunc,i) for i in range(10))
        for result in await asyncio.gather(*futures):
            pass



if __name__=="__main__":
    time1=time.time()
    loop = asyncio.get_event_loop()
    loop.run_until_complete(title())
    # The code below is kept for comparing speed against the synchronous version
    # for i in range(10):
    #     get_title(i)
    print("花費了:"+str(time.time()-time1)+"s")

  1. A quick note on the differences between asynchrony, multithreading, parallelism, and concurrency

1. A process is the carrier in which an application runs: one dynamic execution of a program.

  • The program describes the work the process has to do and is the instruction set that drives its execution;
  • The data set is the data and working storage the program needs while it runs;
  • The Process Control Block (PCB) holds the process's descriptive and control information and is the sole token of the process's existence.

2. A thread is the smallest unit of an execution flow and the basic unit of processor scheduling and dispatch. A process can contain one or more threads, and all of its threads share the program's memory space (that is, the memory space of the process they belong to); the sketch after the list below illustrates this.

Differences:

  • A thread consists of a thread ID, the current instruction pointer (PC), registers, and a stack.
  • A process consists of its memory space (code, data, process address space, open files) and one or more threads.
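
To make the shared-memory point concrete, here is a minimal sketch of my own (not from the original notes): several threads append to the same list object because they all live inside one process.

import threading

shared = []                          # one list object, visible to every thread

def worker(n):
    shared.append(n)                 # every thread mutates the same memory

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared)                        # all four values end up in the one shared list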


  2. Threads and CPU cores

Today's machines usually report a CPU count that is twice the number of physical cores: hyper-threading presents one physical core as two logical cores, corresponding to two kernel threads.

A quick note on the kinds of threads: K stands for kernel thread, LWP for lightweight process, and U for user thread.

Python's threading library gives you user threads.
When a script runs, the system creates an LWP; any threads created later during execution are user threads (U).
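
As a quick check of the hyper-threading claim above, a minimal sketch (my own; it assumes the third-party psutil package is installed):

import os
import psutil                        # pip install psutil

print("logical cores :", os.cpu_count())                    # logical (hyper-threaded) cores
print("physical cores:", psutil.cpu_count(logical=False))   # physical cores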

  3. Parallelism and concurrency

Parallelism means genuinely running multiple threads at the same time; concurrency means a single thread never blocks, using time-slicing to make progress on several tasks at once.
Put simply, parallelism is physically simultaneous execution, while concurrency is a style of program design that lets several tasks interleave logically.
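
A minimal sketch of my own that makes the difference visible for CPU-bound work: in CPython the GIL keeps threads interleaving on one core (concurrency), while separate processes can run on several cores at once (parallelism), so the process pool usually finishes noticeably faster.

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):                                   # pure CPU work, no I/O
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls):
    start = time.time()
    with executor_cls(max_workers=4) as ex:
        list(ex.map(burn, [2_000_000] * 4))
    return time.time() - start

if __name__ == "__main__":                     # guard is required for processes on Windows
    print("threads  :", timed(ThreadPoolExecutor))
    print("processes:", timed(ProcessPoolExecutor))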

  • base64 encoding/decoding
# ASCII85 encode/decode
import base64
s = "Hello World!"
b = s.encode("UTF-8")
e = base64.a85encode(b)
s1 = e.decode("UTF-8")
print("ASCII85 Encoded:", s1)
b1 = s1.encode("UTF-8")
d = base64.a85decode(b1)
s2 = d.decode("UTF-8")
print(s2)

# Base64 encode/decode
import base64
s = "Hello World!"
b = s.encode("UTF-8")
e = base64.b64encode(b)
s1 = e.decode("UTF-8")
print(s1)

# Base85 encode/decode
import base64
# Creating a string
s = "Hello World!"
# Encoding the string into bytes
b = s.encode("UTF-8")
# Base85 Encode the bytes
e = base64.b85encode(b)
# Decoding the Base85 bytes to string
s1 = e.decode("UTF-8")
# Printing Base85 encoded string
print("Base85 Encoded:", s1)
# Encoding the Base85 encoded string into bytes
b1 = s1.encode("UTF-8")
# Decoding the Base85 bytes
d = base64.b85decode(b1)
# Decoding the bytes to string
s2 = d.decode("UTF-8")
print(s2)

In practice UTF-8 is the default codec, so when working with UTF-8 the "UTF-8" argument to encode()/decode() can simply be omitted.
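
For example (my own one-liner), the same Base64 round trip relying on the defaults:

import base64
print(base64.b64encode("Hello World!".encode()).decode())   # SGVsbG8gV29ybGQh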

  • Using configparser
import configparser

config = configparser.ConfigParser()
# config['settings']={'email':"[email protected]",'phone':'15827993562'}
# with open('config.txt','w') as configfile:
#     config.write(configfile)

if __name__=="__main__":
    config.read("config.txt")
    for key,val in config['settings'].items():
        print("key : "+key+"  val : "+val)
    # for key, val in config['host'].items():
    #     print("key : " + key + "  val : " + val)

  • ctypes (to be expanded later)
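Until that section is written, here is only a minimal sketch of my own showing ctypes calling a C standard-library function; the library lookup is platform-dependent.

import ctypes
import ctypes.util

# Locate the C runtime (find_library("c") may return None on Windows, so fall back to msvcrt).
libc = ctypes.CDLL(ctypes.util.find_library("c") or "msvcrt")

# Declare the signature of abs() and call it from Python.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int
print(libc.abs(-42))     # 42
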
  • deque (double-ended queue)
from collections import deque

d = deque([1, 2, 3])
p = d.popleft()        # p = 1, d = deque([2, 3])
d.appendleft(5)        # d = deque([5, 2, 3])
Create an empty deque:

dl = deque()  # deque([]) creating empty deque
Create a deque from some elements:

dl = deque([1, 2, 3, 4])  # deque([1, 2, 3, 4])
Append an element to the deque:

dl.append(5)  # deque([1, 2, 3, 4, 5])
Append an element on the left:

dl.appendleft(0)  # deque([0, 1, 2, 3, 4, 5])
Extend the deque with a list of elements:

dl.extend([6, 7])  # deque([0, 1, 2, 3, 4, 5, 6, 7])
Extend on the left with a list of elements (note the reversed insertion order):

dl.extendleft([-2, -1])  # deque([-1, -2, 0, 1, 2, 3, 4, 5, 6, 7])
.pop() removes an item from the right:

dl.pop()  # 7 => deque([-1, -2, 0, 1, 2, 3, 4, 5, 6])
.popleft() removes an item from the left:

dl.popleft()  # -1 deque([-2, 0, 1, 2, 3, 4, 5, 6])
Remove an element by value:

dl.remove(1)  # deque([-2, 0, 2, 3, 4, 5, 6])
Reverse the order of the elements:

dl.reverse()  # deque([6, 5, 4, 3, 2, 0, -2])
  • dis disassembly
>>> import dis 
>>> def hello():
...     print "Hello, World"
...
>>> dis.dis(hello)
  2           0 LOAD_CONST               1 ('Hello, World')
              3 PRINT_ITEM
              4 PRINT_NEWLINE
              5 LOAD_CONST               0 (None)
              8 RETURN_VALUE
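
The session above is Python 2 (print is a statement, compiled to PRINT_ITEM/PRINT_NEWLINE). In Python 3 the same idea looks like the sketch below; the exact bytecode printed varies between interpreter versions, so no output is reproduced here.

import dis

def hello():
    print("Hello, World")

dis.dis(hello)          # in Python 3, print() is compiled to an ordinary function call
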
  • Generators and iterators
# Generator expression
a=(x*2 for x in range(10))      	#<generator object <genexpr> at 0x000001A3ACC7CF48>
next(a)                             # consumes the first value (0)
print([i for i in a])

b=[x*2 for x in range(10)]			#list


# Generator function
def fib(n):
	prev,curr = 0,1
	while n>0:
		n-=1
		yield curr
		prev,curr=curr,curr+prev
print([i for i in fib(10)])         # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
  • simplejson
import simplejson
a = simplejson.dumps({'a':1,'b':2})		#'{"a": 1, "b": 2}'
simplejson.loads(a)						#{'a': 1, 'b': 2}

import simplejson
with open("file.txt","r") as f:
    info = simplejson.load(f)
    
import simplejson
info = [{"name":"laowang","age":40}]
with open("file.txt","w") as f:
    result = simplejson.dump(info,f)
  • The Kivy framework (to be filled in later)
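As a placeholder until that section is written, a minimal Kivy "hello world" sketch (my own; it assumes kivy is installed):

# pip install kivy
from kivy.app import App
from kivy.uix.label import Label

class HelloApp(App):
    def build(self):                       # build() returns the root widget
        return Label(text="Hello, Kivy")

if __name__ == "__main__":
    HelloApp().run()
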
  • pyinstaller
# pip install pyinstaller
pyinstaller myscript.py -F		# bundle everything into a single executable file
pyinstaller myscript.py -D		# bundle into a folder (one-dir mode)
# In my tests the single-file mode is the more convenient of the two

  • Functional programming
# lambda function
s=lambda x:x*x
s(2)

# Formatted output
a="this {} a new {}".format("is","start")
b="this %s a new %s"%("is","start")
  • Decorators
import time
import requests
from bs4 import BeautifulSoup


def timer(info):
    def decorator(func):
        def wrapper(*args,**kwargs):
            start=time.time()
            result=func(*args,**kwargs)
            if info=="m":
                print((time.time()-start)/60)
            if info=="s":
                print(time.time()-start)
            return result
        return wrapper
    return decorator


@timer('s')
def get_title(s):
    for i in range(s):
        url = 'https://movie.douban.com/top250?start={}&filter='.format(i*25)
        headers = {"User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9.1.6) ",
                   "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
                   "Accept-Language": "en-us",
                   "Connection": "keep-alive",
                   "Accept-Charset": "GB2312,utf-8;q=0.7,*;q=0.7"}
        r = requests.get(url,headers=headers)
        soup = BeautifulSoup(r.content, "html.parser")
        lis = soup.find('ol', class_='grid_view').find_all('li')
        for li in lis:
            title = li.find("span",class_="title").text
            print(title)


class Timer:
    def __init__(self,func):
        self._func=func
    def __call__(self, *args, **kwargs):
        start=time.time()
        result = self._func(*args,**kwargs)
        end = time.time()
        print("time : "+str(end-start))
        return result

@Timer
def get_title1(s):
    for i in range(s):
        url = 'https://movie.douban.com/top250?start={}&filter='.format(i*25)
        headers = {"User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9.1.6) ",
                   "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
                   "Accept-Language": "en-us",
                   "Connection": "keep-alive",
                   "Accept-Charset": "GB2312,utf-8;q=0.7,*;q=0.7"}
        r = requests.get(url,headers=headers)
        soup = BeautifulSoup(r.content, "html.parser")
        lis = soup.find('ol', class_='grid_view').find_all('li')
        for li in lis:
            title = li.find("span",class_="title").text
            print(title)

if __name__=="__main__":
    get_title1(10)