Note: some functions in this module require operating-system support; for example, importing the multiprocessing.synchronize module raises ImportError on some platforms.
1) Process
Creating a Process is simple.
- from multiprocessing import Process
- def f(name):
-     print('hello', name)
- if __name__ == '__main__':
-     p = Process(target=f, args=('bob',))
-     p.start()
-     p.join()
- from multiprocessing import Process
- import os
- def info(title):
-     print(title)
-     print('module name:', __name__)
-     print('parent process:', os.getppid())  # fails on Windows before Python 3.2
-     print('process id:', os.getpid())
- def f(name):
-     info('function f')
-     print('hello', name)
- if __name__ == '__main__':
-     info('main line')
-     p = Process(target=f, args=('bob',))
-     p.start()
-     p.join()
Parameters:
group: always None; it exists only for compatibility with threading.Thread
target: the callable to be invoked in the new process, usually a function
name: the process name
args: the positional arguments for target
kwargs: the keyword arguments for target
Methods:
run() the default run() calls target; you can override it in a subclass
start() start the process
join([timeout]) block the calling process until the child process finishes.
When timeout is None there is no time limit; otherwise join gives up after timeout seconds.
A process can be joined many times, but cannot join itself
is_alive() return whether the process is still running
terminate() terminate the process.
On Unix this is done with SIGTERM;
on Windows, with TerminateProcess
Attributes:
name the process name
daemon whether the process is a daemon process
pid the process ID
exitcode None while the process has not yet terminated
authkey the process's authentication key (a byte string)
2) Queue
Queue is similar to queue.Queue and is generally used to exchange information between processes.
Example:
- from multiprocessing import Process, Queue
- def f(q):
-     q.put([42, None, 'hello'])
- if __name__ == '__main__':
-     q = Queue()
-     p = Process(target=f, args=(q,))
-     p.start()
-     print(q.get())    # prints "[42, None, 'hello']"
-     p.join()
Queue implements most of the methods of queue.Queue, but task_done() and join() are not implemented.
Creating a Queue: multiprocessing.Queue([maxsize])
Methods:
qsize() return the approximate size of the queue (unreliable because of multiprocess timing)
empty() return a boolean indicating whether the queue is empty
full() return a boolean indicating whether the queue is full
put(item[, block[, timeout]])
put_nowait(item) equivalent to put(item, False)
get([block[, timeout]])
get_nowait() equivalent to get(False)
close() indicate that no more data will be put on this queue by the current process
join_thread() join the background feeder thread; may only be used after close()
cancel_join_thread() prevent join_thread() from blocking
3) JoinableQueue
Creation: multiprocessing.JoinableQueue([maxsize])
task_done() signal that a previously fetched item has been fully processed
join() block until all items on the queue have been processed
4) Pipe
- from multiprocessing import Process, Pipe
- def f(conn):
-     conn.send([42, None, 'hello'])
-     conn.close()
- if __name__ == '__main__':
-     parent_conn, child_conn = Pipe()
-     p = Process(target=f, args=(child_conn,))
-     p.start()
-     print(parent_conn.recv())    # prints "[42, None, 'hello']"
-     p.join()
5) Synchronization
- from multiprocessing import Process, Lock
- def f(l, i):
-     l.acquire()
-     print('hello world', i)
-     l.release()
- if __name__ == '__main__':
-     lock = Lock()
-     for num in range(10):
-         Process(target=f, args=(lock, num)).start()
6) Shared memory: Value and Array
- from multiprocessing import Process, Value, Array
- def f(n, a):
-     n.value = 3.1415927
-     for i in range(len(a)):
-         a[i] = -a[i]
- if __name__ == '__main__':
-     num = Value('d', 0.0)
-     arr = Array('i', range(10))
-     p = Process(target=f, args=(num, arr))
-     p.start()
-     p.join()
-     print(num.value)
-     print(arr[:])
7) Manager
- from multiprocessing import Process, Manager
- def f(d, l):
-     d[1] = '1'
-     d['2'] = 2
-     d[0.25] = None
-     l.reverse()
- if __name__ == '__main__':
-     manager = Manager()
-     d = manager.dict()
-     l = manager.list(range(10))
-     p = Process(target=f, args=(d, l))
-     p.start()
-     p.join()
-     print(d)
-     print(l)
8) Pool
- from multiprocessing import Pool
- def f(x):
-     return x*x
- if __name__ == '__main__':
-     pool = Pool(processes=4)             # start 4 worker processes
-     result = pool.apply_async(f, [10])   # evaluate "f(10)" asynchronously
-     print(result.get(timeout=1))         # prints "100" unless your computer is *very* slow
-     print(pool.map(f, range(10)))        # prints "[0, 1, 4,..., 81]"
Methods:
apply(func[, args[, kwds]]) call func with the given arguments and block until it returns
apply_async(func[, args[, kwds[, callback]]]) asynchronous variant of apply(); returns an AsyncResult
map(func, iterable[, chunksize]) parallel equivalent of the built-in map(); blocks until all results are ready
map_async(func, iterable[, chunksize[, callback]]) asynchronous variant of map(); returns an AsyncResult
imap(func, iterable[, chunksize]) lazier version of map(); returns an iterator
imap_unordered(func, iterable[, chunksize]) like imap(), but the results are yielded in arbitrary order
close() prevent any more tasks from being submitted to the pool
terminate() stop the worker processes immediately
join() wait for the worker processes to exit; close() or terminate() must be called first
- from multiprocessing import Pool
- import time
- def f(x):
-     return x*x
- if __name__ == '__main__':
-     pool = Pool(processes=4)             # start 4 worker processes
-     result = pool.apply_async(f, (10,))  # evaluate "f(10)" asynchronously
-     print(result.get(timeout=1))         # prints "100" unless your computer is *very* slow
-     print(pool.map(f, range(10)))        # prints "[0, 1, 4,..., 81]"
-     it = pool.imap(f, range(10))
-     print(next(it))                      # prints "0"
-     print(next(it))                      # prints "1"
-     print(it.next(timeout=1))            # prints "4" unless your computer is *very* slow
-     result = pool.apply_async(time.sleep, (10,))
-     print(result.get(timeout=1))         # raises TimeoutError
9) Miscellaneous
multiprocessing.active_children() return a list of all live children of the current process
multiprocessing.cpu_count() return the number of CPUs in the system
multiprocessing.current_process() return the Process object corresponding to the current process
multiprocessing.freeze_support() add support for frozen executables (e.g. built with py2exe) on Windows
multiprocessing.set_executable() set the path of the Python interpreter used to start child processes
10) Connection objects
send(obj) send a picklable object to the other end of the connection
recv() receive an object sent from the other end
fileno() return the file descriptor or handle used by the connection
close() close the connection
poll([timeout]) return whether there is any data available to be read
send_bytes(buffer[, offset[, size]])
recv_bytes([maxlength])
recv_bytes_into(buffer[, offset])
- >>> from multiprocessing import Pipe
- >>> a, b = Pipe()
- >>> a.send([1, 'hello', None])
- >>> b.recv()
- [1, 'hello', None]
- >>> b.send_bytes(b'thank you')  # a bytes-like object is required in Python 3
- >>> a.recv_bytes()
- b'thank you'
- >>> import array
- >>> arr1 = array.array('i', range(5))
- >>> arr2 = array.array('i', [0] * 10)
- >>> a.send_bytes(arr1)
- >>> count = b.recv_bytes_into(arr2)
- >>> assert count == len(arr1) * arr1.itemsize
- >>> arr2
- array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])