Thanks to: http://www.cnblogs.com/herbert/p/3395268.html http://blog.csdn.net/my2010sam/article/details/9831821
There are three main approaches:
1. CPU time
time.clock()
Measures CPU time fairly precisely: take the CPU time before and after the code runs, and the difference is the CPU time the code consumed. Note that time.clock() returns processor time on Unix but wall-clock time on Windows; it was deprecated in Python 3.3 and removed in 3.8, with time.process_time() as its replacement for measuring CPU time.
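Since time.clock() is gone from modern Python, here is a minimal Python 3 sketch of the same idea using time.process_time() (the workload function is just an illustrative stand-in):

```python
import time

def busy():
    # burn some CPU so the counter registers a measurable difference
    return sum(i * i for i in range(200_000))

start = time.process_time()  # CPU time; Python 3.3+ replacement for time.clock()
busy()
cpu_elapsed = time.process_time() - start
print("CPU seconds: %f" % cpu_elapsed)
```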
2. Wall-clock time
time.time()
Measures wall-clock (real) time, i.e. ordinary stopwatch-style timing.
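A minimal wall-clock sketch (the sleep is just a stand-in for real work; on Python 3, time.perf_counter() is the recommended timer for measuring intervals):

```python
import time

start = time.time()   # wall-clock timestamp, seconds since the epoch
time.sleep(0.05)      # stand-in for real work; sleeping still advances wall time
elapsed = time.time() - start
print("real seconds: %f" % elapsed)
```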
3. Benchmark time
timeit.timeit(stmt='pass', setup='pass', timer=<default timer>, number=1000000)
Short examples:
timeit("math.sqrt(2.0)", "import math")
timeit("sqrt(2.0)", "from math import sqrt")
timeit("test()", "from __main__ import test", number=10000)
Timing a single line of code is quite convenient in Python: you can use timeit directly (it can even be invoked from the command line as python -m timeit).
The Timer class:
__init__(stmt="pass", setup="pass", timer=default_timer): stmt is the statement to time; setup sets up its environment (e.g. imports)
print_exc(file=None): print the traceback of an exception raised by the timed code
timeit(number=default_number): return the time in seconds; number is how many times the timed statement is called per test
repeat(repeat=default_repeat, number=default_number): return a list of timings in seconds; repeat is how many times the whole test is repeated, number is how many times the statement is executed per test
Module-level shortcuts:
timeit(stmt="pass", setup="pass", timer=default_timer, number=default_number) = Timer(stmt, setup, timer).timeit(number)
repeat(stmt="pass", setup="pass", timer=default_timer, repeat=default_repeat, number=default_number) = Timer(stmt, setup, timer).repeat(repeat, number)
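These methods can be exercised directly on a Timer instance; a small sketch (the statement and call counts are arbitrary):

```python
import timeit

# Build a Timer once, then time it in two ways.
t = timeit.Timer(stmt="sqrt(2.0)", setup="from math import sqrt")
total = t.timeit(number=100000)           # one total, in seconds
runs = t.repeat(repeat=3, number=100000)  # a list of three such totals
print(total, min(runs))                   # min(runs) is the steadiest figure
```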
Example:
import timeit

def func1(x):
    return pow(x, 2)

def func2(x):
    return x * x

v = 10000
func1_test = 'func1(' + str(v) + ')'
func2_test = 'func2(' + str(v) + ')'

print(timeit.timeit(func1_test, 'from __main__ import func1'))
print(timeit.timeit(func2_test, 'from __main__ import func2'))
print(timeit.repeat(func1_test, 'from __main__ import func1'))
print(timeit.repeat(func2_test, 'from __main__ import func2'))
Comparing the three approaches (each of the three functions below counts the number of divisors of an integer):
CPU-time example:
import time

def countDiv(n):
    "Return the number of divisors of n."
    count = 1  # count n itself
    for i in range(1, n):
        if n % i == 0:
            count += 1
    return count

def countDiv2(n):
    return len([x for x in range(1, n + 1) if n % x == 0])

def countDiv3(n):
    s = set()
    for i in range(1, n):
        if i in s:
            break  # i already appeared as a cofactor: every divisor is collected
        elif n % i == 0:
            s.update({i, n // i})  # floor division keeps the pair as integers
    return len(s)
start_CPU = time.clock()
a = countDiv(73920)
end_CPU = time.clock()
print("Method 1: %f CPU seconds" % (end_CPU - start_CPU))
start_CPU = time.clock()
a = countDiv2(73920)
end_CPU = time.clock()
print("Method 2: %f CPU seconds" % (end_CPU - start_CPU))
start_CPU = time.clock()
a = countDiv3(73920)
end_CPU = time.clock()
print("Method 3: %f CPU seconds" % (end_CPU - start_CPU))
Result: method 3 is by far the fastest; methods 1 and 2 are essentially neck and neck.
Method 1: 0.022805 CPU seconds
Method 2: 0.015988 CPU seconds
Method 3: 0.000141 CPU seconds
Wall-clock-time example:
import time
start_Real = time.time()
a = countDiv(73920)
end_Real = time.time()
print("Method 1: %f real seconds" % (end_Real - start_Real))
start_Real = time.time()
a = countDiv2(73920)
end_Real = time.time()
print("Method 2: %f real seconds" % (end_Real - start_Real))
start_Real = time.time()
a = countDiv3(73920)
end_Real = time.time()
print("Method 3: %f real seconds" % (end_Real - start_Real))
Result:
Method 1: 0.016001 real seconds
Method 2: 0.016001 real seconds
Method 3: 0.000000 real seconds
At this resolution we simply cannot tell how long method 3 really takes.
To find out how many times faster method 3 is than methods 1 and 2, we still need timeit. timeit executes the code a fixed number of times, which reflects the running time much more stably and avoids the large error a single run can introduce.
if __name__ == '__main__':
import timeit
    print(timeit.timeit("countDiv(73920)", setup="from __main__ import countDiv", number=100))
    print(timeit.timeit("countDiv2(73920)", setup="from __main__ import countDiv2", number=100))
    print(timeit.timeit("countDiv3(73920)", setup="from __main__ import countDiv3", number=100))
Result:
1.6992941682537246
1.69091280670973
0.013773491283526784
timeit shows that method 2 performs essentially the same as method 1 (what timeit reports is wall-clock time). Method 2 takes roughly 120 times as long as method 3 (about 1.70 s versus 0.014 s for 100 calls).
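One practical refinement: rather than a single timeit() total, take the minimum over timeit.repeat(), since the slower runs mostly measure interference from the rest of the system. A sketch with small, arbitrary counts (globals=globals() requires Python 3.5+ and avoids the "from __main__ import ..." setup string):

```python
import timeit

def count_div(n):
    # same divisor count as countDiv2 above, on a smaller input
    return len([x for x in range(1, n + 1) if n % x == 0])

# repeat() returns one total per run; the minimum is the least-disturbed
# measurement, and is what `python -m timeit` reports as the best run.
runs = timeit.repeat("count_div(7392)", globals=globals(), repeat=5, number=10)
print(min(runs))
```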