Queue example - a concurrent web spider
Tornado's tornado.queues module implements an asynchronous producer/consumer pattern for coroutines, analogous to the pattern implemented for threads by the Python standard library's queue module.
A coroutine that awaits Queue.get pauses until there is an item in the queue. If the queue has a maximum size set, a coroutine that awaits Queue.put pauses until there is room for another item.
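As a minimal sketch (not part of the spider example below), the snippet that follows shows both pauses with a bounded queue: with maxsize=2, the producer's put pauses whenever the queue is full, and the consumer's get pauses whenever it is empty.

from tornado import gen, ioloop, queues


async def main():
    q = queues.Queue(maxsize=2)

    async def producer():
        for i in range(5):
            await q.put(i)  # pauses while the queue already holds 2 items
            print('put %d' % i)

    async def consumer():
        for _ in range(5):
            item = await q.get()  # pauses while the queue is empty
            print('got %d' % item)

    # Run producer and consumer concurrently until both finish.
    await gen.multi([producer(), consumer()])


if __name__ == '__main__':
    ioloop.IOLoop.current().run_sync(main)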
A Queue maintains a count of unfinished tasks, which begins at zero. put increments the count; task_done decrements it.
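For illustration, here is another small sketch (again separate from the spider example): three puts raise the unfinished-task count to three, each task_done lowers it by one, and join only resolves once the count returns to zero.

from tornado import gen, ioloop, queues


async def main():
    q = queues.Queue()
    for i in range(3):
        await q.put(i)  # unfinished-task count: 1, 2, 3

    async def drain():
        while q.qsize() > 0:
            item = await q.get()
            print('processed %d' % item)
            q.task_done()  # count: 2, 1, 0

    # join pauses until the counter reaches zero, i.e. drain is done.
    await gen.multi([drain(), q.join()])
    print('all tasks done')


if __name__ == '__main__':
    ioloop.IOLoop.current().run_sync(main)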
In the web-spider example here, the queue begins containing only base_url. When a worker fetches a page, it parses the links and puts the new ones in the queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker's call to task_done decrements the counter to zero. The main coroutine, which is waiting for join, is then unpaused and finishes.
#!/usr/bin/env python3

import time
from datetime import timedelta
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag

from tornado import gen, httpclient, ioloop, queues

base_url = 'http://www.tornadoweb.org/en/stable/'
concurrency = 10


async def get_links_from_url(url):
    """Download the page at `url` and parse it for links.

    Returned links have had the fragment after `#` removed, and have been made
    absolute so, e.g. the URL 'gen.html#tornado.gen.coroutine' becomes
    'http://www.tornadoweb.org/en/stable/gen.html'.
    """
    response = await httpclient.AsyncHTTPClient().fetch(url)
    print('fetched %s' % url)

    html = response.body.decode(errors='ignore')
    return [urljoin(url, remove_fragment(new_url))
            for new_url in get_links(html)]


def remove_fragment(url):
    pure_url, frag = urldefrag(url)
    return pure_url


def get_links(html):
    class URLSeeker(HTMLParser):
        def __init__(self):
            HTMLParser.__init__(self)
            self.urls = []

        def handle_starttag(self, tag, attrs):
            href = dict(attrs).get('href')
            if href and tag == 'a':
                self.urls.append(href)

    url_seeker = URLSeeker()
    url_seeker.feed(html)
    return url_seeker.urls


async def main():
    q = queues.Queue()
    start = time.time()
    fetching, fetched = set(), set()

    async def fetch_url(current_url):
        if current_url in fetching:
            return

        print('fetching %s' % current_url)
        fetching.add(current_url)
        urls = await get_links_from_url(current_url)
        fetched.add(current_url)

        for new_url in urls:
            # Only follow links beneath the base URL
            if new_url.startswith(base_url):
                await q.put(new_url)

    async def worker():
        async for url in q:
            if url is None:
                return
            try:
                await fetch_url(url)
            except Exception as e:
                print('Exception: %s %s' % (e, url))
            finally:
                q.task_done()

    await q.put(base_url)

    # Start workers, then wait for the work queue to be empty.
    workers = gen.multi([worker() for _ in range(concurrency)])
    await q.join(timeout=timedelta(seconds=300))
    assert fetching == fetched
    print('Done in %d seconds, fetched %s URLs.' % (
        time.time() - start, len(fetched)))

    # Signal all the workers to exit.
    for _ in range(concurrency):
        await q.put(None)
    await workers


if __name__ == '__main__':
    io_loop = ioloop.IOLoop.current()
    io_loop.run_sync(main)