【Python Web Scraping 9】Real-World Python Web Scraping Examples

  • Scraping Google's real search form
  • Scraping Facebook, a site that relies on JavaScript
  • Scraping Gap, a typical online store
  • Scraping the BMW official site and its map interface

#1. Scraping the Google search engine
# -*- coding: utf-8 -*-

import sys
import urllib
import urlparse
import lxml.html
from downloader import Downloader

def search(keyword):
    D = Downloader()
    url = 'https://www.google.com/search?q=' + urllib.quote_plus(keyword)
    html = D(url)
    tree = lxml.html.fromstring(html)
    links = []
    # each result link is a Google redirect; the real URL is stored
    # in the 'q' parameter of the link's query string
    for result in tree.cssselect('h3.r a'):
        link = result.get('href')
        qs = urlparse.urlparse(link).query
        links.extend(urlparse.parse_qs(qs).get('q', []))
    return links

if __name__ == '__main__':
    try:
        keyword = sys.argv[1]
    except IndexError:
        keyword = 'test'
    print search(keyword)

Note: mind the crawl delay when extracting Google search results; if you download too quickly, Google will start serving CAPTCHAs.
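If you need to run many queries, a minimal sketch of a throttled loop looks like this (the 5-second delay and the search_many helper are illustrative assumptions, not part of the original code; search() is the function defined above):

import time

def search_many(keywords, delay=5):
    # sleep between queries so Google does not rate-limit us with CAPTCHAs
    results = {}
    for keyword in keywords:
        results[keyword] = search(keyword)
        time.sleep(delay)
    return results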
#2. Scraping Facebook and LinkedIn
Viewing the source of Packt Publishing's Facebook page, you can find the first few posts, but the later posts are only loaded via AJAX as the browser scrolls.

##2.1 Automating login to Facebook

There is no simple way to extract this AJAX-loaded data. Although the AJAX events could be reverse engineered, different types of Facebook pages use different AJAX calls, so below we instead use Selenium to render the page and automate logging in to Facebook.

# -*- coding: utf-8 -*-

import sys
from selenium import webdriver

def facebook(username, password, url):
    driver = webdriver.Firefox()
    driver.get('https://www.facebook.com')
    driver.find_element_by_id('email').send_keys(username)
    driver.find_element_by_id('pass').send_keys(password)
    driver.find_element_by_id('login_form').submit()
    driver.implicitly_wait(30)
    # wait until the search box is available,
    # which means we have successfully logged in
    search = driver.find_element_by_id('q')
    # now that we are logged in we can navigate to the page of interest
    driver.get(url)
    # add code to scrape data of interest here
    #driver.close()
    
if __name__ == '__main__':
    try:
        username = sys.argv[1]
        password = sys.argv[2]
        url = sys.argv[3]
    except IndexError:
        print 'Usage: %s <username> <password> <url>' % sys.argv[0]
    else:
        facebook(username, password, url)
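Where the "add code to scrape data of interest here" comment sits, one option is to hand the rendered page over to lxml. A minimal sketch, with a hypothetical selector (Facebook's markup changes often, so inspect the live page to find the real one):

import lxml.html

# inside facebook(), after driver.get(url):
tree = lxml.html.fromstring(driver.page_source)
# '.userContent' is a hypothetical selector for post bodies
posts = [e.text_content() for e in tree.cssselect('.userContent')]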

##2.2 Extracting data from the Facebook API
Facebook exposes some data through its Graph API. When the data we want is available there, we can fetch it directly; below we retrieve the data for the Packt Publishing page.

# -*- coding: utf-8 -*-

import sys
import json
import pprint
from downloader import Downloader

def graph(page_id):
    D = Downloader()
    html = D('http://graph.facebook.com/' + page_id)
    return json.loads(html)

if __name__ == '__main__':
    try:
        page_id = sys.argv[1]
    except IndexError:
        page_id = 'PacktPub'
    pprint.pprint(graph(page_id))

Facebook developer documentation: https://developers.facebook.com/docs/graph-api. Most of these API calls are designed for Facebook applications that interact with authorized Facebook users, so to extract more detailed information, such as user posts, a scraper is still required.
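For example, the dictionary returned by graph() can be explored interactively; the exact fields depend on the API version and the page's settings, so treat the ones below as assumptions:

>>> page = graph('PacktPub')
>>> page['name'] # the page's display name
>>> page['website'] # the website the page links to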

##2.3 Automating login to LinkedIn

# -*- coding: utf-8 -*-

import sys
from selenium import webdriver

def search(username, password, keyword):
    driver = webdriver.Firefox()
    driver.get('https://www.linkedin.com/')
    driver.find_element_by_id('session_key-login').send_keys(username)
    driver.find_element_by_id('session_password-login').send_keys(password)
    driver.find_element_by_id('signin').click()
    driver.implicitly_wait(30)
    driver.find_element_by_id('main-search-box').send_keys(keyword)
    driver.find_element_by_class_name('search-button').click()
    # open the first result of the keyword search
    driver.find_element_by_css_selector('ol#results li a').click()
    # Add code to scrape data of interest from LinkedIn page here
    #driver.close()
    
if __name__ == '__main__':
    try:
        username = sys.argv[1]
        password = sys.argv[2]
        keyword = sys.argv[3]
    except IndexError:
        print 'Usage: %s <username> <password> <keyword>' % sys.argv[0]
    else:
        search(username, password, keyword)

#3. Scraping the online store Gap
Gap has a well-structured website with a Sitemap that helps web crawlers locate its latest content. From http://www.gap.com/robots.txt we can find the sitemap location: Sitemap: http://www.gap.com/products/sitemap_index.xml
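A quick sketch for pulling the Sitemap lines out of robots.txt with nothing but the standard library:

import urllib2

for line in urllib2.urlopen('http://www.gap.com/robots.txt'):
    if line.strip().startswith('Sitemap:'):
        print line.strip().split(':', 1)[1].strip()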

Requesting the sitemap index returns XML like the following:
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<sitemap>
<loc>http://www.gap.com/products/sitemap_1.xml</loc>
<lastmod>2017-01-30</lastmod>
</sitemap>
<sitemap>
<loc>http://www.gap.com/products/sitemap_2.xml</loc>
<lastmod>2017-01-30</lastmod>
</sitemap>
</sitemapindex>

The sitemap index above only links to further sitemaps; those sitemaps in turn hold the links to thousands of product categories, such as http://www.gap.com/products/blue-long-sleeve-shirts-for-men.jsp. Because there is so much content to crawl, we will use the multithreaded crawler developed in Part 4 of this series, together with its optional callback parameter.

# -*- coding: utf-8 -*-

from lxml import etree
from threaded_crawler import threaded_crawler

def scrape_callback(url, html):
    if url.endswith('.xml'):
        # Parse the sitemap XML file
        tree = etree.fromstring(html)
        links = [e[0].text for e in tree]
        return links
    else:
        # Add scraping code here
        pass       

def main():
    sitemap = 'http://www.gap.com/products/sitemap_index.xml'
    threaded_crawler(sitemap, scrape_callback=scrape_callback)
    
if __name__ == '__main__':
    main() 

The callback first checks the extension of the downloaded URL. If the extension is .xml, the file is parsed with lxml's etree module and the links are extracted from it; otherwise the URL is assumed to be a category page (extracting the category data is not implemented in this example).
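As a sketch of what that missing category branch might look like (the '.product-name' selector and the output file name are assumptions for illustration, not Gap's real markup):

import csv
import lxml.html

def scrape_category(url, html):
    # hypothetical: collect product names from a category page into a CSV
    tree = lxml.html.fromstring(html)
    names = [e.text_content().strip() for e in tree.cssselect('.product-name')]
    with open('gap_products.csv', 'a') as fp:
        csv.writer(fp).writerows([name.encode('utf-8')] for name in names)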
#4. Scraping the BMW official site
The BMW official website has a search tool for finding local dealers, at https://www.bmw.de/de/home.html?entryType=dlo
The tool takes a location as input and then shows nearby dealers on a map; for example, enter Berlin and click 'Look For'.
With the browser's developer tools we can see that the search triggers this AJAX request:
https://c2b-services.bmw.com/c2b-localsearch/services/api/v3/clients/BMWDIGITAL_DLO/DE/pois?country=DE&category=BM&maxResults=99&language=en&lat=52.507537768880056&lng=13.425269635701511
maxResults defaults to 99, but we can increase it. The AJAX request returns data in JSONP format, where JSONP means 'JSON with padding'. The padding is usually the name of a function to call, and the function's argument is the pure JSON data. In this example the data is wrapped in a call to a callback function, so before the data can be parsed with Python's json module the padding must be stripped off.

# -*- coding: utf-8 -*-

import json
import csv
from downloader import Downloader

def main():
    D = Downloader()
    url = 'https://c2b-services.bmw.com/c2b-localsearch/services/api/v3/clients/BMWDIGITAL_DLO/DE/pois?country=DE&category=BM&maxResults=%d&language=en&lat=52.507537768880056&lng=13.425269635701511'
    # the response looks like: callback({"status": ..., "data": ...})
    jsonp = D(url % 1000)
    # strip the JSONP padding, leaving pure JSON
    pure_json = jsonp[jsonp.index('(') + 1 : jsonp.rindex(')')]
    dealers = json.loads(pure_json)
    with open('bmw.csv', 'w') as fp:
        writer = csv.writer(fp)
        writer.writerow(['Name', 'Latitude', 'Longitude'])
        for dealer in dealers['data']['pois']:
            # Python 2's csv module expects byte strings, so encode the name
            name = dealer['name'].encode('utf-8')
            lat, lng = dealer['lat'], dealer['lng']
            writer.writerow([name, lat, lng])
    
if __name__ == '__main__':
    main() 
The structure of the parsed response can be explored interactively:

>>> dealers.keys() # [u'status', u'count', u'data', ...]
>>> dealers['count'] # the number of dealers
>>> dealers['data']['pois'][0] # the data for the first dealer

Wu_Being's blog notice: you are welcome to repost; please credit the original post and link. Thanks!
Python web scraping series: "[Python Web Scraping 9] Real-World Python Web Scraping Examples" http://blog.csdn.net/u014134180/article/details/55508272
GitHub code files for the Python web scraping series: https://github.com/1040003585/WebScrapingWithPython
