Scrapy: Recursively Crawling Data into a Database (Example 2)


After Scrapy has crawled a set of links, how do you go on and crawl the content behind each link?
parse() can return a list of Requests or a list of items. If it returns Requests, they are put on the queue of pages to crawl next; if it returns items, those items are passed on to the pipelines for processing (or saved directly if you use the default Feed exporter). So if parse() returns the next links as Requests, how do the items get returned and saved? A Request accepts a callback argument that names the function used to parse the page that Request fetches (the callback for start_urls defaults to the parse() method). So you can have parse() return Requests, and have a second method, such as parse_item (called parse2 in the code below), return the items.
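In outline, the pattern looks like this (a minimal sketch using the same old-style Scrapy API as the example below; the spider name, URL and XPath are placeholders and not part of the BBS example):

# -*- coding: utf-8 -*-
# Minimal sketch: parse() yields Requests, parse_item() returns the items.
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy.item import Item, Field

class SketchItem(Item):
    url = Field()

class SketchSpider(BaseSpider):
    name = "sketch"                              # placeholder name
    start_urls = ["http://example.com/list"]     # placeholder URL

    def parse(self, response):
        # Default callback for start_urls: collect links, follow each one.
        hxs = HtmlXPathSelector(response)
        for href in hxs.select('//a/@href').extract():   # placeholder XPath; assumes absolute links
            yield Request(href, callback=self.parse_item)

    def parse_item(self, response):
        # Callback for each followed link: build and return the item here.
        item = SketchItem()
        item['url'] = response.url
        return item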

A complete example, crawling the Nanjing University BBS:

1. The file under the spiders directory:
# -*- coding: utf-8 -*-
import chardet
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.utils.url import urljoin_rfc
from scrapy.http import Request
from tutorial.items import bbsItem

class bbsSpider(BaseSpider):
    name = "boat"
    allowed_domains = ["bbs.nju.edu.cn"]
    start_urls = ["http://bbs.nju.edu.cn/bbstop10"]

    def parseContent(self, content):
        # Pull author, board and time out of the raw post text by position.
        content = content[0].encode('utf-8')
        #print chardet.detect(content)
        #print content
        authorIndex = content.index('信區')
        author = content[11:authorIndex-2]
        boardIndex = content.index('標 題')
        board = content[authorIndex+8:boardIndex-2]
        timeIndex = content.index('南京大學小百合站 (')
        time = content[timeIndex+26:timeIndex+50]
        return (author, board, time)
        #content = content[timeIndex+58:]
        #return (author, board, time, content)

    def parse2(self, response):
        # Callback for each post page: fill in the item passed via meta and return it.
        hxs = HtmlXPathSelector(response)
        item = response.meta['item']
        content = hxs.select('/html/body/center/table[1]/tr[2]/td/textarea/text()').extract()
        parseTuple = self.parseContent(content)
        item['author'] = parseTuple[0].decode('utf-8')
        item['board'] = parseTuple[1].decode('utf-8')
        item['time'] = parseTuple[2]
        #item['content'] = parseTuple[3]
        return item

    def parse(self, response):
        # Parse the top-10 list page: collect title/link pairs, then follow each link.
        hxs = HtmlXPathSelector(response)
        items = []
        title = hxs.select('/html/body/center/table/tr[position()>1]/td[3]/a/text()').extract()
        url = hxs.select('/html/body/center/table/tr[position()>1]/td[3]/a/@href').extract()
        for i in range(0, 10):
            item = bbsItem()
            item['link'] = urljoin_rfc('http://bbs.nju.edu.cn/', url[i])
            item['title'] = title[i]
            items.append(item)
        for item in items:
            yield Request(item['link'], meta={'item': item}, callback=self.parse2)
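The bbsItem class imported at the top of the spider is not shown in the original post; a minimal tutorial/items.py consistent with the fields the spider assigns would look roughly like this (the content field appears only because of the commented-out lines):

# -*- coding: utf-8 -*-
# tutorial/items.py -- minimal item definition matching the fields used above
# (not part of the original post; field names follow the spider code).
from scrapy.item import Item, Field

class bbsItem(Item):
    title = Field()
    link = Field()
    author = Field()
    board = Field()
    time = Field()
    #content = Field()   # only needed if the commented-out content lines are enabled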

2. The pipelines file:
# -*- coding: utf-8 -*-
# Define your item pipelines here
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/topics/item-pipeline.html

from twisted.enterprise import adbapi
import MySQLdb
import MySQLdb.cursors

class MySQLStorePipeline(object):
    def __init__(self):
        # Twisted's adbapi runs the blocking MySQLdb calls in a thread pool,
        # so the reactor is not blocked while items are written.
        self.dbpool = adbapi.ConnectionPool('MySQLdb',
            db='test',
            user='root',
            passwd='root',
            cursorclass=MySQLdb.cursors.DictCursor,
            charset='utf8',
            use_unicode=False
        )

    def process_item(self, item, spider):
        # Schedule the insert asynchronously; the item is passed on immediately.
        query = self.dbpool.runInteraction(self._conditional_insert, item)
        return item

    def _conditional_insert(self, tx, item):
        tx.execute('insert into info values (%s, %s, %s)',
                   (item['author'], item['board'], item['time']))
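The INSERT above assumes a table named info with three string columns already exists in the test database. The original post does not show the schema, so the column names below are assumptions; any three-column layout in the order author, board, time will work:

# -*- coding: utf-8 -*-
# One-off helper to create the table the pipeline writes to (not part of the original post).
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='root', passwd='root',
                       db='test', charset='utf8')
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS info (
        author VARCHAR(100),
        board  VARCHAR(100),
        time   VARCHAR(50)
    ) DEFAULT CHARSET=utf8
""")
conn.commit()
conn.close()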


3. Configure settings.py:
    ITEM_PIPELINES = ['tutorial.pipelines.MySQLStorePipeline']
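The list form above is the old Scrapy syntax. Newer Scrapy versions expect a dict that maps the pipeline path to an order value, for example:

    ITEM_PIPELINES = {
        'tutorial.pipelines.MySQLStorePipeline': 300,   # the order value 300 is arbitrary
    }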
