Scraping Shanghai Baoshan housing listings from Anjuke with Scrapy and storing them in MySQL

Source code download: https://download.csdn.net/download/dabao87/11997988

Setting up a virtual environment and installing Python is not covered here; if you need help with that, see my other posts.

Installing a virtual environment: https://blog.csdn.net/dabao87/article/details/102743386

Create a project from the command line:

scrapy startproject anjuke

This command creates a folder named anjuke containing a set of files for you to configure.

Create a spider:

First, change into the project folder you just created:

cd anjuke
scrapy genspider anju shanghai.anjuke.com

At this point the folder structure should look like this:
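A project generated by the two commands above typically has the following layout (anju.py is the spider we just generated):

anjuke/
├── scrapy.cfg
└── anjuke/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── anju.py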

Create the item

An item is a container for the scraped data; it works much like a Python dictionary.

Modify items.py as follows:
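The spider below fills in the community name, address, layout, area, and price, so items.py can look roughly like this:

# -*- coding: utf-8 -*-
import scrapy


class AnjukeItem(scrapy.Item):
    name = scrapy.Field()     # community name
    address = scrapy.Field()  # address
    type_ = scrapy.Field()    # layout, e.g. two bedrooms and one living room
    area = scrapy.Field()     # floor area
    price = scrapy.Field()    # total price
    # tags = scrapy.Field()   # optional: tags the site assigns to the listing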

Configure settings.py

A few important settings need to be changed in settings.py. Anjuke tries to block crawlers, so our requests have to look like a normal user's visits.

Uncomment the USER_AGENT setting in settings.py.

Paste the user-agent string you copied from your browser into the USER_AGENT setting.

Also uncomment this setting:

COOKIES_ENABLED = False

Add import random at the top of the file, then set:

DOWNLOAD_DELAY = random.choice([1,2])

With that, the configuration is done and requests now resemble a normal user's visits.
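Taken together, the relevant lines in settings.py look like this (note that random.choice is evaluated only once, when the settings are loaded, so the delay is fixed at either 1 or 2 seconds for the whole run):

import random

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'

COOKIES_ENABLED = False

DOWNLOAD_DELAY = random.choice([1,2])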

The code in anju.py:

# -*- coding: utf-8 -*-
import scrapy
from anjuke.items import AnjukeItem  # the item class defined in items.py


class AnjuSpider(scrapy.Spider):
    name = 'anju'
    allowed_domains = ['shanghai.anjuke.com']
    start_urls = ['https://shanghai.anjuke.com/sale/baoshan/m2401/']

    def parse(self, response):
        divs = response.xpath('//li[@class="list-item"]')  # select each listing block from the response
        for div in divs:
            item = AnjukeItem()  # instantiate an item object
            # the community name and the address share one title attribute, separated by "\xa0\xa0", so split it
            address = div.xpath('.//span[@class="comm-address"]/@title').extract_first()
            item['address'] = address[address.index("\xa0\xa0") + 2:]  # address: everything after "\xa0\xa0"
            item['name'] = address[:address.index("\xa0\xa0")]  # community name: everything before "\xa0\xa0"
            try:
                item['type_'] = div.xpath('.//div[@class="details-item"]/span/text()').extract_first()  # layout, e.g. two bedrooms and one living room
            except Exception:
                pass

            # item['tags'] = div.xpath('.//span[@class="item-tags tag-metro"]/text()').extract()  # tags the site assigns to the listing

            price = div.xpath('.//span[@class="price-det"]/strong/text()').extract_first()  # price
            item['price'] = price + '萬'
            try:
                item['area'] = div.xpath('.//div[@class="details-item"]/span/text()').extract()[1:2]  # floor area
            except Exception:
                pass
            yield item

        next_ = response.xpath('//div[@class="multi-page"]/a[@class="aNxt"]/@href').extract_first()  # link to the next page
        print('-------next----------')
        print(next_)
        if next_:  # the last page has no next link
            yield response.follow(url=next_, callback=self.parse)  # queue the next page for crawling

    

Run:

scrapy crawl anju


The spider's name is anju, so the command above must use anju.

Result:

The tags and class names on the Anjuke site may change over time. Don't copy my code verbatim; it may no longer run as-is.
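If the spider suddenly returns nothing, test the selectors against the live page in the Scrapy shell before changing anything else:

scrapy shell "https://shanghai.anjuke.com/sale/baoshan/m2401/"
>>> response.xpath('//li[@class="list-item"]')
>>> response.xpath('//span[@class="comm-address"]/@title').extract_first()

If these return empty results, the class names have changed and the XPath expressions in anju.py need to be updated.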


Storing into MySQL

Reference: https://www.cnblogs.com/ywjfx/p/11102081.html

Create a new database:

scrapy

Create a new table:

anjuke
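The pipeline below inserts five text values, so a one-off script like this is enough to create the database and the table (the column types are an assumption; adjust host, user, and password to your own MySQL setup):

import pymysql

# one-off script: create the scrapy database and the anjuke table
conn = pymysql.connect(host='127.0.0.1', user='root', password='111111', port=3306)
cursor = conn.cursor()
cursor.execute('CREATE DATABASE IF NOT EXISTS scrapy DEFAULT CHARACTER SET utf8mb4')
cursor.execute('''
    CREATE TABLE IF NOT EXISTS scrapy.anjuke (
        id INT AUTO_INCREMENT PRIMARY KEY,
        address VARCHAR(255),
        name VARCHAR(255),
        type_ VARCHAR(64),
        area VARCHAR(64),
        price VARCHAR(64)
    ) DEFAULT CHARACTER SET utf8mb4
''')
conn.commit()
cursor.close()
conn.close()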

In pipelines.py:

import pymysql

class AnjukePipeline(object):
    def __init__(self):
        # connect to the MySQL database
        self.connect = pymysql.connect(host='127.0.0.1', user='root', password='111111',
                                       db='scrapy', port=3306, charset='utf8')
        self.cursor = self.connect.cursor()

    def process_item(self, item, spider):
        # write the item into the database; a parameterized query avoids quoting problems
        self.cursor.execute(
            'insert into anjuke(address, name, type_, area, price) values (%s, %s, %s, %s, %s)',
            (item['address'], item['name'], item['type_'], str(item['area']), item['price']))
        self.connect.commit()
        return item

    # close the database connection when the spider finishes
    def close_spider(self, spider):
        self.cursor.close()
        self.connect.close()

In settings.py

The main change is to uncomment ITEM_PIPELINES and make sure it points at this project's pipeline class:

# -*- coding: utf-8 -*-

# Scrapy settings for anjuke project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

import random

BOT_NAME = 'anjuke'

SPIDER_MODULES = ['anjuke.spiders']
NEWSPIDER_MODULE = 'anjuke.spiders'

LOG_LEVEL="WARNING"
LOG_FILE="./anjuke.log"
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'

# Obey robots.txt rules
#ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = random.choice([1,2])
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'anjuke.middlewares.AnjukeSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'anjuke.middlewares.AnjukeDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'anjuke.pipelines.AnjukePipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'



# MySQL connection settings
# database host
MYSQL_HOST = 'localhost'
# database user
MYSQL_USER = 'root'
# database password
MYSQL_PASSWORD = '111111'
# database port
MYSQL_PORT = 3306
# database name
MYSQL_DBNAME = 'scrapy'
# database character set
MYSQL_CHARSET = 'utf8'
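The hard-coded connection in AnjukePipeline above does not actually read these MYSQL_* values. If you prefer to keep the credentials only in settings.py, a sketch using Scrapy's from_crawler hook looks like this:

import pymysql


class AnjukePipeline(object):
    def __init__(self, host, user, password, port, db, charset):
        # connect using the values handed over from settings.py
        self.connect = pymysql.connect(host=host, user=user, password=password,
                                       port=port, db=db, charset=charset)
        self.cursor = self.connect.cursor()

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this hook and gives us access to the project settings
        s = crawler.settings
        return cls(host=s.get('MYSQL_HOST'), user=s.get('MYSQL_USER'),
                   password=s.get('MYSQL_PASSWORD'), port=s.getint('MYSQL_PORT'),
                   db=s.get('MYSQL_DBNAME'), charset=s.get('MYSQL_CHARSET'))

    def process_item(self, item, spider):
        # same parameterized insert as in the pipeline shown earlier
        self.cursor.execute(
            'insert into anjuke(address, name, type_, area, price) values (%s, %s, %s, %s, %s)',
            (item['address'], item['name'], item['type_'], str(item['area']), item['price']))
        self.connect.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.connect.close()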


Note the following:

Check whether pymysql is installed:

pip list

I had already installed it, but when running the crawler I still got: ModuleNotFoundError: No module named 'pymysql'

When that happens, check which Python installation the module was actually installed into.

Run:

where python

I have two install locations: the system Python installation and the virtual environment. I had just installed pymysql into the virtual environment, which is why the crawler reported that the pymysql module was missing.

So I switched to the system Python environment and checked again:

pip list

pymysql was not listed there, so I installed it right away.

Note: be sure to run the install from that Python's Scripts folder.

Run:

pip install pymysql

Check again whether pymysql is now installed:

pip list

This time it should be there.
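A quick way to confirm that the interpreter you are about to run Scrapy with can see the module:

python -c "import pymysql; print(pymysql.VERSION)"

If this prints a version tuple instead of ModuleNotFoundError, you are in the right environment.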

Go back to the crawler project and run the spider again:

scrapy crawl anju

The scraped data should now be stored in the database.
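To double-check that rows are really arriving, query the table with the same connection parameters the pipeline uses:

import pymysql

conn = pymysql.connect(host='127.0.0.1', user='root', password='111111', db='scrapy', port=3306)
cursor = conn.cursor()
cursor.execute('SELECT COUNT(*) FROM anjuke')
print(cursor.fetchone()[0], 'rows stored')
cursor.close()
conn.close()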
