【Python Crawler Series (28)】Lianjia Second-Hand Housing Data Collection 1 (Paginated Listings)

Collecting Lianjia second-hand housing listings

Taking Beijing second-hand housing listings as the example, the target fields are the ones captured in the code below (the original post shows them in a screenshot).

Crawler logic: [build the paginated URLs] → [scrape the listing data on each page]

Function-style layout:

Function 1: get_urls(city_url, n) → builds the list of paginated page URLs
         city_url: starting URL for a given city
         n: number of pages to collect

Function 2: get_data(ui, d_h, d_c, table) → scrapes a page and writes the records to MongoDB
         ui: URL of the listing page
         d_h: headers dict carrying the user-agent
         d_c: cookies dict
         table: MongoDB collection object
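Put together, the two functions are meant to chain like this (a minimal sketch; dic_headers, dic_cookies, and datatable are the objects set up later in this post):

urllst = get_urls('https://bj.lianjia.com/ershoufang/', 5)   # step 1: paginated URLs
for u in urllst:
    get_data(u, dic_headers, dic_cookies, datatable)         # step 2: scrape + store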

Setup and wrapping the first function

  1. Import the libraries and set up the script entry point
import requests
import time
from bs4 import BeautifulSoup
import pymongo

if __name__ == '__main__':

  2. Find the pagination URL pattern
    Checking pages 2-4 under the listing page is usually enough to spot it
u2 = 'https://bj.lianjia.com/ershoufang/pg2/'
u3 = 'https://bj.lianjia.com/ershoufang/pg3/'
u4 = 'https://bj.lianjia.com/ershoufang/pg4/'
......
  3. Wrap the first function, which returns the list of paginated URLs
def get_urls(city_url, n):
    '''Build the list of paginated page URLs.
    city_url: starting URL for a given city
    n: number of pages to collect
    '''
    lst = []
    for i in range(1, n + 1):
        lst.append(city_url + f'pg{i}/')
    return lst

print(get_urls('https://bj.lianjia.com/ershoufang/', 5))

The output is:

['https://bj.lianjia.com/ershoufang/pg1/',
 'https://bj.lianjia.com/ershoufang/pg2/',
 'https://bj.lianjia.com/ershoufang/pg3/',
 'https://bj.lianjia.com/ershoufang/pg4/',
 'https://bj.lianjia.com/ershoufang/pg5/']

Sending a request to the site

After building the URLs, check that the pages can actually be scraped. Using the first page's URL as an example (configure headers and cookies beforehand), the code is as follows:

dic_headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'}

cookies = "TY_SESSION_ID=a63a5c48-ee8a-411b-b774-82b887a09de9; lianjia_uuid=1e4ed8ae-d689-4d12-a788-2e93397646fd; _smt_uid=5dbcff46.49fdcd46; UM_distinctid=16e2a452be0688-0c84653ae31a8-e343166-1fa400-16e2a452be1b5b; _jzqy=1.1572667207.1572667207.1.jzqsr=baidu|jzqct=%E9%93%BE%E5%AE%B6.-; _ga=GA1.2.1275958388.1572667209; _jzqx=1.1572671272.1572671272.1.jzqsr=sh%2Elianjia%2Ecom|jzqct=/ershoufang/pg2l1/.-; select_city=310000; lianjia_ssid=a2a11c0a-c451-43aa-879e-0d202a663a5d; Hm_lvt_9152f8221cb6243a53c83b956842be8a=1582085114; CNZZDATA1253492439=1147125909-1572665418-https%253A%252F%252Fsp0.baidu.com%252F%7C1582080390; CNZZDATA1254525948=626340744-1572665293-https%253A%252F%252Fsp0.baidu.com%252F%7C1582083769; CNZZDATA1255633284=176672440-1572665274-https%253A%252F%252Fsp0.baidu.com%252F%7C1582083985; CNZZDATA1255604082=1717363940-1572665282-https%253A%252F%252Fsp0.baidu.com%252F%7C1582083899; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%2216e2a452d07c03-0d376ce6817042-e343166-2073600-16e2a452d08ab2%22%2C%22%24device_id%22%3A%2216e2a452d07c03-0d376ce6817042-e343166-2073600-16e2a452d08ab2%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24latest_referrer_host%22%3A%22%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_utm_source%22%3A%22baidu%22%2C%22%24latest_utm_medium%22%3A%22pinzhuan%22%2C%22%24latest_utm_campaign%22%3A%22sousuo%22%2C%22%24latest_utm_content%22%3A%22biaotimiaoshu%22%2C%22%24latest_utm_term%22%3A%22biaoti%22%7D%7D; _qzjc=1; _jzqa=1.941285633448461200.1572667207.1572671272.1582085116.3; _jzqc=1; _jzqckmp=1; _gid=GA1.2.1854019821.1582085121; Hm_lpvt_9152f8221cb6243a53c83b956842be8a=1582085295; _qzja=1.476033730.1572667206855.1572671272043.1582085116087.1582085134003.1582085295034.0.0.0.14.3; _qzjb=1.1582085116087.4.0.0.0; _qzjto=4.1.0; _jzqb=1.4.10.1582085116.1"
dic_cookies = {}
for i in cookies.split('; '):
    # split on the first '=' only, since cookie values may themselves contain '='
    k, v = i.split('=', 1)
    dic_cookies[k] = v

r = requests.get('https://bj.lianjia.com/ershoufang/pg1/',headers = dic_headers, cookies = dic_cookies)
print(r)  

The output is (a 200 response means the page can be scraped normally):

<Response [200]>
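As an extra guard (my addition, not part of the original post), you can branch on the status code instead of eyeballing the printed Response object:

if r.status_code == 200:
    print('page fetched OK, safe to parse')
else:
    # e.g. a redirect to a captcha/login page when the site blocks the crawler
    print('request blocked or failed:', r.status_code)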

Finding the tag for each field and extracting the data

  1. Use the first listing card on page 1 for trial-and-error (the original post walks through devtools screenshots for each field below)
  2. The tag holding the title field
  3. The tag holding the address
  4. The tag holding the house details
  5. The tags holding the follow count and listing date
  6. The tags holding the total price and unit price
  7. Extract the tag contents
r = requests.get('https://bj.lianjia.com/ershoufang/pg1/', headers=dic_headers, cookies=dic_cookies)
soup = BeautifulSoup(r.text, 'lxml')
dic = {}
dic['標題'] = soup.find('div', class_="title").a.text          # title
info1 = soup.find('div', class_="positionInfo").text
dic['小區'] = info1.split("    -  ")[0]                        # community
dic['地址'] = info1.split("    -  ")[1]                        # address
info2 = soup.find('div', class_="houseInfo").text
dic['戶型'] = info2.split(" | ")[0]                            # layout
dic['面積'] = info2.split(" | ")[1]                            # floor area
dic['朝向'] = info2.split(" | ")[2]                            # orientation
dic['裝修類型'] = info2.split(" | ")[3]                        # decoration
dic['樓層'] = info2.split(" | ")[4]                            # floor
dic['建築完工時間'] = info2.split(" | ")[5]                    # year built
dic['是否爲板房'] = info2.split(" | ")[6]                      # building type (slab/tower)
info3 = soup.find('div', class_="followInfo").text
dic['關注量'] = info3.split(" / ")[0]                          # follow count
dic['發佈時間'] = info3.split(" / ")[1]                        # listing date
dic['總價'] = soup.find('div', class_="totalPrice").text       # total price
dic['單價'] = soup.find('div', class_="unitPrice").text.replace('單價', '')  # unit price
dic['鏈接'] = soup.find('div', class_="title").a['href']       # link
print(dic)

The output is the populated dict (shown as a screenshot in the original post).
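One caveat: the splits above match exact runs of spaces around '-' and '|', which is brittle if Lianjia tweaks its whitespace. A more defensive variant for the positionInfo field (my own sketch, not from the original post) normalizes the separator with a regex:

import re

info1 = soup.find('div', class_="positionInfo").get_text()
parts = [p.strip() for p in re.split(r'\s*-\s*', info1)]   # tolerate any spacing around '-'
dic['小區'], dic['地址'] = parts[0], parts[1]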

Wrapping the second function

Once the trial run passes without errors, the code can be wrapped into a function.

  1. Configure the database
myclient = pymongo.MongoClient("mongodb://localhost:27017/")
db = myclient['鏈家二手房_1']
datatable = db['data_1']
# datatable.delete_many({})  # run this first if the collection already holds data
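A quick sanity check (my addition) confirms that MongoDB is reachable and shows how many documents the collection currently holds:

print(myclient.server_info()['version'])   # raises ServerSelectionTimeoutError if MongoDB is down
print(datatable.count_documents({}))       # 0 for a fresh collection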
  2. Wrap the function
def get_data(ui,d_h,d_c,table):
    '''Scrape the page and write the records to MongoDB.
    ui: URL of the listing page
    d_h: headers dict carrying the user-agent
    d_c: cookies dict
    table: MongoDB collection object
    '''
    ri = requests.get(ui,headers = d_h,cookies = d_c)
    soupi = BeautifulSoup(ri.text, 'lxml')
    lis = soupi.find('ul',class_="sellListContent").find_all("li")
    n = 0
    for li in lis:
        dic = {}
        dic['標題'] = li.find('div',class_="title").text
        info1 = li.find('div',class_="positionInfo").text
        dic['小區'] = info1.split("    -  ")[0]
        dic['地址'] = info1.split("    -  ")[1]
        info2 = li.find('div', class_="houseInfo").text
        dic['戶型'] = info2.split(" | ")[0]
        dic['面積'] = info2.split(" | ")[1]
        dic['朝向'] = info2.split(" | ")[2]
        dic['裝修類型'] = info2.split(" | ")[3]
        dic['樓層'] = info2.split(" | ")[4]
        dic['建築完工時間'] = info2.split(" | ")[5]
        dic['是否爲板房'] = info2.split(" | ")[6]
        info3 = li.find('div',class_="followInfo").text
        dic['關注量'] = info3.split(" / ")[0]
        dic['發佈時間'] = info3.split(" / ")[1]
        dic['價錢'] = li.find('div', class_="totalPrice").text
        dic['每平米價錢'] = li.find('div', class_="unitPrice").text
        dic['房間優勢'] = li.find('div', class_="tag").text
        dic['鏈接'] = li.find('a')['href']
        table.insert_one(dic)
        n += 1
    return n
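Before looping over every page, a single-page smoke test (my addition) is a cheap way to confirm the function works end to end; a Lianjia listing page typically carries around 30 cards, so the return value should be about 30:

print(get_data('https://bj.lianjia.com/ershoufang/pg1/',
               dic_headers, dic_cookies, datatable))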
  3. Progress output and exception handling
urllst = get_urls('https://bj.lianjia.com/ershoufang/', 5)   # the paginated URLs from function 1
errorlst = []
count = 0
for u in urllst:
    print('pausing between requests......')
    time.sleep(5)
    try:
        count += get_data(u, dic_headers, dic_cookies, datatable)
        print(f'successfully collected {count} records')
    except Exception:
        errorlst.append(u)
        print('collection failed for URL:', u)

The output is a running log of collected record counts (shown as a screenshot in the original post).
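The failed URLs collected in errorlst can be given a second chance. A simple retry pass (a sketch of my own, not in the original post) waits a little longer and tries each one once more:

for u in errorlst[:]:          # iterate over a copy so successes can be removed
    time.sleep(10)             # back off longer than the normal 5-second pause
    try:
        count += get_data(u, dic_headers, dic_cookies, datatable)
        errorlst.remove(u)
        print(f'retry succeeded, {count} records collected in total')
    except Exception:
        print('retry failed:', u)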
