Using Python to scrape all shop information for the Shantou area on Meituan Food (美團美食)

1. Objective

Fetch every review of every shop on Meituan Food and save the data both to a database and to local files.

2. Implementation steps

Getting the poiId of every shop

First, look at the URL of a shop's detail page: it ends with a string of digits, and that string is the shop's unique id, which we will call the poiId. So, before we can crawl the review data of every shop, we have to collect every shop's poiId.

Go back up to the listing page, open the developer console, and click through the Preview tab of each request until you find the one whose response carries the poiId data.

What we need to do is work out the pattern of that request and reproduce it from our own client; once we can do that, we can obtain the poiId of every shop.

Next comes the pattern hunting. Look at the request that gets sent. It is long, but that's fine; let's break it down piece by piece.

cityName=%E6%B1%95%E5%A4%B4& (the Chinese city name after URL encoding; see the quote() check after this list)

cateId=0&areaId=0&sort=&dinnerCountAttrId=& (a fixed string; its exact meaning is unknown)

page=1 (the page number)

userId=&uuid=9efd650a0d204774ba7a.1577010898.1.0.0 (a cookie value that is refreshed periodically and identifies the user; it is issued by the backend. Press F12 to open the console and check it under Application → Cookies.)

platform=1&partner=126& (a fixed string)

originUrl=https%3A%2F%2Fst.meituan.com%2Fmeishi%2F (the URL of the current page, URL-encoded)

riskLevel=1&optimusCode=10 (a fixed string)

_token=eJx1j1tvozAQhf%2BLX4OCDZiYvEEuu5ASQqGQpOoDIdxrmmAn0FT972u0uw%2F7sNJIc%2BbM0aeZL9DZZzBHEBoQSuCedWAO0BROdSABzsQGz2YQQQNjREQg%2Fccjmq5L4NRFSzB%2FJYohEUzeRuNZzK8IC6QgwzfptzZ0IRVN1BiyRQaUnF%2FYXJYZn9Ks4reknaYfVBaalZUsbvhPAAgCDUcCgaqkYTQazWiInvzp%2FO%2FsiqcEi1VFK1Tm9O91irhZr%2Fxyfy%2B1zVatrYPb7AezdSz%2FVL0Xvem5rDvuNfVHs3ZsDzaLhq%2FCa2XGrTFpL3Iw%2BAuTDMWS1rDcHnaIDC95PZucLrKMO9s77lhAbvoLjjM3juLwqt4CPSx6Kyw3k1SlqbM9J9q9R8vIoQ6nnoIvm%2FXjnq%2BLY%2FdQ%2FDJ%2F3pVtmlKVLvIocH5Gp9uTlusHjz662La4e93lONGGSjnbbbcOlivn8MjqzxwmW3WTqYteiXXGFKsk%2FgTZMFfB9y82H5QP (the _token value; it is refreshed periodically)
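To make the URL-encoding claims above concrete, here is a quick check with urllib.parse.quote (the city name is the simplified form 汕头, which is what the %-encoded value decodes to):

import urllib.parse

print(urllib.parse.quote('汕头'))                                       # %E6%B1%95%E5%A4%B4  -> the cityName value
print(urllib.parse.quote('https://st.meituan.com/meishi/', safe=''))    # https%3A%2F%2Fst.meituan.com%2Fmeishi%2F -> the originUrl value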

Among these parameters, a few are required (and all of them can be extracted with regular expressions):

 

uuid (read from the cookie; in principle it should be re-fetched periodically, but in my case a single value kept working, presumably because the backend does not actually validate this field);

city (the name of the city the shops belong to);

page (the page number; obtained by dividing the total shop count by the maximum number of entries per page. The total can be found by opening the "meishi/" document and searching it with Ctrl+F for totalCount).

Below is the source of config.py, which obtains these parameters:

# Get the city name, uuid, shop count and number of pages
import requests
import re    # regular expressions
import math


# Get the city name, uuid, shop count and number of pages
def getInfo():
    """Fetch the uuid"""
    url = 'https://st.meituan.com/meishi/'  # Shantou food listing
    headers = {
        'Host': 'st.meituan.com',
        'Referer': 'https://st.meituan.com/',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36',
    }
    res = requests.get(url, headers=headers).text
    # findall(pattern, string, flags=0) returns every substring of string that matches pattern.
    # The r prefix makes the pattern a raw string (\n is not a newline); re.S lets . also match newlines.
    if res:
        uuid = re.findall(r'"uuid":"(.*?)"', res, re.S)[0]
        city = re.findall(r'"chineseFullName":"(.*?)"', res, re.S)[0]
        shopsNum = re.findall(r'"totalCounts":(\d+)', res, re.S)[0]
        with open('./output_file/uuid_city_shopsNum.log', 'w', encoding="utf-8") as f:
            print('chrome_uuid:' + uuid + '\n' + 'city:' + city + '\n' + 'shopsNum:' + str(shopsNum))
            f.write('chrome_uuid:' + uuid + '\n' + 'city:' + city + '\n' + 'shopsNum:' + str(shopsNum))
    ans = {
        'uuid': uuid,
        'city': city,
        'shopsNum': int(shopsNum),
        'pages': math.ceil(int(shopsNum) / 15),  # 15 shops per listing page
    }
    return ans

ans = getInfo()
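For orientation, getInfo() returns a dict of the following shape. The uuid shown is the one from the example request earlier; the counts are placeholders, since the real numbers depend on when the page is fetched:

# illustrative only: the counts are placeholders, not real crawl results
ans = {
    'uuid': '9efd650a0d204774ba7a.1577010898.1.0.0',
    'city': '汕头',
    'shopsNum': 450,   # placeholder total
    'pages': 30,       # math.ceil(450 / 15)
}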

Cracking the _token parameter

How the _token parameter is constructed:

Decryption: the '=' padding at the end of an existing _token suggests Base64, so Base64-decode it to get a bytes string; after zlib decompression you obtain the dictionary from which _token is generated. Two of its fields change on every request: ts and cts, where ts is a 13-digit (millisecond) timestamp and cts is ts + 100*1000. There is also a sign field in the same format as _token; decoding sign the same way yields a query string, and the uuid it contains can be regex-matched out of the home page source.
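To verify this yourself, here is a minimal sketch that reverses the two layers, assuming token_str holds a _token value copied out of DevTools (the '...' below is a placeholder to replace with a real captured token):

import base64, json, zlib
import urllib.parse

def decode_layer(s):
    """Reverse one layer: URL-unquote, Base64-decode, then zlib-decompress."""
    return zlib.decompress(base64.b64decode(urllib.parse.unquote(s))).decode()

token_str = '...'  # placeholder: paste a _token value captured in DevTools here
outer = json.loads(decode_layer(token_str))   # dict containing ts, cts, sign, ...
print(outer['ts'], outer['cts'])
print(decode_layer(outer['sign']))            # sign decodes to a query string that contains the uuid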

 

Encryption: from the above we know how _token is built: two rounds of zlib compression plus Base64 encoding. The first round is applied to the sign parameter. The second round is applied to the dictionary that generates _token; once that dictionary is assembled, running it through the same compression and encoding once more yields the final _token.

 

One more thing worth pointing out: even after _token is cracked, some of its fields are constants and some are variables, but simulating them follows the same approach as before, so I won't go through each one here; see the code for details.

 

Source of get_shops.py:

'''
    Builds and stores the ajax URL of every listing page
'''
import base64, zlib
import time
import random
import pandas as pd
import os
import urllib.parse
import json
import re  # regular expressions
from config import ans

print('ans:', ans)

get_shops_url = []  # stores every generated ajax URL

for page in range(1, ans['pages'] + 1):
    DATA = {
        "cityName": '汕头',  # must match the %-encoded cityName used in the request
        "cateId": '0',
        "areaId": "0",
        "sort": "",
        "dinnerCountAttrId": "",
        "page": page,
        "userId": "",
        "uuid": ans['uuid'],
        "platform": "1",
        "partner": "126",
        "originUrl": "https://{}.meituan.com/meishi".format('st'),
        "riskLevel": "1",
        "optimusCode": "1"
    }
    SIGN_PARAM = "areaId={}&cateId={}&cityName={}&dinnerCountAttrId={}&optimusCode={}&originUrl={}/pn{}/&page={}&partner={}&platform={}&riskLevel={}&sort={}&userId={}&uuid={}".format(
        DATA["areaId"],
        DATA["cateId"],
        re.findall(r"b'(.+?)'", str(DATA["cityName"].encode(encoding='UTF-8', errors='strict')))[0],
        DATA["dinnerCountAttrId"],
        DATA["optimusCode"],
        DATA["originUrl"],
        DATA["page"],
        DATA["page"],
        DATA["partner"],
        DATA["platform"],
        DATA["riskLevel"],
        DATA["sort"],
        DATA["userId"],
        DATA["uuid"]
    )

    def encrypt(data):
        """zlib-compress then Base64-encode"""
        binary_data = zlib.compress(data.encode())      # binary compression
        base64_data = base64.b64encode(binary_data)     # Base64 encoding
        return base64_data.decode()                     # return a utf-8 string

    def token():
        """Generate the _token parameter"""
        ts = int(time.time() * 1000)  # current time in milliseconds
        # brVD and brR are device parameters (browser width/height and the like);
        # they can be simulated from data prepared in advance
        json_path = os.path.dirname(os.path.realpath(__file__)) + '\\utils\\br.json'
        df = pd.read_json(json_path)
        brVD, brR_one, brR_two = df.iloc[random.randint(0, len(df) - 1)]  # iloc selects a row by position
        TOKEN_PARAM = {
            "rId": 100900,
            "ver": "1.0.6",
            "ts": ts,  # variable
            "cts": ts + random.randint(100, 120),  # measured: cts - ts is roughly 90-130
            "brVD": eval(brVD),  # variable
            "brR": [eval(brR_one), eval(brR_two), 24, 24],
            "bI": ["https://st.meituan.com/meishi/", ""],  # which page jumps to which page
            "mT": [],
            "kT": [],
            "aT": [],
            "tT": [],
            "aM": "",
            "sign": encrypt(SIGN_PARAM)
        }
        # binary compression
        binary_data = zlib.compress(json.dumps(TOKEN_PARAM).encode())
        # Base64 encoding
        base64_data = base64.b64encode(binary_data)
        # URL-escape the Base64 string before it goes into the query string
        return urllib.parse.quote(base64_data.decode(), 'utf-8')

    AJAXDATA = {
        'cityName': '%E6%B1%95%E5%A4%B4',
        'cateId': 0,
        'areaId': 0,
        'sort': '',
        'dinnerCountAttrId': '',
        'page': page,
        'userId': '',
        'uuid': ans['uuid'],
        'platform': 1,
        'partner': 126,
        'originUrl': 'https%3A%2F%2Fst.meituan.com%2Fmeishi%2F',
        'riskLevel': 1,
        'optimusCode': 10,
        '_token': token()
    }

    urlParam = 'https://st.meituan.com/meishi/api/poi/getPoiList?cityName={}&cateId={}&areaId={}&sort={}&dinnerCountAttrId={}' \
               '&page={}&userId={}&uuid={}&platform={}&partner={}&originUrl={}&riskLevel={}&optimusCode={}&_token={}'.format(
        AJAXDATA['cityName'],
        AJAXDATA['cateId'],
        AJAXDATA['areaId'],
        AJAXDATA['sort'],
        AJAXDATA['dinnerCountAttrId'],
        AJAXDATA['page'],
        AJAXDATA['userId'],
        AJAXDATA['uuid'],
        AJAXDATA['platform'],
        AJAXDATA['partner'],
        AJAXDATA['originUrl'],
        AJAXDATA['riskLevel'],
        AJAXDATA['optimusCode'],
        AJAXDATA['_token'],
    )
    get_shops_url.append(urlParam)  # collect this page's URL so save_shops_info.py can import and iterate get_shops_url

Next, the shop data returned by the ajax request is saved to txt/csv and to a MongoDB database. Because the corresponding operations (insert, delete, update, query) may also be needed elsewhere, they are written in a separate .py file and wrapped as classes. The source of save_data.py is as follows:

'''
    Classes for saving data to the database, to txt, or to csv
'''
import pandas as pd           # used to save data to csv
import pymongo

class MongoDB():
    def __init__(self, formName, collection='', result=''):
        self.host = 'localhost'
        self.port = 27017
        self.databaseName = 'meituan'
        self.formName = formName
        self.result = result
        self.collection = collection

    # connect to the database
    def collect_database(self):
        client = pymongo.MongoClient(host=self.host, port=self.port)  # connect to MongoDB
        db = client[self.databaseName]  # select the database
        collection = db[self.formName]  # select the collection to operate on
        print('Database connected')
        return collection

    # save data
    def save_to_Mongo(self):
        # collection = self.collect_database()
        try:
            if self.collection.insert_many(self.result):
                # print('Saved to MongoDB:', self.result)
                print('Saved to MongoDB')
        except Exception:
            print('Failed to save to MongoDB', self.result)

    # query data
    def selectMongoDB(self):
        # collection = self.collect_database()
        print('Total number of documents in the collection:', self.collection.count_documents({}))
        # print('Querying the database')
        # for x in self.collection.find():
        #     print(x)

    # delete data
    def delete_database(self):
        self.collection.delete_many({})  # empty the collection
        print('Collection emptied')

class SaveDataInFiles():
    def __init__(self, csv_url='', txt_url='', results=''):
        # the data to save
        self.results = results
        self.csv_url = csv_url
        self.txt_url = txt_url

    # write to both output files
    def saveResults(self):
        self.saveInCsv()
        self.saveInTxt()

    # save the results to a txt file, e.g. D:\python\meituan\output_file\proxyIp_kuai.txt
    def saveInTxt(self):
        txt = open(self.txt_url, 'w', encoding='utf-8')
        txt.truncate()  # clear the file before writing
        for item in self.results:
            itemStr = str(item)
            txt.write(itemStr)
            txt.write('\n')
        txt.close()

    # save the results to a csv file, e.g. D:\python\meituan\output_file\proxyIp_kuai.csv
    def saveInCsv(self):
        # print('csv:', self.results, self.csv_url)
        csvUrl = self.csv_url
        pd.DataFrame(self.results).to_csv(csvUrl, mode='a', encoding="utf-8-sig")  # avoids garbled Chinese in the csv
        print('Saved to csv')
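As a minimal usage sketch of these helpers (the collection name 'demo', the sample record, and the csv path are invented for illustration), you connect first and then pass the collection back in for the insert and query calls:

from save_data import MongoDB, SaveDataInFiles

sample = [{'poiId': 152376939, 'title': 'some shop', 'avgScore': 4.5}]  # made-up record

collection = MongoDB('demo', '', '').collect_database()        # connect and get the collection
MongoDB('demo', collection, sample).save_to_Mongo()             # insert
MongoDB('demo', collection, '').selectMongoDB()                 # count the stored documents

SaveDataInFiles('./output_file/demo.csv', '', sample).saveInCsv()  # illustrative path; csv copy for eyeballing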

Next, call these methods to persist the data fetched by the ajax requests. The most important target is the MongoDB database (connect to the database first, then run the insert/delete/update/query operations); saving to csv is only for inspecting the data conveniently. The source of save_shops_info.py is as follows:

'''
    Saves the basic information of every shop on every listing page
'''
import requests
import json
from get_shops import get_shops_url
from save_data import MongoDB
from save_data import SaveDataInFiles

output = []  # holds the final results
index = 1

# fetch the shop data behind one ajax URL
def get_shops_info(ajax_url):
    url = ajax_url  # ajax_url generated in get_shops.py
    headers = {
        'Host': 'st.meituan.com',
        'Referer': 'https://st.meituan.com/meishi/',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36',
        'X-Requested-With': 'XMLHttpRequest',
    }
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.json()
    except requests.ConnectionError as e:
        print('Error', e.args)

# pull the wanted fields out of the returned json
def save_shops_info(ajax_url, index):
    items = get_shops_info(ajax_url).get('data').get('poiInfos')
    output.extend(items)
    print('Appending shop data to the output list')

if __name__ == '__main__':
    # the crawl and insert calls below are left commented out; uncomment them to (re)populate the database
    # for ajaxUrl in get_shops_url:
    #     save_shops_info(ajaxUrl, index)
    # save the data to MongoDB
    collection = MongoDB('shops_info', '', '').collect_database()    # connect to the database
    # MongoDB('shops_info', collection, '').delete_database()  # empty the collection first
    # MongoDB('shops_info', collection, output).save_to_Mongo()
    # save the data to csv
    # SaveDataInFiles('D:\python\meituan\output_file\shops_info.csv', '', output).saveInCsv()
    # save the data to a json file
    # with open('D:\python\meituan\output_file\shops_info.json', 'w') as f:
    #     json.dump(output, f)
    # query the saved data
    MongoDB('shops_info', collection, '').selectMongoDB()

Getting the review data for each shop

Open the console again and you will see that the ajax request carrying the review data is very similar to the one that returned the shop listing:

 

Request URL:

https://www.meituan.com/meishi/api/poi/getMerchantComment?uuid=9efd650a0d204774ba7a.1577010898.1.0.0&platform=1&partner=126&originUrl=https%3A%2F%2Fwww.meituan.com%2Fmeishi%2F152376939%2F&riskLevel=1&optimusCode=10&id=152376939&userId=&offset=0&pageSize=10&sortType=1

 

The id=152376939 in this URL is exactly the poiId we saved earlier, and the reviews are paged with offset and pageSize (offset starts at 0 and grows in steps of pageSize, as the sketch below shows). So from here, following the same approach as before, we can fetch the review data the backend returns for every shop.
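Here is a minimal sketch of that pagination; the poiId and uuid values are simply the ones from the example request above, so treat them as placeholders:

import urllib.parse

def comment_url(poi_id, uuid, page, page_size=10):
    """Build the getMerchantComment URL for one page of reviews (offset = (page - 1) * page_size)."""
    params = {
        'uuid': uuid,
        'platform': 1,
        'partner': 126,
        'originUrl': 'https://www.meituan.com/meishi/{}/'.format(poi_id),
        'riskLevel': 1,
        'optimusCode': 10,
        'id': poi_id,
        'userId': '',
        'offset': (page - 1) * page_size,   # 0, 10, 20, ...
        'pageSize': page_size,
        'sortType': 1,
    }
    return 'https://www.meituan.com/meishi/api/poi/getMerchantComment?' + urllib.parse.urlencode(params)

# placeholders taken from the example request above
print(comment_url(152376939, '9efd650a0d204774ba7a.1577010898.1.0.0', page=1))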

Finally, the review data we fetch is saved as well. The source of detailPage_getComments.py is as follows:

# Using the Shantou shop records already stored in the database, crawl every shop's reviews

# Crawls Meituan reviews, e.g. https://www.meituan.com/meishi/41007600/
import requests  # sends the requests that imitate a browser
import math
import urllib.parse  # standard interface for building and parsing URLs
from selenium import webdriver
from save_data import MongoDB
from save_data import SaveDataInFiles
from config import ans
from requests.adapters import HTTPAdapter


#######################################################################################################################
# Class that fetches a shop's review tags and all of its reviews
class GetShopComments():
    def __init__(self, shopBasicInfo, uuid, shop_num=''):
        self.comments_ajax_url = "https://www.meituan.com/meishi/api/poi/getMerchantComment?"
        self.ajax_headers = {
            'Host': 'www.meituan.com',
            'Referer': 'https://www.meituan.com/meishi/41007600/',
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36',
            'X-Requested-With': 'XMLHttpRequest',
        }
        # maximum page count, derived from the shop record passed in from the earlier GetShopInformation step
        self.maxPage = math.ceil(shopBasicInfo['allCommentNum'] / 10)
        self.shopName = shopBasicInfo['title']
        self.poiId = shopBasicInfo['poiId']
        self.uuid = uuid['uuid']
        # self.uuid = uuid
        self.shop_num = shop_num

    # fetch all data (json) for one page of a shop: tags + reviews
    def get_comments_in_page(self, page):
        parms = {
            'basicUrl': 'https://www.meituan.com/meishi/api/poi/getMerchantComment?',
            'uuid': self.uuid,
            'platform': '1',
            'partner': '126',
            'originUrl': 'https%3A%2F%2Fwww.meituan.com%2Fmeishi%2F' + str(self.poiId) + '%2F',
            'riskLevel': '1',
            'optimusCode': '10',
            'id': self.poiId,
            'userId': '',
            'offset': (page - 1) * 10,  # offset steps by pageSize: 0, 10, 20, ...
            'pageSize': '10',
            'sortType': '1',
        }
        url = self.comments_ajax_url + urllib.parse.urlencode(parms)
        # retry the connection if it times out
        request = requests.Session()
        request.mount('http://', HTTPAdapter(max_retries=3))
        request.mount('https://', HTTPAdapter(max_retries=3))
        try:
            response = request.get(url, headers=self.ajax_headers, timeout=10)
            if response.status_code == 200:
                return response.json()
        # except requests.ConnectionError as e:
        except requests.exceptions.Timeout as e:
            print('Error', e.args)

    # parse the json and yield the review records
    def parse_comments_in_page(self, originJson, page):
        if originJson:
            items = originJson.get('data').get('comments')
            if items:
                for item in items:
                    comments = {
                        'shopName': self.shopName,
                        'page': page,
                        'username': item.get('userName'),
                        'user-icon': item.get('userUrl'),
                        'stars': item.get('star'),
                        'user-comment': item.get('comment'),
                        'user-comment-time': item.get('commentTime'),
                        'user-comment-zan': item.get('zanCnt')}
                    yield comments

    # parse the json and return the review-tag records
    def parse_comments_tags(self):
        if self.maxPage > 0:
            original_data = self.get_comments_in_page(1)
            if original_data:
                tags = original_data.get('data').get('tags')
                if tags:
                    for item in tags:
                        item['poiId'] = self.poiId
                        item['shopName'] = self.shopName
                    return tags

    # entry and exit point for the review data
    def get_comments(self):
        commentsData = []  # holds the results before they are written to the database
        if self.maxPage > 0:
            for page in range(1, self.maxPage + 1):
                print('Crawling shop No.' + str(self.shop_num) + ', page ' + str(page))
                original_data = self.get_comments_in_page(page)
                results = self.parse_comments_in_page(original_data, page)
                for result in results:
                    commentsData.append(result)
            return commentsData


#######################################################################################################################

if __name__ == '__main__':
    shop_num = 0  # counts which shop we are on
    # open the collections that will hold the review data
    tags_collection = MongoDB('shops_tags', '', '').collect_database()  # connect to the database
    comments_collection = MongoDB('shops_comments', '', '').collect_database()  # connect to the database
    # inspect the stored data
    # MongoDB('shops_comments', comments_collection).selectMongoDB()
    # empty the collections
    # MongoDB('shops_tags', tags_collection).delete_database()
    # MongoDB('shops_comments', comments_collection).delete_database()
    # load the shop records saved earlier
    collection = MongoDB('shops_info', '', '').collect_database()  # connect to the database
    shops = collection.find({}, {"poiId": 1, "title": 1, "allCommentNum": 1})  # first argument is the query filter ({} = all); the second projects only the poiId, title and allCommentNum fields
    shops = list(shops)  # turn the cursor into a list
    for items in shops[0:]:
        shop_num = shop_num + 1  # counts which shop we are on
        commentsRes = GetShopComments(items, ans, shop_num).get_comments()  # fetch all reviews for this shop
        tagsRes = GetShopComments(items, ans).parse_comments_tags()  # fetch the review tags
        MongoDB('shops_tags', tags_collection, tagsRes).save_to_Mongo()  # save the tag data
        MongoDB('shops_comments', comments_collection, commentsRes).save_to_Mongo()  # save the review data
        SaveDataInFiles('D:\python\meituan\output_file\shop_comments.csv', '', commentsRes).saveInCsv()  # also save the reviews to csv
        SaveDataInFiles('D:\python\meituan\output_file\shop_tags.csv', '', tagsRes).saveInCsv()  # also save the tags to csv

Original article: https://blog.csdn.net/qq_40511157/article/details/103641937
