Douban: Notes on Crawling a Doulist

I recently organized the data from the Douban 2020 movie calendar into a doulist on Douban, but a doulist offers no convenient way to filter or query its entries, so I decided to crawl the data and filter it myself. Hence this write-up. The implementation is in Python 3.

Code Implementation

The crawler itself is very simple: it uses the requests_html package for Python 3. Open the doulist page and inspect its HTML structure, and you will find that each movie's information is contained in a div[@class='bd doulist-subject']. All we need to do is pull out the title, rating, genre, cast, director, year, and so on.
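
As a quick sanity check of that XPath (a minimal sketch, assuming requests_html is installed; the URL is the doulist crawled later in this post):

from requests_html import HTMLSession

session = HTMLSession()
r = session.get('https://www.douban.com/doulist/122330446/')
# each movie card sits in its own div[@class='bd doulist-subject']
subjects = r.html.xpath("//div[@class='bd doulist-subject']")
print(len(subjects))  # a doulist page holds up to 25 cards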

The only slightly tricky part is that the director, cast, genre, and country/region all live in a single div[@class='abstract'], separated only by line breaks. So the text has to be flattened (newlines removed) and each field cut out separately, as sketched below.
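
A minimal sketch of that extraction, using a made-up abstract string in the layout the Douban page uses (note that the live page text is in Simplified Chinese):

import re

# made-up sample in the layout of div[@class='abstract'], newlines already stripped
regstr = '导演: 克里斯托弗·诺兰 主演: 莱昂纳多·迪卡普里奥 类型: 科幻 制片国家/地区: 美国 年份: 2010'
director = re.findall(r'导演:(.*?)主演', regstr)[0].strip()
movietype = re.findall(r'类型:(.*?)制片国家/地区', regstr)[0].strip()
created = int(regstr.split('年份:')[-1])
print(director, movietype, created)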

After crawling, I chose to insert the data into MySQL. That part is also straightforward, so I won't describe it in detail. The complete code follows.
MySQL:

CREATE TABLE movie(
mid int PRIMARY KEY AUTO_INCREMENT,
title varchar(200),
rate decimal(5,1),      -- Douban rating, e.g. 8.7
type varchar(200),      -- genre(s)
director varchar(200),
starring varchar(200),
state varchar(200),     -- country/region of production
created int             -- release year
) AUTO_INCREMENT = 1;

Python:

import math
import re

import pymysql
import pyodbc
from requests_html import HTMLSession

class MSSQL:
    """Alternative backend: SQL Server via pyodbc (not used in __main__)."""
    def __init__(self):
        self.server = 'mssqlserver'
        self.database = 'douban'
        self.username = 'admin'
        self.password = 'password'
        self.driver = '{ODBC Driver 13 for SQL Server}'

    def connect(self):
        connection = pyodbc.connect(
            'DRIVER=' + self.driver + ';SERVER=' + self.server +
            ';PORT=1433;DATABASE=' + self.database +
            ';UID=' + self.username + ';PWD=' + self.password)
        return connection

    def execquery(self, sqltext):
        connection = self.connect()
        cursor = connection.cursor()
        return cursor.execute(sqltext)

    def execscalar(self, sqltext):
        connection = self.connect()
        cursor = connection.cursor()
        cursor.execute(sqltext)
        connection.commit()

    def insert_douban_movie(self, title, rate, director, starring, movietype, countrystate, releasetime):
        sqltext = ("insert into douban_movie(title,rate,director,starring,movietype,countrystate,releasetime) "
                   "values(?,?,?,?,?,?,?)")
        connection = self.connect()
        cursor = connection.cursor()
        # pyodbc accepts query parameters as extra positional arguments
        cursor.execute(sqltext, title, rate, director, starring, movietype, countrystate, releasetime)
        connection.commit()

class MYSQL:
    def __init__(self):
        self.server = 'mysqlinstance'
        self.database = 'douban'
        self.username = 'username'
        self.password = 'password'

    def connect(self):
        # pymysql 1.0+ no longer accepts positional arguments
        conn = pymysql.connect(host=self.server, user=self.username,
                               password=self.password, database=self.database)
        return conn

    def insert_movie(self, title, rate, movie_type, director, starring, state, created):
        conn = self.connect()
        cursor = conn.cursor()
        try:
            cursor.execute(
                "INSERT INTO movie(title,rate,type,director,starring,state,created) "
                "VALUES(%s,%s,%s,%s,%s,%s,%s);",
                (title, rate, movie_type, director, starring, state, created))
            conn.commit()
        except Exception as e:
            conn.rollback()
            print("insert error:{error}".format(error=e))
        finally:
            cursor.close()
            conn.close()

    def test(self):
        db = self.connect()
        cursor = db.cursor()
        cursor.execute("SELECT VERSION()")
        data = cursor.fetchone()
        print("Database version : %s " % data)
        db.close()

class HtmlCrawler:
    def __init__(self):
        self.session = HTMLSession()

    def get_doulist(self, doulist_url):
        # read the total item count from the filter bar, e.g. "(365)"
        r = self.session.get(doulist_url)
        page_size = 25
        total_number = int(r.html.xpath("//div[@class='doulist-filter']/a/span")[0].text.replace('(', '').replace(')', ''))
        total_page = math.ceil(total_number / page_size)

        # the list is paginated via the ?start= offset parameter
        for i in range(0, total_page):
            doulist_url2 = doulist_url + '/?start=' + str(i * page_size)
            self.get_movies(doulist_url2)

    def get_movies(self, doulist_url):
        r = self.session.get(doulist_url)
        movies_title = r.html.xpath("//div[@class='bd doulist-subject']//div[@class='title']/a")
        movies_rate = r.html.xpath("//div[@class='bd doulist-subject']//div[@class='rating']/span[@class='rating_nums']")
        movies_abstract = r.html.xpath("//div[@class='bd doulist-subject']//div[@class='abstract']")
        for i in range(0, len(movies_title)):
            # the abstract block holds all fields separated by newlines;
            # flatten it, then cut each field out (the page text is Simplified Chinese)
            regstr = movies_abstract[i].text.strip().replace('\n', '')
            re1 = r'导演:(.*?)主演'
            re2 = r'主演:(.*?)类型'
            re3 = r'类型:(.*?)制片国家/地区'
            re4 = r'制片国家/地区:(.*?)年份'

            director = re.findall(re1, regstr)[0].strip()
            starring = re.findall(re2, regstr)[0].strip()
            movietype = re.findall(re3, regstr)[0].strip()
            state = re.findall(re4, regstr)[0].strip()
            created = int(regstr.split('年份:')[-1])
            MYSQL().insert_movie(movies_title[i].text, float(movies_rate[i].text),
                                 movietype, director, starring, state, created)

if __name__ == "__main__":
    url = 'https://www.douban.com/doulist/122330446'
    HtmlCrawler().get_doulist(url)

Statistics and Analysis

I originally planned to just write a few ad-hoc queries and leave it at that, but that turned out not to fit my lazy style, so I simply loaded the data into Power BI and built a simple query page. Here is what it looks like:
[Screenshot: Power BI query page (Snipaste_2020-08-07_09-51-45.png)]
The search box at the top is a Text Filter visual. I also considered adding a Hierarchy Slicer, but that would have meant splitting the cast, director, region, and genre columns out of the source data and de-duplicating them, which felt like too much work, so I dropped the idea.
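
For anyone who would rather stay in SQL, the kind of ad-hoc query I originally had in mind looks something like this (a sketch reusing the MYSQL class above; the genre keyword and rating cutoff are just example filter values):

# list sci-fi titles rated 8.5 or above, newest first
conn = MYSQL().connect()
cursor = conn.cursor()
cursor.execute(
    "SELECT title, rate, created FROM movie "
    "WHERE type LIKE %s AND rate >= %s ORDER BY created DESC;",
    ('%科幻%', 8.5))
for title, rate, created in cursor.fetchall():
    print(title, rate, created)
conn.close()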

Gripes

Scanning on the Xiaomi Mix 3 is terrible. While using Douban's scan feature to log movie info, the camera would freeze after a dozen or so scans, and only a phone reboot would fix it. I tried the same thing on an iPhone 7 and a Huawei Mate 30 and neither had this problem, so it is clearly not Douban's fault.
