Multi-threaded crawling of the Zhubajie website

This project uses multiple threads to crawl the information of every company in the IT category of the Zhubajie site.

 

Zhubajie homepage: https://guangzhou.zbj.com/

 

We want to crawl the 10 subcategories under the IT category.

 

By inspecting the page we find that all the subcategory links sit inside the div tag with class='channel-service-grid clearfix', so we can use the lxml library together with XPath to collect every subcategory URL.

The function is as follows:

def get_categories_url(url):
    # collect the URLs of all IT subcategories from the channel page
    details_list = []
    text = getHTMLText(url)
    html = etree.HTML(text)
    divs = html.xpath("//div[@class='channel-service-grid-inner']//div[@class='channel-service-grid-item' or @class='channel-service-grid-item second']")
    for div in divs:
        detail_url = div.xpath("./a/@href")[0]
        details_list.append(detail_url)
    return details_list
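For reference, a minimal usage sketch of this function (assuming getHTMLText and the imports from the full script further down are in scope; the /it channel URL is the one the full script uses later):

    categories = get_categories_url('https://guangzhou.zbj.com/it')
    print(len(categories), categories[:3])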

 

 

Open any subcategory, right-click a company and inspect it: the company's URL sits in the href attribute of the a tag with class='name', and we just need to prepend the 'https:' scheme to it.

The function is as follows:

def get_company_urls(url):
    # collect the company URLs listed on one page of a subcategory
    companies_list = []
    text = getHTMLText(url)
    html = etree.HTML(text)
    h4s = html.xpath("//h4[@class='witkey-name fl text-overflow']/a/@href")
    for h4 in h4s:
        company_url = 'https:' + h4  # the hrefs are protocol-relative, so prepend the scheme
        companies_list.append(company_url)
    return companies_list

 

 

 

For each listing page, we simply iterate over the results to get every company on that page.

Clicking into a few company pages, we find that companies basically fall into two types:

one type has a navigation bar with 首頁 (home), 買服務, 看案例, 交易評價 and 人才檔案 (talent profile) tabs,

while the other type lands directly on the 人才檔案 page.

 

As you can see, almost all of the data we want lives on the 人才檔案 page, so we need a check: if a shop has the 首頁/買服務/看案例/交易評價/人才檔案 navigation, we jump to its 人才檔案 page.

These tabs are placed in li tags, so the check works like this: find the ul tag with class='witkeyhome-nav clearfix' and get its li children. If no li tags are found, or the list of li tags has length 0, we are already on the 人才檔案 page and that URL needs no special handling. Otherwise, for shops that do not land directly on the 人才檔案 page, we take the href attribute under the 人才檔案 li tag and prepend the host ('https://profile.zbj.com' in the code below).

The code is as follows:

# (inside the loop over the companies on one page)
lis = html.xpath("//ul[@class='witkeyhome-nav clearfix']//li[@class=' ']")
if len(lis) == 0:
    # no navigation tabs: we are already on the 人才檔案 page
    company_url_queue.put(company)
    continue
for li in lis:
    try:
        if li.xpath(".//text()")[1] == '人才檔案':
            rcda_url = ('https://profile.zbj.com' + li.xpath("./a/@href")[0]).split('/salerinfo.html')[0] + '?isInOldShop=1'
            company_url_queue.put(rcda_url)
            break
        else:
            continue
    except:
        pass  # some shops have empty li tags, which raises an error; just skip them

 

Once we have each company's 人才檔案 page URL, in principle we should be able to get everything we want along the same lines. But the first time I ran XPath queries against a fetched 人才檔案 page, every query came back as an empty list. I was quite sure my XPath syntax was fine (yes, that confident), so I printed out the fetched text and found that the information I wanted simply was not in it: I searched for the company's turnover over the last three months, and it was nowhere in the response.

So I concluded the page uses an anti-crawling mechanism. Right-click and Inspect, open the Network tab, refresh with F5, and then search for that turnover figure in the search box on the right.

You will find the data is actually contained in the response named 13780820?isInOldShop=1. It is filled into the page via AJAX, which is why a normal request to the page URL cannot get the data. Let's look at its request URL.

人才檔案 URL: https://shop.zbj.com/13780820/salerinfo.html

We can see that we only need to strip the trailing /salerinfo.html from the original 人才檔案 page URL and append ?isInOldShop=1 to get the URL that carries the real data.

In code:

rcda_url = ('https://profile.zbj.com'+ li.xpath("./a/@href")[0]).split('/salerinfo.html')[0]+'?isInOldShop=1'
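To see the transformation in isolation, here is a small sketch of the same rule as a standalone helper (the name `to_data_url` is mine, not part of the original script):

    def to_data_url(profile_url):
        # strip the trailing /salerinfo.html and append the query flag the AJAX endpoint expects
        return profile_url.split('/salerinfo.html')[0] + '?isInOldShop=1'

    # e.g. the 人才檔案 URL above:
    print(to_data_url('https://shop.zbj.com/13780820/salerinfo.html'))
    # -> https://shop.zbj.com/13780820?isInOldShop=1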

 

Finally, we fetch the fields we want from each company URL. The code is as follows:

def get_company_infos(url):
    company_url = url
    text = getHTMLText(url)
    html = etree.HTML(text)
    company_name = html.xpath("//h1[@class='title']/text()")[0]
    try:
        grade = html.xpath("//div[@class='ability-tag ability-tag-3 text-tag']/text()")[0].strip()
    except:
        grade = html.xpath("//div[@class='tag-wrap tag-wrap-home']/div/text()")[0].replace('\n', '')

    # the score / turnover / rating figures all sit in the li tags of this ul
    lis = html.xpath("//ul[@class='ability-wrap clearfix']//li")
    score = float(lis[0].xpath("./div/text()")[0].strip())
    profit = float(lis[1].xpath("./div/text()")[0].strip())
    good_comment_rate = float(lis[2].xpath("./div/text()")[0].strip().split("%")[0])
    try:
        again_rate = float(lis[4].xpath("./div/text()")[0].strip().split("%")[0])
    except:
        again_rate = 0.0
    try:
        finish_rate = float(lis[4].xpath("./div/text()")[0].strip().split("%")[0])
    except:
        finish_rate = 0.0

    company_info = html.xpath("//div[@class='conteng-box-info']//text()")[1].strip().replace("\n", '')
    skills_list = []
    divs = html.xpath("//div[@class='skill-item']//text()")
    for div in divs:
        if len(div) >= 3:
            skills_list.append(div)
    good_at_skill = json.dumps(skills_list, ensure_ascii=False)

    try:
        divs = html.xpath("//div[@class='our-info']//div[@class='content-item']")
        build_time = divs[1].xpath("./div/text()")[1].replace("\n", '')
        address = divs[3].xpath("./div/text()")[1].replace("\n", '')
    except:
        build_time = '暫無'
        address = '暫無'
    # in the full script below these fields are then written to MySQL

Finally, a few remaining details. 1. How many pages does each subcategory have, and how should the paging URLs be built? 2. A company may appear in several subcategories, so how do we tell whether it has already been crawled? 3. With this much data and this many pages to parse, how do we speed things up?

 

1. For the page count: scroll to the bottom of a listing page and inspect the pagination bar; the total is written in the element with class='pagination-total' (the last item of the pagination list). We can extract it with:

pages = int(html.xpath("//p[@class='pagination-total']/text()")[0].split("共")[1].split('頁')[0])
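For instance, if the pagination text reads '共34頁' ('34 pages in total', the format the expression above assumes), the parse works like this:

    text = '共34頁'                                   # sample pagination text
    pages = int(text.split('共')[1].split('頁')[0])    # -> 34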

Following the usual pattern, you would expect the first page to carry p=0 and each later page to add the page size (40 companies per page) to the offset. But what I actually found surprised me: the first page of the 網站開發 (website development) subcategory looks normal enough.

Then look at page 2,

and then at pages 3 and 4.

Checking the other subcategories reveals the pattern: the first page of every subcategory ends with /p.html; the second page has a subcategory-specific offset value; and from the third page onward the offset grows by 40 on top of that second-page value.

So I decided to store each subcategory's second-page value in a dictionary, and before visiting a page, check which page number it is and build the URL accordingly (a small helper that captures this rule is sketched after the snippet below).

The code is as follows:

second_page_num = {'https://guangzhou.zbj.com/wzkf/p.html': 34,
                   'https://guangzhou.zbj.com/ydyykf/p.html': 36,
                   'https://guangzhou.zbj.com/rjkf/p.html': 37,
                   'https://guangzhou.zbj.com/uisheji/p.html': 35,
                   'https://guangzhou.zbj.com/saas/p.html': 38,
                   'https://guangzhou.zbj.com/itfangan/p.html': 39,
                   'https://guangzhou.zbj.com/ymyfwzbj/p.html': 40,
                   'https://guangzhou.zbj.com/jsfwzbj/p.html': 40,
                   'https://guangzhou.zbj.com/ceshifuwu/p.html': 40,
                   'https://guangzhou.zbj.com/dashujufuwu/p.html': 40
                   }
# categories_list comes from get_categories_url(); 'pages' is obtained per category
# with the pagination-total XPath shown above
for category in categories_list:
    j = second_page_num[category]
    for i in range(1, pages + 1):
        if i == 1:
            company_list = get_company_urls(category)
        elif i == 2:
            page_url = category.split('.html')[0] + 'k' + str(j) + '.html'
            company_list = get_company_urls(page_url)
        else:
            page_url = category.split('.html')[0] + 'k' + str(j + 40 * (i - 2)) + '.html'
            company_list = get_company_urls(page_url)
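The same rule can also be factored into a tiny helper, which makes the offset arithmetic easier to read. This is just a sketch of the logic above; the name `build_page_url` and the parameter `second_page_value` are mine:

    def build_page_url(category_url, page, second_page_value):
        # page 1 is the plain /p.html URL; page n >= 2 is /pk{offset}.html, where the
        # offset starts at the subcategory-specific second-page value and grows by 40 per page
        if page == 1:
            return category_url
        offset = second_page_value + 40 * (page - 2)
        return category_url.split('.html')[0] + 'k' + str(offset) + '.html'

    # e.g. for 網站開發 (second-page value 34):
    # build_page_url('https://guangzhou.zbj.com/wzkf/p.html', 3, 34)
    # -> 'https://guangzhou.zbj.com/wzkf/pk74.html'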

That solves the first problem.

The second problem is actually simple: keep a list of the companies that have already been crawled. When iterating over the companies on a page, first check whether the company is already in the list; if it is, continue; if it is not, append it and then crawl it. The code is as follows (a set-based variant is sketched right after this snippet):

is_exists_company = []
for company in company_list:
    if company in is_exists_company:
        continue
    else:
        is_exists_company.append(company)
        # ...then crawl this company...
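One small aside: membership tests on a list are O(n), so with tens of thousands of companies a set is the more natural container. A minimal sketch of the same dedup logic with a set (my variant, not what the original script uses):

    seen_companies = set()
    for company in company_list:
        if company in seen_companies:   # O(1) membership test
            continue
        seen_companies.add(company)
        # ...then crawl this company...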

For the last problem, the solution is the obvious one: use multiple threads.

The complete crawler code is as follows:


import requests
from lxml import etree
import json
import pymysql
from queue import Queue
import threading
import time

gCondition = threading.Condition()

HEADERS = {
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36',
    'Referer':'https://guangzhou.zbj.com/'
}

company_nums = 0
is_exists_company = []

class Producer(threading.Thread):
    def __init__(self,page_queue,company_url_queue,company_nums,is_exists_company,*args,**kwargs):
        super(Producer,self).__init__(*args,**kwargs)
        self.page_queue = page_queue
        self.company_url_queue = company_url_queue
        self.company_nums = company_nums
        self.is_exists_company = is_exists_company

    def run(self):
        while True:
            if self.page_queue.empty():
                break
            self.parse_url(self.page_queue.get())


    def parse_url(self,url):
        company_url_list = self.get_company_urls(url)
        for company in company_url_list:
            gCondition.acquire()
            if company in self.is_exists_company:
                gCondition.release()
                continue
            else:
                self.is_exists_company.append(company)
                self.company_nums += 1
            print('stored {} companies so far'.format(self.company_nums))
            gCondition.release()
            text = getHTMLText(company)
            html = etree.HTML(text)
            lis = html.xpath("//ul[@class='witkeyhome-nav clearfix']//li[@class=' ']")
            if len(lis) == 0:
                self.company_url_queue.put(company)
                continue
            for li in lis:
                try:
                    if li.xpath(".//text()")[1] == '人才檔案':
                        rcda_url = ('https://profile.zbj.com' + li.xpath("./a/@href")[0]).split('/salerinfo.html')[
                                       0] + '?isInOldShop=1'
                        self.company_url_queue.put(rcda_url)
                        break
                    else:continue
                except:pass  # some shops have empty li tags, which raises an error; just skip them

    def get_company_urls(self,url):
        companies_list = []
        text = getHTMLText(url)
        html = etree.HTML(text)
        h4s = html.xpath("//h4[@class='witkey-name fl text-overflow']/a/@href")
        for h4 in h4s:
            company_url = 'https:' + h4
            companies_list.append(company_url)
        return companies_list





class Consumer(threading.Thread):

    def __init__(self,company_url_queue,page_queue,*args,**kwargs):
        super(Consumer, self).__init__(*args,**kwargs)
        self.company_url_queue = company_url_queue
        self.page_queue = page_queue

    def run(self):
        while True:
            if self.company_url_queue.empty() and self.page_queue.empty():
                break
            company_url = self.company_url_queue.get()
            self.get_and_write_company_details(company_url)
            print(company_url + ' written')

    def get_and_write_company_details(self,url):
        conn = pymysql.connect(host=****, user=*****, password=*****, database=****,port=****, charset='utf8')
        cursor = conn.cursor()  # the DB connection is created inside the thread's method; creating it outside the function led to connection failures

        company_url = url
        text = getHTMLText(url)
        html = etree.HTML(text)
        company_name = html.xpath("//h1[@class='title']/text()")[0]
        try:
            grade = html.xpath("//div[@class='ability-tag ability-tag-3 text-tag']/text()")[0].strip()
        except:
            grade = html.xpath("//div[@class='tag-wrap tag-wrap-home']/div/text()")[0].replace('\n', '')

        lis = html.xpath("//ul[@class='ability-wrap clearfix']//li")
        score = float(lis[0].xpath("./div/text()")[0].strip())
        profit = float(lis[1].xpath("./div/text()")[0].strip())
        good_comment_rate = float(lis[2].xpath("./div/text()")[0].strip().split("%")[0])
        try:
            again_rate = float(lis[4].xpath("./div/text()")[0].strip().split("%")[0])
        except:
            again_rate=0.0
        try:
            finish_rate = float(lis[4].xpath("./div/text()")[0].strip().split("%")[0])
        except:
            finish_rate = 0.0

        company_info = html.xpath("//div[@class='conteng-box-info']//text()")[1].strip().replace("\n", '')
        skills_list = []
        divs = html.xpath("//div[@class='skill-item']//text()")
        for div in divs:
            if len(div) >= 3:
                skills_list.append(div)
        good_at_skill = json.dumps(skills_list, ensure_ascii=False)

        try:
            divs = html.xpath("//div[@class='our-info']//div[@class='content-item']")
            build_time = divs[1].xpath("./div/text()")[1].replace("\n", '')
            address = divs[3].xpath("./div/text()")[1].replace("\n", '')
        except:
            build_time = '暫無'
            address = '暫無'

        sql = """
        insert into (table name)(id,company_name,company_url,grade,score,profit,good_comment_rate,again_rate,company_info,good_at_skill,build_time,address) values(null,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)
                                """

        cursor.execute(sql, (
        company_name, company_url, grade, score, profit, good_comment_rate, again_rate, company_info, good_at_skill,
        build_time, address))
        conn.commit()


def getHTMLText(url):
    resp = requests.get(url,headers=HEADERS)
    resp.encoding='utf-8'
    return resp.text

def get_categories_url(url):
    details_list = []
    text = getHTMLText(url)
    html = etree.HTML(text)
    divs = html.xpath("//div[@class='channel-service-grid-inner']//div[@class='channel-service-grid-item' or @class='channel-service-grid-item second']")
    for div in divs:
        detail_url = div.xpath("./a/@href")[0]
        details_list.append(detail_url)
    return details_list




def main():
    second_page_num = {'https://guangzhou.zbj.com/wzkf/p.html':34,
                      'https://guangzhou.zbj.com/ydyykf/p.html':36,
                      'https://guangzhou.zbj.com/rjkf/p.html':37,
                      'https://guangzhou.zbj.com/uisheji/p.html':35,
                      'https://guangzhou.zbj.com/saas/p.html':38,
                      'https://guangzhou.zbj.com/itfangan/p.html':39,
                      'https://guangzhou.zbj.com/ymyfwzbj/p.html':40,
                      'https://guangzhou.zbj.com/jsfwzbj/p.html':40,
                      'https://guangzhou.zbj.com/ceshifuwu/p.html':40,
                      'https://guangzhou.zbj.com/dashujufuwu/p.html':40
                      }
    global company_nums
    company_url_queue = Queue(100000)
    page_queue = Queue(1000)
    categories_list = get_categories_url('https://guangzhou.zbj.com/it')
    for category in categories_list:
        text = getHTMLText(category)
        html = etree.HTML(text)
        pages = int(html.xpath("//p[@class='pagination-total']/text()")[0].split("共")[1].split('頁')[0])
        j = second_page_num[category]
        for i in range(1,pages+1):
            if i == 1:
                page_queue.put(category)
            elif i == 2:
                page_url = category.split('.html')[0] +'k'+str(j) +'.html'
                page_queue.put(page_url)
            else:
                page_url = category.split('.html')[0] + 'k' + str(j+40*(i-2)) + '.html'
                page_queue.put(page_url)
            print('{} page {} has been added to the queue'.format(category, i))
            time.sleep(1)

    print('all page urls queued; starting threads')

    for x in range(5):
        t = Producer(page_queue,company_url_queue,company_nums,is_exists_company)
        t.start()

    for x in range(5):
        t = Consumer(company_url_queue,page_queue)
        t.start()


if __name__ == '__main__':
    main()
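One note on the design: main() starts the producer and consumer threads but never joins them, so main() simply returns once the queues are filled while the worker threads keep running on their own. If you want the main thread to wait for all work to finish (for example, to print a final summary), you could keep references to the threads and join them; a sketch under that assumption:

    threads = []
    for x in range(5):
        t = Producer(page_queue, company_url_queue, company_nums, is_exists_company)
        t.start()
        threads.append(t)
    for x in range(5):
        t = Consumer(company_url_queue, page_queue)
        t.start()
        threads.append(t)
    for t in threads:
        t.join()   # wait for every worker thread to finish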

Thanks for reading.
