Scraping Lagou Job Listings with a Selenium-Driven Browser

I originally wanted to scrape Lagou's job listings with the requests library, but after inspecting the site I found that the job data is loaded through Ajax requests, so the page source that comes back does not contain it and requests alone is of little use here. I then tried driving a real browser with Selenium instead, and that works.
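As a quick sanity check (a minimal sketch reusing the search URL and the position_link class that appear in the demo below), you can fetch the listing page with requests and confirm that the raw HTML contains no position links:

import requests
from lxml import etree

url = 'https://www.lagou.com/jobs/list_python?city=%E5%85%A8%E5%9B%BD&cl=false&fromSearch=true&labelWords=&suginput='
response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
html = etree.HTML(response.text)
# the job cards are rendered client-side, so no position links show up in the static HTML
print(html.xpath("//a[@class='position_link']/@href"))   # usually an empty list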

Design approach:

1. First, let's look at the structure of the site:

Each position in the results list is clickable; clicking through opens that position's detail page.

2. Functional design:

So the plan is: first collect the list of positions on each results page, then follow each position's detail-page URL to extract its information.

3. Function skeleton:

    # parse the detail-page URL of every position on a results page
    def parse_list_page(self, source):
        pass
    # fetch the page source of a single position's detail page
    def get_detail_page(self, url):
        pass
    # print or save the information extracted from one position
    def parse_detail_page(self, source):
        pass

4. Demo:

from selenium import webdriver
from lxml import etree
import re
import time
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
class LagouSpider(object):
    driver_path = r'D:\ChormDriveer\chrome\chromedriver.exe'
    def __init__(self):
        # load the Chrome driver
        self.driver = webdriver.Chrome(executable_path=LagouSpider.driver_path)
        self.url = 'https://www.lagou.com/jobs/list_python?city=%E5%85%A8%E5%9B%BD&cl=false&fromSearch=true&labelWords=&suginput='
        # list used to store the scraped positions
        self.positions = []
    def run(self):
        # open the search page in the automated browser
        self.driver.get(self.url)
        while True:
            # grab the HTML of the current results page
            source = self.driver.page_source
            # parse every position on this page and visit its detail page
            self.parse_list_page(source)
            # wait until the pager has rendered so the next-page button can be located
            WebDriverWait(driver=self.driver, timeout=10).until(
                EC.presence_of_element_located((By.XPATH, "//div[@class='pager_container']/span[last()]"))
            )
            # locate the next-page button
            next_btn = self.driver.find_element_by_xpath("//div[@class='pager_container']/span[last()]")
            # on the last page the button is disabled: stop crawling and exit the loop
            if "pager_next_disabled" in next_btn.get_attribute("class"):
                break
            else:
                next_btn.click()
                # give the Ajax-driven list a moment to refresh before reading page_source again
                time.sleep(1)
    def parse_list_page(self, source):
        # pull the detail-page URL of every position on this results page
        html = etree.HTML(source)
        links = html.xpath("//a[@class='position_link']/@href")
        for link in links:
            self.get_detail_page(link)
            time.sleep(1)
    def get_detail_page(self, url):
        # open the detail page in a new browser tab
        self.driver.execute_script("window.open('%s')" % url)
        # switch the driver to the newly opened tab
        self.driver.switch_to.window(self.driver.window_handles[1])
        # wait for the position title to render before reading the page source
        WebDriverWait(self.driver, timeout=10).until(
            EC.presence_of_element_located((By.XPATH, "//span[@class='name']"))
        )
        source = self.driver.page_source
        self.parse_detail_page(source)
        # close the detail tab when done
        self.driver.close()
        # switch back to the results-list tab
        self.driver.switch_to.window(self.driver.window_handles[0])
    def parse_detail_page(self, source):
        # parse the detail page and extract the fields we care about
        html = etree.HTML(source)
        position_name = html.xpath("//span[@class='name']/text()")[0]
        job_request_spans = html.xpath("//dd[@class='job_request']//span")
        salary = job_request_spans[0].xpath('.//text()')[0].strip()
        city = job_request_spans[1].xpath(".//text()")[0].strip()
        city = re.sub(r"[\s/]", "", city)
        work_years = job_request_spans[2].xpath(".//text()")[0].strip()
        work_years = re.sub(r"[\s/]", "", work_years)
        education = job_request_spans[3].xpath(".//text()")[0].strip()
        education = re.sub(r"[\s/]", "", education)
        desc = "".join(html.xpath("//dd[@class='job_bt']//text()")).strip()
        company_name = html.xpath("//h4[@class='company']/text()")[0].strip()
        position = {
            'name': position_name,
            'company_name': company_name,
            'salary': salary,
            'city': city,
            'work_years': work_years,
            'education': education,
            'desc': desc
        }
        self.positions.append(position)
        print(position)
        print('=' * 40)
if __name__ == '__main__':
    spider = LagouSpider()
    spider.run()

5. I won't reproduce the run output here; the script has been tested and works.
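The parsed records also accumulate in self.positions, and the parse_detail_page comment mentions saving as well as printing. As a minimal sketch (the file name lagou_positions.json is just an example), they could be persisted to JSON after the crawl:

import json

def save_positions(positions, path='lagou_positions.json'):
    # write the scraped dicts to disk as readable UTF-8 JSON
    with open(path, 'w', encoding='utf-8') as f:
        json.dump(positions, f, ensure_ascii=False, indent=2)

# for example, at the end of the __main__ block:
# spider = LagouSpider()
# spider.run()
# save_positions(spider.positions)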
