AirFlow Container Deployment and Usage

I. How to Build the AirFlow Container

1. Install the Docker environment
Deployment is done on CentOS; CentOS 6 or CentOS 7 is recommended.

1.1 Download the Docker package
Download from: https://download.docker.com/linux/static/stable/x86_64/
Version 18.09.6 is recommended.

1.2 Extract the downloaded archive
tar -zxf docker-18.09.6.tgz

1.3 Copy the extracted Docker binaries into /usr/bin/
cp docker/* /usr/bin/

1.4 Register Docker as a systemd service
Create a new file:
vim /etc/systemd/system/docker.service

and add the following content:
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target


Add execute permission to the file and reload systemd:
chmod +x /etc/systemd/system/docker.service
systemctl daemon-reload
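
Optionally, Docker can also be enabled to start automatically on boot:
systemctl enable docker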


1.5 Start Docker
systemctl start docker


1.6 Verify
systemctl status docker         # check Docker status
docker -v                       # check the Docker version

2. Install AirFlow in the Docker environment

2.1 Clone the source code into /root/airflow
git clone https://github.com/puckel/docker-airflow.git /root/airflow

2.2 Run the container
Command to run the container:
docker run --net=bridge --name AirFlow \
 -e MYSQL_IP_PORT="172.16.117.125:3306/airflow" \
 -e MYSQL_USERNAME="root" \
 -e MYSQL_PASSWORD="123456" \
 -v /usr/local/airflow/dags:/usr/local/airflow/dags \
 -v /usr/local/airflow/airflowSql:/usr/local/airflow/airflowSql \
 -v /usr/local/airflow/airflow.cfg:/usr/local/airflow/airflow.cfg \
 -id -p 8081:8080 --privileged=true puckel/docker-airflow

Explanation:
AirFlow: the container name
MYSQL_IP_PORT: the MySQL server's IP address:port/database name
MYSQL_USERNAME: the user name for the MySQL database
MYSQL_PASSWORD: the password for the MySQL database

-v /usr/local/airflow/dags:/usr/local/airflow/dags
host directory for DAG files : container directory for DAG files

-v /usr/local/airflow/airflowSql:/usr/local/airflow/airflowSql
host directory for execution scripts : container directory for execution scripts

-v /usr/local/airflow/airflow.cfg:/usr/local/airflow/airflow.cfg
maps the Airflow configuration file to the host

-p 8081:8080 maps the webserver's port 8080 inside the container to port 8081 on the host; -id runs the container in interactive detached mode; --privileged=true grants the container extended privileges

puckel/docker-airflow: the image name

2.3 Enter the container
docker exec -it -u root AirFlow bash
/*
By default this lands in the container's /usr/local/airflow directory (Airflow's default installation directory).
*/


2.4 Edit the configuration file
vim airflow.cfg

dags_folder = $AIRFLOW_HOME/dags            # directory holding DAG files

base_log_folder = $AIRFLOW_HOME/logs        # directory holding run logs

executor = LocalExecutor

sql_alchemy_conn = mysql://$MYSQL_USERNAME:$MYSQL_PASSWORD@$MYSQL_IP_PORT

load_examples = False

dags_are_paused_at_creation = False
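
With the environment variables passed to the container in step 2.2, the connection string above should resolve to something like the following (using this guide's example values; adjust for your own database):
sql_alchemy_conn = mysql://root:123456@172.16.117.125:3306/airflow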


2.5 Initialize the database
airflow initdb

If initialization fails with an error like:

airflow.exceptions.AirflowException: Could not create Fernet object: Incorrect padding

Fix: generate a new Fernet key and export it, then rerun the initialization:

python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"

export AIRFLOW__CORE__FERNET_KEY=oNu9XwewQNyx9mAJT2vZvtm3qzPRZIWRqwk9hSVch4A=    # use the key printed above

airflow initdb    # rerun the database initialization
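
The exported key only applies to the current shell session; to keep it across restarts, the same value can also be written into airflow.cfg under the [core] section, for example:
fernet_key = oNu9XwewQNyx9mAJT2vZvtm3qzPRZIWRqwk9hSVch4A=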

2.6 Run in the background
Run the webserver in the background:
nohup airflow webserver >> $AIRFLOW_HOME/airflow-webserver.log 2>&1 &

Run the scheduler in the background:
nohup airflow scheduler >> $AIRFLOW_HOME/airflow-scheduler.log 2>&1 &

2.7 Open in a browser: 172.16.117.125:8081 (the host's IP address and the host port mapped in step 2.2)

II. How to Migrate the Deployed AirFlow Container to Another Server

/*
Before migrating the container, install a few common utilities inside it first, since the target server may not have internet access.
*/

1. Install common utilities such as vim, ping, and ifconfig

apt-get update

apt-get install vim             # installs vim

apt-get install net-tools       # installs ifconfig

apt-get install iputils-ping    # installs ping


2. Commit the configured airflow container as an image

docker commit 0e3d77afccc3 airflow
/*
docker commit <container ID> <image name>
*/

3. Save the image to an archive file

docker save -o airflow.tar airflow

4. Copy the archive file to the target server

5. Load the archive as an image on the new server

docker load -i airflow.tar
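
The import can be confirmed by listing the local images:
docker images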

6. Start a container from the newly imported image
docker run --net=bridge --name AirFlow --hostname airflow \
 -e MYSQL_IP_PORT="172.16.117.125:3306/airflow" \
 -e MYSQL_USERNAME="root" -e MYSQL_PASSWORD="123456" \
 -v /usr/local/airflow/dags:/usr/local/airflow/dags \
 -v /usr/local/airflow/airflowSql:/usr/local/airflow/airflowSql \
 -v /usr/local/airflow/airflow.cfg:/usr/local/airflow/airflow.cfg \
 -id -p 8084:8080 --privileged=true airflow


Explanation:
AirFlow: the container name
MYSQL_IP_PORT: the MySQL server's IP address:port/database name
MYSQL_USERNAME: the user name for the MySQL database
MYSQL_PASSWORD: the password for the MySQL database

-v /usr/local/airflow/dags:/usr/local/airflow/dags
host directory for DAG files : container directory for DAG files

-v /usr/local/airflow/airflowSql:/usr/local/airflow/airflowSql
host directory for execution scripts : container directory for execution scripts

-v /usr/local/airflow/airflow.cfg:/usr/local/airflow/airflow.cfg
maps the Airflow configuration file to the host

-p 8084:8080 maps the webserver's port 8080 inside the container to port 8084 on the host

airflow: the image name

7. Enter the container
docker exec -it -u root AirFlow bash
/*
By default this lands in the container's /usr/local/airflow directory (Airflow's default installation directory).
*/


8. Edit the configuration file (the same settings as in step 2.4 of Part I)
vim airflow.cfg

dags_folder = $AIRFLOW_HOME/dags            # directory holding DAG files

base_log_folder = $AIRFLOW_HOME/logs        # directory holding run logs

executor = LocalExecutor

sql_alchemy_conn = mysql://$MYSQL_USERNAME:$MYSQL_PASSWORD@$MYSQL_IP_PORT

load_examples = False

dags_are_paused_at_creation = False


9. Initialize the database
airflow initdb

If initialization fails with an error like:

airflow.exceptions.AirflowException: Could not create Fernet object: Incorrect padding

Fix: generate a new Fernet key and export it, then rerun the initialization (see step 2.5 of Part I):

python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"

export AIRFLOW__CORE__FERNET_KEY=oNu9XwewQNyx9mAJT2vZvtm3qzPRZIWRqwk9hSVch4A=    # use the key printed above

airflow initdb    # rerun the database initialization

10. Run in the background
Run the webserver in the background:
nohup airflow webserver >> $AIRFLOW_HOME/airflow-webserver.log 2>&1 &

Run the scheduler in the background:
nohup airflow scheduler >> $AIRFLOW_HOME/airflow-scheduler.log 2>&1 &

11. Open in a browser: 172.16.117.125:8084
/*
<new server IP address>:<the host port mapped for this server> (8084 in this example)
*/

III. How to Use the AirFlow Container

1. Place DAG files in the /usr/local/airflow/dags directory (this depends on the dags_folder configured earlier).

2. Template for tasks that run on the Airflow server itself

import time
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python_operator import PythonOperator

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2019, 12, 17, 17, 12, 1),
    'retries': 5,
    'retry_delay': timedelta(seconds=5),
}

dag = DAG(
    'c_test',
    default_args=default_args,
    description='my second DAG',
    schedule_interval=timedelta(minutes=1)
)

filename1 = '/usr/local/airflow/test/a1.txt'
filename2 = '/usr/local/airflow/test/a2.txt'
filename3 = '/usr/local/airflow/test/a3.txt'

# Each task prints a message and appends the current local time to its own file.
def print_hello1():
    print("Hello World!1111111")
    current_time = time.asctime(time.localtime(time.time()))
    with open(filename1, 'a') as f:
        f.write(current_time)

def print_hello2():
    print("Hello World!22222222")
    current_time = time.asctime(time.localtime(time.time()))
    with open(filename2, 'a') as f:
        f.write(current_time)

def print_hello3():
    print("Hello World!33333333")
    current_time = time.asctime(time.localtime(time.time()))
    with open(filename3, 'a') as f:
        f.write(current_time)

task1 = PythonOperator(
    task_id='task_1',
    python_callable=print_hello1,
    dag=dag)

task2 = PythonOperator(
    task_id='task_2',
    python_callable=print_hello2,
    dag=dag)

task3 = PythonOperator(
    task_id='task_3',
    python_callable=print_hello3,
    dag=dag)

# task_2 and task_3 both run after task_1 completes.
task2.set_upstream(task1)
task3.set_upstream(task1)
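
The same dependencies can also be expressed with Airflow's bit-shift syntax, which many DAG authors find easier to read:

task1 >> task2
task1 >> task3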





3. Template for tasks that run on a remote server

from datetime import datetime, timedelta

from airflow import DAG
from airflow.contrib.hooks.ssh_hook import SSHHook
from airflow.contrib.operators.ssh_operator import SSHOperator

# SSH connection to the remote host that will execute the script.
sshHook = SSHHook(remote_host='172.16.117.126', username='root', password='GXcxkfbrgx@26', timeout=30)

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2019, 12, 27, 10, 22, 0),
    'retries': 3,
    'retry_delay': timedelta(seconds=5),
    'end_date': datetime(9999, 12, 31)
}

dag = DAG('hello',
    default_args=default_args,
    schedule_interval='0 * * * *')

hello = SSHOperator(
    ssh_hook=sshHook,
    task_id='hello',
    dag=dag,
    # The trailing space keeps Airflow from treating the .sh path as a Jinja template file.
    command='/opt/sh/hello.sh '
)
hello

/*
sshHook = SSHHook(remote_host='172.16.117.126', username='root', password='GXcxkfbrgx@26', timeout=30)
sshHook = SSHHook(remote_host='<remote server IP>', username='<user name>', password='<password>', timeout=30)
*/
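
If the script lives on the Airflow host itself rather than on a remote machine (for example under the /usr/local/airflow/airflowSql directory mounted into the container earlier), no SSH hop is needed and a plain BashOperator can run it. A minimal sketch, assuming a hypothetical script /usr/local/airflow/airflowSql/hello.sh:

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2019, 12, 27, 10, 22, 0),
    'retries': 3,
    'retry_delay': timedelta(seconds=5),
}

dag = DAG('local_script',
    default_args=default_args,
    schedule_interval='0 * * * *')

run_script = BashOperator(
    task_id='run_script',
    # Hypothetical script path; the trailing space again prevents Jinja template lookup for .sh files.
    bash_command='/usr/local/airflow/airflowSql/hello.sh ',
    dag=dag)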

 
