Basics
- Editing a shell script
- Shell scripts end with .sh
[hadoop@bigdata shell]$ vi test.sh
#! /bin/bash
echo "www.ruozedata.com"
# List the file, then grant the current user execute permission on it
[hadoop@bigdata shell]$ ll
total 4
-rw-rw-r-- 1 hadoop hadoop 39 Aug 22 17:48 test.sh
[hadoop@bigdata shell]$ chmod u+x test.sh
- Three ways to run a shell script
# Absolute path
[hadoop@bigdata shell]$ /home/hadoop/script/shell/test.sh
www.ruozedata.com
# Relative path
[hadoop@bigdata shell]$ ./test.sh
www.ruozedata.com
# The sh command
# When invoking via sh, the first line #! /bin/bash can be omitted
[hadoop@bigdata shell]$ sh test.sh
www.ruozedata.com
- debug
# With #! /bin/bash present:
# append -x (lowercase) to the shebang
#! /bin/bash -x
echo "www.ruozedata.com"
[hadoop@bigdata shell]$ ./test.sh
+ echo www.ruozedata.com # a leading + marks each command as it is executed
www.ruozedata.com
# Without #! /bin/bash:
[hadoop@bigdata shell]$ sh -x test.sh # lowercase x
+ echo www.ruozedata.com
www.ruozedata.com
- Variables
#! /bin/bash
DATE1="ruozedata"
DATE2="date"
DATE3='hh'
DATE4=`date` # !!!!! Not a single quote but a backtick: date is executed as a Linux command
echo $DATE1
echo ${DATE2}
echo ${DATE3}
echo ${DATE4}
[hadoop@bigdata shell]$ ./variable.sh
ruozedata
date
hh
Thu Aug 22 18:48:15 CST 2019
!!!!
A variable's value can be written with double quotes, single quotes, or no quotes at all
No spaces are allowed around the =
Variable names are usually uppercase
Reference variables with {} to avoid ambiguity errors
Static:
double quotes, single quotes, or no quotes -- the value is fixed at assignment
Dynamic:
backticks, i.e. `` -- the command runs and the value is assigned at the moment of use
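The static/dynamic distinction above can be sketched with a minimal demo (variable names here are made up for illustration):

```shell
#!/bin/bash
# Static: the value is fixed at assignment time, however it is quoted.
S1="hello"    # double quotes
S2='hello'    # single quotes
S3=hello      # no quotes
# Dynamic: backticks (or the equivalent $(...)) run a command and
# capture its output at the moment of assignment.
NOW=`date`
echo "$S1 $S2 $S3"
echo "$NOW"
```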
- Parameters
[hadoop@bigdata shell]$ vi parameter.sh
#! /bin/bash
echo $1 # the first argument
echo $2 # the second argument
echo "$#" # the number of arguments
echo "$*" # all arguments as a single string
echo "PID: $$" # PID of the current process
[hadoop@bigdata shell]$ ./parameter.sh a b
a
b
2
a b
PID: 12388
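Beyond $# and $*, it is worth knowing that "$@" preserves each argument as a separate word while "$*" joins them into one. A small sketch (the helper functions are hypothetical, for illustration only):

```shell
#!/bin/bash
# "$*" joins all arguments into one word; "$@" keeps them separate.
count_args() { echo $#; }
demo() {
  count_args "$*"   # one joined string -> prints 1
  count_args "$@"   # one word per original argument -> prints 3
}
demo a "b c" d
```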
- array
[hadoop@bigdata shell]$ vi array.sh
#! /bin/bash
arr=(rz jepson xingxing huhu)
echo ${arr[@]} # print all elements of the array
echo ${arr[2]} # print the third element; indexing starts at zero
echo ${#arr[@]} # print the array length
[hadoop@bigdata shell]$ ./array.sh
rz jepson xingxing huhu
xingxing
4
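A few more bash array operations that pair with the example above (a sketch; the values are the same sample data):

```shell
#!/bin/bash
arr=(rz jepson xingxing huhu)
echo ${!arr[@]}      # the indices: 0 1 2 3
echo ${arr[@]:1:2}   # a slice of 2 elements starting at index 1
arr+=(new)           # append an element
echo ${#arr[@]}      # length is now 5
```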
- Conditionals
[hadoop@bigdata shell]$ vi if.sh
#! /bin/bash
A="abc"
B="jepson"
if [ ${A} == ${B} ];then # note the spaces: inside [ ] and around ==
echo "=="
elif [ ${A} == "abc" ];then
echo "=="
else
echo "!="
fi
[hadoop@bigdata shell]$ chmod u+x if.sh
[hadoop@bigdata shell]$ ./if.sh
==
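The example above compares strings with ==; numbers use their own operators inside [ ], and [ ] can also test files. A short sketch (the values and filename are made up):

```shell
#!/bin/bash
N=5
if [ ${N} -gt 3 ]; then      # numeric operators: -eq -ne -gt -lt -ge -le
  echo "greater"
fi
if [ -f ./if.sh ]; then      # file tests: -f regular file, -d directory, -e exists
  echo "if.sh exists"
fi
```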
- for / while
#! /bin/bash
for x in 1 2 3 4 5
do
echo ${x}
done
echo "------------------------------"
for ((i=1;i<10;i++))
do
echo ${i}
done
echo "-------------------------------"
J=1
while((${J}<10))
do
echo ${J}
let "J++"
done
- Splitting a string
#! /bin/bash
S="rz,j,xx,huhu,yt,co"
OLD_IFS="$IFS"
IFS=","
arr=($S)
IFS="$OLD_IFS"
for x in ${arr[*]}
do
echo ${x}
done
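The same split can be done without touching the global IFS, by scoping IFS to a single read command (bash-only; a sketch using the same sample string):

```shell
#!/bin/bash
S="rz,j,xx,huhu,yt,co"
# IFS is set only for the read command, so the global IFS is untouched
# and no save/restore is needed.
IFS=, read -r -a arr <<< "$S"
for x in "${arr[@]}"; do
  echo "$x"
done
```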
- awk
- sed
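The notes list awk and sed without examples; a few common one-liners as a starting point (sample inputs are made up):

```shell
# awk: print selected columns (space-separated input by default)
echo "a b c" | awk '{print $1, $3}'        # a c
# awk with a custom field separator
echo "rz,j,xx" | awk -F, '{print $2}'      # j
# sed: replace the first match on each line
echo "www.ruozedata.com" | sed 's/www/w3/' # w3.ruozedata.com
# sed: delete lines matching a pattern
printf "keep\ndrop\n" | sed '/drop/d'      # keep
```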
Script study
- sync_hadoop.sh
- jps.sh
- start_cluster.sh
1. zookeeper fails to start
2. Find the cause ----> locate the log
3. First look under zookeeper's conf directory for the log path
4. Check zoo.cfg --- nothing there
5. Locate log4j.properties
zookeeper.log.dir=.
zookeeper.log.file=zookeeper.log
6. Search for zookeeper.log
# first in the user's home directory, then globally; not found in either
[hadoop@ruozedata001 conf]# find /home/hadoop -name "zookeeper.log"
[root@ruozedata001 conf]# find / -name "zookeeper.log"
7. zkServer.sh is where startup begins
zkServer.sh start|stop|status
Find the start branch:
case $1 in
start)
echo -n "Starting zookeeper ... "
if [ -f "$ZOOPIDFILE" ]; then
if kill -0 `cat "$ZOOPIDFILE"` > /dev/null 2>&1; then
echo $command already running as process `cat "$ZOOPIDFILE"`.
exit 0
fi
fi
nohup "$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
-cp "$CLASSPATH" $JVMFLAGS $ZOOMAIN "$ZOOCFG" > "$_ZOO_DAEMON_OUT" 2>&1 < /dev/null &
This points to
ZOO_LOG_DIR
Search the script for it:
ZOO_LOG_DIR=   -- not found
ZOO_LOG_DIR    -- found: _ZOO_DAEMON_OUT="$ZOO_LOG_DIR/zookeeper.out"
8. Search for zookeeper.out
[root@ruozedata001 bin]# find / -name "zookeeper.out"
/home/hadoop/zookeeper.out
9. View the zookeeper.out file
[hadoop@ruozedata001 shell]$ cat /home/hadoop/zookeeper.out
nohup: failed to run command ‘java’: No such file or directory
10. Analyze the error
[hadoop@ruozedata001 shell]$ ssh ruozedata001 "which java"
which: no java in (/usr/local/bin:/usr/bin)
[hadoop@ruozedata001 shell]$ ssh ruozedata001 "echo $JAVA_HOME"
/usr/java/jdk1.8.0_45
[hadoop@ruozedata001 shell]$ ssh ruozedata001 "echo $PATH"
/home/hadoop/app/hadoop/bin:/home/hadoop/app/hadoop/sbin:/home/hadoop/app/zookeeper/bin:/usr/java/jdk1.8.0_45/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/home/hadoop/.local/bin:/home/hadoop/bin
Why? which finds no java, yet $JAVA_HOME and $PATH look fine -- note the double quotes: the LOCAL shell expands these variables before ssh runs, so the echoes above show the local values, not the remote ones.
Look at zkServer.sh:
if [ -e "$ZOOBIN/../libexec/zkEnv.sh" ]; then
. "$ZOOBINDIR/../libexec/zkEnv.sh"
else
. "$ZOOBINDIR/zkEnv.sh"
fi
Look at zkEnv.sh:
if [ "$JAVA_HOME" != "" ]; then
JAVA="$JAVA_HOME/bin/java"
else
JAVA=java
fi
Check $JAVA_HOME by adding a debug echo:
echo "-------------------ruozedata: $JAVA_HOME----------"
if [ "$JAVA_HOME" != "" ]; then
JAVA="$JAVA_HOME/bin/java"
else
JAVA=java
fi
[hadoop@ruozedata001 shell]$ ssh ruozedata001 "$ZOOKEEPER_HOME/bin/zkServer.sh start"
JMX enabled by default
-------------------ruozedata: ----------
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
So JAVA_HOME is empty in the remote session
Two possible fixes:
1. Hard-code the java path
if [ "$JAVA_HOME" != "" ]; then
JAVA="$JAVA_HOME/bin/java"
else
JAVA=/usr/java/jdk1.8.0_45/bin/java
fi
2. Put the exports in ~/.bashrc so they take effect
When ssh executes a remote command or script,
bash loads the environment file ~/.bashrc
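The quoting pitfall from step 10 can be reproduced locally without any ssh (a sketch; the variable is made up):

```shell
#!/bin/bash
# With double quotes the local shell expands $VAR before the command
# string is built; with single quotes the string stays literal and the
# remote shell would expand it instead.
VAR="local-value"
CMD_DOUBLE="echo $VAR"    # already expanded: echo local-value
CMD_SINGLE='echo $VAR'    # still literal:    echo $VAR
echo "$CMD_DOUBLE"
echo "$CMD_SINGLE"
# So: ssh host "echo $JAVA_HOME" prints the LOCAL value,
#     ssh host 'echo $JAVA_HOME' prints the REMOTE value.
```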
- stop_cluster.sh
- Sending mail
1. Install mail and mailx
yum install mail
yum install mailx
5. Create the certificates for SMTP authentication
[hadoop@ruozedata001 ~]# mkdir -p ~/.certs/
[hadoop@ruozedata001 ~]# echo -n | openssl s_client -connect smtp.qq.com:465 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > ~/.certs/qq.crt
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = GeoTrust RSA CA 2018
verify return:1
depth=0 C = CN, ST = Guangdong, L = Shenzhen, O = Tencent Technology (Shenzhen) Company Limited, OU = R&D, CN = pop.qq.com
verify return:1
DONE
[hadoop@ruozedata001 ~]# certutil -A -n "GeoTrust SSL CA" -t "C,," -d ~/.certs -i ~/.certs/qq.crt
[hadoop@ruozedata001 ~]# certutil -A -n "GeoTrust Global CA" -t "C,," -d ~/.certs -i ~/.certs/qq.crt\
>
[hadoop@ruozedata001 ~]#
[hadoop@ruozedata001 ~]# certutil -L -d ~/.certs
Certificate Nickname Trust Attributes
SSL,S/MIME,JAR/XPI
GeoTrust SSL CA C,,
[hadoop@ruozedata001 ~]# cd ~/.certs
[hadoop@ruozedata001 .certs]# certutil -A -n "GeoTrust SSL CA - G3" -t "Pu,Pu,Pu" -d ./ -i qq.crt
Notice: Trust flag u is set automatically if the private key is present.
[hadoop@ruozedata001 .certs]# cd ../
6. Configure the mail sender
[root@ruozedata001 ~]# chmod +w /etc/mail.rc
[hadoop@ruozedata001 ~]# vi /etc/mail.rc
#qq
set [email protected]
set smtp=smtps://smtp.qq.com:465
set smtp-auth-user=1056413727
set smtp-auth-password=mzdgegpndvaqbbic
set smtp-auth=login
set ssl-verify=ignore
set nss-config-dir=/home/hadoop/.certs
7. Test
[hadoop@ruozedata001 ~]# echo hello world | mail -s "title" [email protected]
8. Send mail without an attachment
[email protected]
[email protected]
echo -e "`date "+%Y-%m-%d %H:%M:%S"` : The current running $JOB_NAME job num is $RUNNINGNUM in 192.168.137.201 ......" | mail \
-r "From: alertAdmin <${EMAILFROM}>" \
-s "Warn: Skip the new $JOB_NAME spark job." ${EMAILTO}
9. Send mail with an attachment
echo -e "`date "+%Y-%m-%d %H:%M:%S"` : Please to check the fail sql attachement." | mailx \
-r "From: alertAdmin <${EMAILFROM}>" \
-a error.log \
-s "Critical:KSSH fail sql." ${EMAILTO}
- get_hdfs_ha_state.sh
# hdfs getconf -confkey shows the value of a configuration parameter, e.g.:
[root@ruozedata001 tmp]# hdfs getconf -confkey dfs.nameservices
ruozeclusterg7
[hadoop@ruozedata001 shell]$ ./get_hdfs_ha_state.sh
Hostname Namenode_Serviceid Namenode_State
ruozedata002 nn2 active
ruozedata001 nn1 standby
[hadoop@ruozedata001 shell]$ hdfs haadmin -failover nn2 nn1
Failover to NameNode at ruozedata001/172.31.92.236:8020 successful
[hadoop@ruozedata001 shell]$ ./get_hdfs_ha_state.sh
Hostname Namenode_Serviceid Namenode_State
ruozedata001 nn1 active
ruozedata002 nn2 standby
send a mail
- sparkjobtest.sh
00:10 00 JOB1 runs for 12 minutes
10 JOB2 runs for 13 minutes; it overlaps minutes 10-12, so first check whether JOB1 is still running
still running: hold the submission, send a warning mail, then submit it manually
not running: submit
20 JOB3
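The gating logic described above can be sketched as a pure decision function. How the running count is obtained is an assumption here (e.g. yarn application -list piped through grep -c), so it is factored out:

```shell
#!/bin/bash
# Decide whether the next job may be submitted, given how many
# instances of the previous job are still running. In a real
# sparkjobtest.sh the count would come from a cluster query
# (an assumption, not shown in these notes).
decide() {
  local running=$1
  if [ "${running}" -eq 0 ]; then
    echo "submit"
  else
    echo "wait-and-alert"   # send a warning mail, retry later
  fi
}
decide 0   # submit
decide 1   # wait-and-alert
```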