Problems encountered when uninstalling Ambari

You can follow the blog posts online for uninstalling Ambari and deleting its files, together with the HDP files.

Remember to run yum remove ambari-server on the master node,

and yum remove ambari-agent on the worker nodes.

Error message:

==========================
Creating target directory...
==========================

Command start time 2018-10-10 11:11:40

Connection to node02 closed.
SSH command execution finished
host=node02, exitcode=0
Command end time 2018-10-10 11:11:40

Command start time 2018-10-10 11:11:40
/usr/sbin/ambari-agent: line 97: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 99: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 100: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 101: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 102: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 103: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 104: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 105: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 190: /var/log/ambari-agent/ambari-agent.out: No such file or directory
tail: cannot open `/var/log/ambari-agent/ambari-agent.log' for reading: No such file or directory
tail: cannot open `/var/log/ambari-agent/ambari-agent.log' for reading: No such file or directory
tail: cannot open `/var/log/ambari-agent/ambari-agent.log' for reading: No such file or directory


Connection to node02 closed.
SSH command execution finished
host=node02, exitcode=255
Command end time 2018-10-10 11:11:46

ERROR: Bootstrap of host node02 fails because previous action finished with non-zero exit code (255)
ERROR MESSAGE: tcgetattr: Invalid argument
Connection to node02 closed.

STDOUT: /usr/sbin/ambari-agent: line 97: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 99: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 100: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 101: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 102: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 103: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 104: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 105: ambari-sudo.sh: command not found
/usr/sbin/ambari-agent: line 190: /var/log/ambari-agent/ambari-agent.out: No such file or directory
tail: cannot open `/var/log/ambari-agent/ambari-agent.log' for reading: No such file or directory
tail: cannot open `/var/log/ambari-agent/ambari-agent.log' for reading: No such file or directory
tail: cannot open `/var/log/ambari-agent/ambari-agent.log' for reading: No such file or directory


Connection to node02 closed.

 

The cause of the error: on the worker node the Ambari files had been deleted, but yum remove ambari-agent was never executed there, so a stale /usr/sbin/ambari-agent script remained that still calls the now-missing ambari-sudo.sh.
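
If you run into this, a quick check and fix on each worker node looks roughly like the following (a minimal sketch; package names can vary slightly between Ambari versions):

# List any Ambari packages still registered with rpm.
rpm -qa | grep -i ambari
# Remove the agent package cleanly, which also drops /usr/sbin/ambari-agent.
yum remove -y ambari-agent
# This should now report "No such file or directory".
ls /usr/sbin/ambari-agent

After that, retry host registration from the Ambari install wizard.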

Reference blog for the uninstall: https://blog.csdn.net/github_38358734/article/details/79029692

The article below is reposted from https://imaidata.github.io/blog/uninstall_hdp_ambari/
for personal study and future reference; please credit the original author when reposting.

Overview:

Completely uninstall HDP without having to reinstall the operating system, and leave the environment ready for an automated install of HDP 2.6.

Article:

After a failed HDP upgrade I had to completely wipe HDP 2.4 and Ambari 2.5 and install HDP 2.6. I wanted to avoid reinstalling the operating system, so I carried out the following steps.

1. Stop all services in Ambari, or kill them

All services can be stopped from the Ambari console. In my case Ambari had corrupted its database during the downgrade and could not start, so I killed all processes on all nodes by hand:

$ ps -u hdfs    (shows the list of all the service processes)
$ kill PID
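
A quicker variant is to loop over the usual HDP service accounts and kill everything they own (a rough sketch; the user list is an assumption and should be trimmed to the services actually installed on the node):

# Kill every process owned by the common HDP service users on this node.
# Adjust the list to match what is actually deployed.
for u in hdfs yarn mapred hive hbase zookeeper oozie storm kafka ams ambari-qa; do
    pkill -u "$u" && echo "killed processes owned by $u"
done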

2. Run the Python cleanup script on all cluster nodes

$ python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users
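
With passwordless SSH from the Ambari host, the same script can be run on every node in one pass (a sketch; the hosts.txt file holding one hostname per line is an assumption):

# Run HostCleanup.py on each host listed in hosts.txt (one hostname per line).
# ssh -n keeps ssh from consuming the rest of hosts.txt on stdin.
while read -r h; do
    ssh -n "$h" "python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users"
done < hosts.txt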

3. Remove the Hadoop packages on all nodes

yum remove hive\*
yum remove oozie\*
yum remove pig\*
yum remove zookeeper\*
yum remove tez\*
yum remove hbase\*
yum remove ranger\*
yum remove knox\*
yum remove storm\*
yum remove accumulo\*
yum remove falcon\*
yum remove ambari-metrics-hadoop-sink 
yum remove smartsense-hst
yum remove slider_2_4_2_0_258
yum remove ambari-metrics-monitor
yum remove spark2_2_5_3_0_37-yarn-shuffle
yum remove spark_2_5_3_0_37-yarn-shuffle
yum remove ambari-infra-solr-client
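
Before moving on, it can help to list whatever HDP/Ambari-related packages are still installed (a quick extra check, not part of the original procedure) and remove any stragglers the same way:

# Show remaining Ambari/HDP/Hadoop packages, if any.
rpm -qa | grep -Ei 'ambari|hdp|hadoop'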

4. Remove ambari-server (on the Ambari host) and ambari-agent (on all nodes)

ambari-server stop
ambari-agent stop
yum erase ambari-server
yum erase ambari-agent

5. Remove the repositories on all nodes

rm -rf /etc/yum.repos.d/ambari.repo /etc/yum.repos.d/HDP*
yum clean all
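
Afterwards the Ambari and HDP repositories should no longer show up (a quick sanity check):

# No output here means the Ambari/HDP repos are gone.
yum repolist | grep -Ei 'ambari|hdp'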

6. Remove the log folders on all nodes

rm -rf /var/log/ambari-agent
rm -rf /var/log/ambari-metrics-grafana
rm -rf /var/log/ambari-metrics-monitor
rm -rf /var/log/ambari-server/
rm -rf /var/log/falcon
rm -rf /var/log/flume
rm -rf /var/log/hadoop
rm -rf /var/log/hadoop-mapreduce
rm -rf /var/log/hadoop-yarn
rm -rf /var/log/hive
rm -rf /var/log/hive-hcatalog
rm -rf /var/log/hive2
rm -rf /var/log/hst
rm -rf /var/log/knox
rm -rf /var/log/oozie
rm -rf /var/log/solr
rm -rf /var/log/zookeeper

7. Remove the Hadoop folders on all nodes, including the HDFS data

rm -rf /hadoop/*
rm -rf /hdfs/hadoop
rm -rf /hdfs/lost+found
rm -rf /hdfs/var
rm -rf /local/opt/hadoop
rm -rf /tmp/hadoop
rm -rf /usr/bin/hadoop
rm -rf /usr/hdp
rm -rf /var/hadoop

8. Remove the configuration folders on all nodes

rm -rf /etc/ambari-agent
rm -rf /etc/ambari-metrics-grafana
rm -rf /etc/ambari-server
rm -rf /etc/ams-hbase
rm -rf /etc/falcon
rm -rf /etc/flume
rm -rf /etc/hadoop
rm -rf /etc/hadoop-httpfs
rm -rf /etc/hbase
rm -rf /etc/hive 
rm -rf /etc/hive-hcatalog
rm -rf /etc/hive-webhcat
rm -rf /etc/hive2
rm -rf /etc/hst
rm -rf /etc/knox 
rm -rf /etc/livy
rm -rf /etc/mahout 
rm -rf /etc/oozie
rm -rf /etc/phoenix
rm -rf /etc/pig 
rm -rf /etc/ranger-admin
rm -rf /etc/ranger-usersync
rm -rf /etc/spark2
rm -rf /etc/tez
rm -rf /etc/tez_hive2
rm -rf /etc/zookeeper

9. Remove the PID folders on all nodes

rm -rf /var/run/ambari-agent
rm -rf /var/run/ambari-metrics-grafana
rm -rf /var/run/ambari-server
rm -rf /var/run/falcon
rm -rf /var/run/flume
rm -rf /var/run/hadoop 
rm -rf /var/run/hadoop-mapreduce
rm -rf /var/run/hadoop-yarn
rm -rf /var/run/hbase
rm -rf /var/run/hive
rm -rf /var/run/hive-hcatalog
rm -rf /var/run/hive2
rm -rf /var/run/hst
rm -rf /var/run/knox
rm -rf /var/run/oozie 
rm -rf /var/run/webhcat
rm -rf /var/run/zookeeper

10. Remove the library folders on all nodes

rm -rf /usr/lib/ambari-agent
rm -rf /usr/lib/ambari-infra-solr-client
rm -rf /usr/lib/ambari-metrics-hadoop-sink
rm -rf /usr/lib/ambari-metrics-kafka-sink
rm -rf /usr/lib/ambari-server-backups
rm -rf /usr/lib/ams-hbase
rm -rf /usr/lib/mysql
rm -rf /var/lib/ambari-agent
rm -rf /var/lib/ambari-metrics-grafana
rm -rf /var/lib/ambari-server
rm -rf /var/lib/flume
rm -rf /var/lib/hadoop-hdfs
rm -rf /var/lib/hadoop-mapreduce
rm -rf /var/lib/hadoop-yarn 
rm -rf /var/lib/hive2
rm -rf /var/lib/knox
rm -rf /var/lib/smartsense
rm -rf /var/lib/storm

11. Clear the /var/tmp/* folders on all nodes

rm -rf /var/tmp/*

12. Remove HST from cron on all nodes

The crontab entries to remove are the ones that invoke these scripts:

/usr/hdp/share/hst/bin/hst-scheduled-capture.sh sync
/usr/hdp/share/hst/bin/hst-scheduled-capture.sh
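
One way to find and strip them (a sketch; it assumes the entries live in root's crontab, they may be installed for another user):

# Show any HST capture entries, then rewrite the crontab without them.
crontab -l | grep hst-scheduled-capture
crontab -l | grep -v hst-scheduled-capture | crontab -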

13. Remove the databases. Delete the MySQL and Postgres instances so that Ambari can install and configure fresh databases.

yum remove mysql mysql-server
yum erase postgresql
rm -rf /var/lib/pgsql
rm -rf /var/lib/mysql

14. Remove the symbolic links on all nodes.

In particular, check the folders /usr/sbin and /usr/lib/python2.6/site-packages (a sketch for spotting leftover links follows the list below).

cd /usr/bin
rm -rf accumulo
rm -rf atlas-start
rm -rf atlas-stop
rm -rf beeline
rm -rf falcon
rm -rf flume-ng
rm -rf hbase
rm -rf hcat
rm -rf hdfs
rm -rf hive
rm -rf hiveserver2
rm -rf kafka
rm -rf mahout
rm -rf mapred
rm -rf oozie
rm -rf oozied.sh
rm -rf phoenix-psql
rm -rf phoenix-queryserver
rm -rf phoenix-sqlline
rm -rf phoenix-sqlline-thin
rm -rf pig
rm -rf python-wrap
rm -rf ranger-admin
rm -rf ranger-admin-start
rm -rf ranger-admin-stop
rm -rf ranger-kms
rm -rf ranger-usersync
rm -rf ranger-usersync-start
rm -rf ranger-usersync-stop
rm -rf slider
rm -rf sqoop
rm -rf sqoop-codegen
rm -rf sqoop-create-hive-table
rm -rf sqoop-eval
rm -rf sqoop-export
rm -rf sqoop-help
rm -rf sqoop-import
rm -rf sqoop-import-all-tables
rm -rf sqoop-job
rm -rf sqoop-list-databases
rm -rf sqoop-list-tables
rm -rf sqoop-merge
rm -rf sqoop-metastore
rm -rf sqoop-version
rm -rf storm
rm -rf storm-slider
rm -rf worker-lanucher
rm -rf yarn
rm -rf zookeeper-client
rm -rf zookeeper-server
rm -rf zookeeper-server-cleanup
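
Instead of relying only on a fixed list, you can also search for symlinks that still point into the removed /usr/hdp tree (a sketch; review the output before deleting anything, e.g. by appending -delete once the list looks right):

# Symlinks in /usr/bin and /usr/sbin whose targets live under the now-deleted /usr/hdp.
find /usr/bin /usr/sbin -type l -lname '/usr/hdp/*'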

15. Remove the service users on all nodes

userdel -r accumulo
userdel -r ambari-qa
userdel -r ams
userdel -r falcon
userdel -r flume
userdel -r hbase
userdel -r hcat
userdel -r hdfs
userdel -r hive
userdel -r kafka
userdel -r knox
userdel -r mapred
userdel -r oozie
userdel -r ranger
userdel -r spark
userdel -r sqoop
userdel -r storm
userdel -r tez
userdel -r yarn
userdel -r zeppelin
userdel -r zookeeper
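
To double-check that none of the service accounts listed above survived (a quick check; the name list mirrors the userdel calls):

# Any output here means that account still exists.
for u in accumulo ambari-qa ams falcon flume hbase hcat hdfs hive kafka knox \
         mapred oozie ranger spark sqoop storm tez yarn zeppelin zookeeper; do
    id "$u" 2>/dev/null
done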

16. Run find / -name ** on all nodes

You will certainly find more files and folders; delete them (a looped variant of these searches follows the list).

find / -name '*ambari*'
find / -name '*accumulo*'
find / -name '*atlas*'
find / -name '*beeline*'
find / -name '*falcon*'
find / -name '*flume*'
find / -name '*hadoop*'
find / -name '*hbase*'
find / -name '*hcat*'
find / -name '*hdfs*'
find / -name '*hdp*'
find / -name '*hive*'
find / -name '*hiveserver2*'
find / -name '*kafka*'
find / -name '*mahout*'
find / -name '*mapred*'
find / -name '*oozie*'
find / -name '*phoenix*'
find / -name '*pig*'
find / -name '*ranger*'
find / -name '*slider*'
find / -name '*sqoop*'
find / -name '*storm*'
find / -name '*yarn*'
find / -name '*zookeeper*'
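
The same searches can be run in one loop (a sketch; as above, inspect the hits before removing anything):

# Search for leftovers of each component; review every hit before deleting it.
for name in ambari accumulo atlas beeline falcon flume hadoop hbase hcat hdfs hdp \
            hive hiveserver2 kafka mahout mapred oozie phoenix pig ranger slider \
            sqoop storm yarn zookeeper; do
    find / -name "*${name}*" 2>/dev/null
done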

17. Reboot all nodes

reboot