An earlier post already covered the theory behind HA; this one focuses on hands-on HA operations. HA theory: https://blog.csdn.net/czz1141979570/article/details/104856251
NN failover:
Before the switch, the normal state is: hadoop101: active, hadoop102: standby.
Now intervene manually by killing the active NameNode process with kill -9:
The test succeeds; afterwards, restart the NN on hadoop101.
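The drill above can be sketched as a short script. The `jps` listing is stubbed here (with a made-up pid) so the pid-extraction step runs standalone; on a real node you would use the actual `jps` command, and the follow-up commands shown in comments.

```shell
# Hypothetical failover drill, as run on hadoop101 (the active NN).
# jps output is stubbed so the pid-parsing step works standalone;
# on a real node, replace jps_output with the real `jps` command.
jps_output() {
  printf '4132 NameNode\n4270 DataNode\n4521 Jps\n'
}

# Extract the NameNode pid from the jps listing.
pid_of_namenode() {
  jps_output | awk '$2 == "NameNode" {print $1}'
}

pid=$(pid_of_namenode)
echo "NameNode pid on this host: $pid"

# On a real cluster the drill then continues:
#   kill -9 "$pid"                      # simulate an NN crash on hadoop101
#   hdfs haadmin -getServiceState nn2   # hadoop102 should now report: active
#   hadoop-daemon.sh start namenode     # bring nn1 back; it rejoins as standby
```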
haadmin -getServiceState:
In other words, it reports the state of a NameNode identified by its service id.
[hadoop@hadoop101 sbin]$ hdfs haadmin -getServiceState nn1
20/03/19 19:16:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
standby
[hadoop@hadoop102 ~]$ hdfs haadmin -getServiceState nn2
20/03/19 19:25:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
active
Of course, you can also check the state of every NN node from a single machine.
This command is mainly useful for shell scripts that monitor the running state of NN1 and NN2; a dedicated post on that will follow.
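A minimal sketch of such a monitor, using the service ids nn1/nn2 from this cluster. The haadmin call is stubbed (returning the states shown above) so the loop runs standalone; on a real cluster it would invoke `hdfs haadmin -getServiceState`.

```shell
# Sketch of an NN state monitor. get_nn_state is stubbed for demonstration;
# on a real cluster it would run: hdfs haadmin -getServiceState "$1"
get_nn_state() {
  case "$1" in
    nn1) echo "standby" ;;   # matches the states shown above
    nn2) echo "active"  ;;
  esac
}

active_count=0
for nn in nn1 nn2; do
  state=$(get_nn_state "$nn")
  echo "$nn: $state"
  if [ "$state" = "active" ]; then
    active_count=$((active_count + 1))
  fi
done

# Exactly one NN should be active at any time.
if [ "$active_count" -ne 1 ]; then
  echo "ALERT: expected exactly 1 active NN, found $active_count" >&2
fi
```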
hdfs getconf (get config values from configuration):
In other words, given a key it returns the corresponding value, read dynamically from the XML configuration of the running cluster. This command is also mainly used for monitoring, to be covered later; first, here is how to use it:
[hadoop@hadoop101 sbin]$ hdfs getconf
20/03/19 19:31:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hdfs getconf is utility for getting configuration information from the config file.
hadoop getconf
[-namenodes] gets list of namenodes in the cluster.
[-secondaryNameNodes] gets list of secondary namenodes in the cluster.
[-backupNodes] gets list of backup nodes in the cluster.
[-includeFile] gets the include file path that defines the datanodes that can join the cluster.
[-excludeFile] gets the exclude file path that defines the datanodes that need to decommissioned.
[-nnRpcAddresses] gets the namenode rpc addresses
[-confKey [key]] gets a specific key from the configuration
For Example:
[hadoop@hadoop101 sbin]$ hdfs getconf -confKey dfs.nameservices
20/03/19 19:33:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
liuyi
[hadoop@hadoop101 sbin]$ hdfs getconf -confKey dfs.blocksize
20/03/19 19:33:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
134217728
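Because getconf reads the live configuration, a script can discover cluster settings instead of hard-coding them. A sketch follows; the lookups are stubbed with the values returned above, and on a real node you would call `hdfs getconf -confKey` directly.

```shell
# Stubbed getconf lookups, mirroring the results above; on a real cluster:
#   hdfs getconf -confKey "$1"
getconf_key() {
  case "$1" in
    dfs.nameservices) echo "liuyi" ;;
    dfs.blocksize)    echo "134217728" ;;
  esac
}

ns=$(getconf_key dfs.nameservices)
bs=$(getconf_key dfs.blocksize)

# 134217728 bytes = 128 MB, the HDFS default block size.
echo "nameservice=$ns blocksize=$((bs / 1024 / 1024))MB"
```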
hdfs fsck:
HDFS provides the fsck command for checking the health of files and directories on HDFS, and for retrieving the block information and block locations of files.
The main options:
-move: move corrupted files to the /lost+found directory
-delete: delete corrupted files. Note: use with caution!
-openforwrite: print files that are currently open for write during the check
-list-corruptfileblocks: print corrupt blocks and the files they belong to
-files: print the files being checked
-blocks: print a detailed block report (must be used together with -files)
-locations: print block location information (must be used together with -files)
-racks: print the rack of each block location (must be used together with -files)
For example, to see exactly how the blocks of a given HDFS file are distributed, you can write:
hadoop fsck /your_file_path -files -blocks -locations -racks
For example:
[hadoop@hadoop101 sbin]$ hdfs fsck /
Connecting to namenode via http://hadoop102:50070/fsck?ugi=hadoop&path=%2F
FSCK started by hadoop (auth:SIMPLE) from /192.168.1.101 for path / at Thu Mar 19 19:59:42 CST 2020
Status: HEALTHY
Total size: 0 B
Total dirs: 7
Total files: 0
Total symlinks: 0
Total blocks (validated): 0
Minimally replicated blocks: 0
Over-replicated blocks: 0
Under-replicated blocks: 0
Mis-replicated blocks: 0
Default replication factor: 3
Average block replication: 0.0
Corrupt blocks: 0
Missing replicas: 0
Number of data-nodes: 2
Number of racks: 1
FSCK ended at Thu Mar 19 19:59:42 CST 2020 in 3 milliseconds
Focus mainly on the highlighted (red) fields.
This is only a brief introduction; a later post will replay a failure-recovery scenario with fsck.
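For monitoring, the report above can be parsed for its key fields. A sketch, with the fsck report stubbed using the output shown above; on a real cluster you would pipe `hdfs fsck /` instead.

```shell
# Stubbed fsck report (on a real cluster: hdfs fsck /).
fsck_report() {
  printf 'Status: HEALTHY\n Total size: 0 B\n Corrupt blocks: 0\n Missing replicas: 0\n'
}

# Pull out the overall status and the corrupt-block count.
status=$(fsck_report | awk -F': ' '$1 == "Status" {print $2}')
corrupt=$(fsck_report | awk -F': ' '/Corrupt blocks/ {print $2}')

echo "status=$status corrupt=$corrupt"
if [ "$status" != "HEALTHY" ] || [ "$corrupt" -ne 0 ]; then
  echo "ALERT: HDFS needs attention" >&2
fi
```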