[Study Notes] Hadoop: Common HDFS Shell Commands

1. The hadoop command

$ hadoop
	fs                   run a generic filesystem user client
		# access the filesystem; equivalent to hdfs dfs
	version              print the version
	jar <jar>            run a jar file
		# run a jar, e.g. submit a job to YARN
	checknative [-a|-h]  check native hadoop and compression libraries availability
		# check availability of native Hadoop and compression libraries
	distcp <srcurl> <desturl> copy file or directories recursively
		# copy/back up HDFS files across clusters; mostly used for operations work
	archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
	classpath            prints the class path needed to get the
	                     Hadoop jar and the required libraries
		# the class path Hadoop loads at startup
	credential           interact with credential providers
	daemonlog            get/set the log level for each daemon
	trace                view and modify Hadoop tracing settings
	CLASSNAME            run the class named CLASSNAME
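
A few of these in action (a minimal sketch; my-app.jar, com.example.WordCount, the nn1/nn2 hosts, and all paths are placeholders for your own environment):

$ hadoop version                    # print version and build information
$ hadoop checknative -a             # check all native libraries, e.g. zlib, snappy
$ hadoop fs -ls /                   # same as: hdfs dfs -ls /
$ hadoop jar my-app.jar com.example.WordCount /input /output
                                    # submit a jar (e.g. a MapReduce job) to YARN
$ hadoop distcp hdfs://nn1:8020/src hdfs://nn2:8020/dest
                                    # recursively copy a directory between clusters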

2. The hdfs command

$ hdfs
Usage: hdfs [--config confdir] COMMAND
	     where COMMAND is one of:
	dfs                  run a filesystem command on the file systems supported in Hadoop.
		# access the filesystem
	namenode -format     format the DFS filesystem
		# format the filesystem; normally used only when initializing a brand-new cluster. Avoid it afterwards: reformatting an existing filesystem can leave the cluster unable to start
	secondarynamenode    run the DFS secondary namenode
	namenode             run the DFS namenode
	journalnode          run the DFS journalnode
	zkfc                 run the ZK Failover Controller daemon
	datanode             run a DFS datanode
	dfsadmin             run a DFS admin client
	haadmin              run a DFS HA admin client
	fsck                 run a DFS filesystem checking utility
		# check filesystem health; reports block status, including corrupt and missing blocks
		# for detailed usage see the companion post "[Study Notes] Best Practices for HDFS Block Corruption Recovery (with exercises)": https://blog.csdn.net/eryehong/article/details/95167059
	balancer             run a cluster balancing utility
		# rebalance block distribution across the cluster's nodes; best run while the cluster is relatively idle, otherwise it can slow file reads and writes
	jmxget               get JMX exported values from NameNode or DataNode.
	mover                run a utility to move block replicas across
	                     storage types
	oiv                  apply the offline fsimage viewer to an fsimage
	oiv_legacy           apply the offline fsimage viewer to an legacy fsimage
	oev                  apply the offline edits viewer to an edits file
	fetchdt              fetch a delegation token from the NameNode
	getconf              get config values from configuration
		# view the currently effective values of configuration entries
	groups               get the groups which users belong to
	snapshotDiff         diff two snapshots of a directory or diff the
	                     current directory contents with a snapshot
	lsSnapshottableDir   list all snapshottable dirs owned by the current user
	                     Use -help to see options
	portmap              run a portmap service
	nfs3                 run an NFS version 3 gateway
	cacheadmin           configure the HDFS cache
	crypto               configure HDFS encryption zones
	storagepolicies      list/get/set block storage policies
	version              print the version
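
Typical usage of the administrative subcommands above (the paths and the 10% threshold are only illustrative):

$ hdfs fsck / -files -blocks -locations    # check filesystem health and report block status
$ hdfs dfsadmin -report                    # summarize datanodes, capacity and remaining space
$ hdfs getconf -confKey dfs.replication    # read the effective value of one configuration key
$ hdfs balancer -threshold 10              # rebalance until every node is within 10% of average usage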

3. The hdfs dfs command

$ hdfs dfs
Usage: hadoop fs [generic options]
	[-appendToFile <localsrc> ... <dst>]
	[-cat [-ignoreCrc] <src> ...]
		# print the contents of HDFS files
	[-checksum <src> ...]
	[-chgrp [-R] GROUP PATH...]
		# change the group of HDFS files
	[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
		# change HDFS file permissions
	[-chown [-R] [OWNER][:[GROUP]] PATH...]
		# change the owner of HDFS files
	[-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
		# copy local files to HDFS; equivalent to -put
	[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
		# copy HDFS files to the local filesystem; equivalent to -get
	[-count [-q] [-h] [-v] <path> ...]
	[-cp [-f] [-p | -p[topax]] <src> ... <dst>]
		# copy HDFS files to another HDFS location
	[-createSnapshot <snapshotDir> [<snapshotName>]]
	[-deleteSnapshot <snapshotDir> <snapshotName>]
	[-df [-h] [<path> ...]]
		# show HDFS free and used space
	[-du [-s] [-h] <path> ...]
		# show the space consumed by HDFS files and directories
	[-expunge]
	[-find <path> ... <expression> ...]
	[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
		# copy HDFS files to the local filesystem
	[-getfacl [-R] <path>]
	[-getfattr [-R] {-n name | -d} [-e en] <path>]
	[-getmerge [-nl] <src> <localdst>]
	[-help [cmd ...]]
	[-ls [-d] [-h] [-R] [<path> ...]]
	[-mkdir [-p] <path> ...]
	[-moveFromLocal <localsrc> ... <dst>]
	[-moveToLocal <src> <localdst>]
	[-mv <src> ... <dst>]
	[-put [-f] [-p] [-l] <localsrc> ... <dst>]
		# copy local files to HDFS
	[-renameSnapshot <snapshotDir> <oldName> <newName>]
	[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
	[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
	[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
	[-setfattr {-n name [-v value] | -x name} <path>]
	[-setrep [-R] [-w] <rep> <path> ...]
	[-stat [format] <path> ...]
	[-tail [-f] <file>]
	[-test -[defsz] <path>]
	[-text [-ignoreCrc] <src> ...]
	[-touchz <path> ...]
	[-usage [cmd ...]]
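
A representative round trip through these commands (/user/hadoop/demo and local.txt are placeholder paths):

$ hdfs dfs -mkdir -p /user/hadoop/demo               # create the directory, including parents
$ hdfs dfs -put local.txt /user/hadoop/demo/         # upload a local file
$ hdfs dfs -ls -h /user/hadoop/demo                  # list with human-readable sizes
$ hdfs dfs -cat /user/hadoop/demo/local.txt          # print the file contents
$ hdfs dfs -setrep -w 2 /user/hadoop/demo/local.txt  # set replication to 2 and wait until done
$ hdfs dfs -rm -r -skipTrash /user/hadoop/demo       # delete recursively, bypassing the trash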

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.
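
These generic options combine with any of the dfs commands above. A sketch (big.log, nn2, and the jar/class names are placeholders; note that -libjars/-files/-archives only take effect when the jar's main class parses generic options, e.g. via ToolRunner):

$ hdfs dfs -D dfs.replication=2 -put big.log /tmp/   # override one property for a single command
$ hdfs dfs -fs hdfs://nn2:8020 -ls /                 # run against a different namenode
$ hadoop jar my-app.jar com.example.Job -libjars extra.jar /in /out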