【HDFS】Part 2: HDFS Command-Line Operations

Basic syntax

bin/hadoop fs <command>

Full option list

[daxiong@hadoop hadoop-2.7.2]$ bin/hadoop fs

        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] <localsrc> ... <dst>]
        [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] <path> ...]
        [-cp [-f] [-p] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] <path> ...]
        [-expunge]
        [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getmerge [-nl] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-d] [-h] [-R] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-usage [cmd ...]]

Hands-on with the common commands

  1. Start the Hadoop cluster (to make the later tests easier)

    Start the HDFS daemons
    [daxiong@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
    Start the YARN daemons
    [daxiong@hadoop102 hadoop-2.7.2]$ sbin/start-yarn.sh
  2. -help: print the usage of a command

    
    #For example, view the usage of rm
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -help rm
    
    -rm [-f] [-r|-R] [-skipTrash] <src> ... :
     Delete all files that match the specified file pattern. Equivalent to the Unix
     command "rm <src>"
    
     -skipTrash  option bypasses trash, if enabled, and immediately deletes <src>   
     -f          If the file does not exist, do not display a diagnostic message or 
                 modify the exit status to reflect an error.                        
     -[rR]       Recursively deletes directories     
  3. -ls: list directory contents

    [daxiong@hadoop102 bin]$ ./hadoop fs -ls /
    
    Found 1 items
    drwxrwx---   - daxiong supergroup          0 2018-04-08 22:49 /tmp
  4. -mkdir: create a directory on HDFS

    
    #Create a /daxiong directory
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -mkdir /daxiong
    
    #To create the nested path /daxiong/haha/da/da/da, add the -p flag
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -mkdir -p  /daxiong/haha/da/da/da
    
    
    #Check that the directory was created
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -ls /
    Found 1 items
    drwxr-xr-x   - daxiong supergroup          0 2018-04-10 20:09 /daxiong
    
    #Recursively list to verify the nested directories
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -ls -R /
    drwxr-xr-x   - daxiong supergroup          0 2018-04-10 20:51 /daxiong
    drwxr-xr-x   - daxiong supergroup          0 2018-04-10 20:51 /daxiong/haha
    drwxr-xr-x   - daxiong supergroup          0 2018-04-10 20:54 /daxiong/haha/da
    drwxr-xr-x   - daxiong supergroup          0 2018-04-10 20:54 /daxiong/haha/da/da
    drwxr-xr-x   - daxiong supergroup          0 2018-04-10 20:54 /daxiong/haha/da/da/da
    
  5. -moveFromLocal: cut and paste from the local file system to HDFS (the local copy is removed)
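
    #For example (the local file name is illustrative; unlike -put, the local copy is deleted after the move)

    [daxiong@hadoop102 bin]$ ./hadoop fs -moveFromLocal ./xiaoxiong.txt /daxiong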

  6. -appendToFile: append a local file to the end of an existing HDFS file
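
    #For example, append a local file to an existing HDFS file (both paths are illustrative)

    [daxiong@hadoop102 bin]$ ./hadoop fs -appendToFile ./more.txt /daxiong/daxiong.txt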

  7. -cat: print file contents
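
    #For example (assuming the file was uploaded to HDFS earlier)

    [daxiong@hadoop102 bin]$ ./hadoop fs -cat /daxiong/daxiong.txt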

  8. -tail: print the last kilobyte of a file
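
    #For example, print the last 1KB of a file; adding -f keeps following it as it grows

    [daxiong@hadoop102 bin]$ ./hadoop fs -tail /daxiong/daxiong.txt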

  9. -chgrp, -chmod, -chown: change the group, permissions, and owner of a file, with the same semantics as in a Linux file system
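
    #For example (the owner/group names are illustrative; changing the owner usually requires superuser privileges)

    [daxiong@hadoop102 bin]$ ./hadoop fs -chmod 666 /daxiong/daxiong.txt
    [daxiong@hadoop102 bin]$ ./hadoop fs -chown daxiong:daxiong /daxiong/daxiong.txt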

  10. -copyFromLocal: copy a file from the local file system to an HDFS path
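
    #For example (the local path is illustrative)

    [daxiong@hadoop102 bin]$ ./hadoop fs -copyFromLocal ./daxiong.txt /daxiong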

  11. -copyToLocal: copy from HDFS to the local file system
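
    #For example, download into the current local directory

    [daxiong@hadoop102 bin]$ ./hadoop fs -copyToLocal /daxiong/daxiong.txt ./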

  12. -cp: copy from one HDFS path to another HDFS path
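
    #For example, copy a file within HDFS (the target directory was created in step 4)

    [daxiong@hadoop102 bin]$ ./hadoop fs -cp /daxiong/daxiong.txt /daxiong/haha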

  13. -mv: move files within HDFS
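
    #For example, move a file into another HDFS directory

    [daxiong@hadoop102 bin]$ ./hadoop fs -mv /daxiong/daxiong.txt /daxiong/haha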

  14. -get: equivalent to copyToLocal, i.e. download a file from HDFS to the local machine
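
    #For example, equivalent to -copyToLocal

    [daxiong@hadoop102 bin]$ ./hadoop fs -get /daxiong/daxiong.txt ./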

  15. -getmerge: merge and download multiple files, e.g. when the HDFS directory /aaa/ contains several files: log.1, log.2, log.3, …
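
    #For example, concatenate everything under /aaa into a single local file (paths are illustrative)

    [daxiong@hadoop102 bin]$ ./hadoop fs -getmerge /aaa ./log.sum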

  16. -put: equivalent to copyFromLocal

    
    #Put the local file /opt/moudle/hadoop-2.7.2/daxiong.txt into the /daxiong directory on HDFS
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -put /opt/moudle/hadoop-2.7.2/daxiong.txt /daxiong
  17. -rm: delete files or directories

    
    #Recursively delete the /tmp directory
    
    [daxiong@hadoop102 bin]$ ./hadoop fs -rm -r /tmp
    
    18/04/10 20:08:03 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
    Deleted /tmp
    
  18. -rmdir: delete an empty directory
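
    #For example; this only succeeds if the directory is empty

    [daxiong@hadoop102 bin]$ ./hadoop fs -rmdir /daxiong/haha/da/da/da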

  19. -df: report the free and used space of the file system

    [daxiong@hadoop102 bin]$ ./hadoop fs -df

    Filesystem                    Size    Used    Available  Use%
    hdfs://hadoop101:8020  31304097792  114688  17920761856    0%

  20. -du: report the size of directories and files
    
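    #A -du example: -s prints only the grand total, -h uses human-readable units

    [daxiong@hadoop102 bin]$ ./hadoop fs -du -s -h /daxiong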

  21. -setrep: set the replication factor of files in HDFS
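
    #For example, request 5 replicas for a file (the real replica count is still capped by the number of DataNodes)

    [daxiong@hadoop102 bin]$ ./hadoop fs -setrep 5 /daxiong/daxiong.txt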
