FastDFS under Docker: multiple trackers and multiple storage servers, with upload, download and delete tests

1. Description

  FastDFS is set up with Docker. The base image is season/fastdfs; I made some adjustments to it and renamed the result yxq18509376997/fastdfs:v1.1, which is what I use below. Plain season/fastdfs works just as well.

2. Preparation

1> Pull the image

docker pull season/fastdfs

docker pull yxq18509376997/fastdfs

I use yxq18509376997/fastdfs here.

2> Prepare the directories

  Create the following directories:

/home/docker/fastdfs/storage23000
/home/docker/fastdfs/storage24000
/home/docker/fastdfs/tracker22122
/home/docker/fastdfs/tracker22123
/home/docker/fastdfs/tracker22124
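The layout above can be created in one step; a minimal sketch (brace expansion requires bash):

```shell
# Create the storage and tracker working directories under /home/docker/fastdfs.
mkdir -p /home/docker/fastdfs/storage{23000,24000} \
         /home/docker/fastdfs/tracker{22122,22123,22124}
```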

3> Prepare the tracker configuration files

  Three copies of the configuration file are needed; the only difference between them is port, which is 22122, 22123 and 22124 respectively.

  Place one copy in each of the following directories:
  /home/docker/fastdfs/tracker22122
  /home/docker/fastdfs/tracker22123
  /home/docker/fastdfs/tracker22124
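Since the three copies differ only in the `port` line, the 22123 and 22124 variants can be derived from the first; a sketch, assuming tracker22122/tracker.conf already exists:

```shell
cd /home/docker/fastdfs
for p in 22123 22124; do
  # Rewrite only the port line; everything else stays identical.
  sed "s/^port=22122$/port=$p/" tracker22122/tracker.conf > "tracker$p/tracker.conf"
done
```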

# tracker.conf; configure this file yourself, it is mounted into the
# container under /fdfs_conf

# is this config file disabled
# false for enabled
# true for disabled
disabled=false

# bind an address of this host
# empty for bind all addresses of this host
# commonly used when the host has several IPs but only one should serve
bind_addr=

# the tracker server port
port=22122

# connect timeout in seconds
# default value is 30s
# applies to the connect() socket call
connect_timeout=30

# network timeout in seconds
# default value is 30s
# if data cannot be sent or received within this time, the operation fails
network_timeout=60

# the base path to store data and log files
# the root directory must already exist; subdirectories are created automatically
# layout under ${base_path}:
#   data/storage_groups.dat  : group information
#   data/storage_servers.dat : storage server list
#   logs/trackerd.log        : tracker server log file
base_path=/fastdfs/tracker

# max concurrent connections this server supported
# in V1.x each connection is served by one thread, so this equals the
# worker thread count; in V2.x it is unrelated to the worker thread count
max_connections=256

# accept thread count
# default value is 1
# since V4.07
accept_threads=1

# work thread count, should <= max_connections
# default value is 4
# since V2.00
# usually set to the number of CPU cores
work_threads=4

# the method of selecting group to upload files
# 0: round robin
# 1: specify group
# 2: load balance, select the max free space group to upload file
# bypassed when the client specifies a target group explicitly
store_lookup=2

# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
# ignored for the other store_lookup values
store_group=group1

# which storage server to upload file
# 0: round robin (default)
# 1: the first server order by ip address
# 2: the first server order by priority (the minimal)
# the chosen server becomes the file's source server and pushes the file
# to the other servers in the same group for replication;
# for option 2, the priority is set on the storage server (upload_priority)
store_server=0

# which path(means disk or mount point) of the storage server to upload file
# 0: round robin
# 2: load balance, select the max free space path to upload file
# a storage server may have several base paths (e.g. several disks);
# free space is dynamic, so with option 2 the chosen path may change
store_path=0

# which storage server to download file
# 0: round robin (default)
# 1: the source storage server which the current file uploaded to
download_server=0

# reserved storage space for system or other applications.
# if the free(available) space of any stoarge server in 
# a group <= reserved_storage_space, 
# no file can be uploaded to this group.
# bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
### XX.XX% as ratio such as reserved_storage_space = 10%
# absolute values or percentages are supported (percentages since V4);
# because group members mirror each other, the server with the least
# free space is the effective limit for the whole group
reserved_storage_space = 10%

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

#unix group name to run this program, 
#not set (empty) means run by the group of current user
run_by_group=

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# applies to every connection type, including clients and storage servers
allow_hosts=*

# sync log buff to disk every interval seconds
# default value is 10 seconds
# tracker logs are buffered in memory first, not written to disk immediately
sync_log_buff_interval = 10

# check storage server alive interval seconds
# storage servers send periodic heartbeats; if none arrives within this
# interval, the storage server is considered offline, so this value must be
# larger than the storage heartbeat interval (typically 2 to 3 times larger)
check_active_interval = 120

# thread stack size, should >= 64KB
# default value is 64KB
# the tracker thread stack must be at least 64KB (not 512KB); larger stacks
# use more memory per thread, so lower this if you need more threads
# (max_connections in V1.x, work_threads in V2.0)
thread_stack_size = 64KB

# auto adjust when the ip address of the storage server changed
# default value is true
# the adjustment only takes effect when the storage server process restarts
storage_ip_changed_auto_adjust = true

# storage sync file max delay seconds
# default value is 86400 seconds (one day)
# since V2.00
# max delay for file sync between storage servers; this does not affect the
# sync process itself, it is only a threshold used at download time to judge
# whether a file has finished syncing
storage_sync_file_max_delay = 86400

# the max time of storage sync a file
# default value is 300 seconds
# since V2.00
# likewise only a download-time threshold; it does not affect syncing itself
storage_sync_file_max_time = 300

# if use a trunk file to store several small files
# default value is false
# since V3.00
use_trunk_file = false 

# the min slot size, should <= 4KB
# default value is 256 bytes
# since V3.00
# minimum bytes allocated in a trunk file; a 16-byte file still
# occupies slot_min_size bytes
slot_min_size = 256

# the max slot size, should > slot_min_size
# store the upload file to trunk file when it's size <=  this value
# default value is 16MB
# since V3.00
# files larger than this are stored as standalone files instead
slot_max_size = 16MB

# the trunk file size, should >= 4MB
# default value is 64MB
# since V3.00
# at least 4MB; setting it much larger is not recommended
trunk_file_size = 64MB

# if create trunk file advancely
# default value is false
# since V3.06
# the three trunk_create_file_* parameters below only take effect
# when this is true
trunk_create_file_advance = false

# the time base to create trunk file
# the time format: HH:MM
# default value is 02:00
# since V3.06
trunk_create_file_time_base = 02:00

# the interval of create trunk file, unit: second
# default value is 38400 (one day)
# since V3.06
# set to 86400 to pre-create once per day
trunk_create_file_interval = 86400

# the threshold to create trunk file
# when the free trunk file size less than the threshold, will create 
# the trunk files
# default value is 0
# since V3.06
# e.g. with a 20G threshold and 4GB of free trunk space,
# only 16GB of trunk files need to be created
trunk_create_file_space_threshold = 20G

# if check trunk space occupying when loading trunk free spaces
# the occupied spaces will be ignored
# default value is false
# since V3.09
# NOTICE: set this parameter to true will slow the loading of trunk spaces 
# when startup. you should set this parameter to true when neccessary.
trunk_init_check_occupying = false

# if ignore storage_trunk.dat, reload from trunk binlog
# default value is false
# since V3.10
# set to true once for version upgrade when your version less than V3.10
# by default FastDFS loads free trunk space from the snapshot file
# storage_trunk.dat, whose first line records the trunk binlog offset
# from which loading resumes
trunk_init_reload_from_binlog = false

# if use storage ID instead of IP address
# default value is false
# since V4.00
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# since V4.00
# only needed when use_storage_id is true; the file maps group name,
# server ID and IP address; see conf/storage_ids.conf in the source tree
storage_ids_filename = storage_ids.conf

# id type of the storage server in the filename, values are:
## ip: the ip address of the storage server
## id: the server id of the storage server
# this paramter is valid only when use_storage_id set to true
# default value is ip
# since V4.03
id_type_in_filename = ip

# if store slave file use symbol link
# default value is false
# since V4.01
# if true, a slave file occupies two entries:
# the original file plus a symbolic link pointing to it
store_slave_file_use_link = false

# if rotate the error log every day
# default value is false
# since V4.02
# currently only daily rotation is supported
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
# only effective when rotate_error_log is true
error_log_rotate_time=00:00

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# HTTP port on this tracker server
# the http.* settings below are inactive in the default build; enabling
# them requires uncommenting #WITH_HTTPD=1 and recompiling
http.server_port=8080

# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval=30

# check storage HTTP server alive type, values are:
#   tcp : connect to the storge server with HTTP port only, 
#        do not request and get response
#   http: storage check alive url must return http status 200
# default value is tcp
http.check_alive_type=tcp

# check storage HTTP server alive uri/url
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri=/status.html


4> Prepare the client.conf configuration file

  Previously I configured only tracker.conf and storage.conf; uploads worked fine, but the files could not be found at download time. Downloads only started working after I added client.conf to the tracker containers.

  Place one copy in each of the following directories:
  /home/docker/fastdfs/tracker22122
  /home/docker/fastdfs/tracker22123
  /home/docker/fastdfs/tracker22124
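The same client.conf serves all three trackers, so one copy can simply be fanned out; a sketch, assuming the file was first written to the tracker22122 directory:

```shell
cd /home/docker/fastdfs
for d in tracker22123 tracker22124; do
  # Identical copies; only the mount point differs per container.
  cp tracker22122/client.conf "$d/"
done
```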

# connect timeout in seconds
# default value is 30s
# applies to the connect() socket call
connect_timeout=30

# network timeout in seconds
# default value is 30s
# if data cannot be sent or received within this time, the operation fails
network_timeout=60

# the base path to store log files
base_path=/fastdfs/client

# tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
# the port is required (the client actively connects to the trackers);
# with several tracker servers, list one per line
tracker_server=192.168.47.150:22122
tracker_server=192.168.47.150:22123
tracker_server=192.168.47.150:22124

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# if load FastDFS parameters from tracker server
# since V4.05
# default value is false
load_fdfs_parameters_from_tracker=false

# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V4.05
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V4.05
storage_ids_filename = storage_ids.conf


#HTTP settings
http.tracker_server_port=80

#use "#include" directive to include HTTP other settiongs
##include http.conf


5> Prepare the storage.conf configuration file

  Two copies of this file are needed: one with {port=23000, group_name=group1} and one with {port=24000, group_name=group2}. Place them in the following directories:
  /home/docker/fastdfs/storage23000
  /home/docker/fastdfs/storage24000
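As with the trackers, the second copy differs only in `port` and `group_name`, so it can be generated from the first; a sketch, assuming storage23000/storage.conf already exists:

```shell
cd /home/docker/fastdfs
# Rewrite the two differing lines; everything else stays identical.
sed -e 's/^port=23000$/port=24000/' \
    -e 's/^group_name=group1$/group_name=group2/' \
    storage23000/storage.conf > storage24000/storage.conf
```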

# storage.conf; configure this file yourself, it is mounted into the
# container under /fdfs_conf


# is this config file disabled
# false for enabled
# true for disabled
disabled=false

# the name of the group this storage server belongs to
group_name=group1

# bind an address of this host
# empty for bind all addresses of this host
# commonly used when the host has several IPs but only one should serve
bind_addr=

# if bind an address of this host when connect to other servers 
# (this storage server as a client)
# true for binding the address configed by above parameter: "bind_addr"
# false for binding any address of this host
# only meaningful when bind_addr is set: controls whether this storage
# server binds bind_addr when connecting out to other servers
# (tracker servers, other storage servers)
client_bind=true

# the storage server port
port=23000

# connect timeout in seconds
# default value is 30s
# applies to the connect() socket call
connect_timeout=30

# network timeout in seconds
# default value is 30s
# if data cannot be sent or received within this time, the operation fails
network_timeout=60

# heart beat interval in seconds
# interval for actively sending heartbeats to the tracker server
heart_beat_interval=30

# disk usage report interval in seconds
# interval for reporting free disk space to the tracker server
stat_report_interval=60

# the base path to store data and log files
# the root directory must exist; subdirectories are created automatically
# (note: this is no longer where uploaded files are stored; that changed
# in an earlier version, see store_path0 below)
base_path=/fastdfs/storage

# max concurrent connections the server supported
# default value is 256
# more max_connections means more memory will be used
# in V1.x each connection is served by one thread, so this equals the
# worker thread count; in V2.x it is unrelated to the worker thread count
max_connections=256

# the buff size to recv / send data
# this parameter must more than 8KB
# default value is 64KB
# since V2.00
# queue node buffer size; memory used by the work queue =
# buff_size * max_connections. larger values improve overall throughput,
# but keep total usage below physical RAM (and below 3GB on 32-bit systems)
buff_size = 256KB

# accept thread count
# default value is 1
# since V4.07
accept_threads=1

# work thread count, should <= max_connections
# work thread deal network io
# default value is 4
# since V2.00
# usually set to the number of CPU cores
work_threads=4

# if disk read / write separated
##  false for mixed read and write
##  true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true

# disk reader thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
# with separated IO, total read threads = disk_reader_threads * store_path_count
# with mixed IO, total IO threads =
#   (disk_reader_threads + disk_writer_threads) * store_path_count
disk_reader_threads = 1

# disk writer thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
# with separated IO, total write threads = disk_writer_threads * store_path_count
disk_writer_threads = 1

# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
# sleep this long before re-reading the binlog when there is nothing to
# sync; 0 means retry immediately (not recommended, it burns CPU).
# lower it (e.g. 10ms) if you want syncing to start sooner
sync_wait_msec=50

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval=0

# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time=00:00

# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# the two parameters above define the daily window in which syncing is
# allowed (default: all day); typically used to avoid peak hours
sync_end_time=23:59

# write to the mark file after sync N files
# default value is 500
# flush the storage mark file to disk after syncing N files
# (skipped when the mark file content is unchanged)
write_mark_file_freq=500

# path(disk or mount point) count, default value is 1
# a storage server can store files on several base paths (e.g. several
# disks); usually only one is configured
store_path_count=1

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
# configure store_path0 .. store_path{store_path_count-1};
# if store_path0 is missing, it defaults to base_path
store_path0=/fastdfs/store_path
#store_path1=/home/yuqing/fastdfs2

# subdir_count  * subdir_count directories will be auto created under each 
# store_path (disk), value can be 1 to 256, default value is 256
# FastDFS stores files in two-level directories; with value N,
# N * N subdirectories are created on the first run
subdir_count_per_path=256

# tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
# the port is required (the storage server actively connects to the
# trackers); with several tracker servers, list one per line
tracker_server=192.168.47.150:22122
tracker_server=192.168.47.150:22123
tracker_server=192.168.47.150:22124

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

#unix group name to run this program, 
#not set (empty) means run by the group of current user
run_by_group=

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# IPs allowed to connect to this storage server (does not cover the
# embedded HTTP server); multiple lines are allowed and all take effect
allow_hosts=*

# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributted by hash code
# 0: fill one directory up to file_distribute_rotate_count files,
#    then move on to the next directory
# 1: distribute by the hash code of the file name
file_distribute_path_mode=0

# valid when file_distribute_to_path is set to 0 (round robin), 
# when the written file count reaches this number, then rotate to next path
# default value is 100
file_distribute_rotate_count=100

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes=0

# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
# storage logs are buffered in memory first, not written to disk immediately
sync_log_buff_interval=10

# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
# interval for flushing the binlog (update log) to disk; this affects the
# sync delay of newly uploaded files
sync_binlog_buff_interval=10

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
# (skipped when the stat file content is unchanged)
sync_stat_file_interval=300

# thread stack size, should >= 512KB
# default value is 512KB
# for V1.x the storage thread stack must be at least 512KB; for V2.0,
# 128KB or more is enough. larger stacks use more memory per thread, so
# for V1.x lower this if you need more threads (max_connections)
thread_stack_size=512KB

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
# may be negative; pairs with store_server=2 in tracker.conf
upload_priority=10

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
if_alias_prefix=

# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
# if an identical file already exists, its content is not stored again;
# a symbolic link is created instead to save disk space.
# this requires FastDHT, so install FastDHT before enabling it
check_file_duplicate=0

# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method=hash

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
# the namespace used within FastDHT
key_namespace=FastDFS

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
# persistent connections are worth considering if the FastDHT servers
# have enough connection capacity
keep_alive=0

# you can use "#include filename" (not include double quotes) directive to 
# load FastDHT server list, when the filename is a relative path such as 
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf
# fdht_servers.conf lists the FastDHT servers; it is only consulted when
# check_file_duplicate=1, otherwise it is ignored

# if log to access log
# default value is false
# since V4.00
use_access_log = false

# if rotate the access log every day
# default value is false
# since V4.00
# currently only daily rotation is supported
rotate_access_log = false

# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
# only effective when rotate_access_log is true
access_log_rotate_time=00:00

# if rotate the error log every day
# default value is false
# since V4.02
# currently only daily rotation is supported
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
# only effective when rotate_error_log is true
error_log_rotate_time=00:00

# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record=false

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# use the ip address of this storage server if domain_name is empty,
# else this domain name will ocur in the url redirected by the tracker server
# domain name of the web server on this storage server (usually a
# separately deployed web server), so file URLs can use a domain name;
# leave empty to use the IP address
http.domain_name=

# the port of the web server on this storage server
# if the embedded server cannot keep up on a larger system, switch to a
# standalone web server; lighttpd or nginx work well
http.server_port=8888


6> Create the fdfs-compose.yml file

  Place this file in the following directory:
  /home/docker/fastdfs

version: '3'
services:
   tracker22122:
      image: yxq18509376997/fastdfs
      container_name: tracker22122
      command: tracker
      restart: always
      network_mode: host
      volumes:
        - ./tracker22122/tracker.conf:/fdfs_conf/tracker.conf
        - ./tracker22122/client.conf:/fdfs_conf/client.conf
        - ./tracker22122/data:/fastdfs/tracker/data
   tracker22123:
      image: yxq18509376997/fastdfs
      container_name: tracker22123
      command: tracker
      restart: always
      network_mode: host
      volumes:
        - ./tracker22123/tracker.conf:/fdfs_conf/tracker.conf
        - ./tracker22123/client.conf:/fdfs_conf/client.conf
        - ./tracker22123/data:/fastdfs/tracker/data
   tracker22124:
      image: yxq18509376997/fastdfs
      container_name: tracker22124
      command: tracker
      restart: always
      network_mode: host
      volumes:
        - ./tracker22124/tracker.conf:/fdfs_conf/tracker.conf
        - ./tracker22124/client.conf:/fdfs_conf/client.conf
        - ./tracker22124/data:/fastdfs/tracker/data
        
        
   storage23000:
      image: yxq18509376997/fastdfs
      container_name: storage23000
      command: storage
      restart: always
      network_mode: host
      volumes:
        - ./storage23000/storage.conf:/fdfs_conf/storage.conf
        - ./data/group1/M00:/fastdfs/store_path/data
        - ./storage23000/data:/fastdfs/storage/data
   storage24000:
      image: yxq18509376997/fastdfs
      container_name: storage24000
      command: storage
      restart: always
      network_mode: host
      volumes:
        - ./storage24000/storage.conf:/fdfs_conf/storage.conf
        - ./data/group2/M00:/fastdfs/store_path/data
        - ./storage24000/data:/fastdfs/storage/data
  

3. Startup

Change to the directory:

cd /home/docker/fastdfs

Run:

docker-compose -f fdfs-compose.yml up -d

You should then see five running containers: tracker22122, tracker22123, tracker22124, storage23000 and storage24000.

4. Testing

1> Start a client container

docker run -tid --name fdfs-test --net=host yxq18509376997/fastdfs sh

2> Copy in the client.conf configuration file

docker cp /home/docker/fastdfs/tracker22122/client.conf fdfs-test:/fdfs_conf/

3> Enter the fdfs-test container

docker exec -it fdfs-test /bin/bash

4> Create a test txt file

echo haha > test.txt

5> Test file upload

fdfs_upload_file /fdfs_conf/client.conf /test.txt

This may fail with an error (screenshot omitted). If it does, run the following inside the container:

mkdir /fastdfs/client

then re-run the upload command; it should now print the file ID of the uploaded file (screenshots omitted).
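On success, fdfs_upload_file prints a file ID like the one used in the next step. The part before the first slash is the group name; the remainder is the remote path that the download and delete commands expect. A quick way to split it with shell parameter expansion (the concrete ID below is just the one from this test run):

```shell
FILE_ID='group1/M00/00/00/wKgvll2FvIOAAejZAAAABRvwYuU534.txt'
GROUP=${FILE_ID%%/*}    # text before the first "/"
REMOTE=${FILE_ID#*/}    # text after the first "/"
echo "$GROUP"           # group1
echo "$REMOTE"          # M00/00/00/wKgvll2FvIOAAejZAAAABRvwYuU534.txt
```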

6> Download the file

fdfs_download_file /fdfs_conf/client.conf group1/M00/00/00/wKgvll2FvIOAAejZAAAABRvwYuU534.txt

The file is downloaded into the current directory (screenshot omitted).

7> Delete the file

fdfs_delete_file /fdfs_conf/client.conf group1/M00/00/00/wKgvll2FvIOAAejZAAAABRvwYuU534.txt

Checking the file's original location on the storage server confirms that it has been deleted (screenshot omitted).

Note: for uploading, downloading, deleting, setting file metadata, uploading thumbnails by group and so on from Java, see the material available online about the fastdfs-client.jar client, or contact me at [email protected].

This material is for learning purposes only.
