Geektime Ops Advanced Training Camp, Week 11 Assignment

1. Understand the characteristics and use cases of object storage

Object storage characteristics
Follows the AWS S3 API standard; no mounting required
All data lives at the same level in a flat address space, and an application identifies each individual object by a unique address
Each object can carry metadata that helps with retrieval
Data is read and written through a RESTful interface
RadosGW object storage gateway overview:
RadosGW is one access implementation of object storage (OSS, Object Storage Service); it is also called the Ceph object gateway, RadosGW, or RGW
It lets clients use standard object storage APIs to access a Ceph cluster; both the AWS S3 and Swift APIs are supported
RadosGW storage characteristics
Data is stored as objects through the object storage gateway; besides the data itself, every object also carries its own metadata
Objects are retrieved by object ID and can only be accessed through the API or a third-party client
Objects are stored in a flat namespace; S3 calls this flat namespace a bucket, Swift calls it a container
Namespaces cannot be nested
A bucket must be authorized before it can be accessed; one account can be granted access to several buckets, each with different permissions
Convenient horizontal scaling and fast data retrieval
Client-side mounting is not supported, and the client must specify the object name when accessing it
Well suited to write-once, read-many workloads
Ceph uses the bucket as its storage container to implement object storage and multi-tenant isolation: data is stored in buckets, user permissions are also granted per bucket, and users can be given different permissions on different buckets for access control
Bucket characteristics
Every object must belong to a bucket; bucket attributes can be set and changed to control region, access permissions, lifecycle, and so on
The inside of a bucket is flat: there is no file-system concept of directories, and every object belongs directly to its bucket
Each user can own several buckets
A bucket name must be globally unique within the OSS and cannot be changed after creation
There is no limit on the number of objects a bucket can hold
Reference
S3 provides user, bucket, and object, standing for the user, the bucket, and the object respectively; a bucket belongs to a user, per-bucket namespace access permissions can be set for a user, and different users may be allowed to access the same bucket
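Because bucket names must be globally unique and follow S3's naming restrictions, it can help to sanity-check a candidate name locally before calling the API. A minimal sketch; the rules encoded here (3-63 characters of lowercase letters, digits, and hyphens, starting and ending alphanumeric) are the common S3 ones:

```shell
# Check a proposed bucket name against the common S3 naming rules:
# 3-63 characters, lowercase letters / digits / hyphens only,
# must start and end with a letter or digit.
valid_bucket_name() {
  echo "$1" | grep -Eq '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$'
}

valid_bucket_name "magedu"    && echo "magedu: ok"
valid_bucket_name "My_Bucket" || echo "My_Bucket: invalid"
```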

2. Deploy the radosgw gateway on two hosts for a highly available setup

The gateway listens on port 7480
Ubuntu install command
apt install -y radosgw
CentOS install command
yum install -y ceph-radosgw
Create the gateway from the deploy node (repeat for the second gateway host to get two instances for HA)
ceph-deploy rgw create ceph-mgr2
ceph -s
Deploy the load balancer
Install keepalived
apt install -y keepalived
find / -name "keep*"
cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/
tee /etc/keepalived/keepalived.conf  << "EOF"
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.6.188 dev eth0 label eth0:0
    }
}
EOF
systemctl  restart keepalived.service
systemctl  enable keepalived.service
ip a
ping 172.31.6.188
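For failover to actually work, the second keepalived node needs the same instance definition with only state and priority changed. A sketch for the standby host; virtual_router_id, the authentication block, and the VIP must match the MASTER config above:

```
! /etc/keepalived/keepalived.conf on the BACKUP node (sketch)
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    garp_master_delay 10
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.6.188 dev eth0 label eth0:0
    }
}
```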
Install haproxy
apt install -y haproxy
tee -a /etc/haproxy/haproxy.cfg << "EOF"
listen ceph-rgw-7480
  bind 172.31.6.188:80
  mode tcp
  server rgw1 172.31.6.103:7480 check inter 2s fall 3 rise 3
  server rgw2 172.31.6.104:7480 check inter 2s fall 3 rise 3
EOF
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl restart haproxy.service
systemctl enable haproxy.service
netstat  -ntlp
curl http://172.31.6.188
curl http://rgw.iclinux.com
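The curl against rgw.iclinux.com only works if that name resolves to the VIP; without DNS, a hosts entry on each client is enough (a sketch, the VIP being this lab's 172.31.6.188):

```
# /etc/hosts on any client that reaches the gateway by name
172.31.6.188    rgw.iclinux.com
```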

3. Manage buckets and upload/download data with s3cmd

Restore the ceph configuration; the client-related part of the config is as follows
[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_frontends = civetweb port=9900
rgw_dns_name = rgw.iclinux.com
[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = civetweb port=9900
rgw_dns_name = rgw.iclinux.com
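Note: the civetweb frontend is deprecated on newer Ceph releases in favour of beast, and was eventually removed. If the rgw service rejects the civetweb frontend, an equivalent stanza (an adaptation of the config above, port kept at this lab's 9900) would be:

```ini
[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_frontends = beast port=9900
rgw_dns_name = rgw.iclinux.com
```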
Restart the rgw service on each gateway host
systemctl restart [email protected]
netstat -ntlp
Install the s3cmd client on the deploy node
sudo apt-cache madison s3cmd
sudo apt install s3cmd
Verify
s3cmd --version
telnet rgw.iclinux.com 80
Configure s3cmd
s3cmd  --configure
New settings:
Access Key: N6FH9IFQXZY0PLTWDX76
Secret Key: E05PpMdNhYqxV21swGggVkAlIdPLrWtUjG0w70Ov
Default Region: US
S3 Endpoint: rgw.iclinux.com
DNS-style bucket+hostname:port template for accessing a bucket: rgw.iclinux.com/%(bucket)
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
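The answers above are persisted in ~/.s3cfg; the relevant fields (values are the example keys and endpoint from this lab) end up looking like:

```ini
# ~/.s3cfg (relevant fields only)
[default]
access_key = N6FH9IFQXZY0PLTWDX76
secret_key = E05PpMdNhYqxV21swGggVkAlIdPLrWtUjG0w70Ov
host_base = rgw.iclinux.com
host_bucket = rgw.iclinux.com/%(bucket)
use_https = False
```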
s3cmd basics
List all buckets and their objects
s3cmd la
Create buckets
s3cmd mb s3://magedu
s3cmd mb s3://css
s3cmd mb s3://images
Upload a test file
cd /tmp && curl -O https://img1.jcloudcs.com/portal/brand/2021/fl1-2.jpg
s3cmd  put fl1-2.jpg s3://images
s3cmd  put fl1-2.jpg s3://images/jpg
s3cmd ls s3://images
Download a file
mkdir /tmp/123
cd /tmp/123
s3cmd get s3://images/fl1-2.jpg /tmp/123
Delete a bucket
First delete everything in the bucket
s3cmd rm --recursive --force s3://images/
s3cmd rb s3://images

  

4. Dynamic/static content separation and a short-video case based on Nginx + RGW

RGW authorization, reference:
https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/example-bucket-policies.html
https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/API/API_Operations.html
Check permissions
s3cmd ls s3://
s3cmd mb s3://videos
s3cmd mb s3://images
s3cmd info s3://videos
Grant anonymous users read-only access
Write the JSON policy file
tee /tmp/mybucket-single_policy << "EOF"
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": ["arn:aws:s3:::images/*"]
  }]
}
EOF
s3cmd setpolicy /tmp/mybucket-single_policy s3://images
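A stray character in the policy file is the usual reason setpolicy fails, so it is cheap to syntax-check the JSON first. A small sketch; the file name here is illustrative, and python3's stdlib json.tool is assumed to be available:

```shell
# Write a sample read-only policy and syntax-check it before s3cmd setpolicy.
tee /tmp/policy-check.json > /dev/null << "EOF"
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": ["arn:aws:s3:::images/*"]
  }]
}
EOF
# json.tool exits non-zero on malformed JSON
python3 -m json.tool /tmp/policy-check.json > /dev/null && echo "policy JSON OK"
```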
Once this succeeds, anonymous users can access the objects:
http://rgw.iclinux.com/images/fl1-2.jpg
http://172.31.6.105:9900/images/fl1-2.jpg
Grant anonymous access to videos
tee /tmp/mybucket-single_policy_videos << "EOF"
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": ["arn:aws:s3:::videos/*"]
  }]
}
EOF
s3cmd setpolicy /tmp/mybucket-single_policy_videos s3://videos
cd /tmp && curl -o 123.mp4 "https://vod.300hu.com/4c1f7a6atransbjngwcloud1oss/5ff754f8381492940550189057/v.f30.mp4?source=1&h265=v.f1022_h265.mp4"
s3cmd put /tmp/123.mp4 s3://videos
Create bucket video
s3cmd mb s3://video
tee /tmp/mybucket-single_policy_video << "EOF"
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": ["arn:aws:s3:::video/*"]
  }]
}
EOF
s3cmd setpolicy /tmp/mybucket-single_policy_video s3://video
s3cmd put /tmp/123.mp4 s3://video
Install nginx on ubuntu 1804 (172.31.6.203)
apt update && apt install -y iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev gcc openssh-server iotop unzip zip make && apt-get clean
cd /usr/local/src && curl -O https://nginx.org/download/nginx-1.21.6.tar.gz &&
tar xzf nginx-1.21.6.tar.gz &&
cd /usr/local/src/nginx-1.21.6 &&
./configure --prefix=/apps/nginx \
--user=nginx \
--group=nginx \
--with-http_ssl_module \
--with-http_v2_module \
--with-http_realip_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--with-pcre \
--with-stream \
--with-stream_ssl_module \
--with-stream_realip_module &&
make && make install  &&
ln -sv /apps/nginx/sbin/nginx /usr/bin &&
rm -rf  /usr/local/src/nginx-1.21.6  &&
groupadd  -g 2088 nginx &&
useradd  -g nginx -s /usr/sbin/nologin -u 2088 nginx &&
chown -R nginx.nginx /apps/nginx
FILENAME="/apps/nginx/conf/nginx.conf"
if [[ -f ${FILENAME} ]];then
cp ${FILENAME}{,.$(date +%s).bak}
tee  ${FILENAME} << "EOF"
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    upstream videos {
        server 172.31.6.104:9900;
        server 172.31.6.105:9900;
    }
    upstream tomcat {
        server 172.31.6.202:8080;
        #server 172.31.6.105:9900;
    }

    server {
        listen       80;
        server_name  rgw.iclinux.com rgw.iclinux.net;
        proxy_redirect              off;
        proxy_set_header            Host $host;
        proxy_set_header            Remote_Addr $remote_addr;
        proxy_set_header            X-REAL-IP $remote_addr;
        proxy_set_header            X-Forwarded-For $proxy_add_x_forwarded_for;

        location / {
            root   html;
            index  index.html index.htm;
        }
        location ~* \.(mp4|avi)$ {
            proxy_pass http://videos;
        }
        location /app1 {
            proxy_pass http://tomcat;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
EOF
fi
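With the split above, only mp4/avi requests go to RGW. If image files should also be served from the object store (the images bucket was granted anonymous read earlier), a further location can reuse the same upstream. This is a hypothetical extension; the upstream name videos and the cache time are assumptions:

```
    location ~* \.(jpg|jpeg|png|gif)$ {
        proxy_pass http://videos;   # the RGW upstream defined above
        expires 1h;                 # let browsers cache static objects
    }
```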
Install tomcat to simulate the backend service (172.31.6.202)
yum install -y tomcat
systemctl restart tomcat
mkdir /usr/share/tomcat/webapps/app1
tee   /usr/share/tomcat/webapps/app1/index.jsp << "EOF"
java app1
EOF
systemctl  restart  tomcat
Verification URL: http://172.31.6.202:8080/app1/

5. Enable the ceph dashboard and monitor the cluster's health with prometheus

5.1 Enable the ceph dashboard

Deployed on the mgr nodes
Install on both nodes
apt update
apt-cache madison  ceph-mgr-dashboard
apt install -y ceph-mgr-dashboard
List the available modules from the deploy node
ceph mgr module ls | less
Enable the dashboard module
ceph mgr module enable dashboard
ceph config set mgr mgr/dashboard/ssl false   # SSL is usually terminated at nginx instead
ceph config set mgr mgr/dashboard/ceph-mgr1/server_addr 172.31.6.104
ceph config set mgr mgr/dashboard/ceph-mgr1/server_port 9009
If the port does not come up after a while, restart the mgr service
systemctl  restart  [email protected]
Entry point: http://172.31.6.104:9009/
Create the login account
echo "123456" > pass.txt
ceph dashboard set-login-credentials jack -i pass.txt
Enable TLS
ceph dashboard create-self-signed-cert
ceph config set mgr mgr/dashboard/ssl true
ceph mgr services

5.2 Monitor the ceph cluster's health with prometheus

Install node exporter on the 4 nodes

BASE_DIR="/apps"
install -d ${BASE_DIR}
tar xzf /usr/local/src/node_exporter-1.5.0.linux-amd64.tar.gz -C ${BASE_DIR}
ln -s /apps/node_exporter-1.5.0.linux-amd64/ /apps/node_exporter

tee  /etc/systemd/system/node-exporter.service << "EOF"
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
ExecStart=/apps/node_exporter/node_exporter

[Install]
WantedBy=multi-user.target
EOF

systemctl   daemon-reload &&  systemctl  restart node-exporter && systemctl  enable  node-exporter

Configure prometheus to scrape the node data
cp /etc/prometheus/prometheus.yml{,.bak}

tee -a /etc/prometheus/prometheus.yml << "EOF"
  - job_name: "ceph-node-data"
    # metrics_path: '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["172.31.6.106:9100","172.31.6.107:9100","172.31.6.108:9100","172.31.6.109:9100"]
EOF
promtool check config /etc/prometheus/prometheus.yml
systemctl restart prometheus.service

# Enable the ceph Prometheus plugin
Run on the deploy node
ceph mgr module enable prometheus
Verify
http://172.31.6.105:9283
http://172.31.6.104:9283
On the haproxy host (172.31.6.204), modify the haproxy config to load balance the two exporters
tee -a /etc/haproxy/haproxy.cfg  << "EOF"
listen ceph-prometheus-9283
  bind 172.31.6.188:9283
  mode tcp
  server rgw1 172.31.6.104:9283 check inter 2s fall 3 rise 3
  server rgw2 172.31.6.105:9283 check inter 2s fall 3 rise 3
EOF
systemctl restart haproxy
http://172.31.6.188:9283
Configure Prometheus to scrape the cluster metrics through the VIP

tee -a /etc/prometheus/prometheus.yml << "EOF"
  - job_name: "ceph-cluster-data"
    static_configs:
      - targets: ["172.31.6.188:9283"]
EOF

systemctl restart prometheus

### grafana templates
OSD monitoring: import template 17296 (older versions can use template 5336)
ceph pools: template 5342
ceph cluster: template 7056

  

 
