1、Master the characteristics and use cases of object storage
2、Deploy the radosgw storage gateway on two hosts for a highly available environment
RGW listens on port 7480 by default. Install the gateway package:
apt install -y radosgw
On CentOS the equivalent install command is:
yum install -y ceph-radosgw
Create the gateway from the deploy node and check cluster status:
ceph-deploy rgw create ceph-mgr2
ceph -s

Deploy the load balancer

Install keepalived:
apt install -y keepalived
find / -name "keep*"
cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/
tee /etc/keepalived/keepalived.conf << "EOF"
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.6.188 dev eth0 label eth0:0
    }
}
EOF
systemctl restart keepalived.service
systemctl enable keepalived.service
ip a
ping 172.31.6.188

Install haproxy:
apt install -y haproxy
tee -a /etc/haproxy/haproxy.cfg << "EOF"
listen ceph-rgw-7480
  bind 172.31.6.188:80
  mode tcp
  server rgw1 172.31.6.103:7480 check inter 2s fall 3 rise 3
  server rgw2 172.31.6.104:7480 check inter 2s fall 3 rise 3
EOF
haproxy -c -f /etc/haproxy/haproxy.cfg   # validate the configuration before restarting
systemctl restart haproxy.service
systemctl enable haproxy.service
netstat -ntlp
curl http://172.31.6.188
curl http://rgw.iclinux.com
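To confirm that the VIP actually fails over, stop keepalived on the MASTER and watch the address move. A minimal sketch, assuming a second keepalived node is configured as BACKUP with a lower priority (not shown in the sample above):

# on the MASTER node: release the VIP
systemctl stop keepalived.service
# on the BACKUP node: the VIP should now be bound to eth0
ip addr show eth0 | grep 172.31.6.188
# from any client: RGW should still answer through the VIP via haproxy
curl -I http://172.31.6.188
# restore the MASTER afterwards
systemctl start keepalived.service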
3、Manage buckets and upload/download data with s3cmd
Revert the ceph configuration; the client-related part of ceph.conf is as follows:

[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_frontends = civetweb port=9900
rgw_dns_name = rgw.iclinux.com

[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = civetweb port=9900
rgw_dns_name = rgw.iclinux.com

systemctl restart [email protected]
netstat -ntlp

Install the s3cmd client on the deploy node:
sudo apt-cache madison s3cmd
sudo apt install s3cmd
Verify:
s3cmd --version
telnet rgw.iclinux.com 80

Configure s3cmd (the access/secret keys belong to an RGW user created beforehand, e.g. with radosgw-admin user create):
s3cmd --configure
New settings:
  Access Key: N6FH9IFQXZY0PLTWDX76
  Secret Key: E05PpMdNhYqxV21swGggVkAlIdPLrWtUjG0w70Ov
  Default Region: US
  S3 Endpoint: rgw.iclinux.com
  DNS-style bucket+hostname:port template for accessing a bucket: rgw.iclinux.com/%(bucket)
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Basic s3cmd operations
List all buckets:
s3cmd la
Create buckets:
s3cmd mb s3://magedu
s3cmd mb s3://css
s3cmd mb s3://images
Upload a test file:
cd /tmp && curl -O https://img1.jcloudcs.com/portal/brand/2021/fl1-2.jpg
s3cmd put fl1-2.jpg s3://images
s3cmd put fl1-2.jpg s3://images/jpg
s3cmd ls s3://images
Download a file:
mkdir /tmp/123
cd /tmp/123
s3cmd get s3://images/fl1-2.jpg /tmp/123
Delete a bucket (its contents must be removed first):
s3cmd rm s3://images/*
s3cmd rb s3://images
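s3cmd can also mirror a whole local directory and report usage, which is handy for bulk uploads. A minimal sketch; the /tmp/web directory and the magedu bucket are only illustrative names:

mkdir -p /tmp/web && echo "hello" > /tmp/web/index.html
s3cmd sync /tmp/web/ s3://magedu/web/      # uploads only new or changed files
s3cmd du s3://magedu                       # total bytes stored in the bucket
s3cmd info s3://magedu/web/index.html      # object metadata and ACL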
4、Dynamic/static separation based on Nginx+RGW, with a short-video example
RGW authorization references:
https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/example-bucket-policies.html
https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/API/API_Operations.html

Check permissions:
s3cmd ls s3://
s3cmd mb s3://videos
s3cmd mb s3://images
s3cmd info s3://videos

Grant anonymous users read-only access. Write the JSON policy file:
tee /tmp/mybucket-single_policy << "EOF"
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": ["arn:aws:s3:::images/*"]
  }]
}
EOF
s3cmd setpolicy /tmp/mybucket-single_policy s3://images
Once the policy is applied, anonymous users can access the objects:
http://rgw.iclinux.com/images/fl1-2.jpg
http://172.31.6.105:9900/images/fl1-2.jpg

Grant anonymous access to the videos bucket:
tee /tmp/mybucket-single_policy_videos << "EOF"
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": ["arn:aws:s3:::videos/*"]
  }]
}
EOF
s3cmd setpolicy /tmp/mybucket-single_policy_videos s3://videos
cd /tmp && curl -o 123.mp4 "https://vod.300hu.com/4c1f7a6atransbjngwcloud1oss/5ff754f8381492940550189057/v.f30.mp4?source=1&h265=v.f1022_h265.mp4"
s3cmd put /tmp/123.mp4 s3://videos

Create the video bucket:
s3cmd mb s3://video
tee /tmp/mybucket-single_policy_video << "EOF"
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": ["arn:aws:s3:::video/*"]
  }]
}
EOF
s3cmd setpolicy /tmp/mybucket-single_policy_video s3://video
s3cmd put /tmp/123.mp4 s3://video

Install nginx (Ubuntu 18.04, the .203 host):
apt update && apt install -y iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev gcc openssh-server iotop unzip zip make && apt-get clean

cd /usr/local/src && curl -O https://nginx.org/download/nginx-1.21.6.tar.gz && \
  tar xzf nginx-1.21.6.tar.gz && cd /usr/local/src/nginx-1.21.6 && \
  ./configure --prefix=/apps/nginx --user=nginx --group=nginx \
    --with-http_ssl_module --with-http_v2_module --with-http_realip_module \
    --with-http_stub_status_module --with-http_gzip_static_module --with-pcre \
    --with-stream --with-stream_ssl_module --with-stream_realip_module && \
  make && make install && ln -sv /apps/nginx/sbin/nginx /usr/bin && \
  rm -rf /usr/local/src/nginx-1.21.6 && \
  groupadd -g 2088 nginx && useradd -g nginx -s /usr/sbin/nologin -u 2088 nginx && \
  chown -R nginx.nginx /apps/nginx

Write the nginx configuration (the existing file is backed up first):
FILENAME="/apps/nginx/conf/nginx.conf"
if [[ -f ${FILENAME} ]];then
  cp ${FILENAME}{,.$(date +%s).bak}
  tee ${FILENAME} << "EOF"
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    upstream videos {
        server 172.31.6.104:9900;
        server 172.31.6.105:9900;
    }
    upstream tomcat {
        server 172.31.6.202:8080;
        #server 172.31.6.105:9900;
    }

    server {
        listen       80;
        server_name  rgw.iclinux.com rgw.iclinux.net;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header Remote_Addr $remote_addr;
        proxy_set_header X-REAL-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        location / {
            root   html;
            index  index.html index.htm;
        }
        location ~* \.(mp4|avi)$ {
            proxy_pass http://videos;
        }
        location /app1 {
            proxy_pass http://tomcat;
        }
        error_page  500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
EOF
fi

Install tomcat to simulate the dynamic backend (172.31.6.202, CentOS):
yum install -y tomcat
systemctl restart tomcat
mkdir /usr/share/tomcat/webapps/app1
tee /usr/share/tomcat/webapps/app1/index.jsp << "EOF"
java app1
EOF
systemctl restart tomcat
Verification URL: http://172.31.6.202:8080/app1/
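With nginx in front, the dynamic/static split can be verified end to end. A minimal sketch, assuming rgw.iclinux.com resolves to the nginx host (e.g. via an /etc/hosts entry):

# .mp4 requests match the regex location and are proxied to the RGW upstream
curl -I http://rgw.iclinux.com/video/123.mp4
# /app1 is proxied to the tomcat backend
curl http://rgw.iclinux.com/app1/
# anything else is served from nginx's local html root
curl -I http://rgw.iclinux.com/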
5、Enable the ceph dashboard and monitor the ceph cluster with prometheus
5.1 Enable the ceph dashboard
Deploy on the mgr nodes (install on both):
apt update
apt-cache madison ceph-mgr-dashboard
apt install -y ceph-mgr-dashboard
List the available modules from the deploy node:
ceph mgr module ls | less
Enable the dashboard module:
ceph mgr module enable dashboard
ceph config set mgr mgr/dashboard/ssl false   # TLS is usually terminated at nginx instead
ceph config set mgr mgr/dashboard/ceph-mgr1/server_addr 172.31.6.104
ceph config set mgr mgr/dashboard/ceph-mgr1/server_port 9009
If the port does not come up after a while, restart the mgr service:
systemctl restart [email protected]
Access URL:
http://172.31.6.104:9009/
Create a login account:
echo "123456" > pass.txt
ceph dashboard set-login-credentials jack -i pass.txt
Enable a self-signed certificate:
ceph dashboard create-self-signed-cert
ceph config set mgr mgr/dashboard/ssl true
ceph mgr services
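After switching SSL on, it is worth confirming that the active mgr is actually serving the dashboard and that the port answers. A minimal sketch, assuming the dashboard is served by ceph-mgr1 as configured above:

curl -kI https://172.31.6.104:9009/    # -k: the certificate is self-signed
ss -ntlp | grep 9009                   # confirm the port is bound on the mgr node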
5.2 Monitor the ceph cluster with prometheus
Install node_exporter on the four node hosts:
BASE_DIR="/apps"
install -d ${BASE_DIR}
tar xzf /usr/local/src/node_exporter-1.5.0.linux-amd64.tar.gz -C ${BASE_DIR}
ln -s /apps/node_exporter-1.5.0.linux-amd64/ /apps/node_exporter
tee /etc/systemd/system/node-exporter.service << "EOF"
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
ExecStart=/apps/node_exporter/node_exporter

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl restart node-exporter && systemctl enable node-exporter

Configure prometheus to scrape the node exporters:
cp /etc/prometheus/prometheus.yml{,.bak}
tee -a /etc/prometheus/prometheus.yml << "EOF"
  - job_name: "ceph-node-data"
    # metrics_path: '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["172.31.6.106:9100","172.31.6.107:9100","172.31.6.108:9100","172.31.6.109:9100"]
EOF
promtool check config /etc/prometheus/prometheus.yml
systemctl restart prometheus.service

Enable ceph's prometheus module (run on the deploy node):
ceph mgr module enable prometheus
Verify:
http://172.31.6.105:9283
http://172.31.6.104:9283

On the haproxy node (172.31.6.204), extend the haproxy configuration to load-balance the mgr exporters:
tee -a /etc/haproxy/haproxy.cfg << "EOF"
listen ceph-prometheus-9283
  bind 172.31.6.188:9283
  mode tcp
  server rgw1 172.31.6.104:9283 check inter 2s fall 3 rise 3
  server rgw2 172.31.6.105:9283 check inter 2s fall 3 rise 3
EOF
systemctl restart haproxy.service
http://172.31.6.188:9283

Configure prometheus to scrape the cluster metrics through the VIP:
tee -a /etc/prometheus/prometheus.yml << "EOF"
  - job_name: "ceph-cluster-data"
    static_configs:
      - targets: ["172.31.6.188:9283"]
EOF
systemctl restart prometheus

Grafana templates:
OSD monitoring: import template 17296 (older versions can use 5336)
Ceph pools: template 5342
Ceph cluster: template 7056
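Once both scrape jobs are up, the collected data can be spot-checked from the Prometheus HTTP API before importing the Grafana templates. A minimal sketch, assuming the Prometheus server listens on its default port 9090 on the local host:

# cluster health as exported by the mgr prometheus module (0 = HEALTH_OK)
curl -s 'http://127.0.0.1:9090/api/v1/query?query=ceph_health_status'
# number of OSDs currently up
curl -s 'http://127.0.0.1:9090/api/v1/query?query=sum(ceph_osd_up)'
# free filesystem space reported by the node exporters
curl -s 'http://127.0.0.1:9090/api/v1/query?query=node_filesystem_avail_bytes'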