Geektime Ops Advanced Training Camp: Week 11 Assignment

1. Understand the characteristics and use cases of object storage

Object storage characteristics
Follows the AWS S3 API standard; no mounting required.
Data lives at the same level in a flat address space; applications identify each individual data object by a unique address.
Each object can carry metadata that helps with retrieval.
Data is read and written through a RESTful interface.
For example:
Introduction to the RadosGW object storage gateway:
RadosGW is one access implementation of object storage (OSS, Object Storage Service); it is also called the Ceph object gateway, RadosGW, or RGW.
It lets clients use standard object storage APIs to access a Ceph cluster, supporting both the AWS S3 and Swift APIs.
RadosGW storage characteristics
Data is stored as objects through the object storage gateway; besides the data itself, each object also contains the data's own metadata.
Objects are retrieved by object ID and can only be accessed through the API or a third-party client.
Objects are stored in a flat namespace; S3 calls this flat namespace a bucket, while Swift calls it a container.
Namespaces cannot be nested.
A bucket must be authorized before it can be accessed; one account can be granted access to multiple buckets, each with different permissions.
Easy horizontal scaling and fast data retrieval.
Client-side mounting is not supported, and the client must specify the object name when accessing data.
Best suited for write-once, read-many workloads.
Ceph uses buckets to store object data and isolate tenants: data is stored in buckets, and user permissions are granted per bucket, so a user can be given different permissions on different buckets to implement access control.
Bucket characteristics
Every object must belong to a bucket; bucket attributes can be set and modified to control region, access permissions, lifecycle, and so on.
The inside of a bucket is flat: there is no notion of filesystem directories, and every object belongs directly to its bucket.
Each user can own multiple buckets.
A bucket name must be globally unique within the OSS and cannot be changed after creation.
There is no limit on the number of objects a bucket can hold.
Reference:
S3 provides user, bucket, and object to represent users, buckets, and objects respectively. A bucket belongs to a user; per-user access permissions can be set on different bucket namespaces, and different users may be allowed to access the same bucket.
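For illustration, a minimal sketch of creating an S3-style user for RGW on the deploy node (the uid and display name here are arbitrary examples); the command prints the access_key and secret_key that S3 clients such as s3cmd will use:
radosgw-admin user create --uid="user1" --display-name="user1"
radosgw-admin user info --uid="user1"    # re-print the keys later if needed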

2. Deploy radosgw gateways on two hosts for a highly available environment

The gateway listens on port 7480 by default. Ubuntu install command:
apt install -y radosgw
CentOS install command:
yum install ceph-radosgw
ceph-deploy rgw create ceph-mgr2
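For the two-host setup this section targets, the other gateway presumably needs the same treatment (assuming ceph-mgr1 is the second gateway host, as the ceph.conf snippet in section 3 suggests):
ceph-deploy rgw create ceph-mgr1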
ceph -s
Deploy the load balancer
Install keepalived:
apt install -y keepalived
find / -name "keep*"
cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/
tee /etc/keepalived/keepalived.conf  << "EOF"
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.6.188 dev eth0 label eth0:0
    }
}
EOF
systemctl  restart keepalived.service
systemctl  enable keepalived.service
ip a
ping 172.31.6.188
Install haproxy:
apt install -y haproxy
tee -a /etc/haproxy/haproxy.cfg << "EOF"
listen ceph-rgw-7480
  bind 172.31.6.188:80
  mode tcp
  server rgw1 172.31.6.103:7480 check inter 2s fall 3 rise 3
  server rgw2 172.31.6.104:7480 check inter 2s fall 3 rise 3
EOF
haproxy -f /etc/haproxy/haproxy.cfg
systemctl restart haproxy.service
systemctl enable haproxy.service
netstat  -ntlp
curl http://172.31.6.188
curl http://rgw.iclinux.com
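If the gateway and the VIP are working, both curl commands should return an anonymous S3-style bucket listing roughly like this (exact output varies by Ceph version):
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID></Owner><Buckets></Buckets></ListAllMyBucketsResult>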

3. Manage buckets and upload/download data with s3cmd

Revert the Ceph configuration; the client-related part of the configuration is as follows:
[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_frontends = civetweb port=9900
rgw_dns_name = rgw.iclinux.com
[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = civetweb port=9900
rgw_dns_name = rgw.iclinux.com
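The edited ceph.conf has to reach the gateway hosts before the restart; a sketch assuming the usual ceph-deploy workflow (note that both gateways need a restart, not only the ceph-mgr2 instance shown below):
ceph-deploy --overwrite-conf config push ceph-mgr1 ceph-mgr2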
systemctl  restart [email protected]
netstat -ntlp
Install the s3cmd client on the deploy node:
sudo apt-cache madison s3cmd
sudo apt install s3cmd
Verify:
s3cmd --version
telnet rgw.iclinux.com 80
Configure s3cmd:
s3cmd  --configure
New settings:
Access Key: N6FH9IFQXZY0PLTWDX76
Secret Key: E05PpMdNhYqxV21swGggVkAlIdPLrWtUjG0w70Ov
Default Region: US
S3 Endpoint: rgw.iclinux.com
DNS-style bucket+hostname:port template for accessing a bucket: rgw.iclinux.com/%(bucket)
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
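These answers end up in ~/.s3cfg; the entries that matter for RGW look like this (values are the ones entered above):
access_key = N6FH9IFQXZY0PLTWDX76
host_base = rgw.iclinux.com
host_bucket = rgw.iclinux.com/%(bucket)
use_https = False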
Basic s3cmd operations
List all objects in all buckets (s3cmd ls lists only the buckets):
s3cmd la
Create buckets:
s3cmd mb s3://magedu
s3cmd mb s3://css
s3cmd mb s3://images
Upload test files:
cd /tmp && curl -O https://img1.jcloudcs.com/portal/brand/2021/fl1-2.jpg
s3cmd  put fl1-2.jpg s3://images
s3cmd  put fl1-2.jpg s3://images/jpg
s3cmd ls s3://images
Download a file:
mkdir /tmp/123
cd /tmp/123
s3cmd get s3://images/fl1-2.jpg /tmp/123
Delete a bucket
First delete everything in the bucket (s3cmd rm does not expand wildcards, so use --recursive):
s3cmd rm --recursive --force s3://images/
s3cmd rb s3://images
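Alternatively, assuming a reasonably recent s3cmd, a non-empty bucket can be removed in one step:
s3cmd rb --recursive s3://images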


4. Dynamic/static content separation with Nginx + RGW, and a short-video case

RGW authorization
https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/example-bucket-policies.html
https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/API/API_Operations.html
Check permissions:
s3cmd ls s3://
s3cmd mb s3://videos
s3cmd mb s3://images
s3cmd info s3://videos
Grant anonymous users read-only access
Write the JSON policy file:
tee /tmp/mybucket-single_policy << "EOF"
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": ["arn:aws:s3:::images/*"]
  }]
}
EOF
s3cmd setpolicy /tmp/mybucket-single_policy s3://images
Once the policy is applied, anonymous users can access the objects:
http://rgw.iclinux.com/images/fl1-2.jpg
http://172.31.6.105:9900/images/fl1-2.jpg
Grant anonymous access to the videos bucket
tee /tmp/mybucket-single_policy_videos << "EOF"
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": ["arn:aws:s3:::videos/*"]
  }]
}
EOF
s3cmd setpolicy /tmp/mybucket-single_policy_videos s3://videos
cd /tmp && curl -o 123.mp4 "https://vod.300hu.com/4c1f7a6atransbjngwcloud1oss/5ff754f8381492940550189057/v.f30.mp4?source=1&h265=v.f1022_h265.mp4"
s3cmd put /tmp/123.mp4 s3://videos
Create the video bucket:
s3cmd mb s3://video
tee /tmp/mybucket-single_policy_video << "EOF"
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": ["arn:aws:s3:::video/*"]
  }]
}
EOF
s3cmd setpolicy /tmp/mybucket-single_policy_video s3://video
s3cmd put /tmp/123.mp4 s3://video
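As with the images bucket, anonymous access can then be verified with a plain HTTP request:
curl -I http://rgw.iclinux.com/video/123.mp4    # expect HTTP 200 once the policy is active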
Install Nginx on Ubuntu 18.04 (host 172.31.6.203):
apt update && apt install -y iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev gcc openssh-server iotop unzip zip make && apt-get clean
cd /usr/local/src && curl -O https://nginx.org/download/nginx-1.21.6.tar.gz &&
tar xzf nginx-1.21.6.tar.gz &&
cd /usr/local/src/nginx-1.21.6 &&
./configure --prefix=/apps/nginx \
  --user=nginx \
  --group=nginx \
  --with-http_ssl_module \
  --with-http_v2_module \
  --with-http_realip_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --with-pcre \
  --with-stream \
  --with-stream_ssl_module \
  --with-stream_realip_module &&
make && make install  &&
ln -sv /apps/nginx/sbin/nginx /usr/bin &&
rm -rf  /usr/local/src/nginx-1.21.6  &&
groupadd  -g 2088 nginx &&
useradd  -g nginx -s /usr/sbin/nologin -u 2088 nginx &&
chown -R nginx.nginx /apps/nginx
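The source build ships no systemd unit; a minimal sketch of one (the paths follow the --prefix above; the unit name and options are assumptions, adjust as needed):
tee /etc/systemd/system/nginx.service << "EOF"
[Unit]
Description=nginx web server
After=network.target

[Service]
Type=forking
ExecStart=/apps/nginx/sbin/nginx
ExecReload=/apps/nginx/sbin/nginx -s reload
ExecStop=/apps/nginx/sbin/nginx -s stop

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload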
FILENAME="/apps/nginx/conf/nginx.conf"
if [[ -f ${FILENAME} ]];then
cp ${FILENAME}{,.$(date +%s).bak}
tee  ${FILENAME} << "EOF"
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    upstream videos {
        server 172.31.6.104:9900;
        server 172.31.6.105:9900;
    }
    upstream tomcat {
        server 172.31.6.202:8080;
        #server 172.31.6.105:9900;
    }
    server {
        listen       80;
        server_name  rgw.iclinux.com rgw.iclinux.net;
        proxy_redirect              off;
        proxy_set_header            Host $host;
        proxy_set_header            Remote_Addr $remote_addr;
        proxy_set_header            X-REAL-IP  $remote_addr;
        proxy_set_header            X-Forwarded-For $proxy_add_x_forwarded_for;
        location / {
            root   html;
            index  index.html index.htm;
        }
        location ~* \.(mp4|avi)$ {
            proxy_pass http://videos;
        }
        location /app1 {
            proxy_pass http://tomcat;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
EOF
fi
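Validate the new configuration and load it:
nginx -t
nginx -s reload    # or just `nginx` if it is not running yet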
Install Tomcat to simulate a backend service (172.31.6.202):
yum install -y tomcat
systemctl restart tomcat
mkdir /usr/share/tomcat/webapps/app1
tee   /usr/share/tomcat/webapps/app1/index.jsp << "EOF"
java app1
EOF
systemctl  restart  tomcat
Verification URL: http://172.31.6.202:8080/app1/
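The dynamic/static split can also be checked through nginx itself, assuming rgw.iclinux.com now resolves to the nginx host:
curl http://rgw.iclinux.com/app1/              # proxied to tomcat, returns "java app1"
curl -I http://rgw.iclinux.com/video/123.mp4   # matches the mp4|avi location and is proxied to RGW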

5. Enable the Ceph dashboard and monitor the Ceph cluster with Prometheus

5.1 Enable the Ceph dashboard

The dashboard runs on the mgr nodes.
Install the package on both nodes:
apt update
apt-cache madison  ceph-mgr-dashboard
apt install -y ceph-mgr-dashboard
List the available modules from the deploy node:
ceph mgr module ls | less
Enable the dashboard module:
ceph mgr module enable dashboard
ceph config set mgr mgr/dashboard/ssl false   # SSL is usually terminated at nginx instead
ceph config set mgr mgr/dashboard/ceph-mgr1/server_addr 172.31.6.104
ceph config set mgr mgr/dashboard/ceph-mgr1/server_port 9009
If the port does not come up for a long time, restart the mgr service:
systemctl  restart  [email protected]
Access URL: http://172.31.6.104:9009/
Create an account and password:
echo "123456" > pass.txt
ceph dashboard set-login-credentials jack -i pass.txt
Enable SSL with a self-signed certificate:
ceph dashboard create-self-signed-cert
ceph config set mgr mgr/dashboard/ssl true
ceph mgr services
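The output should now list the dashboard endpoint, roughly:
{
    "dashboard": "https://172.31.6.104:9009/"
}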

5.2 Monitor the Ceph cluster with Prometheus

Install node_exporter on the four nodes

BASE_DIR="/apps"
install -d ${BASE_DIR}
tar xzf /usr/local/src/node_exporter-1.5.0.linux-amd64.tar.gz -C ${BASE_DIR}
ln -s /apps/node_exporter-1.5.0.linux-amd64/ /apps/node_exporter

tee  /etc/systemd/system/node-exporter.service << "EOF"
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
ExecStart=/apps/node_exporter/node_exporter

[Install]
WantedBy=multi-user.target
EOF

systemctl   daemon-reload &&  systemctl  restart node-exporter && systemctl  enable  node-exporter
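A quick check that the exporter answers on its default port 9100:
curl -s http://localhost:9100/metrics | head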

Configure Prometheus to scrape the node data:
cp /etc/prometheus/prometheus.yml{,.bak}

tee -a /etc/prometheus/prometheus.yml << "EOF"
  - job_name: "ceph-node-data"
    # metrics_path: '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["172.31.6.106:9100","172.31.6.107:9100","172.31.6.108:9100","172.31.6.109:9100"]
EOF
promtool check config /etc/prometheus/prometheus.yml
systemctl restart prometheus.service

# Enable the Prometheus monitoring module in Ceph
Run on the deploy node:
ceph mgr module enable prometheus
Verify:
http://172.31.6.105:9283
http://172.31.6.104:9283
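The module serves standard Prometheus metrics prefixed with ceph_; a quick sanity check:
curl -s http://172.31.6.104:9283/metrics | grep "^ceph_" | head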
On the haproxy node (172.31.6.204), modify the haproxy configuration to load balance the two exporters:
tee -a /etc/haproxy/haproxy.cfg  << "EOF"
listen ceph-prometheus-9283
  bind 172.31.6.188:9283
  mode tcp
  server rgw1 172.31.6.104:9283 check inter 2s fall 3 rise 3
  server rgw2 172.31.6.105:9283 check inter 2s fall 3 rise 3
EOF
systemctl restart haproxy
http://172.31.6.188:9283
Configure Prometheus to scrape the aggregated cluster metrics:

tee -a /etc/prometheus/prometheus.yml << "EOF"
  - job_name: "ceph-cluster-data"
    static_configs:
      - targets: ["172.31.6.188:9283"]
EOF

systemctl restart prometheus
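Scrape health can be confirmed in the Prometheus UI under Status -> Targets, or via the HTTP API (assuming Prometheus listens on its default port 9090):
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'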

### Grafana templates
OSD monitoring: import template 17296 (older versions can use template 5336)
Ceph pools: template 5342
Ceph cluster: template 7056
