The Ceph cluster in the test environment reports a warning:
[root@node241 ceph]# ceph -s
cluster 3b37db44-f401-4409-b3bb-75585d21adfe
health HEALTH_WARN
too many PGs per OSD (652 > max 300)  <== the warning
monmap e1: 1 mons at {node241=192.168.2.41:6789/0}
election epoch 1, quorum 0 node241
osdmap e408: 5 osds: 5 up, 5 in
pgmap v23049: 1088 pgs, 16 pools, 256 MB data, 2889 objects
6100 MB used, 473 GB / 479 GB avail
1088 active+clean
Cause: the cluster has only a few OSDs, but a large number of pools were created during testing, and each pool claims a certain number of PGs (its pg_num). Ceph has a per-OSD default (apparently around 128 PGs per OSD), and it warns when the average PG count per OSD exceeds a threshold (here 300). The threshold can be adjusted, but setting PG counts too high or too low both hurt cluster performance. Since this is a test environment and the goal is a quick fix, the solution is simply to raise the warning threshold. Add the following to ceph.conf on the mon node:
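As a sanity check, the 652 figure can be reproduced from the `ceph -s` output above: it is roughly the total number of PG instances (PGs times the replication size) divided by the OSD count. The replication size of 3 below is an assumption, since the pool sizes are not shown in the output:

```shell
# Rough check of where 652 comes from (assumption: 3-way replication):
#   total PG instances = 1088 PGs * size 3 = 3264
#   PGs per OSD        = 3264 / 5 OSDs    ~= 652
echo $(( 1088 * 3 / 5 ))
# prints 652
```

This is why the warning fires: 652 is well above the 300-per-OSD threshold.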
[global]
.......
mon_pg_warn_max_per_osd = 1000
Then restart the service:
/etc/init.d/ceph restart mon
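The sysvinit script above matches the Hammer-era deployment shown here. On a cluster deployed under systemd, the equivalent restart would be something like the following (the mon name `node241` is taken from the monmap above), and depending on the Ceph version the setting can also be injected into the running mon without a restart:

```shell
# systemd-based deployments (assumption: mon unit named after the host):
systemctl restart ceph-mon@node241

# Alternatively, inject the option into the running mon (takes effect
# immediately, but does not persist across restarts unless it is also
# written to ceph.conf):
ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 1000'
```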
Verify:
[root@node241 ceph]# ceph -s
cluster 3b37db44-f401-4409-b3bb-75585d21adfe
health HEALTH_OK
monmap e1: 1 mons at {node241=192.168.2.41:6789/0}
election epoch 1, quorum 0 node241
osdmap e408: 5 osds: 5 up, 5 in
pgmap v23201: 1088 pgs, 16 pools, 256 MB data, 2889 objects
6101 MB used, 473 GB / 479 GB avail
1088 active+clean
The warning is resolved.