
Greenplum Database Cluster Installation and Deployment (Production Environment)

Hardware configuration:
16 IBM X3650 servers.
Per-node configuration: 2 × 8-core CPUs, 128 GB RAM, 16 × 900 GB disks, 10 GbE NIC.
One 10 GbE switch.

Installation requirements:
1 Master, 1 Standby Master, and 14 Segment compute nodes.

Installation steps:

  1. Master node installation
  2. Create the GP installation config files and set up ssh trust
  3. Disable the firewall and its boot-time autostart
  4. Disable SELinux
  5. Disk I/O scheduler
  6. Disk read-ahead configuration
  7. Language and character set
  8. Additional sysctl.conf settings
  9. User resource limits
  10. Time synchronization
  11. Enable NIC autostart
  12. Create the user (optional)
  13. Create directories and grant ownership
  14. Install the GP software on all nodes
  15. Initialize the GP database

1. Master node installation

First confirm that the directory structure is consistent across all nodes:

[root@XXXGPM01 db]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda6        20G  440M   18G   3% /
tmpfs            63G   76K   63G   1% /dev/shm
/dev/sda2       485M   39M  421M   9% /boot
/dev/sda1       200M  260K  200M   1% /boot/efi
/dev/sda4       5.5T  3.1G  5.5T   1% /data1
/dev/sdb1       5.8T   34M  5.8T   1% /data2
/dev/sda8       9.7G  150M  9.0G   2% /home
/dev/sda5        49G  3.1G   43G   7% /opt
/dev/sda9       9.7G  151M  9.0G   2% /tmp
/dev/sda7        20G  5.8G   13G  32% /usr
/dev/sda10      9.7G  338M  8.8G   4% /var

Append the following to the master node's /etc/hosts file (the 10 GbE NIC IP addresses and hostnames):

172.16.99.18 XXXGPM01
172.16.99.19 XXXGPM02
172.16.99.20 XXXGPD01
172.16.99.21 XXXGPD02
172.16.99.22 XXXGPD03
172.16.99.23 XXXGPD04
172.16.99.24 XXXGPD05
172.16.99.25 XXXGPD06
172.16.99.26 XXXGPD07
172.16.99.27 XXXGPD08
172.16.99.28 XXXGPD09
172.16.99.29 XXXGPD10
172.16.99.30 XXXGPD11
172.16.99.31 XXXGPD12
172.16.99.32 XXXGPD13
172.16.99.33 XXXGPD14
  1. Unpack the installation media
    GP installation package: /opt/db.zip
    cd /data1; unzip /opt/db.zip
  2. Installation directory
    mkdir -p /data1/gpinstall

On the XXXGPM01 node, unpack greenplum-db-4.x.x.x-build-5-RHEL5-x86_64.zip and run the resulting .bin file as root. Follow the prompts to complete the installation.

# /bin/bash greenplum-db-4.3.2.0-build-1-RHEL5-x86_64.bin

Choose a custom installation directory: /data1/gpinstall/greenplum-db-4.3.2.0

After the installation completes, the following directory structure should be present:

[root@XXXGPM01 gpinstall]# pwd
/data1/gpinstall
[root@XXXGPM01 gpinstall]# ls -lh
total 4.0K
lrwxrwxrwx.  1 root root   22 Jun  4 18:51 greenplum-db -> ./greenplum-db-4.3.2.0
drwxr-xr-x. 11 root root 4.0K Jun  4 18:51 greenplum-db-4.3.2.0

2. Create the GP installation config files and set up ssh trust

mkdir -p /data1/gpinstall/config

Create two configuration files:

1. allnodes.txt

XXXGPM01
XXXGPM02
XXXGPD01
XXXGPD02
XXXGPD03
XXXGPD04
XXXGPD05
XXXGPD06
XXXGPD07
XXXGPD08
XXXGPD09
XXXGPD10
XXXGPD11
XXXGPD12
XXXGPD13
XXXGPD14

2. nodes.txt

XXXGPD01
XXXGPD02
XXXGPD03
XXXGPD04
XXXGPD05
XXXGPD06
XXXGPD07
XXXGPD08
XXXGPD09
XXXGPD10
XXXGPD11
XXXGPD12
XXXGPD13
XXXGPD14

3. Set up ssh trust for the root user across all GP nodes:

source /data1/gpinstall/greenplum-db/greenplum_path.sh 
gpssh-exkeys -f /data1/gpinstall/config/allnodes.txt 

Verify that passwordless ssh now works between all nodes. From this point on, configuration changes can be pushed to every node conveniently.

First, a script to sync the /etc/hosts file to every node:

#!/bin/bash
#Usage: copy files to other hosts in cluster.
#ex: sh bulkcp.sh /etc/hosts /etc/hosts
#Author: AlfredZhao
#Version: 1.0.0
for((i=18;i<=33;i++))
do
 echo "scp $1 172.16.99.$i:$2"
 scp "$1" "172.16.99.$i:$2"
done

Running gpssh -f /data1/gpinstall/config/allnodes.txt -e '' and pressing Enter opens an interactive session on all nodes at once.
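
gpssh can also run a single command on every host non-interactively, which is handy for quick checks; a minimal example, assuming the key exchange above succeeded:

gpssh -f /data1/gpinstall/config/allnodes.txt -e 'hostname'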

3. Disable the firewall and its boot-time autostart

Check whether the services are stopped, then stop them and disable them at boot:

service iptables status
service ip6tables status
service libvirtd status

service iptables stop
service ip6tables stop
service libvirtd stop

chkconfig libvirtd off
chkconfig iptables off
chkconfig ip6tables off
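
With ssh trust in place, the same commands can be pushed to every node in one pass rather than host by host; a sketch reusing the gpssh host file:

gpssh -f /data1/gpinstall/config/allnodes.txt -e 'service iptables stop; chkconfig iptables off'
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'service ip6tables stop; chkconfig ip6tables off'
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'service libvirtd stop; chkconfig libvirtd off'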

4. Disable SELinux

Check the current SELinux status and configuration:

getenforce
more /etc/selinux/config | grep SELINUX=

Disable SELinux temporarily, then disable it permanently:

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
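
As above, this can be applied cluster-wide in one shot; a sketch:

gpssh -f /data1/gpinstall/config/allnodes.txt -e 'setenforce 0'
gpssh -f /data1/gpinstall/config/allnodes.txt -e "sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config"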

5. Disk I/O scheduler

Check the current disk I/O scheduler:
cat /sys/block/sd*/queue/scheduler

The vendor-recommended method:
Edit /boot/grub/menu.lst, find the kernel /vmlinuz-xxx line, and append elevator=deadline to the end.

Verification:
After a normal boot, cat /sys/block/*/queue/scheduler should show: noop anticipatory [deadline] cfq

Problem encountered in practice: this RHEL 6.5 system has no /boot/grub/menu.lst file (nl /boot/grub/menu.lst confirms the file is missing).
As a workaround, add the following two lines via rc.local:

echo deadline > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler

To keep the cluster configuration in sync quickly, write these two lines to a temporary file /tmp/gpconfig/1deadline.conf and push it to every node:
mkdir -p /tmp/gpconfig
Finally, run this command in the interactive gpssh session:
cat /tmp/gpconfig/1deadline.conf >> /etc/rc.d/rc.local
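
Putting the whole workaround together, the sequence looks roughly like this; executing the file once also applies the scheduler immediately, so no reboot is needed (a sketch, with 1deadline.conf and bulkcp.sh as defined above):

gpssh -f /data1/gpinstall/config/allnodes.txt -e 'mkdir -p /tmp/gpconfig'
cat > /tmp/gpconfig/1deadline.conf <<'EOF'
echo deadline > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler
EOF
sh bulkcp.sh /tmp/gpconfig/1deadline.conf /tmp/gpconfig/1deadline.conf
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'cat /tmp/gpconfig/1deadline.conf >> /etc/rc.d/rc.local'
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'sh /tmp/gpconfig/1deadline.conf'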

6. Disk read-ahead configuration

Check the disk read-ahead setting; every value should be 16384:

blockdev --getra /dev/sd*
blockdev --getra /dev/dm-*

Setting method:
Add two lines to /etc/rc.d/rc.local:

blockdev --setra 16384 /dev/sd* 
blockdev --setra 16384 /dev/dm-* 

Verification:
After a reboot, run
blockdev --getra /dev/sd*
blockdev --getra /dev/dm-*
Every value should be 16384.

As in the previous step, write the two blockdev lines to /tmp/gpconfig/2rclocal.conf, push the file out, and append it to /etc/rc.d/rc.local on every node from the interactive session:

sh bulkcp.sh /tmp/gpconfig/2rclocal.conf /tmp/gpconfig/2rclocal.conf
gpssh -f /data1/gpinstall/config/allnodes.txt -e ''
cat /tmp/gpconfig/2rclocal.conf >> /etc/rc.d/rc.local
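
To apply and verify the read-ahead value cluster-wide without waiting for a reboot, a quick sketch:

gpssh -f /data1/gpinstall/config/allnodes.txt -e 'blockdev --setra 16384 /dev/sd*'
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'blockdev --getra /dev/sd*'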

7. Language and character set

Check the language and character set:
echo $LANG
en_US.UTF-8
Setting method: specify it at OS installation time.
Verification: log in and run locale; the result should be en_US.UTF-8.

To change it after the fact:
sed -i 's/zh_CN.UTF-8/en_US.UTF-8/g' /etc/sysconfig/i18n

8. Additional sysctl.conf settings

Configure sysctl.conf, then apply it with
sysctl -p
Sometimes modprobe bridge is needed first.

Setting method: edit /etc/sysctl.conf
and append the following:

net.ipv4.ip_forward = 0  
net.ipv4.conf.default.accept_source_route = 0  
kernel.sysrq = 0  
kernel.core_uses_pid = 1  
net.ipv4.tcp_syncookies = 1  
kernel.msgmnb = 65536  
kernel.msgmax = 65536  
kernel.sem = 250 64000 100 2048  
kernel.shmmax = 5000000000  
kernel.shmmni = 40960  
kernel.shmall = 40000000000  
net.ipv4.tcp_tw_recycle=1  
net.ipv4.tcp_max_syn_backlog=4096  
net.core.netdev_max_backlog=10000  
vm.overcommit_memory=2  
net.ipv4.conf.all.arp_filter = 1  
net.ipv4.ip_local_port_range = 1025 65535 

Then run sysctl -p.

vi /tmp/gpconfig/4sysctl.conf
sh bulkcp.sh /tmp/gpconfig/4sysctl.conf /tmp/gpconfig/4sysctl.conf
gpssh -f /data1/gpinstall/config/allnodes.txt -e ''
cat /tmp/gpconfig/4sysctl.conf >> /etc/sysctl.conf

Verification: use sysctl to check that each of the parameters above matches.
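
A quick cluster-wide spot check of a few of the values (any of the parameters above can be substituted):

gpssh -f /data1/gpinstall/config/allnodes.txt -e 'sysctl kernel.shmmax kernel.sem vm.overcommit_memory'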

9. User resource limits

Check the user resource limits:
ulimit -a

Setting method:
Edit /etc/security/limits.d/90-nproc.conf and replace its contents with:

* soft nofile 1048576 
* hard nofile 1048576 
* soft nproc 1048576 
* hard nproc 1048576 

Sync the change to all nodes:

vi /tmp/gpconfig/5limits.conf
sh bulkcp.sh /tmp/gpconfig/5limits.conf /tmp/gpconfig/5limits.conf
nl /etc/security/limits.d/90-nproc.conf
cat /tmp/gpconfig/5limits.conf >> /etc/security/limits.d/90-nproc.conf

Verification: log in as any non-root user, run ulimit -a, and check that the parameters above match.
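
Since the limits apply to new login sessions, they can also be spot-checked cluster-wide over ssh (ulimit is a shell builtin); each value should come back as 1048576:

gpssh -f /data1/gpinstall/config/allnodes.txt -e 'ulimit -n; ulimit -u'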

10. Time synchronization

Check the time:
date

Set the time:
-- June 5, 2015, 09:17
date 060509172015
hwclock -w
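
With no NTP server in the picture, the same date can be pushed to every node in one shot; a rough sketch (this only gives coarse alignment, since the command reaches each host at a slightly different moment):

gpssh -f /data1/gpinstall/config/allnodes.txt -e 'date 060509172015; hwclock -w'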

11. Enable NIC autostart

Identify the NIC that needs to be enabled: eth2
ifconfig eth2 | grep 172
more /etc/sysconfig/network-scripts/ifcfg-eth2 | grep ONBOOT

Modify the eth2 configuration so the interface comes up at boot:
sed -i 's/ONBOOT=no/ONBOOT=yes/g' /etc/sysconfig/network-scripts/ifcfg-eth2

After all of the above changes are complete, reboot all the machines and verify that every step took effect.

12. Create the user (optional)

The gpadmin user can also be created by gpseginstall (step 14), but to standardize the uid and gid of the service account, the group and user are created here in advance.
Write a small script, distribute it to all nodes, then run it everywhere to create the group, user, and password in one pass (see the sketch after the script):

#!/bin/bash
#Usage: create gpadmin
#Author: AlfredZhao
#version: 1.0.0

groupadd gpadmin -g 3030
useradd gpadmin -u 3030 -g 3030
passwd gpadmin <<EOF
gpadminpwd
gpadminpwd
EOF
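
One way to distribute and run it, following the same pattern as the earlier steps (the file name createuser.sh is just an example):

sh bulkcp.sh /tmp/gpconfig/createuser.sh /tmp/gpconfig/createuser.sh
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'sh /tmp/gpconfig/createuser.sh'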

13. Create directories and grant ownership

Master directories:

gpssh -h XXXGPM01 -e 'mkdir -p /data1/master' 
gpssh -h XXXGPM02 -e 'mkdir -p /data1/master'

gpssh -h XXXGPM01 -e 'chown gpadmin:gpadmin /data1/master' 
gpssh -h XXXGPM02 -e 'chown gpadmin:gpadmin /data1/master'

Database data file storage directories:
Note that the host file here is nodes.txt; the master nodes do not need these directories.

gpssh -f /data1/gpinstall/config/nodes.txt -e ''

mkdir -p /data1/primary 
mkdir -p /data1/mirror     
mkdir -p /data2/primary 
mkdir -p /data2/mirror

chown gpadmin:gpadmin /data1/primary 
chown gpadmin:gpadmin /data1/mirror     
chown gpadmin:gpadmin /data2/primary 
chown gpadmin:gpadmin /data2/mirror
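
A quick ownership check across the segment hosts before moving on:

gpssh -f /data1/gpinstall/config/nodes.txt -e 'ls -ld /data1/primary /data1/mirror /data2/primary /data2/mirror'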

14. Install the GP software on all nodes

gpseginstall -f allnodes.txt -c csv
If the gpadmin user was not created manually in advance, specify -u and -p here to create the gpadmin user and set its password on all nodes in one step.

[root@XXXGPM01 data1]# gpseginstall -f /data1/gpinstall/config/allnodes.txt -c csv
20150605:00:01:07:005656 gpseginstall:XXXGPM01:root-[INFO]:-Installation Info:
link_name greenplum-db
binary_path /data1/gpinstall/greenplum-db-4.3.2.0
binary_dir_location /data1/gpinstall
binary_dir_name greenplum-db-4.3.2.0
20150605:00:01:07:005656 gpseginstall:XXXGPM01:root-[INFO]:-check cluster password access
20150605:00:01:11:005656 gpseginstall:XXXGPM01:root-[INFO]:-de-duplicate hostnames
20150605:00:01:11:005656 gpseginstall:XXXGPM01:root-[INFO]:-master hostname: XXXGPM01
20150605:00:01:13:005656 gpseginstall:XXXGPM01:root-[INFO]:-chown -R gpadmin:gpadmin /data1/gpinstall/greenplum-db
20150605:00:01:13:005656 gpseginstall:XXXGPM01:root-[INFO]:-chown -R gpadmin:gpadmin /data1/gpinstall/greenplum-db-4.3.2.0
20150605:00:01:14:005656 gpseginstall:XXXGPM01:root-[INFO]:-rm -f /data1/gpinstall/greenplum-db-4.3.2.0.tar; rm -f /data1/gpinstall/greenplum-db-4.3.2.0.tar.gz
20150605:00:01:14:005656 gpseginstall:XXXGPM01:root-[INFO]:-cd /data1/gpinstall; tar cf greenplum-db-4.3.2.0.tar greenplum-db-4.3.2.0
20150605:00:01:17:005656 gpseginstall:XXXGPM01:root-[INFO]:-gzip /data1/gpinstall/greenplum-db-4.3.2.0.tar
20150605:00:01:36:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: mkdir -p /data1/gpinstall
20150605:00:01:37:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: rm -rf /data1/gpinstall/greenplum-db-4.3.2.0
20150605:00:01:39:005656 gpseginstall:XXXGPM01:root-[INFO]:-scp software to remote location
20150605:00:01:41:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: gzip -f -d /data1/gpinstall/greenplum-db-4.3.2.0.tar.gz
20150605:00:01:46:005656 gpseginstall:XXXGPM01:root-[INFO]:-md5 check on remote location
20150605:00:01:48:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: cd /data1/gpinstall; tar xf greenplum-db-4.3.2.0.tar
20150605:00:01:51:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: rm -f /data1/gpinstall/greenplum-db-4.3.2.0.tar
20150605:00:01:52:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: cd /data1/gpinstall; rm -f greenplum-db; ln -fs greenplum-db-4.3.2.0 greenplum-db
20150605:00:01:54:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /data1/gpinstall/greenplum-db
20150605:00:01:55:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /data1/gpinstall/greenplum-db-4.3.2.0
20150605:00:01:57:005656 gpseginstall:XXXGPM01:root-[INFO]:-rm -f /data1/gpinstall/greenplum-db-4.3.2.0.tar.gz
20150605:00:01:57:005656 gpseginstall:XXXGPM01:root-[INFO]:-version string on master: gpssh version 4.3.2.0 build 1
20150605:00:01:57:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: . /data1/gpinstall/greenplum-db/./greenplum_path.sh; /data1/gpinstall/greenplum-db/./bin/gpssh --version
20150605:00:01:59:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: . /data1/gpinstall/greenplum-db-4.3.2.0/greenplum_path.sh; /data1/gpinstall/greenplum-db-4.3.2.0/bin/gpssh --version
20150605:00:02:06:005656 gpseginstall:XXXGPM01:root-[INFO]:-SUCCESS -- Requested commands completed

15. Initialize the GP database

The following work is done on the Master node, logged in as the gpadmin user.
cd /data1/gpinstall/config
Create a gpinitsystem_config file with the following content:

ARRAY_NAME="XXXGPDB"  
SEG_PREFIX=gpseg  
PORT_BASE=40000  
declare -a DATA_DIRECTORY=(/data1/primary /data1/primary /data1/primary /data1/primary /data1/primary /data1/primary /data2/primary /data2/primary /data2/primary /data2/primary /data2/primary /data2/primary)  
MASTER_HOSTNAME=XXXGPM01 
MASTER_DIRECTORY=/data1/master  
MASTER_PORT=5432  
TRUSTED_SHELL=ssh 
CHECK_POINT_SEGMENTS=256 
ENCODING=UNICODE 
MIRROR_PORT_BASE=50000  
REPLICATION_PORT_BASE=41000  
MIRROR_REPLICATION_PORT_BASE=51000  
declare -a MIRROR_DATA_DIRECTORY=(/data1/mirror /data1/mirror /data1/mirror /data1/mirror /data1/mirror /data1/mirror /data2/mirror /data2/mirror /data2/mirror /data2/mirror /data2/mirror /data2/mirror)   

15.1 Set up ssh trust between all nodes for the gpadmin user:

gpssh-exkeys -f /data1/gpinstall/config/allnodes.txt

15.2 Initialize the database:

gpinitsystem -c gpinitsystem_config -h nodes.txt -B 8

15.3 Configure environment variables:

source /data1/gpinstall/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data1/master/gpseg-1
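
To make these settings survive re-login, a common practice is to append them to gpadmin's shell profile on both master hosts; a sketch:

cat >> /home/gpadmin/.bashrc <<'EOF'
source /data1/gpinstall/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data1/master/gpseg-1
EOF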

15.4 Create the standby master:

gpinitstandby -s XXXGPM02

15.5 Adjust database parameters:

The database must be restarted after the following parameters are changed.

How to change: run gpconfig -c <parameter name> -v <value> -m <master value>

How to check: after restarting the database, run gpconfig -s <parameter name>

The parameters, values, and master values used here:

gpconfig -c shared_buffers -v 128MB -m 128MB
gpconfig -c gp_vmem_protect_limit -v 15360 -m 15360
gpconfig -c max_connections -v 1000 -m 200
gpconfig --skipvalidation -c wal_send_client_timeout -v 60s -m 60s 

[gpadmin@XXXGPM01 greenplum-db]$  gpconfig -c shared_buffers -v 128MB -m 128MB
20150605:14:50:53:017038 gpconfig:XXXGPM01:gpadmin-[INFO]:-completed successfully
[gpadmin@XXXGPM01 greenplum-db]$ gpconfig -c gp_vmem_protect_limit -v 15360 -m 15360
20150605:14:52:51:017179 gpconfig:XXXGPM01:gpadmin-[INFO]:-completed successfully
[gpadmin@XXXGPM01 greenplum-db]$ gpconfig -c max_connections -v 1000 -m 200
20150605:14:53:08:017271 gpconfig:XXXGPM01:gpadmin-[INFO]:-completed successfully
[gpadmin@XXXGPM01 greenplum-db]$ gpconfig --skipvalidation -c wal_send_client_timeout -v 60s -m 60s 
20150605:14:53:23:017363 gpconfig:XXXGPM01:gpadmin-[INFO]:-completed successfully

15.6 Stop the database

[gpadmin@XXXGPM01 greenplum-db]$ gpstop -a
20150605:14:54:40:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Starting gpstop with args: -a
20150605:14:54:40:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Gathering information and validating the environment...
20150605:14:54:40:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20150605:14:54:40:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Obtaining Segment details from master...
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.2.0 build 1'
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-There are 0 connections to the database
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Master host=XXXGPM01
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Master segment instance directory=/data1/master/gpseg-1
20150605:14:54:43:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Stopping master standby host XXXGPM02 mode=fast
20150605:14:54:45:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Successfully shutdown standby process on XXXGPM02
20150605:14:54:45:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Commencing parallel primary segment instance shutdown, please wait...
........................... 
20150605:14:55:12:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Commencing parallel mirror segment instance shutdown, please wait...
................ 
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-   Segments stopped successfully      = 336
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-   Segments with errors during stop   = 0
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Successfully shutdown 336 of 336 segment instances 
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Database successfully shutdown with no errors reported

15.7 Start the database:

[gpadmin@XXXGPM01 greenplum-db]$ gpstart -a
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Starting gpstart with args: -a
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Gathering information and validating the environment...
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.2.0 build 1'
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Starting Master instance in admin mode
20150605:14:56:40:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20150605:14:56:40:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Obtaining Segment details from master...
20150605:14:56:42:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Setting new master era
20150605:14:56:42:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Master Started...
20150605:14:56:42:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Shutting down master
20150605:14:56:46:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait...
.................. 
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Process results...
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-   Successful segment starts                                            = 336
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Successfully started 336 of 336 segment instances 
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Starting Master instance XXXGPM01 directory /data1/master/gpseg-1 
20150605:14:57:05:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Command pg_ctl reports Master XXXGPM01 instance active
20150605:14:57:06:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Starting standby master
20150605:14:57:06:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Checking if standby master is running on host: XXXGPM02  in directory: /data1/master/gpseg-1
20150605:14:57:09:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Database successfully started
[gpadmin@XXXGPM01 greenplum-db]$ 

15.8 Check the database parameters

gpconfig -s shared_buffers
gpconfig -s gp_vmem_protect_limit
gpconfig -s max_connections
gpconfig -s wal_send_client_timeout

[gpadmin@XXXGPM01 greenplum-db]$ gpconfig -s shared_buffers
20150605:11:37:40:011591 gpconfig:XXXGPM01:gpadmin-[ERROR]:-Failed to retrieve GUC information: error 'ERROR:  function gp_toolkit.gp_param_setting(unknown) does not exist
LINE 1: select * from gp_toolkit.gp_param_setting('shared_buffers')
                      ^
HINT:  No function matches the given name and argument types. You may need to add explicit type casts.
' in 'select * from gp_toolkit.gp_param_setting('shared_buffers')'

Since gpconfig -s failed here (the gp_toolkit.gp_param_setting function is missing in this build), verify the parameters directly in psql instead:

psql postgres

show shared_buffers;
show gp_vmem_protect_limit;
show max_connections;
show wal_send_client_timeout;

15.9 Create the business database XXX

Project convention: every database created from now on must be created from a psql session logged into the postgres database.

psql postgres
postgres=# create database XXX;
CREATE DATABASE

psql XXX

15.10 Adjust the connection control parameters

Edit the file $MASTER_DATA_DIRECTORY/pg_hba.conf
and append the line:
host all all 0/0 md5
Edit $MASTER_DATA_DIRECTORY/pg_hba.conf on the standby master as well
and append the same line:
host all all 0/0 md5
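
The change can take effect without a full restart; gpstop -u reloads pg_hba.conf (and runtime parameters from the master postgresql.conf):

gpstop -u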

15.11 Check the database status (gpstate)

[gpadmin@XXXGPM01 ~]$ gpstate
20150605:13:48:23:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-Starting gpstate with args: 
20150605:13:48:24:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.2.0 build 1'
20150605:13:48:24:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.2.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Jul 12 2014 17:02:40'
20150605:13:48:24:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-Obtaining Segment details from master...
20150605:13:48:25:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-Gathering data from segments...
................ 
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-Greenplum instance status summary
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Master instance                                           = Active
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Master standby                                            = XXXGPM02
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Standby master state                                      = Standby host passive
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total segment instance count from metadata                = 336
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Primary Segment Status
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total primary segments                                    = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total primary segment valid (at master)                   = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total primary segment failures (at master)                = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number postmaster processes found                   = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Mirror Segment Status
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total mirror segments                                     = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total mirror segment valid (at master)                    = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total mirror segment failures (at master)                 = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number postmaster processes found                   = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number mirror segments acting as primary segments   = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number mirror segments acting as mirror segments    = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
[gpadmin@XXXGPM01 ~]$ 

Reposted from: http://www.cnblogs.com/jyzhao/p/4555171.html
