Building a high-availability cluster with the RHCS suite and nginx

Environment preparation

server1 acts as the primary node
server2 acts as the backup node
the physical host acts as the fence device
server1: 172.25.24.1

server2: 172.25.24.2

VIP: 172.25.24.100

Install luci and ricci on server1 (server1 serves both as a cluster node and as the management host); install ricci on server2.
luci is the software used to manage the nodes.

1: set up SSH mutual trust between the nodes

2: set the hostnames to server1 and server2; the two cluster hostnames must differ, otherwise the nodes will fail to come back up when they reboot after the cluster is created

3: synchronize time with NTP (a sketch of these three steps follows)
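
A minimal sketch of these three prerequisites (the key is pushed in both directions for full mutual trust; 172.25.24.250, the lab host, is assumed here as the time source):

[root@server1 ~]# ssh-keygen                       ## generate a key pair, accepting the defaults
[root@server1 ~]# ssh-copy-id root@172.25.24.2     ## passwordless ssh to server2; repeat from server2 towards server1
[root@server1 ~]# hostname server1                 ## takes effect immediately
[root@server1 ~]# vim /etc/sysconfig/network       ## set HOSTNAME=server1 so it survives a reboot
[root@server1 ~]# ntpdate 172.25.24.250            ## one-shot time sync against the assumed NTP source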

The installation for this lab now begins:

1. Both node VMs need the high-availability yum repositories, configured as follows:
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.24.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.24.250/rhel6.5/HighAvailability
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.24.250/rhel6.5/LoadBalancer
gpgcheck=0

[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.24.250/rhel6.5/ScalableFileSystem
gpgcheck=0

[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.24.250/rhel6.5/ResilientStorage
gpgcheck=0
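
A quick sanity check, not part of the original steps, confirms that all five repositories resolve on each node:

[root@server1 ~]# yum clean all
[root@server1 ~]# yum repolist    ## rhel-source, HighAvailability, LoadBalancer, ScalableFileSystem and ResilientStorage should all be listed
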
2. server1 (node 1):
[root@server1 ~]# yum install ricci luci -y
[root@server1 ~]# chkconfig luci on    ## start at boot
[root@server1 ~]# chkconfig ricci on   ## start at boot
[root@server1 ~]# passwd ricci     ## set a password for the ricci user on both nodes; it may differ from the root password
[root@server1 ~]# /etc/init.d/ricci start ## start the ricci service
[root@server1 ~]# /etc/init.d/luci start   ## start the luci node-management service
3. server2 (node 2):
[root@server2 ~]# yum install ricci -y  ## install ricci
[root@server2 ~]# chkconfig ricci on   ## start at boot
[root@server2 ~]# /etc/init.d/ricci start  ## start the ricci service
[root@server2 ~]# passwd ricci        ## set the ricci user's password

4. Log in from a browser; luci listens on port 8084: https://172.25.24.1:8084

[screenshot: luci login page]
Log in here as a normal system user.

After logging in, click Manage Clusters -> Create to create the cluster and add the cluster nodes.

[screenshot: Create New Cluster dialog]

Click Create Cluster. Doing so triggers the following actions:

a. If you selected "Download Packages", the cluster packages are downloaded onto the nodes.

b. The cluster software is installed on the nodes (or the presence of the correct packages is verified).

c. The cluster configuration file is updated and propagated to every node in the cluster.

d. The added nodes join the cluster. A message shows that the cluster is being created; once the cluster is ready, the display shows the status of the newly created cluster.

Now click Create Cluster and the screen below appears; when the nodes finish rebooting, they have been added.

[screenshots: cluster creation progress and resulting node list]
Note: in a healthy state, the node names under Nodes and the Cluster Name are displayed in green; anything abnormal is displayed in red.

Test from either VM:

[screenshot: cluster status check]
The nodes were added successfully.
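
The same can be confirmed from the shell; clustat is installed with the cluster suite and appears again at the end of this article:

[root@server1 ~]# clustat            ## both members should be Online and Member Status should read Quorate
[root@server1 ~]# cman_tool status   ## quorum details and vote counts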

5. Add a fence device to the cluster

If one node in the cluster loses communication, the remaining nodes must be able to cut the failed node off from any shared resources it was using (shared storage, for example). The failed node cannot do this itself, because at that point it may no longer be responding (it may be hung), so an external mechanism is needed. This mechanism is called fencing, carried out by a fence agent.

Without a fence device there is no way to know whether a node that dropped off the cluster has actually released its resources. If no fence agent (or device) is configured, the cluster may wrongly assume the node has released its resources, which can cause data corruption and data loss. Without a fence device, data integrity cannot be guaranteed and the cluster configuration is unsupported.

While a fencing action is in progress, no other cluster operation is allowed to run; this includes failing over services and taking new locks on a GFS or GFS2 file system. The cluster cannot return to normal operation until the fencing action completes, or until the fenced node has rebooted and rejoined the cluster.

A fence agent (or device) is an external device that the cluster can use to block a failed node's access to shared storage (or to hard-reboot that node).

[root@xiaoqin Desktop]# yum install fence-virtd-multicast fence-virtd fence-virtd-libvirt -y   ## fence-virtd-multicast provides the multicast listener, fence-virtd is the fence daemon itself, and fence-virtd-libvirt is the libvirt backend that turns libvirt operations into fence actions
## 1. Configure the fence daemon; prompts left blank accept the default shown in brackets, and only the typed values are changed
[root@xiaoqin Desktop]# fence_virtd -c  ## interactive configuration
#Module search path [/usr/lib64/fence-virt]: 

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

#Listener module [multicast]: 

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

#Multicast IP Address [225.0.0.12]: 

Using ipv4 as family.

#Multicast IP Port [1229]: 

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

#Interface [virbr0]: br0

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

#Key File [/etc/cluster/fence_xvm.key]: 

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

#Backend module [libvirt]: 

Configuration complete.
## The following configuration file is produced:
=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }

}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "br0";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }

}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y


## Create the directory that holds the key; on a first run it does not exist yet and must be created by hand
[root@xiaoqin ~]# mkdir /etc/cluster
[root@xiaoqin ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000123739 s, 1.0 MB/s
[root@xiaoqin ~]# cd /etc/cluster/
[root@xiaoqin cluster]# ls
fence_xvm.key
[root@xiaoqin cluster]# file fence_xvm.key 
fence_xvm.key: data
[root@xiaoqin cluster]# netstat -anulp | grep fence_virtd
[root@xiaoqin cluster]# systemctl start fence_virtd.service  ## start the fence daemon on the physical host
[root@xiaoqin cluster]# netstat -anulp | grep fence_virtd
udp        0      0 0.0.0.0:1229            0.0.0.0:*                           6910/fence_virtd    
Note: fence_xvm.key authenticates fencing requests against the cluster nodes; fencing only succeeds for nodes that hold this key.
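
The original does not show the copy itself, but since every node needs the same key, something like the following is required (/etc/cluster already exists on the nodes once the cluster has been created):

[root@xiaoqin cluster]# scp /etc/cluster/fence_xvm.key root@172.25.24.1:/etc/cluster/
[root@xiaoqin cluster]# scp /etc/cluster/fence_xvm.key root@172.25.24.2:/etc/cluster/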


The fence preparation is complete; now create the fence device in the luci web interface.

[screenshots: Fence Devices -> Add, fence_xvm device named westos1]

Check on server1 or server2 that the fence device was added successfully:

[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="2" name="xiaoqin">
    <clusternodes>

        <clusternode name="server1" nodeid="1"/>
        <clusternode name="server2" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="westos1"/>  ## the fence device just added
    </fencedevices>
</cluster>
Next, add the fence method on each node; the steps for server1 and server2 are identical.

[screenshots: per-node fence method configuration in luci]
What if a cluster node's name does not match the real server's hostname? In this lab they happen to match.
The virtual machine's name is its libvirt domain name, while the cluster uses the hostname; the host's UUID can be used as the mapping, tying each cluster node name to the right machine.

The steps on server2 are the same as on server1.

Check the configuration again:

[root@server1 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="6" name="xiaoqin">  ## cluster name
    <clusternodes>
        <clusternode name="server1" nodeid="1">    ## server1 is node 1
            <fence>
                <method name="fence1">   
                    <device domain="dc2c88a2-32f8-462d-bd6e-140801bb8d45" name="westos1"/>   ## the domain UUID identifies the VM unambiguously
                </method>
            </fence>
        </clusternode>
        <clusternode name="server2" nodeid="2">
            <fence>
                <method name="fence2">
                    <device domain="dc2c88a2-32f8-462d-bd6e-140801bb8d45" name="westos1"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="westos1"/>
    </fencedevices>
</cluster>
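
With the key in place and the per-node methods defined, fencing can be exercised by hand; fence_node ships with cman and acts on the cluster.conf entries above. Expect the target node to be power-cycled:

[root@server1 ~]# fence_node server2   ## server2 should be hard-rebooted through fence_virtd on the physical host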

Create a Failover Domain

A failover domain restricts which nodes a service and its resources may move between. The steps below create one failover domain.

Click Failover Domains -> Add

[screenshot: Add Failover Domain dialog]
Prioritized: whether member priorities are honored inside this failover domain; enabled here.

Restricted: whether service failover is restricted to the members of this domain; enabled here.

No Failback: controls failback. Left unchecked, failback is in effect: when the primary node fails, the backup node automatically takes over its services and resources, and once the primary recovers, the services and resources move back to it from the backup.

Then, in the Member checkboxes, select the nodes joining this domain, here server1 and server2, and under "Priority" set server1 to 1 and server2 to 2.

Note that the node with "priority" 1 has the highest priority; the larger the number, the lower the node's priority.

When everything is set, click Submit to create the failover domain.

[screenshot: failover domain created]

The cluster nodes, fence device, and failover domain are now in place; next, add Resources.

A web service serves as the example.

Click Resources -> Add and add the VIP (172.25.24.100)

[screenshot: Add IP Address resource]
Monitor Link: monitor the link state
Disable Updates to Static Routes: whether to stop updating static routes
Number of Seconds to Sleep After Removing an IP Address: how long to wait after the address is removed

Click Resources -> Add and add the script.

Note: nginx does not ship with an init script, so you have to write one yourself; it can be adapted from the script in the source package.
In RHCS, script-type resources are scripts living under /etc/init.d/.
[screenshot: Add Script resource pointing at /etc/init.d/nginx]

After adding the VIP and the script, click Resources again:

[screenshot: resource list showing the VIP and the script]

At this point the script has to be put on both server1 and server2:

[root@server1 init.d]# vim nginx 
[root@server1 init.d]# /etc/init.d/nginx status
nginx is stopped
[root@server1 init.d]# cat nginx 
#!/bin/bash
# it is v.0.0.2 version.
# chkconfig: - 85 15
# description: Nginx is a high-performance web and proxy server.
#              It has a lot of features, but it's not for everyone.
# processname: nginx
# pidfile: /var/run/nginx.pid
# config: /usr/local/nginx/conf/nginx.conf

nginxd=/usr/local/nginx/sbin/nginx
nginx_config=/usr/local/nginx/conf/nginx.conf
nginx_pid=/var/run/nginx.pid
RETVAL=0
prog="nginx"

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ ${NETWORKING} = "no" ] && exit 0
[ -x $nginxd ] || exit 0

# Start nginx daemons functions.
start() {
    if [ -e $nginx_pid ]; then
        echo "nginx already running...."
        exit 1
    fi
    echo -n $"Starting $prog: "
    daemon $nginxd -c ${nginx_config}
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch /var/lock/subsys/nginx
    return $RETVAL
}

# Stop nginx daemons functions.
stop() {
    echo -n $"Stopping $prog: "
    killproc $nginxd
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f /var/lock/subsys/nginx /var/run/nginx.pid
}

# Reload nginx service functions.
reload() {
    echo -n $"Reloading $prog: "
    killproc $nginxd -HUP
    RETVAL=$?
    echo
}

# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    reload)
        reload
        ;;
    restart)
        stop
        start
        ;;
    status)
        status $prog
        RETVAL=$?
        ;;
    *)
        echo $"Usage: $prog {start|stop|restart|reload|status|help}"
        exit 1
esac

exit $RETVAL
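
The script also has to be executable and identical on both nodes before rgmanager can call it; a sketch of those remaining steps, which the original does not show:

[root@server1 init.d]# chmod +x /etc/init.d/nginx
[root@server1 init.d]# scp /etc/init.d/nginx root@172.25.24.2:/etc/init.d/
[root@server2 ~]# chmod +x /etc/init.d/nginx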

Select Service Groups and add a group

The resources are ready; now define the service group (a resource must be inside a group to run). Resources defined inside a group are private to that group.

[screenshot: Add Service Group dialog]
Automatically Start This Service: start the service automatically
Run Exclusive: run the service exclusively, on a node running no other service
Failover Domain: the failover domain (pick the one created earlier; it may also be left unset)
Recovery Policy: what to do when the service fails

After creating the group, add the resources to it in order; the order in which you add them is the order in which they start (here: the VIP first, then the nginx script).

[screenshots: adding the VIP and the script to the service group]
Maximum Number of Failures: the maximum failure count
Failure Expire Time: how long before the failure count expires
Maximum Number of Restarts: the maximum restart count
Restart Expire Time (seconds): how long before the restart count expires

[screenshot: completed service group]

Test whether the high-availability cluster works:

[root@server1 html]# pwd
/usr/local/nginx/html
[root@server1 html]# cat westos.html 
<h1>server1</h1>
[root@server1 html]# 


[root@server2 html]# pwd 
/usr/local/nginx/html
[root@server2 html]# cat westos.html 
<h1>server2</h1>
[root@server2 html]# 

[screenshot: browser showing the page served via the VIP]

[root@server1 html]# /etc/init.d/nginx stop
Stopping nginx:                                            [  OK  ]
[root@server1 html]# 

After stopping nginx on server1, refresh the page; the service fails over to server2.

[screenshots: the page now served by server2]
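
Failover can also be driven by hand with clusvcadm from rgmanager; the service group name below is assumed to be "nginx", i.e. whatever name was given in Service Groups:

[root@server1 ~]# clusvcadm -r nginx -m server2   ## relocate the service group to server2
[root@server1 ~]# clustat                         ## the Owner column should now show server2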

Extending the high-availability cluster

1. Load balancing behind the HA cluster

Name resolution on the physical host:
172.25.24.100 www.westos.org
server1 and server2 are the cluster nodes and both must be configured; server3 and server4 are the back ends.

[root@server1 conf]# vim nginx.conf

[screenshot: nginx.conf upstream configuration]

Inside the http block, add the upstream:

http {
    upstream westos {
        server 172.25.24.3:80;
        server 172.25.24.4:80;
    }
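
The upstream block by itself does not route any traffic; the server block must hand requests to it, which is presumably what the screenshot shows. A minimal sketch consistent with the curl test below:

    server {
        listen 80;
        server_name www.westos.org;
        location / {
            proxy_pass http://westos;   ## send requests to the westos upstream defined above
        }
    }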

Restart nginx so the change takes effect.

[root@foundation24 mnt]# curl www.westos.org
www.westos.org  --server4
[root@foundation24 mnt]# curl www.westos.org
www.westos.org      -server3
[root@foundation24 mnt]# curl www.westos.org
www.westos.org  --server4
[root@foundation24 mnt]# curl www.westos.org
www.westos.org      -server3
[root@foundation24 mnt]# curl www.westos.org
www.westos.org  --server4
[root@foundation24 mnt]# curl www.westos.org
www.westos.org      -server3
[root@foundation24 mnt]# 

2. Shared storage

[screenshots: attaching a virtual disk to server3]

server1 and server2 form the cluster, and the goal now is to give them synchronized shared storage.
Add another machine, server3, and attach a virtual disk to it.
server3 plays the storage (target) server.
[root@server3 html]# fdisk -l
Disk /dev/vdb: 8589 MB, 8589934592 bytes  ## the newly attached disk
[root@server3 ~]# yum install scsi-* -y    ## install the iSCSI target packages

[root@server3 ~]# cd /etc/tgt
[root@server3 tgt]# vim targets.conf

[screenshot: targets.conf]

<target iqn.2018-08.com.example:server.target1>
        backing-store /dev/vdb
        initiator-address 172.25.24.1
        initiator-address 172.25.24.2
</target>
[root@server3 tgt]# /etc/init.d/tgtd start

Run on both server1 and server2:
yum install -y iscsi-*
iscsiadm -m discovery -t st -p 172.25.24.3
iscsiadm -m node -l
[root@server2 ~]# iscsiadm -m discovery -t st -p 172.25.24.3
Starting iscsid:                                           [  OK  ]
172.25.24.3:3260,1 iqn.2018-08.com.example:server.target1
[root@server2 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2018-08.com.example:server.target1, portal: 172.25.24.3,3260] (multiple)
Login to [iface: default, target: iqn.2018-08.com.example:server.target1, portal: 172.25.24.3,3260] successful.

[root@server2 ~]# cat /proc/partitions 
major minor  #blocks  name

 252        0   20971520 vda   
 252        1     512000 vda1
 252        2   20458496 vda2
 253        0   19439616 dm-0
 253        1    1015808 dm-1
   8        0    8388608 sda   ## the shared iSCSI disk
The LVM steps below only need to run on one node; the result shows up on the other automatically.
[root@server1 ~]# pvcreate /dev/sda   ## create a PV
  Physical volume "/dev/sda" successfully created
[root@server1 ~]# vgcreate clustervg /dev/sda    ## create a clustered VG
  Clustered volume group "clustervg" successfully created
[root@server1 ~]# lvcreate -L 4G -n demo clustervg   ## carve a 4G LV named demo out of it
  Logical volume "demo" created
[root@server1 ~]# mkfs.ext4 /dev/clustervg/demo 
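
The create-on-one-node, visible-on-both behavior relies on clustered LVM: clvmd must be running on both nodes and LVM must use cluster locking (the c in the clustervg attribute wz--nc in the vgs output below confirms the VG is clustered). A couple of checks, added here for completeness:

[root@server1 ~]# /etc/init.d/clvmd status             ## must be running on both nodes
[root@server1 ~]# grep locking_type /etc/lvm/lvm.conf  ## should show locking_type = 3
[root@server1 ~]# lvmconf --enable-cluster             ## sets locking_type = 3 if it is not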
Refresh the view on server2:
[root@server2 ~]# pvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/sda            lvm2 a--   8.00g 8.00g
  /dev/vda2  VolGroup lvm2 a--  19.51g    0 

[root@server2 ~]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  VolGroup    1   2   0 wz--n- 19.51g    0 
  clustervg   1   1   0 wz--nc  8.00g 4.00g
[root@server2 ~]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g                                             
  lv_swap VolGroup  -wi-ao---- 992.00m                                             
  demo    clustervg -wi-a-----   4.00g   

Then install MySQL on both nodes:

[root@server2 ~]# yum install mysql mysql-server -y
[root@server1 ~]# yum install -y mysql mysql-server

Start with server2:

[root@server2 ~]# ll -d /var/lib/mysql/
drwxr-xr-x 2 mysql mysql 4096 Aug  9  2013 /var/lib/mysql/
[root@server2 ~]# cd /var/lib/mysql/
[root@server2 mysql]# ls
[root@server2 mysql]# ll -d .
drwxr-xr-x 2 mysql mysql 4096 Aug  9  2013 .
[root@server2 mysql]# mount /dev/clustervg/demo /var/lib/mysql/  ## mount the LV on /var/lib/mysql
[root@server2 mysql]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1228944  16933408   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33458    436786   8% /boot
/dev/mapper/clustervg-demo     4128448  139256   3779480   4% /var/lib/mysql
[root@server2 mysql]# cd
[root@server2 ~]# chown mysql.mysql /var/lib/mysql/  
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1228944  16933408   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33458    436786   8% /boot
/dev/mapper/clustervg-demo     4128448  139256   3779480   4% /var/lib/mysql
[root@server2 ~]# /etc/init.d/mysqld stop
Stopping mysqld:                                           [  OK  ]
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1228944  16933408   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33458    436786   8% /boot
/dev/mapper/clustervg-demo     4128448  139256   3779480   4% /var/lib/mysql
[root@server2 ~]# umount /var/lib/mysql/
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1228944  16933408   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33458    436786   8% /boot

Server1:

[root@server1 ~]# mount /dev/clustervg/demo /var/lib/mysql/
[root@server1 ~]# ll -d /var/lib/mysql/
drwxr-xr-x 3 mysql mysql 4096 Aug  2 15:46 /var/lib/mysql/
[root@server1 ~]# cd /var/lib/mysql/
[root@server1 mysql]# ls
lost+found
[root@server1 mysql]# /etc/init.d//mysqld start
Initializing MySQL database:  Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h server1 password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /usr/mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/bin/mysqlbug script!

                                                           [  OK  ]
Starting mysqld:                                           [  OK  ]
[root@server1 mysql]# /etc/init.d//mysqld stop
Stopping mysqld:                                           [  OK  ]
[root@server1 mysql]# ls   ## after this start of mysql, the data files now live on the shared volume
ibdata1  ib_logfile0  ib_logfile1  lost+found  mysql  test
[root@server1 mysql]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1243916  16918436   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33458    436786   8% /boot
/dev/mapper/clustervg-demo     4128448  160832   3757904   5% /var/lib/mysql
[root@server1 mysql]# umount /var/lib/mysql/
umount: /var/lib/mysql: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
[root@server1 mysql]# cd
[root@server1 ~]# umount /var/lib/mysql/
[root@server1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1243912  16918440   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33458    436786   8% /boot
[root@server1 ~]# clustat 
Cluster Status for xiaoqin @ Thu Aug  2 17:02:21 2018
Member Status: Quorate

 Member Name                            ID   Status
 ------ ----                            ---- ------
 server1                                    1 Online, Local, rgmanager
 server2                                    2 Online, rgmanager

 Service Name                  Owner (Last)                  State         
 ------- ----                  ----- ------                  -----         
 service:mysql                 server1                       started       
[root@server1 ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.71 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 
mysql> show databases;
+---------------------+
| Database            |
+---------------------+
| information_schema  |
| #mysql50#lost+found |
| mysql               |
| test                |
+---------------------+
4 rows in set (0.00 sec)
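
As a final check, service:mysql (visible in the clustat output above) can be relocated by hand; the shared logical volume should follow the service:

[root@server1 ~]# clusvcadm -r mysql -m server2   ## move service:mysql to server2
[root@server2 ~]# df | grep mysql                 ## /dev/mapper/clustervg-demo should now be mounted here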