Installing ClickHouse. Version: 20.1.6.30-2. Environment: CentOS release 6.5
**Installation guide:**
1. Run the following commands on every machine in the cluster to install ClickHouse:
sudo yum install yum-utils
sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
sudo yum install clickhouse-server clickhouse-client
sudo /etc/init.d/clickhouse-server start
clickhouse-client
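If the client connects, the install is working; a one-shot sanity check of the server version (assuming the default user with an empty password on port 9000) is:
clickhouse-client --query "SELECT version()"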
2. Install ZooKeeper.
That step is not covered in detail here, since installation guides are easy to find; a minimal configuration sketch follows for reference.
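A minimal zoo.cfg sketch for a three-node ensemble on cdh1/cdh2/cdh3 might look like the following (the file path and data directory are assumptions; each node also needs a myid file in dataDir containing its own server number):
# zoo.cfg on every ZooKeeper node (path depends on how ZooKeeper was installed)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=cdh1:2888:3888
server.2=cdh2:2888:3888
server.3=cdh3:2888:3888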
3. On every machine, create the configuration file /etc/metrika.xml (the file does not exist by default; you have to create it yourself).
Note: the <replica> value inside <macros> must be set to the hostname of that particular machine, for example:
<macros>
<replica>cdh1</replica>
</macros>
<yandex>
<!-- the element name must match the incl attribute of remote_servers in /etc/clickhouse-server/config.xml -->
<clickhouse_remote_servers>
<!-- cluster definition: despite the name, this example defines 5 shards with 1 replica each -->
<perftest_3shards_2replicas>
<!-- shard 1 -->
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>cdh1</host>
<port>9000</port>
</replica>
</shard>
<!-- shard 2 -->
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>cdh2</host>
<port>9000</port>
</replica>
</shard>
<!-- shard 3 -->
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>cdh3</host>
<port>9000</port>
</replica>
</shard>
<!-- shard 4 -->
<shard>
<replica>
<host>cdh4</host>
<port>9000</port>
</replica>
</shard>
<!-- shard 5 -->
<shard>
<replica>
<host>cdh5</host>
<port>9000</port>
</replica>
</shard>
</perftest_3shards_2replicas>
</clickhouse_remote_servers>
<!-- ZooKeeper configuration -->
<zookeeper-servers>
<node index="1">
<host>cdh1</host>
<port>2181</port>
</node>
<node index="2">
<host>cdh2</host>
<port>2181</port>
</node>
<node index="3">
<host>cdh3</host>
<port>2181</port>
</node>
</zookeeper-servers>
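<!-- as noted in step 3, set replica to this machine's own hostname -->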
<macros>
<replica>cdh1</replica>
</macros>
<networks>
<ip>::/0</ip>
</networks>
<clickhouse_compression>
<case>
<min_part_size>10000000000</min_part_size>
<min_part_size_ratio>0.01</min_part_size_ratio>
<method>lz4</method>
</case>
</clickhouse_compression>
</yandex>
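ClickHouse picks this file up through its substitution mechanism: if no <include_from> is set in /etc/clickhouse-server/config.xml, the server looks for /etc/metrika.xml by default, and elements carrying an incl attribute are filled in from it. If you keep the file elsewhere, point config.xml at it; a sketch of the relevant lines (the incl values must match the element names used above):
<!-- in /etc/clickhouse-server/config.xml -->
<include_from>/etc/metrika.xml</include_from>
<remote_servers incl="clickhouse_remote_servers" />
<zookeeper incl="zookeeper-servers" optional="true" />
<macros incl="macros" optional="true" />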
4. Start clickhouse-server on every host:
/etc/init.d/clickhouse-server start    # start the server
/etc/init.d/clickhouse-server status   # check its status
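If the service does not come up, the server log usually explains why; with the default package layout it lives under /var/log/clickhouse-server/:
tail -n 50 /var/log/clickhouse-server/clickhouse-server.err.log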
5. On any one machine, start the client and query the system.clusters table; all hosts of the cluster should be listed there.
(Note the many test_* cluster names in the output; the question at the end explains where they come from.)
[root@cdh5 ~]# clickhouse-client
ClickHouse client version 20.1.6.30 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.1.6 revision 54431.
cdh5 :) SELECT * FROM system.clusters;
SELECT *
FROM system.clusters
┌─cluster───────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address───┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ perftest_3shards_2replicas │ 1 │ 1 │ 1 │ cdh1 │ 192.168.18.160 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 2 │ 1 │ 1 │ cdh2 │ 192.168.18.161 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 3 │ 1 │ 1 │ cdh3 │ 192.168.18.162 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 4 │ 1 │ 1 │ cdh4 │ 192.168.18.163 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 5 │ 1 │ 1 │ cdh5 │ 192.168.18.164 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 2 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_shard_localhost │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_shard_localhost_secure │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9440 │ 0 │ default │ │ 0 │ 0 │
│ test_unavailable_shard │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_unavailable_shard │ 2 │ 1 │ 1 │ localhost │ ::1 │ 1 │ 0 │ default │ │ 0 │ 0 │
└───────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────┴────────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘
13 rows in set. Elapsed: 0.005 sec.
cdh5 :)
Question:
Why does system.clusters list so many test_* clusters? They come from the sample cluster definitions that ship inside the remote_servers section of /etc/clickhouse-server/config.xml. Commenting them out, as shown below, makes the extra rows disappear:
<remote_servers incl="clickhouse_remote_servers" >
<!-- Test only shard config for testing distributed storage -->
<!-- <test_shard_localhost>
<shard>
<replica>
<host>localhost</host>
<port>9000</port>
</replica>
</shard>
</test_shard_localhost>
<test_cluster_two_shards_localhost>
<shard>
<replica>
<host>localhost</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>localhost</host>
<port>9000</port>
</replica>
</shard>
</test_cluster_two_shards_localhost>
<test_cluster_two_shards>
<shard>
<replica>
<host>127.0.0.1</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>127.0.0.2</host>
<port>9000</port>
</replica>
</shard>
</test_cluster_two_shards>
<test_shard_localhost_secure>
<shard>
<replica>
<host>localhost</host>
<port>9440</port>
<secure>1</secure>
</replica>
</shard>
</test_shard_localhost_secure>
<test_unavailable_shard>
<shard>
<replica>
<host>localhost</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>localhost</host>
<port>1</port>
</replica>
</shard>
</test_unavailable_shard> -->
</remote_servers>
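After commenting out the test clusters, restart the server so the change takes effect (newer versions may also pick up remote_servers changes without a restart), then rerun the query:
sudo /etc/init.d/clickhouse-server restart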
cdh3 :) SELECT * FROM system.clusters;
SELECT *
FROM system.clusters
┌─cluster────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address───┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ perftest_3shards_2replicas │ 1 │ 1 │ 1 │ cdh1 │ 192.168.18.160 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 2 │ 1 │ 1 │ cdh2 │ 192.168.18.161 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 3 │ 1 │ 1 │ cdh3 │ 192.168.18.162 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 4 │ 1 │ 1 │ cdh4 │ 192.168.18.163 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 5 │ 1 │ 1 │ cdh5 │ 192.168.18.164 │ 9000 │ 0 │ default │ │ 0 │ 0 │
└────────────────────────────┴───────────┴──────────────┴─────────────┴───────────┴────────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘
5 rows in set. Elapsed: 0.005 sec.
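With the cluster visible, a quick way to exercise it is a local MergeTree table on every node plus a Distributed table on top that routes reads and writes across perftest_3shards_2replicas. The statements below are only a sketch with hypothetical table and column names:
-- create the local storage table on every node in one shot
CREATE TABLE default.hits_local ON CLUSTER perftest_3shards_2replicas
(
    event_date Date,
    user_id UInt64
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, user_id);

-- Distributed table that fans queries and inserts out over the shards
CREATE TABLE default.hits_all ON CLUSTER perftest_3shards_2replicas
AS default.hits_local
ENGINE = Distributed(perftest_3shards_2replicas, default, hits_local, rand());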
For more content, follow the WeChat official account "數據專場".