
Building a ZooKeeper cluster: pseudo-distributed, distributed, and docker + zk setups

mrli2016 / 2,225 reads

Abstract: initLimit caps the leader/follower initial sync: if the configured multiple of tickTime is exceeded, the connection fails. syncLimit caps leader/follower request-and-acknowledgement exchanges, in heartbeats; a follower that cannot reach the leader within that window is dropped. 4lw.commands.whitelist lists the four-letter commands that are allowed; anything not whitelisted is disabled.

Environment:

192.168.1.5 zk1
192.168.1.6 zk2
192.168.1.7 zk3

Concepts are not covered here; we go straight to the steps. Common ZooKeeper tuning parameters are listed at the end of the article.

1. Prepare the environment ① Distribute the environment

[root@kuting1 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.5 zk1
192.168.1.6 zk2
192.168.1.7 zk3

[root@kuting1 ~]# ssh-keygen -t rsa
[root@kuting1 ~]# for i in `tail -3 /etc/hosts | awk '{print $2}'`; do ssh-copy-id $i; done
[root@kuting1 ~]# for i in `tail -3 /etc/hosts | awk '{print $2}'`; do scp /etc/hosts $i:/etc/hosts; done

② Download the ZooKeeper tarball

[root@kuting1 ~]# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz
This walkthrough uses ZooKeeper 3.4.11.

[root@kuting1 ~]# tar zxf zookeeper-3.4.11.tar.gz
[root@kuting1 ~]# mkdir /data/server -p                    # program directory
[root@kuting1 ~]# mkdir /data/data/zookeeper/0{0..2} -p    # data directories
[root@kuting1 ~]# mkdir /data/logs/zookeeper/0{0..2} -p    # log directories
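The `0{0..2}` pattern in the mkdir commands is bash brace expansion: it expands to 00, 01 and 02, so one command creates all three numbered directories. A quick way to see what a pattern will expand to:

```shell
# bash brace expansion demo: 0{0..2} expands to 00 01 02
# (requires bash; plain POSIX sh does not do brace expansion)
echo data/zookeeper/0{0..2}
# → data/zookeeper/00 data/zookeeper/01 data/zookeeper/02
```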

③ ZooKeeper requires a Java environment with the java commands on the default PATH; that deployment is omitted here. 2. Build a standalone pseudo-distributed ZooKeeper cluster

Environment:

192.168.1.5 zk1
① Configure zookeeper00

[root@kuting1 ~]# mv zookeeper-3.4.11 /data/server/zookeeper00
[root@kuting1 ~]# cd /data/server/zookeeper00/conf/
[root@kuting1 conf]# cp zoo_sample.cfg zoo.cfg
[root@kuting1 conf]# vim zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/data/data/zookeeper/00
dataLogDir=/data/logs/zookeeper/00
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.5:2889:3889
server.3=192.168.1.5:2890:3890

[root@kuting1 ~]# echo 1 > /data/data/zookeeper/00/myid

② Configure zookeeper01

[root@kuting1 ~]# cp -rf /data/server/zookeeper00 /data/server/zookeeper01
[root@kuting1 ~]# vim /data/server/zookeeper01/conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/data/data/zookeeper/01     # per-node directory
dataLogDir=/data/logs/zookeeper/01
# the port at which the clients will connect
clientPort=2182     # the client port must differ on each node
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.5:2889:3889
server.3=192.168.1.5:2890:3890

[root@kuting1 ~]# echo 2 > /data/data/zookeeper/01/myid

③ Configure zookeeper02

[root@kuting1 ~]# cp -rf /data/server/zookeeper00 /data/server/zookeeper02
[root@kuting1 ~]# vim /data/server/zookeeper02/conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/data/data/zookeeper/02
dataLogDir=/data/logs/zookeeper/02
# the port at which the clients will connect
clientPort=2183
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.5:2889:3889
server.3=192.168.1.5:2890:3890

[root@kuting1 ~]# echo 3 > /data/data/zookeeper/02/myid
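Steps ① through ③ differ only in the directory suffix, the client port, and the myid value, so the pseudo-cluster layout can also be generated in one loop. A minimal sketch; BASE is a hypothetical scratch prefix standing in for the /data tree used above, the generated zoo.cfg keeps only the settings that matter here, and (for brevity) it is written next to each data dir rather than into a conf/ dir:

```shell
# Generate the three pseudo-node data dirs, configs and myid files in one pass.
BASE=${BASE:-/tmp/zk-pseudo}   # hypothetical scratch prefix; adjust on a real host
for i in 0 1 2; do
  mkdir -p "$BASE/data/zookeeper/0$i" "$BASE/logs/zookeeper/0$i"
  cat > "$BASE/data/zookeeper/0$i/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$BASE/data/zookeeper/0$i
dataLogDir=$BASE/logs/zookeeper/0$i
clientPort=$((2181 + i))
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.5:2889:3889
server.3=192.168.1.5:2890:3890
EOF
  echo $((i + 1)) > "$BASE/data/zookeeper/0$i/myid"   # myid = i + 1
done
```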

④ Test startup

Start the first node:
[root@kuting1 ~]# cd /data/server/zookeeper00/bin
[root@kuting1 bin]# ./zkServer.sh start

ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper00/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[root@kuting1 bin]# ss -anpt | grep java

LISTEN     0      50          :::2181                    :::*                   users:(("java",pid=13285,fd=25))
LISTEN     0      50          :::46475                   :::*                   users:(("java",pid=13285,fd=19))
LISTEN     0      50      ::ffff:192.168.1.5:3888                    :::*                   users:(("java",pid=13285,fd=26))

Start the second node:
[root@kuting1 bin]# ../../zookeeper01/bin/zkServer.sh start

Start the third node:
[root@kuting1 bin]# ../../zookeeper02/bin/zkServer.sh start

[root@kuting1 bin]# ss -anpt | grep java

LISTEN     0      50          :::37985                   :::*                   users:(("java",pid=14163,fd=19))
LISTEN     0      50          :::2181                    :::*                   users:(("java",pid=13285,fd=25))
LISTEN     0      50          :::2182                    :::*                   users:(("java",pid=14163,fd=25))
LISTEN     0      50          :::2183                    :::*                   users:(("java",pid=14207,fd=25))
LISTEN     0      50      ::ffff:192.168.1.5:2889                    :::*                   users:(("java",pid=14163,fd=28))
LISTEN     0      50          :::46475                   :::*                   users:(("java",pid=13285,fd=19))
LISTEN     0      50      ::ffff:192.168.1.5:3888                    :::*                   users:(("java",pid=13285,fd=26))
LISTEN     0      50      ::ffff:192.168.1.5:3889                    :::*                   users:(("java",pid=14163,fd=26))
LISTEN     0      50      ::ffff:192.168.1.5:3890                    :::*                   users:(("java",pid=14207,fd=26))
LISTEN     0      50          :::42517                   :::*                   users:(("java",pid=14207,fd=19))
ESTAB      0      0       ::ffff:192.168.1.5:41592               ::ffff:192.168.1.5:3888                users:(("java",pid=14207,fd=27))
ESTAB      0      0       ::ffff:192.168.1.5:38194               ::ffff:192.168.1.5:2889                users:(("java",pid=14207,fd=29))
ESTAB      0      0       ::ffff:192.168.1.5:41080               ::ffff:192.168.1.5:3889                users:(("java",pid=14207,fd=28))
ESTAB      0      0       ::ffff:192.168.1.5:41584               ::ffff:192.168.1.5:3888                users:(("java",pid=14163,fd=27))
ESTAB      0      0       ::ffff:192.168.1.5:3889                ::ffff:192.168.1.5:41080               users:(("java",pid=14163,fd=30))
ESTAB      0      0       ::ffff:192.168.1.5:2889                ::ffff:192.168.1.5:38194               users:(("java",pid=14163,fd=31))
ESTAB      0      0       ::ffff:192.168.1.5:38188               ::ffff:192.168.1.5:2889                users:(("java",pid=13285,fd=28))
ESTAB      0      0       ::ffff:192.168.1.5:2889                ::ffff:192.168.1.5:38188               users:(("java",pid=14163,fd=29))
ESTAB      0      0       ::ffff:192.168.1.5:3888                ::ffff:192.168.1.5:41584               users:(("java",pid=13285,fd=27))
ESTAB      0      0       ::ffff:192.168.1.5:3888                ::ffff:192.168.1.5:41592               users:(("java",pid=13285,fd=29))
⑤ Create some test znodes

Connect to the first node (client port 2181) and create a znode:
[root@kuting1 00]# ./zkCli.sh -server 127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: 127.0.0.1:2181(CONNECTED) 1] create /data test-data
Created /data
[zk: 127.0.0.1:2181(CONNECTED) 2] ls /
[zookeeper, data]
[zk: 127.0.0.1:2181(CONNECTED) 3] quit

Connect to the second node (client port 2182) and check that the znode has replicated:
[root@kuting1 bin]# ./zkCli.sh -server 127.0.0.1:2182
[zk: 127.0.0.1:2182(CONNECTED) 0] ls /
[zookeeper, data]
[zk: 127.0.0.1:2182(CONNECTED) 1] get /data
test-data    # the data is consistent
cZxid = 0x100000002
ctime = Sat Aug 04 18:31:39 CST 2018
mZxid = 0x100000002
mtime = Sat Aug 04 18:31:39 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 127.0.0.1:2182(CONNECTED) 2] quit

Connect to the third node (client port 2183) and check that the znode has replicated:
[root@kuting1 bin]# ./zkCli.sh -server 127.0.0.1:2183
[zk: 127.0.0.1:2183(CONNECTED) 0] ls /
[zookeeper, data]
[zk: 127.0.0.1:2183(CONNECTED) 1] get /data
test-data
cZxid = 0x100000002
ctime = Sat Aug 04 18:31:39 CST 2018
mZxid = 0x100000002
mtime = Sat Aug 04 18:31:39 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 127.0.0.1:2183(CONNECTED) 2] quit

⑥ Check cluster status

Cluster status and leader/follower roles are shown by ./zkServer.sh status, but running it node by node is tedious, so a small shell script batches the check:
[root@kuting1 ~]# cat checkzk.sh

#!/bin/bash
n=(0 1 2)
for i in "${n[@]}"; do
    echo $i
    /data/server/zookeeper0$i/bin/zkServer.sh status
done

[root@kuting1 ~]# chmod +x checkzk.sh
[root@kuting1 ~]# ./checkzk.sh

0
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper00/bin/../conf/zoo.cfg
Mode: follower
1
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper01/bin/../conf/zoo.cfg
Mode: leader
2
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper02/bin/../conf/zoo.cfg
Mode: follower

One node has been elected leader and the data is in sync: the standalone pseudo-distributed cluster is complete.

3. Build a distributed ZooKeeper cluster ① Prepare the environment

The hosts entries were already distributed while building the standalone version. Disable the firewall, and make sure all three machines have a Java environment.

Delete the standalone setup, keeping one node's copy to distribute to the other machines:
[root@kuting1 ~]# rm -rf /data/server/zookeeper0{1..2}
[root@kuting1 ~]# mv /data/server/zookeeper00/ /data/server/zookeeper

Create the /data/server program directory, the /data/data/zookeeper data directory and the /data/logs/zookeeper log directory on the other nodes, then sync the program tree over:
[root@kuting1 ~]# rsync -az /data/server/zookeeper zk2:/data/server/
[root@kuting1 ~]# rsync -az /data/server/zookeeper zk3:/data/server/

On each node, append the following to the end of /etc/profile and reload it with source:

export ZOOKEEPER_HOME=/data/server/zookeeper
export JAVA_HOME=/data/server/java
export PATH=$PATH:/data/server/java/bin:/data/server/zookeeper/bin
② Configure zookeeper1 (zk1, 192.168.1.5)

[root@kuting1 ~]# vim /data/server/zookeeper/conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/data/data/zookeeper/
dataLogDir=/data/logs/zookeeper/
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.6:2888:3888
server.3=192.168.1.7:2888:3888

The server.N id merely identifies each node; it must match that node's myid file:
[root@kuting1 ~]# echo 1 > /data/data/zookeeper/myid

③ Configure zookeeper2 (zk2, 192.168.1.6)

[root@kuting2 ~]# vim /data/server/zookeeper/conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/data/data/zookeeper/
dataLogDir=/data/logs/zookeeper/
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.6:2888:3888
server.3=192.168.1.7:2888:3888

[root@kuting2 ~]# echo 2 > /data/data/zookeeper/myid

④ Configure zookeeper3 (zk3, 192.168.1.7)

[root@kuting3 ~]# vim /data/server/zookeeper/conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/data/data/zookeeper/
dataLogDir=/data/logs/zookeeper/
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.6:2888:3888
server.3=192.168.1.7:2888:3888

[root@kuting3 ~]# echo 3 > /data/data/zookeeper/myid
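Because each node's myid must agree with its server.N line, deriving the id from the hostname avoids copy-paste mistakes when writing the three myid files. A sketch, assuming the kuting1..kuting3 hostnames used in this article (myid_for is a hypothetical helper):

```shell
# Map a hostname to its ZooKeeper server id (assumes kuting1..3 naming).
myid_for() {
  case "$1" in
    kuting1) echo 1 ;;
    kuting2) echo 2 ;;
    kuting3) echo 3 ;;
    *) echo "unknown host: $1" >&2; return 1 ;;
  esac
}

# On each node this would then be:
#   myid_for "$(hostname -s)" > /data/data/zookeeper/myid
```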

⑤ Start the ZooKeeper cluster and test it

[root@kuting1 conf]# zkServer.sh start
[root@kuting2 conf]# zkServer.sh start
[root@kuting3 conf]# zkServer.sh start

Check the status on all three nodes (an election runs right after startup; make sure the firewall is off):
[root@kuting1 zookeeper]# zkServer.sh status

ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: leader

[root@kuting2 conf]# zkServer.sh status

ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[root@kuting3 conf]# zkServer.sh status

ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: follower
⑥ Create a znode and check replication

[root@kuting3 zookeeper]# zkCli.sh -server 192.168.1.7:2181
[zk: 192.168.1.7:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: 192.168.1.7:2181(CONNECTED) 1] create /real-culster real-data
Created /real-culster
[zk: 192.168.1.7:2181(CONNECTED) 2] ls /
[zookeeper, real-culster]
[zk: 192.168.1.7:2181(CONNECTED) 3] get /real-culster
real-data
cZxid = 0x100000002
ctime = Sat Sep 29 11:09:40 CST 2018
mZxid = 0x100000002
mtime = Sat Sep 29 11:09:40 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 192.168.1.7:2181(CONNECTED) 4] quit

[root@kuting3 zookeeper]# zkCli.sh -server 192.168.1.5:2181
[zk: 192.168.1.5:2181(CONNECTED) 0] ls /
[zookeeper, real-culster]
[zk: 192.168.1.5:2181(CONNECTED) 1] get /real-culster
real-data
cZxid = 0x100000002
ctime = Sat Sep 29 11:09:40 CST 2018
mZxid = 0x100000002
mtime = Sat Sep 29 11:09:40 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 192.168.1.5:2181(CONNECTED) 2] quit

[root@kuting3 zookeeper]# zkCli.sh -server 192.168.1.6:2181
[zk: 192.168.1.6:2181(CONNECTED) 0] ls /
[zookeeper, real-culster]
[zk: 192.168.1.6:2181(CONNECTED) 1] get /real-culster
real-data
cZxid = 0x100000002
ctime = Sat Sep 29 11:09:40 CST 2018
mZxid = 0x100000002
mtime = Sat Sep 29 11:09:40 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 192.168.1.6:2181(CONNECTED) 2] quit

Both follower nodes have replicated the znode.

⑦ Test leader election and high availability

Stop the leader node:
[root@kuting1 zookeeper]# zkServer.sh status

ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: leader

[root@kuting1 zookeeper]# zkServer.sh stop

ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED

Now check whether one of the follower nodes has been elected leader:
[root@kuting2 ~]# zkServer.sh status

ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: leader

[root@kuting2 ~]# ip a | grep inet | grep ens33$

inet 192.168.1.6/24 brd 192.168.1.255 scope global noprefixroute ens33

[root@kuting1 conf]# zkServer.sh status

ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[root@kuting1 conf]# ip a | grep inet | grep ens33$

inet 192.168.1.5/24 brd 192.168.1.255 scope global noprefixroute ens33

A follower has been elected as the new leader. Next, start the zk that was stopped:
[root@kuting3 zookeeper]# zkServer.sh start
[root@kuting3 zookeeper]# zkServer.sh status

ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: follower

It does not become leader again; it rejoins the cluster as a follower.

[root@kuting2 ~]# zkServer.sh  status 
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: leader

The leader role is unchanged, so nodes only switch roles during an election.

4. Build a ZooKeeper cluster with Docker ① Install docker and docker-compose ② Write a docker-compose.yml for the zk cluster

[root@kuting1 ~]# docker pull zookeeper
[root@kuting1 ~]# mkdir -p /data/docker/docker-compose/zookeeper-cluster
[root@kuting1 ~]# cd $_
[root@kuting1 zookeeper-cluster]# vim docker-compose.yml

version: "2"
services:
  zk1:
    image: zookeeper
    restart: always
    container_name: zk1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888

  zk2:
    image: zookeeper
    restart: always
    container_name: zk2
    ports:
      - "2182:2181"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888

  zk3:
    image: zookeeper
    restart: always
    container_name: zk3
    ports:
      - "2183:2181"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888

[root@kuting1 zookeeper-cluster]# docker-compose up -d
[root@kuting1 zookeeper-cluster]# docker-compose ps

Name              Command               State                     Ports                   
------------------------------------------------------------------------------------------
zk1    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp
zk2    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2182->2181/tcp, 2888/tcp, 3888/tcp
zk3    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2183->2181/tcp, 2888/tcp, 3888/tcp
③ Check the ZooKeeper cluster status

[root@kuting1 zookeeper-cluster]# echo stat | nc 127.0.0.1 2182

Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
Clients:
 /172.18.0.1:40116[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x100000002
Mode: follower
Node count: 4

[root@kuting1 zookeeper-cluster]# echo stat | nc 127.0.0.1 2181

Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
Clients:
 /172.18.0.1:55510[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/33/66
Received: 3
Sent: 2
Connections: 1
Outstanding: 0
Zxid: 0x100000002
Mode: follower
Node count: 4

[root@kuting1 zookeeper-cluster]# echo stat | nc 127.0.0.1 2183

Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
Clients:
 /172.18.0.1:34678[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x100000002
Mode: leader
Node count: 4
Proposal sizes last/min/max: 32/32/36
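The Mode: line in the stat replies above is what identifies the leader, so finding it programmatically is just a matter of parsing that field. A sketch (mode_of is a hypothetical helper; the commented loop needs a running ensemble):

```shell
# Extract the Mode field from a `stat` four-letter-word reply on stdin.
mode_of() {
  awk -F': ' '/^Mode:/ {print $2}'
}

# Against a live cluster this would be used as:
#   for p in 2181 2182 2183; do
#     echo "$p: $(echo stat | nc 127.0.0.1 $p | mode_of)"
#   done
```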
④ Check znode replication

[root@kuting1 ~]# zkCli.sh -server 127.0.0.1:2181 # port 2181 maps to zk1; the others follow the port mappings in the yml file
[zk: 127.0.0.1:2181(CONNECTED) 3] create /data test-data
[root@kuting1 ~]# zkCli.sh -server 127.0.0.1:2182
[zk: 127.0.0.1:2182(CONNECTED) 0] ls /
[zookeeper, data]
[root@kuting1 ~]# zkCli.sh -server 127.0.0.1:2183
[zk: 127.0.0.1:2183(CONNECTED) 0] ls /
[zookeeper, data]

Cluster state and data replication are both correct; the Docker-based setup is complete.

5. Common ZooKeeper tuning parameters

tickTime: the heartbeat interval, in milliseconds, between ZooKeeper servers and between clients and servers; one heartbeat is sent every tickTime. ZooKeeper has a session concept much like the one in web development, and the minimum session timeout is twice the tickTime.

initLimit: the leader/follower (L/F) initial connection limit — the maximum number of heartbeats (tickTime intervals) a follower may take to connect and sync with the leader for the first time, expressed as a multiple of tickTime. If that multiple of tickTime is exceeded, the connection fails.

syncLimit: the leader/follower sync limit — the maximum number of heartbeats tolerated between a request and its acknowledgement when the leader and a follower exchange messages. A follower that cannot communicate with the leader within this window is dropped.
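With the values used throughout this article, these limits work out as follows (the minimum session timeout being 2 × tickTime, as noted above):

```shell
tickTime=2000    # ms per tick
initLimit=10     # ticks allowed for a follower's initial sync with the leader
syncLimit=5      # ticks allowed between a request and its acknowledgement

echo "initLimit timeout:   $((tickTime * initLimit)) ms"   # 20000 ms = 20 s
echo "syncLimit timeout:   $((tickTime * syncLimit)) ms"   # 10000 ms = 10 s
echo "min session timeout: $((tickTime * 2)) ms"           # 4000 ms
```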

4lw.commands.whitelist: the whitelist of four-letter-word commands; any command not listed is disabled. Example: 4lw.commands.whitelist=stat,ruok,conf,isro

Server names and addresses: the cluster membership lines carry (server id, server address, leader-communication port, election port) and use a special format:

server.N=host:leaderPort:electionPort
server.1=itcast05:2888:3888
server.2=itcast06:2888:3888
server.3=itcast07:2888:3888
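A mismatch between a node's myid file and its server.N lines is a common cause of failed elections. A quick consistency check could look like this (check_myid is a hypothetical helper, not part of ZooKeeper):

```shell
# Return success if the id stored in the myid file has a matching
# server.N= line in the given zoo.cfg, failure otherwise.
check_myid() {  # usage: check_myid <zoo.cfg> <myid-file>
  grep -q "^server\.$(cat "$2")=" "$1"
}
```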

Copyright belongs to the author; please do not reproduce without permission.

When reprinting, cite the original URL: https://www.ucloud.cn/yun/33669.html
