Distributed service coordination with ZooKeeper (Part 1): introduction and installation on Linux (single node)

ZooKeeper notes: from getting started to quick mastery

Download the installation package from the official site. This article uses the tar.gz package, extracted directly; an rpm package is also available.

 

This article covers the basics of ZooKeeper: its architecture, node types, node watches, and common commands. It concludes with building and testing a highly available ZooKeeper cluster. Contents:

 

ZooKeeper introduction and installation

ZooKeeper architecture

ZooKeeper node types

ZooKeeper node watches

ZooKeeper common commands

ZooKeeper client development

Building and testing a highly available ZooKeeper cluster

 

Hopefully this helps readers who want to get up to speed with ZooKeeper quickly.

Installing ZooKeeper on Linux

First install the JDK.

Then download ZooKeeper.

Because ZooKeeper elects its leader by majority vote, an odd number of nodes is usually deployed. Machine list:

192.168.33.11

192.168.33.12

192.168.33.13
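The majority rule behind the odd-node recommendation can be sketched numerically. A minimal standalone sketch (the QuorumMath class is ours, not part of ZooKeeper): a cluster of n nodes needs a strict majority of healthy servers, so it tolerates the remainder as failures.

```java
// Why ZooKeeper clusters use an odd number of nodes: a cluster of n
// servers stays available only while a strict majority is healthy.
public class QuorumMath {
    // Minimum number of healthy servers needed for the cluster to serve.
    static int quorum(int n) { return n / 2 + 1; }

    // How many server failures the cluster survives.
    static int tolerated(int n) { return n - quorum(n); }

    public static void main(String[] args) {
        for (int n = 3; n <= 7; n++) {
            System.out.println(n + " nodes: quorum=" + quorum(n)
                    + ", tolerated failures=" + tolerated(n));
        }
    }
}
```

Note that 3 and 4 nodes both tolerate only one failure, so a fourth node adds cost without adding fault tolerance; 5 nodes tolerate two failures.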

1. Extract: tar -zxvf zookeeper-3.4.5.tar.gz

mv zookeeper-3.4.5 ../zookeeper

2. Edit the configuration

cp /zookeeper/conf/zoo_sample.cfg zoo.cfg

vi zoo.cfg

Modify:

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/usr/local/zookeeper/data

dataLogDir=/usr/local/zookeeper/logs

clientPort=2181

server.1=192.168.33.11:2881:3881

server.2=192.168.33.12:2882:3882

server.3=192.168.33.13:2883:3883

Open the firewall ports:

vi /etc/sysconfig/iptables

## zookeeper

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2181 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2881 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 3881 -j ACCEPT

service iptables restart

vi /usr/local/zookeeper/data/myid   # set the value to 1

Start: ./bin/zkServer.sh start

Stop: ./bin/zkServer.sh stop

jps

1456 QuorumPeerMain

Here, QuorumPeerMain is the ZooKeeper process; its presence indicates ZooKeeper started normally.


1. Download the zookeeper package (from the web) and place it in /usr/local/zookeeper

ZooKeeper Introduction

ZooKeeper is a distributed coordination service framework, commonly used to solve management and coordination problems in distributed projects, such as unified naming services, distributed configuration management, distributed locks, and cluster node state coordination.


2. Extract the archive: tar -zxvf zookeeper-3.4.6.tar.gz

Download

  1. Enter the zookeeper-3.4.6 directory and create a data folder.

  2. Rename zoo_sample.cfg to zoo.cfg

Extract

[[email protected]
ftpuser]# tar -zxvf zookeeper-3.4.9.tar.gz


mv zoo_sample.cfg zoo.cfg

Create the data and log directories

Create the following two folders under the extracted zookeeper directory

 

[[email protected]
zookeeper-3.4.9]# mkdir data

[[email protected]
zookeeper-3.4.9]# mkdir logs

 


5. Start, stop, and check status. Note: close the firewall first. ZooKeeper is not started here; it will be started later when Kafka is deployed.

Copy the configuration file

Go to the conf directory under the extracted zookeeper directory and copy zoo_sample.cfg to a file named zoo.cfg

 

[[email protected]
conf]# cp zoo_sample.cfg zoo.cfg


./zkServer.sh start

Edit the configuration file

[[email protected]
conf]# vi zoo.cfg

 

# The number of milliseconds of each tick

tickTime=2000

# The number of ticks that the initial

# synchronization phase can take

initLimit=10

# The number of ticks that can pass between

# sending a request and getting an acknowledgement

syncLimit=5

# the directory where the snapshot is stored.

# do not use /tmp for storage, /tmp here is just


# example sakes.

dataDir=/home/ftpuser/zookeeper-3.4.9/data

dataLogDir=/home/ftpuser/zookeeper-3.4.9/logs

# the port at which the clients will connect

clientPort=2181

server.1=192.168.2.129:2888:3888

# the maximum number of client connections.

# increase this if you need to handle more clients

#maxClientCnxns=60

#

# Be sure to read the maintenance section of the

# administrator guide before turning on autopurge.

#

#

#

# The number of snapshots to retain in dataDir

#autopurge.snapRetainCount=3

# Purge task interval in hours

# Set to “0” to disable auto purge feature

#autopurge.purgeInterval=1

 

Configure dataDir, dataLogDir, and the server entries in zoo.cfg.

server.A=B:C:D: A is a number identifying the server; B is the server's
IP address; C is the port this server uses to exchange information with
the cluster Leader; D is the election port: if the cluster Leader goes
down, a new Leader must be elected, and the servers communicate with
each other on this port during the election.

 

clientPort is the port that clients (applications) use to connect to the
Zookeeper server; Zookeeper listens on this port for client access
requests
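The server.A=B:C:D layout just described can be illustrated with a small parser. This is a sketch only; the ServerLine class and its field names are ours, not a ZooKeeper API:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Parse one "server.A=B:C:D" line from zoo.cfg into its four parts.
public class ServerLine {
    final int id;            // A: server number (matches the myid file)
    final String host;       // B: server IP address or hostname
    final int peerPort;      // C: port for exchanging info with the Leader
    final int electionPort;  // D: port used during leader election

    ServerLine(int id, String host, int peerPort, int electionPort) {
        this.id = id;
        this.host = host;
        this.peerPort = peerPort;
        this.electionPort = electionPort;
    }

    static ServerLine parse(String line) {
        Matcher m = Pattern.compile("server\\.(\\d+)=([^:]+):(\\d+):(\\d+)")
                .matcher(line.trim());
        if (!m.matches()) throw new IllegalArgumentException("bad line: " + line);
        return new ServerLine(Integer.parseInt(m.group(1)), m.group(2),
                Integer.parseInt(m.group(3)), Integer.parseInt(m.group(4)));
    }

    public static void main(String[] args) {
        ServerLine s = ServerLine.parse("server.1=192.168.2.129:2888:3888");
        System.out.println("id=" + s.id + " host=" + s.host
                + " peer=" + s.peerPort + " election=" + s.electionPort);
    }
}
```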


./zkServer.sh stop

Create the myid file

Go to /home/ftpuser/zookeeper-3.4.9/data and create the myid file

 

[[email protected]
conf]# cd /home/ftpuser/zookeeper-3.4.9/data

[[email protected]
data]# vi myid

1

 

Edit the myid file and, on each machine, enter the number corresponding to its IP. For example, on zookeeper node 1 the file content is just 1. For a single-node installation there is only one entry, server.1


./zkServer.sh status

Add environment variables

[[email protected]
data]# vi /etc/profile

 

Append the zookeeper environment variables at the end of the file

 

# zookeeper env

export ZOOKEEPER_HOME=/home/ftpuser/zookeeper-3.4.9/

export PATH=$ZOOKEEPER_HOME/bin:$PATH

 

Run source /etc/profile to make the environment variables take effect, then run echo $ZOOKEEPER_HOME to verify


6. Building the zookeeper cluster

Open the ports

In the firewall, open the ports that will be used:
2181, 2888, 3888. Edit /etc/sysconfig/iptables and add the following 3 lines

 

[[email protected]
data]# cat /etc/sysconfig/iptables

# Generated by iptables-save v1.4.7 on Thu Jun  2 22:41:13 2016

*filter

:INPUT ACCEPT [5:320]

:FORWARD ACCEPT [0:0]

:OUTPUT ACCEPT [4:464]

-A INPUT -p udp -m udp --dport 23 -j ACCEPT

-A INPUT -p tcp -m tcp --dport 23 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2181 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2888 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 3888 -j ACCEPT

COMMIT

# Completed on Thu Jun  2 22:41:13 2016

 

[[email protected]
data]#

 

Check the firewall status

 

[[email protected]
data]# service iptables status

 

 

1. The three machines' IPs are 192.168.1.1, 192.168.1.2, and 192.168.1.3

Start zookeeper

[[email protected]
zookeeper-3.4.9]# cd bin

[[email protected]
bin]# ll

total 36

-rwxr-xr-x. 1 1001 1001  232 Aug 23 15:39 README.txt

-rwxr-xr-x. 1 1001 1001 1937 Aug 23 15:39 zkCleanup.sh

-rwxr-xr-x. 1 1001 1001 1032 Aug 23 15:39 zkCli.cmd

-rwxr-xr-x. 1 1001 1001 1534 Aug 23 15:39 zkCli.sh

-rwxr-xr-x. 1 1001 1001 1579 Aug 23 15:39 zkEnv.cmd

-rwxr-xr-x. 1 1001 1001 2696 Aug 23 15:39 zkEnv.sh

-rwxr-xr-x. 1 1001 1001 1065 Aug 23 15:39 zkServer.cmd

-rwxr-xr-x. 1 1001 1001 6773 Aug 23 15:39 zkServer.sh

[[email protected]
bin]# zkServer.sh start

ZooKeeper JMX enabled by default

Using config: /home/ftpuser/zookeeper-3.4.9/bin/../conf/zoo.cfg

Starting zookeeper … STARTED


Edit the hosts file: vi /etc/hosts and add the following entries

View the zookeeper background log

[[email protected]
~]# tail -f /home/ftpuser/zookeeper-3.4.9/bin/zookeeper.out


192.168.1.1 master

Check the zookeeper process

[[email protected]
bin]# jps

2011 QuorumPeerMain

1245 Bootstrap

2030 Jps

[[email protected]
bin]#

 

Here, QuorumPeerMain is the zookeeper process, so it is running normally. Check the status:

 

[[email protected]
bin]# zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /home/ftpuser/zookeeper-3.4.9/bin/../conf/zoo.cfg

Mode: standalone

[[email protected]
bin]#


192.168.1.2 slave1

Stop the zookeeper process

[[email protected]
bin]# zkServer.sh stop

ZooKeeper JMX enabled by default

Using config: /home/ftpuser/zookeeper-3.4.9/bin/../conf/zoo.cfg

Stopping zookeeper … STOPPED

[[email protected]
bin]#


192.168.1.3 slave2

Connect a client to zookeeper

[[email protected]
bin]# ./zkCli.sh -server 192.168.2.129:2181

 

Type help to see the available commands

 

[zk: 192.168.2.129:2181(CONNECTED) 0] help

ZooKeeper -server host:port cmd args

connect host:port

get path [watch]

ls path [watch]

set path data [version]

rmr path

delquota [-n|-b] path

quit

printwatches on|off

create [-s] [-e] path data acl

stat path [watch]

close

ls2 path [watch]

history

listquota path

setAcl path acl

getAcl path

sync path

redo cmdno

addauth scheme auth

delete path [version]

setquota -n|-b val path

[zk: 192.168.2.129:2181(CONNECTED) 1]


2. Enter the zookeeper directory and create the data folder: mkdir data

ZooKeeper Architecture

ZooKeeper's architecture is built around a namespace, similar to a standard file system. The namespace consists of data registers called znodes which, in ZooKeeper terminology, are analogous to files and directories. Unlike a typical file system, ZooKeeper is designed for storing coordination data: znode data is kept in memory, which enables high throughput and low latency.

 

ZooKeeper allows distributed processes (note the emphasis on processes; in a distributed environment this usually means coordinated access across different hosts) to coordinate with each other through znode data organized in the shared ZooKeeper namespace.

 

 

ZooKeeper namespace organization diagram

Enter data and create the myid file (vi myid); write 1, 2, and 3 respectively. The IDs must not be the same.

ZooKeeper Node Types

 

As the namespace diagram above shows, the root path of the ZooKeeper namespace is "/". Every znode has its own path, and every znode stores a piece of coordination data, such as status information, configuration, or location information. Because znode data is mainly used for coordination between distributed processes, it is usually quite small.

Data in every znode of the namespace is readable and writable, but each node has an access control list (ACL) that restricts who can do what.

Every znode in the namespace has a type; the main types are:

(1) PERSISTENT, permanent (exists until a client deletes it)

(2) PERSISTENT_SEQUENTIAL, permanent and with a sequence number

(3) EPHEMERAL, transient (deleted automatically when the client disconnects)

(4) EPHEMERAL_SEQUENTIAL, transient and with a sequence number
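The four types above boil down to two independent flags: ephemeral and sequential. A standalone sketch of that structure (it mirrors, but does not use, ZooKeeper's org.apache.zookeeper.CreateMode; the enum here is ours):

```java
// The four znode types are combinations of two independent flags:
// ephemeral  -> deleted automatically when the creating session ends
// sequential -> the server appends a monotonically increasing counter
enum ZnodeType {
    PERSISTENT(false, false),
    PERSISTENT_SEQUENTIAL(false, true),
    EPHEMERAL(true, false),
    EPHEMERAL_SEQUENTIAL(true, true);

    final boolean ephemeral;
    final boolean sequential;

    ZnodeType(boolean ephemeral, boolean sequential) {
        this.ephemeral = ephemeral;
        this.sequential = sequential;
    }

    public static void main(String[] args) {
        for (ZnodeType t : values()) {
            System.out.println(t + ": ephemeral=" + t.ephemeral
                    + ", sequential=" + t.sequential);
        }
    }
}
```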

3. Enter the conf directory, rename zoo_sample.cfg to zoo.cfg, and edit its contents

ZooKeeper Node Watches

A client can set a watch on a znode. When the znode changes, the watch is triggered and the client receives a data packet saying the znode has changed.
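An important detail: a ZooKeeper watch is one-shot; once it fires it is removed and must be re-registered, which is why the client code in this article calls getData/getChildren again inside the watch callback. A tiny in-memory model of that semantics (OneShotWatches is our illustrative name, not a ZooKeeper class):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal model of ZooKeeper's one-shot watch semantics: a watch fires
// at most once and is discarded when triggered.
public class OneShotWatches {
    private final Map<String, List<Runnable>> watches = new HashMap<>();

    void watch(String path, Runnable callback) {
        watches.computeIfAbsent(path, k -> new ArrayList<>()).add(callback);
    }

    void nodeChanged(String path) {
        List<Runnable> fired = watches.remove(path); // one-shot: removed here
        if (fired != null) fired.forEach(Runnable::run);
    }

    public static void main(String[] args) {
        OneShotWatches zk = new OneShotWatches();
        int[] count = {0};
        zk.watch("/nodes1", () -> count[0]++);
        zk.nodeChanged("/nodes1"); // triggers the watch
        zk.nodeChanged("/nodes1"); // watch already consumed: nothing fires
        System.out.println("callback ran " + count[0] + " time(s)");
    }
}
```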

 

ZooKeeper Common Commands

ZooKeeper's common commands are quite simple, just a handful. Let's walk through them to deepen our understanding of the ZooKeeper architecture.

 

(1) ls: list nodes (paths)

 

[zk: 192.168.2.129:2181(CONNECTED) 1] ls /

[dubbo, zookeeper]

[zk: 192.168.2.129:2181(CONNECTED) 2] ls /zookeeper

[quota]

[zk: 192.168.2.129:2181(CONNECTED) 3] ls /dubbo

[com.alibaba.dubbo.monitor.MonitorService,
com.mcweb.api.service.IUserService]

[zk: 192.168.2.129:2181(CONNECTED) 4]

 

(2) create: create a node under a path in the namespace

 

[zk: 192.168.2.129:2181(CONNECTED) 7] create /nodes1 “test node”

Created /nodes1

[zk: 192.168.2.129:2181(CONNECTED) 8] ls /

[dubbo, nodes1, zookeeper]

 

(3) get: read the data of a node in the namespace

 

[zk: 192.168.2.129:2181(CONNECTED) 9] get /nodes1

test node

cZxid = 0x227

ctime = Tue Sep 27 04:01:49 CST 2016

mZxid = 0x227

mtime = Tue Sep 27 04:01:49 CST 2016

pZxid = 0x227

cversion = 0

dataVersion = 0

aclVersion = 0

ephemeralOwner = 0x0

dataLength = 9

numChildren = 0

[zk: 192.168.2.129:2181(CONNECTED) 10]

 

(4) set: set the data of a node in the namespace

 

[zk: 192.168.2.129:2181(CONNECTED) 10] set /nodes1 “nodes1 data have
been setted”

cZxid = 0x227

ctime = Tue Sep 27 04:01:49 CST 2016

mZxid = 0x228

mtime = Tue Sep 27 04:07:04 CST 2016

pZxid = 0x227

cversion = 0

dataVersion = 1

aclVersion = 0

ephemeralOwner = 0x0

dataLength = 28

numChildren = 0

[zk: 192.168.2.129:2181(CONNECTED) 11] get /nodes1

nodes1 data have been setted

cZxid = 0x227

ctime = Tue Sep 27 04:01:49 CST 2016

mZxid = 0x228

mtime = Tue Sep 27 04:07:04 CST 2016

pZxid = 0x227

cversion = 0

dataVersion = 1

aclVersion = 0

ephemeralOwner = 0x0

dataLength = 28

numChildren = 0

[zk: 192.168.2.129:2181(CONNECTED) 12]

 

(5) delete: delete a node from the namespace

 

[zk: 192.168.2.129:2181(CONNECTED) 13] delete /nodes1

[zk: 192.168.2.129:2181(CONNECTED) 14] get /nodes1

Node does not exist: /nodes1

[zk: 192.168.2.129:2181(CONNECTED) 15]

 

Node data can also be removed with the rmr command. That covers the common commands; the rest can be viewed with help. Next we look at using the ZooKeeper Java client.

dataDir=/usr/zookeeper-3.4.10/data

ZooKeeper Client Development

clientPort=2181

Dependency jars

 

initLimit=10

Test code

public class ZKTest {

 private ZooKeeper zk = null;

 @Before
 public void init() throws Exception {
  zk = new ZooKeeper("192.168.2.129:2181", 2000, new Watcher() {
   /**
    * Callback invoked when a watched event fires
    */
   @Override
   public void process(WatchedEvent event) {
    if (event.getType() == EventType.None) {
     System.out.println("Event:null");
     return;
    }
    System.out.println("EventType:" + event.getType());
    System.out.println("Path" + event.getPath());

    try {
     zk.getData("/nodes1", true, null);
     zk.getChildren("/nodes1", true);

    } catch (KeeperException | InterruptedException e) {

     e.printStackTrace();
    }
   }
  });
 }

 /**
  * Register data with the zookeeper service (cluster): add a znode
  * @throws Exception
  */

 @Test
 public void testCreateZnode() throws Exception {
  zk.create("/nodes1", "nodes1".getBytes("utf-8"), 
    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

  // Within one parent node, the sequential counter increases monotonically
  zk.create("/nodes1/testNode1", "/nodes1/testNode1".getBytes("utf-8"), 
    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
  zk.create("/nodes1/testNode2", "/nodes1/testNode2".getBytes("utf-8"), 
    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);

  // Under a different parent node, the sequence numbering starts over
  zk.create("/nodes2", "nodes2".getBytes("utf-8"), 
    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
  zk.create("/nodes2/testNode1", "/nodes2/testNode1".getBytes("utf-8"),
    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);

  zk.create("/nodes3", "/nodes3".getBytes("utf-8"), 
    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

  zk.close();
 }

 /**
  * Delete a znode from zookeeper
  * @throws Exception
  */
 @Test
 public void testDeleteZnode() throws Exception {

  // Arg 1: path of the node to delete. Arg 2: expected version; -1 matches any version
  // Only nodes without children can be deleted
  zk.delete("/nodes3", -1);
  Stat exists = zk.exists("/nodes3", false);
  System.out.println(exists);

 }

 @Test
 public void testUpdateZnode() throws Exception {
  byte[] data = zk.getData("/nodes1", false, null);
  System.out.println(new String(data, "utf-8"));

  zk.setData("/nodes1", "/nodes1 data changed".getBytes("utf-8"), -1);

  data = zk.getData("/nodes1", false, null);
  System.out.println(new String(data, "utf-8"));

 }

 /**
  * Get the child nodes
  * @throws Exception
  */
 @Test
 public void testGetChildren() throws Exception {
  List<String> children = zk.getChildren("/nodes1", false);
  for (String child : children) {
   System.out.println(child);
  }
 }

 /**
  * zk watch mechanism:
  * the callback is defined when the zk object is initialized; operations on a znode can register a watch
  * when a watched event occurs on the znode, the zk client receives an event notification from zookeeper
  * the zk client then invokes the callback we defined up front
  * @throws Exception
  * 
  */
 @Test
 public void testWatch() throws Exception {
  // watch while reading the data of /nodes1
  // the second argument, true, registers the watch
  byte[] data = zk.getData("/nodes1", true, null);

  // watch while listing the children of /nodes1
  List<String> children = zk.getChildren("/nodes1", true);
  Thread.sleep(Long.MAX_VALUE);
 }


 /**
  * Upload a configuration file to zookeeper for central management
  * @throws Exception
  */
 @Test
 public void testUploadConfigFileToZookeeper() throws Exception{
  String schema_xml = FileUtils.readFileToString(new File("c:/web.xml"));
  zk.create("/conf", null, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
  zk.create("/conf/web.xml", schema_xml.getBytes(), 
    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
  zk.close();
 }
}

 

Note that when creating SEQUENTIAL nodes, zk automatically appends a sequence number, so you cannot refer to such a node later by the path used at creation time; you must include the sequence number, e.g.:

 

[zk: localhost:2181(CONNECTED) 50] ls /    

[dubbo, nodes2, conf, nodes1, zookeeper]

[zk: localhost:2181(CONNECTED) 51] ls /nodes1

[testNode20000000001, testNode10000000000]

[zk: localhost:2181(CONNECTED) 52] ls /nodes2

[testNode10000000000]

[zk: localhost:2181(CONNECTED) 53] get /nodes1/testNode1

Node does not exist: /nodes1/testNode1

[zk: localhost:2181(CONNECTED) 54] get /nodes1/testNode10000000000

/nodes1/testNode1

cZxid = 0x26a

…….

[zk: localhost:2181(CONNECTED) 55] delete /nodes1/testNode1      

Node does not exist: /nodes1/testNode1

[zk: localhost:2181(CONNECTED) 56]
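The name mangling shown in the transcript above can be reproduced: a sequential znode gets the parent's next counter value appended to the requested path as a zero-padded 10-digit suffix. A sketch (sequentialName is our illustrative helper, not a ZooKeeper API):

```java
// Sequential znodes: the server appends the parent's next counter value
// as a zero-padded 10-digit suffix to the requested path.
public class SeqName {
    static String sequentialName(String requestedPath, int counter) {
        return requestedPath + String.format("%010d", counter);
    }

    public static void main(String[] args) {
        // First two children created under /nodes1 in the test above:
        System.out.println(sequentialName("/nodes1/testNode1", 0));
        System.out.println(sequentialName("/nodes1/testNode2", 1));
    }
}
```

This matches the ls output earlier: testNode10000000000 and testNode20000000001.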

 

syncLimit=5

Building and Testing a Highly Available Zookeeper Cluster

tickTime=2000

Cluster server information

Zookeeper clusters are designed for high performance, high availability, and strictly ordered access. As long as a majority of the nodes are healthy, the cluster as a whole remains available. Because of this property, the number of nodes in a ZK cluster should be odd (2n+1: e.g. 3, 5, or 7 nodes).

 

Server 1: 192.168.2.127  ports: 2181, 2881, 3881

Server 2: 192.168.2.128  ports: 2182, 2882, 3882

Server 3: 192.168.2.130  ports: 2183, 2883, 3883

 

端口表达:

 

218x: the port clients (applications) use to connect to the Zookeeper server; Zookeeper listens on this port

288x: the port this server uses to exchange information with the cluster Leader

388x: the election port; if the cluster Leader goes down, a new Leader must be elected, and this is the port the servers use to talk to each other during the election
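The 218x/288x/388x convention used in this walkthrough can be written down as three tiny helpers (the Ports class and its method names are illustrative, not a ZooKeeper API):

```java
// Port convention of this walkthrough: node x uses client port 218x,
// Leader-communication port 288x, and election port 388x.
public class Ports {
    static int clientPort(int node)   { return 2180 + node; } // 2181..2183
    static int peerPort(int node)     { return 2880 + node; } // 2881..2883
    static int electionPort(int node) { return 3880 + node; } // 3881..3883

    public static void main(String[] args) {
        for (int node = 1; node <= 3; node++) {
            System.out.println("server." + node
                    + ": client=" + clientPort(node)
                    + " peer=" + peerPort(node)
                    + " election=" + electionPort(node));
        }
    }
}
```

Distinct client ports are only needed here because the ports must not collide across the three firewall configurations; on separate hosts all nodes could use 2181.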

 

server.1=master:2888:3888   (ZooKeeper resolves the hostname to an IP)

Set up multi-session input

Before starting, configure Xshell or SecureCRT for multi-session input, i.e. controlling several session windows at once; it is very convenient. Open the 127, 128, and 130 session windows.

 

Xshell: View -> Compose bar. A compose box appears at the bottom of Xshell; click the green icon at the bottom-left of the compose bar and select [All sessions].

 

SecureCRT: right-click the chat window below any session window and check "Send chat to all tabs".

 

Some of the steps below can then be typed once in the compose bar to control all three servers at the same time.

 

server.2=slave1:2888:3888

Download or upload the file

Download or upload zookeeper-3.4.9.tar.gz to the /usr/ftpuser directory on all three servers

server.3=slave2:2888:3888

Extract

mkdir /usr/local/zookeeper

tar -zxvf zookeeper-3.4.9.tar.gz -C /usr/local/zookeeper

cd /usr/local/zookeeper

 

 

Rename the zookeeper directory after each node number

#127

[[email protected]
zookeeper]# mv zookeeper-3.4.9/ node-127

#128

[[email protected]
zookeeper]# mv zookeeper-3.4.9/ node-128

#130

[[email protected]
zookeeper]# mv zookeeper-3.4.9/ node-130

 

肆.在防火墙中扩充端口项vi /etc/sysconfig/iptables

Create the data and log directories

Create data and logs directories under each zookeeper node directory /usr/local/zookeeper/node-*

 

cd node-*

mkdir data

mkdir logs

 

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2888 -j ACCEPT

Copy and edit the configuration file

Copy the zoo_sample.cfg file under zookeeper/node-*/conf and name the copy zoo.cfg

 

cd conf

cp zoo_sample.cfg zoo.cfg

 

#127

[[email protected]
conf]# vi zoo.cfg

#dataDir=/tmp/zookeeper

dataDir=/usr/local/zookeeper/node-127/data

dataLogDir=/usr/local/zookeeper/node-127/logs

# the port at which the clients will connect

clientPort=2181

server.1=192.168.2.127:2881:3881

server.2=192.168.2.128:2882:3882

server.3=192.168.2.130:2883:3883

 

 

#128

[[email protected]
conf]# vi zoo.cfg

#dataDir=/tmp/zookeeper

dataDir=/usr/local/zookeeper/node-128/data

dataLogDir=/usr/local/zookeeper/node-128/logs

# the port at which the clients will connect

clientPort=2182

server.1=192.168.2.127:2881:3881

server.2=192.168.2.128:2882:3882

server.3=192.168.2.130:2883:3883

 

#130

[[email protected]
conf]# vi zoo.cfg

#dataDir=/tmp/zookeeper

dataDir=/usr/local/zookeeper/node-130/data

dataLogDir=/usr/local/zookeeper/node-130/logs

# the port at which the clients will connect

clientPort=2183

server.1=192.168.2.127:2881:3881

server.2=192.168.2.128:2882:3882

server.3=192.168.2.130:2883:3883

 

-A INPUT -m state --state NEW -m tcp -p tcp --dport 3888 -j ACCEPT

Create the myid file

Go to /usr/local/zookeeper/node-*/data and edit the myid file, entering the number that matches each machine's IP: on node-127 the content is 1, on node-128 it is 2, and on node-130 it is 3.

 

#127

[[email protected]
data]# vi /usr/local/zookeeper/node-127/data/myid ## value: 1

#128

[[email protected]
data]# vi /usr/local/zookeeper/node-128/data/myid ## value: 2

#130

[[email protected]
data]# vi /usr/local/zookeeper/node-130/data/myid ## value: 3

 

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2181 -j ACCEPT

Open the ports

 

Open the required ports 218x, 288x, and 388x in the firewall. Edit vi
/etc/sysconfig/iptables and add the following 3 lines on each node

 

#127

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2181 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2881 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 3881 -j ACCEPT

 

#128

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2182 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2882 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 3882 -j ACCEPT

 

#130

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2183 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2883 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 3883 -j ACCEPT

 

Restart the firewall

 

service iptables restart

 

Restart the iptables service: /bin/systemctl restart iptables.service or service
iptables restart

Start the zookeeper cluster

cd node*/bin

./zkServer.sh start

 

 

Check the zookeeper cluster status

./zkServer.sh status

 

As expected, of the three nodes one shows Mode: leader and the other two show Mode: follower

If the status command reports an error, see the troubleshooting reference:

 

5. Run: bin/zkServer.sh start conf/zoo.cfg on each node to start ZooKeeper

Point the dubbo admin console at the cluster

The dubbo admin console must be installed first; we use the one installed earlier (192.168.2.129). Edit its configuration file:

 

[[email protected]
~]#

vi /usr/local/apache-tomcat-7.0.70/webapps/ROOT/WEB-INF/dubbo.properties

 

dubbo.registry.address=zookeeper://192.168.2.127:2181?backup=192.168.2.128:2182,192.168.2.130:2183

dubbo.admin.root.password=dubbo129

dubbo.admin.guest.password=dubbo129

 

Then run tail -300f zookeeper.out to inspect the output; if there are no
errors, startup succeeded.

Start the dubbo admin console

[[email protected]
~]# cd /usr/local/apache-tomcat-7.0.70/bin/

[[email protected]
bin]# ./startup.sh

 

Next, verify that the cluster was built successfully:

Connecting an application to the zookeeper cluster and testing high availability

 

In "Building a distributed project and service modules with dubbo", we created a service consumer and a service provider; now we register the service provider with the zookeeper cluster.

Edit mcweb\mcweb-logic\src\main\resources\spring\dubbo-provider.xml

 

<!-- zookeeper registry address -->

<dubbo:registry protocol="zookeeper"

address="192.168.2.127:2181,192.168.2.128:2182,192.168.2.130:2183" />

 

开头mcweb-logic。在dubbo的“首页 > 服务治理 >
服务”中得以看出曾经注册到 zookeeper
注册中央的劳动的相关事态。看看mcweb-logic的日志:

 

(log screenshot omitted)

 

 

As the log shows, the application connected to node 128 of the zookeeper cluster. Now stop zookeeper on 128 and watch the mcweb-logic log:

 

(log screenshot omitted)

 

足见,zookeeper集群的景况发生了扭转,当12八节点截至后,应用重新连接到了zookeeper集群中的1二七节点。以往集群中还有四个节点可用,集群还能够对外可用,当再把1二七节点结束后,集群对外就不可用:

 

(log screenshot omitted)

 

As long as a majority of the nodes are healthy, the zookeeper cluster as a whole remains available; this is the foundation of zookeeper's high availability.
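The failover sequence just described follows directly from the majority rule. A one-line availability check (the Availability class is our illustrative sketch):

```java
// A ZooKeeper ensemble serves clients only while a strict majority of
// its nodes is up.
public class Availability {
    static boolean available(int upNodes, int totalNodes) {
        return upNodes > totalNodes / 2;
    }

    public static void main(String[] args) {
        // The 3-node cluster from the test (127, 128, 130):
        System.out.println(available(3, 3)); // all up          -> true
        System.out.println(available(2, 3)); // 128 stopped     -> true
        System.out.println(available(1, 3)); // 127 stopped too -> false
    }
}
```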


Run: bin/zkCli.sh -server master:2181 (pick either master or slave1)

At the ZooKeeper client prompt, run ls /, which outputs
[zookeeper]. Now create a node with create /test testValue, then read it
back with get /test, which prints:

testValue

cZxid = 0x200000005

ctime = Fri Mar 10 15:07:23 PST 2017

mZxid = 0x200000005

mtime = Fri Mar 10 15:07:23 PST 2017

pZxid = 0x200000005

cversion = 0

dataVersion = 0

aclVersion = 0

ephemeralOwner = 0x0

dataLength = 9

numChildren = 0

 

Switch to another node's IP and run bin/zkCli.sh -server master:2181 again,
then ls /; you can see the test node that was just created on the other machine.

Check the zookeeper process status

  bin/zkServer.sh  status

     Using config: /opt/soft/zookeeper-3.4.8/bin/../conf/zoo.cfg

     Mode: follower   // role

  jps

     3220 Jps

     2813 QuorumPeerMain   // zookeeper process name

If all of the above checks pass, the ZooKeeper cluster was built successfully!
