Introduction

Keepalived is a piece of software for HA clusters (High Availability Clusters), used to guard against single points of failure.

Keepalived uses VRRP (Virtual Router Redundancy Protocol) to implement server hot standby in software. Typically, two Linux servers form a hot-standby group (master-backup). At any moment only one server in the group, the master, provides service, and the master holds a shared virtual IP address (VIP); the VIP exists only on the master and is the address clients reach. If keepalived detects that the master has gone down or its service has failed, the backup server automatically takes over the VIP and becomes the new master, and the failed node is removed from the hot-standby group. When the old master recovers, it automatically rejoins the group and, by default, preempts to become master again. This provides failover.

High-availability clusters are implemented mainly in two modes: master/backup and master/master.
Master/backup: one or more VIPs; one host provides service while the other stands by. When the primary fails, the backup takes over the VIP(s) and continues the service.
Master/master: two or more VIPs; both hosts serve traffic, either for the same service or for different ones. This mode improves hardware utilization and also provides a degree of load balancing.

Keepalived consists of three main parts: the core process, the checkers, and the vrrp stack. The core is responsible for starting and maintaining the main process and for loading and parsing the global configuration file; the checkers perform health checks (many check methods are built in) and drive the ipvs wrapper, which maintains the kernel's IPVS rules; the vrrp stack implements the VRRP protocol.
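This module split is visible in the process list of a running keepalived: a parent (core) process plus child processes for vrrp and the checkers. A quick way to look, assuming keepalived is already running:

```shell
# The bracket trick keeps grep from matching its own process.
# Typically three processes appear: the core parent and two
# children handling vrrp and health checking.
ps -ef | grep '[k]eepalived'
```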

一、Introduction to keepalived

一、High-availability clusters

1  Overview

Configuration file

Keepalived's default configuration file is /etc/keepalived/keepalived.conf, and its main program is /usr/sbin/keepalived. The configuration file consists of three main parts, each containing its own sub-sections; its layout and configuration are described below.

1、The basic function of keepalived is to implement high availability for LVS on a Linux system via the VRRP protocol.

(一)The solution for improving system availability: redundancy

  • Working modes

    • active/passive: master/backup
    • active/active: dual master
  • Notification via heartbeat

    • active –> HEARTBEAT –> passive
    • active <–> HEARTBEAT <–> active
  • Failure handling

    • failover: when the master node of a resource fails, the resource is moved to another node
    • failback: after the failed master node is repaired and comes back online, resources previously moved to other nodes are switched back to it

This article mainly covers keepalived configuration.

1、GLOBAL CONFIGURATION: the global configuration section, including Global definitions and Static routes/addresses/rules

2、VRRP (Virtual Router Redundancy Protocol) can present multiple gateways as a single virtual gateway: a group of routers shares one VIP, and the corresponding MAC address is virtualized as well.

(二)HA Cluster implementation options

  • ais: full-featured, complex HA clusters based on the Application Interface Specification
    RHCS: Red Hat Cluster Suite
    heartbeat
    corosync

  • VRRP-based implementations
    keepalived

2  Installing and configuring keepalived

Global definitions: defines global settings. Common parameters and an example:

global_defs {
    notification_email {  #addresses that receive alert mail
        root@localhost
    }
    notification_email_from keepalived@localhost  #sender address of alert mail
    smtp_server 127.0.0.1  #address of the mail server
    smtp_connect_timeout 30  #connection timeout for the mail server
    router_id node1  #identifier of this router
    vrrp_mcast_group4 224.0.100.19  #multicast group for vrrp; all nodes of one HA cluster must use the same group to receive each other's vrrp advertisements
    vrrp_strict  #enforce strict vrrp compliance; keepalived refuses to start if: 1. there is no VIP address, 2. unicast peers are configured, 3. an IPv6 address is used with VRRP version 2
}
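Before (re)starting the service it is worth syntax-checking a file like the one above. Recent keepalived releases provide a config-test flag for this; note it may be unavailable on older versions such as the 1.2.x series:

```shell
# -t / --config-test parses the configuration and exits non-zero on errors
keepalived -t -f /etc/keepalived/keepalived.conf && echo "config OK"
```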

3、Through VRRP, keepalived implements failover and avoids single points of failure: when the master node's service fails, a backup node takes over and continues to provide the service. Once the failed node recovers, it automatically rejoins the group.

二、Introduction to KeepAlived

On CentOS 6.4 and later, keepalived is included in the Base repository.

Static routes/addresses/rules: configures static addresses, routes, or rules that keepalived will not remove via vrrp; rarely used.

4、The VRRP state machine

(一)VRRP (Virtual Router Redundancy Protocol) terminology

  • Virtual router: several physical routers presenting one IP address to the outside, behaving like a single router

    • Virtual router identifier: VRID (0-255), uniquely identifies a virtual router
    • VIP: Virtual IP
    • VMAC: Virtual MAC (00-00-5e-00-01-VRID)
  • Physical routers
    master: primary device
    backup: standby device
    priority: priority

. Program environment:

2、VRRPD CONFIGURATION: the vrrp configuration section

(二)How KeepAlived works

  • Advertisements: heartbeat, priority, etc.; sent periodically

  • Operating modes: preemptive, non-preemptive

  • Security authentication:

    • none
    • simple string authentication: pre-shared key
    • MD5
  • Working models:

    • master/backup: a single virtual router
    • master/master: master/backup (virtual router 1) plus backup/master (virtual router 2)

. Main configuration file: /etc/keepalived/keepalived.conf

VRRP instance(s): defines vrrp virtual router instances.

vrrp_instance VIP_1 {    #define a vrrp instance; VIP_1 is a user-chosen instance name
    state MASTER|BACKUP    #initial role of this node in the vrrp group
    interface eno16777736    #physical interface to bind
    virtual_router_id 14    #unique id distinguishing vrrp instances, range 0-255
    priority 100    #priority, range 1-254
    advert_int 1    #interval between vrrp advertisements
    nopreempt|preempt    #non-preemptive or preemptive working mode
    preempt_delay 300    #in preemptive mode, delay before a newly online node triggers a new election
    authentication {    #authentication method and password for this vrrp instance
        auth_type PASS    #simple password authentication
        auth_pass 571f97b2    #password string, at most 8 characters
    }
    virtual_ipaddress {    #virtual ip addresses added on the bound physical interface
        #<IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <LABEL>
        10.1.0.91/16 dev eno16777736
    }
    track_interface {    #network interfaces to monitor; if one fails, this vrrp instance switches to FAULT state
      eth0
      eth1
      ...
     }
    track_script {    #invoke scripts defined in vrrp_script and adjust state based on their results
       <SCRIPT_NAME>
       <SCRIPT_NAME> weight <-254..254>
    }
    notify_master <STRING>|<QUOTED-STRING>    #script triggered when this node becomes master
    notify_backup <STRING>|<QUOTED-STRING>    #script triggered when this node becomes backup
    notify_fault <STRING>|<QUOTED-STRING>    #script triggered when this node enters the fault state
    notify_stop <STRING>|<QUOTED-STRING>     #script triggered when vrrp on this node stops
}
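After starting the service, the address listed in virtual_ipaddress should appear on the MASTER's bound interface and on no other node; it can be verified with iproute2 (interface name and VIP taken from the example above):

```shell
systemctl start keepalived
# The VIP is present only on the current MASTER
ip addr show dev eno16777736 | grep '10.1.0.91'
```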

5、Installing the keepalived service: in a CentOS 6.4 based test environment, keepalived 1.2.7 can be installed directly from its rpm package.

(三)What KeepAlived does

  • Implements address floating via the VRRP protocol

  • Generates ipvs rules for the node holding the VIP (defined in advance in the configuration file)

  • Performs health checks on each RS of the ipvs cluster

  • Through its script-call interface, runs scripts whose results influence cluster behavior, which also makes it useful for services such as nginx and haproxy

. Main program: /usr/sbin/keepalived

VRRP script(s): defines periodically executed scripts that check the state of a service or IP.

vrrp_script <SCRIPT_NAME> {    #periodically executed script; vrrp instances adjust their priority based on its exit code
    script <STRING>|<QUOTED-STRING>    #path of the script to execute
    interval INT     #execution interval, default 1s
    timeout <INTEGER>    #execution timeout; a script that exceeds it is considered failed
    rise <INTEGER>        #number of consecutive successes required before the state is considered OK
    fall <INTEGER>        #number of consecutive failures required before the state is considered failed
}
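As a concrete illustration, here is a minimal health-check script of the kind a vrrp_script block points at; the script name, probe URL, and timeout are hypothetical. keepalived treats exit status 0 as success; any other status counts toward `fall` and, with a negative `weight`, lowers the instance priority.

```shell
#!/bin/bash
# chk_http.sh - hypothetical vrrp_script target (illustration only).
# Exit 0 = healthy; non-zero = failed, so keepalived applies "weight".
chk_http() {
    # -f: treat HTTP errors as failure; --max-time bounds the probe
    # so it finishes well inside the vrrp_script "timeout".
    curl -fsS --max-time 2 -o /dev/null "$1"
}

# When invoked with a URL argument, probe it and propagate the result.
if [ -n "$1" ]; then
    chk_http "$1"
fi
```

It would then be referenced as, e.g., `script "/etc/keepalived/chk_http.sh http://127.0.0.1/"` inside a vrrp_script block (path and URL are assumptions, not from the source).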

6、The main keepalived configuration file: /etc/keepalived/keepalived.conf

三、Configuring KeepAlived

. Unit file: /usr/lib/systemd/system/keepalived.service

3、LVS CONFIGURATION: the LVS configuration section

The keepalived service script (SysV init): /etc/rc.d/init.d/keepalived

(一)HA Cluster preparation:

  • All nodes must have synchronized clocks: ntp service (CentOS 6), chrony (CentOS 7)

    // ntp/chrony cannot step a clock that is too far off, so synchronize once with ntpdate first, then start the service
    ntpdate ntp_server_ip
    // enable chronyd (CentOS 7)
    vim /etc/chrony.conf
    server 172.18.0.1 iburst
    systemctl enable chronyd
    systemctl start chronyd
    // enable ntp (CentOS 6)
    vim /etc/ntp.conf
    server 172.18.0.1 iburst
    chkconfig ntpd on
    service ntpd start
    
  • Make sure iptables and selinux do not get in the way

  • Nodes should be able to reach each other by hostname (not strictly required by KA); using the /etc/hosts file is recommended

  • root on each node should be able to reach the others over key-based ssh (not strictly required by KA)

    ssh-keygen
    ssh-copy-id destination_ip
    

. Unit file environment configuration: /etc/sysconfig/keepalived

Virtual server(s): defines virtual servers; a virtual server can be identified by ip port, fwmark, or a virtual server group.

virtual_server IP port | virtual_server fwmark <int> | virtual_server group string
{
    delay_loop <INT>    #interval between health checks
    lb_algo rr|wrr|lc|wlc|lblc|sh|dh    #lvs scheduling algorithm
    lb_kind NAT|DR|TUN    #cluster type
    persistence_timeout <INT>    #persistent connection timeout
    protocol TCP|UDP|SCTP    #service protocol
    sorry_server <IPADDR> <PORT>    #fallback server used when all RS are down
    real_server <IPADDR> <PORT> {
        weight <INT>    #weight, default 1
        notify_up <STRING>|<QUOTED-STRING>    #script run when the health check succeeds
        notify_down <STRING>|<QUOTED-STRING>    #script run when the health check fails
        HTTP_GET|SSL_GET {    #application-layer check
            url {
                path <URL_PATH>    #URL to monitor
                status_code <INT>    #response code considered healthy
                digest <STRING>    #checksum of the healthy response content
            }
            nb_get_retry <INT>    #number of retries
            delay_before_retry <INT>    #delay before each retry
            connect_ip <IP ADDRESS>    #IP to probe; defaults to the real server's address
            connect_port <PORT>    #port to probe; defaults to the real server's port
            bindto <IP ADDRESS>    #source ip used for the probe
            bind_port <PORT>    #source port used for the probe
            connect_timeout <INTEGER>    #connection timeout
        }
        TCP_CHECK {
            connect_ip <IP ADDRESS>    #IP to probe; defaults to the real server's address
            connect_port <PORT>    #port to probe; defaults to the real server's port
            bindto <IP ADDRESS>    #source ip used for the probe
            bind_port <PORT>    #source port used for the probe
            connect_timeout <INTEGER>    #connection timeout
        }
    }
}
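Once keepalived has parsed an LVS section like this, it programs the corresponding rules into the kernel; they can be inspected with ipvsadm (the ipvsadm package must be installed):

```shell
ipvsadm -Ln          # list virtual services and their real servers, numeric
ipvsadm -Ln --stats  # same, with traffic counters per service and RS
```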

二、The keepalived configuration file

(二)KeepAlived program environment

  • Main configuration file: /etc/keepalived/keepalived.conf

  • Main program: /usr/sbin/keepalived

  • Unit file: /usr/lib/systemd/system/keepalived.service

  • Unit file environment configuration: /etc/sysconfig/keepalived

3  Components of the configuration file

 Configuration example: master/master mode

#first host: VIP_1 and VIP_2 act as mutual master/backup
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node1
    vrrp_mcast_group4 224.0.100.19
}

vrrp_instance VIP_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 14
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        10.1.0.91/16 dev eno16777736
    }
}

vrrp_instance VIP_2 {
    state BACKUP
    interface eno16777736
    virtual_router_id 15
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 578f07b2
    }
    virtual_ipaddress {
        10.1.0.92/16 dev eno16777736
    }
}
#second host: VIP_1 and VIP_2 act as mutual master/backup
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node2
    vrrp_mcast_group4 224.0.100.19
}

vrrp_instance VIP_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 16
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        10.1.0.91/16 dev eno16777736
    }
}

vrrp_instance VIP_2 {
    state MASTER
    interface eno16777736
    virtual_router_id 17
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 578f07b2
    }
    virtual_ipaddress {
        10.1.0.92/16 dev eno16777736
    }
}
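With both nodes up, each should hold exactly one of the two VIPs; a quick way to verify the master/master split (interface name, VIPs, and multicast group taken from the example above):

```shell
# On node1 expect 10.1.0.91, on node2 expect 10.1.0.92
ip addr show dev eno16777736
# Watch the VRRP advertisements of both instances on the shared group
tcpdump -i eno16777736 -nn host 224.0.100.19
```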

 

1、全局配置段

(三)KeepAlived的配置文件结构

  • GLOBAL CONFIGURATION:全局设置
    Global definitions
    Static routes/addresses

  • VRRPD CONFIGURATION:VRRP设置
    VRRP synchronization group(s):vrrp同步组
    VRRP instance(s):即一个vrrp虚拟路由器

  • LVS CONFIGURATION:LVS设置
    Virtual server group(s)
    Virtual server(s):ipvs集群的vs和rs

2.1  组配置文件

GLOBAL CONFIGURATION

(四)Configuring a virtual router

  • Syntax:

    vrrp_instance <STRING> {
    ....
    }
    
  • Instance-specific parameters:

    • state MASTER | BACKUP
      Initial state of this node in this virtual router; only one node may be MASTER, all the others should be BACKUP
    • interface IFACE_NAME
      Physical interface bound to this virtual router
    • virtual_router_id VRID
      Unique identifier of this virtual router, range 0-255
    • priority 100
      Priority of this physical node within the virtual router; range 1-254
    • advert_int 1
      Interval between vrrp advertisements, default 1s
    • authentication: authentication settings

    authentication {
    auth_type AH|PASS
    auth_pass <PASSWORD> only the first 8 characters are used
    }
    
    • virtual_ipaddress: virtual IPs

    virtual_ipaddress { 
    <IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <LABEL>
    }
    
    • track_interface: interfaces to monitor; if one fails, the node switches to FAULT state and the address moves away

    track_interface {
    eth0
    eth1
    …
    }
    
    • nopreempt: selects non-preemptive mode
    • preempt_delay
      300: preemptive mode (the default); delay before a newly online node triggers a new election
    • Notification scripts:
      notify_master <STRING> | <QUOTED-STRING>:
      script triggered when this node becomes master
      notify_backup <STRING> | <QUOTED-STRING>:
      script triggered when this node becomes backup
      notify_fault <STRING> | <QUOTED-STRING>:
      script triggered when this node enters the "fault" state
      notify <STRING> | <QUOTED-STRING>:
      generic notification hook; a single script can handle all three state transitions
  • Experiment 1: a master/backup virtual router

    • Environment:
      Physical router 1: ip: 192.168.136.230, hostname: node1, MASTER
      Physical router 2: ip: 192.168.136.130, hostname: node2, BACKUP
      VIP: 192.168.136.100

    // configure physical router 1
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node1@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node1                  // router hostname used in vrrp
       vrrp_mcast_group4 224.0.0.58     // multicast group address
    }
    }
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens37
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6          // generate an 8-hex-digit password with: openssl rand -hex 4
        }
        virtual_ipaddress {
            192.168.136.100/24
        }
    }
    
    systemctl start keepalived
    
    // configure physical router 2
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node2@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node2
       vrrp_mcast_group4 224.0.0.58
    }
    
    vrrp_instance VI_1 {
        state BACKUP
        interface ens37
        virtual_router_id 51
        priority 90                    // BACKUP priority must be lower than the MASTER's
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6         // same password as node1
        }
        virtual_ipaddress {
           192.168.136.100/24
        }
    }
    
    systemctl start keepalived
    
    • Test
      The VIP now appears on node1's interface.

    Capture the multicast traffic with tcpdump -i ens37 -nn host 224.0.0.58, then stop keepalived on node1 with systemctl stop keepalived. node2 automatically takes over and begins advertising that it owns the virtual router's IP.

    The VIP has now been taken over by node2.

  • Experiment 2: keepalived logging

vim /etc/sysconfig/keepalived
KEEPALIVED_OPTIONS="-D -S 3"    // -D: verbose logging; -S 3: use syslog facility local3
vim /etc/rsyslog.conf 
local3.*               /var/log/keepalived.log    // log file destination
systemctl restart rsyslog
systemctl restart keepalived
tail -f  /var/log/keepalived.log
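Before restarting keepalived, it is easy to confirm that rsyslog routes facility local3 as intended by writing one test record by hand (the message text is arbitrary):

```shell
logger -p local3.info "keepalived log routing test"
tail -n 1 /var/log/keepalived.log   # should show the test record
```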

  • Experiment 3: a master/master virtual router pair that sends mail when a node's state changes

    • Environment:
      Physical router 1: ip: 192.168.136.230, hostname: node1
      Physical router 2: ip: 192.168.136.130, hostname: node2
      Virtual router 1: MASTER: node1, BACKUP: node2, VIP:
      192.168.136.100
      Virtual router 2: MASTER: node2, BACKUP: node1, VIP: 192.168.136.200

    // configure physical router 1 (MASTER of virtual router 1, BACKUP of virtual router 2)
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node1@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node1
       vrrp_mcast_group4 224.0.0.58
    }
    // settings for virtual router 1
    vrrp_instance VI_1 {
        state MASTER
        interface ens37
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6
        }
        virtual_ipaddress {
            192.168.136.100/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    // settings for virtual router 2
    vrrp_instance VI_2 {
        state BACKUP
        interface ens37
        virtual_router_id 61
        priority 80
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass a56c19be
        }
        virtual_ipaddress {
            192.168.136.200/24
       }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    systemctl restart keepalived
    
    // configure physical router 2 (BACKUP of virtual router 1, MASTER of virtual router 2)
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
       root@localhost
       }
       notification_email_from node2@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node2
       vrrp_mcast_group4 224.0.0.58
    }
    // settings for virtual router 1
    vrrp_instance VI_1 {
        state BACKUP
        interface ens37
        virtual_router_id 51
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6
        }
        virtual_ipaddress {
           192.168.136.100/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    // settings for virtual router 2
    vrrp_instance VI_2 {
        state MASTER
        interface ens37
        virtual_router_id 61
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass a56c19be
        }
        virtual_ipaddress {
            192.168.136.200/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    
    // create the notification script on both physical routers
    vim /etc/keepalived/notify.sh
    #! /bin/bash
    
    contact='root@localhost'
    notify() {
            mailsubject="$(hostname) to be $1, vip floating"
            mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
            echo "$mailbody" | mail -s "$mailsubject" $contact
    }
    
    case $1 in
    master)
            notify master
            ;;
    backup)
            notify backup
            ;;
    fault)
            notify fault
            ;;
    *)
            echo "Usage: $(basename $0) {master|backup|fault}"
            exit 1
            ;;
    esac
    chmod +x /etc/keepalived/notify.sh
    
    • Test
      Capturing the multicast traffic with tcpdump -i ens37 -nn host 224.0.0.58 shows node1 and node2 each advertising ownership of virtual router 1 (vrid 51) and virtual router 2 (vrid 61), respectively.

    Checking the interface addresses on node1 and node2 confirms this.

    Now disconnect node1 from the network.
    Virtual router 1's VIP is immediately taken over by node2's interface.

    Restore node1's network connection; matching mail notifications appear on both node1 and node2:
    node1 first reports a fault, then reports being switched to BACKUP, and after its network is restored reports becoming MASTER again;

    node2 reports switching to MASTER, and after node1's network is restored reports switching back to BACKUP.

There are three configuration sections.

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

(五)Keepalived's IPVS support

  • Syntax:

virtual_server {IP port | fwmark int}
{
    ...
    real_server {
        ...
    }
    ...
}
  • Common virtual_server parameters

    • delay_loop <INT>
      Interval between health checks of the back-end servers
    • lb_algo rr|wrr|lc|wlc|lblc|sh|dh
      Scheduling algorithm
    • lb_kind NAT|DR|TUN
      Cluster type
    • persistence_timeout <INT>
      Persistent connection timeout
    • protocol TCP
      Service protocol; only TCP is supported
    • sorry_server <IPADDR> <PORT>
      Fallback server address used when all RS have failed
  • Common real_server <IPADDR> <PORT> parameters

    • weight <INT>
      RS weight
    • notify_up <STRING>|<QUOTED-STRING>
      Script run when the RS comes up
    • notify_down <STRING>|<QUOTED-STRING>
      Script run when the RS goes down
    • HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { … }
      Health-check method for this host
  • HTTP_GET|SSL_GET: application-layer health checks

    HTTP_GET|SSL_GET {
    url {
    path <URL_PATH>               // URL to monitor
    status_code <INT>             // response code considered healthy
    digest <STRING>               // checksum of the healthy response content
    }
    connect_timeout <INTEGER>     // connection timeout
    nb_get_retry <INT>            // number of retries
    delay_before_retry <INT>      // delay before each retry
    connect_ip <IP ADDRESS>       // which of the RS's IP addresses to probe
    connect_port <PORT>           // which of the RS's ports to probe
    bindto <IP ADDRESS>           // source address used for the probe
    bind_port <PORT>              // source port used for the probe
    }
    
    
  • TCP_CHECK parameters

    • connect_ip <IP ADDRESS>
      Which of the RS's IP addresses to probe
    • connect_port <PORT>
      Which of the RS's ports to probe
    • bindto <IP ADDRESS>
      Source address used for the probe
    • bind_port <PORT>
      Source port used for the probe
    • connect_timeout <INTEGER>
      Connection timeout
  • Experiment 4: a master/backup IPVS cluster

    • Environment:
      LB1(master)/VS: IP: 192.168.136.230
      LB2(backup)/VS: IP: 192.168.136.130
      VIP: 192.168.136.100
      RS1: IP: 192.168.136.229
      RS2: IP: 192.168.136.129

    // keepalived configuration on LB1
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node1@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node1
       vrrp_mcast_group4 224.0.0.58
    }
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens37
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6
        }
        virtual_ipaddress {
            192.168.136.100/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    
    virtual_server 192.168.136.100 80 {
        delay_loop 3
        lb_algo wrr
        lb_kind DR
        protocol TCP
        sorry_server 127.0.0.1 80
        real_server 192.168.136.229 80 {
            weight 2
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
        real_server 192.168.136.129 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }
    
    // keepalived configuration on LB2
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node2@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node2
       vrrp_mcast_group4 224.0.0.58
    }
    
    vrrp_instance VI_1 {
        state BACKUP
        interface ens37
        virtual_router_id 51
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6
        }
        virtual_ipaddress {
           192.168.136.100/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    
    virtual_server 192.168.136.100 80 {
        delay_loop 3
        lb_algo wrr
        lb_kind DR
        protocol TCP
        sorry_server 127.0.0.1 80
        real_server 192.168.136.229 80 {
            weight 2
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
        real_server 192.168.136.129 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }
    
    // configure the sorry server on LB1 and LB2
    echo sorry on LB1 > /var/www/html/index.html     // on LB1
    echo sorry on LB2 > /var/www/html/index.html     // on LB2
    systemctl start httpd
    
    // configure the web service on RS1 and RS2
    echo RS1 homepage > /var/www/html/index.html     // on RS1
    echo RS2 homepage > /var/www/html/index.html     // on RS2
    systemctl start httpd
    
    // script: suppress ARP responses for the VIP on each RS and bind the VIP to the loopback interface
    vim lvs_dr_rs.sh
    #! /bin/bash
    vip='192.168.136.100'
    mask='255.255.255.255'
    dev=lo:1
    rpm -q httpd &> /dev/null || yum -y install httpd &>/dev/null
    service httpd start &> /dev/null && echo "The httpd Server is Ready!"
    
    case $1 in
    start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig $dev $vip netmask $mask broadcast $vip up
        echo "The RS Server is Ready!"
        ;;
    stop)
        ifconfig $dev down
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "The RS Server is Canceled!"
        ;;
    *)
        echo "Usage: $(basename $0) start|stop"
        exit 1
        ;;
    esac
    
    chmod +x lvs_dr_rs.sh
    bash lvs_dr_rs.sh start
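After running the script, the RS-side DR settings can be double-checked: the VIP should sit on lo:1 and the ARP sysctls should hold the values the script wrote:

```shell
ip addr show dev lo   # expect 192.168.136.100 with label lo:1
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
# expect arp_ignore = 1 and arp_announce = 2 while "start"ed
```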
    
    // start the KeepAlived service on LB1 and LB2, then test
    systemctl start keepalived
    

    Accessing the web service on the VIP (192.168.136.100) works normally.

    Stop the web service on RS2; the health check detects it and all traffic is scheduled to RS1.

    Stop the web service on RS1 as well; traffic is now scheduled to LB1's sorry server.

    Stop the KeepAlived service on LB1; service automatically fails over to LB2.

  • Experiment 5: a master/master IPVS cluster

    • Environment:
      LB1/VS1: IP: 192.168.136.230, back-end RS: RS1, RS2
      LB2/VS2: IP: 192.168.136.130, back-end RS: RS3, RS4
      LB1 VIP: 192.168.136.100
      LB2 VIP: 192.168.136.200
      RS1: IP: 192.168.136.229
      RS2: IP: 192.168.136.129
      RS3: IP: 192.168.136.240
      RS4: IP: 192.168.136.250
      The LBs act as MASTER and BACKUP for each other:
      MASTER: LB1, BACKUP: LB2
      MASTER: LB2, BACKUP: LB1

    // keepalived configuration on LB1 and LB2
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node1@localhost     // on LB1
       notification_email_from node2@localhost     // on LB2
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node1                             // on LB1
       router_id node2                             // on LB2
       vrrp_mcast_group4 224.0.0.58
    }
    vrrp_instance VI_1 {
        state MASTER                               // on LB1
        state BACKUP                               // on LB2
        interface ens37
        virtual_router_id 51
        priority 100                               // on LB1
        priority 90                                // on LB2
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6
        }
        virtual_ipaddress {
            192.168.136.100/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    vrrp_instance VI_2 {
        state BACKUP                               // on LB1
        state MASTER                               // on LB2
        interface ens37
        virtual_router_id 61
        priority 80                                // on LB1
        priority 100                               // on LB2
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass a56c19be
        }
        virtual_ipaddress {
            192.168.136.200/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    
    }
    virtual_server 192.168.136.100 80 {
        delay_loop 3
        lb_algo wrr
        lb_kind DR
        protocol TCP
        sorry_server 127.0.0.1 80
        real_server 192.168.136.229 80 {
            weight 2
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
        real_server 192.168.136.129 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }
    virtual_server 192.168.136.200 80 {
        delay_loop 3
        lb_algo wrr
        lb_kind DR
        protocol TCP
        sorry_server 127.0.0.1 80
        real_server 192.168.136.240 80 {
            weight 2
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
        real_server 192.168.136.250 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
           }
       }
    }
    
    // configure the sorry server on LB1 and LB2
    echo sorry on LB1 > /var/www/html/index.html     // on LB1
    echo sorry on LB2 > /var/www/html/index.html     // on LB2
    systemctl start httpd
    
    // configure the web service on RS1, RS2, RS3 and RS4
    echo RS1 homepage > /var/www/html/index.html     // on RS1
    echo RS2 homepage > /var/www/html/index.html     // on RS2
    echo RS3 homepage > /var/www/html/index.html     // on RS3
    echo RS4 homepage > /var/www/html/index.html     // on RS4
    systemctl start httpd
    
    // script: suppress ARP responses for the VIP on each RS and bind the VIP to the loopback interface
    vim lvs_dr_rs.sh
    #! /bin/bash
    vip='192.168.136.100'                            // on RS1, RS2
    vip='192.168.136.200'                            // on RS3, RS4
    mask='255.255.255.255'
    dev=lo:1
    rpm -q httpd &> /dev/null || yum -y install httpd &>/dev/null
    service httpd start &> /dev/null && echo "The httpd Server is Ready!"
    
    case $1 in
    start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig $dev $vip netmask $mask broadcast $vip up
        echo "The RS Server is Ready!"
        ;;
    stop)
        ifconfig $dev down
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "The RS Server is Canceled!"
        ;;
    *)
        echo "Usage: $(basename $0) start|stop"
        exit 1
        ;;
    esac
    
    chmod +x lvs_dr_rs.sh
    bash lvs_dr_rs.sh start
    
    // start the KeepAlived service on LB1 and LB2, then test
    systemctl start keepalived
    

ipvsadm -Ln shows the ipvs scheduling policy, which matches the KeepAlived configuration.

Accessing the web services on VIP1 and VIP2 (192.168.136.100, 192.168.136.200) works normally.

Stop RS1's web service; the health check detects it and all traffic is scheduled to RS2.

Stop RS2's web service as well; traffic is scheduled to LB1's sorry server.

Stop KeepAlived on LB1; service automatically fails over to LB2.

Stop RS3's web service; all traffic is scheduled to RS4.

Stop RS4's web service as well; traffic is scheduled to LB2's sorry server.

GLOBAL CONFIGURATION

Defines mail notification and static routes.

(六)Resource monitoring through scripts called by Keepalived

  • keepalived can call external helper scripts to monitor resources and dynamically adjust the instance priority based on the result

  • vrrp_script: defines a custom monitoring script; vrrp instances react to its return value. It is defined outside any vrrp instance and, being shared, can be called by several instances.

  • track_script: invokes a script defined with vrrp_script to monitor a resource; it is defined inside an instance and references the previously defined vrrp_script

    • Two steps: (1) define a script; (2) call it.
      Format:

    // define the script, outside any instance
    vrrp_script <SCRIPT_NAME> {
        script ""     // the command to run goes inside the quotes
        interval INT
        weight -INT
    }
    // call the script, inside an instance
    track_script {
        SCRIPT_NAME_1
        SCRIPT_NAME_2
    }
    
  • Experiment 6: a master/master highly available Nginx reverse proxy

    • Environment:
      LB1/VS1: IP: 192.168.136.230, back-end RS: RS1, RS2
      LB2/VS2: IP: 192.168.136.130, back-end RS: RS3, RS4
      LB1 VIP: 192.168.136.100
      LB2 VIP: 192.168.136.200
      RS1: IP: 192.168.136.229
      RS2: IP: 192.168.136.129
      RS3: IP: 192.168.136.240
      RS4: IP: 192.168.136.250
      The LBs act as MASTER and BACKUP for each other:
      MASTER: LB1, BACKUP: LB2
      MASTER: LB2, BACKUP: LB1

    // KeepAlived configuration on LB1 and LB2
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node1@localhost     // on LB1
       notification_email_from node2@localhost     // on LB2
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node1                             // on LB1
       router_id node2                             // on LB2
       vrrp_mcast_group4 224.0.0.58
    }
    
    vrrp_script chk_nginx {
            script "killall -0 nginx && exit 0 || exit 1;"
            interval 1
            weight -20
            fall 3
            rise 3
    }
    vrrp_instance VI_1 {
        state MASTER                               // on LB1
        state BACKUP                               // on LB2
        interface ens37
        virtual_router_id 51
        priority 100                               // on LB1
        priority 90                                // on LB2
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6
        }
        virtual_ipaddress {
            192.168.136.100/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    // the track_script reference below appears only in LB1's configuration file
        track_script {
            chk_nginx
        }
    }
    vrrp_instance VI_2 {
        state BACKUP                               // on LB1
        state MASTER                               // on LB2
        interface ens37
        virtual_router_id 61
        priority 90                                // on LB1
        priority 100                               // on LB2
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass a56c19be
        }
        virtual_ipaddress {
            192.168.136.200/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    // the track_script reference below appears only in LB2's configuration
        track_script {
            chk_nginx
        }
    }
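The chk_nginx health check above works in two steps: `killall -0 nginx` exits 0 only if an nginx process exists (signal 0 probes without delivering anything), and after three consecutive failures keepalived subtracts 20 from the instance priority. The resulting election arithmetic can be sketched in plain shell (the priorities 100/90 and weight -20 are taken from the configuration above):

```shell
# Failover arithmetic applied when chk_nginx fails on the MASTER.
# "killall -0 nginx" sends no signal; its exit status only reports
# whether any nginx process exists (0 = running, non-zero = gone).
master_prio=100   # MASTER's configured priority for this instance
backup_prio=90    # BACKUP's configured priority
weight=-20        # subtracted after 3 consecutive check failures (fall 3)

effective=$((master_prio + weight))
if [ "$effective" -lt "$backup_prio" ]; then
    echo "effective priority $effective < $backup_prio: BACKUP takes over the VIP"
fi
```

This is also why -20 is a sensible weight here: it is larger than the 10-point gap between the two priorities, so a failed check always flips the election.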
    
    // Configure the nginx reverse proxy on LB1 and LB2
    vim /etc/nginx/nginx.conf
    http {
        upstream websrvs1 {
            server 192.168.136.229:80 weight=2;
            server 192.168.136.129:80 weight=1;
        }
        upstream websrvs2 {
            server 192.168.136.240:80 weight=2;
            server 192.168.136.250:80 weight=1;
        }
        server {
            listen  192.168.136.100:80;
            location / {
                    proxy_pass http://websrvs1;
            }
        }
        server {
            listen  192.168.136.200:80;
            location / {
                    proxy_pass http://websrvs2;
            }
        }
    }
    nginx -t
    systemctl start nginx
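With weight=2 against weight=1, nginx's weighted round-robin sends two of every three requests to the heavier backend. A quick sketch of the expected split (the request count is illustrative; on the real setup you can verify it with something like `for i in {1..9}; do curl -s http://192.168.136.100/; done`):

```shell
# Expected request split for upstream websrvs1 (weights 2 : 1).
requests=9
w1=2; w2=1
total=$((w1 + w2))
echo "RS1: $((requests * w1 / total)) of $requests requests"
echo "RS2: $((requests * w2 / total)) of $requests requests"
```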
    
    // Configure the web service on RS1, RS2, RS3, RS4
    echo RS1 homepage > /var/www/html/index.html     // on RS1
    echo RS2 homepage > /var/www/html/index.html     // on RS2
    echo RS3 homepage > /var/www/html/index.html     // on RS3
    echo RS4 homepage > /var/www/html/index.html     // on RS4
    systemctl start httpd
    
    // Start the keepalived service on LB1 and LB2, then test
    systemctl start keepalived
    

    Access the web services on 192.168.136.100 and 192.168.136.200; requests are indeed scheduled as configured.


    Stop the httpd service on RS2; all requests are scheduled to RS1.


    Stop the httpd service on RS3; all requests are scheduled to RS4.


    Stop the nginx reverse proxy on LB2 and watch the multicast traffic with tcpdump -i ens37 -nn host 224.0.0.58. The three highlighted stages show:
    (1) multicast traffic before nginx is stopped
    (2) after nginx stops, LB2's priority for vrid 61 drops by 20 to 80, while LB1's priority for vrid 61 is 90
    (3) because LB1 now has the higher priority, it takes over VIP2


    Stop the nginx reverse proxy on LB1 and watch the multicast traffic with tcpdump -i ens37 -nn host 224.0.0.58. The three highlighted stages show:
    (1) multicast traffic before nginx is stopped
    (2) after nginx stops, LB1's priority for vrid 51 drops by 20 to 80, while LB2's priority for vrid 51 is 90
    (3) because LB2 now has the higher priority, it takes over VIP1


    With both nginx reverse proxies now stopped, access to the web services on 192.168.136.100 and 192.168.136.200 fails entirely.


    Start the nginx reverse proxy on LB2 again and watch the multicast traffic with tcpdump -i ens37 -nn host 224.0.0.58. The three highlighted stages show:
    (1) multicast traffic before nginx is started
    (2) after nginx starts, LB2's priority for vrid 61 rises by 20 back to 100, while LB1's priority for vrid 61 is 90
    (3) because LB2 again has the higher priority, it takes back VIP2


    At this point both VIP1 and VIP2 are reverse-proxied by nginx on LB2, and the web services on 192.168.136.100 and 192.168.136.200 are fully restored.

Global definitions

2. keepalived's VRRP instance configuration section

(7) Keepalived synchronization groups

  • In the LVS NAT model, the VIP and DIP must fail over together, which requires a synchronization group

  • Format:

    vrrp_sync_group VG_1 {
      group {
          VI_1 # name of vrrp_instance(below)
          VI_2 # One for each moveable IP.
      }
    }
    vrrp_instance VI_1 {
      eth0
      vip
    }
    vrrp_instance VI_2 {
      eth1
      dip
    }
    

Static routes/addresses

VRRPD CONFIGURATION

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
        192.168.200.17
        192.168.200.18
    }
}

VRRP synchronization group(s): VRRP sync groups

The virtual router instance is the core configuration section.

VRRP instance(s): each instance defines one VRRP virtual router

3. keepalived's LVS virtual server configuration section

LVS CONFIGURATION

Virtual server group(s)

virtual_server 192.168.200.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 192.168.201.100 443 {
        weight 1
        SSL_GET {
            url {
              path /
              digest ff20ad2481f97b1754ef3e12ecd3a9cc
            }
            url {
              path /mrtg/
              digest 9b3a0c85a887a256d6939da88aabd8cd
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Virtual server(s): the VS and RSs of the ipvs cluster

3. Prerequisites for implementing LVS high availability with keepalived

2.2  Configuration syntax

1. Prepare three nodes: ms, node1, and node2.

• Configuring the virtual router:

2. Install ansible on node ms and set up SSH key trust with node1 and node2.

vrrp_instance <STRING> {

[root@ms ~]# yum -y install ansible
[root@ms ~]# ssh-keygen -t rsa -P ''
[root@ms ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node1
[root@ms ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2

....

3. Install the keepalived service on node1 and node2.

}

[root@ms ~]# ansible all -m shell -a "yum -y install keepalived"

• Instance-specific parameters:

4. On node1 and node2, edit the keepalived configuration.

state MASTER|BACKUP: this node's initial state in the virtual router; only one node can be MASTER, all others should be BACKUP

[root@node1 ~]# cd /etc/keepalived
[root@node1 keepalived]# vim keepalived.conf
[root@node2 ~]# cd /etc/keepalived
[root@node2 keepalived]# vim keepalived.conf

interface IFACE_NAME: the physical interface this virtual router is bound to

5. In a separate terminal on node1 and node2, follow the log to watch notifications in real time.

virtual_router_id VRID: unique identifier of this virtual router; range 0-255

[root@node1 ~]# tail -f /var/log/messages
[root@node2 ~]# tail -f /var/log/messages

priority 100: this physical node's priority in the virtual router; range 1-254

4. How keepalived sends notifications on state transitions

advert_int 1: interval between VRRP advertisements; default 1 s

1. Where notification scripts can be defined

authentication { # authentication mechanism

vrrp_sync_group {

auth_type AH|PASS

}

auth_pass PASSWORD # password of at most 8 characters; anything longer is truncated to the first 8
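A consequence of the 8-character limit: two auth_pass values that agree in their first 8 characters behave identically. A standalone demonstration (plain shell, hypothetical passwords):

```shell
# keepalived compares only the first 8 characters of auth_pass.
pass1='longpassword-A'
pass2='longpassword-B'
if [ "${pass1:0:8}" = "${pass2:0:8}" ]; then
    echo "same effective auth_pass: ${pass1:0:8}"
fi
```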

the most commonly used location

}

vrrp_instance {

virtual_ipaddress { # virtual IPs

}

<IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <LABEL>

1) First define the global configuration section

192.168.200.17/24 dev eth1

global_defs {
      notification_email {
           root@localhost
      }
      notification_email_from keepalived@localhost
      smtp_server 127.0.0.1
      smtp_connect_timeout 30
      router_id LVS_DEVEL

192.168.200.18/24 dev eth2 label eth2:1

2) Define the health-check mechanism

}

vrrp_script chk_main {
          script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
          interval 1
          weight -2
    }
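chk_main above is just a file probe: it exits 1 (check failed) while /etc/keepalived/down exists and 0 otherwise, so an administrator can force a failover with `touch down` and undo it with `rm down`. The same logic can be exercised standalone against a temporary directory instead of /etc/keepalived:

```shell
# Reproduce chk_main's test against a temp dir instead of /etc/keepalived.
dir=$(mktemp -d)
check() { [ -f "$dir/down" ] && exit 1 || exit 0; }

( check ); echo "no down file -> exit $?"   # healthy: priority unchanged
touch "$dir/down"
( check ); echo "down file    -> exit $?"   # failing: priority reduced by 2
rm -rf "$dir"
```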

track_interface { # network interfaces to monitor; if one fails, the node transitions to the FAULT state

3) Next, define the VRRP instance section

implements address failover

VRRP instance configuration on node1

eth0

[root@node1 keepalived]# vim keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 63
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.200.100
    }
    track_script {
        chk_main
    }
}

eth1

VRRP instance configuration on node2

[root@node2 keepalived]# vim keepalived.conf
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 63
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.200.100
    }
    track_script {
        chk_main
    }

}

2. Notification hooks

• nopreempt: sets the instance to non-preemptive mode

notify_master: notification for becoming master

• preempt_delay 300: in preemptive mode (the default), seconds to wait after a node comes online before it triggers a new election

notify_backup: notification for becoming backup

2.3  Defining the notification script

notify_fault: notification on entering the fault state

notify_master <STRING>|<QUOTED-STRING>: script triggered when this node becomes the master node

4) Inside the instance, the notify.sh script can be referenced to deliver the notifications

notify_backup <STRING>|<QUOTED-STRING>: script triggered when this node becomes a backup node

notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"

notify_fault <STRING>|<QUOTED-STRING>: script triggered when this node enters the FAULT state

* Example notify.sh script

notify <STRING>|<QUOTED-STRING>: generic notification hook; a single script can handle all three state transitions above

#!/bin/bash
# Author: MageEdu
# description: An example of notify script
vip=172.16.200.100
contact='root@localhost'

notify() {
    mailsubject="`hostname` to be $1: $vip floating"
    mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
    echo $mailbody | mail -s "$mailsubject" $contact
}

case "$1" in
    master)
        notify master
        exit 0
    ;;
    backup)
        notify backup
        exit 0
    ;;
    fault)
        notify fault
        exit 0
    ;;
    *)
        echo "Usage: $(basename $0) {master|backup|fault}"
        exit 1
    ;;
esac
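Invoked as `notify.sh master`, the script builds a subject line like "<hostname> to be master: 172.16.200.100 floating" and mails it to the contact. The message construction can be checked without a mail server by echoing instead of mailing:

```shell
# Build the notification text the same way notify.sh does, without sending mail.
vip=172.16.200.100
state=master
mailsubject="$(hostname) to be $state: $vip floating"
mailbody="$(date '+%F %H:%M:%S'): vrrp transition, $(hostname) changed to be $state"
echo "$mailsubject"
echo "$mailbody"
```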

2.4  Logging configuration

5) From ms, restart keepalived on node1 and node2 and check which node holds the virtual_ipaddress

To write keepalived's log to its own file, modify the /etc/sysconfig/keepalived configuration file and the rsyslog configuration rsyslog.conf

[root@ms ~]# ansible all -a "service keepalived restart"
[root@ms ~]# ansible all -m shell -a "ip addr show | grep eth0"

vim /etc/sysconfig/keepalived

6) On master node1, create a down file to simulate a failure so the virtual_ipaddress fails over from node1 to node2, and watch the VIP move between the nodes from ms

KEEPALIVED_OPTIONS="-D -S 6"

[root@node1 keepalived]# touch down
[root@ms ~]# ansible all -m shell -a "ip addr show | grep eth0"

vim /etc/rsyslog.conf

7) Restore master node1 by removing the down file, and watch the VIP move back

local6.*                      /var/log/keepalive.log

[root@node1 keepalived]# rm -rf down
[root@ms ~]# ansible all -m shell -a "ip addr show | grep eth0"

Restart the rsyslog and keepalived services

5. How to configure ipvs

The core configuration section is virtual_server, which defines the virtual server.

1. virtual_server IP port: virtual server defined by IP address and port

2. virtual_server fwmark int: virtual server keyed on an ipvs firewall mark, for firewall-mark-based LVS

3. virtual_server group string: virtual server group

4. lb_algo {rr|wrr|lc|wlc|lblc|lblcr}: the LVS scheduling algorithm

5. lb_kind {NAT|DR|TUN}: the LVS forwarding model

6. persistence_timeout <INT>: how long persistent connections last

7. protocol: the protocol the ipvs rule applies to

1) Configure the ipvs instance in the virtual_server section

ipvs configuration in the virtual_server section on master node1

[root@node1 keepalived]# vim keepalived.conf
virtual_server 172.16.200.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
    persistence_timeout 0
    protocol TCP
    real_server 172.16.200.8 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

ipvs configuration in the virtual_server section on backup node2

[root@node2 keepalived]# vim keepalived.conf
virtual_server 172.16.200.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
    persistence_timeout 0
    protocol TCP
    real_server 172.16.200.9 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

2) From ms, install ipvsadm on node1 and node2, and start httpd on both nodes

[root@ms ~]# ansible all -m shell -a "yum -y install ipvsadm"
[root@ms ~]# ansible all -a "service httpd start"

3) On node1 and node2, check the resulting ipvs rules

[root@node1 keepalived]# ipvsadm -L -n
[root@node2 keepalived]# ipvsadm -L -n

6. Making a specific service highly available

1. Monitor the service

vrrp_script {

}

2. Track the check script in the VRRP instance

track_script {

}

7. Implementing a dual-master model with multiple virtual routers

To implement a master/master model based on multiple virtual routers, two vrrp_instance sections must be defined.

1. Configure the vrrp_instance sections on node1, defining two instances

[root@node1 keepalived]# vim keepalived.conf

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 63
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.200.100
    }
    track_script {
        chk_main
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 65
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 21112
    }
    virtual_ipaddress {
       172.16.200.200
    }
    track_script {
       chk_main
    }
}

2. Configure the vrrp_instance sections on node2, defining two instances

[root@node2 keepalived]# vim keepalived.conf
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 63
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.200.100
    }
    track_script {
        chk_main
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 65
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 21112
    }
    virtual_ipaddress {
       172.16.200.200
    }
    track_script {
       chk_main
    }
}

3. Stop the keepalived service on master node1 and watch the VIPs move between the nodes from ms; likewise, stop keepalived on node2 and start node1's again, then watch the VIPs move once more from ms.

[root@node1 keepalived]# service keepalived stop
[root@ms ~]# ansible all -m shell -a "ip addr show | grep eth0"
[root@node2 keepalived]# service keepalived stop
[root@node1 keepalived]# service keepalived start
[root@ms ~]# ansible all -m shell -a "ip addr show | grep eth0"

This article is from the "丿Sky 灬ONE PEICE" blog; please contact the author before reposting.

