Implementing Dual-Master Nginx High Availability with keepalived (Part 1)

Goal: build a dual-master, highly available Nginx load-balancing cluster with keepalived.

Preparation: 7 hosts

client:

172.18.x.x

Directors: keepalived + nginx, each with a NIC on 172.18.x.x/16

192.168.234.27

192.168.234.37

real_server

192.168.234.47

192.168.234.57

192.168.234.67

192.168.234.77

Test environment: two Nginx proxies (dual-master Nginx, each needing two NICs: eth0 on the internal network, eth1 on the external network), web servers behind them for load balancing, and one client to verify the results.


Single-master IPVS example

Test result

[root@234c17 ~]# for i in {1..4};do curl www.a.com;curl www.b.com;sleep 1;done
234.57
234.77
234.47
234.67
234.57
234.77
234.47
234.67

Configure keepalived

Highly available IPVS cluster example: edit the keepalived configuration file

Procedure:

Use keepalived to provide high availability for LVS; keepalived itself acts as a highly available proxy service. Note: to avoid interfering with the test results, disable iptables and SELinux before starting the experiment.
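That preparation step can be sketched as follows, assuming a systemd host running firewalld (on iptables-based systems, "iptables -F" or "service iptables stop" instead):

```shell
# Run as root on every director and real server before the experiment.
setenforce 0                      # SELinux permissive for this boot only
# Make the change persistent across reboots:
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
systemctl stop firewalld          # stop the packet filter for the duration of the test
```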

1. The difference between HAProxy and Nginx

Edit the keepalived configuration on host 192.168.234.27

[root@234c27 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
root@localhost  //recipient address
}
notification_email_from keepalived@localhost  //sender address
smtp_server 127.0.0.1  //mail server IP
smtp_connect_timeout 30  //mail connection timeout
router_id kptwo  //router id
vrrp_mcast_group4 234.10.10.10  //multicast address for the VRRP protocol
}

vrrp_instance VI_1 {  //VRRP instance
state MASTER  //this node is the LVS MASTER
interface ens37  //interface carrying VRRP traffic
virtual_router_id 50  //virtual router id
priority 100  //priority 100; higher wins the election
advert_int 1  //interval between multicast advertisements
authentication {  //authentication
auth_type PASS  //type PASS (plain text)
auth_pass 1111  //password
}
virtual_ipaddress { //keepalived virtual IP
10.0.0.100/24
}
}
virtual_server 10.0.0.100 80 {
    delay_loop 6  //interval between backend health checks
    lb_algo wrr  //scheduling algorithm
    lb_kind DR  //cluster type
    #persistence_timeout 50  //persistent connection timeout
    protocol TCP  //service protocol; only TCP is supported
    real_server 192.168.234.47 80 {  //backend real server address
        weight 1 //weight
        HTTP_GET {  //application-layer health check
            url {
              path /  //URL to monitor
              status_code 200  //response code that counts as healthy
            }
            connect_timeout 3  //connection timeout
            nb_get_retry 3  //number of retries
            delay_before_retry 3  //delay before each retry
        }
    }
    real_server 192.168.234.57 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
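With wrr and the weights 1 and 2 above, the scheduler sends requests to the two real servers in roughly a 1:2 ratio; a quick arithmetic sketch of the expected share:

```shell
# Each real server receives weight/total of the scheduled requests.
total=$((1 + 2))
echo "234.47 gets 1/$total, 234.57 gets 2/$total of requests"
# prints: 234.47 gets 1/3, 234.57 gets 2/3 of requests
```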

1. First configure the 4 real servers and install httpd for testing

[root@234c47 ~]# curl 192.168.234.47;curl 192.168.234.57;curl 192.168.234.67;curl 192.168.234.77
234.47
234.57
234.67
234.77

Steps:

How HAProxy works: it proxies in HTTP and TCP modes and can front all kinds of services; it is a dedicated proxy server and cannot itself act as a web server.
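To illustrate the TCP side, here is a minimal, hypothetical HAProxy fragment that proxies a non-HTTP service in mode tcp; the service, address, and port are placeholders, not part of this experiment:

```
listen mysql-proxy
    bind *:3306
    mode tcp                        # layer-4 proxying, no HTTP parsing
    balance roundrobin
    server db1 192.168.234.47:3306 check
```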

Edit the keepalived configuration on host 192.168.234.37

[root@234c37 ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id kptwo
   vrrp_mcast_group4 234.10.10.10
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens37
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       10.0.0.100/24
    }
}
virtual_server 10.0.0.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1:80
    real_server 192.168.234.47 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.234.57 80 {
        weight 2
        HTTP_GET {
            url {
              path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

2. Configure keepalived

Because this is a dual-master model:

1. Configure the IPs

How Nginx works: it operates both as a web server and as a proxy; Nginx proxies only web services.

Check keepalived status

[root@234c37 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
…………
[root@234c37 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
//no ipvs rules configured yet

1. Configure keepalived on host 234.27

[root@234c27 ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
      root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id kpone
    vrrp_mcast_group4 234.10.10.10
 }
 vrrp_instance VI_1 {
     state MASTER
     interface ens33
     virtual_router_id 50
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 1111
     }
     virtual_ipaddress {
         172.18.0.100/16  //this VIP schedules 192.168.234.47/57
     }
 }
vrrp_instance VI_2 {
     state BACKUP
     interface ens33
     virtual_router_id 51
     priority 80
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 2222
     }
     virtual_ipaddress {
         172.18.0.200/16  //this VIP schedules 192.168.234.67/77
     }
}

1. Configure host A's IP


Start the service

[root@234c27 keepalived]# systemctl start keepalived.service
[root@234c27 keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-08-31 20:30:02 CST; 12s ago
  Process: 9657 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 9658 (keepalived)
………………
[root@234c27 keepalived]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 wrr
  -> 192.168.234.47:80            Route   1      0          0
  -> 192.168.234.57:80            Route   2      0          0
//after starting the service, the LVS virtual server is configured

2. Configure keepalived on host 234.37

[root@234c37 ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
      root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id kpone
    vrrp_mcast_group4 234.10.10.10
 }
 vrrp_instance VI_1 {
     state BACKUP
     interface ens33
     virtual_router_id 50
     priority 80
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 1111
     }
     virtual_ipaddress {
         172.18.0.100/16  //this VIP schedules 192.168.234.47/57
     }
 }
vrrp_instance VI_2 {
     state MASTER
     interface ens33
     virtual_router_id 51
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 2222
     }
     virtual_ipaddress {
         172.18.0.200/16  //this VIP schedules 192.168.234.67/77
     }
}

With that, the simple dual-master model is set up.
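As a quick sanity check (a sketch, using the ens33 interface name from the configs above), each director should hold the VIP for which it is MASTER:

```shell
# On 234.27 expect to see 172.18.0.100, on 234.37 expect 172.18.0.200
ip addr show ens33 | grep -E 'inet 172\.18\.0\.(100|200)'
```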

# ip addr add dev eth0 192.168.10.2/24

2. Installation and configuration

Prepare the backend real servers

3. Configure nginx on hosts 234.27/37

First configure the http block

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;
    upstream web1 {
        server 192.168.234.47:80;
        server 192.168.234.57:80;
        }
    upstream web2{
        server 192.168.234.67:80;
        server 192.168.234.77:80;
        }

/*
The ngx_http_upstream_module module
defines groups of servers that can be referenced by the
proxy_pass, fastcgi_pass, and similar directives.
1. upstream name { ... }
Defines a backend server group; introduces a new context.
The default scheduling algorithm is wrr.
Context: http
upstream httpdsrvs {
server ...
server ...
...
}
*/

Then configure the server blocks

    server {
        listen       80 default_server; //listen on port 80 by default
        server_name  www.a.com;  //domain name
        listen       [::]:80 default_server;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
                proxy_pass http://web1;  //requests arriving on port 80 are served by group web1, which the http block defines as 192.168.234.47/57:80
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }
    server {
        server_name www.b.com;
        listen 80;
        location / {
                proxy_pass http://web2; //requests are served by group web2, which the http block defines as 192.168.234.67/77:80

        }
    }
}

This way, visiting www.a.com reaches 192.168.234.47/57:80,

and visiting www.b.com reaches 192.168.234.67/77:80.

Now add the hosts entries for www.a.com and www.b.com on the client:

172.18.0.100 www.a.com
172.18.0.200 www.b.com

    The client resolves www.a.com to 172.18.0.100

[root@234c17 ~]# ping www.a.com
PING www.a.com (172.18.0.100) 56(84) bytes of data.
64 bytes from www.a.com (172.18.0.100): icmp_seq=1 ttl=64 time=0.358 ms
64 bytes from www.a.com (172.18.0.100): icmp_seq=2 ttl=64 time=0.376 ms
64 bytes from www.a.com (172.18.0.100): icmp_seq=3 ttl=64 time=0.358 ms
64 bytes from www.a.com (172.18.0.100): icmp_seq=4 ttl=64 time=0.366 ms

    The client resolves www.b.com to 172.18.0.200

[root@234c17 ~]# ping www.b.com
PING www.b.com (172.18.0.200) 56(84) bytes of data.
64 bytes from www.b.com (172.18.0.200): icmp_seq=1 ttl=64 time=0.582 ms
64 bytes from www.b.com (172.18.0.200): icmp_seq=2 ttl=64 time=0.339 ms
64 bytes from www.b.com (172.18.0.200): icmp_seq=3 ttl=64 time=0.524 ms
64 bytes from www.b.com (172.18.0.200): icmp_seq=4 ttl=64 time=0.337 ms

Result:

[root@234c17 ~]# for i in {1..4};do curl www.a.com;curl www.b.com;sleep 1;done
234.57
234.77
234.47
234.67
234.57
234.77
234.47
234.67

2. Configure host B's IP

1. Installation

Add the VIP to an interface and restrict the ARP announce/ignore levels (do this on both RS1 and RS2), then point the default gateway at the router

ip a a 10.0.0.100/32 dev ens37

echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce

route add default gw 192.168.234.17

Install the httpd service and create the test web pages
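A sketch of that step on one real server (the page body matches what the curl loops return; repeat on each RS with its own tag):

```shell
yum -y install httpd
# Each RS serves its own identifier so the scheduler's choice is visible
echo "234.47" > /var/www/html/index.html
systemctl start httpd
```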

Implementing Dual-Master Nginx High Availability (Part 2)

# ip addr add dev eth0 192.168.10.23/24

# yum -y install haproxy

Start the service

Now extend the experiment

3. Configure host C's IP

Note: if you install in production, pick a version one to three releases behind the latest; an unfixable bug in a brand-new release can be fatal.

Multi-master IPVS example

Add IP addresses to hosts 192.168.234.47/57

[root@234c47 ~]# ip a a dev ens37 192.168.234.167/24

[root@234c57 ~]# ip a a dev ens37 192.168.234.177/24

# ip addr add dev eth0 192.168.10.3/24

2. Configuration details

Configure keepalived

Highly available IPVS cluster example: edit the keepalived configuration file

Edit the httpd configuration to add an FQDN-based virtual host

[root@234c47 ~]# vim /etc/httpd/conf.d/vhost.conf

<VirtualHost 192.168.234.167:80>
    DocumentRoot /data/web1
    ServerName www.a.com
    <Directory /data/web1>
        Require all granted
    </Directory>
</VirtualHost>

4. Configure host D's IP

************************ Global configuration *****************************

Edit the keepalived configuration on host 192.168.234.27

[root@234c27 keepalived]# vim /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id kpone
   vrrp_mcast_group4 234.10.10.10
}

vrrp_instance VI_1 {
    state MASTER
    interface ens37
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       10.0.0.100/24
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface ens37
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        10.0.0.200/24
    }
}
virtual_server 10.0.0.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    #sorry_server 127.0.0.1:80
    real_server 192.168.234.47 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
virtual_server 10.0.0.200 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    #sorry_server 127.0.0.1:80
    real_server 192.168.234.57 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Add the virtual host on the other host as well

[root@234c57 ~]# vim /etc/httpd/conf.d/vhost.conf

<VirtualHost 192.168.234.177:80>
    DocumentRoot /data/web1
    ServerName www.a.com
    <Directory /data/web1>
        Require all granted
    </Directory>
</VirtualHost>

# ip addr add dev eth0 192.168.10.33/24

global
log     127.0.0.1 local2  # global log server
chroot   /var/lib/haproxy  # chroot haproxy into this directory for better security
pidfile   /var/run/haproxy.pid # pid file location
maxconn   4000      # maximum number of connections
user    haproxy     # user the service runs as (a uid also works)
group    haproxy     # group the service runs as (a gid also works)
daemon           # run as a daemon
# turn on stats unix socket    # enable the UNIX socket
stats socket /var/lib/haproxy/stats # location of the unix socket
node      www.a.com  # name of this node, used in HA setups where several haproxy processes share one IP
ulimit-n    100       # maximum open file descriptors per process; computed automatically by default, so changing it is not recommended

Edit the keepalived configuration on host 192.168.234.37

[root@234c37 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id kptwo
   vrrp_mcast_group4 234.10.10.10
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens37
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       10.0.0.100/24
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface ens37
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        10.0.0.200/24
    }
}
virtual_server 10.0.0.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    #sorry_server 127.0.0.1:80
    real_server 192.168.234.47 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
virtual_server 10.0.0.200 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    #sorry_server 127.0.0.1:80
    real_server 192.168.234.57 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

VIP 10.0.0.100 goes primarily to 192.168.234.47, with 192.168.234.57 as backup.

VIP 10.0.0.200 goes primarily to 192.168.234.57, with 192.168.234.47 as backup.

Restart httpd

Result: accessing www.a.com

[root@234c17 ~]# for i in {1..8};do curl www.a.com;done
234.167
234.177
234.47
234.57
234.167
234.167
234.177
234.47

Accessing www.b.com

[root@234c17 ~]# for i in {1..8};do curl www.b.com;done
234.67
234.67
234.77
234.67
234.77
234.67
234.77
234.77

2. Configure the web service (do the same on hosts C and D; just change the IP in the default page to each host's own IP to tell them apart)

To enable "log 127.0.0.1 local2", note that the default configuration file contains this commented line:

Prepare the backend real servers

Change the VIP on 192.168.234.57 to 10.0.0.200/32

[root@234c27 keepalived]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 wrr
  -> 192.168.234.47:80            Route   1      0          0
TCP  10.0.0.200:80 wrr
  -> 192.168.234.57:80            Route   1      0          0

Now take down one LVS director

[root@234c27 keepalived]# systemctl stop keepalived.service
[root@234c27 keepalived]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

Service is still being provided

[root@234c37 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 wrr
  -> 192.168.234.47:80            Route   1      0          21
TCP  10.0.0.200:80 wrr
  -> 192.168.234.57:80            Route   1      0          39

The later setup is modified on top of the previous one.

1. Install Apache

#local2.*    /var/log/haproxy.log

If you want to implement a sorry_server:

1. Stop all the RS services, then install Apache or Nginx on the LVS directors

2. In the keepalived configuration file, modify:

virtual_server 10.0.0.200 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    #sorry_server 127.0.0.1:80  //uncomment this line to serve an error page when all real servers fail
    real_server 192.168.234.57 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

# yum -y install httpd

Enable it with the following configuration:

2. Create the default page

# touch /var/log/haproxy.log
# vim /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 514
local2.*    /var/log/haproxy.log
# service rsyslog restart
# tail -f /var/log/haproxy.log
Oct  6 10:45:22 localhost haproxy[22208]: 172.16.5.200:50332 [06/Oct/2013:10:45:22.852] web static/www.web1.com 6/0/2/4/32 200 45383 - - ---- 3/3/0/1/0 0/0 "GET / HTTP/1.1"

# vim /var/www/html/index.html

The log shows the client IP, the real server hostname, and other information

<h1>192.168.10.3</h1>

********************** Defaults configuration *********************************

3. Start Apache

defaults
mode  http      # proxy HTTP traffic (http is layer 7, tcp is layer 4)
log   global     # use the global log settings
option httplog      # log in the http log format
option dontlognull   # do not log health-check queries
######### Health checks matter because a dead backend server then stops receiving requests.
option http-server-close  # close the http channel after each request; supports client keep-alive
option forwardfor  except 127.0.0.0/8 # pass the real client ip to backends via an http header
option  redispatch   # if the server bound to a serverid dies, force requests to another healthy server
retries  3       # after 3 failed connections the server is considered unavailable; can also be set later
timeout http-request 10s # request timeout
timeout queue  1m   # queue timeout
timeout connect 10s   # connect timeout
timeout client  1m   # client timeout
timeout server  1m   # server timeout
timeout http-keep-alive 10s # keep-alive timeout
timeout check  10s    # health check timeout
maxconn    3000   # maximum connections per process; can also be set in global

# service httpd start

************************ Frontend proxy configuration ******************************

3. Configure the sorry_server (on the Nginx proxy hosts; configure both Nginx proxies identically, changing only the IP in the default page to each host's own IP to tell them apart)

frontend main *:5000  # frontend: name and listen port
acl url_static  path_beg -i /static /images /javascript /stylesheets
acl url_static  path_end -i .jpg .gif .png .css .js
use_backend static     if url_static
default_backend       app

This defines the access control: requests matching url_static are sent to the static backend; everything else uses the default backend, app.

1. Install Apache

*********************** Backend server configuration *****************************

# yum -y install httpd

backend static
balance   roundrobin  # load-balancing algorithm
server   static 127.0.0.1:4331 check # defines a backend server with health checking
backend app
balance   roundrobin
server app1 127.0.0.1:5001 check rise 2 fall 1
server app2 127.0.0.1:5002 check rise 2 fall 1
server app3 127.0.0.1:5003 check rise 2 fall 1
server app4 127.0.0.1:5004 check rise 2 fall 1
# check rise 2 fall 1: health checking; rise is the number of successful checks before a stopped server is marked up, fall the number of failures before a running server is marked down

2. Create the default page


# vim /var/www/html/index.html


<h1>sorry_server:192.168.10.2</h1>

3. Example configuration

3. Change the listen port to 8080 to avoid conflicting with the port Nginx listens on

Local IP: 172.16.5.16

# vim /etc/httpd/conf/httpd.conf

Enable IP forwarding

Listen 8080

# sysctl -w net.ipv4.ip_forward=1

4. Start the Apache service

Disable the firewall

4. Configure the proxy (configure both Nginx proxies identically)

Proxy for backend IP 172.16.6.1

1. Install Nginx

Create a page on the backend server and start httpd

# yum -y install nginx

# vim /var/www/html/index.html
<h1>welcome!</h1>
# service httpd start
global
log     127.0.0.1 local2
chroot   /var/lib/haproxy
pidfile   /var/run/haproxy.pid
maxconn   4000
user    haproxy
group    haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode          http
log           global
option         httplog
option         dontlognull
option http-server-close
option forwardfor    except 127.0.0.0/8 header X-Forward-For # record the real client ip in the backend servers' logs; remember to adjust the log format on the backends
option         redispatch
retries         3
timeout http-request  10s
timeout queue      1m
timeout connect     10s
timeout client     1m
timeout server     1m
timeout http-keep-alive 10s
timeout check      10s
maxconn         3000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend web
bind *:80
default_backend static
This can also be written as:
frontend web 172.16.5.16:80
default_backend static
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
server   www.web1.com 172.16.6.1:80 check
stats          enable # enable the server status page
stats          hide-version # hide version information
stats          realm haproxy\ stats # authentication realm (the \ escapes a space)
stats          auth admin:admin # authentication user
stats          admin if TRUE # allow management once authenticated
stats          uri /abc # custom uri for the stats page

2. Define the upstream server group in the http{} block;

Screenshot of the result

# vim /etc/nginx/nginx.conf


        http {

Use a dedicated port to listen for stats status information.

            upstream websrvs {

global
log     127.0.0.1 local2
chroot   /var/lib/haproxy
pidfile   /var/run/haproxy.pid
maxconn   4000
user    haproxy
group    haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
defaults
mode          http
log           global
option         httplog
option         dontlognull
option http-server-close
option forwardfor    except 127.0.0.0/8
option         redispatch
retries         3
timeout http-request  10s
timeout queue      1m
timeout connect     10s
timeout client     1m
timeout server     1m
timeout http-keep-alive 10s
timeout check      10s
maxconn         3000
listen stats
bind *:1080
stats          enable
stats          hide-version
stats          realm haproxy\ stats
stats          auth admin:admin
stats          admin if TRUE
stats          uri /abc
frontend web
bind *:80
default_backend static
backend static
server   www.web1.com 172.16.6.1:80 check

                server 192.168.10.3:80;

Screenshot of the result:

                server 192.168.10.33:80;


                server 127.0.0.1:8080 backup;


            }


        }


3. Reference the defined group from a location{} block inside server{};

4. Load balancing: scheduling algorithms

# vim /etc/nginx/conf.d/default.conf

roundrobin: dynamic; supports weights and runtime adjustment, and supports slow start

        server {

static-rr: static; no runtime adjustment and no slow start

            location / {

leastconn: least connections; recommended only for long-lived sessions

                proxy_pass http://websrvs;

source: use when the backends are dynamic servers; similar to Nginx's ip_hash

                index index.html;

hash-type map-based: static; take the IP's hash modulo the total number of servers, and the remainder selects which server handles the request
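The map-based scheme can be sketched numerically; the hash function here (cksum) and the server names are illustrative stand-ins, not HAProxy's real hash:

```shell
servers=(srv0 srv1 srv2)
ip="172.16.5.100"
hash=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)   # toy numeric hash of the source IP
idx=$(( hash % ${#servers[@]} ))                    # remainder selects the server
echo "client $ip -> ${servers[$idx]}"
```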

            }

hash-type consistent: dynamic consistent hashing (a hash ring)

        }

weight-based: dynamic

4.启动服务

uri: balances on the URI the user requests; it keeps a hash table (with its own hash-type), so whichever server handled a URI the first time continues to receive requests for that same URI.

# service nginx start

url_param: uses a request parameter (for example a user account) to send the same user's requests to the same server; also supports hash-type.

5. Configure keepalived

hdr: schedules based on a request header; also supports hash-type

On host A

request header

1. Install keepalived

response header

# yum -y install keepalived

hdr(host) format

2. Edit host A's configuration file /etc/keepalived/keepalived.conf as follows:

hdr(host) example: www.a.com

! Configuration File for keepalived

Consistent-hash load balancing

    global_defs {

global
log     127.0.0.1 local2
chroot   /var/lib/haproxy
pidfile   /var/run/haproxy.pid
maxconn   4000
user    haproxy
group    haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode          http
log           global
option         httplog
option         dontlognull
option http-server-close
option forwardfor    except 127.0.0.0/8
option         redispatch
retries         3
timeout http-request  10s
timeout queue      1m
timeout connect     10s
timeout client     1m
timeout server     1m
timeout http-keep-alive 10s
timeout check      10s
maxconn         3000
listen stats
bind *:1080
stats          enable
stats          hide-version
stats          realm haproxy\ stats
stats          auth admin:admin
stats          admin if TRUE
stats          uri /abc
frontend web
bind *:80
default_backend static
backend static
balance   source
hash-type  consistent
server   www.web1.com 172.16.6.1:80 check weight 3
server   www.web2.com 172.16.6.2:80 check weight 1

    notification_email {


        root@localhost


    }

5. ACL access control

    notification_email_from keepalived@localhost

frontend web
bind *:8080
default_backend static
acl abc src 172.16.5.100
redirect prefix http://172.16.5.16/def if abc

    smtp_server 127.0.0.1

When the client IP is 172.16.5.100, the request is redirected to http://172.16.5.16/def

    smtp_connect_timeout 30

acl must be paired with redirect prefix or redirect location

    router_id CentOS6


    vrrp_mcast_group4 224.0.100.39


    }

Official example: redirect the user's post-login URL to a secure HTTPS connection.

    vrrp_script chk_down {

acl clear   dst_port 80
acl secure   dst_port 8080
acl login_page url_beg  /login
acl logout   url_beg  /logout
acl uid_given url_reg  /login?userid=[^&]+
acl cookie_set hdr_sub(cookie) SEEN=1
redirect prefix  https://mysite.com set-cookie SEEN=1 if !cookie_set
redirect prefix  https://mysite.com      if login_page !secure
redirect prefix  http://mysite.com drop-query if login_page !uid_given
redirect location http://mysite.com/      if !login_page secure
redirect location / clear-cookie USERID=    if logout

        script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"


        interval 1

Blocking access

        weight -5

frontend web
bind *:8080
default_backend static
acl abc src 172.16.5.100
block if abc  # block access

    }


    vrrp_script chk_nginx {


        script "killall -0 nginx && exit 0 || exit 1"

Modify the original configuration to separate static and dynamic content

        interval 1

frontend web
bind *:80
acl url_static    path_beg    -i /static /images /javascript /stylesheets
# string form
acl url_static    path_reg    -i ^/static ^/images ^/javascript ^/stylesheets
# regular-expression form
acl url_static    path_end    -i .jpg .jpeg .gif .png .css .js
# string form
acl url_static    path_reg   -i \.jpg$ \.jpeg$ \.gif$ \.png$ \.css$ \.js$
# regular-expression form
# Prefer string matches over regular expressions where possible; string matching is faster.
use_backend static_servers     if url_static
default_backend dynamic_servers
backend static_servers
balance roundrobin
server imgsrv1 172.16.200.7:80 check maxconn 6000
server imgsrv2 172.16.200.8:80 check maxconn 6000
backend dynamic_servers
balance source
server websrv1 172.16.200.7:80 check maxconn 1000
server websrv2 172.16.200.8:80 check maxconn 1000
server websrv3 172.16.200.9:80 check maxconn 1000

        weight -5


        fall 2

HAProxy listen configuration example:

        rise 1

listen webfarm
bind 192.168.0.99:80
mode http
stats enable
stats auth someuser:somepassword
balance roundrobin
cookie JSESSIONID prefix
option httpclose
option forwardfor
option httpchk HEAD /check.txt HTTP/1.0
server webA 192.168.0.102:80 cookie A check
server webB 192.168.0.103:80 cookie B check

    }


    vrrp_instance ngx {

Comprehensive HAProxy configuration example

        state MASTER

global
pidfile /var/run/haproxy.pid
log 127.0.0.1 local0 info
defaults
mode http
clitimeout   600000
srvtimeout   600000
timeout connect 8000
stats enable
stats auth  admin:admin
stats uri /monitor
stats refresh 5s
option httpchk GET /status
retries 5
option redispatch
errorfile 503 /path/to/503.text.file
balance roundrobin  # each server is used in turns, according to assigned weight
frontend http
bind :80
monitor-uri  /haproxy # end point to monitor HAProxy status (returns 200)
acl api1 path_reg ^/api1/?
acl api2 path_reg ^/api2/?
use_backend api1 if api1
use_backend api2 if api2
backend api1
# option httpclose
server srv0 172.16.5.15:80 weight 1 maxconn 100 check inter 4000
server srv1 172.16.5.16:80 weight 1 maxconn 100 check inter 4000
server srv2 172.16.5.16:80 weight 1 maxconn 100 check inter 4000
backend api2
option httpclose
server srv01 172.16.5.18:80 weight 1 maxconn 50 check inter 4000

        interface eth1


        virtual_router_id 14


        priority 100


        advert_int 1

6. Highly available proxying with keepalived

        authentication {


            auth_type PASS

Topology diagram

            auth_pass MDQ41fTp


        }

        virtual_ipaddress {


            192.168.20.100/24 dev eth1

Plan:

        }

For the preparation, see earlier posts: essentially time synchronization, mutual SSH trust between the two machines, and hostnames that resolve to each other.

        track_script {


            chk_down

node1:

            chk_nginx

ip:172.16.5.15

        }

hostname:www.a.com

    }

node2

    vrrp_instance ngx2 {

ip:172.16.5.16

        state BACKUP

hostname:www.b.com

        interface eth1

The backend real servers are prepared separately

        virtual_router_id 15


        priority 98

Configure HAProxy

        advert_int 1


        authentication {

node1:# yum -y install haproxy
node2:# yum -y install haproxy
# cd /etc/haproxy
# mv haproxy.cfg haproxy.bak
# vim haproxy.cfg
global
log         127.0.0.1 local2
chroot      /var/lib/haproxy
pidfile     /var/run/haproxy.pid
maxconn     4000
user        haproxy
group       haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode                    http
log                     global
option                  httplog
option                  dontlognull
option http-server-close
option forwardfor       except 127.0.0.0/8 header X-Forward-For
option                  redispatch
retries                 3
timeout http-request    10s
timeout queue           1m
timeout connect         10s
timeout client          1m
timeout server          1m
timeout http-keep-alive 10s
timeout check           10s
maxconn                 3000
listen stats # a dedicated port for status management
bind *:1080
stats                   enable
stats                   hide-version
stats                   realm haproxy\ stats
stats                   auth admin:admin
stats                   admin if TRUE
stats                   uri /abc
frontend web
    bind *:80
    acl danymic path_end -i .php
    acl abc src 172.16.5.100
    block if abc
    use_backend php if danymic
    default_backend static
backend static
    balance     roundrobin
    server      www.web1.com 172.16.5.16:8080 check rise 2 fall 1 weight 1
    server      www.web2.com 172.16.5.15:8080 check rise 2 fall 1 weight 1
backend php
    balance roundrobin
    server    www.web3.com 172.16.6.1:80 check rise 2 fall 1 weight 1
    server    www.web4.com 172.16.6.2:80 check rise 2 fall 1 weight 1
# scp haproxy.cfg b:/etc/haproxy/

            auth_type PASS


            auth_pass XYZ41fTp

Configure keepalived

        }

node1

        virtual_ipaddress {

# yum -y install keepalived
# cd /etc/keepalived/
# vim keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_haproxy {
script "killall -0 haproxy"
interval 1
weight 2
}
#vrrp_script chk_mantaince_down {
#   script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
#   interval 1
#   weight 2
#}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 5
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 11111
}
virtual_ipaddress {
172.16.5.100/16
}
track_script {
chk_mantaince_down
chk_haproxy
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {
state MASTER
interface eth0
virtual_router_id 50
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 11111
}
virtual_ipaddress {
172.16.5.101/16
}
track_script {
#chk_mantaince_down   # its vrrp_script definition above is commented out
chk_haproxy
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
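Two mechanics in this file deserve a closer look. `vrrp_script chk_haproxy` runs `killall -0 haproxy`: signal 0 delivers nothing, but it still performs the kill(2) existence/permission check, so the exit status doubles as a liveness probe. While that script succeeds, keepalived adds its `weight` (2 here) to the instance's base `priority`, and the node with the higher effective priority holds the VIP. A runnable sketch of both ideas (the helper names are mine, not keepalived's):

```shell
#!/bin/sh
# Signal 0 sends nothing; kill(2) just checks that the process exists and is
# signalable, so the exit status is a liveness probe (same idea as killall -0).
pid_alive() {   # pid_alive <pid>
    kill -0 "$1" 2>/dev/null
}

# Toy model of keepalived's effective VRRP priority: the vrrp_script weight is
# added to the base priority only while the tracked script succeeds (exit 0).
effective_priority() {  # effective_priority <base> <weight> <script_exit_status>
    if [ "$3" -eq 0 ]; then
        echo $(( $1 + $2 ))
    else
        echo "$1"
    fi
}
```

With the numbers above, a BACKUP at priority 99 whose haproxy is alive (99 + 2 = 101) outranks a MASTER at 100 whose check fails (priority stays 100), so the VIP fails over.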

What this configuration file accomplishes: 1) the two VRRP instances form a dual-master model, mainly so that front-end DNS-based load balancing can use both nodes; 2) a single master/backup instance can provide high availability for a given service, on the premise that the service is started before keepalived and is monitored; 3) keepalived itself must also be monitored, which requires an additional script whose location must match the path referenced in notify_master "/etc/keepalived/notify.sh master".

Write the monitoring script for the keepalived service itself

# vim /etc/keepalived/notify.sh
#!/bin/bash
# Author: MageEdu <[email protected]>
# description: An example of notify script
#
vip=172.16.5.100
contact='[email protected]'
notify() {
mailsubject="`hostname` to be $1: $vip floating"
mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
echo $mailbody | mail -s "$mailsubject" $contact
}
case "$1" in
master)
notify master
/etc/rc.d/init.d/haproxy start
exit 0
;;
backup)
notify backup
/etc/rc.d/init.d/haproxy restart
exit 0
;;
fault)
notify fault
exit 0
;;
*)
echo "Usage: `basename $0` {master|backup|fault}"
exit 1
;;
esac

Note: this script hard-codes a VIP, but this experiment is a dual-master model with two VIPs. If convenience is all that matters, one is enough; to be exact, copy this script, change the VIP, and point the notify.sh paths of the other instance in the configuration file at the copy.
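As an alternative to maintaining two copies of notify.sh, the VIP can be passed as an argument so both VRRP instances share one file (a hypothetical variant, not the original script; mail(1) is replaced by echo so the sketch is self-contained):

```shell
#!/bin/bash
# Hypothetical notify.sh variant taking the VIP as a second argument, e.g.
#   notify_master "/etc/keepalived/notify.sh master 172.16.5.101"
# mail(1) is replaced by echo here so the sketch runs anywhere.
notify_state() {    # notify_state <master|backup|fault> <vip>
    case "$1" in
        master|backup|fault)
            echo "$(hostname) to be $1: $2 floating"
            ;;
        *)
            echo "Usage: notify_state {master|backup|fault} <vip>" >&2
            return 1
            ;;
    esac
}
```

Each instance's notify_master/notify_backup/notify_fault lines would then append its own VIP instead of referencing a per-VIP copy of the script.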

node2 is configured the same way, except that the MASTER/BACKUP states and the priorities of the two instances are swapped; the details are not repeated here.

Once the configuration is finished, start haproxy and keepalived and verify the setup.

# service haproxy start
# service keepalived start
node1
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a5:31:22 brd ff:ff:ff:ff:ff:ff
inet 172.16.5.15/16 brd 172.16.255.255 scope global eth0
inet 172.16.5.101/16 scope global secondary eth0
inet6 fe80::20c:29ff:fea5:3122/64 scope link
valid_lft forever preferred_lft forever
node2
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:cc:55:6d brd ff:ff:ff:ff:ff:ff
inet 172.16.5.16/16 brd 172.16.255.255 scope global eth0
inet 172.16.5.100/16 scope global secondary eth0
inet6 fe80::20c:29ff:fecc:556d/64 scope link
valid_lft forever preferred_lft forever
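The decisive line in each listing is the `secondary` address: node1 currently holds 172.16.5.101 and node2 holds 172.16.5.100, which is exactly the dual-master split. A small helper (hypothetical, for illustration) can check captured `ip addr` output for a VIP:

```shell
#!/bin/sh
# Hypothetical check: succeeds when the captured "ip addr" output carries the
# VIP. On a live node: holds_vip 172.16.5.100 "$(ip -o -4 addr show eth0)"
holds_vip() {   # holds_vip <vip> <ip-addr-output>
    printf '%s\n' "$2" | grep -q "inet $1/"
}
```

Running it against each node's output tells you at a glance which node owns which VIP, and after a failover the same check shows both VIPs on the surviving node.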

Verify the effect

########################### Load balancing achieved by the keepalived dual-master model ##################################

############################ Dynamic/static separation: static-page load balancing ############################

############################ Dynamic/static separation: dynamic-page load balancing ############################

************************************************** Visiting the page set up specifically for viewing the proxy's status

************************************************** After modifying the configuration file so that the denied IP is the client's own IP, the following page is returned:

frontend web
    bind *:80
    acl abc src 172.16.5.200
    block if abc
    default_backend static

172.16.5.200 is the IP address of my physical machine.

Host B (the Nginx-proxy setup) gets the same configuration with minor modifications; the parts that need changing are:

vrrp_instance ngx {
    state BACKUP
    priority 98
}

vrrp_instance ngx2 {
    state MASTER
    priority 100
}

VI. Simulate failures and verify the results

1. Start the keepalived service on both Nginx proxies

# service keepalived start

2. Visit 192.168.20.100; the back-end web servers should respond to requests in round-robin order.

3. Visit 192.168.20.200; the back-end web servers should likewise respond in round-robin order.

4. Shut down one of the back-end web servers and visit 192.168.20.100 or 192.168.20.200; only the other, still-running web server will respond.

5. Shut down both back-end web servers; visiting 192.168.20.100 or 192.168.20.200 will now be answered by the sorry_server defined on the master server in the Nginx proxies.

6. Stop the nginx service on one Nginx proxy; the backup server will add the IP address to itself and continue serving, so visiting 192.168.20.100 or 192.168.20.200 shows no noticeable change.

The above is my summary; where it falls short, corrections are welcome.

This article comes from the "秋风颂" blog; please be sure to keep this attribution.

Permanent link to this article: http://www.linuxidc.com/Linux/2017-05/143738.htm