Keepalived high availability


keepalived overview

keepalived official site
Keepalived was originally designed for the LVS load-balancing software, to manage and monitor the state of the individual service nodes in an LVS cluster; VRRP support for high availability was added later. As a result, besides managing LVS, Keepalived can also serve as a high-availability solution for other services such as Nginx, HAProxy and MySQL.

Keepalived implements its high-availability function mainly through VRRP. VRRP, short for Virtual Router Redundancy Protocol, was created to remove the single point of failure of static routing: it keeps the network as a whole running without interruption when individual nodes go down.

Keepalived can therefore configure and manage LVS, health-check the nodes behind LVS, and provide failover for system network services.

Key functions of keepalived

Keepalived has three important functions:

managing the LVS load-balancing software
health-checking the nodes of an LVS cluster
providing high availability (failover) for system network services

How keepalived works

The two members of a Keepalived high-availability pair communicate via VRRP, so let's start with VRRP:

  1. VRRP, short for Virtual Router Redundancy Protocol, was created to remove the single point of failure of static routing.
  2. VRRP hands the routing task to one particular VRRP router through an election mechanism.
  3. VRRP uses IP multicast (default multicast address 224.0.0.18) for communication between the members of a high-availability pair.
  4. In operation, the master node sends packets and the backup node receives them; when the backup stops receiving the master's packets, it starts its takeover procedure and takes over the master's resources. There can be several backups, chosen by priority election, but in day-to-day Keepalived operations a single pair is the norm.
  5. VRRP can encrypt its traffic, but the Keepalived project still recommends configuring the authentication type and password in plain text.
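The multicast advertisements described above can be watched directly on either node. A minimal sketch, assuming tcpdump is installed and the interface is ens33 as in the lab below (it needs root and a live VRRP pair):

```shell
# Watch VRRP advertisements on the 224.0.0.18 multicast group.
# On a healthy pair only the current master sends these, once per
# advert_int (1 second by default); each packet shows vrid and prio.
tcpdump -i ens33 -nn host 224.0.0.18
```

If the backup also starts sending advertisements, both nodes believe they are master, which usually points at a firewall blocking VRRP (protocol 112).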

With VRRP covered, here is how the Keepalived service itself works:

Keepalived peers communicate via VRRP, and VRRP decides master and backup through an election in which the master holds the higher priority. While both are healthy, the master therefore owns all the resources and the backup waits; when the master dies, the backup takes over the master's resources and serves in its place.
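The election rule in the paragraph above (highest priority wins) can be sketched in plain shell; the hostnames and priorities mirror the lab that follows and are otherwise arbitrary:

```shell
#!/bin/sh
# Simplified VRRP election: the candidate with the highest priority
# becomes master (real VRRP also breaks ties by highest primary IP).
candidates="master:100
backup:90"
winner=$(printf '%s\n' "$candidates" | sort -t: -k2,2 -nr | head -n1 | cut -d: -f1)
echo "$winner"    # prints: master
```

This is also why the lab gives the master `priority 100` and the backup `priority 90`.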

Between Keepalived peers, only the server acting as master keeps sending VRRP advertisements to tell the backup it is still alive, and the backup will not preempt it. When the master becomes unavailable, i.e. the backup no longer hears its advertisements, the backup starts the relevant services and takes over the resources, keeping the business running. Failover can complete in under one second.

Using keepalived for nginx load-balancer high availability

Environment:

OS          Hostname  IP
CentOS 8.5  master    192.168.222.138
CentOS 8.5  backup    192.168.222.139

The high-availability virtual IP (VIP) for this setup is 192.168.222.133.

Installing keepalived

Aliyun mirror site
Set up the master:

Disable the firewall:

[root@master ~]# systemctl stop firewalld.service
[root@master ~]# vim /etc/selinux/config
SELINUX=disabled
[root@master ~]# setenforce 0
[root@master ~]# systemctl disable --now firewalld.service
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Configure a network repository:

[root@master ~]# dnf -y install wget
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@master yum.repos.d]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Install the epel repository:

[root@master yum.repos.d]# dnf install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
[root@master yum.repos.d]# sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
[root@master yum.repos.d]# sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
[root@master yum.repos.d]# ls
CentOS-Base.repo   epel-next-testing.repo  epel-playground.repo       epel-testing.repo
epel-modular.repo  epel-next.repo          epel-testing-modular.repo  epel.repo

Search for keepalived:

[root@master yum.repos.d]# cd
[root@master ~]# dnf list all | grep keepalived
Failed to set locale, defaulting to C.UTF-8
Module yaml error: Unexpected key in data: static_context [line 9 col 3]   // this warning is printed eight times
keepalived.x86_64    2.1.5-6.el8    AppStream

Install keepalived:

[root@master ~]# dnf -y install keepalived

Check the configuration file:

[root@master ~]# ls /etc/keepalived/
keepalived.conf

List the files installed by the package:

[root@master ~]# rpm -ql keepalived
/etc/keepalived                              // configuration directory
/etc/keepalived/keepalived.conf              // main configuration file
/etc/sysconfig/keepalived
/usr/bin/genhash
/usr/lib/.build-id
/usr/lib/.build-id/0a
/usr/lib/.build-id/0a/410997e11c666114ca6d785e58ff0cc248744e
/usr/lib/.build-id/6f
/usr/lib/.build-id/6f/ba0d6bad6cb5ff7b074e703849ed93bebf4a0f
/usr/lib/systemd/system/keepalived.service   // service unit file
/usr/libexec/keepalived
/usr/sbin/keepalived
/usr/share/doc/keepalived
/usr/share/doc/keepalived/AUTHOR
/usr/share/doc/keepalived/CONTRIBUTORS
/usr/share/doc/keepalived/COPYING
/usr/share/doc/keepalived/ChangeLog
/usr/share/doc/keepalived/README
/usr/share/doc/keepalived/TODO
/usr/share/doc/keepalived/keepalived.conf.HTTP_GET.port
/usr/share/doc/keepalived/keepalived.conf.IPv6
/usr/share/doc/keepalived/keepalived.conf.PING_CHECK
/usr/share/doc/keepalived/keepalived.conf.SMTP_CHECK
/usr/share/doc/keepalived/keepalived.conf.SSL_GET
/usr/share/doc/keepalived/keepalived.conf.SYNOPSIS
/usr/share/doc/keepalived/keepalived.conf.UDP_CHECK
/usr/share/doc/keepalived/keepalived.conf.conditional_conf
/usr/share/doc/keepalived/keepalived.conf.fwmark
/usr/share/doc/keepalived/keepalived.conf.inhibit
/usr/share/doc/keepalived/keepalived.conf.misc_check
/usr/share/doc/keepalived/keepalived.conf.misc_check_arg
/usr/share/doc/keepalived/keepalived.conf.quorum
/usr/share/doc/keepalived/keepalived.conf.sample
/usr/share/doc/keepalived/keepalived.conf.status_code
/usr/share/doc/keepalived/keepalived.conf.track_interface
/usr/share/doc/keepalived/keepalived.conf.virtual_server_group
/usr/share/doc/keepalived/keepalived.conf.virtualhost
/usr/share/doc/keepalived/keepalived.conf.vrrp
/usr/share/doc/keepalived/keepalived.conf.vrrp.localcheck
/usr/share/doc/keepalived/keepalived.conf.vrrp.lvs_syncd
/usr/share/doc/keepalived/keepalived.conf.vrrp.routes
/usr/share/doc/keepalived/keepalived.conf.vrrp.rules
/usr/share/doc/keepalived/keepalived.conf.vrrp.scripts
/usr/share/doc/keepalived/keepalived.conf.vrrp.static_ipaddress
/usr/share/doc/keepalived/keepalived.conf.vrrp.sync
/usr/share/man/man1/genhash.1.gz
/usr/share/man/man5/keepalived.conf.5.gz
/usr/share/man/man8/keepalived.8.gz
/usr/share/snmp/mibs/KEEPALIVED-MIB.txt
/usr/share/snmp/mibs/VRRP-MIB.txt
/usr/share/snmp/mibs/VRRPv3-MIB.txt

Install keepalived on the backup server in the same way

Disable the firewall:

[root@backup ~]# systemctl stop firewalld.service
[root@backup ~]# vim /etc/selinux/config
SELINUX=disabled
[root@backup ~]# setenforce 0
[root@backup ~]# systemctl disable --now firewalld.service
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Configure a network repository:

[root@backup ~]# dnf -y install wget
[root@backup ~]# cd /etc/yum.repos.d/
[root@backup yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@backup yum.repos.d]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Install the epel repository:

[root@backup yum.repos.d]# dnf install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
[root@backup yum.repos.d]# sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
[root@backup yum.repos.d]# sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
[root@backup yum.repos.d]# ls
CentOS-Base.repo   epel-next-testing.repo  epel-playground.repo       epel-testing.repo
epel-modular.repo  epel-next.repo          epel-testing-modular.repo  epel.repo

Search for keepalived:

[root@backup yum.repos.d]# cd
[root@backup ~]# dnf list all | grep keepalived
keepalived.x86_64    2.1.5-6.el8    AppStream    // same locale and module warnings as on master

Install keepalived:

[root@backup ~]# dnf -y install keepalived

Check the configuration file:

[root@backup ~]# ls /etc/keepalived/
keepalived.conf

List the files installed by the package:

[root@backup ~]# rpm -ql keepalived
// same file list as on the master node

Install nginx on both master and backup

Install nginx on master

[root@master ~]# dnf -y install nginx
[root@master ~]# cd /usr/share/nginx/html/
[root@master html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@master html]# echo 'master' > index.html
[root@master html]# systemctl start nginx
[root@master html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*
LISTEN     0          128                      [::]:111                    [::]:*
LISTEN     0          128                      [::]:80                     [::]:*
LISTEN     0          128                      [::]:22                     [::]:*
[root@master html]# systemctl enable nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
// nginx must be enabled at boot on the master node

Install nginx on backup

[root@backup ~]# dnf -y install nginx
[root@backup ~]# cd /usr/share/nginx/html/
[root@backup html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@backup html]# echo 'backup' > index.html
[root@backup html]# systemctl start nginx
[root@backup html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*
LISTEN     0          128                      [::]:22                     [::]:*
LISTEN     0          128                      [::]:80                     [::]:*
// nginx does not need to be enabled at boot on the backup node

Try accessing the pages in a browser to make sure the nginx service on master can be reached normally.
[screenshots of the browser tests]

keepalived configuration

Configure the master keepalived

[root@master html]# cd /etc/keepalived/
[root@master keepalived]# ls
keepalived.conf
[root@master keepalived]# mv keepalived.conf{,-bak}
[root@master keepalived]# ls
keepalived.conf-bak                // back up the original configuration file
[root@master keepalived]# dnf -y install vim
[root@master keepalived]# vim keepalived.conf    // write a new configuration file
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_instance VI_1 {        // this block must match on master and backup
    state BACKUP
    interface ens33         // network interface
    virtual_router_id 51
    priority 100            // higher than on the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu  // password (can be randomly generated)
    }
    virtual_ipaddress {
        192.168.222.133     // high-availability virtual IP (VIP)
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.138 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master keepalived]# ls
keepalived.conf  keepalived.conf-bak
[root@master keepalived]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@master keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:f6:83:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef6:8357/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
// at this point keepalived on the backup node has not been started yet
[root@master keepalived]# scp keepalived.conf 192.168.222.139:/etc/keepalived
The authenticity of host '192.168.222.139 (192.168.222.139)' can't be established.
ECDSA key fingerprint is SHA256:anVVbTlEIzA1E8rB7IbLzaf7t9oQjB0qFP6Dd/ijnJI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.222.139' (ECDSA) to the list of known hosts.
[email protected]'s password: 
keepalived.conf                                                    100%  875   905.2KB/s   00:00
// copy the new configuration file to the backup node: master and backup use
// almost the same file, only a few values need to change
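Before relying on the service it is worth checking that keepalived actually accepted the file and entered the expected state. A hedged sketch (keepalived 2.x ships a --config-test mode; filtering the journal is just one way to see the VRRP state transitions):

```shell
# Syntax-check the configuration without starting the daemon
keepalived -t -f /etc/keepalived/keepalived.conf

# Confirm the daemon is running and look for the state it entered
systemctl status keepalived --no-pager
journalctl -u keepalived --no-pager | grep -i 'entering.*state'

# The VIP should now be attached to ens33 on the master
ip addr show dev ens33 | grep 192.168.222.133
```

If `keepalived -t` reports errors, fix them before `systemctl enable --now`; a daemon started with a broken file can silently fall back to defaults.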

Configure the backup keepalived

[root@backup html]# cd /etc/keepalived/
[root@backup keepalived]# ls
keepalived.conf
[root@backup keepalived]# mv keepalived.conf{,-bak}
[root@backup keepalived]# ls
keepalived.conf-bak                // back up the original configuration file
[root@backup keepalived]# dnf -y install vim
[root@backup keepalived]# ls       // the configuration file copied from the master has arrived
keepalived.conf  keepalived.conf-bak
[root@backup keepalived]# vim keepalived.conf    // adjust it for the backup node
[root@backup keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02
}

vrrp_instance VI_1 {        // this block must match on master and backup
    state BACKUP
    interface ens33         // network interface
    virtual_router_id 51
    priority 90             // lower than on the master node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu  // password (can be randomly generated)
    }
    virtual_ipaddress {
        192.168.222.133     // high-availability virtual IP (VIP)
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.138 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup keepalived]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.

Check which node holds the VIP

On MASTER

[root@master keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:f6:83:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef6:8357/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
// the master node holds the VIP

On BACKUP

[root@backup keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31:af:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe31:aff9/64 scope link
       valid_lft forever preferred_lft forever
// the backup node does not hold the VIP

Test

Stop the keepalived service on master, start the nginx and keepalived services on backup, then check which node is master.
master:

[root@master keepalived]# systemctl stop keepalived.service

backup:

[root@backup keepalived]# systemctl start nginx.service
[root@backup keepalived]# systemctl start keepalived.service
[root@backup keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31:af:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe31:aff9/64 scope link
       valid_lft forever preferred_lft forever

[screenshot of the browser test]
// backup is now the master
Then start the keepalived service on master again and check which node is master.
master:

[root@master keepalived]# systemctl start keepalived.service
[root@master keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:f6:83:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef6:8357/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff

backup:

[root@backup keepalived]# systemctl stop nginx.service   // nginx on backup must be stopped for this test
[root@backup keepalived]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*
LISTEN     0          128                      [::]:22                     [::]:*
[root@backup keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31:af:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe31:aff9/64 scope link
       valid_lft forever preferred_lft forever

[screenshot of the browser test]
// master is the master again

Have keepalived monitor the nginx load balancer

keepalived monitors the state of the nginx load balancer through a script.
Write the scripts on master:

[root@master keepalived]# cd
[root@master ~]# mkdir /scripts
[root@master ~]# cd /scripts/
[root@master scripts]# vim check_nginx.sh
[root@master scripts]# cat check_nginx.sh
#!/bin/bash
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -lt 1 ];then
    systemctl stop keepalived
fi
[root@master scripts]# chmod +x check_nginx.sh
[root@master scripts]# ll
total 4
-rwxr-xr-x. 1 root root 142 Oct  8 23:21 check_nginx.sh
[root@master scripts]# vim notify.sh
[root@master scripts]# cat notify.sh
#!/bin/bash
case "$1" in
    master)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
        if [ $nginx_status -lt 1 ];then
            systemctl start nginx
        fi
    ;;
    backup)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
        if [ $nginx_status -gt 0 ];then
            systemctl stop nginx
        fi
    ;;
    *)
        echo "Usage:$0 master|backup VIP"
    ;;
esac
[root@master scripts]# chmod +x notify.sh
[root@master scripts]# ll
total 8
-rwxr-xr-x. 1 root root 142 Oct  8 23:21 check_nginx.sh
-rwxr-xr-x. 1 root root 383 Oct  8 23:31 notify.sh
[root@master scripts]# scp check_nginx.sh 192.168.222.139:/scripts/
[email protected]'s password: 
check_nginx.sh                                                     100%  142   113.6KB/s   00:00
[root@master scripts]# scp notify.sh 192.168.222.139:/scripts/
[email protected]'s password: 
notify.sh                                                          100%  383   244.7KB/s   00:00
// copy the scripts to the directory created beforehand on the backup node
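The process-count check above only proves an nginx process exists, not that it is actually serving requests. A hypothetical alternative body for /scripts/check_nginx.sh that probes the listening port instead (curl and the 127.0.0.1:80 address are assumptions about this lab, not part of the original setup):

```shell
#!/bin/bash
# Stop keepalived (and thereby release the VIP) if the local nginx
# no longer answers HTTP on port 80 within 2 seconds.
if ! curl -fsS -o /dev/null --max-time 2 http://127.0.0.1:80/; then
    systemctl stop keepalived
fi
```

The trade-off: a port probe catches a hung worker that a `ps` count would miss, at the cost of one local HTTP request every `interval` seconds.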

The scripts on backup

[root@backup keepalived]# cd
[root@backup ~]# mkdir /scripts
[root@backup ~]# cd /scripts/
[root@backup scripts]# ll
total 8
-rwxr-xr-x. 1 root root 142 Oct  8 23:39 check_nginx.sh
-rwxr-xr-x. 1 root root 383 Oct  8 23:36 notify.sh

Add the monitoring script to the keepalived configuration

Configure the master keepalived

[root@master scripts]# cd
[root@master ~]# vim /etc/keepalived/keepalived.conf
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_script nginx_check {   // add this block
    script "/scripts/check_nginx.sh"
    interval 5
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    track_script {          // add this block
        nginx_check
    }
    notify_master "/scripts/notify.sh master 192.168.222.133"
    notify_backup "/scripts/notify.sh backup 192.168.222.133"
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.138 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master ~]# systemctl restart keepalived.service
[root@master ~]# systemctl restart nginx.service

Configure the backup keepalived
The backup does not need to check whether nginx is healthy: it starts nginx when promoted to MASTER and stops it when demoted to BACKUP.

[root@backup scripts]# cd
[root@backup ~]# vim /etc/keepalived/keepalived.conf
[root@backup ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    notify_master "/scripts/notify.sh master 192.168.222.133"   // add
    notify_backup "/scripts/notify.sh backup 192.168.222.133"   // add
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.138 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup ~]# systemctl restart keepalived.service
[root@backup ~]# systemctl restart nginx.service

Test

State during normal operation

master:
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:f6:83:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef6:8357/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
[root@master ~]# curl 192.168.222.133
master

backup:
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31:af:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe31:aff9/64 scope link
       valid_lft forever preferred_lft forever

Stop nginx on master:
[root@master ~]# systemctl stop nginx.service
[root@master ~]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*
LISTEN     0          128                      [::]:111                    [::]:*
LISTEN     0          128                      [::]:22                     [::]:*

State after nginx is stopped on master

master:
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:f6:83:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef6:8357/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff

backup:
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31:af:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe31:aff9/64 scope link
       valid_lft forever preferred_lft forever
[root@backup ~]# curl 192.168.222.133
backup
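To watch the failover above happen in real time, a simple polling loop run from a third machine makes the switch of the served page visible; the VIP and the one-second interval mirror the setup in this article:

```shell
#!/bin/bash
# Poll the VIP once per second; the served page flips from
# "master" to "backup" when keepalived fails over.
while true; do
    printf '%s  %s\n' "$(date +%T)" "$(curl -s --max-time 1 192.168.222.133)"
    sleep 1
done
```

Interrupt with Ctrl-C; a gap of more than one or two lines between "master" and "backup" responses would indicate a slower-than-expected takeover.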