Automated Deployment of LVS + keepalived with Ansible

Summary


Reference: https://www.cnblogs.com/zhaoya2019/archive/2020/03/31/12609142.html

The ansible-playbook entry file:

(screenshot: the ansible-playbook entry file)
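The entry file itself only survives as a screenshot, so here is a minimal sketch of what it plausibly looks like: one play per host group, each applying the matching role (the file name and the group names nfs/web/lvs are assumptions, not taken from the original):

[root@zqf ~]# cat /etc/ansible/lvs_keepalived.yaml
# Hypothetical entry playbook: apply each role to its inventory group.
- hosts: nfs                # shared-storage server
  remote_user: root
  roles:
    - nfs
- hosts: web                # real servers: httpd plus LVS-DR client setup
  remote_user: root
  roles:
    - web
- hosts: lvs                # directors: ipvsadm plus keepalived
  remote_user: root
  roles:
    - lvs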

Three roles are defined according to host function; the directory structure is as follows.

(screenshots: role directory structure under /etc/ansible/roles)
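The directory listing was also posted as images; reconstructed from the file contents quoted below, the layout is:

/etc/ansible/roles/
├── lvs/
│   ├── files/              # epel.repo, ifcfg-ens33:0, sysctl.conf1
│   ├── handlers/main.yaml
│   ├── tasks/main.yaml
│   ├── templates/keepalived.conf.j2
│   └── vars/main.yaml
├── nfs/
│   ├── files/exports
│   ├── handlers/main.yaml
│   ├── tasks/main.yaml
│   └── vars/main.yaml
└── web/
    ├── files/              # ifcfg-lo:0, index.html, sysctl.conf
    ├── handlers/main.yaml
    ├── tasks/main.yaml
    └── vars/main.yaml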

NFS role files

[root@zqf ~]# cat /etc/ansible/roles/nfs/files/exports
/data 192.168.1.0/24(rw,sync)

[root@zqf ~]# cat /etc/ansible/roles/nfs/handlers/main.yaml
- name: reload nfs
  service: name=nfs state=reloaded

[root@zqf ~]# cat /etc/ansible/roles/nfs/tasks/main.yaml
- name: yum install nfs services
  yum: name=nfs-utils state=installed
- name: yum install rpcbind
  yum: name=rpcbind state=installed
- name: create share directory
  file: path={{ share_path }} owner=nfsnobody group=nfsnobody state=directory recurse=yes
  notify: reload nfs
- name: nfs configure
  copy: src=exports dest=/etc/
  notify: reload nfs
- name: start nfs service
  service: name=nfs state=started enabled=yes
- name: start rpcbind
  service: name=rpcbind state=started enabled=yes

[root@zqf ~]# cat /etc/ansible/roles/nfs/vars/main.yaml
share_path: /data
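After this role runs, the export can be verified from any host in 192.168.1.0/24 (192.168.1.135 is the NFS server address used by the web role's mount task; the output below is what you would expect, not captured from the original):

[root@web1 ~]# showmount -e 192.168.1.135
Export list for 192.168.1.135:
/data 192.168.1.0/24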

Web role files

[root@zqf ~]# cat /etc/ansible/roles/web/files/ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.1.200
NETMASK=255.255.255.255
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback

[root@zqf ~]# cat /etc/ansible/roles/web/files/index.html
this is web1

[root@zqf ~]# cat /etc/ansible/roles/web/files/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

[root@zqf ~]# cat /etc/ansible/roles/web/handlers/main.yaml
- name: reload httpd
  service: name=httpd state=reloaded
- name: restart network
  service: name=network state=restarted

[root@zqf ~]# cat /etc/ansible/roles/web/tasks/main.yaml
- name: install httpd
  yum: name=httpd state=installed
- name: write the index
  copy: src=index.html dest=/var/www/html
  notify: reload httpd
- name: start httpd
  service: name=httpd state=started enabled=yes
- name: yum install nfs services
  yum: name=nfs-utils state=installed
- name: yum install rpcbind
  yum: name=rpcbind state=installed
- name: start nfs service
  service: name=nfs state=started enabled=yes
- name: start rpcbind
  service: name=rpcbind state=started enabled=yes
- name: create mount directory
  file: path=/var/www/html/nfs state=directory
- name: mount nfs
  mount: src=192.168.1.135:{{ share_path }} path=/var/www/html/nfs fstype=nfs state=mounted
  notify: reload httpd
- name: stop NetworkManager
  service: name=NetworkManager state=stopped
- name: bind loopback
  copy: src=ifcfg-lo:0 dest=/etc/sysconfig/network-scripts/
- name: start network
  shell: systemctl restart network
- name: turn off arp
  copy: src=sysctl.conf dest=/etc/sysctl.conf
- name: load sysctl configuration
  shell: sysctl -p
- name: install net-tools
  yum: name=net-tools state=installed
- name: add route record
  shell: route add -host 192.168.1.200 dev lo:0
- name: route add local
  shell: echo "route add -host 192.168.1.200 dev lo:0" >> /etc/rc.local

[root@zqf ~]# cat /etc/ansible/roles/web/vars/main.yaml
share_path: /data
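A note on what this role is doing: in LVS-DR mode the real servers must hold the VIP locally (on lo:0 with a /32 netmask) so they will accept packets addressed to it, while arp_ignore=1/arp_announce=2 stop them from answering ARP queries for the VIP, leaving the director as the only machine that responds. A spot check after the role runs might look like this (hostname and output are illustrative):

[root@web1 ~]# ip addr show lo | grep 192.168.1.200
    inet 192.168.1.200/32 brd 192.168.1.200 scope global lo:0
[root@web1 ~]# sysctl net.ipv4.conf.lo.arp_ignore net.ipv4.conf.lo.arp_announce
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@web1 ~]# route -n | grep 192.168.1.200
192.168.1.200   0.0.0.0         255.255.255.255 UH    0      0        0 lo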

LVS role files

[root@zqf ~]# cat /etc/ansible/roles/lvs/files/epel.repo
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirrors.aliyun.com/epel/7/$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=http://mirrors.aliyun.com/epel/7/$basearch/debug
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=http://mirrors.aliyun.com/epel/7/SRPMS
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0

[root@zqf ~]# cat /etc/ansible/roles/lvs/files/ifcfg-ens33:0
TYPE="Ethernet"
DEVICE="ens33:0"
ONBOOT="yes"
IPADDR=192.168.1.200
NETMASK=255.255.255.0

[root@zqf ~]# cat /etc/ansible/roles/lvs/files/sysctl.conf1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0

[root@zqf ~]# cat /etc/ansible/roles/lvs/handlers/main.yaml
- name: reload keepalived
  shell: systemctl restart keepalived

[root@zqf ~]# cat /etc/ansible/roles/lvs/tasks/main.yaml
- name: stop NetworkManager
  service: name=NetworkManager state=stopped
- name: bind vip
  copy: src=ifcfg-ens33:0 dest=/etc/sysconfig/network-scripts/
- name: sysctl
  copy: src=sysctl.conf1 dest=/etc/sysctl.conf
- name: sysctl -p
  shell: sysctl -p
- name: epel
  copy: src=epel.repo dest=/etc/yum.repos.d/
- name: install ipvsadm
  yum: name=ipvsadm state=installed
- name: load to kernel
  shell: modprobe ip_vs
- name: ipvsadm configure
  shell: ipvsadm -A -t {{ vip }}:80 -s rr
- name: add real server 1
  shell: ipvsadm -a -t {{ vip }}:80 -r {{ rs1 }}:80 -g
- name: add real server 2
  shell: ipvsadm -a -t {{ vip }}:80 -r {{ rs2 }}:80 -g
- name: restart network
  shell: systemctl restart network
- name: install keepalived
  yum: name=keepalived state=installed
- name: configure keepalived
  template: src=keepalived.conf.j2 dest=/etc/keepalived/keepalived.conf
  notify: reload keepalived
- name: start keepalived
  service: name=keepalived state=started enabled=yes

[root@zqf ~]# cat /etc/ansible/roles/lvs/templates/keepalived.conf.j2
! Configuration File for keepalived

global_defs {
    router_id R1    # router name (must not repeat within a group)
}

vrrp_instance VI_1 {
    {% if ds_master == ansible_hostname %}
    state MASTER
    priority 80
    {% elif ds_slave == ansible_hostname %}
    state BACKUP
    priority 47
    {% endif %}    # pick MASTER/BACKUP and priority per host
    interface ens33
    virtual_router_id 66    # group ID; identical within one group, which has one master and one or more backups
    advert_int 1            # heartbeat interval (seconds) for checking that the peer is alive
    authentication {        # password for the liveness check
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.200       # cluster VIP
    }
}

virtual_server 192.168.1.200 80 {    # cluster address and port
    delay_loop 2       # health-check interval (seconds)
    lb_algo rr         # round-robin scheduling
    lb_kind DR         # LVS forwarding mode
    protocol TCP       # protocol
    real_server 192.168.1.131 80 {    # real server IP and port
        weight 1       # weight
        TCP_CHECK {    # health-check method
            connect_port 80
            connect_timeout 3       # connection timeout (seconds)
            nb_get_retry 3          # retry count
            delay_before_retry 4    # delay between retries (seconds)
        }
    }
    real_server 192.168.1.132 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
}

[root@zqf ~]# cat /etc/ansible/roles/lvs/vars/main.yaml
vip: 192.168.1.200
rs1: 192.168.1.131
rs2: 192.168.1.132
ds_master: ds1
ds_slave: ds2
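Once the role finishes on a director, the virtual service and both real servers should show up in the IPVS table (the listing below is illustrative, not from the original post):

[root@ds1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.200:80 rr
  -> 192.168.1.131:80             Route   1      0          0
  -> 192.168.1.132:80             Route   1      0          0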

Execution results
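Running the playbook requires the hosts to be grouped in the inventory. A minimal sketch, assuming the group names used in the entry playbook above; the nfs and web addresses come from the role variables, while the lvs entries (the ds1/ds2 hostnames from the keepalived vars) are assumed to be resolvable:

[root@zqf ~]# cat /etc/ansible/hosts
[nfs]
192.168.1.135

[web]
192.168.1.131
192.168.1.132

[lvs]
ds1
ds2
[root@zqf ~]# ansible-playbook /etc/ansible/lvs_keepalived.yaml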

(screenshots: ansible-playbook run output)

Testing the results

The VIP is currently held by ds1:

(screenshot: ip addr on ds1 showing 192.168.1.200)

ds2, as the backup, does not hold it:

(screenshot: ip addr on ds2)
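A quick way to confirm which director holds the VIP (commands only; output omitted):

[root@ds1 ~]# ip addr show ens33 | grep 192.168.1.200    # on the master: the VIP is listed
[root@ds2 ~]# ip addr show ens33 | grep 192.168.1.200    # on the backup: no output while ds1 is healthy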

Accessing the VIP:

(screenshot: requests to http://192.168.1.200)
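With lb_algo rr, repeated requests to the VIP should alternate between the two real servers. Note that the role as written copies the same index.html ("this is web1") to every web host, so to actually see the alternation each real server's page needs to identify itself; the responses below assume that:

[root@zqf ~]# for i in 1 2 3 4; do curl -s http://192.168.1.200/; done
this is web1
this is web2
this is web1
this is web2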

Check that the shared storage mounted correctly:

(screenshot: NFS mount check)
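The web role mounts 192.168.1.135:/data at /var/www/html/nfs on each real server, so the mount can be confirmed like this (sizes illustrative):

[root@web1 ~]# df -hT /var/www/html/nfs
Filesystem           Type  Size  Used Avail Use% Mounted on
192.168.1.135:/data  nfs   17G   1.2G   16G   8% /var/www/html/nfs

Any file dropped into /data on the NFS server is then served identically by both real servers under http://192.168.1.200/nfs/.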

After stopping the web2 service, the site stays up:

(screenshot: requests to the VIP with web2 down)
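Keepalived's TCP_CHECK probes port 80 on each real server every delay_loop seconds, so a dead backend is removed from the pool automatically. A sketch of the test (192.168.1.132 is rs2 from the role vars):

[root@web2 ~]# systemctl stop httpd        # take web2 out of service
[root@ds1 ~]# ipvsadm -Ln                  # once the health check fails, 192.168.1.132 is gone from the table
[root@zqf ~]# curl http://192.168.1.200/   # every request now lands on web1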

After shutting down the ds1 master director, the VIP floats over to the ds2 backup:

(screenshot: the VIP appearing on ds2)
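VRRP failover can be exercised by stopping keepalived (or powering off) on the master; the backup should claim the VIP within a few advert_int heartbeats. A sketch:

[root@ds1 ~]# systemctl stop keepalived                  # simulate a master failure
[root@ds2 ~]# ip addr show ens33 | grep 192.168.1.200    # the VIP now appears on the backup
[root@zqf ~]# curl http://192.168.1.200/                 # service continues through ds2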