Chapter 1: LVS (Load Balancing) + keepalived (High Availability)
Note: The lab that follows is a fresh start. The load balancer is no longer configured by hand with the ipvsadm tool, because keepalived manages the LVS rules itself.
1.1 Load Balancer Configuration
1.1.1 Install Keepalived on the Load Balancer Servers
1.1.1.1 Clear Existing Load Balancing Rules
[root@lb01 ~]# ipvsadm -C
[root@lb01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
1.1.1.2 Install keepalived
[root@lb01 ~]# yum install -y keepalived
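The same package is also needed on lb02, since its configuration file is edited later. Optionally (not part of the original steps), you may also want the service to start on boot:
[root@lb02 ~]# yum install -y keepalived
[root@lb01 ~]# systemctl enable keepalived
[root@lb02 ~]# systemctl enable keepalived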
1.1.2 Configure a Single keepalived Instance
1.1.2.1 lb01 Configuration File
[root@lb01 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_01
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.3/24
    }
}
virtual_server 10.0.0.3 80 {      # Virtual server: the VIP defined above, followed by a space and the port
    delay_loop 6                  # Health check interval, in seconds
    lb_algo wrr                   # Scheduling algorithm; wrr, rr and wlc are commonly used
    lb_kind DR                    # Forwarding mode; the usual options are DR, NAT and TUN
    nat_mask 255.255.255.0
    persistence_timeout 50        # Session persistence timeout. Persistence keeps a client on the same real server, so a user who just logged in on server 1 is not suddenly switched to server 2
    protocol TCP                  # Forwarding protocol, TCP or UDP; TCP is what is normally used
    real_server 10.0.0.7 80 {     # Real server: IP and port
        weight 1                  # Weight: the larger the value, the more requests the server receives
        TCP_CHECK {               # Use a TCP check to determine the real server's health
            connect_timeout 8     # Connection timeout
            nb_get_retry 3        # Number of retries
            delay_before_retry 3  # Delay before retrying
            connect_port 80       # Port to check
        }
    }
    real_server 10.0.0.8 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
1.1.2.2 lb02 Configuration File
[root@lb02 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_02
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.3/24
    }
}
virtual_server 10.0.0.3 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 10.0.0.7 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 10.0.0.8 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
1.1.3 Verify the Configuration
[root@lb01 ~]# systemctl start keepalived
[root@lb01 ~]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7b:8b:0f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.5/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.3/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe7b:8b0f/64 scope link
       valid_lft forever preferred_lft forever
[root@lb01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.3:80 wrr persistent 50
  -> 10.0.0.7:80                  Route   1      0          0
  -> 10.0.0.8:80                  Route   1      0          0
1.1.3.1 Single-Instance Configuration Diff
[root@lb01 ~]# diff keepalived-lb01.conf keepalived-lb02.conf
2c2
< router_id LVS_01
---
> router_id LVS_02
6c6
< state MASTER
---
> state BACKUP
9c9
< priority 150
---
> priority 100
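As the diff shows, only router_id, state and priority differ between the two machines. A possible shortcut (assuming SSH access between the load balancers and lb02's address 10.0.0.6, as shown later) is to copy lb01's file to lb02 and edit just those three lines:
[root@lb01 ~]# scp /etc/keepalived/keepalived.conf 10.0.0.6:/etc/keepalived/keepalived.conf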
1.1.4 Multi-Instance Configuration
1.1.4.1 lb01 Configuration File
[root@lb01 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_01
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.3/24
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        10.0.0.4/24
    }
}
virtual_server 10.0.0.3 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 10.0.0.7 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 10.0.0.8 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
virtual_server 10.0.0.4 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 10.0.0.7 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 10.0.0.8 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
1.1.4.2 lb02 Configuration File
[root@lb02 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_02
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.3/24
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        10.0.0.4/24
    }
}
virtual_server 10.0.0.3 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 10.0.0.7 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 10.0.0.8 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
virtual_server 10.0.0.4 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 10.0.0.7 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 10.0.0.8 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
1.1.4.3 Check the Configuration on lb01
[root@lb01 ~]# systemctl restart keepalived
[root@lb01 ~]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7b:8b:0f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.5/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.3/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe7b:8b0f/64 scope link
       valid_lft forever preferred_lft forever
[root@lb01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.3:80 wrr persistent 50
  -> 10.0.0.7:80                  Route   1      0          0
  -> 10.0.0.8:80                  Route   1      0          0
TCP  10.0.0.4:80 wrr persistent 50
  -> 10.0.0.7:80                  Route   1      0          0
  -> 10.0.0.8:80                  Route   1      0          0
1.1.4.4 Check the Configuration on lb02
[root@lb02 ~]# systemctl restart keepalived
[root@lb02 ~]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:06:04:d1 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.6/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.4/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe06:4d1/64 scope link
       valid_lft forever preferred_lft forever
[root@lb02 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.3:80 wrr persistent 50
  -> 10.0.0.7:80                  Route   1      0          0
  -> 10.0.0.8:80                  Route   1      0          0
TCP  10.0.0.4:80 wrr persistent 50
  -> 10.0.0.7:80                  Route   1      0          0
  -> 10.0.0.8:80                  Route   1      0          0
1.1.4.5 Multi-Instance Configuration Diff
[root@lb01 ~]# diff keepalived-lb01-多实例.conf keepalived-lb02-多实例.conf
2c2
< router_id LVS_01
---
> router_id LVS_02
6c6
< state MASTER
---
> state BACKUP
9c9
< priority 150
---
> priority 100
21c21
< state BACKUP
---
> state MASTER
24c24
< priority 100
---
> priority 150
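With the multi-instance setup in place, a simple failover check (a sketch, not part of the original lab steps) is to stop keepalived on lb01 and confirm that lb02 picks up VIP 10.0.0.3 in addition to 10.0.0.4, then start it again and confirm lb01 takes the VIP back:
[root@lb01 ~]# systemctl stop keepalived
[root@lb02 ~]# ip addr show eth0 | grep 10.0.0.    # both 10.0.0.3 and 10.0.0.4 should now be listed on lb02
[root@lb01 ~]# systemctl start keepalived
[root@lb02 ~]# ip addr show eth0 | grep 10.0.0.    # 10.0.0.3 should move back to lb01 after a few seconds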
1.2 Web Server Configuration
1.2.1 Bind the VIP Address to the lo Interface
[root@web01 ~]# ip addr add 10.0.0.3/32 dev lo
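Note that in DR mode every real server needs the VIP on lo, so the same step also has to be performed on web02. If you use the multi-instance configuration above, the second VIP 10.0.0.4 must be bound as well (only needed for that setup):
[root@web01 ~]# ip addr add 10.0.0.4/32 dev lo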
1.2.2 Adjust Kernel Parameters to Suppress ARP Responses
[root@web01 ~]# cat >>/etc/sysctl.conf<<EOF
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
EOF
[root@web01 ~]# sysctl -p
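To confirm the parameters are in effect, and, once both web servers are configured, that the whole chain works end to end, a quick check might look like this (the client hostname is only a placeholder for a machine on the same network):
[root@web01 ~]# sysctl net.ipv4.conf.lo.arp_ignore net.ipv4.conf.lo.arp_announce
[root@client ~]# curl 10.0.0.3    # should return a page from web01 or web02 via the VIP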
1.3 Keepalived Health Checking
Keepalived automatically monitors whether the real servers behind LVS are alive. If a real server goes down, Keepalived removes it from the LVS pool; once the server comes back up, it is automatically added back. This is Keepalived's health check feature.
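The removal and re-addition can be observed directly. One convenient way (a sketch) is to keep watching the IPVS table on lb01 while stopping and starting nginx on a web server, or to follow the system log, where keepalived's health-check messages are typically written:
[root@lb01 ~]# watch -n 1 ipvsadm -ln
[root@lb01 ~]# tail -f /var/log/messages    # health-check state changes are usually logged here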
1.3.1 Stop the Web Service on web01
[root@web01 ~]# nginx -s stop
1.3.2 Check the LVS Status
[root@lb01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.3:80 wrr persistent 50
  -> 10.0.0.7:80                  Route   1      0          0
TCP  10.0.0.4:80 wrr persistent 50
  -> 10.0.0.7:80                  Route   1      0          0
1.3.3 Start the Web Service on web01
[root@web01 ~]# nginx
1.3.4 Check the LVS Status Again
[root@lb01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.3:80 wrr persistent 50
  -> 10.0.0.7:80                  Route   1      0          0
  -> 10.0.0.8:80                  Route   1      0          0
TCP  10.0.0.4:80 wrr persistent 50
  -> 10.0.0.7:80                  Route   1      0          0
  -> 10.0.0.8:80                  Route   1      0          0
Chapter 2: LVS Troubleshooting
- Troubleshooting process (a command sketch follows this list):
- First, access the real servers directly from the client to confirm they respond
- Next, access the real servers from the LVS server itself
- Then access the real servers through the LVS VIP from the client
- Finally, walk through the whole installation from start to finish against the correct procedure and check whether anything is still wrong
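The first three checks, written as concrete commands (a sketch; the IP addresses come from this lab and the client hostname is only a placeholder):
[root@client ~]# curl 10.0.0.7 ; curl 10.0.0.8    # 1. can the client reach the real servers directly?
[root@lb01 ~]# curl 10.0.0.7 ; curl 10.0.0.8      # 2. can the LVS server reach the real servers?
[root@client ~]# curl 10.0.0.3                    # 3. can the client reach the real servers through the VIP?
[root@lb01 ~]# ipvsadm -ln                        # if step 3 fails, re-check the rules before redoing the setup step by step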
