Kubernetes 1.18 Three-Node Installation



K8s Upgrade

I was planning to continue studying today, but the new video series requires 1.18, so I decided to upgrade the cluster. Mine is still on 1.15, and the official docs say you cannot skip minor versions, so it has to be upgraded one version at a time.

Environment

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   4d13h   v1.15.1
node1    Ready    node     4d12h   v1.15.1
node2    Ready    node     4d12h   v1.15.1
[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.1.36   master
10.1.1.37   node1
10.1.1.38   node2

I. Upgrade the first master node

1. Check the available kubeadm versions on the first node and upgrade kubeadm

[root@master ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
# Find the latest 1.16 release in the list.
[root@master ~]# yum install -y kubeadm-1.16.8-0 --disableexcludes=kubernetes
[root@master ~]# kubeadm version         # check the version

2. Drain the current node

kubectl drain <cp-node-name> --ignore-daemonsets   # find cp-node-name with kubectl get nodes
[root@master ~]# kubectl drain master --ignore-daemonsets

3. Show the upgrade plan

[root@master ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.15.1
[upgrade/versions] kubeadm version: v1.16.8
W0903 10:51:05.203538     953 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
W0903 10:51:05.203645     953 version.go:102] falling back to the local client version: v1.16.8
[upgrade/versions] Latest stable version: v1.16.8
[upgrade/versions] Latest version in the v1.15 series: v1.15.12

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.15.1   v1.15.12

Upgrade to the latest version in the v1.15 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.15.1   v1.15.12
Controller Manager   v1.15.1   v1.15.12
Scheduler            v1.15.1   v1.15.12
Kube Proxy           v1.15.1   v1.15.12
CoreDNS              1.3.1     1.6.2
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.15.12

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.15.1   v1.16.8

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.15.1   v1.16.8
Controller Manager   v1.15.1   v1.16.8
Scheduler            v1.15.1   v1.16.8
Kube Proxy           v1.15.1   v1.16.8
CoreDNS              1.3.1     1.6.2
Etcd                 3.3.10    3.3.15-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.16.8

_____________________________________________________________________

4. Apply the upgrade with kubeadm

[root@master ~]# kubeadm upgrade apply v1.16.8 

...The image downloads failed, so let's just do a fresh install instead.

Kubernetes 1.18 Installation

Prepare the Environment

1. Set hostnames and /etc/hosts bindings on all nodes

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.1.36   master
10.1.1.37   node1
10.1.1.38   node2
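
If the hostnames themselves have not been set yet, a minimal sketch using the same names as the hosts file above:

# Run the matching command on each machine
hostnamectl set-hostname master    # on 10.1.1.36
hostnamectl set-hostname node1     # on 10.1.1.37
hostnamectl set-hostname node2     # on 10.1.1.38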

2. Disable SELinux on all nodes

# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
# setenforce 0

3. Synchronize time on all nodes

systemctl enable chronyd
# After restarting, verify that chronyd is running and syncing
chronyc sources -v
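
If the default CentOS NTP pool is unreachable from your network, an NTP server can be set in /etc/chrony.conf; a sketch, where ntp.aliyun.com is only an assumed example server:

# Comment out the default server lines and add a reachable one (ntp.aliyun.com is an assumption)
sed -i 's/^server .*/#&/' /etc/chrony.conf
echo 'server ntp.aliyun.com iburst' >> /etc/chrony.conf
systemctl restart chronyd
chronyc sources -v    # the new source should appear here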

4. Prepare the yum repositories on all nodes (add the following two repos on top of the default CentOS repos)

# vim /etc/yum.repos.d/kubernetes.repo
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo 

5. Install the iptables service on all nodes and save an empty rule set

# yum install iptables-services -y
# systemctl restart iptables
# systemctl enable iptables

# iptables -F
# iptables -F -t nat
# iptables -F -t mangle
# iptables -F -t raw

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
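
On a stock CentOS 7 install, firewalld may still be running and can conflict with the iptables service; a sketch of disabling it first (skip if firewalld is not installed):

# Stop and disable firewalld before switching to iptables-services
systemctl stop firewalld
systemctl disable firewalld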

6. Disable swap on all nodes (since Kubernetes 1.8, kubelet will not start with swap enabled)

# swapoff -a

# Open /etc/fstab, comment out the swap line, and save
# vim /etc/fstab
UUID=38182b36-9be4-45f8-9b3f-f9b3605fcdf0 /                       xfs     defaults        0 0
UUID=6b69e04f-4a85-4612-b1c0-24939fd84962 /boot                   xfs     defaults        0 0
#UUID=9ba6a188-d8e1-4983-9abe-ba4a29b1d138 swap                    swap    defaults        0 0
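
A quick check that swap is really off on each node:

# Both commands should report zero swap in use
free -m
swapon -s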

7. RHEL 7 and CentOS 7 have a known issue where traffic is routed incorrectly because iptables is bypassed, so run the following on all nodes:

# cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# modprobe br_netfilter
# lsmod |grep br_netfilter
# sysctl -p /etc/sysctl.d/k8s.conf

8. Prepare the prerequisites for enabling IPVS in kube-proxy on all nodes

Since IPVS has already been merged into the mainline kernel, the prerequisite for enabling IPVS in kube-proxy is simply loading the following kernel modules.

# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
EOF

# chmod 755 /etc/sysconfig/modules/ipvs.modules
# sh /etc/sysconfig/modules/ipvs.modules
# lsmod |egrep 'ip_vs|nf_conntrack'
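
Loading the modules is only the prerequisite; kube-proxy itself still defaults to iptables mode. After the cluster is up (see the kubeadm init section below), a sketch of optionally switching it to IPVS by editing the kube-proxy ConfigMap:

# Set mode: "ipvs" in the kube-proxy ConfigMap, then recreate the kube-proxy pods
kubectl -n kube-system edit cm kube-proxy            # change the line  mode: ""  to  mode: "ipvs"
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
# Verify that IPVS rules are being created (needs: yum install -y ipvsadm)
ipvsadm -Ln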

Install the Software

1. Install Docker on all nodes

# yum install docker-ce -y

# docker -v
Docker version 19.03.12, build 48a66213fe

# systemctl enable docker
# systemctl start docker

2. On all nodes, configure a registry mirror, change the cgroup driver to systemd, and restart Docker

Change the Docker cgroup driver to systemd; otherwise, when adding worker nodes later you may hit a warning like "detected cgroupfs as the Docker cgroup driver...". Configure a local registry mirror for Docker at the same time.
# vim /etc/docker/daemon.json
{
   "registry-mirrors": ["https://j2n6giq8.mirror.aliyuncs.com"],
   "exec-opts": ["native.cgroupdriver=systemd"]
}
# systemctl restart docker
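
To confirm both settings took effect after the restart:

# Should print "Cgroup Driver: systemd"; the mirror appears under "Registry Mirrors"
docker info | grep -i 'cgroup driver'
docker info | grep -A1 -i 'registry mirrors'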

3. Install kubelet, kubeadm, and kubectl on all nodes, and enable the kubelet service (note: do not start it yet)

# yum install kubelet-1.18.0-0 kubeadm-1.18.0-0 kubectl-1.18.0-0 -y
# systemctl enable kubelet

kubelet communicates with the rest of the cluster and manages the Pods and containers on its own node.

kubeadm is Kubernetes' automated deployment tool; it lowers the barrier to deployment and improves efficiency.

kubectl is the command-line tool for managing a Kubernetes cluster.
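
A quick sanity check that all three tools were installed at the expected version:

kubeadm version -o short            # expect v1.18.0
kubectl version --client --short
kubelet --version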

kubeadm Initialization

Run this on the master node only (do not run it on the other nodes).

[root@master ~]# kubeadm init --kubernetes-version=1.18.0 \
    --apiserver-advertise-address=10.1.1.36 \
    --image-repository registry.aliyuncs.com/google_containers \
    --service-cidr=10.2.0.0/16 \
    --pod-network-cidr=10.3.0.0/16

--kubernetes-version                   # pin the Kubernetes version for the control plane
--apiserver-advertise-address          # the IP address the API server advertises it is listening on
--image-repository                     # the container registry to pull the control-plane images from
--pod-network-cidr 10.3.0.0/16         # the IP address range for Pods
--service-cidr 10.2.0.0/16             # the virtual IP address range for Services
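
The same settings can also be captured in a kubeadm configuration file, which is easier to keep in version control; a sketch, assuming the v1beta2 config API shipped with kubeadm 1.18 (the file name kubeadm-config.yaml is arbitrary):

# Write a config file equivalent to the flags above, then init from it
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.1.1.36
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.2.0.0/16
  podSubnet: 10.3.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml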

Verify the Images

[root@master ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0             43940c34f24f        5 months ago        117MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0             74060cea7f70        5 months ago        173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0             d3e55153f52f        5 months ago        162MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0             a31f78c7c8ce        5 months ago        95.3MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        6 months ago        683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        7 months ago        43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        10 months ago       288MB
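
These images can also be pulled before running kubeadm init, which makes the init step faster and easier to troubleshoot; a sketch using the same Aliyun mirror:

# List the images kubeadm 1.18.0 needs, then pre-pull them from the mirror
kubeadm config images list --kubernetes-version v1.18.0 --image-repository registry.aliyuncs.com/google_containers
kubeadm config images pull --kubernetes-version v1.18.0 --image-repository registry.aliyuncs.com/google_containers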

Start Using the Cluster

Run this on the master node only (do not run it on the other nodes).

Run export KUBECONFIG=/etc/kubernetes/admin.conf so kubectl can talk to the cluster (add it to /etc/profile so it is set automatically on every login).

# Confirm that the kubelet service is running
[root@master ~]# systemctl status kubelet.service

[root@master ~]# vim /etc/profile
export KUBECONFIG=/etc/kubernetes/admin.conf

[root@master ~]# source /etc/profile
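
The kubeadm init output also suggests the more common per-user setup, which works without touching /etc/profile; a sketch for the root user:

# Copy the admin kubeconfig into the default location kubectl looks at
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config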

Check the Cluster Status

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@master ~]# kubectl get node
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   16m   v1.18.0

Create the flannel Network

Reference: https://github.com/coreos/flannel

Run this on the master node only (do not run it on the other nodes).

1. Download kube-flannel.yml

[root@master ~]# mkdir /root/k8s
[root@master ~]# cd /root/k8s/
[root@master k8s]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

2. Apply kube-flannel.yml to create the Pods (this step is very slow because the image has to be downloaded; you can import a shared copy of the image first, as sketched after the output below)

[root@master k8s]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
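
If pulling the flannel image is slow on some nodes, the image can be exported from a node that already has it and imported on the others; a sketch (read the exact image name from your own kube-flannel.yml, since the tag depends on the manifest version):

# Find the flannel image referenced by the manifest
grep 'image:' kube-flannel.yml
# On a node that already has it (replace <flannel-image> with the value found above)
docker save <flannel-image> -o flannel.tar
scp flannel.tar node1:/root/
# On the receiving node
docker load -i flannel.tar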

3. Confirm that all Pods are in the Running state

[root@master k8s]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-9cc5b         1/1     Running   0          17m
coredns-7ff77c879f-jv6gh         1/1     Running   0          17m
etcd-master                      1/1     Running   0          17m
kube-apiserver-master            1/1     Running   0          17m
kube-controller-manager-master   1/1     Running   0          17m
kube-flannel-ds-amd64-n4z5g      1/1     Running   0          3m43s
kube-proxy-t8lm4                 1/1     Running   0          17m
kube-scheduler-master            1/1     Running   0          17m

Verify that the master node is OK

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   18m   v1.18.0

Join the Other Nodes

The kubeadm init output on the master ends with a join command like this:

kubeadm join 10.1.1.36:6443 --token xrnj5o.a0n9lz8w69osyae2 \
    --discovery-token-ca-cert-hash sha256:137b72bcbd6ad1b3aa5d005b18644b44de921a068a735c6f5da3d6a78d60086f
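
If the token has expired or the join command was lost, a fresh one can be printed on the master:

# Tokens expire after 24 hours by default; this prints a new join command
kubeadm token create --print-join-command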

1. Run kubeadm join on node1 and node2

[root@node2 ~]# kubeadm join 10.1.1.36:6443 --token vbxx0f.bj3jfbwkx5apil9u --discovery-token-ca-cert-hash sha256:f9c3b556d3877d1e721c3e067031b8dd5b19b29ff3c0f1569db1429651141e44

[root@node1 ~]# kubeadm join 10.1.1.36:6443 --token vbxx0f.bj3jfbwkx5apil9u \
>     --discovery-token-ca-cert-hash sha256:f9c3b556d3877d1e721c3e067031b8dd5b19b29ff3c0f1569db1429651141e44

# A joining node pulls the flannel image; it only becomes Ready after the image is downloaded and the flannel Pod is running

Verify the Cluster Is OK

Verify on the master that the cluster is OK:

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   33m   v1.18.0
node1    Ready    <none>   11m   v1.18.0
node2    Ready    <none>   11m   v1.18.0

Managing the Cluster from a Node

Simply copy the admin config file /etc/kubernetes/admin.conf from the master to $HOME/.kube/config on a node, and that node can also manage the cluster with kubectl.

1. Create the .kube directory in the user's home directory on the node

[root@node1 ~]# mkdir /root/.kube	 

2. On the master node, run:

[root@master ~]# scp /etc/kubernetes/admin.conf node1:/root/.kube/config 

3. Verify on the node

[root@node1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   36m   v1.18.0
node1    Ready    <none>   14m   v1.18.0
node2    Ready    <none>   14m   v1.18.0

Removing a Node

1. On the master node, run:

[root@master ~]# kubectl drain node2 --delete-local-data --force --ignore-daemonsets
[root@master ~]# kubectl delete node node2

2. On node2, run:

[root@node2 ~]# kubeadm reset
[root@node2 ~]# ifconfig cni0 down
[root@node2 ~]# ip link delete cni0
[root@node2 ~]# ifconfig flannel.1 down
[root@node2 ~]# ip link delete flannel.1
[root@node2 ~]# rm -rf /var/lib/cni/
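
kubeadm reset does not clean up iptables or IPVS rules; a sketch of the extra cleanup (the ipvsadm line only applies if IPVS mode was enabled and the ipvsadm package is installed):

# Flush any leftover rules created by kube-proxy
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear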

3. On node1 (possible because node1 was given kubectl access earlier), run:

[root@node1 ~]# kubectl delete node node2