Installing Kubernetes (k8s) 1.18 on CentOS 7

Date: 2021-01-21

For reference, another writer's guide to building a highly available k8s cluster (1.17.3) on Alibaba Cloud:

https://www.cnblogs.com/gmmy/p/12372805.html
Prepare four CentOS 7 virtual machines for the k8s cluster:
master01 (192.168.1.202): 2 CPU cores, 2 GB RAM, 60 GB disk, bridged network
master02 (192.168.1.203): 2 CPU cores, 2 GB RAM, 60 GB disk, bridged network
master03 (192.168.1.204): 2 CPU cores, 2 GB RAM, 60 GB disk, bridged network
node01 (192.168.1.205): 2 CPU cores, 1 GB RAM, 60 GB disk, bridged network

Base components to install on all master and node nodes

# This is not a one-shot script: read it line by line and run each step on the
# indicated servers. Most commands can be pasted straight into a shell; a
# little Linux experience is assumed.
# Set each machine's hostname (adjust to your own naming):
hostnamectl set-hostname master01   # on master01
hostnamectl set-hostname master02   # on master02
hostnamectl set-hostname master03   # on master03
hostnamectl set-hostname node01     # on node01

# On master01, master02, master03 and node01, append these lines to /etc/hosts:
cat >> /etc/hosts << EOF
192.168.1.202 master01
192.168.1.203 master02
192.168.1.204 master03
192.168.1.205 node01
EOF

# Generate SSH keys for passwordless login (kept under /root/.ssh/).
# Run on all master nodes:
mkdir -p /root/.ssh/
chmod 700 /root/.ssh/
touch /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
ssh-keygen -t rsa   # press Enter at each prompt
# Then distribute the public key to the other nodes, e.g. ssh-copy-id root@master02

# Install prerequisite packages with yum:
yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake \
  libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel \
  vim ncurses-devel autoconf automake zlib-devel python-devel \
  epel-release openssh-server socat ipvsadm conntrack bind-utils libffi-devel \
  device-mapper-persistent-data lvm2 yum-utils

# Disable the firewall:
systemctl stop firewalld && systemctl disable firewalld
yum install iptables-services -y
iptables -F && service iptables save
service iptables stop && systemctl disable iptables

# Set the timezone and enable periodic NTP sync:
mv -f /etc/localtime /etc/localtime.bak
/bin/cp -rf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'ZONE="Asia/Shanghai"' > /etc/sysconfig/clock
ntpdate cn.pool.ntp.org
echo "* */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org" >> /etc/crontab
service crond restart

# Disable SELinux:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

# Raise the maximum open-file-descriptor limit
# (grep for the exact line so each append runs only once):
grep -q "ulimit -n 65536" /etc/profile || echo "ulimit -n 65536" >> /etc/profile
grep -q "root soft nofile 65536" /etc/security/limits.conf || echo "root soft nofile 65536" >> /etc/security/limits.conf
grep -q "root hard nofile 65536" /etc/security/limits.conf || echo "root hard nofile 65536" >> /etc/security/limits.conf
grep -q "^\* soft nofile 65536" /etc/security/limits.conf || echo "* soft nofile 65536" >> /etc/security/limits.conf
grep -q "^\* hard nofile 65536" /etc/security/limits.conf || echo "* hard nofile 65536" >> /etc/security/limits.conf

# Disable swap:
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

# Switch the yum mirrors to Aliyun:
mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# Add the yum repo needed to install k8s:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

# Add the Docker yum repo:
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Refresh the yum cache:
yum clean all
yum makecache fast

# Install Docker 19.03.7:
yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker
systemctl status docker

# Configure the Docker daemon:
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "storage-opts": [ "overlay2.override_kernel_check=true" ]
}
EOF
systemctl daemon-reload && systemctl restart docker

# Enable the bridge-netfilter kernel settings k8s needs, and make them persistent:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat >> /etc/sysctl.conf << EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p

# Enable IPVS. Without it kube-proxy falls back to iptables, which is less
# efficient, so loading the IPVS kernel modules is recommended upstream:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs

# On master01, master02, master03 and node01, install kubeadm, kubelet and kubectl:
yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y && systemctl enable kubelet && systemctl start kubelet
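Appends to files like /etc/profile and limits.conf are safest when guarded by an exact-line grep, so re-running the setup stays idempotent (a check that only greps for `65536` would succeed after the first append and silently skip the rest). A minimal self-contained sketch of that pattern, using a temp file in place of the real limits.conf:

```shell
#!/bin/sh
# Idempotent append: add the line only if an exact match is not already present.
tmp=$(mktemp)

append_once() {
    # -q: quiet, -x: match the whole line, -F: fixed string (no regex)
    grep -qxF "$1" "$2" || echo "$1" >> "$2"
}

append_once "root soft nofile 65536" "$tmp"
append_once "root hard nofile 65536" "$tmp"
append_once "root soft nofile 65536" "$tmp"   # duplicate call: no-op

wc -l < "$tmp"   # prints 2: the duplicate was skipped
rm -f "$tmp"
```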

Installing and configuring the master nodes

# On master01, master02 and master03, deploy keepalived + LVS to make the
# apiserver highly available:
yum install -y socat keepalived ipvsadm conntrack

# On master01 (priority 100), download /etc/keepalived/keepalived.conf and
# substitute your own master IPs and desired virtual IP.
# The virtual IP used in this tutorial is 192.168.1.199:
wget -O /etc/keepalived/keepalived.conf http://download.zhufunin.com/k8s_1.18/keepalived.conf
sed -i 's/master01/192.168.1.202/g' /etc/keepalived/keepalived.conf
sed -i 's/master02/192.168.1.203/g' /etc/keepalived/keepalived.conf
sed -i 's/master03/192.168.1.204/g' /etc/keepalived/keepalived.conf
sed -i 's/VIP_addr/192.168.1.199/g' /etc/keepalived/keepalived.conf

# On master02, do the same but change priority to 50:
wget -O /etc/keepalived/keepalived.conf http://download.zhufunin.com/k8s_1.18/keepalived.conf
sed -i 's/priority 100/priority 50/g' /etc/keepalived/keepalived.conf
sed -i 's/master01/192.168.1.202/g' /etc/keepalived/keepalived.conf
sed -i 's/master02/192.168.1.203/g' /etc/keepalived/keepalived.conf
sed -i 's/master03/192.168.1.204/g' /etc/keepalived/keepalived.conf
sed -i 's/VIP_addr/192.168.1.199/g' /etc/keepalived/keepalived.conf

# On master03, do the same but change priority to 30:
wget -O /etc/keepalived/keepalived.conf http://download.zhufunin.com/k8s_1.18/keepalived.conf
sed -i 's/priority 100/priority 30/g' /etc/keepalived/keepalived.conf
sed -i 's/master01/192.168.1.202/g' /etc/keepalived/keepalived.conf
sed -i 's/master02/192.168.1.203/g' /etc/keepalived/keepalived.conf
sed -i 's/master03/192.168.1.204/g' /etc/keepalived/keepalived.conf
sed -i 's/VIP_addr/192.168.1.199/g' /etc/keepalived/keepalived.conf

# If your primary NIC is ens33 rather than eth0, also run:
# sed -i 's/eth0/ens33/g' /etc/keepalived/keepalived.conf

# The downloaded keepalived.conf puts keepalived in BACKUP mode with
# nopreempt (non-preemption). If master01 goes down and later comes back, the
# VIP will NOT automatically float back to it. That keeps the cluster healthy:
# right after master01 boots, the apiserver and other components are not yet
# running, so if the VIP moved back immediately the whole cluster would hang.
# This is why non-preemptive mode is used. The differing priority values give
# the startup order master01 -> master02 -> master03.

# On master01, master02 and master03 in turn:
systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived
# Once keepalived is up, `ip addr` on master01 should show the VIP
# 192.168.1.199 (this tutorial's virtual IP) bound to the NIC.

# On master01:
cd /usr/local/src
wget -O /usr/local/src/kubeadm-config.yaml http://download.zhufunin.com/k8s_1.18/kubeadm-config.yaml
# This file drives the master initialization; adjust the node IPs below to
# match your own environment:
sed -i 's/master01/192.168.1.202/g' kubeadm-config.yaml
sed -i 's/master02/192.168.1.203/g' kubeadm-config.yaml
sed -i 's/master03/192.168.1.204/g' kubeadm-config.yaml
sed -i 's/VIP_addr/192.168.1.199/g' kubeadm-config.yaml

# Initialize master01:
kubeadm init --config kubeadm-config.yaml
kubeadm config images list
# 10.244.0.0/16 is the default flannel pod network CIDR; it is used later.

# If image pulls fail, an alternative kubeadm-config.yaml adds one extra
# parameter, imageRepository: registry.aliyuncs.com/google_containers, which
# pulls from Aliyun's directly reachable mirror. That approach is simpler,
# but this tutorial does not use it, because manually joining nodes to the
# cluster later can run into problems with it. Just be aware it exists.

# If initialization fails, reset and clean up before retrying:
kubeadm reset
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /var/lib/kubelet/
rm -rf /var/lib/etcd
rm -rf /var/lib/dockershim
rm -rf /var/run/kubernetes
rm -rf /var/lib/cni
rm -rf /etc/cni/net.d

# Images needed by this tutorial, downloaded manually to the local machine:
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-apiserver.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-scheduler.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-controller-manager.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-pause.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-cordns.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-etcd.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-proxy.tar.gz
docker load -i 1-18-kube-apiserver.tar.gz
docker load -i 1-18-kube-scheduler.tar.gz
docker load -i 1-18-kube-controller-manager.tar.gz
docker load -i 1-18-pause.tar.gz
docker load -i 1-18-cordns.tar.gz
docker load -i 1-18-etcd.tar.gz
docker load -i 1-18-kube-proxy.tar.gz

# Notes:
# pause is 3.2, image k8s.gcr.io/pause:3.2
# etcd is 3.4.3, image k8s.gcr.io/etcd:3.4.3-0
# coredns is 1.6.7, image k8s.gcr.io/coredns:1.6.7
# apiserver, scheduler, controller-manager and kube-proxy are 1.18.2, images:
#   k8s.gcr.io/kube-apiserver:v1.18.2
#   k8s.gcr.io/kube-controller-manager:v1.18.2
#   k8s.gcr.io/kube-scheduler:v1.18.2
#   k8s.gcr.io/kube-proxy:v1.18.2
# With many machines, push these images to an internal private registry and
# pass "--image-repository=<private registry>" to kubeadm init, so the images
# do not have to be copied to every machine by hand.

# Nearly done. After a successful init, follow the printed instructions:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Save the `kubeadm join ...` command from the output: it is needed to join
# master02, master03 and node01 to the cluster, and it differs on every run,
# so record your own.

# Check status:
kubectl get nodes

# Copy master01's certificates to master02 and master03.
# On master02 and master03, create the directories:
cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
# On master01, copy the certs over (run the scp commands one at a time to
# avoid mistakes):
scp /etc/kubernetes/pki/ca.crt master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master02:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master02:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/ca.crt master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master03:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master03:/etc/kubernetes/pki/etcd/

# After the certs are copied, run your own saved join command on master02 and
# master03 to add them to the cluster. It looks like:
# kubeadm join 192.168.1.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
#     --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c \
#     --control-plane
# --control-plane marks the joining node as a master.
# Then on master02 and master03:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

# Deploy calico.yaml on master01 (master01 is the active control plane here;
# master02 and master03 are standbys):
wget http://download.zhufunin.com/k8s_1.18/calico.yaml
# (original: https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/calico.yaml)
kubectl apply -f calico.yaml
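The downloaded keepalived.conf is the authoritative source. As a rough sketch of what a BACKUP + nopreempt configuration of this shape typically looks like (everything here except the placeholder names targeted by the sed commands is an assumption):

```
# Sketch only -- the real file comes from download.zhufunin.com above.
vrrp_instance VI_1 {
    state BACKUP            # all three masters start as BACKUP
    nopreempt               # a recovered node does not steal the VIP back
    interface eth0          # change to ens33 etc. to match your NIC
    virtual_router_id 51
    priority 100            # 100 on master01, 50 on master02, 30 on master03
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        VIP_addr            # sed replaces this with 192.168.1.199
    }
}
```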
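Likewise, the real kubeadm-config.yaml is fetched from the tutorial's URL; only the placeholder names (from the sed commands) and the pod CIDR and version (stated elsewhere in the tutorial) are known. A plausible sketch of its shape for a 1.18 HA cluster:

```
# Sketch only -- the authoritative file comes from download.zhufunin.com.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
controlPlaneEndpoint: "VIP_addr:6443"   # sed replaces VIP_addr with 192.168.1.199
apiServer:
  certSANs:
    - master01                          # sed replaces with 192.168.1.202
    - master02                          # 192.168.1.203
    - master03                          # 192.168.1.204
    - VIP_addr                          # 192.168.1.199
networking:
  podSubnet: "10.244.0.0/16"            # default flannel CIDR noted above
```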

Joining node01 to the k8s cluster (run on node01)

Make sure the "Base components to install on all master and node nodes" section above has been completed, in particular:
yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y && systemctl enable kubelet && systemctl start kubelet

Then run the join command you saved earlier; it looks like:
kubeadm join 192.168.1.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c

If kubeadm reports an error, append -v 6 to the command for more verbose output.
If you have lost the kubeadm join parameters, regenerate them by running the following on a master node:
kubeadm token create --print-join-command
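Besides `kubeadm token create --print-join-command`, the discovery hash can be recomputed directly from the cluster CA certificate; this pipeline is the one documented upstream by Kubernetes. The sketch below runs it against a throwaway self-signed certificate so it can be tried anywhere; on a real master you would point it at /etc/kubernetes/pki/ca.crt instead:

```shell
#!/bin/sh
# Recompute the sha256 discovery hash from a CA certificate.
# A throwaway self-signed cert stands in for /etc/kubernetes/pki/ca.crt here.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null

# DER-encode the certificate's public key and hash it with sha256:
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:$hash"   # usable as --discovery-token-ca-cert-hash sha256:<hash>
rm -rf "$dir"
```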
Check the cluster node status on master01:
kubectl get nodes
The output looks like:

NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   3m36s   v1.18.2
master2   Ready    master   3m36s   v1.18.2
master3   Ready    master   3m36s   v1.18.2
node1     Ready    <none>   3m36s   v1.18.2


Generic node initialization

# Set the hostname, and add it to the masters' /etc/hosts files:
hostnamectl set-hostname xxxx

# Install prerequisite packages with yum:
yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake \
  libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel \
  vim ncurses-devel autoconf automake zlib-devel python-devel \
  epel-release openssh-server socat ipvsadm conntrack bind-utils libffi-devel \
  device-mapper-persistent-data lvm2 yum-utils

# Disable the firewall:
systemctl stop firewalld && systemctl disable firewalld
yum install iptables-services -y
iptables -F && service iptables save
service iptables stop && systemctl disable iptables

# Set the timezone and enable periodic NTP sync:
mv -f /etc/localtime /etc/localtime.bak
/bin/cp -rf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'ZONE="Asia/Shanghai"' > /etc/sysconfig/clock
ntpdate cn.pool.ntp.org
echo "* */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org" >> /etc/crontab
service crond restart

# Disable SELinux:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

# Raise the maximum open-file-descriptor limit:
grep -q "ulimit -n 65536" /etc/profile || echo "ulimit -n 65536" >> /etc/profile
grep -q "root soft nofile 65536" /etc/security/limits.conf || echo "root soft nofile 65536" >> /etc/security/limits.conf
grep -q "root hard nofile 65536" /etc/security/limits.conf || echo "root hard nofile 65536" >> /etc/security/limits.conf
grep -q "^\* soft nofile 65536" /etc/security/limits.conf || echo "* soft nofile 65536" >> /etc/security/limits.conf
grep -q "^\* hard nofile 65536" /etc/security/limits.conf || echo "* hard nofile 65536" >> /etc/security/limits.conf

# Disable swap:
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

# Optionally switch the yum mirrors to Aliyun:
#mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
#wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# Add the yum repo needed to install k8s:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

# Add the Docker yum repo; if the official one is slow, use the Aliyun mirror instead:
#yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Refresh the yum cache:
yum clean all
yum makecache fast

# Install Docker 19.03.7:
yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker
systemctl status docker

# Configure the Docker daemon:
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "storage-opts": [ "overlay2.override_kernel_check=true" ]
}
EOF
systemctl daemon-reload && systemctl restart docker

# Enable the bridge-netfilter kernel settings k8s needs, and make them persistent:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat >> /etc/sysctl.conf << EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p

# Install kubeadm, kubelet and kubectl:
yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y && systemctl enable kubelet && systemctl start kubelet

# Then run your own saved join command, which looks like:
# kubeadm join 192.168.1.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
#     --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c

# Enable IPVS (without it kube-proxy falls back to the less efficient iptables mode):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs
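One subtlety in the ipvs.modules heredoc above: the EOF delimiter is unquoted, so `$` expands while the file is being *written*, which is why variables meant for the generated script must be escaped as `\${ipvs_modules}`. A small self-contained sketch of the difference:

```shell
#!/bin/sh
# With an unquoted heredoc delimiter, $name expands immediately;
# \$name survives as a literal into the generated file.
name="expanded-now"
out=$(mktemp)
cat > "$out" <<EOF
first=$name
second=\$name
EOF
cat "$out"
# first=expanded-now
# second=$name
rm -f "$out"
```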

If you have lost the kubeadm join parameters, regenerate them by running the following on a master node:
kubeadm token create --print-join-command

Posted on 2020-11-11 21:38 by 快乐嘉年华

Writing these posts takes effort; if you found the article useful and have a moment, please click an ad to support the author.