Installing Kubernetes 1.17.0 (1 master + 2 nodes, IPVS mode, Calico networking)

olivee · 5 years ago

1. Environment preparation

This guide installs the base components on a single virtual machine and then clones that VM, to avoid repeating the base setup on every node. (For a fully offline install, download the RPMs for kubelet, kubectl, kubeadm and the other yum packages mentioned below, and install them with the rpm command.)
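The offline-download idea can be sketched with yumdownloader from yum-utils; the package names below are assumed to match the versions installed later in this guide:

```shell
#!/bin/bash
# Hypothetical sketch: pre-download the RPMs so they can later be installed
# offline with `rpm -ivh /tmp/k8s-rpms/*.rpm`.
DEST=/tmp/k8s-rpms
PKGS="kubelet-1.17.0-0 kubeadm-1.17.0-0 kubectl-1.17.0-0 kubernetes-cni-0.7.5-0"

mkdir -p "$DEST"
if command -v yumdownloader >/dev/null 2>&1; then
  # --resolve also downloads the packages' dependencies
  yumdownloader --resolve --destdir="$DEST" $PKGS
else
  echo "yumdownloader not found; install yum-utils first" >&2
fi
```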

1.1 Prepare a virtual machine

Configure the VM with bridged networking if possible; NAT mode can cause problems during installation. Install CentOS 7.5.

1.2 Configure the IP address

vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
## Configuration:
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPADDR=192.168.19.170
NETMASK=255.255.255.0
GATEWAY=192.168.19.1
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s3"
DEVICE="enp0s3"
ONBOOT="yes"

1.3 Configure DNS

vi /etc/resolv.conf
## Configuration:
nameserver 8.8.8.8
nameserver 114.114.114.114

1.4 Set the time zone

timedatectl set-timezone Asia/Shanghai

1.5 Disable SELinux and firewalld on all nodes

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld

1.6 Configure yum repositories

Add the base repo file:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Then append the following to the repo file:

[k8s]
name=k8s
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/virt/$basearch/kubernetes110/
        http://mirrors.aliyuncs.com/centos/$releasever/virt/$basearch/kubernetes110/
gpgcheck=0

Set up the kubeadm yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Refresh the yum cache:

sudo yum makecache fast

1.7 Update the operating system

This step is optional.

yum update 

1.8 Install basic tools

yum install -y wget vim yum-utils device-mapper-persistent-data lvm2 tcpdump

1.9 Install required networking packages

yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp socat

1.10 Add the Docker yum repository:

sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

1.11 Enable IPVS

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

1.12 Enable br_netfilter

modprobe br_netfilter

1.13 Enable bridge filtering and ipv4.ip_forward

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

/usr/lib/sysctl.d/00-system.conf also contains the bridge-nf-call parameters, so change them there as well, then reload:

vi /usr/lib/sysctl.d/00-system.conf
## set these parameters
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
sysctl --system

1.14 Disable swap

swapoff -a
# Then edit /etc/fstab and comment out the swap line so swap stays off after reboot (vi /etc/fstab, prefix the swap line with #)
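The fstab edit can also be done non-interactively. A sketch that comments out any uncommented swap entry, shown here against a sample copy rather than the real /etc/fstab:

```shell
#!/bin/bash
# Hypothetical sketch: comment out swap entries in an fstab-style file.
# Operates on a sample copy so the real /etc/fstab is untouched.
FSTAB=/tmp/fstab.sample
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > "$FSTAB"

# Prefix '#' to any line that mentions a swap filesystem and is not already commented.
sed -i '/\sswap\s/s/^[^#]/#&/' "$FSTAB"
```

Running the same sed line with FSTAB=/etc/fstab applies it for real.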

1.15 Install docker-ce

sudo yum -y install docker-ce

Configure the cgroupfs cgroup driver:

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF

Start Docker and enable it at boot:

systemctl start docker && systemctl enable docker

1.16 Install the kubelet components

yum install kubeadm-1.17.0-0.x86_64 kubectl-1.17.0-0.x86_64 kubelet-1.17.0-0.x86_64 kubernetes-cni-0.7.5-0.x86_64 --disableexcludes=kubernetes
systemctl enable --now kubelet

1.17 Pull the base images

Run "kubeadm config images list" to see which images are needed, then pull them. (If the k8s.gcr.io images are unreachable from your network, pull the equivalents from the mirrorgooglecontainers namespace on Docker Hub and retag them as k8s.gcr.io images.)

docker pull calico/node:v3.11.1
docker pull calico/pod2daemon-flexvol:v3.11.1
docker pull calico/cni:v3.11.1
docker pull calico/kube-controllers:v3.11.1
docker pull k8s.gcr.io/kube-proxy:v1.17.0
docker pull k8s.gcr.io/kube-apiserver:v1.17.0
docker pull k8s.gcr.io/kube-scheduler:v1.17.0
docker pull k8s.gcr.io/kube-controller-manager:v1.17.0
docker pull k8s.gcr.io/coredns:1.6.5
docker pull k8s.gcr.io/etcd:3.4.3-0
docker pull k8s.gcr.io/pause:3.1
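The mirror-and-retag workaround mentioned above can be sketched as follows. The mapping assumes the Docker Hub mirrorgooglecontainers namespace carries the same image names; coredns actually lives under its own coredns/ namespace, so it would need special-casing:

```shell
#!/bin/bash
# Hypothetical sketch: pull k8s.gcr.io images from a Docker Hub mirror
# and retag them with their original names.

# Map a k8s.gcr.io image reference to its assumed mirror location.
mirror_name() {
  echo "$1" | sed 's|^k8s.gcr.io/|mirrorgooglecontainers/|'
}

IMAGES="k8s.gcr.io/kube-proxy:v1.17.0 \
        k8s.gcr.io/kube-apiserver:v1.17.0 \
        k8s.gcr.io/kube-scheduler:v1.17.0 \
        k8s.gcr.io/kube-controller-manager:v1.17.0 \
        k8s.gcr.io/etcd:3.4.3-0 \
        k8s.gcr.io/pause:3.1"

for img in $IMAGES; do
  src=$(mirror_name "$img")
  # Only attempt the pull when a working Docker daemon is available.
  if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker pull "$src" && docker tag "$src" "$img" || echo "failed: $img" >&2
  fi
done
```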

1.18 Clone the prepared VM twice, then set each machine's IP, hostname and hosts file

Assign IPs to fit your network; here they end in .170, .171 and .172. Then set the hostnames:

On the master:
hostnamectl set-hostname master1
On node1:
hostnamectl set-hostname node1
On node2:
hostnamectl set-hostname node2

My /etc/hosts entries:

192.168.19.170  master1
192.168.19.171  node1
192.168.19.172  node2

2. Initialize the master1 control-plane node

2.1 Write the init configuration file for an IPVS-mode install (not needed for an iptables-mode install)

Generate the file with "kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration > kube-init.yaml", then change a few parameters. The fields that need changing are highlighted in the figure kube-init.png.

The final file after the edits looks like this:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.19.170
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
  - 192.168.19.170
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 0
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
udpIdleTimeout: 0s
winkernel:
  enableDSR: false
  networkName: ""
  sourceVip: ""

2.2 Run kubeadm init on the master1 node

For an IPVS-mode install, run:

kubeadm init --config kube-init.yaml

For an iptables-mode install, run something like the following (adjust the IP):

kubeadm init --apiserver-advertise-address=192.168.19.170 --apiserver-cert-extra-sans=192.168.19.170 --pod-network-cidr=10.244.0.0/16

When the init finishes, it prints a join command similar to:

kubeadm join 192.168.19.170:6443 --token qlihzi.kvwri2vmyznkggh5 --discovery-token-ca-cert-hash sha256:768dae713a8562e0f21f71cf1b08804faa7b398484b77d5cbab985d2de74e70a

2.3 Configure the kubectl client

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2.4 kubectl shell completion

yum install bash-completion
source /usr/share/bash-completion/bash_completion
kubectl completion bash >/etc/bash_completion.d/kubectl

2.5 Check pod status

Check the status of all pods (coredns depends on the pod network, so it is normal for it to be Pending at this point):

kubectl get pod --all-namespaces

3. Join the worker nodes

On node1 and node2, run the join command printed by kubeadm init on the master, similar to:

kubeadm join 192.168.19.170:6443 --token qlihzi.kvwri2vmyznkggh5 --discovery-token-ca-cert-hash sha256:768dae713a8562e0f21f71cf1b08804faa7b398484b77d5cbab985d2de74e70a

4. Install a network plugin (with 1.17.0, flannel showed cross-node Service connectivity problems, while Calico worked fine)

4.1 Installing the Calico plugin

4.1.1 Download and edit the manifest

curl -O https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# or
curl -O https://docs.projectcalico.org/v3.11/manifests/calico.yaml
# then edit the manifest
vi calico.yaml
# change CALICO_IPV4POOL_CIDR to 10.244.0.0/16
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
# set the image tags in calico.yaml to match the calico image versions you pulled earlier
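The CIDR edit can also be scripted; the sketch below assumes the manifest ships with a default pool of 192.168.0.0/16 and is shown against a small sample snippet rather than the full calico.yaml:

```shell
#!/bin/bash
# Hypothetical sketch: switch CALICO_IPV4POOL_CIDR to the cluster's pod CIDR.
# Uses a sample snippet; run the same sed against the real calico.yaml.
cat > /tmp/calico-snippet.yaml <<'EOF'
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
EOF

# Replace the assumed default pool with the podSubnet used at kubeadm init time.
sed -i 's|192.168.0.0/16|10.244.0.0/16|' /tmp/calico-snippet.yaml
```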

4.1.2 Apply the manifest

kubectl create -f calico.yaml 

4.2 Installing the flannel plugin

Apply the official flannel manifest. The file contains the line "Network": "10.244.0.0/16"; this is the pod IP range and must match the value given to kubeadm init. Run:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

After the plugin is installed, wait until the nodes turn Ready before continuing. Check progress with either of:

kubectl get node
or
kubectl get pod --all-namespaces
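Waiting for Ready can be automated. A small hypothetical helper that checks `kubectl get nodes --no-headers` output, with the polling loop guarded so it is a no-op when no cluster is reachable:

```shell
#!/bin/bash
# Hypothetical helper: succeed only if every node status read from stdin is Ready.
all_ready() {
  # Expects `kubectl get nodes --no-headers` lines: NAME STATUS ROLES AGE VERSION
  ! awk '{print $2}' | grep -qv '^Ready$'
}

# Poll until all nodes report Ready (skipped when kubectl is unavailable).
if command -v kubectl >/dev/null 2>&1; then
  until kubectl get nodes --no-headers 2>/dev/null | all_ready; do
    sleep 5
  done
fi
```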

5. Verify the installation

Once everything is up, create an nginx Deployment and Service to verify the cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.12.2
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 1080
    targetPort: 80

Then look up the Service IP and test cross-node access with "curl http://<service-ip>:1080/".
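A sketch of that check, looking up the ClusterIP with kubectl's jsonpath output (guarded so it is a no-op without a cluster):

```shell
#!/bin/bash
# Hypothetical sketch: curl the nginx Service from any node by its ClusterIP.
probe() {
  # Prints the HTTP status code returned by http://<ip>:<port>/
  curl -s -o /dev/null -w '%{http_code}' "http://$1:$2/"
}

if command -v kubectl >/dev/null 2>&1 && kubectl get svc nginx >/dev/null 2>&1; then
  SVC_IP=$(kubectl get svc nginx -o jsonpath='{.spec.clusterIP}')
  echo "nginx service answered with HTTP $(probe "$SVC_IP" 1080)"
fi
```

A 200 status from every node confirms cross-node Service routing works.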