Installing Kubernetes 1.8.4 with kubeadm



Posted by yaoice on December 5, 2017

Background

kubeadm is the official Kubernetes tool for quickly installing a k8s cluster. Clusters deployed by kubeadm are not highly available, so kubeadm is generally not used in production at this point.

Environment

  • CentOS 7.3
  • docker-ce-17.11.0.ce-1.el7.centos.x86_64
  • kubectl-1.8.4-0.x86_64
  • kubeadm-1.8.4-0.x86_64
  • kubelet-1.8.4-0.x86_64
  • kubernetes-cni-0.5.1-1.x86_64

System configuration

Single node, with static hostname resolution

[root@master ~]# cat /etc/hosts
172.19.0.14 master

Disable the firewall and SELinux

[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
[root@master ~]# setenforce 0
[root@master ~]# vim /etc/selinux/config
SELINUX=disabled

Starting with version 1.8, kubelet requires system swap to be disabled and will not start otherwise; this check can be relaxed with the kubelet flag --fail-swap-on=false.

[root@master ~]# swapoff -a  # turn off swap immediately

Edit /etc/fstab and disable mounting of the swap partition, so swap stays off after a reboot.
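That edit can be scripted. The sketch below comments out swap entries with sed, shown against a sample file so it can be run safely; on the real host, run the same sed command against /etc/fstab (after backing it up). The UUIDs are illustrative.

```shell
# Build a sample fstab (stand-in for /etc/fstab; UUIDs are made up)
sample=/tmp/fstab.sample
printf '%s\n' \
  'UUID=abcd-1234 /    ext4 defaults 0 1' \
  'UUID=efgh-5678 swap swap defaults 0 0' > "$sample"

# Prefix every non-comment line that mentions swap with '#'
sed -i '/\bswap\b/ s/^[^#]/#&/' "$sample"
cat "$sample"
```

On the real host this becomes `sed -i.bak '/\bswap\b/ s/^[^#]/#&/' /etc/fstab`.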

[root@master ~]# free -m   # verify swap is now off
              total        used        free      shared  buff/cache   available
Mem:            992         564          72           1         355         255
Swap:             0           0           0
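Alternatively, if swap must stay enabled, the check mentioned above can be relaxed through a kubelet drop-in; a hedged sketch (the file name is illustrative, and it assumes the kubeadm-shipped unit honors KUBELET_EXTRA_ARGS, as the 10-kubeadm.conf of these package versions does):

```ini
# /etc/systemd/system/kubelet.service.d/90-swap.conf  (illustrative name)
[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
```

Follow it with `systemctl daemon-reload && systemctl restart kubelet`.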

Tune the swappiness parameter

[root@master ~]# vim /etc/sysctl.d/k8s.conf   # add the following line
vm.swappiness=0
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf  # apply it

Install Docker

Install the required dependencies

[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker stable repo

[root@master ~]# yum-config-manager --add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

To install the latest Docker release instead, use:

[root@master ~]# curl -fsSL "https://get.docker.com/" | sh

Install the Docker packages

[root@master ~]# yum install -y docker-ce docker-ce-selinux

Starting with version 1.13, Docker disables the FORWARD chain of the iptables filter table (default policy DROP), which breaks cross-node Pod communication in a k8s cluster.

[root@master ~]# vim /usr/lib/systemd/system/docker.service # edit the docker systemd unit and add an ExecStartPost line above ExecStart
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
ExecStart=......
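Editing the packaged unit file works, but that file is overwritten on package upgrades. A systemd drop-in is a safer sketch of the same change (the directory and file name follow systemd conventions and are not from the original post):

```ini
# /etc/systemd/system/docker.service.d/10-forward.conf  (illustrative name)
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
```

Run `systemctl daemon-reload` afterwards so the drop-in is picked up.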

Start the Docker service

[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl enable docker
[root@master ~]# systemctl start docker

Install kubeadm, kubelet, and kubectl

Add the Kubernetes repo

[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Check that the k8s repo URL is reachable; if it is not, you will have to find a workaround.

[root@master ~]# curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

Install the kubelet, kubeadm, and kubectl packages

[root@master ~]# yum install -y kubelet kubeadm kubectl

Enable kubelet at boot

[root@master ~]# systemctl enable kubelet.service

Note: if the cgroup driver kubelet starts with differs from the cgroup driver Docker uses, the kubelet service fails to start. The config file shipped with the kubeadm package sets --cgroup-driver=systemd, so the fix chosen here is to change Docker's cgroup driver to systemd as well.

[root@master ~]# rpm -ql kubeadm-1.8.4-0.x86_64
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
/usr/bin/kubeadm
[root@master ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
[root@master ~]# vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}

Restart the Docker service

[root@master ~]# systemctl restart docker
[root@master ~]# systemctl status docker

Create the cluster with kubeadm

Flannel is used as the network plugin here, which also requires some bridge sysctl settings.

[root@master ~]# vim /etc/sysctl.d/k8s.conf  # add the following lines
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf  # apply them

Initialize the cluster

[root@master ~]# kubeadm init --kubernetes-version=v1.8.4 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=172.19.0.14
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.11.0-ce. Max validated version: 17.03
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.19.0.14]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 30.001692 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 5b0855.b90b56759e07723e
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
# this join command is needed to add nodes later; be sure to save it
kubeadm join --token 5b0855.b90b56759e07723e 172.19.0.14:6443 --discovery-token-ca-cert-hash sha256:2cad4211f45f0d454f9a3ac7f59e997248c57497421a0df719a08fca9e385cc1
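As the warning above notes, tokens expire after 24 hours by default; a fresh one can be created later with `kubeadm token create`. If the `--discovery-token-ca-cert-hash` value is lost, it can be recomputed from the cluster CA certificate. A sketch, using kubeadm's default CA path, with a throwaway self-signed certificate as a fallback so the pipeline can be demonstrated off the master:

```shell
# Recompute the --discovery-token-ca-cert-hash value for kubeadm join
CA_CRT=${CA_CRT:-/etc/kubernetes/pki/ca.crt}   # kubeadm's default CA path
if [ ! -f "$CA_CRT" ]; then
  # Demo fallback (not on a master node): generate a throwaway CA cert
  CA_CRT=/tmp/demo-ca.crt
  openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out "$CA_CRT" -subj /CN=demo-ca -days 1 2>/dev/null
fi
# sha256 of the DER-encoded public key, the format kubeadm join expects
openssl x509 -pubkey -in "$CA_CRT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```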

Install a Pod network add-on

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

If a node has multiple network interfaces, the interface name must be specified; for details see: https://github.com/kubernetes/kubernetes/issues/39701

Configuration differs slightly between network plugins; for the other plugins see:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network

Following the hints printed after initialization, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check cluster status

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

The k8s services installed by kubeadm all run as containers; check the status of all Pods:

[root@master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
kube-system   etcd-master                      1/1       Running   0          20h       172.19.0.14   master
kube-system   kube-apiserver-master            1/1       Running   0          20h       172.19.0.14   master
kube-system   kube-controller-manager-master   1/1       Running   0          20h       172.19.0.14   master
kube-system   kube-dns-545bc4bfd4-nt7k6        3/3       Running   0          20h       10.244.0.2    master
kube-system   kube-flannel-ds-p9n2n            1/1       Running   0          20h       172.19.0.14   master
kube-system   kube-proxy-5j8h7                 1/1       Running   0          20h       172.19.0.14   master
kube-system   kube-scheduler-master            1/1       Running   0          20h       172.19.0.14   master

Let the master node take part in scheduling as well (the trailing '-' removes the taint):

[root@master ~]# kubectl taint nodes master node-role.kubernetes.io/master-
node "master" untainted

Cluster reset

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

References