Kubernetes, abbreviated k8s (a "k", 8 characters, then an "s" — get it?) or "kube", is an open-source platform for automating the operation of Linux containers. It eliminates many of the manual steps involved in deploying and scaling containerized applications. In other words, you can group multiple hosts into a cluster to run Linux containers, and Kubernetes helps you manage those clusters simply and efficiently. The hosts that make up a cluster can even span public, private, and hybrid clouds. Kubernetes has become the standard for container orchestration and is backed primarily by Google and Red Hat.
kubeadm is the official Kubernetes tool for quickly standing up a k8s cluster. It is much simpler and faster than the other methods you will currently find online, and you will run into fewer problems. If you hit any issue while following this article, please leave a comment and I will try to reply promptly. This article is based mainly on the official Kubernetes documentation, Using kubeadm to Create a Cluster. A cluster built this way is neither secure enough nor highly available; it is recommended only for personal study and research, not for production deployment.
Prerequisites
1. Two or more servers running CentOS 7 x64; they can be physical machines, virtual machines, or VPSes. One serves as the master node, the rest as nodes.
2. Each machine needs at least 2 GB of RAM, and the master needs at least 2 CPU cores.
3. All servers can reach each other over the network; hostnames must be unique and must not contain underscores.
4. The servers are outside the Great Firewall, because some files downloaded during setup may be very slow or unreachable from inside China; this means building the cluster on a domestic VPS such as Alibaba Cloud or Tencent Cloud will be troublesome. If you do not have an overseas server yet, take a look at Vultr, which is what I use; it offers good value and is stable. I recommend the Tokyo or US-West regions for faster connections. Occasionally an instance's IP cannot be pinged, presumably because it is blocked; just create another instance in a different region and try again.
5. You can connect to the servers over SSH, run simple commands, and edit and save files. Some of the commands below may require root privileges; if you get a permission error, prepend sudo and run the command again.
Setup Steps
Steps 1-6 are required on every server.
1. Update the system. On the command line, run
yum update -y
2. Disable the firewall and swap. k8s runs multiple services that communicate across servers and needs many ports open; for simplicity we just turn the firewall off, which is not recommended in production. Swap must be turned off for the k8s component kubelet to work properly.
systemctl disable firewalld
systemctl stop firewalld
swapoff -a
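Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, the swap entry in /etc/fstab also needs to be commented out. A minimal sketch (it backs up the file first; the sed pattern assumes a standard whitespace-separated fstab layout):

```shell
# Back up fstab, then comment out any swap entries so swap stays off after reboot.
cp /etc/fstab /etc/fstab.bak
sed -i '/\sswap\s/s/^\([^#]\)/#\1/' /etc/fstab
```

Double-check /etc/fstab afterwards; if your swap line looks unusual, comment it out by hand instead.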
3. Install Docker
yum install -y docker
systemctl enable docker && systemctl start docker
4. Install kubeadm, kubelet, and kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
5. Disable SELinux, because kubelet's support for SELinux currently still has issues
setenforce 0
Open the /etc/sysconfig/selinux file
vi /etc/sysconfig/selinux
Find the SELINUX line and change it to
SELINUX=disabled
Save the file.
6. Set net.bridge.bridge-nf-call-iptables to 1
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
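To confirm the settings took effect, you can read them back from /proc. These sysctls are provided by the br_netfilter kernel module, so the check below first tries to load it; the modprobe call and /proc paths assume a standard CentOS 7 kernel:

```shell
# Load the bridge netfilter module if needed, then print the two settings.
# A value of 1 means they are active.
modprobe br_netfilter 2>/dev/null || true
for f in /proc/sys/net/bridge/bridge-nf-call-iptables \
         /proc/sys/net/bridge/bridge-nf-call-ip6tables; do
    if [ -f "$f" ]; then
        echo "$f = $(cat "$f")"
    else
        echo "$f missing: br_netfilter module not loaded"
    fi
done
```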
7. Initialize the master. On the master node, run
kubeadm init --pod-network-cidr=192.168.0.0/16
If you see output similar to the following, the master initialized successfully:
[init] Using Kubernetes version: v1.8.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.511972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token:
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
  kubeadm join --token : --discovery-token-ca-cert-hash sha256:
Copy and save the kubeadm join command from the last line of the output above; you will need it later to join the nodes to the cluster.
Run the following commands to initialize the kubectl config file
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
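If you are working as the root user, the official kubeadm docs offer an alternative to copying the file: point kubectl at admin.conf through the KUBECONFIG environment variable. Note this only lasts for the current shell session:

```shell
# Root-user alternative: tell kubectl where the cluster credentials are
# (lasts only for this shell session).
export KUBECONFIG=/etc/kubernetes/admin.conf
```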
8. Install a pod network add-on so that pods can communicate with each other. Here we install Calico. On the master node, run
kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
Run the following command and check that the kube-dns pod is Running; this usually takes a few tens of seconds
kubectl get pods --all-namespaces
If the output contains a pod whose name starts with kube-dns in the Running state, the network add-on is working, and you can then join the nodes to the cluster.
[root@kube-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY     STATUS    RESTARTS   AGE
kube-system   calico-etcd-dfpnn                         1/1       Running   0          13h
kube-system   calico-kube-controllers-5449fdfcd-z8n45   1/1       Running   0          13h
kube-system   calico-node-8jmzt                         2/2       Running   0          13h
kube-system   calico-node-b4x99                         2/2       Running   0          13h
kube-system   etcd-kube-master                          1/1       Running   0          13h
kube-system   kube-apiserver-kube-master                1/1       Running   0          13h
kube-system   kube-controller-manager-kube-master       1/1       Running   0          13h
kube-system   kube-dns-86f4d74b45-v6qr5                 3/3       Running   0          14h
kube-system   kube-proxy-8nl2w                          1/1       Running   0          13h
kube-system   kube-proxy-klnjb                          1/1       Running   0          14h
kube-system   kube-scheduler-kube-master                1/1       Running   0          13h
9. Allow pods to run on the master by running the following command on the master. Without it, k8s will not schedule non-system pods onto the master node
kubectl taint nodes --all node-role.kubernetes.io/master-
?10.kube-dns運行后加入node節點,在node節點運行第7步保存的kubeadm join,類似下面的語句。
kubeadm join --token : --discovery-token-ca-cert-hash sha256:
If it succeeds, the output looks similar to this
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "10.138.0.4:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.138.0.4:6443"
[discovery] Requesting info from "https://10.138.0.4:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.138.0.4:6443"
[discovery] Successfully established connection with API Server "10.138.0.4:6443"
[bootstrap] Detected server version: v1.8.0
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
Node join complete:
* Certificate signing request sent to master and response
? received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
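As the kubeadm init warning above notes, the bootstrap token in the join command expires after 24 hours by default. If it has expired by the time you add a node, you can print a fresh join command on the master (the --print-join-command flag is available in recent kubeadm versions, 1.9+; the command -v guard just keeps the snippet harmless on machines without kubeadm):

```shell
# On the master: print a new "kubeadm join ..." command with a fresh token.
if command -v kubeadm >/dev/null 2>&1; then
    kubeadm token create --print-join-command
else
    echo "kubeadm not installed on this machine"
fi
```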
11. Verify that the nodes joined the cluster. On the master command line, run
kubectl get node
If the cluster is running normally, the output looks like
NAME          STATUS    ROLES     AGE       VERSION
kube-master   Ready     master    1h        v1.10.0
kube-node     Ready     <none>    2m        v1.10.0
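As a final smoke test of the new cluster, you can deploy something small and confirm it gets scheduled onto a node. nginx here is just an arbitrary example image, and the command -v guard keeps the snippet harmless on machines without kubectl:

```shell
# On the master: run a throwaway nginx deployment and see where its pods land.
if command -v kubectl >/dev/null 2>&1; then
    kubectl run nginx --image=nginx --replicas=2
    kubectl get pods -o wide
else
    echo "kubectl not installed on this machine"
fi
```

When you are done, kubectl delete deployment nginx cleans it up.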