Deploying Kubernetes from Binaries

I. Installation requirements

Before starting, the machines used for the Kubernetes cluster must meet the following requirements:

One or more machines running CentOS 7.x x86_64.

1. Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more.

2. Internet access for pulling images; if the servers cannot reach the internet, download the images in advance and import them on each node.

3. Swap must be disabled.

II. Single-master cluster plan

k8s-master    192.168.31.71    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1     192.168.31.72    kubelet, kube-proxy, docker, etcd
k8s-node2     192.168.31.73    kubelet, kube-proxy, docker, etcd

High-availability cluster plan (an extension of the single-master setup):

192.168.10.136  master1
192.168.10.137  node1
192.168.10.138  node2
192.168.10.139  master2
192.168.10.140  load balancer
192.168.10.141  load balancer
192.168.10.142  VIP

III. Operating system initialization

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
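In the sed command above, `&` in the replacement refers back to the whole matched line, so every line mentioning swap gets a `#` prepended. A quick local sketch of the pattern on a sample fstab line (the /tmp path is only for illustration — never experiment on the real /etc/fstab):

```shell
# Try the swap-commenting pattern on a throwaway copy instead of the real /etc/fstab
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab.test
sed -ri 's/.*swap.*/#&/' /tmp/fstab.test
cat /tmp/fstab.test   # the swap entry is now prefixed with '#'
```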

# Set the hostname according to the plan
hostnamectl set-hostname <hostname>

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.10.136 master1
192.168.10.137 node1
192.168.10.138 node2
EOF

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system  # apply

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

IV. Deploying the etcd cluster (etcd runs on all three machines; perform the following on any one server — here, the master)

1. Prepare the cfssl certificate tooling. cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2. Generate etcd certificates

Self-signed certificate authority (CA)

① Create the working directory:

mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd

② Create the self-signed CA config:

vi ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

vi ca-csr.json

{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}

③ Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem  # confirm that ca-key.pem and ca.pem were generated

3. Use the self-signed CA to issue the etcd HTTPS certificate. The IPs in the hosts field below are the internal communication IPs of all etcd nodes; to simplify future expansion, you can list a few spare IPs as well.

vi server-csr.json

{
    "CN": "etcd",
    "hosts": [
        "192.168.10.136",
        "192.168.10.137",
        "192.168.10.138"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

ls server*pem  # confirm that server-key.pem and server.pem were generated

4. Download the etcd binary package and deploy it (the following is done on the master; to simplify things, all files generated on the master will be copied to the other etcd nodes later)

Download: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

① Create the working directory and unpack the binaries

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar xf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

② Create the etcd configuration file

vi /opt/etcd/cfg/etcd.conf

#[Member]

ETCD_NAME="etcd-1"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.10.136:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.10.136:2379"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.10.136:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.10.136:2379"

ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.10.136:2380,etcd-2=https://192.168.10.137:2380,etcd-3=https://192.168.10.138:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

Notes:

ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: addresses of all cluster nodes
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one

③ Manage etcd with systemd

vi /usr/lib/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

[Service]

Type=notify

EnvironmentFile=/opt/etcd/cfg/etcd.conf

ExecStart=/opt/etcd/bin/etcd \

--cert-file=/opt/etcd/ssl/server.pem \

--key-file=/opt/etcd/ssl/server-key.pem \

--peer-cert-file=/opt/etcd/ssl/server.pem \

--peer-key-file=/opt/etcd/ssl/server-key.pem \

--trusted-ca-file=/opt/etcd/ssl/ca.pem \

--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \

--logger=zap

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

④ Copy the certificates generated earlier into the paths referenced by the unit file

cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

⑤ Start etcd and enable it at boot

systemctl daemon-reload && systemctl start etcd && systemctl enable etcd

(The first node will appear to hang until the other members come up — that is expected for a new cluster; continue with the steps below and start etcd on the other nodes.)

⑥ Copy all files generated on the master to node1 and node2

scp -r /opt/etcd root@192.168.10.137:/opt/

scp /usr/lib/systemd/system/etcd.service root@192.168.10.137:/usr/lib/systemd/system/

scp -r /opt/etcd root@192.168.10.138:/opt/

scp /usr/lib/systemd/system/etcd.service root@192.168.10.138:/usr/lib/systemd/system/

⑦ On node1 and node2, change the node name and the server IP in etcd.conf

vi /opt/etcd/cfg/etcd.conf

#[Member]
ETCD_NAME="etcd-1"   # change this: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.10.136:2380"    # change to this server's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.10.136:2379"  # change to this server's IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.10.136:2380"  # change to this server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.10.136:2379"        # change to this server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.10.136:2380,etcd-2=https://192.168.10.137:2380,etcd-3=https://192.168.10.138:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Finally, start etcd and enable it at boot, as above.
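The per-node edits in step ⑦ are mechanical, so they can be scripted. A minimal sketch (the /tmp path is illustrative; in practice the file is /opt/etcd/cfg/etcd.conf): it renames the node and substitutes the server IP only on the LISTEN/ADVERTISE lines, deliberately leaving ETCD_INITIAL_CLUSTER untouched — exercised here on a throwaway copy:

```shell
NODE_NAME="etcd-2"          # etcd-3 on the third node
NODE_IP="192.168.10.137"    # this node's own IP
CONF=/tmp/etcd.conf         # in practice: /opt/etcd/cfg/etcd.conf

# Throwaway copy of the relevant lines, as copied over from the master
cat > "$CONF" << 'EOF'
ETCD_NAME="etcd-1"
ETCD_LISTEN_PEER_URLS="https://192.168.10.136:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.10.136:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.10.136:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.10.136:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.10.136:2380,etcd-2=https://192.168.10.137:2380,etcd-3=https://192.168.10.138:2380"
EOF

# Rename the node, and swap the IP only on LISTEN/ADVERTISE lines so the
# member list keeps all three addresses
sed -i -E \
  -e "s/^ETCD_NAME=.*/ETCD_NAME=\"${NODE_NAME}\"/" \
  -e "/^ETCD_(LISTEN|ADVERTISE|INITIAL_ADVERTISE)/s/192\.168\.10\.136/${NODE_IP}/" \
  "$CONF"
grep -E '^ETCD_(NAME|LISTEN_PEER_URLS|INITIAL_CLUSTER)=' "$CONF"
```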

⑧ Check the cluster status (on success each endpoint reports "is healthy: successfully committed proposal")

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.136:2379,https://192.168.10.137:2379,https://192.168.10.138:2379" endpoint health

V. Install Docker (on all nodes; binaries are used here, but a yum install works just as well)

Download: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

1. Unpack the binary package

tar xf docker-19.03.9.tgz
mv docker/* /usr/bin

2. Manage docker with systemd

vi /usr/lib/systemd/system/docker.service

[Unit]

Description=Docker Application Container Engine

Documentation=https://docs.docker.com

After=network-online.target firewalld.service

Wants=network-online.target

[Service]

Type=notify

ExecStart=/usr/bin/dockerd

ExecReload=/bin/kill -s HUP $MAINPID

LimitNOFILE=infinity

LimitNPROC=infinity

LimitCORE=infinity

TimeoutStartSec=0

Delegate=yes

KillMode=process

Restart=on-failure

StartLimitBurst=3

StartLimitInterval=60s

[Install]

WantedBy=multi-user.target

3. Create the daemon configuration file

mkdir /etc/docker
vi /etc/docker/daemon.json

{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}

4. Start docker and enable it at boot

systemctl daemon-reload && systemctl start docker && systemctl enable docker

VI. Deploying the master

1. Generate the kube-apiserver certificates

Self-signed certificate authority (CA)

cd ~/TLS/k8s

vi ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

vi ca-csr.json

{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem  # confirm that ca-key.pem and ca.pem were generated

2. Use the self-signed CA to issue the kube-apiserver HTTPS certificate

Create the certificate signing request file:

vi server-csr.json

{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.10.136",
      "192.168.10.137",
      "192.168.10.138",
      "192.168.10.139",
      "192.168.10.140",
      "192.168.10.141",
      "192.168.10.142",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

Note: the IPs in the hosts field above must include every master, load-balancer, and VIP address — not one can be missing! To simplify future expansion, you can list a few spare IPs as well.

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

ls server*pem  # confirm that server-key.pem and server.pem were generated
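A missing SAN only surfaces later as hard-to-diagnose TLS errors, so it's worth confirming every master/LB/VIP address is in the hosts list before signing. A hedged sketch — the sample file below merely stands in for your real server-csr.json:

```shell
# Sample hosts list standing in for server-csr.json (illustration only)
cat > /tmp/server-csr.json << 'EOF'
{ "hosts": ["10.0.0.1", "127.0.0.1", "192.168.10.136", "192.168.10.139", "192.168.10.142"] }
EOF

# Every planned apiserver/LB/VIP address must appear in the hosts list
for ip in 192.168.10.136 192.168.10.139 192.168.10.142; do
  grep -q "\"$ip\"" /tmp/server-csr.json && echo "$ip present" || echo "$ip MISSING"
done
```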

3. Download the binaries for the master and node components (the release page lists many packages; the single server package is enough — it contains both the master and worker binaries)

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183

① Unpack the binary package

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

4. Deploy kube-apiserver

① Create the configuration file (note: in the double backslashes below, the first escapes the second so that the heredoc writes a literal line-continuation backslash into the file)

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.10.136:2379,https://192.168.10.137:2379,https://192.168.10.138:2379 \\
--bind-address=192.168.10.136 \\
--secure-port=6443 \\
--advertise-address=192.168.10.136 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Flag notes:

--logtostderr: log to files instead of stderr when false
--v: log verbosity level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificates the apiserver uses to access kubelets
--tls-xxx-file: apiserver HTTPS certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit logging

② Copy the certificates generated earlier into the configured paths:

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

③ Enable the TLS Bootstrapping mechanism

TLS bootstrapping: once the apiserver has TLS authentication enabled, the kubelet and kube-proxy on every node must present valid CA-signed certificates to communicate with it. Issuing these client certificates by hand is a lot of work when there are many nodes, and it complicates cluster expansion. To simplify this, Kubernetes provides TLS bootstrapping to issue client certificates automatically: the kubelet connects as a low-privilege user and requests a certificate, which the apiserver signs dynamically. This approach is strongly recommended for nodes; it is currently used mainly for the kubelet, while kube-proxy still gets a single certificate that we issue ourselves.

④ Create the token file referenced in the configuration above (format: token,username,UID,group)

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

The token can also be generated and substituted yourself (random token):

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
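The two steps can be combined — generate the token and write token.csv in one go. A sketch (the /tmp output path is for illustration; the real file is /opt/kubernetes/cfg/token.csv):

```shell
# 16 random bytes -> 32 hex characters, then write the single-line CSV
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /tmp/token.csv
cat /tmp/token.csv
```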

⑤ Manage kube-apiserver with systemd

vi /usr/lib/systemd/system/kube-apiserver.service

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf

ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS

Restart=on-failure

[Install]

WantedBy=multi-user.target

⑥ Start kube-apiserver and enable it at boot

systemctl daemon-reload && systemctl start kube-apiserver && systemctl enable kube-apiserver

⑦ Authorize the kubelet-bootstrap user to request certificates (kubectl needs a configured kubeconfig first — see step ⑧ below — otherwise this command will fail)

kubectl create clusterrolebinding kubelet-bootstrap \

--clusterrole=system:node-bootstrapper \

--user=kubelet-bootstrap

⑧ Configure kubectl by creating a kubeconfig file (note where the commands are executed — run them in the directory containing the certificates)

# Generate the admin certificate (on the master node)

cd /root/TLS/k8s

vi admin-csr.json

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --server=https://192.168.10.136:6443 \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --kubeconfig=config

# Set client credential parameters
kubectl config set-credentials cluster-admin \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --client-key=admin-key.pem \
  --client-certificate=admin.pem \
  --kubeconfig=config

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=config

# Set the default context
kubectl config use-context default --kubeconfig=config

# Put the file where kubectl looks for it
mkdir -p /root/.kube && mv config /root/.kube/

5. Deploy kube-controller-manager

① Create the configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

Flag notes:

--master: connect to the apiserver over the local insecure port 8080.
--leader-elect: automatic leader election when multiple instances run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: the CA that automatically signs kubelet certificates; must match the apiserver's CA

② Manage kube-controller-manager with systemd

vi /usr/lib/systemd/system/kube-controller-manager.service

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf

ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS

Restart=on-failure

[Install]

WantedBy=multi-user.target

③ Start kube-controller-manager and enable it at boot

systemctl daemon-reload && systemctl start kube-controller-manager && systemctl enable kube-controller-manager

systemctl status kube-controller-manager

6. Deploy kube-scheduler

① Create the configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF

Flag notes:

--master: connect to the apiserver over the local insecure port 8080.
--leader-elect: automatic leader election when multiple instances run (HA)

② Manage kube-scheduler with systemd

vi /usr/lib/systemd/system/kube-scheduler.service

[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf

ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS

Restart=on-failure

[Install]

WantedBy=multi-user.target

③ Start kube-scheduler and enable it at boot

systemctl daemon-reload && systemctl start kube-scheduler && systemctl enable kube-scheduler

systemctl status kube-scheduler

④ Check cluster status

With all components started successfully, check their status with kubectl:

kubectl get cs

VII. Deploying the nodes

1. Create directories and copy the binaries from the master to the nodes

① Create the working directory on all nodes

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

② Copy from the master node:

scp /opt/kubernetes/ssl/ca.pem 192.168.10.137:/opt/kubernetes/ssl/
scp /opt/kubernetes/ssl/ca.pem 192.168.10.138:/opt/kubernetes/ssl/
cd /root/kubernetes/server/bin
scp kubelet kube-proxy 192.168.10.137:/opt/kubernetes/bin
scp kubelet kube-proxy 192.168.10.138:/opt/kubernetes/bin
scp kubectl 192.168.10.137:/usr/bin
scp kubectl 192.168.10.138:/usr/bin

2. Deploy kubelet

① Create the configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=node1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

Note: change --hostname-override to each node's own hostname.

Flag notes:

--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: an empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameters file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image for the Pod network (pause) container

② Create the parameters file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

③ Generate the bootstrap.kubeconfig file (run the commands below on the master in /opt/kubernetes/cfg)

Set variables:

KUBE_APISERVER="https://192.168.10.136:6443"  # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940"  # must match token.csv

Generate the kubelet bootstrap kubeconfig:

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

scp /opt/kubernetes/cfg/bootstrap.kubeconfig 192.168.10.137:/opt/kubernetes/cfg/
scp /opt/kubernetes/cfg/bootstrap.kubeconfig 192.168.10.138:/opt/kubernetes/cfg/

④ Manage kubelet with systemd

vi /usr/lib/systemd/system/kubelet.service

[Unit]

Description=Kubernetes Kubelet

After=docker.service

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf

ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

⑤ Start kubelet and enable it at boot

systemctl daemon-reload && systemctl start kubelet && systemctl enable kubelet

ps -ef | grep kubelet
netstat -antp | grep 10250

⑥ Approve the kubelet certificate requests and join the nodes to the cluster (run the commands below on the master)

# List pending certificate requests — the output shows which nodes have requested certificates
kubectl get csr

# Approve a request, appending a name returned by the previous command
kubectl certificate approve <csr-name>

# List nodes
kubectl get node

The nodes show NotReady at this point because the CNI network plugin has not been deployed yet.
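With many nodes, approving requests one at a time gets tedious; the pending names can be extracted from the `kubectl get csr` output and piped to the approve command. The extraction step is sketched here against captured sample output (the column layout is an assumption modeled on v1.18; the last column is always the condition), with the real pipeline noted in the comment:

```shell
# Sample `kubectl get csr` output (captured for illustration)
cat > /tmp/csr.txt << 'EOF'
NAME        AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-a  10s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-b  12s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
EOF

# Keep only the names of Pending requests; against a live cluster this would be:
#   kubectl get csr | awk 'NR>1 && $NF=="Pending" {print $1}' | xargs -r kubectl certificate approve
awk 'NR>1 && $NF=="Pending" {print $1}' /tmp/csr.txt
```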

3. Deploy kube-proxy

① Create the configuration file

vi /opt/kubernetes/cfg/kube-proxy.conf

KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"

② Create the parameters file

vi /opt/kubernetes/cfg/kube-proxy-config.yml

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: node1   # change to this node's own hostname
clusterCIDR: 10.0.0.0/24

③ Generate the kube-proxy.kubeconfig certificate file (run the commands below on the master)

# Switch to the working directory
cd /root/TLS/k8s

# Create the certificate signing request file
vi kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

ls kube-proxy*pem  # confirm that kube-proxy-key.pem and kube-proxy.pem were generated

# Copy the certificates to the nodes
scp kube-proxy-key.pem kube-proxy.pem 192.168.10.137:/opt/kubernetes/ssl
scp kube-proxy-key.pem kube-proxy.pem 192.168.10.138:/opt/kubernetes/ssl

④ Generate the kubeconfig file:

KUBE_APISERVER="https://192.168.10.136:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
  --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

⑤ Manage kube-proxy with systemd

vi /usr/lib/systemd/system/kube-proxy.service

[Unit]

Description=Kubernetes Proxy

After=network.target

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf

ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

⑥ Start kube-proxy and enable it at boot

systemctl daemon-reload && systemctl start kube-proxy && systemctl enable kube-proxy

ps -ef | grep kube-proxy
netstat -antp | grep 10249

4. Deploy the CNI network plugin (on the nodes)

① Get the CNI binaries

Download: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

② Unpack the package into the default working directory

mkdir /opt/cni/bin -p
tar xf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

③ Deploy the CNI network (if the link below is unreachable, download the YAML manually; the default image registry may also be unreachable, so it is switched to a Docker Hub mirror)

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml
kubectl apply -f kube-flannel.yml

kubectl get node  # the nodes now show Ready

④ Authorize the apiserver to access kubelets

vi apiserver-to-kubelet-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes

kubectl apply -f apiserver-to-kubelet-rbac.yaml

VIII. Deploying the Dashboard and CoreDNS

① Deploy the Dashboard (if the link is unreachable, download the YAML manually; to expose the dashboard on port 30001 as used below, set the kubernetes-dashboard Service type to NodePort with nodePort: 30001 before applying)

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

kubectl apply -f recommended.yaml

② Grant access to the dashboard

cat dashboard-adminuser.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dashboard-adminuser.yaml

③ Get a token for logging in to the dashboard (copy the token from the output)

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret |grep admin-user|awk '{print $1}')

④ Access the dashboard at: https://NodeIP:30001

⑤ Deploy CoreDNS, which provides Service name resolution inside the cluster (download the YAML manually)

kubectl apply -f coredns.yaml

kubectl get pods -n kube-system

NAME                          READY   STATUS    RESTARTS   AGE
coredns-5ffbfd976d-j6shb      1/1     Running   0          32s

⑥ Test DNS resolution

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh

/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Resolution works as expected.

IX. Adding a new node

1. Copy the files of an already-deployed node to the new node. From node1, copy the relevant files to the new node (node2 here):

scp -r /opt/kubernetes root@192.168.10.138:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.10.138:/usr/lib/systemd/system
scp -r /opt/cni/ root@192.168.10.138:/opt/

2. Delete the kubelet certificate and kubeconfig files (note: these files are generated automatically after certificate approval and differ per node, so they must be removed and regenerated)

rm /opt/kubernetes/cfg/kubelet.kubeconfig

rm -f /opt/kubernetes/ssl/kubelet*

3. Change the hostname

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=node2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: node2

4. Start the services and enable them at boot

systemctl daemon-reload

systemctl start kubelet

systemctl enable kubelet

systemctl start kube-proxy

systemctl enable kube-proxy

5. On the master, approve the new node's kubelet certificate request

kubectl get csr

kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro

6. Check node status

kubectl get node

The single-master cluster deployment is now complete!

X. High-availability architecture (scaling out to multiple masters: one more master plus two nginx load balancers)

Cluster IP plan:

192.168.10.136  master1
192.168.10.137  node1
192.168.10.138  node2
192.168.10.139  master2
192.168.10.140  nginx-load
192.168.10.141  nginx-load
192.168.10.142  VIP

1. Install Docker (on master2)

Same as above; not repeated here.

2. Deploy master2 (192.168.10.139)

master2's setup is identical to the already-deployed master1, so simply copy all of master1's Kubernetes files over, then change the server IP and hostname and start the services.

3. Create the etcd certificate directory

On master2:

mkdir -p /opt/etcd/ssl

4. Copy files (on master1)

Copy all Kubernetes files and the etcd certificates from master1 to master2:

scp -r /opt/kubernetes root@192.168.10.139:/opt

scp -r /opt/cni/ root@192.168.10.139:/opt

scp -r /opt/etcd/ssl root@192.168.10.139:/opt/etcd

scp /usr/lib/systemd/system/kube* root@192.168.10.139:/usr/lib/systemd/system

scp /usr/bin/kubectl root@192.168.10.139:/usr/bin

5. Delete certificate files

Delete the kubelet certificate and kubeconfig files:

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

6. Change the IPs and hostname in the configuration files

Change the apiserver, kubelet, and kube-proxy configuration files to this machine's IP and hostname:

vi /opt/kubernetes/cfg/kube-apiserver.conf
--bind-address=192.168.10.139 \  # change to this machine's IP
--advertise-address=192.168.10.139 \  # change to this machine's IP

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=master2  # change to this machine's hostname

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: master2  # change to this machine's hostname

7. Start the services and enable them at boot

systemctl daemon-reload

systemctl start kube-apiserver

systemctl start kube-controller-manager

systemctl start kube-scheduler

systemctl start kubelet

systemctl start kube-proxy

systemctl enable kube-apiserver

systemctl enable kube-controller-manager

systemctl enable kube-scheduler

systemctl enable kubelet

systemctl enable kube-proxy

XI. Deploying the Nginx load balancer

1. How it works

Nginx is a mainstream web server and reverse proxy; here it load-balances the apiservers at layer 4 (TCP).

Keepalived is a mainstream high-availability tool that provides active/standby failover by binding a VIP. Keepalived decides whether to fail over (move the VIP) based on the state of Nginx: if the Nginx master node dies, the VIP automatically binds to the Nginx backup node, keeping the VIP reachable and making Nginx highly available.

2. Install the packages (on both the master and the backup)

yum install epel-release -y
yum install nginx keepalived -y

3. Nginx configuration file (identical on master and backup)

cat > /etc/nginx/nginx.conf << 'EOF'
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.10.136:6443;
       server 192.168.10.139:6443;
    }

    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;
        location / {
        }
    }
}
EOF

Note: the heredoc delimiter is quoted ('EOF') so that the $variables in the log_format lines are written literally instead of being expanded by the shell.

4. Keepalived configuration file (on nginx-master)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER  # differs from the backup
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER  # differs from the backup
    interface ens32  # change to the actual NIC name
    virtual_router_id 10  # set to the third octet of the IP address
    priority 100
    advert_int 1  # VRRP heartbeat advertisement interval; default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.142/24
    }
    track_script {
        check_nginx
    }
}
EOF

5. Nginx status-check script (on the master; the heredoc delimiter is quoted so $(...) and $$ are written literally, not expanded while writing the file)

cat > /etc/keepalived/check_nginx.sh << 'EOF'
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF

chmod +x /etc/keepalived/check_nginx.sh

6. Keepalived configuration file (on nginx-backup)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP  # differs from the master
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP  # differs from the master
    interface ens32
    virtual_router_id 10
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.142/24
    }
    track_script {
        check_nginx
    }
}
EOF

7. Nginx status-check script (on the backup; same quoting note as above)

cat > /etc/keepalived/check_nginx.sh << 'EOF'
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF

chmod +x /etc/keepalived/check_nginx.sh

Note: keepalived uses the script's exit code (0 = healthy, non-zero = unhealthy) to decide whether to fail over.
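The `egrep -cv "grep|$$"` part of the check script is the subtle bit: it counts nginx processes while excluding both the grep itself and the script's own shell (`$$` is its PID). The filter can be exercised on captured ps output (the sample lines below are illustrative; 4242 stands in for `$$`):

```shell
# Captured ps output standing in for a live `ps -ef` (illustration only)
cat > /tmp/ps.txt << 'EOF'
root      1001     1  0 10:00 ?        00:00:00 nginx: master process
nginx     1002  1001  0 10:00 ?        00:00:00 nginx: worker process
root      4242  3000  0 10:01 pts/0    00:00:00 grep nginx
EOF

# Same filter as check_nginx.sh, with 4242 playing the role of $$
count=$(grep nginx /tmp/ps.txt | egrep -cv "grep|4242")
echo "$count"   # 2: the grep line is excluded from the count
```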

8. Start the services and enable them at boot

systemctl daemon-reload

systemctl start nginx

systemctl start keepalived

systemctl enable nginx

systemctl enable keepalived

9. Check keepalived status

ip a  # verify the VIP appears in the ens32 interface information

10. Nginx + Keepalived failover test

Stop nginx on the master node and check that the VIP floats to the backup server:

On the nginx master, run: pkill nginx
On the nginx backup, ip a now shows the VIP bound there.

11. Test access through the load balancer

From any node in the K8s cluster, query the K8s version through the VIP with curl:

curl -k https://192.168.10.142:6443/version

{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.2",
  "gitCommit": "52c56ce7a8272c798dbc29846288d7cd9fbae032",
  "gitTreeState": "clean",
  "buildDate": "2020-04-16T11:48:36Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}

The K8s version information is returned correctly, so the load balancer is working.

12. The nginx log also shows requests being forwarded to the apiserver IPs:

tail -f /var/log/nginx/k8s-access.log

13. Point all nodes at the LB VIP (run on every node)

sed -i 's#192.168.10.136:6443#192.168.10.142:6443#' /opt/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy

14. Check node status

kubectl get node  # all nodes report a normal status

A complete highly available Kubernetes cluster is now deployed!
