Kubernetes Binary Deployment with HA Masters

0. Server Planning

Role                     IP                                    Components
k8s-master1              192.168.31.63                         kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master2              192.168.31.64                         kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1                192.168.31.65                         kubelet, kube-proxy, docker, etcd
k8s-node2                192.168.31.66                         kubelet, kube-proxy, docker
Load Balancer (Master)   192.168.31.61, 192.168.31.60 (VIP)    Nginx L4
Load Balancer (Backup)   192.168.31.62                         Nginx L4

1. System Initialization

Disable the firewall:

# systemctl stop firewalld

# systemctl disable firewalld

Disable SELinux:

# setenforce 0  # temporary
# sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent

Disable swap:

# swapoff -a  # temporary
# vim /etc/fstab  # permanent: remove or comment out the swap line

Sync the system time:

# ntpdate time.windows.com

Add hosts entries:

# vim /etc/hosts

192.168.31.63 k8s-master1

192.168.31.64 k8s-master2

192.168.31.65 k8s-node1

192.168.31.66 k8s-node2

Set the hostname (use each node's own name):

hostnamectl set-hostname k8s-master1
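The "permanent" SELinux edit above can be rehearsed safely on a scratch copy first; a minimal sketch where a temp file stands in for /etc/selinux/config:

```shell
# Rehearse the SELinux sed edit on a throwaway copy of the config
tmp=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"
sed -i 's/enforcing/disabled/' "$tmp"
grep '^SELINUX=' "$tmp"    # prints: SELINUX=disabled
rm -f "$tmp"
```

The same pattern works for rehearsing the /etc/fstab swap edit.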

2. Etcd Cluster

The following steps can be performed on any one node.

2.1 Generate etcd Certificates

# cd TLS/etcd

Install the cfssl tools:

# ./cfssl.sh

Edit the hosts field of the request file to include the IPs of all etcd nodes:

# vi server-csr.json

{
    "CN": "etcd",
    "hosts": [
        "192.168.31.63",
        "192.168.31.64",
        "192.168.31.65"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

# ./generate_etcd_cert.sh

# ls *pem

ca-key.pem? ca.pem? server-key.pem? server.pem

2.2 Deploy the Three Etcd Nodes

# tar zxvf etcd.tar.gz

# cd etcd

# cp TLS/etcd/ssl/{ca,server,server-key}.pem ssl

Copy to each of the three etcd nodes (shown for .63; repeat for .64 and .65):

# scp -r etcd root@192.168.31.63:/opt

# scp etcd.service root@192.168.31.63:/usr/lib/systemd/system
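The two scp commands above cover one node; a small loop pushes the same files to all three. Shown as a dry run (each `echo` prints the command it would run); remove the leading `echo` to actually copy, assuming root SSH key auth is set up to every node:

```shell
# Dry-run sketch: push the etcd directory and unit file to all three nodes in
# one loop. Remove the leading 'echo' to execute (assumes root SSH key auth).
for ip in 192.168.31.63 192.168.31.64 192.168.31.65; do
    echo scp -r etcd "root@${ip}:/opt"
    echo scp etcd.service "root@${ip}:/usr/lib/systemd/system"
done
```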

Log in to each of the three nodes and adjust the name and IPs in the config file:

# vi /opt/etcd/cfg/etcd.conf

#[Member]

ETCD_NAME="etcd-1"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.31.63:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.31.63:2379"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.63:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.63:2379"

ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.63:2380,etcd-2=https://192.168.31.64:2380,etcd-3=https://192.168.31.65:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

# systemctl start etcd

# systemctl enable etcd

2.3 Check the Cluster Status

# /opt/etcd/bin/etcdctl \

> --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \

> --endpoints="https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379" \

> cluster-health

member 37f20611ff3d9209 is healthy: got healthy result from https://192.168.31.63:2379

member b10f0bac3883a232 is healthy: got healthy result from https://192.168.31.64:2379

member b46624837acedac9 is healthy: got healthy result from https://192.168.31.65:2379

cluster is healthy
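For scripting (e.g. in a cron check), the health output can be reduced to a pass/fail result. A sketch, shown here against a captured sample; on a real node, pipe the etcdctl command above into the grep instead:

```shell
# Sketch: turn the cluster-health output into a simple HEALTHY/UNHEALTHY verdict.
sample='member 37f20611ff3d9209 is healthy: got healthy result from https://192.168.31.63:2379
cluster is healthy'
echo "$sample" | grep -q '^cluster is healthy$' && echo HEALTHY || echo UNHEALTHY
```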

3. Deploy the Master Node

3.1 Generate the apiserver Certificate

# cd TLS/k8s

Edit the hosts field of the request file to include every address that will reach the apiserver: both masters, both load balancers, the VIP, and the node IPs:

# vi server-csr.json

{
    "CN": "kubernetes",
    "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "192.168.31.60",
        "192.168.31.61",
        "192.168.31.62",
        "192.168.31.63",
        "192.168.31.64",
        "192.168.31.65",
        "192.168.31.66"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

# ./generate_k8s_cert.sh

# ls *pem

ca-key.pem? ca.pem? kube-proxy-key.pem? kube-proxy.pem? server-key.pem? server.pem

3.2 Deploy apiserver, controller-manager and scheduler

Perform the following on the Master node.

Binary package download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1161

Binary location inside the release tarball: kubernetes/server/bin

# tar zxvf k8s-master.tar.gz

# cd kubernetes

# cp TLS/k8s/ssl/*.pem ssl

# cp -rf kubernetes /opt

# cp kube-apiserver.service kube-controller-manager.service kube-scheduler.service /usr/lib/systemd/system

# cat /opt/kubernetes/cfg/kube-apiserver.conf

KUBE_APISERVER_OPTS="--logtostderr=false \

--v=2 \

--log-dir=/opt/kubernetes/logs \

--etcd-servers=https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379 \

--bind-address=192.168.31.63 \

--secure-port=6443 \

--advertise-address=192.168.31.63 \

……

# systemctl start kube-apiserver

# systemctl start kube-controller-manager

# systemctl start kube-scheduler

# systemctl enable kube-apiserver

# systemctl enable kube-controller-manager

# systemctl enable kube-scheduler

3.3 Enable TLS Bootstrapping

Authorize kubelet TLS bootstrapping:

# cat /opt/kubernetes/cfg/token.csv

c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"

Format: token,user,uid,group

Grant the kubelet-bootstrap user its role:

kubectl create clusterrolebinding kubelet-bootstrap \

--clusterrole=system:node-bootstrapper \

--user=kubelet-bootstrap

The token can also be regenerated and replaced:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '

But the token configured on the apiserver must match the one in each node's bootstrap.kubeconfig.
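Putting the two steps together, a sketch that generates a fresh token and writes token.csv in the documented format (written to /tmp here for illustration; the real file is /opt/kubernetes/cfg/token.csv):

```shell
# Sketch: generate a new bootstrap token and emit token.csv in the
# token,user,uid,group format described above.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /tmp/token.csv
cat /tmp/token.csv
```

After swapping in a new token, put the same value into bootstrap.kubeconfig on every node and restart kube-apiserver.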

4. Deploy the Worker Nodes

4.1 Install Docker

Binary package download: https://download.docker.com/linux/static/stable/x86_64/

# tar zxvf k8s-node.tar.gz

# tar zxvf docker-18.09.6.tgz

# mv docker/* /usr/bin

# mkdir /etc/docker

# mv daemon.json /etc/docker

# mv docker.service /usr/lib/systemd/system

# systemctl start docker

# systemctl enable docker

4.2 Deploy kubelet and kube-proxy

Copy the certificates to the Node:

# cd TLS/k8s

# scp ca.pem kube-proxy*.pem root@192.168.31.65:/opt/kubernetes/ssl/

# tar zxvf k8s-node.tar.gz

# mv kubernetes /opt

# cp kubelet.service kube-proxy.service /usr/lib/systemd/system

Edit the IP address in the following three files:

# grep 192 *

bootstrap.kubeconfig:    server: https://192.168.31.63:6443
kubelet.kubeconfig:    server: https://192.168.31.63:6443
kube-proxy.kubeconfig:    server: https://192.168.31.63:6443

Edit the hostname in the following two files:

# grep hostname *

kubelet.conf:--hostname-override=k8s-node1 \

kube-proxy-config.yml:hostnameOverride: k8s-node1

# systemctl start kubelet

# systemctl start kube-proxy

# systemctl enable kubelet

# systemctl enable kube-proxy

4.3 Approve Certificate Issuance for Nodes

# kubectl get csr

# kubectl certificate approve node-csr-MYUxbmf_nmPQjmH3LkbZRL2uTO-_FCzDQUoUfTy7YjI

# kubectl get node
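When several nodes register at once, each pending CSR can be approved in one pass instead of pasting names individually. A sketch, shown as a dry run against simulated `kubectl get csr -o name` output; on the master, replace the printf with `kubectl get csr -o name` and drop the `echo`:

```shell
# Sketch: batch-approve pending node CSRs (dry run; the two printf lines
# stand in for real 'kubectl get csr -o name' output).
printf '%s\n' \
    'certificatesigningrequest.certificates.k8s.io/node-csr-aaa' \
    'certificatesigningrequest.certificates.k8s.io/node-csr-bbb' |
    xargs -n1 echo kubectl certificate approve
```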

4.4 Deploy the CNI Network

Binary package download: https://github.com/containernetworking/plugins/releases

# mkdir /opt/cni/bin /etc/cni/net.d

# tar zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin

Make sure kubelet has CNI enabled:

# cat /opt/kubernetes/cfg/kubelet.conf

--network-plugin=cni

Pod network add-on reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Run on the Master:

kubectl apply -f kube-flannel.yaml

# kubectl get pods -n kube-system

NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-5xmhh   1/1     Running   6          171m
kube-flannel-ds-amd64-ps5fx   1/1     Running   0          150m

4.5 Authorize apiserver Access to kubelet

For security, the kubelet refuses anonymous access; the apiserver must be explicitly authorized.

# cat /opt/kubernetes/cfg/kubelet-config.yml

……
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
……

# kubectl apply -f apiserver-to-kubelet-rbac.yaml

5. Deploy the Web UI and DNS

https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

# vi recommended.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

# kubectl apply -f recommended.yaml

Create a service account and bind it to the default cluster-admin role:

# cat dashboard-adminuser.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Get the token:

# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Access URL: https://NodeIP:30001 (the Service above maps NodePort 30001 to the Dashboard's HTTPS port 8443)

Log in to the Dashboard with the token from the output.
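The `describe secret` output is verbose; an awk step can isolate just the token value, ready to paste into the login form. A sketch shown against a trimmed sample (the eyJ... string is a placeholder, not a real token); on the master, pipe the `kubectl describe secret` command above into the awk instead:

```shell
# Sketch: print only the token value from 'describe secret' output.
printf 'Name:  admin-user-token-xxxxx\ntoken:      eyJhbGciOiJSUzI1NiIs.PLACEHOLDER\n' |
    awk '/^token:/{print $2}'
```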

# kubectl apply -f coredns.yaml

# kubectl get pods -n kube-system

6. Master High Availability

6.1 Deploy Master Components (same as on Master1)

Copy /opt/kubernetes and the service files from master1:

# scp -r /opt/kubernetes root@192.168.31.64:/opt

# scp -r /opt/etcd/ssl root@192.168.31.64:/opt/etcd

# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.31.64:/usr/lib/systemd/system

Change the apiserver config file to the local IP:

# cat /opt/kubernetes/cfg/kube-apiserver.conf

KUBE_APISERVER_OPTS="--logtostderr=false \

--v=2 \

--log-dir=/opt/kubernetes/logs \

--etcd-servers=https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379 \

--bind-address=192.168.31.64 \

--secure-port=6443 \

--advertise-address=192.168.31.64 \

……

# systemctl start kube-apiserver

# systemctl start kube-controller-manager

# systemctl start kube-scheduler

# systemctl enable kube-apiserver

# systemctl enable kube-controller-manager

# systemctl enable kube-scheduler

6.2 Deploy the Nginx Load Balancer

Nginx RPM packages: http://nginx.org/packages/rhel/7/x86_64/RPMS/

# rpm -vih http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm

# vim /etc/nginx/nginx.conf

……

stream {

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.31.63:6443;
        server 192.168.31.64:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

……

# systemctl start nginx

# systemctl enable nginx

6.3 Nginx + Keepalived High Availability

Primary node:

# yum install keepalived

# vi /etc/keepalived/keepalived.conf

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51    # VRRP router ID; must be unique per instance
    priority 100            # priority; set 90 on the backup server
    advert_int 1            # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.31.60/24
    }
    track_script {
        check_nginx
    }
}

# cat /etc/keepalived/check_nginx.sh

#!/bin/bash
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ]; then
    exit 1
else
    exit 0
fi

# systemctl start keepalived

# systemctl enable keepalived

Backup node:

# cat /etc/keepalived/keepalived.conf


global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51    # VRRP router ID; must be unique per instance
    priority 90             # priority; lower than the master's 100
    advert_int 1            # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.31.60/24
    }
    track_script {
        check_nginx
    }
}

# cat /etc/keepalived/check_nginx.sh

#!/bin/bash
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ]; then
    exit 1
else
    exit 0
fi

# systemctl start keepalived

# systemctl enable keepalived

Test:

# ip a

2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:9d:ee:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.63/24 brd 192.168.31.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.31.60/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe9d:ee30/64 scope link
       valid_lft forever preferred_lft forever

Stop nginx and verify that the VIP fails over to the backup node.

6.4 Point the Nodes at the VIP

Verify that the VIP works:

# curl -k --header "Authorization: Bearer c47ffb939f5ca36231d9e3121a252940" https://192.168.31.60:6443/version

{
  "major": "1",
  "minor": "16",
  "gitVersion": "v1.16.0",
  "gitCommit": "2bd9643cee5b3b3a5ecbd3af49d09018f0773c77",
  "gitTreeState": "clean",
  "buildDate": "2019-09-18T14:27:17Z",
  "goVersion": "go1.12.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}
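For an automated smoke test, the response can be reduced to just the version string. A sketch shown against a captured sample; on a node, pipe the curl command above into the grep instead:

```shell
# Sketch: extract gitVersion from the /version response.
response='{"major":"1","gitVersion": "v1.16.0","compiler": "gc"}'
echo "$response" | grep -o '"gitVersion": "[^"]*"' | cut -d'"' -f4
# prints: v1.16.0
```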

Point the Nodes at the VIP:

# cd /opt/kubernetes/cfg

# grep 192 *

bootstrap.kubeconfig:    server: https://192.168.31.63:6443
kubelet.kubeconfig:    server: https://192.168.31.63:6443
kube-proxy.kubeconfig:    server: https://192.168.31.63:6443

Batch replace:

sed -i 's#192.168.31.63#192.168.31.60#g' *
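The batch swap can be rehearsed in a scratch directory before touching /opt/kubernetes/cfg; a minimal sketch with stand-in kubeconfig files:

```shell
# Rehearse the VIP swap on throwaway copies of the kubeconfig files
dir=$(mktemp -d)
printf 'server: https://192.168.31.63:6443\n' > "$dir/bootstrap.kubeconfig"
printf 'server: https://192.168.31.63:6443\n' > "$dir/kubelet.kubeconfig"
sed -i 's#192.168.31.63#192.168.31.60#g' "$dir"/*
grep -h 'server:' "$dir"/*    # both lines now point at the VIP on :6443
rm -rf "$dir"
```

After the real edit, restart kubelet and kube-proxy on each node so they reconnect through the VIP.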
