K8s: Deploying Kubernetes 1.22.1 on CentOS 8.4 with kubeadm

CentOS 8 has been out for a while, and Kubernetes 1.22.1 has now been released, so today we deploy Kubernetes on CentOS 8 using kubeadm.

The test environment consists of virtual machines running on ESXi.

OS: CentOS 8.4

Kubernetes: 1.22.1

IP plan:

hostname        role            IP
node-master01   master01        10.255.10.140
node-worker01   node-worker01   10.255.10.141
node-worker02   node-worker02   10.255.10.142

1. System preparation (the following steps are identical on master and worker nodes)

Check the OS version:

[root@localhost ~]# cat /etc/centos-release
CentOS Linux release 8.4.2105

Configure the network (note the second DNS entry must be DNS2, and NAME/DEVICE should match the interface name ens160):

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens160
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens160
UUID=039303a5-c70d-4973-8c91-97eaa071c23d
DEVICE=ens160
ONBOOT=yes
IPADDR=10.255.10.140
NETMASK=255.255.255.0
GATEWAY=10.255.10.2
DNS1=119.29.29.29
DNS2=223.5.5.5

Add the Aliyun repository:

[root@localhost ~]# rm -rfv /etc/yum.repos.d/*
[root@localhost ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo

Configure the host names:

[root@master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.255.10.140 node-master01
10.255.10.141 node-worker01
10.255.10.142 node-worker02

[root@master01 ~]# hostnamectl set-hostname node-master01
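Run the matching command on each worker too, so the host names line up with /etc/hosts; a quick sketch following the IP plan above:

[root@node-worker01 ~]# hostnamectl set-hostname node-worker01
[root@node-worker02 ~]# hostnamectl set-hostname node-worker02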

Turn off swap and comment out the swap entry in /etc/fstab:

[root@master01 ~]# swapoff -a
[root@master01 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Mar 31 22:44:34 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root /     xfs  defaults 0 0
UUID=5fecb240-379b-4331-ba04-f41338e81a6e /boot ext4 defaults 1 2
/dev/mapper/cl-home /home xfs  defaults 0 0
#/dev/mapper/cl-swap swap  swap defaults 0 0
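A quick sanity check that swap is really gone (the Swap row should show all zeros):

[root@master01 ~]# free -h | grep -i swap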

Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains:

[root@master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

[root@master01 ~]# sysctl --system
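If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is not loaded yet; a short sketch for loading it now and on every boot:

[root@master01 ~]# modprobe br_netfilter
[root@master01 ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
[root@master01 ~]# sysctl --system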

2. Install common packages

[root@master01 ~]# yum install vim bash-completion net-tools gcc -y

3. Install docker-ce from the Aliyun repository

[root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master01 ~]# yum -y install docker-ce containerd.io

Installing docker-ce may fail with an error like the following:

[root@master01 ~]# yum -y install docker-ce
CentOS-8 - Base - mirrors.aliyun.com           14 kB/s | 3.8 kB   00:00
CentOS-8 - Extras - mirrors.aliyun.com        6.4 kB/s | 1.5 kB   00:00
CentOS-8 - AppStream - mirrors.aliyun.com      16 kB/s | 4.3 kB   00:00
Docker CE Stable - x86_64                      40 kB/s |  22 kB   00:00
Error:
 Problem: package docker-ce-3:19.03.8-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
  - package containerd.io-1.2.13-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Workaround: the el7 containerd.io builds in the repo are excluded on CentOS 8, so install a newer containerd.io package manually first:

[root@master01 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/7.9/x86_64/stable/Packages/containerd.io-1.4.9-3.1.el7.x86_64.rpm
[root@master01 ~]# yum install containerd.io-1.4.9-3.1.el7.x86_64.rpm

After that, installing docker-ce succeeds.

Configure the Aliyun Docker registry mirror (exec-opts sets the systemd cgroup driver, which the kubelet expects):

[root@master01 ~]# sudo mkdir -p /etc/docker
[root@master01 ~]# sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://jsfcoj0r.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@master01 ~]# sudo systemctl daemon-reload
[root@master01 ~]# sudo systemctl restart docker

Check with docker info | grep Cgroup; if the output shows Cgroup Driver: systemd, the change took effect.
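Since the kubelet depends on Docker, it is also worth making sure Docker comes back up after a reboot:

[root@master01 ~]# sudo systemctl enable docker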

4. Install kubectl, kubelet, and kubeadm

Add the Aliyun Kubernetes repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install:

[root@master01 ~]# yum makecache
[root@master01 ~]# yum list kubeadm --showduplicates | sort -r   # list the installable versions
[root@master01 ~]# yum install -y kubelet-1.22.1-0 kubeadm-1.22.1-0 kubectl-1.22.1-0
[root@master01 ~]# systemctl enable kubelet && systemctl start kubelet
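A quick check that the pinned version landed; both commands should report v1.22.1:

[root@master01 ~]# kubeadm version -o short
[root@master01 ~]# kubectl version --client --short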

5. Initialize the Kubernetes cluster (master node)

The required container images are hosted outside mainland China, so there are two ways around this: use a proxy, or switch to a domestic mirror.

[root@master01 ~]# kubeadm init \
  --apiserver-advertise-address=10.255.10.140 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.22.1 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.50.0.0/16

The pod CIDR is 10.50.0.0/16, and the API server address is the master's own IP.

This step is critical: by default kubeadm pulls the required images from k8s.gcr.io, which is unreachable from mainland China, so --image-repository points it at the Aliyun registry instead.
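The images can also be pulled ahead of time (the init output below mentions this option as well); a sketch using the same mirror and version:

[root@master01 ~]# kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.22.1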

After a successful initialization, the cluster returns output like the following:

[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost.localdomain] and IPs [10.1.0.1 10.255.10.140]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [10.255.10.140 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [10.255.10.140 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.003919 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: lccux7.6pkxkidaxj2l6uzq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.255.10.140:6443 --token lccux7.6pkxkidaxj2l6uzq \
    --discovery-token-ca-cert-hash sha256:ebdf5006d7a0033e5ae77587da03855236b47d4afd98c89ec88fcce59d14e086

Record the final part of this output; it is the command the other nodes run to join the Kubernetes cluster.
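Bootstrap tokens expire after 24 hours by default; if the token has expired by the time a worker joins, a fresh join command can be printed on the master with:

[root@master01 ~]# kubeadm token create --print-join-command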

Set up kubectl as the output suggests:

[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run the following so kubectl commands can be tab-completed:

[root@master01 ~]# source <(kubectl completion bash)
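To keep completion across shell sessions, append it to the profile (this relies on the bash-completion package installed in step 2):

[root@master01 ~]# echo 'source <(kubectl completion bash)' >> ~/.bashrc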

Check the nodes and pods:

[root@master01 ~]# kubectl get node
NAME            STATUS   ROLES                  AGE   VERSION
node-master01   Ready    control-plane,master   10m   v1.22.1

[root@master01 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS        AGE
kube-system   coredns-7f6cbbb7b8-h2zd4                1/1     Running   0               10m
kube-system   coredns-7f6cbbb7b8-w7rt9                1/1     Running   0               10m
kube-system   etcd-node-master01                      1/1     Running   4 (6m33s ago)   10m
kube-system   kube-apiserver-node-master01            1/1     Running   4 (6m23s ago)   10m
kube-system   kube-controller-manager-node-master01   1/1     Running   4 (6m33s ago)   10m
kube-system   kube-proxy-qfvq4                        1/1     Running   0               2m14s
kube-system   kube-proxy-svkhj                        1/1     Running   0               54s
kube-system   kube-proxy-zvh7p                        1/1     Running   1 (6m33s ago)   10m
kube-system   kube-scheduler-node-master01            1/1     Running   3 (6m28s ago)   10m

6. Install the Calico network (Calico is used here; a comparison of the Flannel and Calico workflows will be posted separately)

[root@master01 ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

configmap/calico-config created

customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created

clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrole.rbac.authorization.k8s.io/calico-node created

clusterrolebinding.rbac.authorization.k8s.io/calico-node created

daemonset.apps/calico-node created

serviceaccount/calico-node created

deployment.apps/calico-kube-controllers created

serviceaccount/calico-kube-controllers created

poddisruptionbudget.policy/calico-kube-controllers created

Check the pods and nodes again:

[root@master01 ~]# kubectl get node
NAME            STATUS   ROLES                  AGE   VERSION
node-master01   Ready    control-plane,master   10m   v1.22.1

[root@master01 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS        AGE
kube-system   calico-kube-controllers-75f8f6cc59-dpznm   1/1     Running   0               5m45s
kube-system   calico-node-7bbpc                          1/1     Running   0               2m14s
kube-system   calico-node-tjx6c                          1/1     Running   0               5m45s
kube-system   calico-node-xzgtg                          1/1     Running   0               54s
kube-system   coredns-7f6cbbb7b8-h2zd4                   1/1     Running   0               10m
kube-system   coredns-7f6cbbb7b8-w7rt9                   1/1     Running   0               10m
kube-system   etcd-node-master01                         1/1     Running   4 (6m33s ago)   10m
kube-system   kube-apiserver-node-master01               1/1     Running   4 (6m23s ago)   10m
kube-system   kube-controller-manager-node-master01      1/1     Running   4 (6m33s ago)   10m
kube-system   kube-proxy-qfvq4                           1/1     Running   0               2m14s
kube-system   kube-proxy-svkhj                           1/1     Running   0               54s
kube-system   kube-proxy-zvh7p                           1/1     Running   1 (6m33s ago)   10m
kube-system   kube-scheduler-node-master01               1/1     Running   3 (6m28s ago)   10m

The cluster is now in a healthy state.

7. Extend the kube-apiserver NodePort range

By default, Kubernetes only exposes NodePorts in the range 30000-32767, but this range can be changed through configuration.

Edit the manifest with vim /etc/kubernetes/manifests/kube-apiserver.yaml

Under spec: containers: - command:, append the flag - --service-node-port-range=1-65535

[root@master01 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.255.10.140:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.255.10.140
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.1.0.0/16
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --service-node-port-range=1-65535
    image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.255.10.140
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
...

Then run the following on every machine:

[root@master01 ~]# systemctl restart docker.service && systemctl restart kubelet.service
[root@node-worker01 ~]# systemctl restart docker.service && systemctl restart kubelet.service
[root@node-worker02 ~]# systemctl restart docker.service && systemctl restart kubelet.service

Check the pod status:

[root@master01 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-75f8f6cc59-dpznm   1/1     Running   6 (52s ago)   3h27m
kube-system   calico-node-7bbpc                          1/1     Running   2 (35s ago)   3h23m
kube-system   calico-node-tjx6c                          1/1     Running   2 (52s ago)   3h27m
kube-system   calico-node-xzgtg                          1/1     Running   2 (29s ago)   3h22m
kube-system   coredns-7f6cbbb7b8-h2zd4                   1/1     Running   2 (47s ago)   3h31m
kube-system   coredns-7f6cbbb7b8-w7rt9                   1/1     Running   2 (47s ago)   3h31m
kube-system   etcd-node-master01                         1/1     Running   6 (52s ago)   3h31m
kube-system   kube-apiserver-node-master01               1/1     Running   1 (36s ago)   29s
kube-system   kube-controller-manager-node-master01      1/1     Running   7 (52s ago)   3h31m
kube-system   kube-proxy-qfvq4                           1/1     Running   2 (35s ago)   3h23m
kube-system   kube-proxy-svkhj                           1/1     Running   2 (29s ago)   3h22m
kube-system   kube-proxy-zvh7p                           1/1     Running   3 (52s ago)   3h31m
kube-system   kube-scheduler-node-master01               1/1     Running   6 (52s ago)   3h31m
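To confirm the widened range actually works, a throwaway Service can be created on a port below 30000 (the nginx Deployment and Service names here are only illustrative):

[root@master01 ~]# kubectl create deployment nginx --image=nginx
[root@master01 ~]# kubectl create service nodeport nginx --tcp=80:80 --node-port=80
[root@master01 ~]# kubectl get svc nginx   # PORT(S) should show 80:80/TCP

With the old default range, the --node-port=80 request would have been rejected.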

8. Install kubernetes-dashboard

The official dashboard manifest does not expose the Service via NodePort, so download the YAML locally and add a NodePort to the Service:

[root@master01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml

[root@master01 ~]# vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard


[root@master01 ~]# kubectl create -f recommended.yaml

namespace/kubernetes-dashboard created

serviceaccount/kubernetes-dashboard created

service/kubernetes-dashboard created

secret/kubernetes-dashboard-certs created

secret/kubernetes-dashboard-csrf created

secret/kubernetes-dashboard-key-holder created

configmap/kubernetes-dashboard-settings created

role.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

deployment.apps/kubernetes-dashboard created

service/dashboard-metrics-scraper created

deployment.apps/dashboard-metrics-scraper created

Check the pods and the Service:

[root@master01 ~]# kubectl get pod --all-namespaces | grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper-c45b7869d-fzrsr   1/1   Running   0   54s
kubernetes-dashboard   kubernetes-dashboard-576cb95f94-dl8dr       1/1   Running   0   54s

[root@master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.1.128.40    <none>        8000/TCP        84s
kubernetes-dashboard        NodePort    10.1.131.243   <none>        443:30000/TCP   85s

Access the dashboard in a browser, preferably Firefox (Chrome and some other browsers refuse to open the page because of the untrusted self-signed certificate). With the NodePort above, the address is https://<node-ip>:30000, e.g. https://10.255.10.140:30000.

Log in with a token; run the following commands to obtain one:

[root@master01 ~]# kubectl create serviceaccount dashboard -n kubernetes-dashboard
[root@master01 ~]# kubectl create rolebinding def-ns-admin --clusterrole=admin --serviceaccount=default:def-ns-admin
[root@master01 ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard

[root@master01 ~]# kubectl describe sa dashboard -n kubernetes-dashboard
Name:                dashboard
Namespace:           kubernetes-dashboard
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   dashboard-token-2bwzt
Tokens:              dashboard-token-2bwzt
Events:              <none>

[root@master01 ~]# kubectl describe secret dashboard-token-2bwzt -n kubernetes-dashboard
Name:         dashboard-token-2bwzt
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard
              kubernetes.io/service-account.uid: 3c0a442f-06ee-4b81-b056-11f9b12ca0f5

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InlwSUFzN1AwbEM4NmgyTERZd19NeVZkMWJaMXh2dHNmQUR5aHg3aE5BYW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tMmJ3enQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2MwYTQ0MmYtMDZlZS00YjgxLWIwNTYtMTFmOWIxMmNhMGY1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZCJ9.eABZfsBDw-jow3lBX2DGt4O0kebiWKc4S23HIngLmyW4A_kkKNAkfDEzvozo3DFTQ0WRIJy3KFhDvPBcpttqRjxj2_DCFFrOTFJkgpYRd9E_8aaB_cSeCbANPp0bVVaZhor50fRgRQBiXe_8Bq_rYpjvlJD3TB87H8M1OdSCZ5kZWmt6aKsu-g8N_dFIf8rxyyHn-TB9aOQq6_v7Vv50UN7LfJj3HY3Bm1Kb66kS8vmp4X2QDwlyOn4zw11EkrHNcZ1Er0wRtsau3IFx_bydJXJ9GvDFa-vUs85cshyoRtPIcnaWAeOCvRbcX1X8UwOm6_QT0PaQDTORBAhiBcDNKg
ca.crt:     1099 bytes
namespace:  20 bytes
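For scripting, the same token can be extracted in one line (this relies on the dashboard ServiceAccount created above and on the pre-1.24 behavior of auto-generating a token Secret for each ServiceAccount):

[root@master01 ~]# kubectl -n kubernetes-dashboard get secret \
    $(kubectl -n kubernetes-dashboard get sa dashboard -o jsonpath='{.secrets[0].name}') \
    -o jsonpath='{.data.token}' | base64 -d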

The dashboard login page then appears.

Following the steps above, Kubernetes 1.22.1 installs successfully on CentOS 8.
