Advanced k8s HA Cluster Setup (Part 1)

Preface

After getting a taste of what k8s can do, we wanted to bring it into production to speed up business iteration. Production, however, comes with one hard requirement: the k8s cluster must have no single point of failure. Well, that simply means the k8s cluster has to be highly available. Below I introduce two currently popular ways of making the k8s masters highly available.


Overview

The first kind of k8s HA cluster introduced here is, in my opinion, better described as an active-standby k8s cluster. It consists of three masters, with keepalived running on all three and providing a single VIP as the apiserver entry point. keepalived priorities decide which master holds the VIP: the VIP lands on the node with the highest priority, worker nodes reach that master through the VIP, and the other two masters stay in sync with it through the etcd cluster.

Drawback: high availability here rests entirely on keepalived. As long as the highest-priority node is healthy, all traffic keepalived directs goes through the primary master; only when the primary master fails or goes down can the VIP move to one of the two standby masters. The primary master can therefore end up under heavy load while the other two masters may never serve any requests, which wastes resources.

Still, it does eliminate the single point of failure.

Below is the ideal high-availability architecture diagram.


Ideal k8s HA architecture

The HA architecture deployed in this article:


HA architecture used in this article

The diagram above is taken from https://www.kubernetes.org.cn/3536.html.

With that settled, here is the technology stack used in this article:

keepalived+etcd+k8s master

keepalived provides the VIP that worker nodes use as the apiserver entry point; etcd must run as a highly available cluster to keep the data in sync; on top of that, each node gets a basic k8s master deployment.


Installation Preparation


Node deployment details

Software versions:

docker 17.03.2-ce

socat-1.7.3.2-2.el7.x86_64

kubelet-1.10.0-0.x86_64

kubernetes-cni-0.6.0-0.x86_64

kubectl-1.10.0-0.x86_64

kubeadm-1.10.0-0.x86_64

The packages above were introduced, with download links, in my previous post on setting up a basic k8s cluster.

Environment configuration

systemctl stop firewalld
systemctl disable firewalld

Set the hostname on each node and add the host entries.
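For example, the hostname itself can be set with hostnamectl; the names here are assumed to match the host entries below:

hostnamectl set-hostname master1    # on master1; use master2 / master3 on the other two nodes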

cat <<EOF > /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.1 master1
192.168.100.2 master2
192.168.100.3 master3
EOF

swapoff -a

sed -i 's/.*swap.*/#&/' /etc/fstab

setenforce 0

echo "* soft nofile 65536" >> /etc/security/limits.conf

echo "* hard nofile 65536" >> /etc/security/limits.conf

echo "* soft nproc 65536" >> /etc/security/limits.conf

echo "* hard nproc 65536" >> /etc/security/limits.conf

echo "* soft memlock unlimited" >> /etc/security/limits.conf

echo "* hard memlock unlimited" >> /etc/security/limits.conf

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

echo 1 > /proc/sys/net/ipv4/ip_forward

sysctl -w net.bridge.bridge-nf-call-iptables=1

vim /etc/sysctl.conf

net.ipv4.ip_forward=1

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

sysctl -p
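To confirm these kernel parameters are in effect, you can query them directly; all three should print 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward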


Installing keepalived

Prepare libnfnetlink-devel-1.0.1-4.el7.x86_64.rpm locally, then download the keepalived source:

wget http://www.keepalived.org/software/keepalived-1.4.3.tar.gz

yum install -y libnfnetlink-devel-1.0.1-4.el7.x86_64.rpm

yum -y install libnl libnl-devel

tar -xzvf keepalived-1.4.3.tar.gz

cd keepalived-1.4.3

./configure --prefix=/usr/local/keepalived   # check the build environment


If the output looks like the screenshot above, the environment is fine. If instead you get an error such as:

checking openssl/ssl.h usability... no
checking openssl/ssl.h presence... no
checking for openssl/ssl.h... no
configure: error:
  !!! OpenSSL is not properly installed on your system. !!!
  !!! Can not include OpenSSL headers files.            !!!

then install the openssl and openssl-devel packages and re-run the configure step:

yum install openssl openssl-devel

./configure --prefix=/usr/local/keepalived


make && make install

cp keepalived/etc/init.d/keepalived /etc/init.d/

mkdir /etc/keepalived

cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/

cp keepalived/etc/sysconfig/keepalived /etc/sysconfig/

cp /usr/local/keepalived/sbin/keepalived /usr/sbin/

ps -aux |grep keepalived

chkconfig keepalived on

Check keepalived's status with systemctl status keepalived.

Repeat the steps above on all three masters until keepalived is installed everywhere.

Once installed, write the configuration files.

keepalived.conf on master1:

cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.100.4:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33                # local physical NIC; check with ip a
    virtual_router_id 61
    priority 120                   # highest on the primary node, lower on each backup
    advert_int 1
    mcast_src_ip 192.168.100.1     # change to this node's own IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
        # comment out this node's own IP
        #192.168.100.1
        192.168.100.2
        192.168.100.3
    }
    virtual_ipaddress {
        192.168.100.4/22           # VIP
    }
    track_script {
        #CheckK8sMaster            # keep commented out until k8s is deployed, otherwise it is likely to trigger errors
    }
}
EOF


keepalived.conf on master2:

cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.100.4:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33                # local physical NIC; check with ip a
    virtual_router_id 61
    priority 110                   # highest on the primary node, lower on each backup
    advert_int 1
    mcast_src_ip 192.168.100.2     # change to this node's own IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
        # comment out this node's own IP
        192.168.100.1
        #192.168.100.2
        192.168.100.3
    }
    virtual_ipaddress {
        192.168.100.4/22           # VIP
    }
    track_script {
        #CheckK8sMaster            # keep commented out until k8s is deployed, otherwise it is likely to trigger errors
    }
}
EOF

keepalived.conf on master3:

cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.100.4:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33                # local physical NIC; check with ip a
    virtual_router_id 61
    priority 100                   # highest on the primary node, lower on each backup
    advert_int 1
    mcast_src_ip 192.168.100.3     # change to this node's own IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
        # comment out this node's own IP
        192.168.100.1
        192.168.100.2
        #192.168.100.3
    }
    virtual_ipaddress {
        192.168.100.4/22           # VIP
    }
    track_script {
        #CheckK8sMaster            # keep commented out until k8s is deployed, otherwise it is likely to trigger errors
    }
}
EOF

Start keepalived:

systemctl restart keepalived

Check with ip a:


Besides the node's own IP there is now an extra virtual IP.

You can also ping the VIP to verify that it has taken effect.
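For example, assuming the interface and VIP used in the configs above (ens33 and 192.168.100.4), a quick check looks like:

ip a show ens33 | grep 192.168.100.4
ping -c 3 192.168.100.4

The VIP should appear on whichever master currently holds it and should answer the ping.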

Installing etcd

1. Set up the cfssl environment

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl

chmod +x cfssljson_linux-amd64

mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

chmod +x cfssl-certinfo_linux-amd64

mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH
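A quick sanity check that the tools are on the PATH:

cfssl version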

2. Create the CA configuration files (the IPs below are the etcd node IPs)

mkdir /root/ssl

cd /root/ssl

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes-Soulmate": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-Soulmate",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.100.1",
    "192.168.100.2",
    "192.168.100.3"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd
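Optionally, inspect the generated certificate with the cfssl-certinfo tool installed above; it should show the CN "etcd", the three etcd node IPs in the SAN list, and the expiry:

cfssl-certinfo -cert etcd.pem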

3. Distribute the etcd certificates from master1 to master2 and master3

mkdir -p /etc/etcd/ssl

cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/

ssh -n master2 "mkdir -p /etc/etcd/ssl && exit"

ssh -n master3 "mkdir -p /etc/etcd/ssl && exit"

scp -r /etc/etcd/ssl/*.pem master2:/etc/etcd/ssl/

scp -r /etc/etcd/ssl/*.pem master3:/etc/etcd/ssl/

Download etcd-v3.3.2-linux-amd64.tar.gz, extract it, and install:

wget https://github.com/coreos/etcd/releases/download/v3.3.2/etcd-v3.3.2-linux-amd64.tar.gz

tar -xzvf etcd-v3.3.2-linux-amd64.tar.gz

cd etcd-v3.3.2-linux-amd64

cp etcd* /bin/

# verify the installation
etcd --version
etcd Version: 3.3.2
Git SHA: c9d46ab37
Go Version: go1.9.4
Go OS/Arch: linux/amd64

etcdctl --version
etcdctl version: 3.3.2
API version: 2

Create an etcd data directory on every master: mkdir -p /u03/etcd/

You may choose a different data path, but remember to change it in the unit files below accordingly.

master1

cat <<EOF >/etc/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

Documentation=https://github.com/coreos

[Service]

Type=notify

WorkingDirectory=/u03/etcd/

ExecStart=/usr/bin/etcd \
  --name master1 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.100.1:2380 \
  --listen-peer-urls https://192.168.100.1:2380 \
  --listen-client-urls https://192.168.100.1:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.100.1:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster master1=https://192.168.100.1:2380,master2=https://192.168.100.2:2380,master3=https://192.168.100.3:2380 \
  --initial-cluster-state new \
  --data-dir=/u03/etcd/

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF


master2

cat <<EOF >/etc/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

Documentation=https://github.com/coreos

[Service]

Type=notify

WorkingDirectory=/u03/etcd/

ExecStart=/usr/bin/etcd \
  --name master2 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.100.2:2380 \
  --listen-peer-urls https://192.168.100.2:2380 \
  --listen-client-urls https://192.168.100.2:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.100.2:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster master1=https://192.168.100.1:2380,master2=https://192.168.100.2:2380,master3=https://192.168.100.3:2380 \
  --initial-cluster-state new \
  --data-dir=/u03/etcd/

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF


master3

cat <<EOF >/etc/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

Documentation=https://github.com/coreos

[Service]

Type=notify

WorkingDirectory=/u03/etcd/

ExecStart=/usr/bin/etcd \
  --name master3 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.100.3:2380 \
  --listen-peer-urls https://192.168.100.3:2380 \
  --listen-client-urls https://192.168.100.3:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.100.3:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster master1=https://192.168.100.1:2380,master2=https://192.168.100.2:2380,master3=https://192.168.100.3:2380 \
  --initial-cluster-state new \
  --data-dir=/u03/etcd/

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF


Run the following on every master to start the etcd cluster:

cd /etc/systemd/system/

mv etcd.service /usr/lib/systemd/system/

systemctl daemon-reload

systemctl enable etcd

systemctl restart etcd

systemctl status etcd

Check that the cluster is healthy with the following command:

etcdctl --endpoints=https://192.168.100.1:2379,https://192.168.100.2:2379,https://192.168.100.3:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
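The flags above use the etcdctl v2 API. If you prefer the v3 API, a roughly equivalent health check (a sketch, reusing the same certificate paths) is:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.100.1:2379,https://192.168.100.2:2379,https://192.168.100.3:2379 \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health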


With keepalived and etcd in place, we can start deploying k8s.

Install docker and the k8s RPMs, and load the k8s images. See my previous post on basic k8s cluster setup for details.

Modify the kubelet configuration on all nodes:

sed -i -e 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Restart kubelet:

systemctl daemon-reload && systemctl restart kubelet

Initialize the cluster and create the cluster configuration file.

We use CoreDNS for in-cluster DNS resolution and canal as the network plugin.

# generate a token
# keep the token, it will be needed later
kubeadm token generate

cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://192.168.100.1:2379
  - https://192.168.100.2:2379
  - https://192.168.100.3:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.10.0
api:
  advertiseAddress: "192.168.100.1"   # this master's IP
token: "hpobow.vw1g1ya5dre7sq06"      # the token you just generated and saved
tokenTTL: "0s"                        # never expires
apiServerCertSANs:
- master1
- master2
- master3
- 192.168.100.1
- 192.168.100.2
- 192.168.100.3
- 192.168.100.4
featureGates:
  CoreDNS: true
EOF


Once the file is ready, run kubeadm init --config config.yaml.

If it fails, inspect the error with journalctl -xeu kubelet to see the kubelet startup log, or investigate based on the relevant logs.

Reset with kubeadm reset.

Note: if data has already been written to etcd, first clear the records under the etcd data directory; see the sketch below.
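A minimal sketch of that cleanup, assuming the /u03/etcd/ data directory configured above (run on every master):

systemctl stop etcd
rm -rf /u03/etcd/*
systemctl start etcd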

If it succeeds, you will see output like the following:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node

as root:

kubeadm join 192.168.100.1:6443 --token hpobow.vw1g1ya5dre7sq06 --discovery-token-ca-cert-hash sha256:f79b68fb698c92b9336474eb3bf184e847fgerbc58a6296911892662b98b1315

As the output says, at this point even root cannot yet use kubectl to control the cluster; the environment needs to be configured first.

For non-root users:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

For the root user:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

Then source the environment:

source ~/.bash_profile

Distribute the certificate files that kubeadm generated to master2 and master3:

scp -r /etc/kubernetes/pki master2:/etc/kubernetes/

scp -r /etc/kubernetes/pki master3:/etc/kubernetes/
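Since master2 and master3 will later be initialized with the same config.yaml, it is convenient to copy it over now as well (assuming it was created in the current directory on master1):

scp config.yaml master2:~/
scp config.yaml master3:~/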

Deploy the canal network. Run on master1:

kubectl apply -f \

https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml

kubectl apply -f \
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml

Pulling the images may take a while. You can also download the YAML files first, edit the image paths, and use images you have already pulled:

wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml

wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml

Once everything is deployed, check whether the current node is Ready:

[root@master1 ~]# kubectl get node

NAME STATUS ROLES AGE VERSION

master1   Ready     master    31m       v1.10.0

Use kubectl get pods --all-namespaces to check that every pod is running. If any pod is in an Error or Crash state, run kubectl describe pod <pod-name> -n kube-system to see what went wrong.

Run the initialization on master2 and master3

Run kubeadm init --config config.yaml on the other two nodes, using the same config.yaml that was used on master1; you will get the same result as on master1.

Set up the environment variables in the same way.

For non-root users:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

For the root user:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

Then source the environment:

source ~/.bash_profile

[root@master1 ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

master1   Ready     master    1h        v1.10.0
master2   Ready     master    1h        v1.10.0
master3   Ready     master    1h        v1.10.0

Check the pods running across all nodes with kubectl get pods --all-namespaces -o wide.

With that, the basic active-standby high-availability setup is complete. To deploy the dashboard, see my previous article on basic k8s cluster setup. Note that if you use basic auth against the apiserver for the dashboard, that setting must be applied on every master to preserve high availability.

Also, to use HPA on k8s 1.10 you have to add - --horizontal-pod-autoscaler-use-rest-clients=false to /etc/kubernetes/manifests/kube-controller-manager.yaml on every master node; only then can CPU usage be monitored for automatic scaling. A rough excerpt of where the flag goes follows below.
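Roughly, the flag goes into the command list of the kube-controller-manager static pod manifest, something like the excerpt below (other flags omitted); the kubelet restarts the static pod automatically after the file is saved:

spec:
  containers:
  - command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-use-rest-clients=false
    # ... the existing flags stay as they are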

Monitoring add-on: heapster

You need heapster.yaml, influxdb.yaml, and grafana.yaml.

vim heapster.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: heapster
subjects:
  - kind: ServiceAccount
    name: heapster
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:heapster
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: 192.168.220.84/third_party/heapster-amd64:v1.3.0   # my private registry; replace with your own image path
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster


vim influxdb.yaml

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: 192.168.220.84/third_party/heapster-influxdb-amd64:v1.1.1   # private registry; replace with your own image path
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb


vim grafana.yaml

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: 192.168.220.84/third_party/heapster-grafana-amd64:v4.4.1   # private registry; replace with your own image path
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
          value: /
      volumes:
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 31236
  selector:
    k8s-app: grafana


Run: kubectl apply -f heapster.yaml -f influxdb.yaml -f grafana.yaml
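You can watch the monitoring pods come up with, for example:

kubectl get pods -n kube-system | grep -E 'heapster|influxdb|grafana'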

What it looks like on the dashboard:

heapster view

heapster view

grafana view

Joining worker nodes

Install the following software versions, mentioned at the beginning of this article:

docker 17.03.2-ce

socat-1.7.3.2-2.el7.x86_64

kubelet-1.10.0-0.x86_64

kubernetes-cni-0.6.0-0.x86_64

kubectl-1.10.0-0.x86_64

kubeadm-1.10.0-0.x86_64

Environment configuration

systemctl stop firewalld
systemctl disable firewalld

Set the hostname on the node and add the host entries:

cat <<EOF > /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.1 master1
192.168.100.2 master2
192.168.100.3 master3
EOF

swapoff -a

sed -i 's/.*swap.*/#&/' /etc/fstab

setenforce 0

echo "* soft nofile 65536" >> /etc/security/limits.conf

echo "* hard nofile 65536" >> /etc/security/limits.conf

echo "* soft nproc 65536" >> /etc/security/limits.conf

echo "* hard nproc 65536" >> /etc/security/limits.conf

echo "* soft memlock unlimited" >> /etc/security/limits.conf

echo "* hard memlock unlimited" >> /etc/security/limits.conf

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

echo 1 > /proc/sys/net/ipv4/ip_forward

sysctl -w net.bridge.bridge-nf-call-iptables=1

vim /etc/sysctl.conf

net.ipv4.ip_forward=1

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

sysctl -p

Then run the join command saved from the master: kubeadm join 192.168.100.1:6443 --token hpobow.vw1g1ya5dre7sq06 --discovery-token-ca-cert-hash sha256:f79b68fb698c92b9336474eb3bf184e847fgerbc58a6296911892662b98b1315 and the node joins the cluster.
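Back on any master, the new worker should then show up (and turn Ready once the network pods are running):

kubectl get nodes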

Main references:

kubeadm installs a Kubernetes v1.10 cluster (detailed guide)

Kubernetes 1.9 offline deployment
