Install Docker
# apt-get install apt-transport-https ca-certificates curl software-properties-common
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# apt-get update && apt-get install -y docker-ce
Container breakdown
K8s breakdown (partial)
Some of the files below can only be downloaded with additional network (proxy) configuration. Once the official images have been downloaded, push them to a local Docker registry; on later deployments, simply pull them from there and re-tag them with docker tag to the image names defined in the deployment manifests.
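The pull/re-tag workflow described above can be sketched as follows. This is only an illustration: the registry address registry.local:5000 and the pause image are placeholders, not part of this setup.

```shell
# One-time: mirror an officially downloaded image into a local registry
# (registry.local:5000 is a hypothetical address; substitute your own).
docker pull k8s.gcr.io/pause:3.1
docker tag k8s.gcr.io/pause:3.1 registry.local:5000/pause:3.1
docker push registry.local:5000/pause:3.1

# On later deployments: pull from the local registry, then tag the image
# back to the name the manifests expect.
docker pull registry.local:5000/pause:3.1
docker tag registry.local:5000/pause:3.1 k8s.gcr.io/pause:3.1
```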
Install the K8s components
# systemctl disable firewalld.service
# systemctl stop firewalld.service
# apt-get update && apt-get install -y apt-transport-https
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update
# apt-get install -y kubelet kubeadm kubectl
Master node
kubeadm init --apiserver-advertise-address <host IP address, e.g. 10.109.181.110> --pod-network-cidr=10.244.0.0/16
Follow the instructions in the output:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Enable kubectl command-line auto-completion
# echo "source <(kubectl completion bash)" >> ~/.bashrc
Configure the flannel network:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
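Once flannel is applied it can take a minute or two for the cluster to settle; a quick sanity check with standard kubectl commands:

```shell
# The node should report Ready once the CNI plugin is up.
kubectl get nodes
# All kube-system pods (including the kube-flannel-ds-* pods)
# should eventually reach the Running state.
kubectl get pods -n kube-system -o wide
```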
Worker nodes
On the master node, print the join command with 'kubeadm token create --print-join-command'
# kubeadm join 10.109.181.110:6443 --token ztwxpd.qbp9iaiqsd8v97gg --discovery-token-ca-cert-hash sha256:79ac20fc3f33ab41e23701923f246f997977a70ff3cb40ab10431aee4bf098b3
Install the dashboard
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Change ClusterIP to NodePort in the line 'type: ClusterIP':
# kubectl --namespace=kube-system edit service kubernetes-dashboard
# kubectl --namespace=kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.111.96.162   <none>        443:32588/TCP   2d
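Instead of editing the service interactively, the same type change can be made non-interactively with kubectl patch (an equivalent one-liner, shown here as an alternative):

```shell
# Switch the dashboard service from ClusterIP to NodePort without an editor.
kubectl --namespace=kube-system patch service kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort"}}'
```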
In a test environment you may want to skip the kubeconfig/token login. Follow the steps below, open the dashboard, and click Skip on the login screen.
Browse to <host IP>:32588 with Firefox to access the dashboard. The login page shows a security-policy warning; click Advanced, proceed, and then Skip. Chrome reports an error that cannot be bypassed; there is currently no workaround.
Deploy EFK
EFK consists of three services: Elasticsearch, Fluentd, and Kibana. Together they collect and monitor container logs and provide a visual interface for more flexible management. The components can be combined freely; ELK, for example, replaces Fluentd with Logstash for log collection.
# wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml
Note: comment out the configuration starting with 'nodeSelector'.
# kubectl create -f .
Check the service status
# kubectl cluster-info
Kubernetes master is running at https://<host IP address>:6443
Elasticsearch is running at https://<host IP address>:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://<host IP address>:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
KubeDNS is running at https://<host IP address>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Start a proxy and map it to port 8888 (any port you like):
# kubectl proxy --address='0.0.0.0' --port=8888 --accept-hosts='^*$' &
Open the Kibana console at http://<host IP address>:8888/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana#/management/kibana/index?_g=() for further configuration.
Deploy GlusterFS
On all nodes:
# apt-get install software-properties-common
# add-apt-repository ppa:gluster/glusterfs-3.8
# apt-get update && apt-get install glusterfs-server
# mkdir /opt/glusterd
# mkdir /opt/gfs_data
# sed -i 's/var\/lib/opt/g' /etc/glusterfs/glusterd.vol
# systemctl status glusterfs-server.service
● glusterfs-server.service - LSB: GlusterFS server
   Loaded: loaded (/etc/init.d/glusterfs-server; bad; vendor preset: enabled)
   Active: active (running) since Thu 2018-06-07 07:31:51 UTC; 31min ago
     Docs: man:systemd-sysv-generator(8)
   CGroup: /system.slice/glusterfs-server.service
           └─19538 /usr/sbin/glusterd -p /var/run/glusterd.pid
Jun 07 07:31:49 k8s-cluster-1 systemd[1]: Starting LSB: GlusterFS server...
Jun 07 07:31:49 k8s-cluster-1 glusterfs-server[19528]:  * Starting glusterd service glusterd
Jun 07 07:31:51 k8s-cluster-1 glusterfs-server[19528]:    ...done.
Jun 07 07:31:51 k8s-cluster-1 systemd[1]: Started LSB: GlusterFS server
On the master node:
Make sure every node can be resolved by name:
root@k8s-cluster-1:~/gluster# cat /etc/hosts
…
10.109.181.110 k8s-cluster-1
10.109.181.117 k8s-cluster-2
10.109.181.119 k8s-cluster-3
root@k8s-cluster-1:~/gluster# gluster peer probe k8s-cluster-2
peer probe: success.
root@k8s-cluster-1:~/gluster# gluster peer probe k8s-cluster-3
peer probe: success.
root@k8s-cluster-1:~/gluster# gluster peer status
Number of Peers: 2
Hostname: k8s-cluster-2
Uuid: d10af069-09f6-4d86-8120-dde1afa4393b
State: Peer in Cluster (Connected)
Hostname: k8s-cluster-3
Uuid: c6d4f3eb-78c5-4b10-927e-f1c6e41330d5
State: Peer in Cluster (Connected)
Create the corresponding endpoints
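The glusterfs-endpoints.json applied below is not shown in this document; a minimal sketch, assuming the node IPs and port 1207 seen in this cluster, might look like:

```json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": { "name": "glusterfs-cluster" },
  "subsets": [
    { "addresses": [{ "ip": "10.109.181.110" }], "ports": [{ "port": 1207 }] },
    { "addresses": [{ "ip": "10.109.181.117" }], "ports": [{ "port": 1207 }] },
    { "addresses": [{ "ip": "10.109.181.119" }], "ports": [{ "port": 1207 }] }
  ]
}
```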
root@k8s-cluster-1:~/gluster# kubectl apply -f glusterfs-endpoints.json
endpoints "glusterfs-cluster" created
root@k8s-cluster-1:~/gluster# kubectl get ep
NAME                ENDPOINTS                                                     AGE
glusterfs-cluster   10.109.181.110:1207,10.109.181.117:1207,10.109.181.119:1207   5s
influxdb            <none>                                                        16d
kubernetes          10.109.181.110:6443                                           27d
Create the corresponding service
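Similarly, a minimal glusterfs-service.json sketch (the service needs only a name matching the endpoints and the same port; it deliberately has no selector):

```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "glusterfs-cluster" },
  "spec": { "ports": [{ "port": 1207 }] }
}
```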
root@k8s-cluster-1:~/gluster# kubectl apply -f glusterfs-service.json
service "glusterfs-cluster" created
root@k8s-cluster-1:~/gluster# kubectl get svc
NAME                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
glusterfs-cluster   ClusterIP      10.97.199.53     <none>        1207/TCP         6s
influxdb            LoadBalancer   10.109.218.156   <pending>     8086:31240/TCP   16d
kubernetes          ClusterIP      10.96.0.1        <none>        443/TCP          27d
Create the volume and tune parameters
# gluster volume create k8s-volume transport tcp k8s-cluster-2:/opt/gfs_data k8s-cluster-3:/opt/gfs_data force
# gluster volume quota k8s-volume enable
# gluster volume quota k8s-volume limit-usage / 1TB
# gluster volume set k8s-volume performance.cache-size 4GB
# gluster volume set k8s-volume performance.io-thread-count 16
# gluster volume set k8s-volume network.ping-timeout 10
# gluster volume set k8s-volume performance.write-behind-window-size 1024MB
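After tuning, the volume state and quota can be verified with standard gluster commands:

```shell
# Show volume type, bricks, and the non-default options set above.
gluster volume info k8s-volume
# Confirm the 1TB quota on the volume root.
gluster volume quota k8s-volume list
```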
Basic test
Edit the corresponding json so that "path": "k8s-volume"
# kubectl apply -f glusterfs-pod.json
Log in to the pod and verify with df -h that the volume was allocated and mounted.
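The glusterfs-pod.json used above is not reproduced in this document; a sketch of how a glusterfs volume is declared in a pod spec (the nginx image is a placeholder; endpoints and path names match this setup):

```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": { "name": "glusterfs" },
  "spec": {
    "containers": [{
      "name": "glusterfs",
      "image": "nginx",
      "volumeMounts": [{ "mountPath": "/mnt/glusterfs", "name": "glusterfsvol" }]
    }],
    "volumes": [{
      "name": "glusterfsvol",
      "glusterfs": {
        "endpoints": "glusterfs-cluster",
        "path": "k8s-volume",
        "readOnly": true
      }
    }]
  }
}
```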
The Heketi service
In short, Heketi provides a RESTful API on top of GlusterFS, plus a simple command line, for more flexible management of distributed storage.
# wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
# tar -xvf heketi-client-v7.0.0.linux.amd64.tar.gz
# cp heketi-client/bin/heketi-cli /usr/local/bin/
# git clone https://github.com/gluster/gluster-kubernetes && cd ./gluster-kubernetes/deploy
/* Create a separate namespace */
# kubectl create namespace gluster
The install script has some prerequisites, e.g. kernel modules that must be loaded (see the script itself for the full list: https://github.com/gluster/gluster-kubernetes/blob/master/deploy/gk-deploy)
# modprobe dm_snapshot dm_mirror dm_thin_pool
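gk-deploy also reads a topology.json describing the nodes and their raw block devices. A sketch matching the three storage nodes and /dev/vdb devices that appear in the topology output later in this section (adapt hostnames, IPs, and devices to your cluster):

```json
{
  "clusters": [{
    "nodes": [
      {
        "node": { "hostnames": { "manage": ["k-3"], "storage": ["10.109.181.131"] }, "zone": 1 },
        "devices": ["/dev/vdb"]
      },
      {
        "node": { "hostnames": { "manage": ["k-pv1"], "storage": ["10.109.181.152"] }, "zone": 1 },
        "devices": ["/dev/vdb"]
      },
      {
        "node": { "hostnames": { "manage": ["k-pv2"], "storage": ["10.109.181.134"] }, "zone": 1 },
        "devices": ["/dev/vdb"]
      }
    ]
  }]
}
```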
Edit the corresponding DaemonSet file so that the kernel modules are mapped into the pod:
root@k-1:/home/ubuntu/gluster-kubernetes/deploy# vim kube-templates/glusterfs-daemonset.yaml
 - name: kernel-modules
        hostPath:
          path: "/lib/modules"   # changed from /var/lib/modules
Every node needs the mount.glusterfs command to be available; on some Red Hat systems it is provided by the glusterfs-fuse package.
# add-apt-repository ppa:gluster/glusterfs-3.12
# apt-get update
# apt-get install -y glusterfs-client
Run the install script
# ./gk-deploy -g -n gluster  /* With -g, it deploys a GlusterFS DaemonSet onto your Kubernetes cluster, treating the nodes listed in the topology file as hyper-converged nodes with both Kubernetes and storage devices on them. */
Remove any previously created volume groups:
# vgremove -ff $(sudo vgdisplay | grep -i "VG Name" | awk '{print $3}')
Master and storage nodes
Here we have three storage nodes: k-3, k-pv1 and k-pv2.
# add-apt-repository ppa:gluster/glusterfs-3.12 && apt-get update && apt-get install -y glusterfs-client
For details see http://www.lxweimin.com/p/2c6a0eacfe4a
root@k-1:/home/ubuntu/gluster-kubernetes/deploy# export HEKETI_CLI_SERVER=$(kubectl get svc/deploy-heketi -n gluster --template 'http://{{.spec.clusterIP}}:{{(index .spec.ports 0).port}}')
root@k-1:/home/ubuntu/gluster-kubernetes/deploy# echo $HEKETI_CLI_SERVER
root@k-1:/home/ubuntu/gluster-kubernetes/deploy# curl $HEKETI_CLI_SERVER/hello
Hello from Heketi
root@k-1:/home/ubuntu/gluster-kubernetes/deploy# heketi-cli -s $HEKETI_CLI_SERVER cluster list
Clusters:
Id:035b137fbe2c02021cc7c381710ed0c4 [block]
root@k-1:/home/ubuntu/gluster-kubernetes/deploy# heketi-cli -s $HEKETI_CLI_SERVER topology info
Cluster Id: a17b06b860a5c731725ae435d03ed750
    File:  true
    Block: true
    Volumes:
    Nodes:
        Node Id: 13206c89322302eee45a7d3d5a0b2175
        State: online
        Cluster Id: a17b06b860a5c731725ae435d03ed750
        Zone: 1
        Management Hostnames: k-3
        Storage Hostnames: 10.109.181.131
        Devices:
                Id:a5987c9a076eac86378825a552ce8b16   Name:/dev/vdb   State:online   Size (GiB):49   Used (GiB):0   Free (GiB):49
                        Bricks:
        Node Id: 952e7876c36b3177a6f30b91f328f752
        State: online
        Cluster Id: a17b06b860a5c731725ae435d03ed750
        Zone: 1
        Management Hostnames: k-pv2
        Storage Hostnames: 10.109.181.134
        Devices:
                Id:56bc8b325b258cade583905f2d6cba0e   Name:/dev/vdb   State:online   Size (GiB):99   Used (GiB):0   Free (GiB):99
                        Bricks:
        Node Id: a28dbd80cd95122a4cd834146b7939ce
        State: online
        Cluster Id: a17b06b860a5c731725ae435d03ed750
        Zone: 1
        Management Hostnames: k-pv1
        Storage Hostnames: 10.109.181.152
        Devices:
                Id:58a6e5a003c6aa1d2ccc4acec67cbd5c   Name:/dev/vdb   State:online   Size (GiB):99   Used (GiB):0   Free (GiB):99
                        Bricks:
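With the topology loaded, volumes can now be requested through Heketi instead of via the gluster CLI. A sketch using standard heketi-cli subcommands (size is in GiB; the replica count is an example choice):

```shell
# Ask Heketi to provision a volume; it selects bricks across nodes itself.
heketi-cli -s $HEKETI_CLI_SERVER volume create --size=3 --replica=2
# List and inspect what was created.
heketi-cli -s $HEKETI_CLI_SERVER volume list
```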
For the full example files see: https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md
HELM
Helm is a package manager for K8s. It lets users describe applications without caring about the underlying pods, services, endpoints and their relationships; it is a powerful application-focused tool.
官方解釋:Helm helps you manage Kubernetes applications — Helm Charts helps you define, install, and upgrade even the most complex Kubernetes application.
Installation steps:
# curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
# chmod 700 get_helm.sh
# ./get_helm.sh
# helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Install Tiller
# helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.9.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
# helm init --upgrade
$HELM_HOME has been configured at /Users/test/.helm.
Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
Happy Helming!
# helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
# kubectl create serviceaccount --namespace kube-system tiller
# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Install the wordpress application as a test
# helm install --name wordpress-helm --set "persistence.enabled=false,mariadb.persistence.enabled=false" stable/wordpress
NAME:   wordpress-helm
LAST DEPLOYED: Thu Jun 28 09:03:36 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME                      TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
wordpress-helm-mariadb    ClusterIP     10.103.74.128  <none>       3306/TCP                    1s
wordpress-helm-wordpress  LoadBalancer  10.108.70.1    <pending>    80:32211/TCP,443:32191/TCP  1s
==> v1beta1/Deployment
NAME                      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
wordpress-helm-wordpress  1        1        1           0          1s
==> v1beta1/StatefulSet
NAME                    DESIRED  CURRENT  AGE
wordpress-helm-mariadb  1        1        1s
==> v1/Pod(related)
NAME                                      READY  STATUS             RESTARTS  AGE
wordpress-helm-wordpress-8f698f574-xbbhj  0/1    ContainerCreating  0         0s
wordpress-helm-mariadb-0                  0/1    Pending            0         0s
==> v1/Secret
NAME                      TYPE    DATA  AGE
wordpress-helm-mariadb    Opaque  2     1s
wordpress-helm-wordpress  Opaque  2     1s
==> v1/ConfigMap
NAME                          DATA  AGE
wordpress-helm-mariadb        1     1s
wordpress-helm-mariadb-tests  1     1s
NOTES:
1. Get the URL
 NOTE: It may take a few minutes for the LoadBalancer IP to be available. Watch the status with: 'kubectl get svc --namespace default -w wordpress-helm-wordpress'
 export SERVICE_IP=$(kubectl get svc --namespace default wordpress-helm-wordpress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
 echo http://$SERVICE_IP/admin
2. Get the credentials to log in to the blog
 echo Username: user
 echo Password: $(kubectl get secret --namespace default wordpress-helm-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)
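Secret values come back base64-encoded, which is why the command above pipes through base64 --decode. A standalone illustration (the encoded value here is a made-up example, not a real password):

```shell
# Kubernetes Secrets store data base64-encoded; kubectl returns the raw
# encoding, so it must be decoded before use.
# 'cGFzc3dvcmQ=' is the encoding of the sample string "password".
encoded='cGFzc3dvcmQ='
printf '%s' "$encoded" | base64 --decode
# prints: password
```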
# helm ls
NAME            REVISION  UPDATED                   STATUS    CHART            NAMESPACE
wordpress-helm  1         Thu Jun 28 09:03:36 2018  DEPLOYED  wordpress-2.0.0  default
Appendix
K8S API
Execute command after Pod instantiated
https://kubernetes.io/cn/docs/tasks/inject-data-application/define-command-argument-container/
Capability
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
https://github.com/torvalds/linux/blob/master/include/uapi/linux/capability.h
AppArmor
https://kubernetes.io/docs/tutorials/clusters/apparmor/
Networking
https://kubernetes.io/docs/concepts/cluster-administration/networking/
Kompose
https://k8smeetup.github.io/docs/tools/kompose/user-guide/
Cheat sheet