Containers and Container Orchestration

Install Docker

# apt-get install apt-transport-https ca-certificates curl software-properties-common

# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# apt-get update && apt-get install -y docker-ce

(Figure: container breakdown)

(Figure: Kubernetes (K8S) breakdown, partial)

Some of the files referenced below need extra proxy/VPN network configuration to download. Once the official files have been downloaded, you can push the images to a local Docker registry; on later deployments simply pull them from there and docker tag them back to the image names defined in the deployment manifests.
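As a concrete sketch of that workflow (the registry address registry.local:5000 and the pause image are only examples, not values taken from this cluster):

/* On a machine with outside access: fetch the official image and push it to the local registry */
# docker pull k8s.gcr.io/pause:3.1
# docker tag k8s.gcr.io/pause:3.1 registry.local:5000/pause:3.1
# docker push registry.local:5000/pause:3.1

/* On the cluster nodes: pull from the local registry and tag the image back to the name the manifests expect */
# docker pull registry.local:5000/pause:3.1
# docker tag registry.local:5000/pause:3.1 k8s.gcr.io/pause:3.1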

Install the Kubernetes components

# systemctl disable firewalld.service

# systemctl stop firewalld.service

# apt-get update && apt-get install -y apt-transport-https

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list

deb http://apt.kubernetes.io/ kubernetes-xenial main

EOF

# apt-get update

# apt-get install -y kubelet kubeadm kubectl

Master node

kubeadm init --apiserver-advertise-address <host IP address, e.g. 10.109.181.110> --pod-network-cidr=10.244.0.0/16

After kubeadm init completes, follow its output and configure the following:

# mkdir -p $HOME/.kube

# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Enable kubectl command-line auto-completion

# echo "source <(kubectl completion bash)" >> ~/.bashrc

Configure the flannel network:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Worker (slave) nodes

On the master node, print the join command with 'kubeadm token create --print-join-command', then run it on each worker node:

# kubeadm join 10.109.181.110:6443 --token ztwxpd.qbp9iaiqsd8v97gg --discovery-token-ca-cert-hash \
    sha256:79ac20fc3f33ab41e23701923f246f997977a70ff3cb40ab10431aee4bf098b3

After node discovery completes, check the status of the basic services.

Install the dashboard

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

In the dashboard Service, change type: ClusterIP to type: NodePort

# kubectl --namespace=kube-system edit service kubernetes-dashboard

# kubectl --namespace=kube-system get service kubernetes-dashboard

NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.111.96.162   <none>        443:32588/TCP   2d
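Instead of editing the Service interactively, the same change can be made with a one-line patch (a sketch; service name and namespace as above):

# kubectl --namespace=kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'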

If this is a test environment and you want to skip the kubeconfig/token login, apply the configuration below, then open the dashboard and click Skip.

(Figure: skip-login / admin configuration)
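A common (insecure, test-only) way to make the Skip button useful is to grant the dashboard's service account cluster-admin. A sketch, assuming the service account created by the official manifest is kubernetes-dashboard in kube-system:

/* WARNING: gives the dashboard full cluster-admin rights; only for throw-away test clusters */
# kubectl create clusterrolebinding kubernetes-dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard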

Use Firefox to open <host IP>:32588 and access the dashboard. The browser will show a security-policy warning; click Advanced and continue, then Skip on the login page. Chrome shows an error that cannot be bypassed; there is currently no workaround.


Deploy EFK

EFK is really three services: Elasticsearch, Fluentd, and Kibana. Together they collect and monitor container logs and provide a visual interface for more flexible management. The modules can be mixed and matched; ELK, for example, replaces Fluentd with Logstash for log collection.

# wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml

# wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/es-service.yaml

# wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml

# wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml    /* note: comment out the configuration starting with 'nodeSelector' */

# wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-service.yaml

# wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml

# kubectl create -f .

Check the service status

# kubectl cluster-info

Kubernetes master is running at https://<host IP address>:6443

Elasticsearch is running at https://<host IP address>:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy

Kibana is running at https://<host IP address>:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy

KubeDNS is running at https://<host IP address>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Start a proxy and expose it on port 8888 (any port you like):

# kubectl proxy --address='0.0.0.0' --port=8888 --accept-hosts='^*$' &

Open the Kibana console at http://<host IP address>:8888/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana#/management/kibana/index?_g=() for further configuration, mainly the index pattern and related settings.

Deploy GlusterFS

On all nodes:

# apt-get install software-properties-common

# add-apt-repository ppa:gluster/glusterfs-3.8

# apt-get update && apt-get install glusterfs-server

# mkdir /opt/glusterd

# mkdir /opt/gfs_data

# sed -i 's/var\/lib/opt/g' /etc/glusterfs/glusterd.vol
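The sed above rewrites the glusterd working directory from the packaged default under /var/lib to /opt. A quick check (the expected line below assumes the stock glusterd.vol layout):

# grep working-directory /etc/glusterfs/glusterd.vol
    option working-directory /opt/glusterd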

# systemctl status glusterfs-server.service

● glusterfs-server.service - LSB: GlusterFS server
   Loaded: loaded (/etc/init.d/glusterfs-server; bad; vendor preset: enabled)
   Active: active (running) since Thu 2018-06-07 07:31:51 UTC; 31min ago
     Docs: man:systemd-sysv-generator(8)
   CGroup: /system.slice/glusterfs-server.service
           └─19538 /usr/sbin/glusterd -p /var/run/glusterd.pid

Jun 07 07:31:49 k8s-cluster-1 systemd[1]: Starting LSB: GlusterFS server...
Jun 07 07:31:49 k8s-cluster-1 glusterfs-server[19528]:  * Starting glusterd service glusterd
Jun 07 07:31:51 k8s-cluster-1 glusterfs-server[19528]:    ...done.
Jun 07 07:31:51 k8s-cluster-1 systemd[1]: Started LSB: GlusterFS server.

On the master node:

Make sure every node's hostname can be resolved:

root@k8s-cluster-1:~/gluster# cat /etc/hosts

10.109.181.110 k8s-cluster-1

10.109.181.117 k8s-cluster-2

10.109.181.119 k8s-cluster-3

root@k8s-cluster-1:~/gluster# gluster peer probe k8s-cluster-2

peer probe: success.

root@k8s-cluster-1:~/gluster# gluster peer probe k8s-cluster-3

peer probe: success.

root@k8s-cluster-1:~/gluster# gluster peer status

Number of Peers: 2

Hostname: k8s-cluster-2

Uuid: d10af069-09f6-4d86-8120-dde1afa4393b

State: Peer in Cluster (Connected)

Hostname: k8s-cluster-3

Uuid: c6d4f3eb-78c5-4b10-927e-f1c6e41330d5

State: Peer in Cluster (Connected)

Create the corresponding Endpoints object

curl -O https://raw.githubusercontent.com/kubernetes/examples/master/staging/volumes/glusterfs/glusterfs-endpoints.json

Edit it to list your GlusterFS node IPs (configuration shown in the original figure); a sketch follows.
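A sketch of the edited glusterfs-endpoints.json, built from the node IPs and port that appear in the kubectl get ep output below (the original screenshot is not reproduced):

# cat > glusterfs-endpoints.json <<'EOF'
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": { "name": "glusterfs-cluster" },
  "subsets": [
    { "addresses": [{ "ip": "10.109.181.110" }], "ports": [{ "port": 1207 }] },
    { "addresses": [{ "ip": "10.109.181.117" }], "ports": [{ "port": 1207 }] },
    { "addresses": [{ "ip": "10.109.181.119" }], "ports": [{ "port": 1207 }] }
  ]
}
EOF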

root@k8s-cluster-1:~/gluster# kubectl apply -f glusterfs-endpoints.json

endpoints "glusterfs-cluster" created

root@k8s-cluster-1:~/gluster# kubectl get ep

NAME                ENDPOINTS                                                      AGE
glusterfs-cluster   10.109.181.110:1207,10.109.181.117:1207,10.109.181.119:1207   5s
influxdb                                                                           16d
kubernetes          10.109.181.110:6443                                            27d

Create the corresponding Service

curl -O https://raw.githubusercontent.com/kubernetes/examples/master/staging/volumes/glusterfs/glusterfs-service.json

Edit it as needed (configuration shown in the original figure); a sketch follows.
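A sketch of glusterfs-service.json: a Service without a selector, so it binds to the manually created Endpoints object of the same name (port taken from the output below):

# cat > glusterfs-service.json <<'EOF'
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "glusterfs-cluster" },
  "spec": { "ports": [{ "port": 1207 }] }
}
EOF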

root@k8s-cluster-1:~/gluster# kubectl apply -f glusterfs-service.json

service "glusterfs-cluster" created

root@k8s-cluster-1:~/gluster# kubectl get svc

NAME                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
glusterfs-cluster   ClusterIP      10.97.199.53     <none>        1207/TCP         6s
influxdb            LoadBalancer   10.109.218.156   <pending>     8086:31240/TCP   16d
kubernetes          ClusterIP      10.96.0.1        <none>        443/TCP          27d

Create the volume and fine-tune its parameters

# gluster volume create k8s-volume transport tcp k8s-cluster-2:/opt/gfs_data k8s-cluster-3:/opt/gfs_data force

# gluster volume quota k8s-volume enable

# gluster volume quota k8s-volume limit-usage / 1TB

# gluster volume set k8s-volume performance.cache-size 4GB

# gluster volume set k8s-volume performance.io-thread-count 16

# gluster volume set k8s-volume network.ping-timeout 10

# gluster volume set k8s-volume performance.write-behind-window-size 1024MB

Basic test

# curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/volumes/glusterfs/glusterfs-pod.json

In the JSON, change the corresponding field to "path": "k8s-volume" (the volume name created above).

# kubectl apply -f glusterfs-pod.json

Log in to the pod and run df -h to verify that the volume has been allocated and mounted, for example:
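(Assuming the pod from the sample JSON is named glusterfs; the name may differ in your copy.)

# kubectl exec glusterfs -- df -h | grep k8s-volume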

The Heketi service

Put simply, Heketi provides a RESTful API and a simple CLI on top of GlusterFS, enabling much more flexible management of the distributed storage.

# wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz

# tar -xvf heketi-client-v7.0.0.linux.amd64.tar.gz

# cp heketi-client/bin/heketi-cli /bin/

# git clone https://github.com/gluster/gluster-kubernetes && cd ./gluster-kubernetes/deploy

/* Create a separate namespace */

# kubectl create namespace gluster

There are a few prerequisites to satisfy before running the install script, e.g. kernel modules that must be loaded (see the script itself for the full list: https://github.com/gluster/gluster-kubernetes/blob/master/deploy/gk-deploy).

# modprobe dm_snapshot dm_mirror dm_thin_pool

Modify the corresponding DaemonSet template so that the kernel modules directory is mapped into the pod:

root@k-1:/home/ubuntu/gluster-kubernetes/deploy# vim kube-templates/glusterfs-daemonset.yaml

      - name: kernel-modules
        hostPath:
          path: "/lib/modules"      # changed from /var/lib/modules

Every node needs the mount.glusterfs command to be available. On some Red Hat-based systems this command is provided by the glusterfs-fuse package.

# add-apt-repository ppa:gluster/glusterfs-3.12

# apt-get update

# apt-get install -y glusterfs-client

Run the install script

# ./gk-deploy -g -n gluster    /* with the -g option it will deploy a GlusterFS DaemonSet onto your Kubernetes cluster, treating the nodes listed in the topology file as hyper-converged nodes with both Kubernetes and storage devices on them */
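gk-deploy expects a topology.json in the deploy directory describing the storage nodes and their raw block devices. A sketch built from this cluster's storage nodes and the /dev/vdb devices shown in the topology output further below; check the sample file shipped in the repository for the authoritative schema:

# cat > topology.json <<'EOF'
{
  "clusters": [
    {
      "nodes": [
        {
          "node": { "hostnames": { "manage": ["k-3"],   "storage": ["10.109.181.131"] }, "zone": 1 },
          "devices": ["/dev/vdb"]
        },
        {
          "node": { "hostnames": { "manage": ["k-pv1"], "storage": ["10.109.181.152"] }, "zone": 1 },
          "devices": ["/dev/vdb"]
        },
        {
          "node": { "hostnames": { "manage": ["k-pv2"], "storage": ["10.109.181.134"] }, "zone": 1 },
          "devices": ["/dev/vdb"]
        }
      ]
    }
  ]
}
EOF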

Remove volume groups created by a previous run (if any):

# vgremove -ff $(sudo vgdisplay | grep -i "VG Name" | awk '{print $3}')

Master node and storage nodes

Here we have three storage nodes: k-3, k-pv1 and k-pv2.

# add-apt-repository ppa:gluster/glusterfs-3.12 && apt-get update && apt-get install -y glusterfs-client

For details see http://www.lxweimin.com/p/2c6a0eacfe4a

root@k-1:/home/ubuntu/gluster-kubernetes/deploy# export HEKETI_CLI_SERVER=$(kubectl get svc/deploy-heketi -n gluster --template 'http://{{.spec.clusterIP}}:{{(index .spec.ports 0).port}}')

root@k-1:/home/ubuntu/gluster-kubernetes/deploy# echo $HEKETI_CLI_SERVER

http://x.x.x.x:8080

root@k-1:/home/ubuntu/gluster-kubernetes/deploy# curl $HEKETI_CLI_SERVER/hello

Hello from Heketi

root@k-1:/home/ubuntu/gluster-kubernetes/deploy# heketi-cli -s $HEKETI_CLI_SERVER cluster list

Clusters:

Id:035b137fbe2c02021cc7c381710ed0c4 [block]

root@k-1:/home/ubuntu/gluster-kubernetes/deploy# heketi-cli -s $HEKETI_CLI_SERVER topology info

Cluster Id: a17b06b860a5c731725ae435d03ed750

    File:  true
    Block: true

    Volumes:

    Nodes:

        Node Id: 13206c89322302eee45a7d3d5a0b2175
        State: online
        Cluster Id: a17b06b860a5c731725ae435d03ed750
        Zone: 1
        Management Hostnames: k-3
        Storage Hostnames: 10.109.181.131
        Devices:
                Id:a5987c9a076eac86378825a552ce8b16   Name:/dev/vdb   State:online   Size (GiB):49   Used (GiB):0   Free (GiB):49
                        Bricks:

        Node Id: 952e7876c36b3177a6f30b91f328f752
        State: online
        Cluster Id: a17b06b860a5c731725ae435d03ed750
        Zone: 1
        Management Hostnames: k-pv2
        Storage Hostnames: 10.109.181.134
        Devices:
                Id:56bc8b325b258cade583905f2d6cba0e   Name:/dev/vdb   State:online   Size (GiB):99   Used (GiB):0   Free (GiB):99
                        Bricks:

        Node Id: a28dbd80cd95122a4cd834146b7939ce
        State: online
        Cluster Id: a17b06b860a5c731725ae435d03ed750
        Zone: 1
        Management Hostnames: k-pv1
        Storage Hostnames: 10.109.181.152
        Devices:
                Id:58a6e5a003c6aa1d2ccc4acec67cbd5c   Name:/dev/vdb   State:online   Size (GiB):99   Used (GiB):0   Free (GiB):99
                        Bricks:

Create the corresponding PV and PVC, plus a test pod

See the example files at https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md; a sketch follows.
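A sketch of that flow: a StorageClass pointing at the Heketi REST endpoint plus a PVC that dynamically provisions a GlusterFS volume. Substitute your own resturl; auth parameters are omitted because the deployment above did not enable them, and the object names here are illustrative:

# cat <<'EOF' | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://x.x.x.x:8080"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: glusterfs-storage
  resources:
    requests:
      storage: 5Gi
EOF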


HELM

Helm is a package manager for Kubernetes: users describe applications without having to manage the underlying relationships between pods, services, endpoints and so on. It is a powerful application-focused tool.

Official description: Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

The installation steps are as follows:

# curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh

# chmod 700 get_helm.sh

# ./get_helm.sh

# helm version

Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Install Tiller

# helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.9.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

# helm init --upgrade

$HELM_HOME has been configured at /Users/test/.helm.

Tiller (the helm server side component) has been installed into your Kubernetes Cluster.

Happy Helming!

# helm version

Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

# kubectl create serviceaccount --namespace kube-system tiller

# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Install the WordPress application as a test

# helm install --name wordpress-helm --set "persistence.enabled=false,mariadb.persistence.enabled=false" stable/wordpress

NAME:   wordpress-helm
LAST DEPLOYED: Thu Jun 28 09:03:36 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                      TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
wordpress-helm-mariadb    ClusterIP     10.103.74.128  <none>       3306/TCP                    1s
wordpress-helm-wordpress  LoadBalancer  10.108.70.1    <pending>    80:32211/TCP,443:32191/TCP  1s

==> v1beta1/Deployment
NAME                      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
wordpress-helm-wordpress  1        1        1           0          1s

==> v1beta1/StatefulSet
NAME                    DESIRED  CURRENT  AGE
wordpress-helm-mariadb  1        1        1s

==> v1/Pod(related)
NAME                                      READY  STATUS             RESTARTS  AGE
wordpress-helm-wordpress-8f698f574-xbbhj  0/1    ContainerCreating  0         0s
wordpress-helm-mariadb-0                  0/1    Pending            0         0s

==> v1/Secret
NAME                      TYPE    DATA  AGE
wordpress-helm-mariadb    Opaque  2     1s
wordpress-helm-wordpress  Opaque  2     1s

==> v1/ConfigMap
NAME                          DATA  AGE
wordpress-helm-mariadb        1     1s
wordpress-helm-mariadb-tests  1     1s

NOTES:

1. Get the application URL:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available. Watch the status with: 'kubectl get svc --namespace default -w wordpress-helm-wordpress'

  export SERVICE_IP=$(kubectl get svc --namespace default wordpress-helm-wordpress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

  echo http://$SERVICE_IP/admin

2. Get the credentials to log in to the blog:

  echo Username: user

  echo Password: $(kubectl get secret --namespace default wordpress-helm-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)

# helm ls

NAME            REVISION  UPDATED                   STATUS    CHART            NAMESPACE
wordpress-helm  1         Thu Jun 28 09:03:36 2018  DEPLOYED  wordpress-2.0.0  default
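A few more day-to-day Helm 2 commands, as a rough quick reference (release and chart names are the ones used above):

# helm status wordpress-helm    /* show the release's resources and NOTES again */
# helm upgrade wordpress-helm stable/wordpress --set "persistence.enabled=false,mariadb.persistence.enabled=false"
# helm delete --purge wordpress-helm    /* remove the release and free its name */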


Appendix

K8S API

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#podsecuritypolicy-v1beta1-extensions

Execute a command after the pod is instantiated

https://kubernetes.io/cn/docs/tasks/inject-data-application/define-command-argument-container/

Capability

https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

https://github.com/torvalds/linux/blob/master/include/uapi/linux/capability.h

AppArmor

https://kubernetes.io/docs/tutorials/clusters/apparmor/

Networking

https://kubernetes.io/docs/concepts/cluster-administration/networking/

Kompose

https://k8smeetup.github.io/docs/tools/kompose/user-guide/

Cheat sheet

https://kubernetes.io/docs/reference/kubectl/cheatsheet
