1. Pod
The Pod is the most basic unit of operation in Kubernetes. It contains one or more tightly coupled containers, much like peas in a pod. A Pod can be thought of as the application-level "logical host" of a containerized environment: the containers in one Pod are usually tightly coupled, and Pods are created, started, and destroyed on Nodes.
Why does Kubernetes wrap another layer, the Pod, around containers? One important reason is that communication between Docker containers is constrained by Docker's networking model: in the Docker world, one container has to be linked to another container to reach the service (port) it exposes, and maintaining links between large numbers of containers is very cumbersome. By grouping several containers into one virtual "host", the Pod lets those containers reach each other simply over localhost.
The application containers in a Pod share the same set of resources:
(1) PID namespace: applications in the Pod can see each other's process IDs.
(2) Network namespace: the containers in the Pod share the same IP address and port space.
(3) IPC namespace: the containers in the Pod can communicate with each other using System V IPC or POSIX message queues.
(4) UTS namespace: the containers in the Pod share one hostname.
(5) Volumes (shared storage volumes): every container in the Pod can access Volumes defined at the Pod level.
It is not recommended to run multiple instances of the same application inside one Pod, i.e. do not run two or more containers from the same image in a single Pod: they would easily conflict on ports, and all containers of a Pod run on the same Node anyway.
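To illustrate the shared network namespace, here is a minimal sketch of a two-container Pod. The names and images (web, log-agent, registry.example.com/...) are placeholders for this example only, not part of the setup below; the point is that the second container reaches the first via localhost:80, and two containers listening on the same port would collide.

apiVersion: v1
kind: Pod
metadata:
  name: two-container-demo          # hypothetical example Pod
spec:
  containers:
  - name: web                       # serves HTTP on port 80
    image: registry.example.com/web-app:1.0
    ports:
    - containerPort: 80
  - name: log-agent                 # a different image; talks to the web container via localhost:80
    image: registry.example.com/log-agent:1.0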
A Pod is defined in a YAML- or JSON-format configuration file. For the full list of fields that can appear in the YAML or JSON definition, see the official documentation at http://kubernetes.io/docs/user-guide/pods/multi-container/
Below is a Pod named php-test defined in YAML in the file php-pod.yaml. Its kind is Pod, and the spec section mainly contains the definition of the containers (more than one container may be defined). The file is placed on the master.
apiVersion: v1
kind: Pod
metadata:
  name: php-test
  labels:
    name: php-test
spec:
  containers:
  - name: php-test
    image: 192.168.174.131:5000/php-base:1.0
    env:
    - name: ENV_TEST_1
      value: env_test_1
    - name: ENV_TEST_2
      value: env_test_2
    ports:
    - containerPort: 80
      hostPort: 80
Creating the pod reports an error:
[root@localhost k8s]# kubectl create -f ./php-pod.yaml
Error from server: error when creating "./php-pod.yaml": pods "php-test" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
[root@localhost k8s]#
Fix: edit KUBE_ADMISSION_CONTROL in /etc/kubernetes/apiserver and remove ServiceAccount from the admission-control list.
[root@localhost k8s]# vi /etc/kubernetes/apiserver

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
Restart kube-apiserver:
[root@localhost k8s]# systemctl restart kube-apiserver.service
Then create the pod again:
[root@localhost k8s]# kubectl create -f ./php-pod.yaml
pod "php-test" created
1.3 Checking which node the pod was created on
[root@localhost k8s]# kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
php-test   1/1       Running   0          3m
[root@localhost k8s]# kubectl get pod php-test -o wide
NAME       READY     STATUS    RESTARTS   AGE       NODE
php-test   1/1       Running   0          3m        192.168.174.130
[root@localhost k8s]#
The pod landed on 192.168.174.130, so run docker ps -a on that node:
[root@localhost ~]# docker ps -a
CONTAINER ID        IMAGE                               COMMAND                  CREATED              STATUS              PORTS                NAMES
9ca9a8d1bde1        192.168.174.131:5000/php-base:1.0   "/bin/sh -c '/usr/loc"   About a minute ago   Up About a minute                        k8s_php-test.ac88419d_php-test_default_173c9f54-a71c-11e6-a280-000c29066541_fab25c8c
bec792435916        kubernetes/pause                    "/pause"                 About a minute ago   Up About a minute   0.0.0.0:80->80/tcp   k8s_POD.b28ffa81_php-test_default_173c9f54-a71c-11e6-a280-000c29066541_e1c8ba7b
[root@localhost ~]#
There are two containers: one runs the php-base image, and the other runs kubernetes/pause. The pause container is the network container: one is started alongside every Pod, and its only job is to sit and wait, holding the Pod's network (note that the 0.0.0.0:80->80/tcp port mapping is attached to it, not to the application container).
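A quick way to confirm this on the node is to ask Docker which network namespace the application container joined. This is a sketch using the container IDs from the listing above (yours will differ), and the expected value is an assumption based on how the kubelet wires containers to the pause container:

# The application container should report that it shares the pause
# container's network namespace, i.e. a value like "container:<pause-container-id>".
docker inspect -f '{{.HostConfig.NetworkMode}}' 9ca9a8d1bde1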
The php-base image has an info.php page added to it. Opening http://192.168.174.130/info.php in a browser shows the container working normally, so the pod is fine.
View the pod's details with kubectl describe pod php-test:
[root@localhost k8s]# kubectl describe pod php-test
Name:           php-test
Namespace:      default
Node:           192.168.174.130/192.168.174.130
Start Time:     Thu, 10 Nov 2016 16:02:47 +0800
Labels:         name=php-test
Status:         Running
IP:             172.17.42.2
Controllers:
Containers:
  php-test:
    Container ID:       docker://9ca9a8d1bde1e13da2e7ab47fc05331eb6a8c2b6566662576b742f98e2ec9609
    Image:              192.168.174.131:5000/php-base:1.0
    Image ID:           docker://sha256:104c7334b9624b054994856318e54b6d1de94c9747ab9f73cf25ae5c240a4de2
    Port:               80/TCP
    QoS Tier:
      cpu:              BestEffort
      memory:           BestEffort
    State:              Running
      Started:          Thu, 10 Nov 2016 16:03:04 +0800
    Ready:              True
    Restart Count:      0
    Environment Variables:
      ENV_TEST_1:       env_test_1
      ENV_TEST_2:       env_test_2
Conditions:
  Type          Status
  Ready         True
No volumes.
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath                   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------                   --------        ------                  -------
  14m           14m             1       {default-scheduler }                                            Normal          Scheduled               Successfully assigned php-test to 192.168.174.130
  14m           14m             2       {kubelet 192.168.174.130}                                       Warning         MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  14m           14m             1       {kubelet 192.168.174.130}       spec.containers{php-test}       Normal          Pulled                  Container image "192.168.174.131:5000/php-base:1.0" already present on machine
  14m           14m             1       {kubelet 192.168.174.130}       spec.containers{php-test}       Normal          Created                 Created container with docker id 9ca9a8d1bde1
  14m           14m             1       {kubelet 192.168.174.130}       spec.containers{php-test}       Normal          Started                 Started container with docker id 9ca9a8d1bde1
[root@localhost k8s]#
kubectl get pod php-test -o yaml or kubectl get pod php-test -o json prints even more detailed information about the pod.
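Since the full object dump is long, it can be handy to pull out just a couple of status fields. A simple sketch using grep (podIP and hostIP are standard fields in the pod's status):

# show only the pod's IP and the IP of the node hosting it
kubectl get pod php-test -o yaml | grep -E 'podIP|hostIP'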
Next, a couple of failure tests. (1) Stop the containers on the node with docker stop $(docker ps -a -q); Kubernetes automatically creates new replacement containers.
[root@localhost ~]# docker stop $(docker ps -a -q)
9ca9a8d1bde1
bec792435916
[root@localhost ~]# docker ps -a
CONTAINER ID        IMAGE                               COMMAND                  CREATED             STATUS                        PORTS                NAMES
19aba2fc5300        192.168.174.131:5000/php-base:1.0   "/bin/sh -c '/usr/loc"   2 seconds ago       Up 1 seconds                                       k8s_php-test.ac88419d_php-test_default_173c9f54-a71c-11e6-a280-000c29066541_e32e07e1
514903617a80        kubernetes/pause                    "/pause"                 3 seconds ago       Up 2 seconds                  0.0.0.0:80->80/tcp   k8s_POD.b28ffa81_php-test_default_173c9f54-a71c-11e6-a280-000c29066541_cac9bd60
9ca9a8d1bde1        192.168.174.131:5000/php-base:1.0   "/bin/sh -c '/usr/loc"   19 minutes ago      Exited (137) 2 seconds ago                         k8s_php-test.ac88419d_php-test_default_173c9f54-a71c-11e6-a280-000c29066541_fab25c8c
bec792435916        kubernetes/pause                    "/pause"                 19 minutes ago      Exited (2) 2 seconds ago                           k8s_POD.b28ffa81_php-test_default_173c9f54-a71c-11e6-a280-000c29066541_e1c8ba7b
[root@localhost ~]#
(2) Shut down the node server (stop the whole machine).
Only the node hosting the php-test pod is shut down, not all node servers. Querying on the master afterwards, the php-test pod's information can no longer be retrieved.
[root@localhost k8s]# kubectl get pods
NAME       READY     STATUS        RESTARTS   AGE
php-test   1/1       Terminating   1          25m
[root@localhost k8s]# kubectl get pod php-test -o wide
Error from server: pods "php-test" not found
[root@localhost k8s]#
After restarting the node server, docker ps -a shows no containers at all, and kubectl get pods on the master shows no pods either. So if a node server dies, its pods are destroyed and are not automatically recreated on another node. This problem is solved with a Replication Controller (RC), covered below.
A pod is deleted with kubectl delete pod NAME, for example kubectl delete pod php-test.
2. Replication Controller
A Replication Controller ensures that a specified number of pod replicas are running in the Kubernetes cluster at all times. If there are fewer replicas than specified, the Replication Controller starts new containers; if there are more, it kills the surplus ones to keep the count constant. A Replication Controller creates pods from a predefined pod template; once created, the pods have no further tie to the template, so the template can be modified without affecting the pods that already exist, and pods created by a Replication Controller can also be updated directly. The Replication Controller associates itself with the pods it created through its label selector, so changing a pod's labels removes that pod from the Replication Controller's control.
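For example, once the RC defined below is running, relabelling one of its pods takes that pod out from under the controller, which then starts a replacement to restore the replica count. This is a sketch: <pod-name> stands for whichever pod kubectl get pods shows, and name=debug is just an arbitrary new label value.

# detach one pod from the RC by changing the label its selector matches on;
# the RC immediately creates a new pod to get back to the desired count
kubectl label pod <pod-name> name=debug --overwrite
kubectl get pods -o wide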
Create the file php-controller.yaml on the master server. To avoid port conflicts when the same RC places more than one pod on a single node, hostPort is not specified in this file. replicas sets the number of pods.
apiVersion: v1
kind: ReplicationController
metadata:
  name: php-controller
  labels:
    name: php-controller
spec:
  replicas: 2
  selector:
    name: php-test-pod
  template:
    metadata:
      labels:
        name: php-test-pod
    spec:
      containers:
      - name: php-test
        image: 192.168.174.131:5000/php-base:1.0
        env:
        - name: ENV_TEST_1
          value: env_test_1
        - name: ENV_TEST_2
          value: env_test_2
        ports:
        - containerPort: 80
[root@localhost k8s]# kubectl create -f php-controller.yaml
replicationcontroller "php-controller" created
Query the RC:
[root@localhost k8s]# kubectl get rc
NAME             DESIRED   CURRENT   AGE
php-controller   2         2         1m
[root@localhost k8s]# kubectl get rc php-controller
NAME             DESIRED   CURRENT   AGE
php-controller   2         2         3m
[root@localhost k8s]# kubectl describe rc php-controller
Name:           php-controller
Namespace:      default
Image(s):       192.168.174.131:5000/php-base:1.0
Selector:       name=php-test-pod
Labels:         name=php-controller
Replicas:       2 current / 2 desired
Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  3m            3m              1       {replication-controller }                       Normal          SuccessfulCreate        Created pod: php-controller-8x5wp
  3m            3m              1       {replication-controller }                       Normal          SuccessfulCreate        Created pod: php-controller-ynzl7
[root@localhost k8s]#
Check which nodes the pods were scheduled on:
[root@localhost k8s]# kubectl get pods -o wide
NAME                   READY     STATUS    RESTARTS   AGE       NODE
php-controller-8x5wp   1/1       Running   0          5m        192.168.174.131
php-controller-ynzl7   1/1       Running   0          5m        192.168.174.130
[root@localhost k8s]#
Pods were created on both node servers, 192.168.174.131 and 192.168.174.130.
The number of pod replicas is controlled by replicas in the file, and Kubernetes also allows the number of pods to be changed dynamically with the kubectl scale command.
(1) Change the number of replicas to 3:
[root@localhost k8s]# kubectl scale rc php-controller --replicas=3
replicationcontroller "php-controller" scaled
[root@localhost k8s]# kubectl get pods -o wide
NAME                   READY     STATUS    RESTARTS   AGE       NODE
php-controller-0gkhx   1/1       Running   0          10s       192.168.174.131
php-controller-8x5wp   1/1       Running   0          11m       192.168.174.131
php-controller-ynzl7   1/1       Running   0          11m       192.168.174.130
[root@localhost k8s]#
An extra pod has appeared on the 192.168.174.131 server.
(2) Change the number of replicas to 1:
[root@localhost k8s]# kubectl scale rc php-controller --replicas=1
replicationcontroller "php-controller" scaled
[root@localhost k8s]# kubectl get pods -o wide
NAME                   READY     STATUS        RESTARTS   AGE       NODE
php-controller-0gkhx   1/1       Terminating   0          2m        192.168.174.131
php-controller-8x5wp   1/1       Running       0          12m       192.168.174.131
php-controller-ynzl7   1/1       Terminating   0          12m       192.168.174.130
Two of the pods are now in the Terminating state.
(3) Deleting with docker rm
If a container is removed with docker rm, a new container is started automatically after a short while, just as in the pod test above.
Setting replicas=0 deletes all pods under the RC. kubectl also provides stop and delete commands to remove an RC and all the pods it controls in one go. Note that simply deleting the RC object does not affect the pods it has already created.
kubectl delete rc rcName deletes the RC, but its pods are not affected.
kubectl delete -f rcfile (for example [root@localhost k8s]# kubectl delete -f php-controller.yaml) deletes the RC and all of its pods.
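So one way to remove both the RC and its pods without the manifest file, following the scale-to-zero approach described above, is (a sketch):

# scale the RC down to zero so it terminates all of its pods,
# then delete the now-empty RC object itself
kubectl scale rc php-controller --replicas=0
kubectl delete rc php-controller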
3. Service
Although every Pod is assigned its own IP address, that address disappears when the Pod is destroyed. This raises a question: if a group of Pods forms a cluster that provides a service, how do clients access them?
Kubernetes' Service is the core concept that solves this problem.
A Service can be seen as the external access point of a group of Pods that provide the same service. Which Pods a Service covers is defined by its Label Selector.
Looking back at the example above, the php-test Pod runs with 2 replicas. These 2 Pods are interchangeable from the front end's point of view, so the front end does not care which backend replica serves it; and when the backend php-test Pods change (for example the number of replicas changes, or a node dies and the Pod is recreated on another node), the front end should not have to track those changes. The Service is the abstraction that provides this decoupling.
In the RC example above, the RC and all of its pods have already been deleted. There is no required ordering between a Service and an RC; it is just that if Pods are created before the Service, some of the Service's information will not be written into those Pods.
[root@localhost k8s]# ls
php-controller.yaml  php-pod.yaml  php-service.yaml
[root@localhost k8s]# kubectl get rc
[root@localhost k8s]# kubectl get pods
[root@localhost k8s]# kubectl create -f php-controller.yaml
replicationcontroller "php-controller" created
[root@localhost k8s]# kubectl get rc
NAME             DESIRED   CURRENT   AGE
php-controller   2         2         11s
[root@localhost k8s]# kubectl get pods -o wide
NAME                   READY     STATUS    RESTARTS   AGE       NODE
php-controller-cntom   1/1       Running   0          28s       192.168.174.131
php-controller-kn55k   1/1       Running   0          28s       192.168.174.130
[root@localhost k8s]#
php-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: php-service
  labels:
    name: php-service
spec:
  ports:
  - port: 8081
    targetPort: 80
    protocol: TCP
  selector:
    name: php-test-pod
Create the Service and query it:
[root@localhost k8s]# kubectl create -f php-service.yaml
service "php-service" created
[root@localhost k8s]# kubectl get service
NAME          CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes    10.254.0.1       <none>        443/TCP    6d
php-service   10.254.165.216   <none>        8081/TCP   29s
[root@localhost k8s]# kubectl get services
NAME          CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes    10.254.0.1       <none>        443/TCP    6d
php-service   10.254.165.216   <none>        8081/TCP   1m
[root@localhost k8s]# kubectl get endpoints
NAME          ENDPOINTS                       AGE
kubernetes    192.168.174.128:6443            6d
php-service   172.17.42.2:80,172.17.65.3:80   1m
[root@localhost k8s]#
kubectl get endpoints shows the container addresses of the two Pods behind php-service. These two addresses, 172.17.42.2:80 and 172.17.65.3:80, are only reachable inside the cluster network (from the nodes, which have flannel installed).
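A quick check from one of the nodes (a sketch; the addresses and the info.php path are the ones from this walkthrough, and it assumes flannel routes the pod subnets between nodes and kube-proxy is running on the node):

# from a node: the pod IPs answer directly ...
curl http://172.17.42.2/info.php
# ... and so does the Service's cluster IP on the service port
curl http://10.254.165.216:8081/info.php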
[root@localhost k8s]# kubectl get service
NAME          CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes    10.254.0.1       <none>        443/TCP    6d
php-service   10.254.165.216   <none>        8081/TCP   17m
[root@localhost k8s]#
3.2 Pod IP addresses and the Service's Cluster IP address
The IP of php-service is 10.254.165.216. This is the Service's Cluster IP address, a virtual IP inside the Kubernetes system that is allocated dynamically by the system, whereas a Pod's IP address is allocated by the Docker daemon from the docker0 bridge's address range. The Service's Cluster IP is relatively stable compared with Pod IPs: it is assigned when the Service is created and does not change until the Service is destroyed. Pods, on the other hand, have a short lifecycle in a Kubernetes cluster; they may be destroyed and recreated by a ReplicationController, and a newly created Pod gets a new IP address.
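Where that 10.254.x.x range comes from: on this kind of CentOS setup, the API server's service IP pool is usually configured in /etc/kubernetes/apiserver, the same file edited earlier. The exact variable and value below are an assumption about this environment, shown for orientation only:

# /etc/kubernetes/apiserver -- the pool the Cluster IPs are allocated from
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"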
Because the IP a Service obtains from the Cluster IP range can only be reached inside the cluster, other Pods can access it without any trouble; but if this Service is a front-end service that should serve clients outside the cluster, it needs a publicly reachable IP.
Kubernetes supports two Service types for exposing a service externally: NodePort and LoadBalancer.
When a Service is defined with spec.type=NodePort and a value for spec.ports.nodePort, the system opens a real port with that number on every node in the cluster, so any client that can reach a Node can reach the Service through that port.
php-nodePort-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: php-nodeport-service
  labels:
    name: php-nodeport-service
spec:
  type: NodePort
  ports:
  - port: 8081
    targetPort: 80
    protocol: TCP
    nodePort: 30001
  selector:
    name: php-test-pod
Delete the old php-service first:
[root@localhost k8s]# kubectl delete service php-service
service "php-service" deleted
[root@localhost k8s]# kubectl create -f php-nodePort-service.yaml
The Service "php-nodePort-service" is invalid.
metadata.name: Invalid value: "php-nodePort-service": must be a DNS 952 label (at most 24 characters, matching regex [a-z]([-a-z0-9]*[a-z0-9])?): e.g. "my-name"
[root@localhost k8s]# kubectl create -f php-nodePort-service.yaml
You have exposed your service on an external port on all nodes in your
cluster.  If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30001) to serve traffic.
See http://releases.k8s.io/release-1.2/docs/user-guide/services-firewalls.md for more details.
service "php-nodeport-service" created
The first attempt was rejected because metadata.name initially used the mixed-case value php-nodePort-service; Service names must be lowercase DNS labels, so after changing the name to php-nodeport-service (as shown in the YAML above), the second create succeeds.
[root@localhost k8s]# kubectl get pods -o wide
NAME                   READY     STATUS    RESTARTS   AGE       NODE
php-controller-2bvdq   1/1       Running   0          21m       192.168.174.130
php-controller-42muy   1/1       Running   0          21m       192.168.174.131
Now the service can be reached at http://192.168.174.130:30001/info.php or http://192.168.174.131:30001/info.php.
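On each node it is kube-proxy that actually opens and holds the NodePort. A quick way to check this on a node (a sketch; it assumes net-tools is installed, and what shows up as the listening process depends on the kube-proxy mode in use):

# on any node: port 30001 should be open, typically held by kube-proxy
netstat -tlnp | grep 30001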
If the cloud provider supports an external load balancer, the Service can be defined with spec.type=LoadBalancer, specifying the load balancer's IP address as well. This type also requires the Service's nodePort and clusterIP to be specified.
apiVersion: v1
kind: Service
metadata:
  name: php-loadbalancer-service
  labels:
    name: php-loadbalancer-service
spec:
  type: LoadBalancer
  clusterIP: 10.254.165.216
  selector:
    name: php-test-pod
  ports:
  - port: 8081
    targetPort: 80
    protocol: TCP
    nodePort: 30001
status:
  loadBalancer:
    ingress:
    - ip: 192.168.174.127    # note: this is the load balancer's IP address
status.loadBalancer.ingress.ip is set to 192.168.174.127, the IP address of the load balancer provided by the cloud provider.
After that, requests to this Service are forwarded through the LoadBalancer to the backend Pods; how the load is actually distributed depends on the cloud provider's LoadBalancer implementation.
If a Pod exposes several ports through a Service, give each port a name so that the Endpoints are not ambiguous because of duplicate names, for example (a complete sketch follows this fragment):
selector:
  name: php-test-pod
ports:
- name: p1
  port: 8081
  targetPort: 80
  protocol: TCP
- name: p2
  port: 8082
  targetPort: 22
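Put together as a full manifest, this might look as follows. This is a sketch: php-multiport-service and the second targetPort 22 are illustrative values following the fragment above, not something deployed earlier in this walkthrough.

apiVersion: v1
kind: Service
metadata:
  name: php-multiport-service      # hypothetical name for this example
spec:
  selector:
    name: php-test-pod
  ports:
  - name: p1                       # named ports keep the Endpoints unambiguous
    port: 8081
    targetPort: 80
    protocol: TCP
  - name: p2
    port: 8082
    targetPort: 22
    protocol: TCP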