Installing a Highly Available k8s Master Cluster

Host          Role        Components
172.18.6.101  K8S Master  kubelet, kubectl, cni, etcd
172.18.6.102  K8S Master  kubelet, kubectl, cni, etcd
172.18.6.103  K8S Master  kubelet, kubectl, cni, etcd
172.18.6.104  K8S Worker  kubelet, cni
172.18.6.105  K8S Worker  kubelet, cni
172.18.6.106  K8S Worker  kubelet, cni

Installing etcd

To keep the k8s masters highly available, running the etcd cluster in containers is not recommended: a container can die at any moment, while each etcd member's service is stateful. We therefore deploy etcd from binaries here, and recommend at least a 3-node etcd cluster in production. For the detailed steps, see "setting up an etcd cluster as a local service".
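As a sketch of that binary deployment, a minimal systemd unit for one etcd member might look like the following. The member name, data directory and plain-HTTP client/peer URLs are assumptions for illustration; repeat the unit on each node with its own IP, and note that the apiserver manifest below expects a client endpoint on 127.0.0.1:2379.

# /etc/systemd/system/etcd.service (member on 172.18.6.101; sketch only)
[Unit]
Description=etcd key-value store
Documentation=https://github.com/coreos/etcd

[Service]
ExecStart=/usr/bin/etcd \
  --name etcd0 \
  --data-dir /var/lib/etcd \
  --listen-client-urls http://127.0.0.1:2379,http://172.18.6.101:2379 \
  --advertise-client-urls http://172.18.6.101:2379 \
  --listen-peer-urls http://172.18.6.101:2380 \
  --initial-advertise-peer-urls http://172.18.6.101:2380 \
  --initial-cluster etcd0=http://172.18.6.101:2380,etcd1=http://172.18.6.102:2380,etcd2=http://172.18.6.103:2380 \
  --initial-cluster-state new
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target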

Installing the required components and certificates

CA certificate

Following the CA section of "certificate generation in kubernetes", create the CA certificate, then place ca-key.pem and ca.pem under /etc/kubernetes/ssl on every node of the k8s cluster.
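As an illustration only (the referenced document is authoritative), a self-signed CA could be produced with openssl roughly like this; the subject CN and validity period are assumptions:

```shell
CERT_DIR="/tmp/kubernetes/ssl"   # use /etc/kubernetes/ssl on the nodes
mkdir -p "$CERT_DIR"
# 2048-bit CA key and a self-signed CA certificate valid for 10 years
openssl genrsa -out "$CERT_DIR/ca-key.pem" 2048
openssl req -x509 -new -nodes -key "$CERT_DIR/ca-key.pem" \
  -days 3650 -out "$CERT_DIR/ca.pem" -subj "/CN=kube-ca"
```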

Worker certificate creation

Following the worker-certificate section of "certificate generation in kubernetes", generate a certificate for each worker node, and place the certificate for each IP under /etc/kubernetes/ssl on the corresponding worker node.
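As a self-contained sketch of those steps (the referenced document is authoritative), the following signs a worker certificate with a SAN config. The worker IP, CN and validity are example values; the throwaway CA exists only so the sketch runs standalone — in practice sign with the cluster CA from the previous step:

```shell
CERT_DIR="/tmp/worker-ssl"          # use /etc/kubernetes/ssl on the worker
mkdir -p "$CERT_DIR" && cd "$CERT_DIR"

# Throwaway CA so the sketch is self-contained; in practice reuse the
# cluster CA (ca.pem / ca-key.pem) created earlier
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 3650 -out ca.pem -subj "/CN=kube-ca"

# SAN config for this worker; the IP is an example for 172.18.6.104
cat > worker-openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 172.18.6.104
EOF

# Key, CSR, and CA-signed certificate carrying the v3_req extensions
openssl genrsa -out worker-key.pem 2048
openssl req -new -key worker-key.pem -out worker.csr \
  -subj "/CN=worker" -config worker-openssl.cnf
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
```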

Installing the kubelet.conf configuration

Create /etc/kubernetes/kubelet.conf with the following content:

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://[load-balancer IP]:[apiserver port]
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context


Installing the CNI plugins

Download the required CNI binaries from the containernetworking cni project and place them under /opt/cni/bin on every node of the k8s cluster.

An rpm package for one-step installation will be provided later.
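For example, the binaries can be fetched and unpacked as below; the release version and tarball name are assumptions, so check the project's releases page for the asset you actually need:

```shell
CNI_VERSION="v0.5.2"                # hypothetical version; pick the release you need
CNI_BIN_DIR="/tmp/opt/cni/bin"      # use /opt/cni/bin on the real nodes
mkdir -p "$CNI_BIN_DIR"
# Download and unpack the CNI plugin binaries (requires network access)
curl -fsSL "https://github.com/containernetworking/cni/releases/download/${CNI_VERSION}/cni-amd64-${CNI_VERSION}.tgz" \
  | tar -xz -C "$CNI_BIN_DIR" \
  || echo "download failed; fetch the tarball manually and unpack it into $CNI_BIN_DIR"
```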

Deploying the kubelet service

Note: an rpm package for one-step installation will be provided later.

Place the kubelet binary of the matching version under /usr/bin on every node of the k8s cluster.

Create /etc/systemd/system/kubelet.service with the following content:

# /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.100.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.aliyuncs.com/shenshouer/pause-amd64:3.0"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

Create the following directory layout:

/etc/kubernetes/
|-- kubelet.conf
|-- manifests
`-- ssl
    |-- ca-key.pem
    |-- ca.pem
    |-- worker.csr
    |-- worker-key.pem
    |-- worker-openssl.cnf
    `-- worker.pem


Installing the master components

Configuring the load balancer

Configure LVS so that the VIP 172.18.6.254 points to the backends 172.18.6.101, 172.18.6.102 and 172.18.6.103. For a simpler setup, nginx can be used as a TCP layer-4 load balancer instead.
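For the nginx variant, a minimal TCP (layer-4) stream configuration might look like this; the backend port 6443 matches the --secure-port used in the apiserver manifest, while the listen port is an assumption:

# /etc/nginx/nginx.conf (stream block only; sketch, not a full config)
stream {
    upstream kube_apiserver {
        server 172.18.6.101:6443;
        server 172.18.6.102:6443;
        server 172.18.6.103:6443;
    }
    server {
        listen 6443;
        proxy_pass kube_apiserver;
    }
}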

Certificate generation

openssl.cnf content:

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
# load-balancer domain that may be used
DNS.5 = test.example.com.cn
IP.1 = 10.96.0.1
# IPs of the three masters
IP.2 = 172.18.6.101
IP.3 = 172.18.6.102
IP.4 = 172.18.6.103
# VIP of the LVS load balancer
IP.5 = 172.18.6.254


For the detailed steps, see the "Master certificate generation" and "Worker certificate generation" parts of "certificate generation in kubernetes". Place the generated certificates at the corresponding paths on the three master nodes.
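As an illustrative sketch of those steps (again, the referenced document is authoritative), signing an apiserver certificate against the CA with an openssl.cnf like the one above could look as follows; the CN, validity, and the trimmed-down SAN list are assumptions, and the throwaway CA only makes the sketch self-contained:

```shell
CERT_DIR="/tmp/master-ssl"          # use /etc/kubernetes/ssl on the masters
mkdir -p "$CERT_DIR" && cd "$CERT_DIR"

# Throwaway CA for the sketch; reuse the real cluster CA in practice
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 3650 -out ca.pem -subj "/CN=kube-ca"

# Minimal stand-in for the openssl.cnf shown above (full SAN list trimmed)
cat > openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
IP.1 = 10.96.0.1
IP.2 = 172.18.6.101
EOF

# Key, CSR, and CA-signed certificate carrying the v3_req extensions
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr \
  -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
```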

Installing other components

Place the following three files under /etc/kubernetes/manifests on each master node.

kube-apiserver.manifest:

# /etc/kubernetes/manifests/kube-apiserver.manifest
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-apiserver",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-apiserver",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "k8s",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      },
      {
        "name": "certs",
        "hostPath": {
          "path": "/etc/ssl/certs"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-apiserver",
        "image": "registry.aliyuncs.com/shenshouer/kube-apiserver:v1.5.2",
        "command": [
          "kube-apiserver",
          "--insecure-bind-address=127.0.0.1",
          "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
          "--service-cluster-ip-range=10.96.0.0/12",
          "--service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem",
          "--client-ca-file=/etc/kubernetes/ssl/ca.pem",
          "--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem",
          "--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem",
          "--secure-port=6443",
          "--allow-privileged",
          "--advertise-address=[current master node IP]",
          "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
          "--anonymous-auth=false",
          "--etcd-servers=http://127.0.0.1:2379"
        ],
        "resources": {
          "requests": {
            "cpu": "250m"
          }
        },
        "volumeMounts": [
          {
            "name": "k8s",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          },
          {
            "name": "certs",
            "mountPath": "/etc/ssl/certs"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 8080,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}

kube-controller-manager.manifest:

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-controller-manager",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-controller-manager",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "k8s",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      },
      {
        "name": "certs",
        "hostPath": {
          "path": "/etc/ssl/certs"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-controller-manager",
        "image": "registry.aliyuncs.com/shenshouer/kube-controller-manager:v1.5.2",
        "command": [
          "kube-controller-manager",
          "--address=127.0.0.1",
          "--leader-elect",
          "--master=127.0.0.1:8080",
          "--cluster-name=kubernetes",
          "--root-ca-file=/etc/kubernetes/ssl/ca.pem",
          "--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem",
          "--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem",
          "--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem",
          "--insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap",
          "--allocate-node-cidrs=true",
          "--cluster-cidr=10.244.0.0/16"
        ],
        "resources": {
          "requests": {
            "cpu": "200m"
          }
        },
        "volumeMounts": [
          {
            "name": "k8s",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          },
          {
            "name": "certs",
            "mountPath": "/etc/ssl/certs"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 10252,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}


kube-scheduler.manifest:

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-scheduler",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-scheduler",
      "tier": "control-plane"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "kube-scheduler",
        "image": "registry.aliyuncs.com/shenshouer/kube-scheduler:v1.5.2",
        "command": [
          "kube-scheduler",
          "--address=127.0.0.1",
          "--leader-elect",
          "--master=127.0.0.1:8080"
        ],
        "resources": {
          "requests": {
            "cpu": "100m"
          }
        },
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 10251,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}

Installing the remaining components

Installing kube-proxy

On any master, run kubectl create -f kube-proxy-ds.yaml, where kube-proxy-ds.yaml contains:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    component: kube-proxy
    k8s-app: kube-proxy
    kubernetes.io/cluster-service: "true"
    name: kube-proxy
    tier: node
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      component: kube-proxy
      k8s-app: kube-proxy
      kubernetes.io/cluster-service: "true"
      name: kube-proxy
      tier: node
  template:
    metadata:
      labels:
        component: kube-proxy
        k8s-app: kube-proxy
        kubernetes.io/cluster-service: "true"
        name: kube-proxy
        tier: node
    spec:
      containers:
      - command:
        - kube-proxy
        - --kubeconfig=/run/kubeconfig
        - --cluster-cidr=10.244.0.0/16
        image: registry.aliyuncs.com/shenshouer/kube-proxy:v1.5.2
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /var/run/dbus
          name: dbus
        - mountPath: /run/kubeconfig
          name: kubeconfig
        - mountPath: /etc/kubernetes/ssl
          name: ssl
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /etc/kubernetes/kubelet.conf
        name: kubeconfig
      - hostPath:
          path: /var/run/dbus
        name: dbus
      - hostPath:
          path: /etc/kubernetes/ssl
        name: ssl


Installing the network components

On any master, run kubectl apply -f kube-flannel.yaml, where kube-flannel.yaml contains the following. Note: if running in VMs started by vagrant, change the flanneld startup arguments so that --iface points to the actual communication interface.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: kube-system
  name: kube-flannel-cfg
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "ipMasq": true,
        "bridge": "cbr0",
        "hairpinMode": true,
        "forceAddress": true,
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: kube-flannel-ds
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: registry.aliyuncs.com/shenshouer/flannel:v0.7.0
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth0" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: registry.aliyuncs.com/shenshouer/flannel:v0.7.0
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg


Deploying DNS

On any master, run kubectl create -f skydns.yaml, where skydns.yaml contains:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubedns
        image: registry.aliyuncs.com/shenshouer/kubedns-amd64:1.9
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz-kubedns
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-map=kube-dns
        # This should be set to v=2 only after the new image (cut from 1.5) has
        # been released, otherwise we will flood the logs.
        - --v=0
        - --federations=myfederation=federation.test
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
      - name: dnsmasq
        image: registry.aliyuncs.com/shenshouer/kube-dnsmasq-amd64:1.4
        livenessProbe:
          httpGet:
            path: /healthz-dnsmasq
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        - --log-facility=-
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 10Mi
      - name: dnsmasq-metrics
        image: registry.aliyuncs.com/shenshouer/dnsmasq-metrics-amd64:1.0
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 10Mi
      - name: healthz
        image: registry.aliyuncs.com/shenshouer/exechealthz-amd64:1.2
        resources:
          limits:
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - --url=/healthz-dnsmasq
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
        - --url=/healthz-kubedns
        - --port=8080
        - --quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.

Installing the worker nodes

Installing Docker

Create the /etc/kubernetes/ directory with the following layout:

|-- kubelet.conf
|-- manifests
`-- ssl
    |-- ca-key.pem
    |-- ca.pem
    |-- ca.srl
    |-- worker.csr
    |-- worker-key.pem
    |-- worker-openssl.cnf
    `-- worker.pem

Create the /etc/kubernetes/kubelet.conf configuration; see the kubelet.conf configuration section.

Create /etc/kubernetes/ssl; for certificate creation see the worker certificate section.

Create /etc/kubernetes/manifests.

Create /opt/cni/bin; see the cni installation steps.

Install kubelet; see the kubelet installation section, then run:

systemctl enable kubelet && systemctl restart kubelet && journalctl -fu kubelet
