k8s

k8s component overview

(diagram: Kubernetes architecture and components)
  • etcd stores the state of the entire cluster
  • apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration and discovery
  • controller manager maintains the cluster state, e.g. failure detection, automatic scaling and rolling updates
  • scheduler handles resource scheduling, placing Pods onto appropriate machines according to the configured scheduling policies
  • kubelet manages the container lifecycle on each node and also manages Volumes (CVI) and networking (CNI)
  • kube-proxy provides in-cluster service discovery and load balancing for Services
  • Ingress Controller provides an external entry point for services
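As a quick way to see most of these components on a running kubeadm-built cluster, the control-plane pieces run as pods in the kube-system namespace (a minimal check; exact names vary by setup):

kubectl get pods -n kube-system -o wide
# typically lists etcd-<master>, kube-apiserver-<master>, kube-controller-manager-<master>,
# kube-scheduler-<master>, kube-proxy-*, coredns-* and the CNI pods (e.g. kube-flannel-*)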

k8s concepts

Namespace

A Namespace is an abstract grouping of resources and objects; for example, it can be used to divide the objects in a system into different project or user groups. Common resources such as pods, services, replication controllers and deployments all belong to a namespace (default by default), while nodes, persistentVolumes and other cluster-scoped resources do not belong to any namespace.

Namespaces are commonly used to isolate different users; for example, the services that ship with Kubernetes generally run in the kube-system namespace.

(1) Create directly from the command line
$ kubectl create namespace new-namespace

(2) Create from a file
$ cat my-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: new-namespace

$ kubectl create -f ./my-namespace.yaml
-----------------------------------------------------------
Delete a namespace
kubectl delete namespaces new-namespace
1. Deleting a namespace automatically deletes all resources that belong to it.
2. The default and kube-system namespaces cannot be deleted.
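Once a namespace exists, resources are created in it and queried from it with the -n/--namespace flag. A small illustrative example (the nginx pod here is hypothetical):

kubectl run nginx --image=nginx -n new-namespace
kubectl get pods -n new-namespace
kubectl get pods    # queries the default namespace, so the pod above does not appear here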

Installing the k8s packages with yum

yum install -y kubelet-1.19.0 kubectl-1.19.0 kubeadm-1.19.0
  1. Initialize the control plane on the master node
kubeadm init --kubernetes-version=1.19.0  --apiserver-advertise-address=10.0.3.166 --image-repository registry.aliyuncs.com/google_containers --service-cidr=192.168.0.1/16 --pod-network-cidr=10.10.0.0/16 --ignore-preflight-errors=Swap,NumCPU
  • During initialization you may hit an error saying the cgroup driver must be systemd. Fix it as follows:
Check the current cgroup driver with docker info | grep Cgroup
Edit /etc/docker/daemon.json and add the following:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Then run systemctl restart docker

  • Initialization may also complain that swap is still enabled:
    running with swap on is not supported. Please disable swap. Fix it as follows:
swapoff -a && sed -i '/swap/d' /etc/fstab
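To confirm swap is really off (and will stay off after a reboot thanks to the fstab edit), a quick check might look like:

free -m            # the Swap line should show 0 total
swapon --show      # should print nothing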

On success, initialization prints something like the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  1. mkdir -p $HOME/.kube
  2. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  3. sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
4. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.3.166:6443 --token 1laxen.38bzx8hzul2ikbfk \
    --discovery-token-ca-cert-hash sha256:2df1e08577b6f8671bb19a7eaa2bdb9142040d370dae282d94b3001cf61619ab
  1. Run statements 1-3 from the success output above; statement 4 is the k8s network setup.
    Once steps 1-3 are done, check how the master's control-plane components are running.
  2. Run kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}
controller-manager and scheduler report connection problems; check whether local ports 10252 and 10251 are listening.
Edit the kube-controller-manager.yaml and kube-scheduler.yaml manifests under /etc/kubernetes/manifests/
and remove the --port=0 flag from each,
then restart the kubelet with systemctl restart kubelet
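If you prefer to script the change, something like the following should work (a sketch; it assumes the flag appears as a "- --port=0" line in each manifest, which is the default for a kubeadm 1.19 install):

sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
systemctl restart kubelet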

After the restart, run kubectl get cs again; it now shows:


(screenshot: controller-manager, scheduler and etcd-0 all report Healthy)

Here flannel is used to set up the k8s network.

The flannel installation steps are as follows

  1. Fetch kube-flannel.yml
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

2. Run kubectl apply -f kube-flannel.yml (see the note on the pod CIDR after this list)
3. Check with kubectl get pod -n kube-system
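Note that the Network value in the kube-flannel.yml ConfigMap defaults to 10.244.0.0/16; it should match the --pod-network-cidr passed to kubeadm init (10.10.0.0/16 in this setup), so adjust net-conf.json before applying. The relevant part of the ConfigMap looks roughly like this:

  net-conf.json: |
    {
      "Network": "10.10.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }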



Joining worker nodes

  1. Run the following on node1 and node2
kubeadm join 10.0.3.166:6443 --token 1laxen.38bzx8hzul2ikbfk    --discovery-token-ca-cert-hash sha256:2df1e08577b6f8671bb19a7eaa2bdb9142040d370dae282d94b3001cf61619ab

If the token on the master has expired

# generate a new token
[root@master k8s]# kubeadm token create 
bjjq4p.4c8ntpy20aoqptmi
# compute the discovery-token-ca-cert-hash
[root@master k8s]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
2df1e08577b6f8671bb19a7eaa2bdb9142040d370dae282d94b3001cf61619ab
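Alternatively, kubeadm can print a complete, ready-to-paste join command in one step:

kubeadm token create --print-join-command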
  2. After joining, list the nodes:
[root@master k8s]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
cdh2     NotReady   <none>   38s     v1.19.0
cdh3     Ready      <none>   20m     v1.19.0
master   Ready      master   7d21h   v1.19.0

Deploying the ingress-controller

  1. wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
  2. Edit part of mandatory.yaml
    The relevant excerpt is shown below; add hostNetwork: true so the ports are exposed on the host
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      hostNetwork: true # add this so ports 80/443 are exposed on the host
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
  3. Run kubectl apply -f mandatory.yaml
  4. Check the deployment
[root@master k8s]# kubectl get pod -o wide -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE     IP           NODE   NOMINATED NODE   READINESS GATES
nginx-ingress-controller-7d4544b644-4d974   1/1     Running   0          6m52s   10.0.3.164   cdh2   <none>           <none>
  5. On the node where the controller is running, check the port bindings
[root@cdh2 nginx]# netstat -tnlp | grep nginx
tcp        0      0 127.0.0.1:10246         0.0.0.0:*               LISTEN      115141/nginx: maste 
tcp        0      0 127.0.0.1:10247         0.0.0.0:*               LISTEN      115141/nginx: maste 
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      115141/nginx: maste 
tcp        0      0 0.0.0.0:8181            0.0.0.0:*               LISTEN      115141/nginx: maste 
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      115141/nginx: maste 
tcp        0      0 127.0.0.1:10245         0.0.0.0:*               LISTEN      115108/nginx-ingres 
tcp6       0      0 :::10254                :::*                    LISTEN      115108/nginx-ingres 
tcp6       0      0 :::80                   :::*                    LISTEN      115141/nginx: maste 
tcp6       0      0 :::8181                 :::*                    LISTEN      115141/nginx: maste 
tcp6       0      0 :::443                  :::*                    LISTEN      115141/nginx: maste

Because hostNetwork is configured, nginx now listens on ports 80/443/8181 on the node host itself (8181 is the default backend that nginx-controller configures). As long as the node has a public IP, you can point a domain name at it and expose services externally. For high availability, deploy the controller on several nodes and put an LVS + keepalived load balancer in front. Another benefit of hostNetwork: LVS in DR mode does not support port mapping, so using a NodePort instead would mean exposing non-standard ports, which is awkward to manage.
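With the controller listening on the host, traffic is actually routed by Ingress resources. A minimal sketch for k8s 1.19 with nginx-ingress 0.30, assuming a hypothetical Service named my-service on port 80 and a test hostname myapp.example.com:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80

After kubectl apply -f, requests for http://myapp.example.com that resolve to the ingress node's IP are proxied to my-service.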

  6. You can also use a DaemonSet to deploy the ingress-controller onto specific nodes; a few changes are needed. First give the node that should run nginx-ingress a dedicated label; here the test deployment goes to the "node-1" node.
Label node-1 with isIngress="true":
kubectl label node node-1 isIngress="true"
Then set the nodeSelector in the DaemonSet to isIngress: "true"
spec:
  nodeSelector:
    isIngress: "true"
  7. Edit mandatory.yaml again: find the kind: Deployment section and change it as follows
apiVersion: apps/v1
kind: DaemonSet # change Deployment to DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  
#  replicas: 1 # remove replicas (a DaemonSet has no replicas field)
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      hostNetwork: true # add this so ports 80/443 are exposed on the host
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      # select the node with the matching label
      nodeSelector:
        isIngress: "true"  # changed from kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
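After applying the DaemonSet version, a quick way to confirm the controller landed on the labeled node:

kubectl get ds -n ingress-nginx
kubectl get pod -n ingress-nginx -o wide    # the NODE column should show the node labeled isIngress=true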

Deploying metrics-server

metrics-server lets k8s monitor per-pod resource usage (CPU, memory, etc.), which is what horizontal autoscaling is based on.

  1. Create metric-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  namespace: kube-system
  name: metrics-server

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods","nodes"]
  verbs: ["get","list","watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    k8s-app: metrics-server
rules:
- apiGroups: [""]
  resources: ["pods","nodes","nodes/stats","namespaces","configmaps"]
  verbs: ["get","list","watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
  labels:
    k8s-app: metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
  labels:
    k8s-app: metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    k8s-app: metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
  2. Create metric-api.yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  versionPriority: 100
  3. Create metric-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: true
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: bitnami/metrics-server:0.4.1
        imagePullPolicy: IfNotPresent
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
          - --kubelet-use-node-status-port
          - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        ports:
        - name: https
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        resources:
          limits:
            memory: 1Gi
            cpu: 1000m
          requests:
            memory: 1Gi
            cpu: 1000m
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
        - name: localtime
          readOnly: true
          mountPath: /etc/localtime
      volumes:
      - name: tmp-dir
        emptyDir: {}
      - name: localtime
        hostPath:
          type: File
          path: /etc/localtime
      nodeSelector:
        kubernetes.io/os: linux
  4. Apply them in turn: kubectl apply -f metric-rbac.yaml, kubectl apply -f metric-api.yaml, kubectl apply -f metric-deploy.yaml
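Before checking pod metrics, it can help to confirm the aggregated API registered correctly (a quick sanity check; output details will vary):

kubectl get apiservice v1beta1.metrics.k8s.io    # AVAILABLE should become True
kubectl get pod -n kube-system | grep metrics-server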
  5. Run kubectl top po; output like the following means it is working
[root@master metrics-server]# kubectl top po
NAME                                CPU(cores)   MEMORY(bytes)   
nginx-deployment-77c6777f7b-qvpx2   0m           1Mi             
nginx-node                          0m           3Mi
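With metrics-server reporting CPU usage, the Horizontal Pod Autoscaler can act on it. A minimal sketch against the nginx-deployment shown above (the targets are illustrative, and the deployment's containers need CPU requests set for the percentage target to work):

kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=5
kubectl get hpa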

Dashboard installation

  1. wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.5/aio/deploy/recommended.yaml
    Note the version number here; use the one that matches your cluster.
  2. kubectl apply -f recommended.yaml

3. Check whether the install succeeded with kubectl get pods --all-namespaces

[root@master k8s]# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
kube-system            coredns-6d56c8448f-tq4sb                     1/1     Running            0          7d22h
kube-system            coredns-6d56c8448f-x4prb                     1/1     Running            0          7d22h
kube-system            etcd-master                                  1/1     Running            0          7d22h
kube-system            kube-apiserver-master                        1/1     Running            0          7d22h
kube-system            kube-controller-manager-master               1/1     Running            8          7d20h
kube-system            kube-flannel-ds-f2m7q                        1/1     Running            0          118m
kube-system            kube-flannel-ds-pxhz6                        1/1     Running            0          98m
kube-system            kube-flannel-ds-r95k6                        1/1     Running            0          6d5h
kube-system            kube-proxy-k6jpw                             1/1     Running            0          7d22h
kube-system            kube-proxy-nxdbf                             1/1     Running            0          118m
kube-system            kube-proxy-v2vfg                             1/1     Running            0          98m
kube-system            kube-scheduler-master                        1/1     Running            8          7d20h
kubernetes-dashboard   dashboard-metrics-scraper-79c5968bdc-rvj7x   1/1     Running            0          12m
kubernetes-dashboard   kubernetes-dashboard-6f65cb5c64-r27w6        0/1     CrashLoopBackOff   7          12m
The dashboard pod is failing, so check its logs:
kubectl logs kubernetes-dashboard-6f65cb5c64-r27w6 --tail=100 -n kubernetes-dashboard
which report the following error:
2021/03/31 08:41:18 Starting overwatch
2021/03/31 08:41:18 Using namespace: kubernetes-dashboard
2021/03/31 08:41:18 Using in-cluster config to connect to apiserver
2021/03/31 08:41:18 Using secret token for csrf signing
2021/03/31 08:41:18 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get "https://192.168.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 192.168.0.1:443: connect: connection timed out

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0003b7840)
    /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x413
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
    /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000468100)
    /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:502 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000468100)
    /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:470 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
    /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:551
main.main()
    /home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x21c
The pod cannot reach the api-server, so deploy the dashboard on the master instead; the manifest needs the following changes
spec:
      nodeName: master # pin this pod to the master
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6

spec:
      nodeName: master # pin this pod to the master
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.5
          imagePullPolicy: Always

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32500 # port exposed on the host
  type: NodePort # change to NodePort so the dashboard can be reached from a browser; otherwise it is not accessible externally
  selector:
    k8s-app: kubernetes-dashboard

After editing the configuration, remove the old dashboard pods and reinstall (re-apply recommended.yaml):
kubectl -n kubernetes-dashboard delete $(sudo kubectl -n kubernetes-dashboard get pod -o name | grep dashboard)
You can also list the services with kubectl get svc --all-namespaces:
[root@master k8s]# kubectl get svc --all-namespaces
NAMESPACE              NAME                        TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   192.168.0.1       <none>        443/TCP                  7d23h
kube-system            kube-dns                    ClusterIP   192.168.0.10      <none>        53/UDP,53/TCP,9153/TCP   7d23h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   192.168.99.230    <none>        8000/TCP                 42m
kubernetes-dashboard   kubernetes-dashboard        NodePort    192.168.253.207   <none>        443:32500/TCP            42m
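The dashboard is then reachable in a browser at https://<node-ip>:32500. Logging in needs a bearer token; a common approach (a sketch, the admin-user name is only a convention) is to bind a ServiceAccount to cluster-admin and read its token:

kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')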
Note: to deploy the ingress-controller to a specific node with a DaemonSet, a few changes are needed: first label the node that should run nginx-ingress; here we assume it is deployed on the "cdh2" node.
kubectl label node cdh2 isIngress="true"
This pins nginx-ingress to a chosen node, which makes it easy to add an LVS load-balancing layer in front of it later.
最后編輯于
?著作權歸作者所有,轉載或內容合作請聯系作者
平臺聲明:文章內容(如有圖片或視頻亦包括在內)由作者上傳并發布,文章內容僅代表作者本人觀點,簡書系信息發布平臺,僅提供信息存儲服務。

推薦閱讀更多精彩內容