Deploying a Kubernetes Cluster on CentOS from Binaries

I. Overview

Kubernetes 1.13 has been released, the fourth and final release of 2018. It is one of the shortest release cycles to date (ten weeks after the previous version) and focuses on stability and extensibility; three major features around storage and cluster lifecycle have graduated to general availability.

The core features of Kubernetes 1.13 are: simplified cluster management with kubeadm, the Container Storage Interface (CSI), and CoreDNS as the default DNS server.

Simplified cluster management with kubeadm

Most people who work with Kubernetes regularly have used kubeadm at some point. It is the key tool for managing the cluster lifecycle, covering everything from creation to configuration to upgrades. With the 1.13 release, kubeadm has reached GA and is officially generally available. kubeadm handles bootstrapping production clusters on existing hardware and configures the core Kubernetes components following best practices, providing a secure yet simple join flow for new nodes and easy upgrades.

The most notable part of this GA release is the graduated advanced functionality, in particular pluggability and configurability. kubeadm aims to provide a toolbox for administrators and higher-level automation systems alike, and this release is an important step in that direction.

Container Storage Interface (CSI)

The Container Storage Interface was first introduced as an alpha feature in 1.9 and moved to beta in 1.10; it now reaches GA and is generally available. With CSI, the Kubernetes volume layer becomes truly extensible: third-party storage vendors can write code that interoperates with Kubernetes without touching any Kubernetes core code. The CSI specification itself has also reached 1.0.

With CSI now stable, plugin authors can develop out-of-tree storage plugins at their own pace; see the CSI documentation for details.

CoreDNS becomes the default DNS server for Kubernetes

In 1.11 the team announced that CoreDNS-based DNS service discovery was generally available. In 1.13, CoreDNS officially replaces kube-dns as the default DNS server in Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides a backward-compatible and extensible integration with Kubernetes. Because CoreDNS runs as a single executable and a single process, it has fewer moving parts than the previous DNS server, and it supports flexible use cases through custom DNS entries. In addition, since CoreDNS is written in Go, it benefits from strong memory safety.

CoreDNS is the recommended DNS solution for Kubernetes 1.13 and later. Kubernetes has switched its common test infrastructure to use CoreDNS by default, and the team recommends users switch as soon as possible. KubeDNS will still be supported for at least one more release, but now is the time to start planning the migration. Many OSS installation tools, including kubeadm since 1.11, have already made the switch.

1. Environment preparation

Node layout (the original table is missing; the layout below is reconstructed from the IP addresses used throughout this guide):

Host     IP                Roles / components
master   192.168.187.131   kube-apiserver, kube-controller-manager, kube-scheduler, etcd01, kubelet, kube-proxy
node1    192.168.187.132   kubelet, kube-proxy, docker, etcd02
node2    192.168.187.133   kubelet, kube-proxy, docker, etcd03
harbor   192.168.187.134   Harbor private registry

2. Kubernetes architecture diagram

(Kubernetes architecture diagram; image not included)

3. Kubernetes workflow

Description of each cluster component:

Master node:

The master node consists of four main components: the APIServer, the scheduler, the controller-manager, and etcd.

APIServer: the APIServer exposes the RESTful Kubernetes API and is the unified entry point for all management commands. Any create, read, update, or delete operation on a resource goes through the APIServer before being persisted to etcd. kubectl (the client tool shipped with Kubernetes, which internally just calls the Kubernetes API) talks directly to the APIServer.
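
Every kubectl invocation is ultimately a set of REST calls against the APIServer. Once the cluster built in this guide is running, this can be made visible with kubectl's verbosity flag (a small illustration, not part of the deployment steps):

# Print the HTTP requests kubectl sends to the APIServer
# (at --v=6 each request against https://<apiserver>:6443/... is logged).
kubectl get pods --v=6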

scheduler: the scheduler assigns Pods to suitable Nodes. Viewed as a black box, its input is a Pod plus a list of candidate Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with a default scheduling algorithm and also exposes an interface so that users can plug in scheduling logic of their own.

controller manager: if the APIServer does the front-office work, the controller manager takes care of the back office. Every resource type has a corresponding controller, and the controller manager is responsible for running these controllers. For example, when a Pod is created through the APIServer, the APIServer's job ends once the object has been stored; the controllers then keep watching the actual state and drive it toward the desired state, for instance by recreating Pods of a Deployment when they disappear.
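
The reconciliation loop is easy to observe once the cluster is up. A small sketch using the nginx-deployment created at the end of this guide (it assumes that deployment already exists):

# Change the desired state: ask for 5 replicas instead of 3.
kubectl scale deployment nginx-deployment --replicas=5
# The deployment/replicaset controllers inside kube-controller-manager notice
# the difference and create the missing pods; watch them appear.
kubectl get pods -w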

etcd: etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource, and the RESTful API is served on top of it.

Node (worker) node:

Each Node runs two main components, kubelet and kube-proxy, in addition to the Docker runtime.

kube-proxy: this component implements service discovery and reverse proxying in Kubernetes. kube-proxy forwards TCP and UDP connections and by default distributes client traffic across the backend pods of a service using a Round Robin algorithm. For service discovery, kube-proxy watches the API server for changes to Service and Endpoints objects and maintains a mapping from each service to its endpoints, so that backend pod IP changes remain transparent to clients. kube-proxy also supports session affinity.
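
As an illustration of the service-to-endpoints mapping that kube-proxy programs, here is a minimal Service sketch (the name nginx-svc is hypothetical; the selector matches the app: nginx label used by the deployment at the end of this guide):

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
# kube-proxy watches this Service and its Endpoints object and installs
# forwarding rules so that traffic to the Service's cluster IP is spread
# across the matching pods.
kubectl get endpoints nginx-svc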

kubelet: the kubelet is the master's agent on each Node and the most important component on the worker. It maintains and manages all containers on that Node (containers not created through Kubernetes are ignored). Essentially, it keeps the actual state of the Pods on the node in line with the desired state.

II. Kubernetes installation and configuration

1. Initialize the environment

1.1 Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled

1.2 Disable swap

swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#UUID=7bff6243-324c-4587-b550-55dc34018ebf swap                    swap    defaults        0 0
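The fstab edit can also be done non-interactively; a sketch that comments out any line mentioning swap:

sed -ri 's/.*swap.*/#&/' /etc/fstab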

1.3 Set the kernel parameters required by Docker

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

1.4 Install Docker

yum install -y yum-utils   # yum-config-manager is provided by yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y
systemctl start docker && systemctl enable docker

1.5 Create installation directories

mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p

1.6 Install and configure CFSSL

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2. Install etcd

2.1 Create certificates

Create the etcd CA configuration file (ca-config.json)

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

Create the etcd CA certificate signing request (ca-csr.json)

cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen"
        }
    ]
}
EOF

Create the etcd server certificate signing request (server-csr.json); the hosts list must contain the IPs of all three etcd nodes

cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "192.168.187.131",
    "192.168.187.132",
    "192.168.187.133"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen"
        }
    ]
}
EOF

Generate the etcd CA certificate and private key, then sign the server certificate

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
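
Optionally verify that the three etcd node IPs ended up in the certificate's SAN list (cfssl-certinfo was installed in step 1.6):

cfssl-certinfo -cert server.pem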

2.2 Deploy etcd

Extract the installation files

# The etcd v3.3.10 release tarball is assumed to have been downloaded already, e.g. from
# https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
vi /k8s/etcd/cfg/etcd   
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.187.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.187.131:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.187.131:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.187.131:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.187.131:2380,etcd02=https://192.168.187.132:2380,etcd03=https://192.168.187.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Create the etcd systemd unit file

vi /usr/lib/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/k8s/etcd/ssl/server.pem \
--peer-key-file=/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Copy the certificate files

cp ca*pem server*pem /k8s/etcd/ssl

Copy the unit file and configuration to node1 and node2, then adjust the per-node settings

cd /k8s/ 
scp -r etcd 192.168.187.132:/k8s/
scp -r etcd 192.168.187.133:/k8s/
scp /usr/lib/systemd/system/etcd.service  192.168.187.132:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service  192.168.187.133:/usr/lib/systemd/system/etcd.service 

On node1 (192.168.187.132):

vi /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.187.132:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.187.132:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.187.132:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.187.132:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.187.131:2380,etcd02=https://192.168.187.132:2380,etcd03=https://192.168.187.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

On node2 (192.168.187.133):

vi /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.187.133:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.187.133:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.187.133:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.187.133:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.187.131:2380,etcd02=https://192.168.187.132:2380,etcd03=https://192.168.187.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Start the etcd service (on all three nodes; the first node will wait until the others have joined)

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Create a symlink for the etcdctl command

ln -s /k8s/etcd/bin/etcdctl /usr/bin/etcdctl

Verify that the cluster is running correctly

[root@master ~]# etcdctl \
--ca-file=/k8s/etcd/ssl/ca.pem \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.187.131:2379,\
https://192.168.187.132:2379,\
https://192.168.187.133:2379" cluster-health
member 88594f44ddf134a9 is healthy: got healthy result from https://192.168.187.131:2379
member a1380f268e8526c6 is healthy: got healthy result from https://192.168.187.133:2379
member ba9a4cade1b1efa7 is healthy: got healthy result from https://192.168.187.132:2379
cluster is healthy

3. Deploy Harbor

3.1 Introduction to Harbor

Developing and running Docker container applications depends on reliable image management. Although Docker provides a public image registry, for security and efficiency it is well worth deploying a private registry in our own environment. Harbor is an enterprise-grade Docker Registry project open-sourced by VMware; it provides role-based access control (RBAC), LDAP integration, audit logging, a management UI, self-registration, image replication, Chinese-language support, and more.

3.2 Install docker-compose

curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
# check the version
[root@harbor ~]# docker-compose version
docker-compose version 1.25.0-rc2, build 661ac20e
docker-py version: 4.0.1
CPython version: 3.7.4
OpenSSL version: OpenSSL 1.1.0k  28 May 2019

3.3 Install the Harbor private registry

  • Download the Harbor installer
    Download the desired release from the GitHub Harbor releases page.
wget https://github.com/vmware/harbor/releases/download/v1.5.0/harbor-offline-installer-v1.5.0.tgz
tar xvf harbor-offline-installer-v1.5.0.tgz
  • Configure Harbor
    After extraction, the directory contains a harbor.cfg file, which is Harbor's configuration file.
## Configuration file of Harbor

# hostname: the address used to access Harbor; an IP or domain name, must not be 127.0.0.1 or localhost
hostname = 192.168.187.134

# Access protocol; default is http. If set to https, ssl must be enabled in nginx
ui_url_protocol = http

# Default password of the MySQL root user is root123; change it for real deployments
db_password = root123

max_job_workers = 3 
customize_crt = on
ssl_cert = /data/cert/server.crt
ssl_cert_key = /data/cert/server.key
secretkey_path = /data
admiral_url = NA

# Mail settings, used to send password-reset emails
email_identity = 
email_server = smtp.mydomain.com
email_server_port = 25
email_username = sample_admin@mydomain.com
email_password = abc
email_from = admin <sample_admin@mydomain.com>
email_ssl = false

# Password for logging in to the admin UI after Harbor starts; default is Harbor12345
harbor_admin_password = Harbor12345

# Authentication mode; several backends are supported, e.g. LDAP, local storage, database. Default is db_auth (MySQL database authentication)
auth_mode = db_auth

# Settings used for LDAP authentication
#ldap_url = ldaps://ldap.mydomain.com
#ldap_searchdn = uid=searchuser,ou=people,dc=mydomain,dc=com
#ldap_search_pwd = password
#ldap_basedn = ou=people,dc=mydomain,dc=com
#ldap_filter = (objectClass=person)
#ldap_uid = uid 
#ldap_scope = 3 
#ldap_timeout = 5

# Whether self-registration is enabled
self_registration = on

# Token lifetime; default is 30 minutes
token_expiration = 30

# Who may create projects; default is everyone, can be restricted to adminonly
project_creation_restriction = everyone

verify_remote_cert = on
  • Start Harbor
    After editing the configuration file, run ./install.sh in the current directory. Harbor will pull the required images according to the docker-compose.yml in that directory and start the services in order.
[root@localhost harbor]# ./install.sh 

[Step 0]: checking installation environment ...

Note: docker version: 19.03.2

Note: docker-compose version: 1.25.0

[Step 1]: loading Harbor images ...
  • The Harbor services now running are:
[root@harbor harbor]# docker-compose ps
       Name                     Command                  State                                    Ports                              
-------------------------------------------------------------------------------------------------------------------------------------
harbor-adminserver   /harbor/start.sh                 Up (healthy)                                                                   
harbor-db            /usr/local/bin/docker-entr ...   Up (healthy)   3306/tcp                                                        
harbor-jobservice    /harbor/start.sh                 Up                                                                             
harbor-log           /bin/sh -c /usr/local/bin/ ...   Up (healthy)   127.0.0.1:1514->10514/tcp                                       
harbor-ui            /harbor/start.sh                 Up (healthy)                                                                   
nginx                nginx -g daemon off;             Up (healthy)   0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp, 0.0.0.0:80->80/tcp
redis                docker-entrypoint.sh redis ...   Up             6379/tcp                                                        
registry             /entrypoint.sh serve /etc/ ...   Up (healthy)   5000/tcp 
  • Access the Harbor web UI
    Open http://192.168.187.134 in a browser to access Harbor.

    (Harbor login page screenshot)
  • Log in to the Harbor web UI
    Log in with the admin user and its password.

    (Harbor dashboard screenshot)

    Log in to Harbor from the 192.168.187.134 host:

[root@harbor ~]# docker login 192.168.187.134
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Push the pause image to the Harbor registry
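
Before the push below can succeed, the image must exist locally under the Harbor name and a "k8s" project must have been created in the Harbor UI. A sketch, assuming the image is pulled from a reachable mirror of gcr.io/google_containers:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0 192.168.187.134/k8s/pause-amd64:3.0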

[root@harbor ~]# docker push 192.168.187.134/k8s/pause-amd64:3.0
The push refers to repository [192.168.187.134/k8s/pause-amd64]
5f70bf18a086: Pushed 
41ff149e94f2: Pushed 
3.0: digest: sha256:f04288efc7e65a84be74d4fc63e235ac3c6c603cf832e442e0bd3f240b10a91b size: 939

4. Deploy Kubernetes

4.1 Create certificates

Create the Kubernetes CA certificate

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Generate the API server certificate. The hosts list must include the master IP and the first IP of the service cluster IP range (20.0.0.1 for the 20.0.0.0/24 range configured later).

cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.187.131",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Create the kube-proxy certificate

cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Set up SSH key authentication (generate a key pair on the master; the public key is then copied to the nodes so the later scp commands work without passwords)

[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:RpxtEMGvYrQN5yA5tawDLd3jBRAgx33+xYUOAfH6mto root@master
The key's randomart image is:
+---[RSA 2048]----+
|..ooooo+=+ .     |
| o. . +oo+. .    |
|   o B o==o.     |
|  o * Xoo.=      |
|   o *.@So       |
|    o =o=        |
|     o ..        |
|     . o         |
|    ..E          |
+----[SHA256]-----+
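
Copy the public key to the two worker nodes so the scp/ssh commands below run without password prompts (ssh-copy-id ships with openssh-clients):

ssh-copy-id root@192.168.187.132
ssh-copy-id root@192.168.187.133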

4.2 Deploy the master node

The Kubernetes master node runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
    kube-scheduler and kube-controller-manager can run in a clustered mode: leader election picks one working process while the other processes stay blocked.

Download the kubernetes-server tarball

wget https://storage.googleapis.com/kubernetes-release/release/v1.13.11/kubernetes-server-linux-amd64.tar.gz

Extract the archive and copy the binaries to the master node

[root@master ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
[root@master ~]# cd kubernetes/server/bin/
[root@master bin]# cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/

Copy the certificates (run from the directory where the Kubernetes certificates were generated)

[root@master ~]# cp *pem /k8s/kubernetes/ssl/

Deploy the kube-apiserver component
Create the TLS bootstrapping token

[root@master bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
9f5cb09c4e8d625501b4bfd6df0e56a3
[root@master bin]# vi /k8s/kubernetes/cfg/token.csv
9f5cb09c4e8d625501b4bfd6df0e56a3,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
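
A scripted equivalent (a sketch): keeping the token in a shell variable makes it easy to reuse the exact same value later when generating bootstrap.kubeconfig, which must carry the token listed in token.csv.

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /k8s/kubernetes/cfg/token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo ${BOOTSTRAP_TOKEN}   # note this value for the environment.sh step below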

Create the apiserver configuration file

[root@master bin]# vi /k8s/kubernetes/cfg/kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.187.131:2379,https://192.168.187.132:2379,https://192.168.187.133:2379 \
--bind-address=192.168.187.131 \
--secure-port=6443 \
--advertise-address=192.168.187.131 \
--allow-privileged=true \
--service-cluster-ip-range=20.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem  \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

Create the kube-apiserver systemd unit file

vi /usr/lib/systemd/system/kube-apiserver.service 

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

Check that the apiserver is running

[root@master ~]# systemctl status kube-apiserver        
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2019-10-04 12:47:16 CST; 1min 19s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 6597 (kube-apiserver)
    Tasks: 8
   Memory: 320.1M
   CGroup: /system.slice/kube-apiserver.service
           └─6597 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.187.131:2379,https://192.168.18...

10月 04 12:48:32 master kube-apiserver[6597]: I1004 12:48:32.392464    6597 available_controller.go:434] Updating v1.storage.k8s.io
10月 04 12:48:32 master kube-apiserver[6597]: I1004 12:48:32.409206    6597 available_controller.go:434] Updating v1beta1.admissionr....k8s.io
10月 04 12:48:32 master kube-apiserver[6597]: I1004 12:48:32.409462    6597 available_controller.go:434] Updating v1beta1.certificates.k8s.io
10月 04 12:48:32 master kube-apiserver[6597]: I1004 12:48:32.409582    6597 available_controller.go:434] Updating v1beta1.scheduling.k8s.io
10月 04 12:48:33 master kube-apiserver[6597]: I1004 12:48:33.467725    6597 httplog.go:90] GET /api/v1/namespaces/kube-system: (11.6...:42400]
10月 04 12:48:33 master kube-apiserver[6597]: I1004 12:48:33.471771    6597 httplog.go:90] GET /api/v1/namespaces/kube-public: (3.56...:42400]
10月 04 12:48:33 master kube-apiserver[6597]: I1004 12:48:33.475319    6597 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (...:42400]
10月 04 12:48:35 master kube-apiserver[6597]: I1004 12:48:35.693829    6597 httplog.go:90] GET /api/v1/namespaces/default: (13.43223...:42400]
10月 04 12:48:35 master kube-apiserver[6597]: I1004 12:48:35.708441    6597 httplog.go:90] GET /api/v1/namespaces/default/services/k...:42400]
10月 04 12:48:35 master kube-apiserver[6597]: I1004 12:48:35.776261    6597 httplog.go:90] GET /api/v1/namespaces/default/endpoints/...:42400]
Hint: Some lines were ellipsized, use -l to show in full.

Deploy kube-scheduler
Create the kube-scheduler configuration file

[root@master ~]# vi /k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
  • --address: accept http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support https;
  • --kubeconfig: path to the kubeconfig file that kube-scheduler uses to connect to and authenticate against kube-apiserver;
  • --leader-elect=true: cluster mode with leader election enabled; the node elected leader does the work while the others stay blocked;

Create the kube-scheduler systemd unit file

[root@master ~]# vi /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

Check that kube-scheduler is running

[root@master ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2019-10-04 13:05:14 CST; 10s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 6737 (kube-scheduler)
    Tasks: 7
   Memory: 48.1M
   CGroup: /system.slice/kube-scheduler.service
           └─6737 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

10月 04 13:05:17 master kube-scheduler[6737]: I1004 13:05:17.907502    6737 shared_informer.go:227] caches populated
10月 04 13:05:17 master kube-scheduler[6737]: I1004 13:05:17.907516    6737 shared_informer.go:227] caches populated
10月 04 13:05:17 master kube-scheduler[6737]: I1004 13:05:17.907530    6737 shared_informer.go:227] caches populated
10月 04 13:05:17 master kube-scheduler[6737]: I1004 13:05:17.907544    6737 shared_informer.go:227] caches populated
10月 04 13:05:18 master kube-scheduler[6737]: I1004 13:05:18.008490    6737 shared_informer.go:227] caches populated
10月 04 13:05:18 master kube-scheduler[6737]: I1004 13:05:18.008556    6737 shared_informer.go:227] caches populated
10月 04 13:05:18 master kube-scheduler[6737]: I1004 13:05:18.008585    6737 shared_informer.go:227] caches populated
10月 04 13:05:18 master kube-scheduler[6737]: I1004 13:05:18.008866    6737 leaderelection.go:241] attempting to acquire leader leas...uler...
10月 04 13:05:18 master kube-scheduler[6737]: I1004 13:05:18.066076    6737 leaderelection.go:251] successfully acquired lease kube-...heduler
10月 04 13:05:18 master kube-scheduler[6737]: I1004 13:05:18.067450    6737 shared_informer.go:227] caches populated
Hint: Some lines were ellipsized, use -l to show in full.

Deploy kube-controller-manager
Create the kube-controller-manager configuration file

[root@master ~]# vi /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=20.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"

Create the kube-controller-manager systemd unit file

[root@master ~]# vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

Check that kube-controller-manager is running

[root@master ~]# systemctl status kube-controller-manager 
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2019-10-04 13:11:14 CST; 5s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 6792 (kube-controller)
    Tasks: 5
   Memory: 124.6M
   CGroup: /system.slice/kube-controller-manager.service
           └─6792 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address...

10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.217119    6792 ttl_controller.go:116] Starting TTL controller
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.217503    6792 shared_informer.go:197] Waiting for caches to sy...or TTL
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.217654    6792 request.go:538] Throttling request took 414.6079...ut=32s
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.218381    6792 garbagecollector.go:130] Starting garbage collec...roller
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.218545    6792 shared_informer.go:197] Waiting for caches to sy...lector
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.218597    6792 graph_builder.go:272] garbage controller monitor...isions
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.221343    6792 graph_builder.go:282] GraphBuilder running
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.221862    6792 controllermanager.go:534] Started "pv-protection"
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.221912    6792 controllermanager.go:519] Starting "replicationc...oller"
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.223370    6792 controllermanager.go:534] Started "replicationcontroller"
Hint: Some lines were ellipsized, use -l to show in full.

Add the binary directory /k8s/kubernetes/bin to the PATH variable

[root@master ~]# vi .bash_profile
PATH=$PATH:$HOME/bin:/k8s/kubernetes/bin
[root@master ~]# . .bash_profile

Check the status of the master components

[root@master bin]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}

4.3 Deploy the worker nodes

The Kubernetes worker nodes run the following components:

  • docker (already installed earlier)
  • kubelet
  • kube-proxy

Deploy the kubelet component

  • kubelet runs on every worker node; it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs;
  • on startup, kubelet automatically registers the node with kube-apiserver, and its built-in cadvisor collects and reports the node's resource usage;
  • for security, this guide only opens the authenticated and authorized https port and rejects unauthorized access (e.g. from apiserver or heapster).
    Copy the kubelet and kube-proxy binaries to the nodes (from the kubernetes/server/bin directory extracted earlier):
cp kubelet kube-proxy /k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.187.132:/k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.187.133:/k8s/kubernetes/bin/

Create the kubelet bootstrap kubeconfig file

vi  environment.sh
# Create the kubelet bootstrapping kubeconfig.
# BOOTSTRAP_TOKEN must be the same token that was written to /k8s/kubernetes/cfg/token.csv.
BOOTSTRAP_TOKEN=b772932145d9062ea2f2e9adf0ac87dc
KUBE_APISERVER="https://192.168.187.131:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files to all nodes

cp bootstrap.kubeconfig kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.187.132:/k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.187.133:/k8s/kubernetes/cfg/

Create the kubelet parameter configuration file and copy it to all nodes
Create the kubelet configuration template (on the other nodes, change address to that node's IP):

vi /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.187.131
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["20.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Create the kubelet options file (on the other nodes, change hostname-override to that node's IP)

[root@node1 ~]# vi /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.187.131 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=192.168.187.134/k8s/pause-amd64:3.0"

Create the kubelet systemd unit file

[root@node1 ~]# vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role (run on the master)

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Start the service

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

Approve the kubelet CSR requests
CSRs can be approved manually or automatically. The automatic approach is recommended because, starting with v1.8, the certificates issued for approved CSRs can be rotated automatically; a sketch of the automatic approach is shown below.
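
A hedged sketch of automatic approval, using the CSR-approving ClusterRoles that ship with Kubernetes (the binding names here are arbitrary):

# Auto-approve the initial bootstrap CSRs submitted with the kubelet-bootstrap token
kubectl create clusterrolebinding auto-approve-csrs-for-bootstrap \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --user=kubelet-bootstrap
# Auto-approve certificate-renewal CSRs from nodes that are already registered
kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes
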
Manually approve a CSR request
List the pending CSRs:

[root@master ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-P7FPCm4aKQguhmejD8v0mTTHBKBwMDTHkVnf7bE04uk   12s   kubelet-bootstrap   Pending
[root@master ~]# kubectl certificate approve node-csr-P7FPCm4aKQguhmejD8v0mTTHBKBwMDTHkVnf7bE04uk
certificatesigningrequest.certificates.k8s.io/node-csr-P7FPCm4aKQguhmejD8v0mTTHBKBwMDTHkVnf7bE04uk approved
[root@master ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-P7FPCm4aKQguhmejD8v0mTTHBKBwMDTHkVnf7bE04uk   74s   kubelet-bootstrap   Approved,Issued
  • Requesting User: the user that submitted the CSR, authenticated and authorized by kube-apiserver;
  • Subject: the certificate information being requested;
  • the certificate CN is system:node:<node name> (e.g. system:node:kube-node2) and the Organization is system:nodes; the kube-apiserver Node authorization mode grants the corresponding permissions to such certificates;

Check the node status

[root@master ~]# kubectl get node
NAME              STATUS   ROLES    AGE     VERSION
192.168.187.131   Ready    <none>   22h     v1.13.11
192.168.187.132   Ready    <none>   9m9s    v1.13.11
192.168.187.133   Ready    <none>   6m50s   v1.13.11

Deploy the kube-proxy component

kube-proxy runs on every node; it watches the apiserver for changes to Services and Endpoints and creates forwarding rules to load-balance traffic to services.

Create the kube-proxy options file (on the other nodes, change hostname-override to that node's IP)

vi /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.187.131 \
--cluster-cidr=20.0.0.0/24 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
  • bindAddress: listen address;
  • clientConnection.kubeconfig: kubeconfig file used to connect to the apiserver;
  • clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic originating inside the cluster from external traffic; SNAT for requests to Service IPs is only performed when --cluster-cidr or --masquerade-all is specified;
  • hostnameOverride: must match the value used by kubelet, otherwise kube-proxy will not find the Node and will not create any forwarding rules;
  • mode: ipvs mode can be used instead of the default iptables mode;
    These names correspond to the fields of kube-proxy's config-file form, sketched below.
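
A sketch of that config-file form (passed to kube-proxy with --config instead of the individual flags used above; ipvs mode additionally requires the ip_vs kernel modules, otherwise kube-proxy falls back to iptables):

cat << EOF | tee /k8s/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: /k8s/kubernetes/cfg/kube-proxy.kubeconfig
clusterCIDR: 20.0.0.0/24
hostnameOverride: 192.168.187.131
mode: ipvs
EOF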

Create the kube-proxy systemd unit file

vi /usr/lib/systemd/system/kube-proxy.service 

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
[root@master ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2019-10-09 23:09:14 CST; 2min 3s ago
 Main PID: 5525 (kube-proxy)
    Tasks: 0
   Memory: 33.0M
   CGroup: /system.slice/kube-proxy.service
           ? 5525 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.187.131 --cluster-cidr=20.0.0.0/24 --k...

Configure a Kubernetes secret for pulling from the private registry

1. Log in to the Harbor registry

[root@master ~]# docker login 192.168.187.134
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

After entering the password, the stored credentials can be inspected (base64-encoded) from the Docker config file:

[root@master ~]# cat ./.docker/config.json |base64
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4Ny4xMzQiOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2
WVdKalFERXlNMEU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAi
RG9ja2VyLUNsaWVudC8xOS4wMy4yIChsaW51eCkiCgl9Cn0=
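
For the Secret below the value must be a single line; with GNU coreutils the line wrapping can be disabled:

cat /root/.docker/config.json | base64 -w 0
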
  2. Generate the secret
    Create secret.yaml, pasting the single-line base64 string into .dockerconfigjson:
vi secret.yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: default
  name: harbor
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4Ny4xMzQiOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2WVdKalFERXlNMEU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy4yIChsaW51eCkiCgl9Cn0=

Apply the secret to the Kubernetes cluster

[root@master ~]# kubectl create -f secret.yaml
secret/harbor created

Create an nginx application

Create nginx.yaml (the image reference assumes an nginx image has been pushed to a public project in Harbor):

vi nginx.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 192.168.187.134/public/nginx:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: harbor

Apply nginx.yaml to the Kubernetes cluster

[root@master ~]# kubectl create -f nginx.yaml 
deployment.extensions/nginx-deployment created

[root@master ~]# kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE     IP           NODE              NOMINATED NODE   READINESS GATES
nginx-deployment-84bb5596c-bnb6x   1/1     Running   0          2m23s   172.17.0.2   192.168.187.133   <none>           <none>
nginx-deployment-84bb5596c-jkt4h   1/1     Running   0          2m23s   172.17.0.2   192.168.187.132   <none>           <none>
nginx-deployment-84bb5596c-zvmhq   1/1     Running   0          2m24s   172.17.0.3   192.168.187.133   <none>           <none>
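
A quick smoke test (a sketch using the pod IPs reported above): a pod's IP is reachable from the node it runs on. Note that this guide does not deploy an overlay network such as flannel, so pod IPs are not routable across nodes and the same docker0 subnet is reused on each node.

# run on 192.168.187.133, where nginx-deployment-84bb5596c-zvmhq is scheduled
curl -I 172.17.0.3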