Envoy xDS Examples

1. Envoy xDS Examples

For cloning the code, refer to the Envoy examples blog post.

  • eds-filesystem
cd servicemesh_in_practise/Dynamic-Configuration/eds-filesystem
# edit the docker-compose file
vim docker-compose.yaml
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment: # add this environment variable
    - ENVOY_UID=0
# start
docker-compose up

# verify
curl 172.31.11.2

Note: the docker-compose file in this example has three envoy image entries, so the change must be made three times. Also, the port Envoy's listener binds and the port of the EDS-discovered endpoints can be the same; they do not conflict.


Edit the config file to simulate adding an endpoint and let EDS discover it dynamically:

docker exec -it edsfilesystem_envoy_1 sh
cd /etc/envoy/eds.conf.d
cat eds.yaml.v2 > eds.yaml
mv eds.yaml bak && mv bak eds.yaml # needed only inside the container, not on the host: it forces the file's inode to change so the inode-based watch fires and envoy picks up the new endpoint
curl 172.31.11.2:9901/clusters # inspect the endpoints; before the mv the new endpoint exists in the file but envoy has not seen it, after the mv it shows up
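For reference, eds.yaml.v2 follows the standard v3 ClusterLoadAssignment file format; a minimal sketch of its plausible content (the cluster name and endpoint addresses here are assumptions based on this lab's subnet):

resources:
- "@type": type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.11 # original endpoint
            port_value: 80
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.12 # the endpoint "added" by the v2 file; address assumed
            port_value: 80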

# verify
curl 172.31.11.2
  • lds-cds-filesystem
cd servicemesh_in_practise/Dynamic-Configuration/lds-cds-filesystem
# edit the docker-compose file
vim docker-compose.yaml
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment: # add this environment variable
    - ENVOY_UID=0
...
        - webserver01-sidecar
        - webserver # add an alias; used later to verify the domain-name change in cds.yaml
...
        - webserver02-sidecar
        - webserver # add an alias; used later to verify the domain-name change in cds.yaml
# start
docker-compose up 

# verify
curl 172.31.12.2

As before, the environment variable must be added in all three places.


Edit the config file to verify changing the listener port:

docker exec -it ldscdsfilesystem_envoy_1 sh
cd /etc/envoy/conf.d
# change the listener port from 80 to 10080
vi lds.yaml
resources:
- "@type": type.googleapis.com/envoy.config.listener.v3.Listener
  name: listener_http
  address:
    socket_address: { address: 0.0.0.0, port_value: 10080 }
mv lds.yaml bak && mv bak lds.yaml
curl 172.31.12.2:9901/listeners # check the listening ports

# verify
curl 172.31.12.2:10080

Edit the config file to verify removing an endpoint:

vi cds.yaml
resources:
- "@type": type.googleapis.com/envoy.config.cluster.v3.Cluster
  name: webcluster
  connect_timeout: 1s
  type: STRICT_DNS
  load_assignment:
    cluster_name: webcluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: webserver01
              port_value: 80
mv cds.yaml bak && mv bak cds.yaml
curl 172.31.12.2:9901/clusters # check the cluster endpoint info

# verify
curl 172.31.12.2:10080

Verify changing the domain name that gets resolved (sketched below):
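The edit here is presumably switching the endpoint hostname from webserver01 to the webserver alias added in docker-compose.yaml earlier, so that STRICT_DNS resolution returns both sidecars; the relevant fragment of cds.yaml would change roughly like this:

            socket_address:
              address: webserver # was webserver01; the alias resolves to both sidecar containers
              port_value: 80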

vi cds.yaml
mv cds.yaml bak && mv bak cds.yaml
curl 172.31.12.2:9901/clusters # check the cluster endpoint info

# verify
curl 172.31.12.2:10080
  • lds-cds-grpc
cd servicemesh_in_practise/Dynamic-Configuration/lds-cds-grpc
# edit the docker-compose file
vim docker-compose.yaml
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment: # add this environment variable
    - ENVOY_UID=0
# start
docker-compose up
curl 172.31.15.2:9901/clusters # check cluster info
curl 172.31.15.2:9901/listeners # check listener info

# verify
curl 172.31.15.2

Edit the config file to verify the listener port change and endpoint addition (the xDS server's config format is sketched below):
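The xDS server in this lab reads its own small config format (name/spec with listeners and clusters). Only the listeners fragment shown later in this post is confirmed; the rest of this sketch is an assumption about the lab's schema:

name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 10080
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.15.11 # endpoint addresses assumed
      port: 80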

docker exec -it ldscdsgrpc_xdsserver_1 sh
cd /etc/envoy-xds-server/config
cat config.yaml-v2 > config.yaml # the config takes effect in real time, so do not edit config.yaml in place; vi's write-on-save would make the xDS server push a half-edited config
curl 172.31.15.2:9901/clusters # check cluster info
curl 172.31.15.2:9901/listeners # check listener info
yum install jq -y # install jq
# filter the dynamically discovered clusters out of the config_dump output with jq
curl -s 172.31.15.2:9901/config_dump | jq '.configs[1].dynamic_active_clusters'
[
  {
    "version_info": "411",
    "cluster": {
      "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
      "name": "webcluster", # 對應集群的名字
      "type": "EDS",
      "eds_cluster_config": {
        "eds_config": {
          "api_config_source": {
            "api_type": "GRPC", # api協議
            "grpc_services": [
              {
                "envoy_grpc": {
                  "cluster_name": "xds_cluster" # 發現webcluster對應的上游動態服務(xDS server)集群名字
                }
              }
            ],
            "set_node_on_first_message_only": true,
            "transport_api_version": "V3" # api版本
          },
          "resource_api_version": "V3"
        }
      },
      "connect_timeout": "5s",
      "dns_lookup_family": "V4_ONLY"
    },
    "last_updated": "20xx-04-25Txx:13:45.024Z"
  }
]
# view the listener details
curl -s 172.31.15.2:9901/config_dump?resource=dynamic_listeners | jq '.configs[0].active_state.listener'
{
  "@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
  "name": "listener_http",
  "address": {
    "socket_address": {
      "address": "0.0.0.0",
      "port_value": 10080
    }
  },
  "filter_chains": [
    {
      "filters": [
        {
          "name": "envoy.filters.network.http_connection_manager",
          "typed_config": {
            "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
            "stat_prefix": "http",
            "rds": {
              "config_source": {
                "api_config_source": {
                  "api_type": "GRPC",
                  "grpc_services": [
                    {
                      "envoy_grpc": {
                        "cluster_name": "xds_cluster"
                      }
                    }
                  ],
                  "set_node_on_first_message_only": true,
                  "transport_api_version": "V3"
                },
                "resource_api_version": "V3"
              },
              "route_config_name": "local_route"
            },
            "http_filters": [
              {
                "name": "envoy.filters.http.router"
              }
            ]
          }
        }
      ]
    }
  ]
}

# verify
curl 172.31.15.2:10080

Edit the config file to verify changing the listener name:

vi config.yaml
name: myconfig
spec:
  listeners:
  - name: listener_http1 # change the name
    address: 0.0.0.0
    port: 10081 # change the port
curl 172.31.15.2:9901/listeners # check listener info

# verify
curl 172.31.15.2:10080
curl 172.31.15.2:10081

Renaming a listener creates a brand-new listener, and the original one still exists and remains reachable, since the name is the identifier Envoy uses for listener resources.


  • Health checking (health-check)
cd servicemesh_in_practise/Cluster-Manager/health-check
# edit the docker-compose file
vim docker-compose.yaml
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment: # add this environment variable
    - ENVOY_UID=0
# start
docker-compose up
curl 172.31.18.2:9901/clusters # check cluster info
curl 172.31.18.2:9901/listeners # check listener info

# verify
curl 172.31.18.2
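Behind this test, the demo app starts answering /livez with 506 once livez=FAIL, and the cluster's active health checker polls that path and marks the endpoint unhealthy. The health-check stanza in the cluster looks along these lines (interval and threshold values are assumptions):

    health_checks:
    - timeout: 5s
      interval: 10s
      unhealthy_threshold: 2
      healthy_threshold: 2
      http_health_check:
        path: /livez
        expected_statuses: # accept 2xx/3xx as healthy
        - start: 200
          end: 399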

Set the livez check value to FAIL and verify:

curl -XPOST -d "livez=FAIL" 172.31.18.11/livez # 修改web服務sidercar的livez值
curl -I 172.31.18.11/livez # 修改后驗證響應碼為506
HTTP/1.1 506 Variant Also Negotiates
content-type: text/html; charset=utf-8
content-length: 4
server: envoy
date: Tue, 26 Apr 20xx xx:54:29 GMT
x-envoy-upstream-service-time: 1

# verify
curl 172.31.18.2

Set the livez check value back to OK and verify:

curl -XPOST -d "livez=OK" 172.31.18.11/livez
curl -I 172.31.18.11/livez
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 2
server: envoy
date: Tue, 26 Apr 20xx xx:57:38 GMT
x-envoy-upstream-service-time: 1

# verify
curl 172.31.18.2
  • Outlier detection (outlier-detection)
cd servicemesh_in_practise/Cluster-Manager/outlier-detection
# edit the docker-compose file
vim docker-compose.yaml
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment: # add this environment variable
    - ENVOY_UID=0
# start
docker-compose up

# verify
curl 172.31.20.2
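Unlike active health checking, outlier detection is passive: Envoy watches real responses and ejects a host after consecutive 5xx errors. The cluster stanza has this shape (the threshold values here are assumptions; the lab's own values may differ):

    outlier_detection:
      consecutive_5xx: 3 # eject after three consecutive 5xx responses
      interval: 10s
      base_ejection_time: 10s
      max_ejection_percent: 100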

Simulate a failure by setting livez to FAIL and verify:

curl -XPOST -d 'livez=FAIL' 172.31.20.12/livez
curl -I  172.31.20.12/livez

# verify
while true; do curl 172.31.20.2/livez; sleep 0.5; done

Recover from the failure and verify:

curl -XPOST -d 'livez=OK' 172.31.20.12/livez
curl -I  172.31.20.12/livez

# verify
while true; do curl 172.31.20.2/livez; sleep 0.5; done
  • least-requests
cd servicemesh_in_practise/Cluster-Manager/least-requests
# edit the docker-compose file
vim docker-compose.yaml
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment: # add this environment variable
    - ENVOY_UID=0
# start
docker-compose up

# verify
./send-request.sh 172.31.22.2 # send test requests with the script
Send 300 requests, and print the result. This will take a while.

Weight of all endpoints:
Red:Blue:Green = 1:3:5

Response from:
Red:Blue:Green = 35:92:173
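The 1:3:5 weighting comes from per-endpoint load_balancing_weight values combined with the LEAST_REQUEST policy; trimmed to the relevant fields, the cluster looks roughly like this (the endpoint hostnames are assumptions):

    lb_policy: LEAST_REQUEST
    load_assignment:
      cluster_name: webcluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: red, port_value: 80 }
          load_balancing_weight: 1
        - endpoint:
            address:
              socket_address: { address: blue, port_value: 80 }
          load_balancing_weight: 3
        - endpoint:
            address:
              socket_address: { address: green, port_value: 80 }
          load_balancing_weight: 5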
  • ring-hash
cd servicemesh_in_practise/Cluster-Manager/ring-hash
# edit the docker-compose file
vim docker-compose.yaml
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment: # add this environment variable
    - ENVOY_UID=0
# start
docker-compose up

# verify
while true; do index=$[$RANDOM%3]; curl -H "User-Agent: Browser_${index}" 172.31.25.2/user-agent && curl -H "User-Agent: Browser_${index}" 172.31.25.2/hostname && echo; sleep .1; done # test in a loop with random browser IDs
User-Agent: Browser_0
ServerName: green

User-Agent: Browser_0
ServerName: green

User-Agent: Browser_2
ServerName: red

User-Agent: Browser_0
ServerName: green

User-Agent: Browser_2
ServerName: red

User-Agent: Browser_0
ServerName: green

User-Agent: Browser_2
ServerName: red

User-Agent: Browser_1
ServerName: blue

Requests carrying the same browser ID consistently land on the same host.
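Two pieces of config produce this stickiness: the cluster uses the RING_HASH policy, and the route defines a hash policy over the User-Agent header, so every request with the same browser ID hashes to the same ring position. A sketch (the ring sizes are assumptions):

    # cluster side
    lb_policy: RING_HASH
    ring_hash_lb_config:
      minimum_ring_size: 512
      maximum_ring_size: 1048576
    # route side, inside the route action
    hash_policy:
    - header:
        header_name: User-Agent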

  • priority-levels
cd servicemesh_in_practise/Cluster-Manager/priority-levels
# edit the docker-compose file
vim docker-compose.yaml
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment: # add this environment variable
    - ENVOY_UID=0
# start
docker-compose up

# verify
while true; do curl 172.31.29.2; sleep .5;done # with all five endpoints healthy, traffic goes only to the three endpoints configured at the higher priority
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.29.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.29.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.29.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!

Simulate a failure and verify:

curl -XPOST -d 'livez=FAIL' 172.31.29.13/livez
while true; do curl 172.31.29.2; sleep .5;done # with the default overprovisioning factor of 1.4, roughly 6.8% of the traffic is scheduled to the two lower-priority endpoints
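Where that figure comes from: Envoy computes each priority's load as health percentage × overprovisioning factor, capped at 100. With one of the three priority-0 endpoints failed, P0 health is 2/3 ≈ 66.7%, so P0 load = 66.7% × 1.4 ≈ 93.3%, and the remaining ≈ 6.7% of traffic spills over to the two priority-1 endpoints.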

Priority scheduling on the v3 API has not been made to work yet in this lab.


  • lb-subsets
    Environment overview
endpoint  stage  version  type    xlarge
e1        prod   1.0      std     true
e2        prod   1.0      std
e3        prod   1.1      std
e4        prod   1.1      std
e5        prod   1.0      bigmem
e6        prod   1.1      bigmem
e7        dev    1.2-pre  std

Label classification
keys: [stage,type], subsets as follows:

[prod,std] - e1,e2,e3,e4
[prod,bigmem] - e5,e6
[dev,std] - e7
[dev,bigmem] - none

keys: [stage,version]

[prod,1.0] - e1,e2,e5
[prod,1.1] - e3,e4,e6
[prod,1.2-pre] - none
[dev,1.0] - none
[dev,1.1] - none
[dev,1.2-pre] - e7

keys:[version]

[1.0] - e1,e2,e5
[1.1] - e3,e4,e6
[1.2-pre] - e7

keys:[xlarge,version]

[true,1.0] - e1
[true,1.1] - none
[true,1.2-pre] - none

Altogether the 10 subsets above are generated, plus one default subset:
stage=prod,type=std,version=1.0 - e1,e2
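These subsets are generated by the cluster's lb_subset_config; a sketch that matches the selectors and default subset above (the fallback policy is an assumption):

    lb_subset_config:
      fallback_policy: DEFAULT_SUBSET # requests that match no subset fall back to the default subset
      default_subset:
        stage: "prod"
        type: "std"
        version: "1.0"
      subset_selectors:
      - keys: ["stage", "type"]
      - keys: ["stage", "version"]
      - keys: ["version"]
      - keys: ["xlarge", "version"]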

cd servicemesh_in_practise/Cluster-Manager/lb-subsets
# edit the docker-compose file
vim docker-compose.yaml
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment: # add this environment variable
    - ENVOY_UID=0
# replace the routes section with the following
              routes:
              - match:
                  prefix: "/"
                  headers:
                  - name: x-custom-version
                    exact_match: pre-release
                  - name: x-environment-state
                    exact_match: dev
                route:
                  cluster: webcluster1
                  metadata_match:
                    filter_metadata:
                      envoy.lb:
                        version: "1.2-pre"
                        stage: "dev"
              - match:
                  prefix: "/"
                  headers:
                  - name: x-custom-version
                    exact_match: v1.0
                  - name: x-environment-state
                    exact_match: prod
                route:
                  cluster: webcluster1
                  metadata_match:
                    filter_metadata:
                      envoy.lb:
                        version: "1.0"
                        stage: "prod"
              - match:
                  prefix: "/"
                  headers:
                  - name: x-custom-version
                    exact_match: v1.1
                  - name: x-environment-state
                    exact_match: prod
                route:
                  cluster: webcluster1
                  metadata_match:
                    filter_metadata:
                      envoy.lb:
                        version: "1.1"
                        stage: "prod"
              - match:
                  prefix: "/"
                  headers:
                  - name: x-hardware-test
                    exact_match: memory
                  - name: x-environment-state
                    exact_match: prod
                route:
                  cluster: webcluster1
                  metadata_match:
                    filter_metadata:
                      envoy.lb:
                        type: "bigmem"
                        stage: "prod"
              - match:
                  prefix: "/"
                  headers:
                  - name: x-hardware-test
                    exact_match: std
                  - name: x-environment-state
                    exact_match: prod
                route:
                  cluster: webcluster1
                  metadata_match:
                    filter_metadata:
                      envoy.lb:
                        type: "std"
                        stage: "prod"
              - match:
                  prefix: "/"
                  headers:
                  - name: x-hardware-test
                    exact_match: std
                  - name: x-environment-state
                    exact_match: dev
                route:
                  cluster: webcluster1
                  metadata_match:
                    filter_metadata:
                      envoy.lb:
                        type: "std"
                        stage: "dev"
              - match:
                  prefix: "/"
                  headers:
                  - name: x-custom-version
                    exact_match: v1.0
                route:
                  cluster: webcluster1
                  metadata_match:
                    filter_metadata:
                      envoy.lb:
                        version: "1.0"
              - match:
                  prefix: "/"
                  headers:
                  - name: x-custom-version
                    exact_match: v1.1
                route:
                  cluster: webcluster1
                  metadata_match:
                    filter_metadata:
                      envoy.lb:
                        version: "1.1"
              # routes are matched in order, so the more specific two-header match must come before the generic pre-release match
              - match:
                  prefix: "/"
                  headers:
                  - name: x-custom-xlarge
                    exact_match: isTrue
                  - name: x-custom-version
                    exact_match: pre-release
                route:
                  cluster: webcluster1
                  metadata_match:
                    filter_metadata:
                      envoy.lb:
                        version: "1.0"
                        xlarge: "true"
              - match:
                  prefix: "/"
                  headers:
                  - name: x-custom-version
                    exact_match: pre-release
                route:
                  cluster: webcluster1
                  metadata_match:
                    filter_metadata:
                      envoy.lb:
                        version: "1.2-pre"
              - match:
                  prefix: "/"
                route:
                  weighted_clusters:
                    clusters:
                    - name: webcluster1
                      weight: 90
                      metadata_match:
                        filter_metadata:
                          envoy.lb:
                            version: "1.0"
                    - name: webcluster1
                      weight: 10
                      metadata_match:
                        filter_metadata:
                          envoy.lb:
                            version: "1.1"
                  metadata_match:
                    filter_metadata:
                      envoy.lb:
                        stage: "prod"
          http_filters:
          - name: envoy.filters.http.router
# start
docker-compose up

# verify
./test.sh 172.31.33.2 # send 200 requests; given the 90/10 weighted_clusters split, the ratio should be roughly 9:1
Requests: v1.0:v1.1 = 184:16
curl -H "x-custom-version: v1.0" -H "x-environment-state: prod" 172.31.33.2/hostname # 調用1.0,prod子集
ServerName: e1
ServerName: e2
ServerName: e5
curl -H "x-custom-version: v1.1" -H "x-environment-state: prod" 172.31.33.2/hostname # 調用1.1,prod子集
curl -H "x-custom-version: pre-release" -H "x-environment-state: dev" 172.31.33.2/hostname # 調用1.2-pre,dev子集
ServerName: e7
curl -H "x-environment-state: prod" -H "x-hardware-test: memory" 172.31.33.2/hostname # 調用prod,bigmem子集
ServerName: e5
ServerName: e6
curl -H "x-environment-state: prod" -H "x-hardware-test: std" 172.31.33.2/hostname # 調用prod,std子集
ServerName: e1
ServerName: e2
ServerName: e3
ServerName: e4
curl -H "x-environment-state: dev" -H "x-hardware-test: std" 172.31.33.2/hostname # 調用dev,std子集
ServerName: e7
curl -H "x-custom-version: v1.0"  172.31.33.2/hostname # 調用1.0子集
ServerName: e1
ServerName: e2
ServerName: e5
curl -H "x-custom-version: v1.1"  172.31.33.2/hostname # 調用1.1子集
ServerName: e3
ServerName: e4
ServerName: e6
curl -H "x-custom-version: pre-release"  172.31.33.2/hostname # 調用1.2-pre子集
ServerName: e7
curl -H "x-custom-version: pre-release" -H "x-custom-xlarge: isTrue" 172.31.33.2/hostname # 調用true,1.0子集
ServerName: e1
curl 172.31.33.2/hostname # hits the default subset
ServerName: e1
ServerName: e2
  • circuit-breaker
cd servicemesh_in_practise/Cluster-Manager/circuit-breaker
# edit the docker-compose file
vim docker-compose.yaml
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment: # add this environment variable
    - ENVOY_UID=0
# start
docker-compose up

# verify
./send-requests.sh http://172.31.35.2 300 # send 300 requests to the first cluster; once max connections is exceeded, 503 responses start to appear
./send-requests.sh http://172.31.35.2/livez 300 # send 300 requests to the second cluster; same effect
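The 503s are produced by the cluster's circuit-breaker thresholds rather than by the upstream itself; the stanza has this shape (the threshold values here are assumptions):

    circuit_breakers:
      thresholds:
      - priority: DEFAULT
        max_connections: 1 # excess concurrent connections are rejected with 503
        max_pending_requests: 1
        max_retries: 3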

Simulate a failure and verify:

curl -XPOST -d 'livez=FAIL' 172.31.35.14/livez
./send-requests.sh http://172.31.35.2/livez 300 # requests now also trip the outlier_detection settings: the logs show that after three consecutive 506 errors the failing host is ejected from the cluster, so even more 503s appear