APISIX in Practice on Kubernetes

1. Reference Documentation

# 1. Helm charts for installing apisix, apisix-dashboard, and apisix-ingress-controller
https://github.com/apache/apisix-helm-chart
# 2. Errors encountered during installation can be searched for in the issues below
https://github.com/apache/apisix-helm-chart/issues
# 3. apisix-ingress-controller documentation and usage
https://apisix.apache.org/zh/docs/ingress-controller/practices/the-hard-way/
# 4. Canary release documentation
https://apisix.apache.org/zh/docs/ingress-controller/concepts/apisix_route

2. Environment

IP              Notes
192.168.13.12   k8s-master-01
192.168.13.211  k8s-node-01
192.168.13.58   k8s-node-02, NFS server

Helm must be installed beforehand.

3. Installation

3.1 StorageClass installation and configuration

3.1.1 On the NFS node

yum install rpcbind nfs-utils -y
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
mkdir /nfs/data
[root@k8s-node-02 ~]# cat /etc/exports
/nfs/data/ *(insecure,rw,sync,no_root_squash)
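
To make sure the share is actually exported before wiring it into Kubernetes, a quick check can help (showmount ships with nfs-utils):

# on the NFS node: re-export and list what is being served
exportfs -rav
# from any k8s node: confirm the share is visible over the network
showmount -e 192.168.13.58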

3.1.2 nfs-subdir-external-provisioner

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.13.58 --set nfs.path=/nfs/data

3.1.3 StorageClass

[root@k8s-master-01 apisix]# cat storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  namespace: default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # set as the default StorageClass
provisioner: cluster.local/nfs-subdir-external-provisioner
parameters:
  server: 192.168.13.58
  path: /nfs/data
  readOnly: "false"
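
After kubectl apply -f storageclass.yaml, it may be worth confirming that the StorageClass can actually provision a volume before installing APISIX. A minimal throwaway claim (the name test-pvc is purely illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

kubectl apply -f test-pvc.yaml, then kubectl get pvc test-pvc should show Bound within a few seconds; kubectl delete pvc test-pvc cleans it up afterwards.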

3.2 APISIX installation and configuration

Note: the chart needs a few modifications.

helm pull apisix/apisix
tar -xf apisix-0.9.3.tgz
# Make the following changes in apisix/values.yaml (additions only):
ingress-controller:                            ## enable the ingress-controller
  enabled: true
storageClass: "nfs-storage"                    ## use the nfs-storage StorageClass created above
accessMode:
  - ReadWriteOnce
helm package apisix
helm install apisix apisix-0.9.3.tgz --create-namespace --namespace apisix
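
Once the release is installed, a quick way to watch the rollout (pod name hashes will differ in your cluster):

kubectl get pods -n apisix -o wide      # apisix, etcd and apisix-ingress-controller pods
kubectl get svc  -n apisix              # apisix-admin (9180) and apisix-gateway (NodePort) services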

After the installation finishes, the pod apisix-ingress-controller-6697f4849d-wdrr5 stays stuck in the Init state. You can search https://github.com/apache/apisix-helm-chart/issues (for example https://github.com/apache/apisix-helm-chart/pull/284), or inspect the pod logs to troubleshoot. The cause:

apisix-ingress-controller watches the ApisixRoute CRD resources through the Kubernetes API server and connects to APISIX via the apisix-admin Service on port 9180; APISIX then writes the rules into etcd. The logs, however, show the controller trying to reach apisix-admin.ingress-apisix.svc.cluster.local:9180, while the Service and Pods are actually deployed in the apisix namespace. Two places therefore need to be changed to apisix-admin.apisix.svc.cluster.local:9180:
kubectl edit deployment apisix-ingress-controller -n apisix
kubectl edit configmap apisix-configmap -n apisix
Then delete the apisix-ingress-controller pod so it gets recreated with the new configuration.
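
For reference, the value to look for is the admin API base URL in the controller's config.yaml inside the ConfigMap; a rough sketch of the fragment as it looked in this chart version (key names can differ between controller releases):

# kubectl edit configmap apisix-configmap -n apisix
apisix:
  # was: http://apisix-admin.ingress-apisix.svc.cluster.local:9180/apisix/admin
  default_cluster_base_url: "http://apisix-admin.apisix.svc.cluster.local:9180/apisix/admin"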

Afterwards, the apisix-ingress-controller pod logs often show errors like the following:


[Figure: apisix-ingress-controller error log]

This is usually caused by insufficient ServiceAccount (RBAC) permissions. My working configuration is pasted below; if further permission errors show up in the logs, extend the relevant rules accordingly:

[root@k8s-master-01 apisix]# cat 12-sa.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: apisix-ingress-controller
  namespace: apisix
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: apisix-clusterrole
  namespace: apisix
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - persistentvolumeclaims
      - pods
      - replicationcontrollers
      - replicationcontrollers/scale
      - serviceaccounts
      - services
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - bindings
      - events
      - limitranges
      - namespaces/status
      - pods/log
      - pods/status
      - replicationcontrollers/status
      - resourcequotas
      - resourcequotas/status
    verbs:
      - get
      - list
      - watch
      - create
      - delete
      - update
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - controllerrevisions
      - daemonsets
      - deployments
      - deployments/scale
      - replicasets
      - replicasets/scale
      - statefulsets
      - statefulsets/scale
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - batch
    resources:
      - cronjobs
      - jobs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - daemonsets
      - deployments
      - deployments/scale
      - ingresses
      - networkpolicies
      - replicasets
      - replicasets/scale
      - replicationcontrollers/scale
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apisix.apache.org
    resources:
      - apisixroutes
      - apisixroutes/status
      - apisixupstreams
      - apisixupstreams/status
      - apisixtlses
      - apisixtlses/status
      - apisixclusterconfigs
      - apisixclusterconfigs/status
      - apisixconsumers
      - apisixconsumers/status
      - apisixpluginconfigs
      - apisixpluginconfigs/status

    verbs:
      - get
      - list
      - watch
      - create
      - update
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apisix-clusterrolebinding
  namespace: apisix
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: apisix-clusterrole
subjects:
  - kind: ServiceAccount
    name: apisix-ingress-controller
    namespace: apisix

3.3 Dashboard installation

helm install apisix-dashboard apisix/apisix-dashboard --create-namespace --namespace apisix

3.4 Viewing the resources

Since the ingress-controller was already enabled in 3.2, it is not installed again here the way the tutorial does.


apisix
svc->apisix-admin    9180       pod->apisix: 9180       port for managing routes, streams, consumers, etc.
svc->apisix-gateway  80:30761   pod->apisix: 9080       port for accessing application URLs

3.5 Usage

3.5.1 Create a pod

kubectl run httpbin --image-pull-policy=IfNotPresent --image kennethreitz/httpbin --port 80
kubectl expose pod httpbin --port 80

3.5.2 Create the ApisixRoute resource

[root@k8s-master-01 apisix]# cat ApisixRoute.yaml
apiVersion: apisix.apache.org/v2beta3 
kind: ApisixRoute
metadata:
  name: httpserver-route
spec:
  http:
  - name: httpbin
    match:
      hosts:
      - local.httpbin.org
      paths:
      - /*
    backends:
      - serviceName: httpbin
        servicePort: 80
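
Apply the route and confirm the object is accepted:

kubectl apply -f ApisixRoute.yaml
kubectl get apisixroutes httpserver-route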

Note that the apiVersion used here is "apisix.apache.org/v2beta3". Checking the apisix-ingress-controller logs may show the following error:

Failed to watch *v2beta1.ApisixRoute: failed to list *v2beta1.ApisixRoute: the server could not find the requested resource (get apisixroutes.apisix.apache.org)

In that case, modifying the ConfigMap is enough:


[Figure: apisix-ingress-controller error log]

The value highlighted in the figure was v2beta1 before the change, which does not match the apiVersion we are using and therefore causes the error; change it to v2beta3.
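
In the chart version used here, the relevant fragment of the controller configuration looked roughly like the following (the exact field name may vary between apisix-ingress-controller releases):

# kubectl edit configmap apisix-configmap -n apisix
kubernetes:
  # must match the apiVersion used in the ApisixRoute manifests
  apisix_route_version: "apisix.apache.org/v2beta3"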

3.5.3 Testing

[root@k8s-master-01 apisix]# kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://127.0.0.1:9080/get" -H 'Host: local.httpbin.org'
{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Host": "local.httpbin.org", 
    "User-Agent": "curl/7.79.1", 
    "X-Forwarded-Host": "local.httpbin.org"
  }, 
  "origin": "127.0.0.1", 
  "url": "http://local.httpbin.org/get"
}
[root@k8s-master-01 apisix]# curl http://192.168.13.12:30761/get -H "Host: local.httpbin.org"
{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Host": "local.httpbin.org", 
    "User-Agent": "curl/7.29.0", 
    "X-Forwarded-Host": "local.httpbin.org"
  }, 
  "origin": "20.10.151.128", 
  "url": "http://local.httpbin.org/get"
}

3.6 Canary releases

# Documentation used for this section
https://api7.ai/blog/traffic-split-in-apache-apisix-ingress-controller
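
All of the manifests below live in a dedicated canary namespace, so create it first; the stable and canary Deployments/Services are shown in the next two subsections:

kubectl create namespace canary
kubectl apply -f 1-stable.yaml -f 2-canary.yaml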

3.6.1 Stable version

[root@k8s-master-01 canary]# cat 1-stable.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-stable-service
  namespace: canary
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http-port
  selector:
    app: myapp
    version: stable
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: stable
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/public-registry-fzh/myapp:v1
        imagePullPolicy: IfNotPresent
        name: myapp-stable
        ports:
        - name: http-port
          containerPort: 80
        env:
        - name: APP_ENV
          value: stable

3.6.2 Canary version

[root@k8s-master-01 canary]# cat 2-canary.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-canary-service
  namespace: canary
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http-port
  selector:
    app: myapp
    version: canary
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: canary
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/public-registry-fzh/myapp:v2
        imagePullPolicy: IfNotPresent
        name: myapp-canary
        ports:
        - name: http-port
          containerPort: 80
        env:
        - name: APP_ENV
          value: canary

3.6.3 Weight-based canary release

[root@k8s-master-01 canary]# cat 3-apisixroute-weight.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute
  namespace: canary 
spec:
  http:
  - name: myapp-canary-rule
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
      weight: 10
    - serviceName: myapp-canary-service
      servicePort: 80
      weight: 5
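
A simple way to observe the split, assuming the apisix-gateway NodePort 30761 from section 3.4 and that the v1/v2 images return distinguishable response bodies:

# send 30 requests through the gateway and count the answers per version
for i in $(seq 1 30); do
  curl -s http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"
done | sort | uniq -c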

Test:


[Figure: weight-based canary release test]

With weights 10 and 5, stable and canary receive traffic in a ratio of roughly 2:1.

3.6.4 Priority-based canary release

Traffic is routed by rule priority: the rule with the larger priority value is matched first.

[root@k8s-master-01 canary]# kubectl apply -f priority.yaml
apisixroute.apisix.apache.org/myapp-canary-apisixroute2 created
[root@k8s-master-01 canary]# cat 4-ap.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute2
  namespace: canary
spec:
  http:
  - name: myapp-stable-rule2
    priority: 1
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
  - name: myapp-canary-rule2
    priority: 2
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-canary-service
      servicePort: 80
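
With the higher-priority rule matching the same host and path, every request should now be answered by the canary backend, e.g.:

curl -s http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"
# expected: responses always come from the canary (v2) deployment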

Test:


[Figure: priority-based canary release test]

Traffic goes to myapp-canary-service, because its rule has the higher priority (2 vs 1).

3.6.5 Query-parameter-based canary release

[root@k8s-master-01 canary]# cat vars.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute3
  namespace: canary 
spec:
  http:
  - name: myapp-stable-rule3
    priority: 1
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
  - name: myapp-canary-rule3
    priority: 2
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
      exprs:
      - subject:
          scope: Query
          name: id
        op: In
        set:
        - "3"
        - "13"
        - "23"
        - "33"
    backends:
    - serviceName: myapp-canary-service
      servicePort: 80
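
Requests whose id query parameter is in the configured set match the canary rule; anything else falls through to the stable rule:

# id=3 is in the set -> canary backend
curl -s "http://192.168.13.12:30761/?id=3" -H "Host: myapp.fengzhihai.cn"
# id=4 is not in the set -> stable backend
curl -s "http://192.168.13.12:30761/?id=4" -H "Host: myapp.fengzhihai.cn"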

Test:


[Figure: condition-based (query parameter) canary release test]

Traffic matching the condition goes to myapp-canary-service; everything else goes to myapp-stable-service.

3.6.6 Header-based canary release

[root@k8s-master-01 canary]# cat canary-header.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute3
  namespace: canary 
spec:
  http:
  - name: myapp-stable-rule3
    priority: 1
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
  - name: myapp-canary-rule3
    priority: 2
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
      exprs:
      - subject:
          scope: Header
          name: canary
        op: RegexMatch
        value: ".*myapp.*"
    backends:
    - serviceName: myapp-canary-service
      servicePort: 80
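
Only requests whose canary header matches the regex .*myapp.* hit the canary rule:

# header matches the regex -> canary backend
curl -s http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn" -H "canary: xxx-myapp-xxx"
# no matching header -> stable backend
curl -s http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"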

Test:


[Figure: header-based canary release test]