Knative Serving in Practice

This article grew out of a requirement to "activate multiple replicas of a business application on demand to run batch-processing jobs, with each replica exiting automatically once its work is done." To address it, I recently studied and tested the open-source serverless project Knative. This post implements the solution with the Knative Serving module; a follow-up post will cover a Knative Eventing implementation.

Main requirement scenarios:

  • A single application instance handles one request (a long-running task)
  • Multiple application instances can be started on demand
  • Each application instance exits automatically after processing

Environment for this example: Kubernetes 1.22 + Knative 1.5

1. Creating the "business application" image

Sample code that simulates the business application. Note the WAITSECOND environment variable, which is used to simulate a long-running task.

// hellotest: sample code in Go
package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
    "time"
)

func handler(w http.ResponseWriter, r *http.Request) {
    log.Print("helloworld: received a request")
    target := os.Getenv("TARGET")
    if target == "" {
        target = "World"
    }

    // WAITSECOND defines the simulated processing time, e.g. "30s"
    waitSec := os.Getenv("WAITSECOND")
    if waitSec != "" {
        ws, err := time.ParseDuration(waitSec)
        if err != nil {
            fmt.Fprintf(w, "please specify a valid WAITSECOND value: %s\n", err)
            return
        }
        time.Sleep(ws)
        fmt.Fprintf(w, "Hello %s! wait time %s\n", target, waitSec)
        return
    }

    fmt.Fprintf(w, "Hello %s!\n", target)
}

func main() {
    log.Print("helloworld: starting server...")

    http.HandleFunc("/", handler)

    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }

    log.Printf("helloworld: listening on port %s", port)
    log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}

Compile the code, build it into an image (hellotest), and push it to the image registry.

FROM alpine:latest
LABEL maintainer="SA <itservice@xxxxx.com>"

RUN apk add --no-cache tzdata curl && \
    mkdir -p /var/log/ /lib64 && \
    ln -s /lib/libc.musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2
ENV TZ="Asia/Shanghai" PATH=$PATH:/bin/hellotest
WORKDIR /bin
COPY hellotest /bin/
EXPOSE 8080
RUN chmod +x /bin/hellotest
CMD ["/bin/hellotest"]
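Since the base image is alpine (musl libc), the binary should be built statically so it runs without glibc. A typical build-and-push sequence might look like the following (the registry path and tag follow the ones used later in this article; adjust them to your environment):

```shell
# Build a statically linked Linux binary so it runs on alpine
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o hellotest .

# Build the image and push it to the private registry
docker build -t registry-dev.xxxxxx.com/ops/hellotest:1.0 .
docker push registry-dev.xxxxxx.com/ops/hellotest:1.0
```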

2. Creating a Knative Service instance

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: ksvr-samples
  namespace: default
spec:
  template:
    spec:
      imagePullSecrets:
      - name: habor-registry-secret
      containerConcurrency: 1
      containers:
      - image: registry-dev.xxxxxx.com/ops/hellotest:1.0       # private image registry
        env:
          - name: TARGET
            value: "Sample V1"
          - name: WAITSECOND
            value: "60s"  

3. Testing and verification

  • Check the ksvc status and the generated domain name
[k8s/]# kubectl get rt
NAME            URL                                        READY   REASON   
ksvr-samples    http://ksvr-samples.default.example.com    True    

[k8s /]# kubectl get ksvc
NAME            URL                                        LATESTCREATED         LATESTREADY           READY   REASON
ksvr-samples    http://ksvr-samples.default.example.com    ksvr-samples-00002    ksvr-samples-00002    True    
  • Send two HTTP requests to the sample application from inside the cluster (from another pod)
# http://kourier-internal.kourier-system.svc.cluster.local is the ingress address
# jenkins-sa-84cf764768-hjb89 is another application pod, used here only to issue the requests

kubectl exec -it jenkins-sa-84cf764768-hjb89 -- curl -H "Host: ksvr-samples.default.example.com" http://kourier-internal.kourier-system.svc.cluster.local

  • Check the scale-out status of the sample pods (2+1)
[k8s/]# kubectl get pod
NAME                                             READY   STATUS              RESTARTS     AGE
ksvr-samples-00002-deployment-6f6dc75579-grgbh   0/2     ContainerCreating   0          11s
ksvr-samples-00002-deployment-6f6dc75579-wkcpd   2/2     Running             0          25s
ksvr-samples-00002-deployment-6f6dc75579-nsqd5   2/2     Running             0          25s
  • Watch the pods scale back down when the service is idle

[k8s /]# kubectl get pod -w
ksvr-samples-00002-deployment-6f6dc75579-grgbh   2/2     Terminating    0            84s
ksvr-samples-00002-deployment-6f6dc75579-wkcpd   2/2     Terminating    0            96s
ksvr-samples-00002-deployment-6f6dc75579-grgbh   0/2     Terminating    0            116s
ksvr-samples-00002-deployment-6f6dc75579-grgbh   0/2     Terminating    0            116s
ksvr-samples-00002-deployment-6f6dc75579-grgbh   0/2     Terminating    0            116s
ksvr-samples-00002-deployment-6f6dc75579-nsqd5   2/2     Terminating    0            2m8s
ksvr-samples-00002-deployment-6f6dc75579-wkcpd   0/2     Terminating    0            2m8s
ksvr-samples-00002-deployment-6f6dc75579-wkcpd   0/2     Terminating    0            2m8s
ksvr-samples-00002-deployment-6f6dc75579-wkcpd   0/2     Terminating    0            2m8s
ksvr-samples-00002-deployment-6f6dc75579-nsqd5   0/2     Terminating    0            2m40s
ksvr-samples-00002-deployment-6f6dc75579-nsqd5   0/2     Terminating    0            2m40s
ksvr-samples-00002-deployment-6f6dc75579-nsqd5   0/2     Terminating    0            2m40s

4. Other notes

By default, Serving scales pods down to zero when there are no requests. In the zero-pod state the activator takes over, so the first new request incurs a cold-start delay while an application pod is initialized. If you need to keep at least one replica to avoid this delay, you can change Knative Serving's default behavior through the global configuration.
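Alternatively, instead of changing the global default, a single service can keep a warm replica via the per-revision min-scale annotation, which applies only to that Service rather than cluster-wide. A minimal sketch on the sample service:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: ksvr-samples
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"   # keep at least one pod warm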

Disable automatic scale-to-zero; once disabled, the default minimum is 1 replica.


kubectl edit configmaps -n knative-serving config-autoscaler

# scale-to-zero switch; defaults to true, change it to false
enable-scale-to-zero: "false" 

The complete global autoscaler ConfigMap:

[k8s /]# cat knative/config-autoscaler.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-autoscaler
  namespace: knative-serving
data:
   container-concurrency-target-percentage: "70"
   container-concurrency-target-default: "100"
   requests-per-second-target-default: "200"
   target-burst-capacity: "211"
   stable-window: "60s"
   panic-window-percentage: "10.0"
   panic-threshold-percentage: "200.0"
   max-scale-up-rate: "1000.0"
   max-scale-down-rate: "2.0"
   enable-scale-to-zero: "false"                           # disable scale to zero
   scale-to-zero-grace-period: "30s"
   scale-to-zero-pod-retention-period: "0s"
   pod-autoscaler-class: "kpa.autoscaling.knative.dev"
   activator-capacity: "100.0"
   initial-scale: "1"
   allow-zero-initial-scale: "false"
   min-scale: "0"
   max-scale: "0"
   scale-down-delay: "0s"
   max-scale-limit: "0" 

Summary:

This exercise used HTTP requests to trigger scale-out of the application. The key point is that the Knative Service sets the per-pod request concurrency to 1 (containerConcurrency: 1), so each pod handles a single request and multiple concurrent requests create multiple pod replicas. In practice, scaling is also commonly triggered by events from sources such as MQ, Kafka, or GitHub; a follow-up post will cover the Eventing module.

~~~ FINISH ~~~
