Preface
- Hardware: a VirtualBox virtual machine
- OS: Ubuntu 18.04
- K8s cluster: a single-node all-in-one environment
1. Setting up the NFS server
# Install the NFS packages
apt-get update
apt-get install nfs-common nfs-kernel-server -y
# Create the shared directory and set its permissions
mkdir -p /data/k8s
chmod -R 777 /data/k8s
# Configure the export
cat << EOF >> /etc/exports
/data/k8s *(rw,sync,no_root_squash)
EOF
# Enable nfs-server at boot and start the service
systemctl enable nfs-kernel-server
systemctl start nfs-kernel-server
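For reference, the heredoc append used above can be exercised against a scratch file before touching the real /etc/exports. The export options mean: rw (read-write), sync (commit writes to disk before acknowledging them), and no_root_squash (root on an NFS client keeps root privileges on the share, which the provisioner needs in order to create directories):

```shell
# Append the export line to a scratch file using the same heredoc pattern.
# On the NFS server the target would be /etc/exports, followed by
# `exportfs -ra` to reload the exports.
EXPORTS_FILE=$(mktemp)
cat << EOF >> "$EXPORTS_FILE"
/data/k8s *(rw,sync,no_root_squash)
EOF
cat "$EXPORTS_FILE"
```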
2. Installing the NFS storage provisioner plugin for K8s
The Harbor installation below uses NFS as its backend storage, so the corresponding provisioner plugin, nfs-client-provisioner, must first be installed in the K8s cluster; Helm is used for the installation here. The chart creates a StorageClass (named nfs-client by default) that the Harbor configuration below references.
# Add the helm repository hosting the chart
helm repo add github https://kubernetes-charts.storage.googleapis.com/
# Install nfs-client-provisioner, pointing it at the NFS server set up above
helm install github/nfs-client-provisioner --set nfs.server=10.0.2.15 --set nfs.path=/data/k8s
# Confirm the installation
helm list
kubectl get pod | grep nfs-client-provisioner
3. Installing Harbor with Helm
3.1 Download the Harbor Helm chart
# Version 1.0.0 of the chart is used here
git clone -b 1.0.0 https://github.com/goharbor/harbor-helm.git
3.2 Edit values.yaml
First, enter the harbor-helm directory and edit the values.yaml file. The changes involve two areas:
- the type used to expose the service: nodePort is chosen here;
- the persistence settings: PVs are created dynamically, with NFS as the backend storage.
expose:
  # Set the way how to expose the service. Set the type as "ingress",
  # "clusterIP" or "nodePort" and fill the information in the corresponding
  # section
  type: nodePort
  tls:
    # Enable the tls or not. Note: if the type is "ingress" and the tls
    # is disabled, the port must be included in the command when pull/push
    # images. Refer to https://github.com/goharbor/harbor/issues/5291
    # for the detail.
    enabled: false
    # Fill the name of secret if you want to use your own TLS certificate
    # and private key. The secret must contain keys named tls.crt and
    # tls.key that contain the certificate and private key to use for TLS
    # The certificate and private key will be generated automatically if
    # it is not set
    secretName: ""
    # By default, the Notary service will use the same cert and key as
    # described above. Fill the name of secret if you want to use a
    # separated one. Only needed when the type is "ingress".
    notarySecretName: ""
    # The common name used to generate the certificate, it's necessary
    # when the type is "clusterIP" or "nodePort" and "secretName" is null
    commonName: ""
...
# The external URL for Harbor core service. It is used to
# 1) populate the docker/helm commands showed on portal
# 2) populate the token service URL returned to docker/notary client
#
# Format: protocol://domain[:port]. Usually:
# 1) if "expose.type" is "ingress", the "domain" should be
# the value of "expose.ingress.hosts.core"
# 2) if "expose.type" is "clusterIP", the "domain" should be
# the value of "expose.clusterIP.name"
# 3) if "expose.type" is "nodePort", the "domain" should be
# the IP address of k8s node
#
# If Harbor is deployed behind the proxy, set it as the URL of proxy
externalURL: http://10.0.2.15:30002
...
# The persistence is enabled by default and a default StorageClass
# is needed in the k8s cluster to provision volumes dynamically.
# Specify another StorageClass in the "storageClass" or set "existingClaim"
# if you have already existing persistent volumes to use
#
# For storing images and charts, you can also use "azure", "gcs", "s3",
# "swift" or "oss". Set it in the "imageChartStorage" section
persistence:
  enabled: true
  # Setting it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart deleted
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use the existing PVC which must be created manually before bound
      existingClaim: ""
      # Specify the "storageClass" used to provision the volume. Or the default
      # StorageClass will be used(the default).
      # Set it to "-" to disable dynamic provisioning
      storageClass: "nfs-client"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    chartmuseum:
      existingClaim: ""
      storageClass: "nfs-client"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: ""
      storageClass: "nfs-client"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    # If external database is used, the following settings for database will
    # be ignored
    database:
      existingClaim: ""
      storageClass: "nfs-client"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    # If external Redis is used, the following settings for Redis will
    # be ignored
    redis:
      existingClaim: ""
      storageClass: "nfs-client"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
A few notes: when setting externalURL, the protocol (http, https, etc.) must be included; the IP address is the node's IP; the port is optional and defaults to 30002. For the persistence settings, only the storageClass name needs to be filled in; the actual StorageClass was already created when the nfs-client-provisioner plugin was installed, and its name can be looked up with:
kubectl get sc
3.3 Install Harbor
Before installing, it is worth checking that the configuration and templates are valid:
cd harbor-helm
helm install --dry-run --debug .
If the dry run looks good, install Harbor (Helm 2 syntax):
helm install --name my-harbor .
This takes a while, since the Docker images have to be pulled; the creation of the Harbor pods can be monitored with watch "kubectl get pod". Output like the following indicates a successful installation:
root@vinefu-dev:~/harbor-helm# kubectl get pod | grep harbor
my-harbor-harbor-adminserver-dd6c47cd4-h7lvg 1/1 Running 0 3h35m
my-harbor-harbor-chartmuseum-78dd845fbd-dmvl6 1/1 Running 0 3h35m
my-harbor-harbor-clair-85cbd7d6bd-8p8kg 1/1 Running 2 3h35m
my-harbor-harbor-core-566d4655f5-dbntl 1/1 Running 0 3h35m
my-harbor-harbor-database-0 1/1 Running 0 3h35m
my-harbor-harbor-jobservice-ffb4b5859-6zh66 1/1 Running 1 3h35m
my-harbor-harbor-nginx-6cf87f6787-pwlnx 1/1 Running 0 3h35m
my-harbor-harbor-notary-server-65749ddc8d-rsrwz 1/1 Running 0 3h35m
my-harbor-harbor-notary-signer-676b895677-z6ncj 1/1 Running 0 3h35m
my-harbor-harbor-portal-69f6f6cd76-dm6wf 1/1 Running 0 3h35m
my-harbor-harbor-redis-0 1/1 Running 0 3h35m
my-harbor-harbor-registry-58f4d56b5c-v2xwh 2/2 Running 0 3h35m
root@vinefu-dev:~/harbor-helm#
root@vinefu-dev:~/harbor-helm# kubectl get svc | grep harbor
harbor NodePort 10.99.15.249 <none> 80:30002/TCP,4443:30004/TCP 3h39m
my-harbor-harbor-adminserver ClusterIP 10.103.68.151 <none> 80/TCP 3h39m
my-harbor-harbor-chartmuseum ClusterIP 10.102.192.58 <none> 80/TCP 3h39m
my-harbor-harbor-clair ClusterIP 10.100.0.197 <none> 6060/TCP 3h39m
my-harbor-harbor-core ClusterIP 10.101.122.30 <none> 80/TCP 3h39m
my-harbor-harbor-database ClusterIP 10.101.75.97 <none> 5432/TCP 3h39m
my-harbor-harbor-jobservice ClusterIP 10.99.153.78 <none> 80/TCP 3h39m
my-harbor-harbor-notary-server ClusterIP 10.108.250.157 <none> 4443/TCP 3h39m
my-harbor-harbor-notary-signer ClusterIP 10.110.38.62 <none> 7899/TCP 3h39m
my-harbor-harbor-portal ClusterIP 10.110.83.54 <none> 80/TCP 3h39m
my-harbor-harbor-redis ClusterIP 10.107.51.162 <none> 6379/TCP 3h39m
my-harbor-harbor-registry ClusterIP 10.99.48.247 <none> 5000/TCP,8080/TCP 3h39m
root@vinefu-dev:~/harbor-helm#
The Harbor UI can now be reached in a browser at http://10.0.2.15:30002; the default username and password are admin and Harbor12345. Since I configured NAT port forwarding in the VirtualBox network settings, I can also access it from my local browser at 127.0.0.1:30002.
4. Basic usage of Harbor
4.1 Pushing a Docker image
As you may have noticed, I set tls.enabled=false when editing values.yaml, so no certificate is needed when pushing images. The Docker daemon configuration still has to be updated, though: Docker talks to registries over HTTPS by default, so this registry must be allowed as an insecure one. After adding that setting, restart Docker:
systemctl restart docker
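The daemon setting in question is the insecure-registries list. A minimal sketch of /etc/docker/daemon.json, assuming the registry address used in this walkthrough (merge this key into any existing daemon.json rather than overwriting the file):

```json
{
  "insecure-registries": ["10.0.2.15:30002"]
}
```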
Before the first push, log in to the Harbor registry:
docker login 10.0.2.15:30002
# Enter the username and password at the prompts: admin and Harbor12345
Then go back to the Harbor UI and create a new project, say vienfu-test-repo, with its access level set to public. Images can now be pushed from the command line:
# Push a local docker image, busybox in this example
docker tag busybox 10.0.2.15:30002/vienfu-test-repo/busybox
docker push 10.0.2.15:30002/vienfu-test-repo/busybox
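Note that image references carry no URL scheme; they are simply host:port/project/name[:tag]. A small illustration of the reference format, using the registry, project, and image names from this walkthrough:

```shell
# Build a registry image reference: <host:port>/<project>/<image>[:tag]
REGISTRY="10.0.2.15:30002"
PROJECT="vienfu-test-repo"
IMAGE="busybox"
TAG="latest"
REF="${REGISTRY}/${PROJECT}/${IMAGE}:${TAG}"
echo "$REF"   # 10.0.2.15:30002/vienfu-test-repo/busybox:latest
```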
Open the vienfu-test-repo project in the Harbor UI to see the image that was just pushed.
4.2 Uploading a Helm chart
The Harbor UI itself provides a way to upload Helm charts, but working through the UI is not always convenient for developers or operators, so here is a way to upload a chart from the CLI. First, install the Helm push plugin:
helm plugin install https://github.com/chartmuseum/helm-push
Then the chart can be uploaded:
# Add the project's chart repository (the URL pattern is http://<harbor-address>/chartrepo/<project>)
helm repo add my-repo http://10.0.2.15:30002/chartrepo/vienfu-test-repo
helm repo list
# Push a previously packaged helm chart to Harbor
helm push mychart-0.1.0.tgz my-repo --username admin --password Harbor12345