1. Operating System Preparation
This lab uses two CentOS 7 hosts. The Master node must have at least 2 CPU cores; the remaining Node hosts only need 1 core.
Tips: a word of advice: don't use Red Hat as your own test environment unless you have to; dealing with YUM on it is genuinely exhausting.
2. Deployment Architecture
Hostname | IP | Role | Installed Services
---|---|---|---
master | 192.168.199.20 | Master | etcd, kube-apiserver, kube-scheduler, kube-controller-manager
node1 | 192.168.199.21 | Node | kube-proxy, kubelet, docker
3. Set Hostnames and Prepare the Environment
(1) Perform the following steps on both hosts
- Edit /etc/hostname and set the hostname to master and node1 respectively, then edit /etc/hosts and add the following entries (a quick resolution check follows the entries):
192.168.199.21 node1
192.168.199.20 master
192.168.199.20 etcd
192.168.199.20 registry
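An optional sanity check that the new entries resolve locally; getent reads straight from /etc/hosts:
[root@master ~]# getent hosts node1
[root@master ~]# getent hosts etcd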
- Disable the firewall
[root@master yum.repos.d]# systemctl stop firewalld
[root@master yum.repos.d]# systemctl disable firewalld
- Disable swap
When memory runs low, Linux automatically pushes part of it out to swap on disk, which degrades performance; for performance reasons it is recommended to turn it off (see the note below on making this permanent).
[root@master yum.repos.d]# swapoff -a
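Note that swapoff -a only lasts until the next reboot. To keep swap off permanently you can also comment out the swap entry in /etc/fstab; a minimal sketch (the exact line in your fstab may differ):
[root@master yum.repos.d]# sed -i '/\sswap\s/ s/^/#/' /etc/fstab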
- Disable SELinux
vi /etc/selinux/config and set SELINUX to disabled (a non-interactive equivalent is sketched after the file listing):
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
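If you prefer not to edit the file by hand, roughly the same change can be made non-interactively; setenforce 0 additionally switches the running system to permissive mode right away, while the reboot below makes the disabled setting take full effect:
[root@master yum.repos.d]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
[root@master yum.repos.d]# setenforce 0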
- Configure the yum repositories; here we use Alibaba's CentOS 7 mirror and Alibaba's Kubernetes repository
Tips: before changing anything, you can back up the existing files under /etc/yum.repos.d/ into a bak directory.
[root@master ~]# cd /etc/yum.repos.d/ && curl -O http://mirrors.aliyun.com/repo/Centos-7.repo
[root@master yum.repos.d]# vi kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@master yum.repos.d]# yum clean all
[root@master yum.repos.d]# yum makecache
- Reboot the operating system so that the hostname and SELinux changes take effect:
reboot
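Optionally, after the reboot you can confirm that both the CentOS mirror and the Kubernetes repo are active before moving on to etcd:
[root@master ~]# yum repolist enabled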
4. Deploy etcd
Both Kubernetes and Flannel depend on the etcd service, so etcd has to be installed first. Install it directly with yum:
[root@master yum.repos.d]# yum -y install etcd
[root@master yum.repos.d]# etcdctl --version
etcdctl version: 3.3.11
API version: 2
[root@master yum.repos.d]#
The etcd installed by yum keeps its default configuration file at /etc/etcd/etcd.conf. Edit it and change the following parameters:
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#[Clustering]
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
Option descriptions:
--listen-peer-urls: the URLs etcd listens on for peer traffic between cluster members. The default port is 7001; since this is a single-node setup the parameter can be left out. Note that in v2 the default moved to 2380, although 7001 still works.
--listen-client-urls: the URLs clients use to reach the etcd API. The default port is 4001; in v2 it moved to 2379. For Kubernetes we will use port 4001.
--data-dir: the directory where etcd stores its data.
--advertise-client-urls: the client URLs this member advertises to the rest of the cluster. If this parameter is missing, the error shown below appears.
(error screenshot)
Start etcd and verify its status
[root@master ~]# systemctl start etcd
[root@master ~]# etcdctl set testdir/testkey0 0
0
[root@master ~]# etcdctl get testdir/testkey0
0
[root@master ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
[root@master ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
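The etcdctl shipped with this package defaults to the v2 API, as the version output above shows. If you would rather check health through the v3 API, something along these lines should also work:
[root@master ~]# ETCDCTL_API=3 etcdctl --endpoints=http://etcd:2379 endpoint health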
5. Deploy the Master
5.1 Deploy Docker
Install Docker with yum and, once it is installed, modify /etc/sysconfig/docker, adding --insecure-registry to OPTIONS so that images can be pulled from the local registry
[root@master ~]# yum install docker
[root@master ~]# vim /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
[root@master ~]#
Start the service and enable it at boot
[root@master yum.repos.d]# systemctl start docker
[root@master yum.repos.d]# systemctl enable docker
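To confirm that the daemon picked up the insecure-registry option, you can inspect the running daemon and look for registry:5000 in the insecure registries section:
[root@master yum.repos.d]# docker info | grep -A 2 -i 'insecure'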
5.2 Install Kubernetes
Install it with yum
[root@master ~]# yum install kubernetes
5.3 Configure and Start Kubernetes
The Master has to run the Kubernetes API Server, Controller Manager, and Scheduler, so the corresponding service configurations need to be modified
5.3.1 vi /etc/kubernetes/apiserver
[root@master ~]# vim /etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
5.3.2 vi /etc/kubernetes/config
[root@master ~]# vim /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://master:8080"
Start the services and enable them at boot
[root@master kubernetes]# vim start_services.sh
#!/bin/bash
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
[root@master kubernetes]# bash start_services.sh
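Once the services are up, a quick sanity check from the master (kubectl talks to the local apiserver on port 8080 as configured above); scheduler, controller-manager, and etcd should all report Healthy:
[root@master kubernetes]# kubectl cluster-info
[root@master kubernetes]# kubectl get componentstatuses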
6. Deploy Node1
6.1 Deploy Docker
See 5.1.
6.2 Deploy Kubernetes
See 5.2.
6.3 Configure and Start Kubernetes
Node1 has to run the Kubelet and the Kubernetes Proxy, so the corresponding service configurations need to be modified
6.3.1 vim /etc/kubernetes/config
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://master:8080"
6.3.2 vim /etc/kubernetes/kubelet
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://master:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
Start the services and enable them at boot
[root@node1 kubernetes]# vim start_services.sh
#!/bin/bash
for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
[root@node1 kubernetes]# bash start_services.sh
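Back on the master, the node should register itself shortly after kubelet starts; node1 should show up in Ready state after a short delay:
[root@master ~]# kubectl get nodes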
7. Create the Overlay Network: Flannel
7.1 Install Flannel
Run the following command on both master and node to install it
[root@master ~]# yum install flannel
7.2 Configure Flannel
Edit /etc/sysconfig/flanneld on both master and node and adjust the relevant settings
[root@master ~]# vi /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
7.3 Configure the Flannel Key in etcd
Flannel stores its configuration in etcd so that all Flannel instances stay consistent, so the following configuration has to be written into etcd. (The key '/atomic.io/network/config' corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if the two do not match, flanneld will fail to start.)
[root@master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{ "Network": "10.0.0.0/16" }
7.4 Start
After starting Flannel, Docker and the Kubernetes services have to be restarted in turn; a quick check of the flannel state follows the command lists below.
- On master, run:
systemctl enable flanneld.service
systemctl start flanneld.service
systemctl restart docker
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
- On node, run:
systemctl enable flanneld.service
systemctl start flanneld.service
systemctl restart docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service
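After flanneld and docker have been restarted, each host should have a flannel interface (flannel0 with the default udp backend) and docker0 should sit inside the 10.0.0.0/16 range; the assigned subnet is also written to a subnet.env file (path may vary slightly between packages):
[root@node1 ~]# cat /run/flannel/subnet.env
[root@node1 ~]# ip addr show flannel0
[root@node1 ~]# ip addr show docker0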
Tips: problems encountered along the way
Q1. When first deploying on Red Hat, yum kept prompting about registration, and even after deleting the repo files under /etc/yum.repos.d/, running yum again would regenerate the official repo files.
After some digging this turned out to be caused by the subscription-manager plugin that ships with Red Hat. The plugin is the Red Hat Subscription Manager, and it is what keeps asking you to register. It can be disabled through its configuration file at /etc/yum/pluginconf.d/subscription-manager.conf.
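For reference, disabling it only takes one line in that file (the stock file ships with enabled=1):
[main]
enabled=0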
Q2. etcd refused to start. Checking the logs with journalctl -xe showed the error "When listening on specific address(es)".
After troubleshooting, it turned out the ETCD_ADVERTISE_CLIENT_URLS parameter in /etc/etcd/etcd.conf had not been configured correctly. See section 4, Deploy etcd.