1. Automated installation with Cobbler + PXE
2. Active/standby high availability with Ansible
1. Automated installation with Cobbler + PXE
1. The httpd service provides the yum repository, and the kickstart file supplies the installation configuration.
2. syslinux is a boot loader package; it provides the pxelinux.0 file.
3. PXE installation
- PXE: Preboot eXecution Environment
- The flow: DHCP first assigns the host an IP address, netmask, gateway, and DNS server, then the host loads the bootloader, kernel, and initrd from a TFTP server; the installer reaches the yum repository (over FTP, HTTP, or NFS), and a kickstart answer file supplies the installation settings, completing the unattended install.
Compared with plain PXE, Cobbler is a tool that automates and simplifies system installation, driving unattended installs over network boot.
It integrates:
PXE support
DHCP service management
DNS service management (optionally bind or dnsmasq)
Power management
Kickstart support
YUM repository management
TFTP (needed for PXE boot)
Apache (serves the kickstart installation source and customized kickstart configurations)
Its building blocks are (the matching cobbler subcommands are shown right after this list):
Distros: represent an operating system; a distro carries the kernel and initrd plus kernel arguments and other data
Profiles: contain a distro, a kickstart file, and possibly repositories, along with more specific kernel arguments and other data
Systems: represent a machine to be provisioned; a system references a profile or an image and adds IP and MAC addresses, power management (address, credentials, type), and settings such as NIC bonding and VLANs
Repositories: hold mirroring information for a yum or rsync repository
Images: can stand in for a distro when the files do not fit that model (for example, something that cannot be split into a kernel and an initrd)
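Each of these objects has a corresponding cobbler subcommand, which we will use later on:
cobbler distro list
cobbler profile list
cobbler system list
cobbler repo list
cobbler image list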
Now for the hands-on part.
Install the required packages:
yum install -y tftp tftp-server dhcp httpd syslinux
Configure the DHCP service (/etc/dhcp/dhcpd.conf):
option domain-name "lvqing.com";
option domain-name-servers 223.5.5.5;
default-lease-time 600;
max-lease-time 7200;
log-facility local7;
subnet 192.168.31.0 netmask 255.255.255.0 {
range 192.168.31.210 192.168.31.220;
option routers 192.168.31.201;
filename "pxelinux.0";
next-server 192.168.31.201;
}
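Before starting dhcpd, the configuration can be syntax-checked (dhcpd exits with an error message if the file is invalid):
dhcpd -t -cf /etc/dhcp/dhcpd.conf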
Start the services:
systemctl start dhcpd
systemctl start tftp
systemctl start httpd
Check the lease status:
cat /var/lib/dhcpd/dhcpd.leases
Next, configure Cobbler.
Install and start the service:
yum install -y cobbler
systemctl start cobblerd
Then run cobbler check.
It reports the following issues:
The following are potential configuration items that you may want to fix:
1 : The 'server' field in /etc/cobbler/settings must be set to something other than localhost, or kickstarting features will not work. This should be a resolvable hostname or IP for the boot server as reachable by all machines that will use it.
2 : For PXE to be functional, the 'next_server' field in /etc/cobbler/settings must be set to something other than 127.0.0.1, and should match the IP of the boot server on the PXE network.
3 : change 'disable' to 'no' in /etc/xinetd.d/tftp
4 : Some network boot-loaders are missing from /var/lib/cobbler/loaders, you may run 'cobbler get-loaders' to download them, or, if you only want to handle x86/x86_64 netbooting, you may ensure that you have installed a *recent* version of the syslinux package installed and can ignore this message entirely. Files in this directory, should you want to support all architectures, should include pxelinux.0, menu.c32, elilo.efi, and yaboot. The 'cobbler get-loaders' command is the easiest way to resolve these requirements.
5 : enable and start rsyncd.service with systemctl
6 : debmirror package is not installed, it will be required to manage debian deployments and repositories
7 : The default password used by the sample templates for newly installed machines (default_password_crypted in /etc/cobbler/settings) is still set to 'cobbler' and should be changed, try: "openssl passwd -1 -salt 'random-phrase-here' 'your-password-here'" to generate new one
8 : fencing tools were not found, and are required to use the (optional) power management features. install cman or fence-agents to use them
Items 1, 2, and 7 all concern the settings file, so fix that first:
[root@node2 ~]# openssl passwd -1 -salt '123456' 'lvqing'
$1$123456$DNZ8F1JeU.5HhsLhVKTPU/
[root@node2 ~]# vim /etc/cobbler/settings
server: 192.168.31.201
next_server: 192.168.31.201
default_password_crypted: "$1$123456$DNZ8F1JeU.5HhsLhVKTPU/"
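A quick way to confirm the three settings are in place:
grep -E '^(server|next_server|default_password_crypted):' /etc/cobbler/settings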
Item 3 asks us to change tftp's startup state in /etc/xinetd.d/tftp (disable = no):
service tftp
{
socket_type = dgram
protocol = udp
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = -s /var/lib/tftpboot
disable = no
per_source = 11
cps = 100 2
flags = IPv4
}
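After editing the snippet, restart xinetd so the change takes effect (this assumes tftp is being served through xinetd rather than the tftp.socket unit started earlier):
systemctl restart xinetd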
Item 4: as the hint says, run 'cobbler get-loaders' to download pxelinux.0, menu.c32, elilo.efi, and yaboot; alternatively, install the syslinux package and copy pxelinux.0, menu.c32, and the other files from /usr/share/syslinux/ into /var/lib/cobbler/loaders. Here we first try simply copying the files from /usr/share/syslinux into the target directory and see whether that resolves it:
cp -ar /usr/share/syslinux/* /var/lib/cobbler/loaders/
Item 5: start the rsync service:
systemctl start rsyncd
systemctl enable rsyncd
Items 6 and 8 are just missing packages; install them:
yum install -y debmirror fence-agents
Then check again:
[root@node2 ~]# systemctl restart cobblerd
[root@node2 ~]# cobbler check
The following are potential configuration items that you may want to fix:
1 : Some network boot-loaders are missing from /var/lib/cobbler/loaders, you may run 'cobbler get-loaders' to download them, or, if you only want to handle x86/x86_64 netbooting, you may ensure that you have installed a *recent* version of the syslinux package installed and can ignore this message entirely. Files in this directory, should you want to support all architectures, should include pxelinux.0, menu.c32, elilo.efi, and yaboot. The 'cobbler get-loaders' command is the easiest way to resolve these requirements.
2 : comment out 'dists' on /etc/debmirror.conf for proper debian support
3 : comment out 'arches' on /etc/debmirror.conf for proper debian support
Item 1 apparently still needs the download, so follow the hint and run:
cobbler get-loaders
For items 2 and 3, comment out the corresponding lines in the file indicated:
vim /etc/debmirror.conf
#@arches="i386";
#@dists="sid";
Finally, restart cobbler and sync:
[root@node2 ~]# systemctl restart cobblerd
[root@node2 ~]# cobbler check
No configuration problems found. All systems go.
[root@node2 ~]# cobbler sync
Mount the ISO image, then import it with the cobbler command:
[root@node2 ~]# mount /dev/cdrom /mnt
mount: /dev/sr0 is write-protected, mounting read-only
cobbler import --name=centos-7.1-x86_64 --path=/mnt
[root@node2 ~]# cobbler distro list
centos-7.1-x86_64
The image is imported into /var/www/cobbler/ks_mirror, so the installation source can later be served over HTTP.
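To confirm what the import produced, inspect the automatically generated profile (its name matches the one passed to 'cobbler import') and the mirror directory:
cobbler profile list
cobbler profile report --name=centos-7.1-x86_64
ls /var/www/cobbler/ks_mirror/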
By default, cobbler generates a kickstart file for a minimal install. To use a custom kickstart profile instead, do the following:
[root@cobbler ~]# cp centos7.cfg /var/lib/cobbler/kickstarts/ # copy the custom kickstart file into the designated directory
[root@cobbler ~]# cobbler profile add --name=centos-7.2-x86_64-custom --distro=centos-7.2-x86_64 --kickstart=/var/lib/cobbler/kickstarts/centos7.cfg # create the custom kickstart profile
[root@cobbler ~]# cobbler profile list
centos-7.1-x86_64
Since no kickstart file was written here, we simply go with the minimal install.
Test result:
Cobbler automatically adds the corresponding menu entries to /var/lib/tftpboot/pxelinux.cfg/default. If you want to change which entry boots by default, edit this file, but note that every 'cobbler sync' resets it back to booting 'local'.
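For reference, after a sync the file contains a 'local' entry plus one entry per profile, roughly like this (illustrative; the exact kernel/initrd paths and kickstart URL depend on the imported distro and the server address):
ONTIMEOUT local
LABEL centos-7.1-x86_64
        kernel /images/centos-7.1-x86_64/vmlinuz
        MENU LABEL centos-7.1-x86_64
        append initrd=/images/centos-7.1-x86_64/initrd.img ks=http://192.168.31.201/cblr/svc/op/ks/profile/centos-7.1-x86_64
        ipappend 2
Changing the ONTIMEOUT line (for example to the profile name) changes the default boot entry.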
Cobbler web management
Cobbler also supports management through a web UI; install the corresponding package:
yum install -y cobbler-web
Next, change cobbler's authentication module to authn_pam (in /etc/cobbler/modules.conf):
[authentication]
module = authn_pam
Then create the cobbler account:
echo "lvqing" | passwd --stdin cbadmin
Changing password for user cbadmin.
passwd: all authentication tokens updated successfully.
In /etc/cobbler/users.conf, designate the cbadmin account as the cobbler-web admin account:
vim /etc/cobbler/users.conf
[admins]
admin = "cbadmin"
Restart the services:
systemctl restart cobblerd
systemctl restart httpd
An error appears.
Check the httpd log:
[root@node2 conf.d]# tail /var/log/httpd/ssl_error_log
[Tue Feb 12 23:44:41.951936 2019] [:error] [pid 8430] [remote 192.168.31.242:59848] self._setup(name)
[Tue Feb 12 23:44:41.951943 2019] [:error] [pid 8430] [remote 192.168.31.242:59848] File "/usr/lib/python2.7/site-packages/django/conf/__init__.py", line 41, in _setup
[Tue Feb 12 23:44:41.951955 2019] [:error] [pid 8430] [remote 192.168.31.242:59848] self._wrapped = Settings(settings_module)
[Tue Feb 12 23:44:41.951962 2019] [:error] [pid 8430] [remote 192.168.31.242:59848] File "/usr/lib/python2.7/site-packages/django/conf/__init__.py", line 110, in __init__
[Tue Feb 12 23:44:41.951991 2019] [:error] [pid 8430] [remote 192.168.31.242:59848] mod = importlib.import_module(self.SETTINGS_MODULE)
[Tue Feb 12 23:44:41.952014 2019] [:error] [pid 8430] [remote 192.168.31.242:59848] File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
[Tue Feb 12 23:44:41.952028 2019] [:error] [pid 8430] [remote 192.168.31.242:59848] __import__(name)
Searching online suggests this is a Django version compatibility problem; downgrading Django works around it:
# download get-pip.py
wget https://bootstrap.pypa.io/get-pip.py
# run the script with the local python to install pip
python get-pip.py
# use pip to install Django 1.8.9
pip install Django==1.8.9
# check the installed Django version
python -c "import django; print(django.get_version())"
# restart httpd
systemctl restart httpd
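If the downgrade worked, the web UI should respond at cobbler-web's standard path; log in with the cbadmin account created above:
curl -k -I https://192.168.31.201/cobbler_web/
# or open https://192.168.31.201/cobbler_web/ in a browser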
2. Active/standby high availability with Ansible
Ansible, a lightweight operations tool
Ansible features:
Modular: specific modules are invoked to complete specific tasks
Implemented in Python, with three key dependencies: Paramiko, PyYAML, and Jinja2 (template language)
Simple deployment: agentless
Supports custom modules
Supports orchestrating tasks with playbooks
Idempotent: running a task once or n times gives the same result, so repeated runs cause no surprises
Secure, based on OpenSSH
No agent and no PKI dependency (no SSL needed)
Tasks are written in YAML, which supports rich data structures
A fairly capable multi-tier solution
Ansible architecture:
Core Modules: built-in modules
Custom Modules: user-defined modules
Connection Plugins: connection plugins
Host Inventory: the list of hosts ansible manages, /etc/ansible/hosts
Plugins: supplements to module functionality, such as logging and sending notifications
Playbooks: the core component; task scripts, i.e. configuration files that orchestrate and define sets of ansible tasks, which ansible executes in order; normally YAML files
Common modules (a few ad-hoc usage examples follow the list):
command
-a 'COMMAND'
user
-a 'name= state={present|absent} system= uid='
group
-a 'name= gid= state= system='
cron
-a 'name= minute= hour= day= month= weekday= job= user= state='
copy
-a 'dest= src= mode= owner= group='
Note: when src is a directory, a trailing / copies the directory's contents; without the trailing / the directory itself is copied recursively
file
-a 'path= mode= owner= group= state={directory|link|present|absent} src='
ping
takes no arguments
yum
-a 'name= state={present|latest|absent}'
service
-a 'name= state={started|stopped|restarted} enabled='
shell
-a 'COMMAND'
script
-a '/path/to/script'
setup
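A few ad-hoc invocations of the modules above, using the host groups defined later in this lab (illustrative):
ansible all -m ping
ansible apache -m yum -a 'name=httpd state=present'
ansible apache -m service -a 'name=httpd state=started enabled=yes'
ansible all -m copy -a 'src=/etc/hosts dest=/tmp/hosts mode=0644'
ansible all -m shell -a 'uptime'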
Core elements of a playbook:
- Hosts: hosts
- tasks: tasks
- variables: variables
- templates: templates, text files that contain template syntax
- handlers: handlers
- roles: roles
- Hosts: the target hosts that the given tasks run on;
- remote_user: the user that executes the tasks on the remote host
- sudo_user;
Basic ad-hoc usage: ansible HOST-PATTERN -m MOD_NAME -a MOD_ARGS -f FORKS -C -u USERNAME -c CONNECTION
Lab: deploy nginx + keepalived + LAMP automatically with Ansible
Two nginx hosts act as web proxy servers with keepalived providing high availability; behind them sit two apache servers, one running apache + php and the other apache + mysql. All of these servers are configured through ansible; once configuration is complete, the backend home pages should be reachable through the VIP.
Set up passwordless SSH login to the managed hosts:
ssh-keygen -t rsa -P ""
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.31.201
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.31.203
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.31.204
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.31.205
vim /etc/ansible/hosts #edit the inventory file and add the hosts
[nginx]
192.168.31.201
192.168.31.203
[apache]
192.168.31.204
192.168.31.205
[php]
192.168.31.204
[mysql]
192.168.31.205
Test connectivity:
ansible all -m ping
192.168.31.205 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.31.204 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.31.203 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.31.201 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Set the time zone on all hosts (appending to /etc/profile rather than overwriting it):
ansible all -m shell -a 'echo "TZ='Asia/Shanghai'; export TZ" >> /etc/profile'
Set up a cron job to synchronize the time periodically:
ansible all -m cron -a "minute=*/3 job='/usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null' name=dateupdate"
Disable firewalld and SELinux:
ansible all -m shell -a 'systemctl stop firewalld; systemctl disable firewalld; setenforce 0'
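Note that setenforce 0 only disables SELinux until the next reboot; to make it persistent, one option is the lineinfile module (a sketch that edits /etc/selinux/config):
ansible all -m lineinfile -a "dest=/etc/selinux/config regexp='^SELINUX=' line='SELINUX=disabled'"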
Configure the individual services
Next, on the ansible host, we build the playbooks that will be pushed to the remote hosts. Each service to be configured is defined as a role, and playbooks then call the appropriate roles to push the configuration.
1. Configure the apache role
Create the role's directory layout:
mkdir -pv /etc/ansible/roles/apache/{files,templates,tasks,handlers,vars,meta,default}
Create the apache vhost configuration template:
vim /etc/ansible/roles/apache/templates/vhost1.conf.j2
<VirtualHost *:80>
    ServerName lvqing.com
    DirectoryIndex index.html index.php
    DocumentRoot /var/www/html
    ProxyRequests Off
    ProxyPassMatch ^/(.*\.php)$ fcgi://192.168.31.204:9000/var/www/html/$1
    ProxyPassMatch ^/(ping|status)$ fcgi://192.168.31.204:9000/$1
    <Directory />
        Options FollowSymLinks
        AllowOverride none
        Require all granted
    </Directory>
</VirtualHost>
Create apache's home page files:
vim /etc/ansible/roles/apache/templates/index.html
<h1>This is {{ ansible_hostname }}</h1>
vim /etc/ansible/roles/apache/templates/index.php
<?php
phpinfo();
?>
Define the apache role's tasks:
vim /etc/ansible/roles/apache/tasks/main.yml
- name: install apache
  yum: name=httpd state=latest
- name: install vhost file
  template: src=/etc/ansible/roles/apache/templates/vhost1.conf.j2 dest=/etc/httpd/conf.d/vhost.conf
- name: install index.html
  template: src=/etc/ansible/roles/apache/templates/index.html dest=/var/www/html/index.html
- name: install index.php
  template: src=/etc/ansible/roles/apache/templates/index.php dest=/var/www/html/index.php
- name: start httpd
  service: name=httpd state=started
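The handlers directory created earlier is unused in this minimal role. As an optional sketch, a restart handler could be added so that changes to the vhost template trigger a restart (the handler name below is arbitrary):
vim /etc/ansible/roles/apache/handlers/main.yml
- name: restart httpd
  service: name=httpd state=restarted
Then add a line 'notify: restart httpd' under the 'install vhost file' task above.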
Configure the php-fpm role:
mkdir -pv /etc/ansible/roles/php-fpm/{files,templates,tasks,handlers,vars,meta,default}
cp /etc/php-fpm.d/www.conf /etc/ansible/roles/php-fpm/templates/www.conf
vim /etc/ansible/roles/php-fpm/templates/www.conf
#modify the following settings
listen = 0.0.0.0:9000
listen.allowed_clients = 127.0.0.1
pm.status_path = /status
ping.path = /ping
ping.response = pong
Define the php-fpm role's tasks (tasks/main.yml):
- name: install epel repo
  yum: name=epel-release state=latest
- name: install php packages
  yum: name={{ item }} state=latest
  with_items:
    - php-fpm
    - php-mysql
    - php-mbstring
    - php-mcrypt
- name: install config file
  template: src=/etc/ansible/roles/php-fpm/templates/www.conf dest=/etc/php-fpm.d/www.conf
- name: install session directory
  file: path=/var/lib/php/session group=apache owner=apache state=directory
- name: start php-fpm
  service: name=php-fpm state=started
Configure the mysql role:
mkdir -pv /etc/ansible/roles/mysql/{files,templates,tasks,handlers,vars,meta,default}
cp /etc/my.cnf /etc/ansible/roles/mysql/templates/
vim /etc/ansible/roles/mysql/templates/my.cnf
#add the following two lines
skip-name-resolve=ON
innodb-file-per-table=ON
Define the mysql role's tasks:
vim /etc/ansible/roles/mysql/tasks/main.yml
- name: install mariadb server
  yum: name=mariadb-server state=latest
- name: install config file
  template: src=/etc/ansible/roles/mysql/templates/my.cnf dest=/etc/my.cnf
- name: start mariadb
  service: name=mariadb state=started
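The role above only installs and starts MariaDB. A real application will usually also need a database and an account reachable from the php host; a sketch using the mysql_db and mysql_user modules (these require the MySQL-python package on the managed node, and the database/user names below are placeholders):
- name: install MySQL-python for the mysql_* modules
  yum: name=MySQL-python state=present
- name: create application database
  mysql_db: name=appdb state=present
- name: create application user
  mysql_user: name=appuser password=apppass priv='appdb.*:ALL' host='192.168.31.%' state=present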
Configure the nginx role:
mkdir -pv /etc/ansible/roles/nginx/{files,templates,tasks,handlers,vars,meta,default}
cp /etc/nginx/nginx.conf /etc/ansible/roles/nginx/templates/
vim /etc/ansible/roles/nginx/templates/nginx.conf
http {
......
    upstream apservers {
        server 192.168.31.204:80;
        server 192.168.31.205:80;
    }
......
    server {
......
        location / {
            proxy_pass http://apservers;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
......
    }
}
Define the nginx role's tasks:
vim /etc/ansible/roles/nginx/tasks/main.yml
- name: install epel
  yum: name=epel-release state=latest
- name: install nginx
  yum: name=nginx state=latest
- name: install config file
  template: src=/etc/ansible/roles/nginx/templates/nginx.conf dest=/etc/nginx/nginx.conf
- name: start nginx
  service: name=nginx state=started
Configure the keepalived role:
mkdir -pv /etc/ansible/roles/keepalived/{files,templates,tasks,handlers,vars,meta,default}
cp /etc/keepalived/keepalived.conf /etc/ansible/roles/keepalived/templates/
vim /etc/ansible/roles/keepalived/templates/keepalived.conf
vrrp_instance VI_1 {
    state {{ keepalived_role }}
    interface ens33
    virtual_router_id 51
    priority {{ keepalived_pri }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 12345678
    }
    virtual_ipaddress {
        192.168.31.240/24 dev ens33 label ens33:0
    }
}
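As configured, keepalived only fails over when a node itself goes down, not when nginx dies. An optional addition is a vrrp_script health check; define it above the vrrp_instance block and reference it from the instance (a sketch, not part of the role used below):
vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 2
    weight -10
}
# inside vrrp_instance VI_1 add:
    track_script {
        chk_nginx
    }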
Edit /etc/ansible/hosts and add the corresponding variables to the nginx hosts:
192.168.31.201 keepalived_role=MASTER keepalived_pri=100
192.168.31.203 keepalived_role=BACKUP keepalived_pri=98
Define the keepalived role's tasks:
vim /etc/ansible/roles/keepalived/tasks/main.yml
- name: install keepalived
  yum: name=keepalived state=latest
- name: install config file
  template: src=/etc/ansible/roles/keepalived/templates/keepalived.conf dest=/etc/keepalived/keepalived.conf
- name: start keepalived
  service: name=keepalived state=started
That completes all of the playbook roles.
Push the configuration with playbooks
1. Define the ap1 playbook and push it
mkdir /etc/ansible/playbooks
vim /etc/ansible/playbooks/ap1.yaml
#ap1 is both an apache server and a php-fpm server, so it calls the apache and php-fpm roles
- hosts: php
  remote_user: root
  roles:
    - apache
    - php-fpm
#syntax check
ansible-playbook --syntax-check /etc/ansible/playbooks/ap1.yaml
Push and run the playbook:
ansible-playbook /etc/ansible/playbooks/ap1.yaml
2. Define the ap2 playbook and push it
vim /etc/ansible/playbooks/ap2.yaml
- hosts: mysql
  remote_user: root
  roles:
    - apache
    - mysql
#push and install
ansible-playbook /etc/ansible/playbooks/ap2.yaml
3. Define the playbook for the two nginx servers and push it
vim /etc/ansible/playbooks/loadbalance.yaml
- hosts: nginx
  remote_user: root
  roles:
    - nginx
    - keepalived
[root@node1 ~]# ansible-playbook /etc/ansible/playbooks/loadbalance.yaml
Then we can test the setup.
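A minimal verification from a client on the 192.168.31.0/24 network (the VIP and interface come from the keepalived template above):
curl http://192.168.31.240/
curl http://192.168.31.240/index.php
# simulate a failover: stop keepalived on the MASTER and check that the VIP moves to the BACKUP
ansible 192.168.31.201 -m service -a 'name=keepalived state=stopped'
ip addr show ens33      # run on 192.168.31.203; ens33:0 should now hold 192.168.31.240
curl http://192.168.31.240/
ansible 192.168.31.201 -m service -a 'name=keepalived state=started'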