OpenStack Installation

1. Environment preparation

CentOS 7

1.1 yum repository setup

  • yum list |grep openstack

    centos-release-openstack-newton.noarch 1-2.el7 extras
    centos-release-openstack-ocata.noarch 1-2.el7 extras
    centos-release-openstack-pike.x86_64 1-1.el7 extras
    centos-release-openstack-queens.x86_64 1-1.el7.centos extras
  • yum install centos-release-openstack-queens -y    # this creates the OpenStack yum repo files under /etc/yum.repos.d/

1.2 OpenStack client

  yum install python-openstackclient -y
  yum install openstack-selinux -y

2. Installation

2.1 MariaDB database installation

OpenStack stores most of its state in a database. Most major databases are supported, including MariaDB, MySQL, and PostgreSQL; the database runs on the controller node.

  • Remove any previously installed MySQL packages
    rpm -qa | grep -E 'mysql|mariadb'
    rpm -e --nodeps mysql-community-common-5.7.9-1.el7.x86_64
    rpm -e --nodeps mysql-community-libs-5.7.9-1.el7.x86_64
    rpm -e --nodeps mysql-community-client-5.7.9-1.el7.x86_64
    rpm -e --nodeps mysql-community-server-5.7.9-1.el7.x86_64
    
  • Install MariaDB
     yum install mariadb mariadb-server python2-PyMySQL  -y
    
  • Edit the configuration (/etc/my.cnf.d/mariadb-server.cnf)
      [mysqld]
      bind-address = 10.20.16.229
      default-storage-engine = innodb
      innodb_file_per_table = on
      max_connections = 4096
      collation-server = utf8_general_ci
      character-set-server = utf8
      # directory layout planned in advance
      datadir=/data/openstack/mysql/data
      socket=/data/openstack/mysql/mysql.sock
      log-error=/data/openstack/mysql/log/mariadb.log
      pid-file=/data/openstack/mysql/mariadb.pid
    
    
  • Change ownership of the working directory
      chown mysql:mysql -R /data/openstack/mysql
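    The chown above assumes the planned directory tree already exists. A minimal sketch of creating it (OPENSTACK_BASE is an assumption of this sketch: it defaults to a throwaway path so the commands can be dry-run unprivileged; on the real controller node run it as root with OPENSTACK_BASE=/data/openstack):

    ```shell
    # Create the directory layout referenced in mariadb-server.cnf.
    # BASE defaults to a scratch path for dry runs; set OPENSTACK_BASE=/data/openstack
    # (as root) on the actual controller node.
    BASE=${OPENSTACK_BASE:-/tmp/openstack}
    mkdir -p "$BASE/mysql/data" "$BASE/mysql/log"
    ls "$BASE/mysql"
    ```

    The same pattern applies to the other working directories used later (rabbitmq, httpd, glance, nova, neutron, cinder) before their respective chown steps.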
    
  • Enable and start
      systemctl enable mariadb.service
      systemctl start mariadb.service
    
  • Run the initial setup
      # secure the default accounts
      mysql_secure_installation
      # allow remote access (used later when other nodes connect)
      GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'ips';
    

2.2 RabbitMQ message queue installation

  • Remove old versions (omitted)
  • Install
    yum install rabbitmq-server -y
    
  • Create the account and set permissions
    # replace RABBIT_PASS with the real password (ips here)
    rabbitmqctl add_user openstack RABBIT_PASS
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    
  • Create the working directories
    mkdir -p /data/openstack/rabbitmq/data /data/openstack/rabbitmq/log
    chown rabbitmq:rabbitmq -R /data/openstack/rabbitmq
    
  • Edit the unit file (/usr/lib/systemd/system/rabbitmq-server.service)
    Environment=RABBITMQ_LOG_BASE=/data/openstack/rabbitmq/log
    WorkingDirectory=/data/openstack/rabbitmq/data
    
  • Enable and start
    systemctl enable rabbitmq-server.service
    systemctl start rabbitmq-server.service
    
  • Optionally enable the management plugin for easier administration (RabbitMQ-specific, not covered in detail)
     rabbitmq-plugins enable rabbitmq_management
     systemctl restart rabbitmq-server
     Log in at http://ip:15672/
     Note: the user must carry an administrator tag (rabbitmqctl set_user_tags openstack administrator)
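    Editing /usr/lib/systemd/system/rabbitmq-server.service directly gets overwritten on package upgrades. A sketch of the same change as a systemd drop-in (the override file name is an assumption; any *.conf under that directory works):

    ```ini
    # /etc/systemd/system/rabbitmq-server.service.d/override.conf
    [Service]
    Environment=RABBITMQ_LOG_BASE=/data/openstack/rabbitmq/log
    WorkingDirectory=/data/openstack/rabbitmq/data
    ```

    Run systemctl daemon-reload afterwards so systemd picks up the drop-in.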
    

2.3 Memcached installation

  • Remove old versions (omitted)
  • Install
    yum install memcached python-memcached -y
    
  • Edit the configuration file (/etc/sysconfig/memcached)
    PORT="11211"
    USER="memcached"
    MAXCONN="1024"
    CACHESIZE="64"
    # the main change is appending controller
    OPTIONS="-l 127.0.0.1,::1,controller"
    
  • Enable and start
    systemctl enable memcached.service
    systemctl start memcached.service
    

2.4 Identity service keystone (controller node)

  • Create the database
    CREATE DATABASE keystone;
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost'  IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%'  IDENTIFIED BY 'ips';
    
  • Install the packages

    yum install openstack-keystone httpd mod_wsgi -y
    
  • Configure keystone (edit /etc/keystone/keystone.conf)

     [database]
     ...
     connection = mysql+pymysql://keystone:ips@controller/keystone
    
     [token]
     ...
     provider = uuid
    
  • Initialize the Identity service database and the Fernet keys

      su -s /bin/sh -c "keystone-manage db_sync" keystone
      keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    
  • Bootstrap the Identity service:

      # In Queens a single port (5000) serves all interfaces; in earlier releases 5000 served the regular API and 35357 the admin API. Replace ADMIN_PASS here with ips
      keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:5000/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne
    
  • Configure the Apache HTTP server (/etc/httpd/conf/httpd.conf)
    vim /etc/httpd/conf/httpd.conf

     ServerName controller  
    

    cp -f /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

       # the main change is the log file locations
        Listen 5000
        Listen 35357
    
       <VirtualHost *:5000>
          WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
          WSGIProcessGroup keystone-public
          WSGIScriptAlias / /usr/bin/keystone-wsgi-public
          WSGIApplicationGroup %{GLOBAL}
          WSGIPassAuthorization On
          <IfVersion >= 2.4>
            ErrorLogFormat "%{cu}t %M"
          </IfVersion>
          ErrorLog /data/openstack/httpd/keystone-error.log
          CustomLog /data/openstack/httpd/keystone-access.log combined
    
          <Directory /usr/bin>
              <IfVersion >= 2.4>
                  Require all granted
              </IfVersion>
              <IfVersion < 2.4>
                  Order allow,deny
                  Allow from all
              </IfVersion>
          </Directory>
      </VirtualHost>
    
      <VirtualHost *:35357>
          WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
          WSGIProcessGroup keystone-admin
          WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
          WSGIApplicationGroup %{GLOBAL}
          WSGIPassAuthorization On
          <IfVersion >= 2.4>
            ErrorLogFormat "%{cu}t %M"
          </IfVersion>
          ErrorLog /data/openstack/httpd/keystone-error.log
          CustomLog /data/openstack/httpd/keystone-access.log combined
    
          <Directory /usr/bin>
              <IfVersion >= 2.4>
                  Require all granted
              </IfVersion>
              <IfVersion < 2.4>
                 Order allow,deny
                  Allow from all
              </IfVersion>
          </Directory>
      </VirtualHost>
    
  • Create an admin-rc file with the following content

    export OS_USERNAME=admin
    export OS_PASSWORD=ips
    export OS_PROJECT_NAME=admin
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_AUTH_URL=http://controller:35357/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2
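    The file above can be generated and loaded in one step; a minimal sketch (written to /tmp only for illustration; keep the real file somewhere private such as root's home directory, since it contains a password):

    ```shell
    # Generate admin-rc from a heredoc and source it into the current shell.
    cat > /tmp/admin-rc <<'EOF'
    export OS_USERNAME=admin
    export OS_PASSWORD=ips
    export OS_PROJECT_NAME=admin
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_AUTH_URL=http://controller:35357/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2
    EOF
    . /tmp/admin-rc
    echo "$OS_USERNAME / $OS_PROJECT_NAME"
    ```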
    
  • Create a domain, project, user, and role

     # create a domain (a default domain already exists)
      openstack domain create --description "An Example Domain" example
     # create projects
      openstack project create --domain default  --description "Service Project" service
      openstack project create --domain default --description "Demo Project" demo
     # create a user; set the password to ips when prompted
      openstack user create --domain default   --password-prompt demo
     # create a role
      openstack role create user
     # bind the user, role, and project together
      openstack role add --project demo --user demo user
     #  verify
      unset OS_AUTH_URL OS_PASSWORD
      openstack --os-auth-url http://controller:35357/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
    
  • Create the client environment files; admin-rc exists already, so create demo-rc

      export OS_PROJECT_DOMAIN_NAME=Default
      export OS_USER_DOMAIN_NAME=Default
      export OS_PROJECT_NAME=demo
      export OS_USERNAME=demo
      export OS_PASSWORD=ips
      export OS_AUTH_URL=http://controller:5000/v3
      export OS_IDENTITY_API_VERSION=3
      export OS_IMAGE_API_VERSION=2
    
    
  • Q&A for this section
    QA1:Error: Package: perl-DBD-MySQL-4.023-5.el7.x86_64 (@base)

       rpm -ivh mysql-community-libs-compat-5.7.18-1.el7.x86_64.rpm
    

    QA2:Missing value auth-url required for auth plugin password

    source admin-rc
    

    QA3:Invalid command 'WSGIDaemonProcess', perhaps misspelled or defined by a module not included in the server configuration

     # mod_wsgi is part of the install instructions, but removing httpd to fix other problems removes it as well; reinstall them together
     yum install mod_wsgi
    

    QA4:The request you have made requires authentication. (HTTP 401) (Request-ID: req-9a49935d-49a6-4673-ae3b-193d53eb0444)

     # Mistakes during installation are inevitable. When coming back to fix this, one possibility is that the password was changed; another is that an earlier bootstrap has not yet taken effect. Re-run the bootstrap:
         keystone-manage bootstrap --bootstrap-password ips \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
    

2.5 Image service glance (controller node)

  • Create the database
    CREATE DATABASE glance;
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'ips';
    
  • Create the glance user in OpenStack
      # create the user; set the password to ips when prompted
      openstack user create --domain default --password-prompt glance
      # add glance to the service project with the admin role
      openstack role add --project service --user glance admin
      # create the service and endpoints for the Image service
      openstack service create --name glance  --description "OpenStack Image" image
      openstack endpoint create --region RegionOne image public http://controller:9292
      openstack endpoint create --region RegionOne image internal http://controller:9292
      openstack endpoint create --region RegionOne image admin http://controller:9292
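    The three endpoint-create calls differ only in the interface name, so they can be generated in a loop. A sketch with a dry-run guard (DRY_RUN is an assumption of this sketch, not an openstack flag; with the default DRY_RUN=1 it only prints the commands):

    ```shell
    # Print (or, with DRY_RUN=0 and admin-rc sourced, actually run) the
    # endpoint-create command for each interface of the Image service.
    DRY_RUN=${DRY_RUN:-1}
    for iface in public internal admin; do
      cmd="openstack endpoint create --region RegionOne image $iface http://controller:9292"
      if [ "$DRY_RUN" = 1 ]; then
        echo "$cmd"
      else
        $cmd
      fi
    done
    ```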
    
  • Install the packages
    yum install openstack-glance -y
    
  • Edit the configuration files
    /etc/glance/glance-api.conf
     [database]
     connection = mysql+pymysql://glance:ips@controller/glance
    
     [keystone_authtoken]
     auth_uri = http://controller:5000
     auth_url = http://controller:5000
     memcached_servers = controller:11211
     auth_type = password
     project_domain_name = Default
     user_domain_name = Default
     project_name = service
     username = glance
     password = ips
    
     [paste_deploy]
     flavor = keystone
    
    # image store backends and location
     [glance_store]
     stores = file,http
     default_store = file
     filesystem_store_datadir = /data/openstack/glance/images/
    
    /etc/glance/glance-registry.conf
     [database]
     connection = mysql+pymysql://glance:ips@controller/glance
     [keystone_authtoken]
     auth_uri = http://controller:5000
     auth_url = http://controller:5000
     memcached_servers = controller:11211
     auth_type = password
     project_domain_name = Default
     user_domain_name = Default
     project_name = service
     username = glance
     password = ips
    
    [paste_deploy]
     flavor = keystone
    
  • Create the working directories
     mkdir -p /data/openstack/glance/images/
     mkdir -p /data/openstack/glance/log/
     chown glance:glance -R /data/openstack/glance
    
  • Initialize the glance database
     su -s /bin/sh -c "glance-manage db_sync" glance
    
  • Edit openstack-glance-api.service and openstack-glance-registry.service to consolidate the logs, then start
     # the main change is relocating the logs
     ExecStart=/usr/bin/glance-api --log-dir /data/openstack/glance/log/
     ExecStart=/usr/bin/glance-registry --log-dir /data/openstack/glance/log/
     # start
     systemctl daemon-reload 
     systemctl enable openstack-glance-api.service openstack-glance-registry.service
     systemctl start openstack-glance-api.service openstack-glance-registry.service
    
  • Verify
     # download a test image
     wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
     # import the image
      openstack image create "cirros" \
      --file cirros-0.3.5-x86_64-disk.img \
      --disk-format qcow2 --container-format bare \
      --public
     # list the images
      openstack image list
    # fetch a CentOS qcow2 image
      wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
    # import the image
    openstack image create "CentOS7" \
      --file CentOS-7-x86_64-GenericCloud.qcow2 \
      --disk-format qcow2 --container-format bare \
      --public
    

2.6 Compute service (nova)

2.6.1 Controller node installation

  • Create the databases
    CREATE DATABASE nova_api;
    CREATE DATABASE nova;
    CREATE DATABASE nova_cell0;
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'ips';
    
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'ips';
    
    GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'ips';
    flush privileges;
    
  • Create the nova user in OpenStack
      # create the user; set the password to ips when prompted
      openstack user create --domain default --password-prompt nova
      # add nova to the service project with the admin role
      openstack role add --project service --user nova admin
      # create the service and endpoints for the Compute service
      openstack service create --name nova  --description "OpenStack Compute" compute
      openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
      openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
      openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
    
  • Create the placement user in OpenStack
      # create the user; set the password to ips when prompted
      openstack user create --domain default --password-prompt placement
      # add placement to the service project with the admin role
      openstack role add --project service --user placement admin
      # create the service and endpoints for the Placement API
      openstack service create --name placement --description "Placement API" placement
      openstack endpoint create --region RegionOne placement public http://controller:8778
      openstack endpoint create --region RegionOne placement internal http://controller:8778
      openstack endpoint create --region RegionOne placement admin http://controller:8778
    
  • Install the packages on the controller node
    yum install openstack-nova-api openstack-nova-conductor \
    openstack-nova-console openstack-nova-novncproxy \
    openstack-nova-scheduler openstack-nova-placement-api  -y
    
  • Create the working directory
     mkdir -p /data/openstack/nova/
     chown nova:nova -R /data/openstack/nova
    
  • Edit the configuration file (/etc/nova/nova.conf)
     [DEFAULT]
     # ...
     enabled_apis = osapi_compute,metadata
     transport_url = rabbit://openstack:ips@controller
     my_ip = 10.20.16.229
     use_neutron = True
     firewall_driver = nova.virt.firewall.NoopFirewallDriver
    
     [api_database]
     # ...
     connection = mysql+pymysql://nova:ips@controller/nova_api
    
     [database]
     # ...
     connection = mysql+pymysql://nova:ips@controller/nova
    
     [api]
     # ...
     auth_strategy = keystone
    
     [keystone_authtoken]
     # ...
     auth_url = http://controller:5000/v3
     memcached_servers = controller:11211
     auth_type = password
     project_domain_name = default
     user_domain_name = default
     project_name = service
     username = nova
     password = ips
    
     [vnc]
     enabled = true
     # ...
     server_listen = $my_ip
     server_proxyclient_address = $my_ip
    
     [glance]
     # ...
     api_servers = http://controller:9292
    
     [oslo_concurrency]
     # ...
     lock_path = /data/openstack/nova/tmp
    
     [placement]
     # ...
     os_region_name = RegionOne
     project_domain_name = Default
     project_name = service
     auth_type = password
     user_domain_name = Default
     auth_url = http://controller:5000/v3
     username = placement
     password = ips
    
  • Edit /etc/httpd/conf.d/00-nova-placement-api.conf and restart httpd
     # upstream bug: add the block below
     <Directory /usr/bin>
        <IfVersion >= 2.4>
           Require all granted
        </IfVersion>
        <IfVersion < 2.4>
           Order allow,deny
           Allow from all
        </IfVersion>
     </Directory>
     # restart
     systemctl restart httpd
    
  • Initialize the nova databases and verify
     # initialize
     su -s /bin/sh -c "nova-manage api_db sync" nova
     su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
     su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
     su -s /bin/sh -c "nova-manage db sync" nova
     # verify
     nova-manage cell_v2 list_cells
    
  • Edit the openstack-nova-*.service unit files, mainly to consolidate the logs, then start
      # the main change is relocating the logs
      # openstack-nova-api.service
      ExecStart=/usr/bin/nova-api --log-dir /data/openstack/nova/log/
      # openstack-nova-consoleauth.service
      ExecStart=/usr/bin/nova-consoleauth --log-dir /data/openstack/nova/log/
      # openstack-nova-scheduler.service
      ExecStart=/usr/bin/nova-scheduler --log-dir /data/openstack/nova/log/
      # openstack-nova-conductor.service
      ExecStart=/usr/bin/nova-conductor  --log-dir /data/openstack/nova/log/
      # openstack-nova-novncproxy.service
      ExecStart=/usr/bin/nova-novncproxy --web /usr/share/novnc/ $OPTIONS --log-dir /data/openstack/nova/log/
    
     # enable and start
     systemctl daemon-reload
     systemctl enable openstack-nova-api.service \
      openstack-nova-consoleauth.service openstack-nova-scheduler.service \
      openstack-nova-conductor.service openstack-nova-novncproxy.service
     systemctl start openstack-nova-api.service \
      openstack-nova-consoleauth.service openstack-nova-scheduler.service \
      openstack-nova-conductor.service openstack-nova-novncproxy.service
    
  • Q&A for this section
    QA1: upstream bug, fixed by the change to /etc/httpd/conf.d/00-nova-placement-api.conf shown above

2.6.2 Compute node installation

  • Install the packages on the compute node

    yum install openstack-nova-compute -y
    
  • Edit the configuration file (/etc/nova/nova.conf)

    [DEFAULT]
      # ...
     verbose = True
     #use the IP address of the management network interface on this compute node (e.g. 10.0.0.31 for the first node in the example architecture)
     my_ip = 10.20.16.228
     enabled_apis = osapi_compute,metadata
     transport_url= rabbit://openstack:ips@controller
     use_neutron = True
     firewall_driver = nova.virt.firewall.NoopFirewallDriver
    
    [api]
    # ...
    auth_strategy = keystone
    
    [keystone_authtoken]
    # ...
    auth_url = http://controller:5000/v3
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = nova
    password = ips
    
    [vnc]
    # ...
    enabled = True
    #the server component listens on all IP addresses
    vncserver_listen = 0.0.0.0
    #the proxy component listens only on the management interface IP of the compute node
    vncserver_proxyclient_address = $my_ip
    #location for accessing remote consoles of instances on this node with a web browser
    novncproxy_base_url = http://controller:6080/vnc_auto.html
    
    [glance]
    # ...
    api_servers = http://controller:9292
    
     #lock path
    [oslo_concurrency]
    # (optional) to help with troubleshooting, enable verbose logging in the [DEFAULT] section (verbose = True)
    lock_path = /data/openstack/nova/tmp
    [placement]
    # ...
    os_region_name = RegionOne
    project_domain_name = Default
    project_name = service
    auth_type = password
    user_domain_name = Default
    auth_url = http://controller:5000/v3
    username = placement
    password = ips
    
  • Check whether the CPU supports hardware acceleration

      egrep -c '(vmx|svm)' /proc/cpuinfo
      #if this returns one or more, the compute node supports hardware acceleration (keep the default virt_type = kvm); if it returns 0, set virt_type to qemu in /etc/nova/nova.conf
      [libvirt]
      ...
      virt_type = qemu
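    The check above can be scripted so the decision is explicit; a minimal sketch (it only prints the value to put in /etc/nova/nova.conf, it does not edit the file):

    ```shell
    # Count the vmx/svm CPU flags; zero means no hardware acceleration,
    # so fall back to plain QEMU emulation, otherwise keep kvm.
    count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
    if [ "${count:-0}" -gt 0 ]; then
      virt_type=kvm
    else
      virt_type=qemu
    fi
    echo "virt_type = $virt_type"
    ```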
     
    
  • Change the log directory, then start the compute service

      # openstack-nova-compute.service
       ExecStart=/usr/bin/nova-compute --log-dir /data/openstack/nova/compute
      # start
       systemctl  daemon-reload
       systemctl enable libvirtd.service openstack-nova-compute.service
       systemctl start libvirtd.service openstack-nova-compute.service
    
  • Register the new compute node in the cell database

      openstack compute service list --service nova-compute
      # run this step every time a compute node is added
      su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
      #to avoid running it manually, configure periodic discovery in /etc/nova/nova.conf
      [scheduler]
      discover_hosts_in_cells_interval = 300
    
  • Q&A for this section

2.7 Networking service neutron

2.7.1 Controller node

  • Create the database
    CREATE DATABASE neutron;
    GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'ips';
     flush privileges;
    
  • Create the neutron user in OpenStack
      # create the user; set the password to ips when prompted
      openstack user create --domain default --password-prompt neutron
      # add neutron to the service project with the admin role
      openstack role add --project service --user neutron admin
      # create the service and endpoints for the Networking service
      openstack service create --name neutron  --description "OpenStack Networking" network
      openstack endpoint create --region RegionOne  network public http://controller:9696
      openstack endpoint create --region RegionOne  network internal http://controller:9696
      openstack endpoint create --region RegionOne  network admin http://controller:9696
    
  • Install the packages (provider networks)
     yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
    
  • Edit the configuration
    /etc/neutron/neutron.conf
     [DEFAULT]
     # ...
     core_plugin = ml2
     service_plugins =
     transport_url = rabbit://openstack:ips@controller
     auth_strategy = keystone
     notify_nova_on_port_status_changes = true
     notify_nova_on_port_data_changes = true
    
     [keystone_authtoken]
     # ...
     auth_uri = http://controller:5000
     auth_url = http://controller:35357
     memcached_servers = controller:11211
     auth_type = password
     project_domain_name = default
     user_domain_name = default
     project_name = service
     username = neutron
     password = ips
    
     [nova]
     # ...
     auth_url = http://controller:35357
     auth_type = password
     project_domain_name = default
     user_domain_name = default
     region_name = RegionOne
     project_name = service
     username = nova
     password = ips
    
     [oslo_concurrency]
     # create the working directory in advance
     # mkdir -p /data/openstack/neutron/lock
     # chown neutron:neutron -R /data/openstack/neutron
     lock_path = /data/openstack/neutron/lock
    
    Modular Layer 2 (ML2) plug-in: /etc/neutron/plugins/ml2/ml2_conf.ini
     [ml2]
     # ...
     type_drivers = flat,vlan
     tenant_network_types =
     mechanism_drivers = linuxbridge
     extension_drivers = port_security
    
     [ml2_type_flat]
     # ...
     flat_networks = provider
    
     [securitygroup]
     # ...
     enable_ipset = true
    

    Linux bridge agent: /etc/neutron/plugins/ml2/linuxbridge_agent.ini

     [linux_bridge]
     physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    
     [vxlan]
     enable_vxlan = false
    
     [securitygroup]
     # ...
     enable_security_group = true
     firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    
    iptables: add to /usr/lib/sysctl.d/00-system.conf, then apply with sysctl --system
       net.bridge.bridge-nf-call-ip6tables = 1
       net.bridge.bridge-nf-call-iptables = 1
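    These net.bridge.* keys only exist once the kernel's bridge netfilter code is loaded. On CentOS 7 kernels where that is the separate br_netfilter module, one way to make it persistent is a modules-load.d entry (the file name is an assumption):

    ```
    # /etc/modules-load.d/br_netfilter.conf
    br_netfilter
    ```

    Load it immediately with modprobe br_netfilter before running sysctl, otherwise the keys are reported as unknown.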
    
    DHCP agent: /etc/neutron/dhcp_agent.ini
     [DEFAULT]
     # ...
     interface_driver = linuxbridge
     dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
     enable_isolated_metadata = true
    
    metadata agent : /etc/neutron/metadata_agent.ini
     [DEFAULT]
     # ...
     nova_metadata_host = controller
     metadata_proxy_shared_secret = ips
    
    /etc/nova/nova.conf (do not change options configured earlier)
     [neutron]
     url = http://controller:9696
     auth_url = http://controller:35357
     auth_type = password
     project_domain_name = default
     user_domain_name = default
     region_name = RegionOne
     project_name = service
     username = neutron
     password = ips
     service_metadata_proxy = true
     metadata_proxy_shared_secret = ips
    
  • Symlink the plugin configuration to the expected path
      ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    
  • Initialize the database
      su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    
  • Restart nova-api, since its configuration was just changed
       systemctl restart openstack-nova-api.service
    
  • Edit the unit files, then start
    # /usr/lib/systemd/system/neutron-server.service
    ExecStart=/usr/bin/neutron-server \ 
    --config-file /usr/share/neutron/neutron-dist.conf  \ 
    --config-dir /usr/share/neutron/server \ 
    --config-file /etc/neutron/neutron.conf \ 
    --config-file /etc/neutron/plugin.ini \ 
    --config-dir /etc/neutron/conf.d/common \
    --config-dir /etc/neutron/conf.d/neutron-server \ 
    --log-file /data/openstack/neutron/log/server.log
    
    # /usr/lib/systemd/system/neutron-linuxbridge-agent.service
    ExecStart=/usr/bin/neutron-linuxbridge-agent \
    --config-file /usr/share/neutron/neutron-dist.conf \ 
    --config-file /etc/neutron/neutron.conf \ 
    --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini \
    --config-dir /etc/neutron/conf.d/common \
    --config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent \ 
    --log-file /data/openstack/neutron/log/linuxbridge-agent.log
    
    # /usr/lib/systemd/system/neutron-dhcp-agent.service
    ExecStart=/usr/bin/neutron-dhcp-agent \
    --config-file /usr/share/neutron/neutron-dist.conf \
    --config-file /etc/neutron/neutron.conf  \ 
    --config-file /etc/neutron/dhcp_agent.ini \
    --config-dir /etc/neutron/conf.d/common \
    --config-dir /etc/neutron/conf.d/neutron-dhcp-agent \ 
    --log-file /data/openstack/neutron/log/dhcp-agent.log
    
    # /usr/lib/systemd/system/neutron-metadata-agent.service
    ExecStart=/usr/bin/neutron-metadata-agent \
    --config-file /usr/share/neutron/neutron-dist.conf \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/metadata_agent.ini \
    --config-dir /etc/neutron/conf.d/common \
    --config-dir /etc/neutron/conf.d/neutron-metadata-agent \
    --log-file /data/openstack/neutron/log/metadata-agent.log
    
    # start
    systemctl daemon-reload
    systemctl start neutron-server.service \
    neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    neutron-metadata-agent.service
    

2.7.2 Compute node

  • Install the packages on the compute node
      yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y
    
  • Edit the configuration files
    /etc/neutron/neutron.conf
    [DEFAULT]
    ...
    #RabbitMQ message queue access
    transport_url = rabbit://openstack:ips@controller

    #identity service access
    auth_strategy = keystone
    verbose = True
    
    [keystone_authtoken]
    # ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = ips
    
     #lock path:
    [oslo_concurrency]
       ...
    #(optional) to help with troubleshooting, enable verbose logging in the [DEFAULT] section (verbose = True)
    lock_path = /data/openstack/neutron/tmp

    #comment out any ``connection`` options, since compute nodes do not access the database directly
    [database]
    
    
    Linux bridge agent:/etc/neutron/plugins/ml2/linuxbridge_agent.ini
    [linux_bridge]
    physical_interface_mappings = provider:eno1
    [vxlan]
    enable_vxlan = false
    [securitygroup]
    # ...
    enable_security_group = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    
    iptables: add to /usr/lib/sysctl.d/00-system.conf, then apply with sysctl --system
       net.bridge.bridge-nf-call-ip6tables = 1
       net.bridge.bridge-nf-call-iptables = 1
    
    /etc/nova/nova.conf
     [neutron]
     url = http://controller:9696
     auth_url = http://controller:35357
     auth_type = password
     project_domain_name = default
     user_domain_name = default
     region_name = RegionOne
     project_name = service
     username = neutron
     password = ips
    
  • Restart the compute service, since the nova configuration changed
      # systemctl restart openstack-nova-compute.service
    
  • Edit the unit file, then enable and start the Linux bridge agent
    # /usr/lib/systemd/system/neutron-linuxbridge-agent.service ; create the log directory first
    # mkdir -p /data/openstack/neutron/log
    # chown neutron:neutron -R /data/openstack/neutron
    ExecStart=/usr/bin/neutron-linuxbridge-agent \ 
    --config-file /usr/share/neutron/neutron-dist.conf \ 
    --config-file /etc/neutron/neutron.conf \ 
    --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini \ 
    --config-dir /etc/neutron/conf.d/common \ 
    --config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent \ 
    --log-file /data/openstack/neutron/log/linuxbridge-agent.log
    # enable and start
    systemctl  daemon-reload
    systemctl enable neutron-linuxbridge-agent.service
    systemctl start neutron-linuxbridge-agent.service
    
  • Verify
     openstack extension list --network
    

2.8 Creating an instance

  • Flavor

  • Q&A for this section
      QA1: when creating a server, nova-conductor.log reports the following error:

    2018-05-15 11:45:10.816 5547 ERROR   oslo_messaging.rpc.server MessageDeliveryFailure: Unable to connect to AMQP   server on controller:5672 after None tries: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
    

    Fix: https://blog.silversky.moe/works/openstack-lanuch-instance-infinite-scheduling

     su -s /bin/sh -c "nova-manage db sync" nova
     If the problem persists, check in the database that the configuration is correct
     SELECT * FROM `nova_api`.`cell_mappings` WHERE `created_at` LIKE BINARY '%openstack%' OR `updated_at` LIKE BINARY '%openstack%' OR `id` LIKE BINARY '%openstack%' OR `uuid` LIKE BINARY '%openstack%' OR `name` LIKE BINARY '%openstack%' OR `transport_url` LIKE BINARY '%openstack%' OR `database_connection` LIKE BINARY '%openstack%' ;
    

    Moreover, even with a correct configuration, the same error can appear when fetching a token with openstack4j

       su -s /bin/sh -c "nova-manage db sync" nova
    

    QA2:when creating a server: {u'message': u'No valid host was found. ', u'code': 500, u'created': u'2018-05-17T02:22:47Z'

       The administrator's quota for this project allows at most 10 instances, 20 vCPUs,
       and 5 GB of RAM; exceeding any one of these limits raises this error. This is quota management.
        # raise the default quotas
       openstack quota set c5ba590cab874f55b1668bad5cd2a6a6 --instances 30 --cores 90 --ram 204800
     
    

    QA3:Build of instance 00b69820-ef36-447c-82ca-7bdec4c70ed2 was re-scheduled: invalid argument: could not find capabilities for domaintype=kvm

      # KVM is disabled in the BIOS
       dmesg | grep kvm
      Reboot into the BIOS setup and enable virtualization
    

2.9 Dashboard installation

  • Install the packages
      yum install openstack-dashboard -y
    
  • Edit the configuration file (/etc/openstack-dashboard/local_settings)
      #point the dashboard at the controller node's OpenStack services
      OPENSTACK_HOST = "controller"
      #allow all hosts to access the dashboard
      ALLOWED_HOSTS = ['*', ]
      #configure the memcached session storage service
       CACHES = {
         'default': {
             'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
             'LOCATION': 'controller:11211',
          }
        }
      #default role assigned to users created through the dashboard
       OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
      #enable the multi-domain model
      OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
      #configure the service API versions so that Keystone V3 is used for dashboard logins
      OPENSTACK_API_VERSIONS = {
          "identity": 3,
          "volume": 2,
          "image": 2
      }
     #with networking option 1 (provider networks), disable support for layer-3 networking services
      OPENSTACK_NEUTRON_NETWORK = {
          ...
        'enable_router': False,
        'enable_quotas': False,
        'enable_distributed_router': False,
        'enable_ha_router': False,
        'enable_lb': False,
        'enable_firewall': False,
        'enable_vpn': False,
        'enable_fip_topology_check': False,
    }
    #optionally set the time zone
    TIME_ZONE = "Asia/Shanghai"
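    Alongside the CACHES block above, the official install guide also switches Django's session engine to memcached; without this line, sessions still go to the default backend:

    ```python
    # store dashboard sessions in the memcached cache configured in CACHES
    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    ```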
    
  • Enable and start the web server and the session storage service, configured to start at boot
     # systemctl enable httpd.service memcached.service
     # systemctl restart httpd.service memcached.service
    

2.10 Block Storage service cinder (controller and compute nodes)

2.10.1 Controller node

  • Create the database
    CREATE DATABASE cinder;
    GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost'  IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%'   IDENTIFIED BY 'ips';
    flush privileges;
  • 創建openstack中的用戶cinder
       # 創建user ,此時設置密碼為ips
       openstack user create --domain default --password-prompt cinder;
       # 給cinder 賦予service權限和admin角色
       openstack role add --project service --user cinder admin;
       # 創建cinderv2 和 cinderv3 服務
       openstack service create --name cinderv2  --description "OpenStack Block Storage" volumev2;
       openstack service create --name cinderv3  --description "OpenStack Block Storage" volumev3;
      # 創建service和endpoints,用于鏡像
       openstack service create --name cinderv2  --description "OpenStack Block Storage" volumev2
       openstack endpoint create --region RegionOne  volumev2 public http://controller:8776/v2/%\(project_id\)s
       openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
       openstack endpoint create --region RegionOne  volumev2 admin http://controller:8776/v2/%\(project_id\)s
       openstack endpoint create --region RegionOne  volumev3 public http://controller:8776/v3/%\(project_id\)s
       openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
       openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
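The `%\(project_id\)s` in each endpoint URL is not resolved when the endpoint is created; the client substitutes the caller's project ID into it on every request. A quick sketch of that substitution (the project ID below is a made-up example):

```shell
# Endpoint URL as stored by Keystone, with the placeholder still literal
url='http://controller:8776/v3/%(project_id)s'
project_id='b0f45a4a9a914a0d81a53c28beea9e48'   # hypothetical project ID
# sed replaces the literal placeholder the way a client would
resolved=$(echo "$url" | sed "s/%(project_id)s/$project_id/")
echo "$resolved"
# → http://controller:8776/v3/b0f45a4a9a914a0d81a53c28beea9e48
```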
    
  • Install cinder
     yum install openstack-cinder -y
    
  • Edit the configuration file /etc/cinder/cinder.conf
     [DEFAULT]
     # ...
     transport_url = rabbit://openstack:ips@controller
     auth_strategy = keystone
    
     [keystone_authtoken]
     # ...
     auth_uri = http://controller:5000
     auth_url = http://controller:35357
     memcached_servers = controller:11211
     auth_type = password
     project_domain_id = default
     user_domain_id = default
     project_name = service
     username = cinder
     password = ips
    
     [database]
     # ...
     connection = mysql+pymysql://cinder:ips@controller/cinder
    
     # Create the directories beforehand
     # mkdir -p /data/openstack/cinder/tmp
     # chown cinder:cinder -R /data/openstack/cinder
     [oslo_concurrency]
     # ...
     lock_path = /data/openstack/cinder/tmp
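The `transport_url` above packs the RabbitMQ credentials from section 2.1 into a single URI. A POSIX parameter-expansion sketch of its parts, useful when double-checking the value against the `rabbitmqctl add_user` step:

```shell
url='rabbit://openstack:ips@controller'
scheme=${url%%://*}      # rabbit        — the messaging driver
rest=${url#*://}         # openstack:ips@controller
creds=${rest%@*}         # openstack:ips
host=${rest##*@}         # controller    — the broker host
user=${creds%%:*}        # openstack     — the rabbitmqctl add_user name
pass=${creds#*:}         # ips           — its password
echo "$scheme $user $pass $host"
# → rabbit openstack ips controller
```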
    
    
  • Edit the configuration file /etc/nova/nova.conf
     [cinder]
     os_region_name = RegionOne
    
  • Restart nova
      systemctl restart openstack-nova-api.service
    
  • Initialize the database schema
       su -s /bin/sh -c "cinder-manage db sync" cinder
    
  • Edit the service startup configuration, mainly to redirect the logs
     # openstack-cinder-api.service
     ExecStart=/usr/bin/cinder-api --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /data/openstack/cinder/log/api.log
     # openstack-cinder-scheduler.service
     ExecStart=/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /data/openstack/cinder/log/scheduler.log
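One way to apply these ExecStart changes without editing the packaged unit files (which a yum update would overwrite) is a systemd drop-in; the real drop-in path would be /etc/systemd/system/openstack-cinder-api.service.d/override.conf followed by `systemctl daemon-reload`. A sketch, writing to a temporary directory here purely for illustration:

```shell
# Drop-in override sketch; a temp dir stands in for
# /etc/systemd/system/openstack-cinder-api.service.d/
unit_dir=$(mktemp -d)
cat > "$unit_dir/override.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/cinder-api --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /data/openstack/cinder/log/api.log
EOF
grep -c '^ExecStart' "$unit_dir/override.conf"
# → 2  (the empty ExecStart= clears the packaged command before setting the new one)
```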
    
  • Start cinder and configure it to start when the system boots
     systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
     systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
    

2.6.2、Compute node

  • Add LVM support and install the required packages
       yum install lvm2 device-mapper-persistent-data  openstack-cinder targetcli python-keystone -y
      # Enable and start the LVM metadata service
      # systemctl enable lvm2-lvmetad.service
      # systemctl start lvm2-lvmetad.service
    
  • Create the physical volume and volume group for the Block Storage service (cinder creates logical volumes in this volume group)
      # Prepare the partition nvme0n1p4 in advance
      pvcreate /dev/nvme0n1p4
      vgcreate cinder-volumes /dev/nvme0n1p4
    
  • Edit the configuration file /etc/lvm/lvm.conf
       devices {
       ...
       # This filter must be correct; otherwise the cinder-volume service State will show as down
       filter = [ "a|^/dev/nvme0n1p4$|", "r|.*/|" ]
       }
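The filter value is an ordered list of accept (`a`) / reject (`r`) regular expressions; the first pattern that matches a device path decides its fate. A grep sketch of how the two patterns above classify devices (assuming plain extended-regex semantics):

```shell
accept='^/dev/nvme0n1p4$'        # the regex inside "a|^/dev/nvme0n1p4$|"
echo /dev/nvme0n1p4 | grep -Eq "$accept" && echo "nvme0n1p4: accepted"
echo /dev/sda1      | grep -Eq "$accept" || echo "sda1: falls through to r|.*/| and is rejected"
```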
    
  • Edit the configuration file (/etc/cinder/cinder.conf)
      [DEFAULT]
      # ...
      # RabbitMQ message queue access
      transport_url = rabbit://openstack:ips@controller
      # Identity service access
      auth_strategy = keystone
      my_ip = 10.20.16.227
      # Enable the LVM back end
      enabled_backends = lvm
      # Lock path
      lock_path = /data/openstack/cinder/tmp
      # Enable verbose logging
      verbose = True
      # Location of the Image service API
      glance_api_servers = http://controller:9292
    
      # Database access
      [database]
      ...
      connection = mysql+pymysql://cinder:ips@controller/cinder
    
    # Identity service access; comment out or remove any other options in this section
    [keystone_authtoken]
    ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    auth_type = password
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = cinder
    # the password set for the cinder user (ips in this guide)
    password = ips
    
    # Configure the LVM back end: the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the lioadm iSCSI helper; the back end is enabled via enabled_backends in [DEFAULT]
    [lvm]
    ...
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    iscsi_protocol = iscsi
    iscsi_helper = lioadm
    
  • Start the Block Storage volume service and its dependencies, and configure them to start when the system boots
      # systemctl enable openstack-cinder-volume.service target.service
      # systemctl start openstack-cinder-volume.service target.service
    

CentOS image

  • Set a fixed root password
      virt-customize -a CentOS-7-x86_64-GenericCloud.qcow2 --root-password password:root123
    
  • Set passwords for other users
     [root@host229 openstack]# guestfish --rw -a CentOS-7-x86_64-GenericCloud.qcow2
     ><fs> run 
     ><fs> list-filesystems
     /dev/sda1: xfs
     ><fs> mount /dev/sda1 /
     ><fs> vi /etc/cloud/cloud.cfg
    
    Unlock the root account in /etc/cloud/cloud.cfg:
    disable_root: 0
    ssh_pwauth:   1
    ...
    system_info:
      default_user:
        name: centos
        lock_passwd: false
        plain_text_passwd: 'root@ips'
    
    Enable SSH password login in /etc/ssh/sshd_config:
    Port 22
    #AddressFamily any
    ListenAddress 0.0.0.0
    #ListenAddress ::
    PermitRootLogin yes
    PasswordAuthentication yes
    
  • Import the image
     openstack image create "Centos-7" --file CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare  --public
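Before importing, it can be worth confirming the file really is qcow2: qcow2 images begin with the 4-byte magic "QFI\xfb". A sanity-check sketch (a stand-in file is created here, since the real CentOS-7-x86_64-GenericCloud.qcow2 is not part of this document):

```shell
# Check the qcow2 magic bytes of an image file before importing it.
img=$(mktemp)
printf 'QFI\373' > "$img"     # stand-in header; \373 is 0xfb in octal
magic=$(head -c 3 "$img")
[ "$magic" = "QFI" ] && echo "looks like qcow2"
# → looks like qcow2
```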
    