OpenStack Queens Installation — Building a Private Cloud (2018)


OpenStack installation steps:

Environment preparation:

OS: CentOS 7 x86_64

controller: 2 vCPU / 6 GB RAM / 40 GB disk, 192.168.147.50, internet access via NAT

compute: 1 vCPU / 4 GB RAM / 40 GB disk, 192.168.147.60, internet access via NAT

neutron: 1 vCPU / 2 GB RAM / 20 GB disk, 192.168.147.70, internet access via NAT

Disable SELinux:

cat /etc/sysconfig/selinux

SELINUX=disabled

SELINUXTYPE=targeted

Configure hostnames (add to /etc/hosts on all nodes):

192.168.147.50 openstack-controller openstack-controller.com

192.168.147.60 openstack-compute openstack-compute.com

192.168.147.70 openstack-neutron openstack-neutron.com

Disable the firewall:

systemctl disable firewalld

Reboot:

reboot

Install base packages:

yum install openssl openssl-devel net-tools vim wget -y

Time synchronization:

Controller node:

yum install chrony -y

Configure NTP:

vi /etc/chrony.conf

Add:

server time.windows.com iburst

allow 192.168.147.0/24

Start the service:

systemctl enable chronyd.service

systemctl start chronyd.service

Other nodes:

yum install chrony -y

Configure NTP:

vi /etc/chrony.conf

Add:

server time.windows.com iburst

server 192.168.147.50 iburst

Start the service:

systemctl enable chronyd.service

systemctl start chronyd.service

All nodes:

Check clock synchronization:

chronyc sources

Make sure the time on all nodes is synchronized.

Check with timedatectl status:

[root@openstack-compute ~]# timedatectl status
Local time: Mon 2018-03-12 23:14:13 CST
Universal time: Mon 2018-03-12 15:14:13 UTC
RTC time: Mon 2018-03-12 15:14:13
Time zone: Asia/Shanghai (CST, +0800)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: n/a

All nodes: install the OpenStack yum repository:

yum install centos-release-openstack-queens -y

yum upgrade -y # upgrade packages from the new repository

Install the OpenStack client Python package:

yum install python-openstackclient -y

If SELinux is disabled on the system, this package is optional:

yum install openstack-selinux -y

Controller node: install MariaDB:

yum install mariadb mariadb-server python2-PyMySQL -y

vi /etc/my.cnf

[mysqld]

bind-address = 192.168.147.50

default-storage-engine = innodb

innodb_file_per_table = on

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8

Start the service:

systemctl enable mariadb.service

systemctl start mariadb.service

Set the root password:

mysql_secure_installation
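To confirm that the new root password works and the server is reachable, a quick check (assuming the root password adm123 used in the mysql commands later in this guide):

mysql -uroot -padm123 -e "SHOW DATABASES;"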

Install RabbitMQ:

yum install rabbitmq-server -y

Start the service:

systemctl enable rabbitmq-server.service

systemctl start rabbitmq-server.service

Create a user and grant permissions:

rabbitmqctl add_user openstack openstack

rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Add the administrator tag to the openstack user, otherwise it cannot log in to the web UI:

rabbitmqctl set_user_tags openstack administrator

Enable the web management plugin:

rabbitmq-plugins enable rabbitmq_management

Restart RabbitMQ:

systemctl restart rabbitmq-server

Test logging in to the web UI:

http://192.168.147.50:15672 — log in as the openstack user
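The same can be verified from the command line; the rabbitmqctl list commands show whether the user, its administrator tag, and its permissions on the default vhost were created as intended:

rabbitmqctl list_users

rabbitmqctl list_permissions -p /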

Install memcached:

yum install memcached python-memcached -y

Edit the memcached configuration file /etc/sysconfig/memcached:

OPTIONS="-l 127.0.0.1,::1,openstack-controller"

Start the service:

systemctl enable memcached.service

systemctl start memcached.service

Check the memcached listening ports:

[root@openstack-controller ~]# netstat -anltp | grep memcache

tcp   0  0 192.168.147.50:11211   0.0.0.0:*   LISTEN   56532/memcached

tcp   0  0 127.0.0.1:11211        0.0.0.0:*   LISTEN   56532/memcached

tcp6  0  0 ::1:11211              :::*        LISTEN   56532/memcached

Install etcd:

yum install etcd -y

Edit the configuration file:

vi /etc/etcd/etcd.conf

#[Member]

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.147.50:2380"

ETCD_LISTEN_CLIENT_URLS="http://192.168.147.50:2379"

ETCD_NAME="openstack-controller"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.147.50:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.147.50:2379"

ETCD_INITIAL_CLUSTER="openstack-controller=http://192.168.147.50:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"

ETCD_INITIAL_CLUSTER_STATE="new"

Start the service:

systemctl enable etcd

systemctl start etcd
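As a quick sanity check that etcd is answering on the advertised client URL, you can query its HTTP API and list the cluster members (a minimal check; the etcdctl client ships with the etcd package):

curl http://192.168.147.50:2379/version

etcdctl --endpoints=http://192.168.147.50:2379 member list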

Keystone installation:

Create the database:

mysql -uroot -padm123

Create the keystone database and grant privileges:

CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \

IDENTIFIED BY 'keystone';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \

IDENTIFIED BY 'keystone';

Install the Keystone packages:

yum install openstack-keystone httpd mod_wsgi -y

Edit the Keystone configuration file /etc/keystone/keystone.conf:

1. Database connection:

[database]

connection = mysql+pymysql://keystone:keystone@192.168.147.50/keystone

2. Token provider:

[token]

provider = fernet

3. Sync the database (must be run as the keystone user):

su -s /bin/sh -c "keystone-manage db_sync" keystone

4. Initialize the Fernet key and credential repositories:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

5. Bootstrap the Identity service:

keystone-manage bootstrap --bootstrap-password keystone \

--bootstrap-admin-url http://192.168.147.50:35357/v3/ \

--bootstrap-internal-url http://192.168.147.50:5000/v3/ \

--bootstrap-public-url http://192.168.147.50:5000/v3/ \

--bootstrap-region-id RegionOne

A side note:

Following the official documentation I first used http://controller:35357/v3/, but my hostname is openstack-controller, so the bootstrapped endpoints were wrong and requests failed.

keystone-manage has no command to delete endpoints, and re-running bootstrap with the corrected URLs does not fix them either; you have to update the endpoint table in the database:

use keystone;

update endpoint set url="http://192.168.147.50:5000/v3/" where id="xxx";

update endpoint set url="http://192.168.147.50:35357/v3/" where id="xxx";

6. Configure httpd:

vim /etc/httpd/conf/httpd.conf

ServerName openstack-controller

Create a symlink for the WSGI configuration:

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Start the httpd service:

systemctl enable httpd.service

systemctl start httpd.service

Create an admin environment variable script:

vi adminrc.sh

#!/bin/bash

export OS_USERNAME=admin

export OS_PASSWORD=keystone

export OS_PROJECT_NAME=admin

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_DOMAIN_NAME=Default

export OS_AUTH_URL=http://192.168.147.50:35357/v3

export OS_IDENTITY_API_VERSION=3
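To load these variables and confirm that authentication against Keystone works, source the script and request a token (the same check used in the verification steps below):

source adminrc.sh

openstack token issue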

Create a new domain:

openstack domain create --description "An Example Domain" xxx

Create the service project:

openstack project create --domain default \

--description "Service Project" service

Create the demo project:

openstack project create --domain default \

--description "Demo Project" demo

Create the demo user:

openstack user create --domain default \

--password-prompt demo

Create the user role:

openstack role create user

Grant the user role to the demo user on the demo project:

openstack role add --project demo --user demo user

Verify that the above works:

unset OS_AUTH_URL OS_PASSWORD

As the admin user:

openstack --os-auth-url http://192.168.147.50:35357/v3 \

--os-project-domain-name Default --os-user-domain-name Default \

--os-project-name admin --os-username admin token issue

As the demo user:

openstack --os-auth-url http://192.168.147.50:5000/v3 \

--os-project-domain-name Default --os-user-domain-name Default \

--os-project-name demo --os-username demo token issue

Glance installation:

Create the database:

mysql -uroot -padm123

create database glance;

use glance;

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \

IDENTIFIED BY 'glance';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \

IDENTIFIED BY 'glance';

Create the glance user:

openstack user create --domain default --password-prompt glance # prompts for a password

openstack user create --domain default --password=glance glance # no password prompt

Grant the admin role to the glance user on the service project:

openstack role add --project service --user glance admin

Create the Glance service entry:

openstack service create --name glance \

--description "OpenStack Image" image

Create the endpoints:

openstack endpoint create --region RegionOne \

image public http://192.168.147.50:9292

openstack endpoint create --region RegionOne \

image internal http://192.168.147.50:9292

openstack endpoint create --region RegionOne \

image admin http://192.168.147.50:9292

Install the packages:

yum install openstack-glance -y

Edit the glance-api configuration file /etc/glance/glance-api.conf:

[database]

connection = mysql://glance:glance@192.168.147.50/glance

[keystone_authtoken]

...

auth_uri = http://192.168.147.50:5000

auth_url = http://192.168.147.50:35357

memcached_servers = 192.168.147.50:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = glance

[paste_deploy]

...

flavor = keystone

[glance_store]

...

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

Edit the configuration file /etc/glance/glance-registry.conf:

[database]

connection = mysql://glance:glance@192.168.147.50/glance

[keystone_authtoken]

...

auth_uri = http://192.168.147.50:5000

auth_url = http://192.168.147.50:35357

memcached_servers = 192.168.147.50:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = glance

[paste_deploy]

...

flavor = keystone

Sync the database:

su -s /bin/sh -c "glance-manage db_sync" glance

Start the services:

systemctl enable openstack-glance-api.service \

openstack-glance-registry.service

systemctl restart openstack-glance-api.service \

openstack-glance-registry.service

Test:

openstack image list # success if no error is reported

Download a test image:

wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

Upload the image:

openstack image create "cirros_test" \

--file cirros-0.3.5-x86_64-disk.img \

--disk-format qcow2 --container-format bare \

--public
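Once the upload finishes, you can confirm that the image is registered and active, for example:

openstack image list

openstack image show cirros_test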

Nova — controller node:

mysql -uroot -padm123

create database nova_api;

create database nova;

create database nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova_api'@'localhost' IDENTIFIED BY 'nova_api';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova_api'@'%' IDENTIFIED BY 'nova_api';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova_cell0'@'localhost' IDENTIFIED BY 'nova_cell0';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova_cell0'@'%' IDENTIFIED BY 'nova_cell0';

Create the nova user:

openstack user create --domain default --password=nova nova

Grant the admin role to the nova user on the service project:

openstack role add --project service --user nova admin

Create the compute service entry:

openstack service create --name nova \

--description "OpenStack Compute" compute

Create the endpoints:

openstack endpoint create --region RegionOne \

compute public http://192.168.147.50:8774/v2.1

openstack endpoint create --region RegionOne \

compute internal http://192.168.147.50:8774/v2.1

openstack endpoint create --region RegionOne \

compute admin http://192.168.147.50:8774/v2.1

Create the placement user:

openstack user create --domain default --password=placement placement

Grant the admin role to the placement user on the service project:

openstack role add --project service --user placement admin

Create the Placement service entry:

openstack service create --name placement --description "Placement API" placement

Create the endpoints:

openstack endpoint create --region RegionOne placement public http://192.168.147.50:8778

openstack endpoint create --region RegionOne placement internal http://192.168.147.50:8778

openstack endpoint create --region RegionOne placement admin http://192.168.147.50:8778

Install the packages:

yum install openstack-nova-api openstack-nova-conductor \

openstack-nova-console openstack-nova-novncproxy \

openstack-nova-scheduler openstack-nova-placement-api -y

Edit the configuration file /etc/nova/nova.conf:

[DEFAULT]

...

enabled_apis = osapi_compute,metadata

[api_database]

...

connection = mysql://nova_api:nova_api@192.168.147.50/nova_api

[database]

...

connection = mysql://nova:nova@192.168.147.50/nova

[DEFAULT]

...

transport_url = rabbit://openstack:openstack@192.168.147.50

[api]

...

auth_strategy = keystone

[keystone_authtoken]

...

auth_uri = http://192.168.147.50:5000

auth_url = http://192.168.147.50:35357

memcached_servers = 192.168.147.50:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = nova

[DEFAULT]

...

my_ip = 192.168.147.50

[DEFAULT]

...

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[vnc]

enabled = true

...

server_listen = $my_ip

server_proxyclient_address = $my_ip

[glance]

...

api_servers = http://192.168.147.50:9292

[placement]

...

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://192.168.147.50:35357/v3

username = placement

password = placement

Add the following to /etc/httpd/conf.d/00-nova-placement-api.conf (to grant access to the Placement API under Apache 2.4):

<Directory /usr/bin>

<IfVersion >= 2.4>

Require all granted

</IfVersion>

<IfVersion < 2.4>

Order allow,deny

Allow from all

</IfVersion>

</Directory>

Restart the httpd service:

systemctl restart httpd

Sync the API database:

su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create cell1:

su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Sync the nova database:

su -s /bin/sh -c "nova-manage db sync" nova

Verify that cell0 and cell1 are registered:

nova-manage cell_v2 list_cells

Start the services:

systemctl enable openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node:

yum install openstack-nova-compute -y

Edit the configuration file /etc/nova/nova.conf:

[DEFAULT]

...

enabled_apis = osapi_compute,metadata

[DEFAULT]

...

transport_url = rabbit://openstack:openstack@192.168.147.50

[api]

...

auth_strategy = keystone

[keystone_authtoken]

...

auth_uri = http://192.168.147.50:5000

auth_url = http://192.168.147.50:35357

memcached_servers = 192.168.147.50:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = nova

[DEFAULT]

...

my_ip = 192.168.147.60

[DEFAULT]

...

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[vnc]

...

enabled = True

server_listen = 0.0.0.0

server_proxyclient_address = $my_ip

novncproxy_base_url = http://192.168.147.50:6080/vnc_auto.html

[glance]

...

api_servers = http://192.168.147.50:9292

[oslo_concurrency]

...

lock_path = /var/lib/nova/tmp

[placement]

...

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://192.168.147.50:35357/v3

username = placement

password = placement

Check whether the compute node supports hardware acceleration for virtual machines:

egrep -c '(vmx|svm)' /proc/cpuinfo

If this returns one or more, KVM can be used; if it returns zero, set virt_type = qemu instead of kvm below.

[libvirt]

...

virt_type = kvm

systemctl enable libvirtd.service openstack-nova-compute.service

systemctl start libvirtd.service openstack-nova-compute.service

Finally, run the following on the controller node:

source adminrc.sh

List the compute services; the compute node should now appear:

openstack compute service list

openstack compute service list --service nova-compute

When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them:

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Alternatively, set a discovery interval in /etc/nova/nova.conf:

[scheduler]

discover_hosts_in_cells_interval = 300
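As an additional check on the controller node, the nova-status tool installed with the Nova packages reports whether the cells and the Placement API are wired up correctly:

nova-status upgrade check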

Neutron (network node):

The following is executed on the controller node:

mysql -u root -padm123;

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \

IDENTIFIED BY 'neutron';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \

IDENTIFIED BY 'neutron';

source adminrc.sh

Create the neutron user:

openstack user create --domain default --password=neutron neutron

Grant the admin role to the neutron user on the service project:

openstack role add --project service --user neutron admin

Create the network service entry:

openstack service create --name neutron \

--description "OpenStack Networking" network

Create the endpoints:

openstack endpoint create --region RegionOne \

network public http://192.168.147.70:9696

openstack endpoint create --region RegionOne \

network internal http://192.168.147.70:9696

openstack endpoint create --region RegionOne \

network admin http://192.168.147.70:9696

Install a simple layer-2 provider network (networking option 1):

Execute the following on the network node.

Install the packages:

yum install openstack-neutron openstack-neutron-ml2 \

openstack-neutron-linuxbridge ebtables -y

Edit the configuration file /etc/neutron/neutron.conf:

[database]

...

connection = mysql://neutron:neutron@192.168.147.50/neutron

[DEFAULT]

...

core_plugin = ml2

service_plugins = router

[DEFAULT]

...

transport_url = rabbit://openstack:openstack@192.168.147.50

[DEFAULT]

...

auth_strategy = keystone

[keystone_authtoken]

...

auth_uri = http://192.168.147.50:5000

auth_url = http://192.168.147.50:35357

memcached_servers = 192.168.147.50:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = neutron

[DEFAULT]

...

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

[nova]

...

auth_url = http://192.168.147.50:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = nova

[oslo_concurrency]

...

lock_path = /var/lib/neutron/tmp

/etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

...

type_drivers = flat,vlan,gre,vxlan,geneve

# In the [ml2] section, configure the tenant network types (the official provider-network-only option would leave this empty):

[ml2]

...

tenant_network_types = vlan,gre,vxlan,geneve

[ml2]

...

mechanism_drivers = linuxbridge

[ml2]

...

extension_drivers = port_security

[ml2_type_flat]

...

flat_networks = pyth1 # the name can be anything you like

[securitygroup]

...

enable_ipset = true

/etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]

physical_interface_mappings = pyth1:PROVIDER_INTERFACE_NAME # pyth1 is the flat network name chosen above; PROVIDER_INTERFACE_NAME is the physical NIC (e.g. eno16777736)

[vxlan]

enable_vxlan = false

[securitygroup]

...

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Ensure your Linux kernel supports network bridge filters by verifying that the following sysctl values are set to 1:

net.bridge.bridge-nf-call-iptables

net.bridge.bridge-nf-call-ip6tables

vi /usr/lib/sysctl.d/00-system.conf

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

sysctl --system # reload sysctl settings, including /usr/lib/sysctl.d/00-system.conf
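If these keys report "No such file or directory", the br_netfilter kernel module is probably not loaded yet. A minimal way to load it now and on every boot (assuming the stock CentOS 7 kernel):

modprobe br_netfilter

echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables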

/etc/neutron/dhcp_agent.ini:

[DEFAULT]

...

interface_driver = linuxbridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

/etc/neutron/metadata_agent.ini

[DEFAULT]

nova_metadata_host = 192.168.147.50

metadata_proxy_shared_secret = neutron

Then, on the controller node, configure nova to use neutron:

/etc/nova/nova.conf

[neutron]

...

url = http://192.168.147.70:9696

auth_url = http://192.168.147.50:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron

service_metadata_proxy = true

metadata_proxy_shared_secret = neutron

On the network node, execute:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \

--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Controller node: systemctl restart openstack-nova-api.service

Network node:

systemctl enable neutron-server.service \

neutron-linuxbridge-agent.service neutron-dhcp-agent.service \

neutron-metadata-agent.service

systemctl start neutron-server.service \

neutron-linuxbridge-agent.service neutron-dhcp-agent.service \

neutron-metadata-agent.service
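Back on the controller node (with adminrc.sh sourced), check that the agents have registered; each agent should show a smiley in the Alive column:

source adminrc.sh

openstack network agent list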

On the compute node:

Install the packages:

yum install openstack-neutron-linuxbridge ebtables ipset -y

Edit the configuration file /etc/neutron/neutron.conf:

[DEFAULT]

...

transport_url = rabbit://openstack:openstack@192.168.147.50

[DEFAULT]

...

auth_strategy = keystone

[keystone_authtoken]

...

auth_uri = http://192.168.147.50:5000

auth_url = http://192.168.147.50:35357

memcached_servers = 192.168.147.50:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = neutron

[oslo_concurrency]

...

lock_path = /var/lib/neutron/tmp

/etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]

physical_interface_mappings = pyth1:eno16777736

[vxlan]

enable_vxlan = false

[securitygroup]

...

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Ensure the kernel supports network bridge filters by verifying that the following sysctl values are set to 1 (apply the same sysctl and br_netfilter configuration as on the network node):

net.bridge.bridge-nf-call-iptables

net.bridge.bridge-nf-call-ip6tables

Configure nova to use neutron:

/etc/nova/nova.conf

[neutron]

...

url = http://192.168.147.70:9696

auth_url = http://192.168.147.50:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron

Restart the nova-compute service:

systemctl restart openstack-nova-compute.service

Start the Linux bridge agent:

systemctl enable neutron-linuxbridge-agent.service

systemctl start neutron-linuxbridge-agent.service

Create a network:

source adminrc.sh

openstack network create --share --external \

--provider-physical-network pyth1 \

--provider-network-type flat flat-test

Create a subnet:

openstack subnet create --network flat-test \

--allocation-pool start=192.168.147.80,end=192.168.147.90 \

--dns-nameserver 192.168.147.2 --gateway 192.168.147.2 \

--subnet-range 192.168.147.0/24 flat-test-subnet

Create a flavor:

openstack flavor create --id 2 --vcpus 1 --ram 1024 --disk 2 m3.nano

Create the demo environment script demorc.sh:

#!/bin/bash

export OS_USERNAME=demo

export OS_PASSWORD=demo

export OS_PROJECT_NAME=demo

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_DOMAIN_NAME=Default

export OS_AUTH_URL=http://192.168.147.50:5000/v3

export OS_IDENTITY_API_VERSION=3

source demorc.sh

Generate a key pair:

ssh-keygen -q -N ""

Upload the public key:

openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

List key pairs:

openstack keypair list

Add security group rules for ICMP and SSH:

openstack security group rule create --proto icmp default

openstack security group rule create --proto tcp --dst-port 22 default

Look up the IDs needed to launch an instance:

openstack flavor list

openstack image list

openstack network list

openstack security group list

openstack server create --flavor m3.nano --image cirros_test --nic net-id='5e75805c-d4cf-4804-bbfa-31dedc56c9ce' --security-group default --key-name mykey queens-instance

List instances:

openstack server list
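If the instance does not reach the ACTIVE state, its details and any fault message can be inspected with:

openstack server show queens-instance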

Get the VNC console URL:

openstack console url show queens-instance

You can also check connectivity to the instance:

ping 192.168.147.85

ssh cirros@192.168.147.85

Install the dashboard (Horizon):

yum install openstack-dashboard -y

/etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "192.168.147.50"

ALLOWED_HOSTS = ['*', ]

Configure the memcached session storage service:

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {

'default': {

'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

'LOCATION': '192.168.147.50:11211',

}

}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {

"identity": 3,

"image": 2,

"volume": 2,

}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_NEUTRON_NETWORK = {

...

'enable_router': False,

'enable_quotas': False,

'enable_distributed_router': False,

'enable_ha_router': False,

'enable_lb': False,

'enable_firewall': False,

'enable_vpn': False,

'enable_fip_topology_check': False,

}

TIME_ZONE = "Asia/Shanghai"

systemctl restart httpd.service memcached.service

Verify:

http://192.168.147.50/dashboard

Log in with the admin or demo account.

A note about launching instances:

This pitfall took me an entire afternoon to resolve:

On CentOS 7.4, after creating a VM it hangs at the boot screen: the instance can obtain an IP, but you cannot log in, the console shows nothing, and the nova console log is empty. Changing virt_type under [libvirt] to qemu does not help either. This is related to the kernel, so after installing the OpenStack repository do not simply run yum update, or the VMs you create will be unusable.

Switching to CentOS 7.2 resolved the problem completely.

Finally, some screenshots of the successful deployment as proof of victory:

