Partitioning for compute nodes (sizes in MB):

sda1
  boot          500M
sda2
  vg_computeX
    lv_root     10000   ext4
    lv_swap     6000
    lv_home     4996    ext4
sda3
  vg_gluster    216972
    lv_gluster  215972  xfs
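If the GlusterFS partition is not created by the installer, a rough manual sketch with LVM is shown below (the device name /dev/sda3 is an assumption):

pvcreate /dev/sda3
vgcreate vg_gluster /dev/sda3
lvcreate -n lv_gluster -L 215972M vg_gluster
mkfs.xfs /dev/vg_gluster/lv_gluster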
During the partitioning of the controller node, the VM image
partition must be mounted at /var/lib/glance/images.
The following are the initial steps that need to be followed prior to
running the installation scripts:
yum update -y
yum install -y man nano emacs git
git clone git@github.com:beloglazov/openstack-centos-kvm-glusterfs.git
## Volumes
It is necessary to have a logical volume group called nova-volumes,
which will be used for allocating storage volumes and attaching them
to VMs. It is also important to have enough space mounted at
/var/lib/glance/images, which Glance uses for storing VM images.
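A minimal sketch of creating the nova-volumes group, assuming a spare partition /dev/sdb1 (the device name is hypothetical):

pvcreate /dev/sdb1
vgcreate nova-volumes /dev/sdb1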
## Nova Configuration
/etc/nova/nova.conf is copied to all compute nodes, as well as to the
controller, which runs the other nova services. In other words, there
is a single configuration file for the controller and compute nodes.
nova.conf should have its owner set to root:nova, for example as
follows:
groupadd nova
usermod -g nova nova
chown -R root:nova /etc/nova
chmod 640 /etc/nova/nova.conf
## Authentication
Keystone allows two types of authentication for administration action
like creating users, tenants, etc:
1. Using an admin token and admin_port (35357). Example:
keystone --token=3ab103bf9aaef6b336bf --endpoint=http://controller:35357/v2.0 user-list
2. Using an admin user and public_port (5000). Example:
keystone --os_username=admin --os_tenant_name=admin --os_password=oisjdfoisaehf --os_auth_url=http://controller:5000/v2.0 user-list
Services can also authenticate in one of two ways. The first is to
share the admin token between the services and authenticate with
Keystone using that token.
However, it is also possible to use dedicated users created in Keystone
for each service. By default these users are: nova, glance, etc. The
service users are assigned to the service tenant and given the admin
role in that tenant.
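A rough sketch of creating such a service user and granting it the admin role in the service tenant (the IDs are placeholders, and flag spellings varied between keystone client versions):

keystone --token=3ab103bf9aaef6b336bf --endpoint=http://controller:35357/v2.0 user-create --name=nova --pass=knvbsoinrvrv --tenant_id=<service-tenant-id>
keystone --token=3ab103bf9aaef6b336bf --endpoint=http://controller:35357/v2.0 user-role-add --user=<nova-user-id> --role=<admin-role-id> --tenant_id=<service-tenant-id>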
Here is an example of password-based authentication for nova:
nova --os_username=nova --os_password=knvbsoinrvrv --os_tenant_name=service --os_auth_url=http://controller:5000/v2.0 image-list
One of the two sets of authentication parameters must be specified
in /etc/nova/api-paste.ini. The first option is to set up
token-based authentication, like the following:
auth_host = controller
auth_protocol = http
admin_token = 3ab103bf9aaef6b336bf
The second option is to set up the password-based authentication, as
follows:
auth_host = controller
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = knvbsoinrvrv
In my opinion, password-based authentication is preferable, since it
uses Keystone's database backend and is probably more advanced.
Even though the user name and password are specified in the config
file, it is still necessary to provide these credentials when using the
command line client. This is probably done to provide extra security
against unauthorized access from the command line.
## Logging into VMs
Nova supports injection of SSH keys into VM instances for password-less
authentication. Example:
nova keypair-add test > test.pem
chmod 600 test.pem
nova boot --image cirros-0.3.0-x86_64 --flavor m1.small --key_name test myfirst-server
## Network configuration
Configuration types:
1. Flat Mode. Public IP addresses from a specified range are injected
into instances on launch. This only works on Linux systems that keep
network configuration in /etc/network/interfaces
network_manager=nova.network.manager.FlatManager
2. Flat DHCP Mode. Nova compute runs a DHCP server listening on the
created bridge that assigns public IP addresses to instances. There
must be only one host running nova-network. The network_host option in
nova.conf specifies which host nova-network is running on. The bridge
is specified using the flat_network_bridge option. There is also an
option to specify the starting IP address for VMs in the subnet
(a sample configuration is sketched after this list).
network_manager=nova.network.manager.FlatDHCPManager
3. VLAN Mode. VM instances are assigned private IP addresses from
networks created for each tenant / project. Instances are accessed
through a special VPN VM instance.
network_manager=nova.network.manager.VlanManager
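A rough sketch of the FlatDHCP-related nova.conf options; the bridge, interface names, and address range below are assumptions:

network_manager=nova.network.manager.FlatDHCPManager
flat_network_bridge=br100
flat_interface=eth1
public_interface=eth0
fixed_range=10.0.0.0/24
flat_network_dhcp_start=10.0.0.10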
Nova runs a metadata service on http://169.254.169.254 that VM
instances query to obtain SSH keys and other user data. This service
runs as a part of nova-api and must be enabled via the enabled_apis
configuration option (it is enabled by default). nova-network configures
iptables to NAT port 80 of 169.254.169.254 to the IP address specified
in metadata_host and the port specified in metadata_port, both
configured in nova.conf (the defaults are the IP address of the
nova-network host and 8775). If nova-api and nova-network are running
on different hosts, the metadata_host option should point to the IP
address of nova-api.
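A minimal sketch of the corresponding nova.conf options, assuming nova-api runs on the controller (the IP address below is hypothetical):

enabled_apis=ec2,osapi_compute,osapi_volume,metadata
metadata_host=10.0.0.1
metadata_port=8775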
To enable SSH and ping to VMs, the following commands must be run:
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
I plan to run nova-network in the flat DHCP mode on compute1 and try
to assign IP addresses from the same subnet that the other compute
nodes are in. The available IP addresses for the VMs will be restricted
by the flat_network_dhcp_start option, since several of the starting IP
addresses are used by the physical machines. I'll try to reuse the
bridges that are already created. What I need to do is uninstall the
existing DHCP server and let nova-network run its own.
It should be possible to modify /etc/dnsmasq.conf to specify static IP
addresses for the physical machines. It is also possible to specify
host names (as dnsmasq acts as a DNS server); therefore, the
configuration in the hosts file can be omitted.
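A rough sketch of such /etc/dnsmasq.conf entries (the MAC addresses, host names, and IP addresses are hypothetical):

dhcp-host=52:54:00:aa:bb:01,compute1,10.0.0.2
dhcp-host=52:54:00:aa:bb:02,compute2,10.0.0.3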
## Difference between nova-manage and nova
nova-manage - can be run by the admin user on the host machine.
nova - can be run by any user from any machine, but it requires the
user to specify the authentication URL, as well as the user name and
password.
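For example, the same kind of listing with each tool (using the admin credentials from the earlier example):

nova-manage service list
nova --os_username=admin --os_password=oisjdfoisaehf --os_tenant_name=admin --os_auth_url=http://controller:5000/v2.0 list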
## Volumes
A volume can only be attached to one instance at a time. The volume
manager dynamically creates a logical volume in the nova-volumes volume
group when requested (nova volume-create). The created volume can
then be attached to a VM instance using nova volume-attach.
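For example, creating a 10 GB volume and attaching it to an instance (the instance ID, volume ID, and device name are placeholders):

nova volume-create 10
nova volume-attach <instance-id> <volume-id> /dev/vdb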
Compute nodes communicate with the volume server through iSCSI. The
server must run iscsitarget, and the compute nodes must run
open-iscsi. Once these services are running, the nova-volume
service can be started on the server.
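A rough sketch of getting these services running; the CentOS package and service names below (scsi-target-utils/tgtd on the server, iscsi-initiator-utils/iscsid on the compute nodes) are assumptions about the equivalents of the packages mentioned above:

yum install -y scsi-target-utils        # on the volume server
service tgtd start
yum install -y iscsi-initiator-utils    # on each compute node
service iscsid start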
## GlusterFS as a VM storage for Live Migration
The usage of GlusterFS depends on the instances_path option configured
in OpenStack. This option specifies the path where VM instances are
stored. The default value is /var/lib/nova/instances
The setup consists of mounting a GlusterFS volume into
/var/lib/nova/instances. Another option is to mount the GlusterFS
volume to another directory and change the instances_path to point to
that directory.
!!! The shared storage must be mounted on the controller as well to
pass the test.
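A minimal sketch of mounting a GlusterFS volume at the default instances_path (the volume name nova-inst is hypothetical):

mount -t glusterfs controller:/nova-inst /var/lib/nova/instances

The corresponding /etc/fstab entry:

controller:/nova-inst /var/lib/nova/instances glusterfs defaults,_netdev 0 0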
## Glance troubleshooting
Sometimes the glance service fails to start on OS boot. The reason
might be that it requires some services that are not yet available.
The solution is just to restart the glance service.
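Assuming the CentOS OpenStack packages, the service names are presumably:

service openstack-glance-api restart
service openstack-glance-registry restart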
## Nova compute troubleshooting
There was a problem where libvirtd could not open a char device, with
the following error message:
15391: error : qemuProcessReadLogOutput:1005 : internal error Process exited while reading console log output: chardev: opening backend "file" failed
And then the following error:
error : qemuProcessReadLogOutput:1005 : internal error Process exited while reading console log output: char device redirected to /dev/pts/3
qemu-kvm: -drive file=/var/lib/nova/instances/instance-00000015/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none: could not open disk image /var/lib/nova/instances/instance-00000015/disk: Permission denied
Both problems were solved by setting the user and group in
/etc/libvirt/libvirtd.conf as follows:
user = "nova"
group = "nova"
And also changing the ownership as follows:
chown -R nova:nova /var/cache/libvirt
chown -R nova:nova /var/run/libvirt
chown -R nova:nova /var/lib/libvirt
## Nova network troubleshooting
We are running nova-network on the gateway instead of the controller,
since our controller is not running nova-compute. This has not worked,
probably because the network configuration of the controller differs
from that of the compute hosts (different network interfaces are
used).
## nova-network problems
If, after starting, nova-network gets stuck on "Attempting to grab
file lock "iptables" for method "apply"", the solution is:
rm /var/lib/nova/tmp/nova-iptables.lock
Source: https://answers.launchpad.net/nova/+question/200985