
vmimagemanager makes it easy to repeatably store and restore images as rsync, tgz, and cpio.bz2 archives from libvirt-based virtualisation, using either shared partitions or raw disk images.


Introduction to vmimagemanager
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

vmimagemanager was originally created for software testing and automated deployment. Its original form was a linear script, which has since been completely rewritten to make future extension easier, and the benefits are now showing.

It uses libvirt to control virtual machine execution.

It is therefore optimized for repeatedly restarting a virtual machine slot and adding or removing images or image overlays with a single command line, rather than manually shutting down, mounting the raw file system, and restarting the virtual machine (and variations on this workflow).
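
As a purely illustrative sketch of that workflow (the option names here are hypothetical, not the tool's actual interface; consult the command's help output for the real flags), the aim is for a full redeploy cycle to collapse into a single invocation along the lines of:

$vmimagemanager --slot vmim.vm.1 --restore DebianStable --start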

Features

    * Simple Slot and Virtual Machine model of deployment.
    * Per-Slot configuration with global defaults.
    * Speeds up the import and export of images to and from archives.
    * Multiple disk mounting options, including image or partition sharing.
    * Multiple archive formats, including rsync, tar.gz and cpio.bz2.
    * Very fast frequent redeploys with rsync.
    * Manages mounting domain images when the VM is down and
       unmounting them when the domain is launched.
    * Manages exporting images in tar.gz format.

Dependencies

This application makes use of the following command-line tools:

kpartx
mount
bzip2
gzip
tar
cpio
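
As an illustration of how these tools fit together (the image and mount paths are examples, not fixed by the tool), archiving a file system held inside a raw disk image looks roughly like this:

# map the partitions inside a raw image file onto /dev/mapper entries
kpartx -a -v /server/vmstore/images/DebianStable.img
# mount the first mapped partition (the device name comes from kpartx's output)
mount /dev/mapper/loop0p1 /mnt/virt.yokel.org
# archive the guest file system as tar.gz
tar -czf /server/vmstore/images/DebianStable.tar.gz -C /mnt/virt.yokel.org .
# clean up: unmount and remove the partition mappings
umount /mnt/virt.yokel.org
kpartx -d /server/vmstore/images/DebianStable.img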

It is written in Python and uses the Python magic module, and so is not installable on Red Hat Enterprise Linux 5
and below. It uses an ElementTree implementation; many should work. These are packaged as standard in Debian.

On Red Hat the following packages are needed:

python - Available in standard RHEL
python-elementtree - Available in standard RHEL
python-lxml - Available from DAG.
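
On Red Hat with the DAG repository enabled, for example:

$yum install python-elementtree python-lxml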

Limitations

    * Only works with libvirt at the moment.

Model

Virtualisation hosts will run a fixed number of "slots" upon which virtual machines can be started. The rationale for this is based on the work of many groups in HTC (1): the number of jobs per host is typically proportional to the number of CPUs/cores, memory allowing, although this varies with the I/O and CPU requirements of the jobs. On this reasoning the slot and virtual machine model was adopted. Further experimentation shows that the I/O requirements of VM workloads can greatly influence the number of virtual machines a host can support.

This simple model can be complicated further by migration between hosts. This is not a common requirement in high-throughput computing but could prove useful; it will not be investigated in the command-line version of this application.

Performance

To minimise booting time we recommend configuring direct kernel booting in libvirt. Boot speeds have noticeably decreased with Scientific Linux 6. Using rsync to store and retrieve disk images can be remarkably fast for reimaging, but LVM snapshot creation and destruction is faster; the code has been redesigned to support this as a simple plugin behind a facade, but LVM support is not yet included.
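
As an illustration (the kernel and initrd paths are placeholders for your own guest kernel), direct kernel booting is configured in the <os> element of the libvirt domain XML, skipping the guest BIOS and bootloader stages:

<os>
  <type>hvm</type>
  <kernel>/boot/guest/vmlinuz</kernel>
  <initrd>/boot/guest/initrd.img</initrd>
  <cmdline>root=/dev/vda1 console=ttyS0</cmdline>
</os>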

I have found that large SAS arrays and SSDs give good performance with virtualisation. Low-RPM single-spindle storage should be avoided on high-throughput systems.
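
For comparison, the LVM snapshot workflow mentioned above (not yet supported by vmimagemanager; the volume group and volume names here are examples) looks roughly like this:

# create a copy-on-write snapshot of a pristine system volume
lvcreate --snapshot --size 2G --name vm01snap /dev/vg.01/sysimage
# ... run the VM against /dev/vg.01/vm01snap ...
# discard all changes by destroying the snapshot
lvremove /dev/vg.01/vm01snap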

Roadmap

vmimagemanager is useful for offline backup of virtual machine systems, extracting and inserting directories, and resetting the operating system; I usually use rsync. It is meant to be useful and user-friendly, so suggestions are always welcome.
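
As an illustration of why rsync resets are fast (the paths are examples), restoring a mounted guest file system from a pristine copy only transfers the files that have changed:

# reset a mounted guest file system from a pristine image copy;
# --delete removes anything the previous run left behind
rsync -aH --numeric-ids --delete \
    /server/vmstore/images/DebianStable/ /mnt/virt.yokel.org/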

This tool is also meant to be used by other automation tools, such as automated build and testing setups. Unfortunately there is currently no automated test suite for this application; adding one is a high priority. I hope to connect the backend to an AMQP server in the near future; then it might be suitable for high-availability work.


Notes

(1) HTC and HPC: High Performance Computing became well defined and includes low-latency communication between different computers' CPUs; from this definition came the need to define High Throughput Computing, which differs in the number of synchronisation operations the computers need to perform.

Setting Up
~~~~~~~~~~

Installing prerequisites

python setup.py --help-commands

should help you get started. I prefer to only install packages (mine or other people's), but to save time you might simply run:

$python setup.py install 

I also recommend building RPMs:

$python setup.py bdist_rpm
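
bdist_rpm leaves the built package under dist/ (the exact file name depends on the version), from where it can be installed as usual:

$rpm -ivh dist/vmimagemanager-*.noarch.rpm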

On Debian and its derivatives, "deb" packages can be built instead.

Copy a template over to the correct location.

- Example for an RPM-based install:

d430:~# cp  /etc/vmimagemanager/vmimagemanager-example-d430.cfg \
           /etc/vmimagemanager/vmimagemanager.cfg
d430:~# vim /etc/vmimagemanager/vmimagemanager.cfg


Configuring
~~~~~~~~~~~

Configuration Section "vmim.local"

vmimagemanager manages your snapshots of an OS to devices, together with a large image store location. This path is set in the configuration; on my server this space is mounted under "/server/vmstore":

vmimages="/server/vmstore/images"
vmextracts="/server/vmstore/extracts"

Configuring a Slot Section

A slot section can be titled anything after a "vmim.vm." prefix. For example:

[vmim.vm.1]
HostName="DebianStable"
store_type="partition"
HostDisk="/dev/mapper/vg.01-hudson_slave_vm01"
partition=1
mount="/mnt/virt.yokel.org"
host="virt.yokel.org"
mac=["02:11:62:22:21:13"]

I now use LVM in conjunction with these scripts because I use a lot of partitions, and the current Linux kernel (2.6.18) does not support more than 15 partitions per drive, which is clearly not enough if your virtual machines have small memory demands.
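
Creating one logical volume per slot avoids that partition limit entirely; as a sketch (the volume group name and size are examples):

# one logical volume per slot, matching the HostDisk setting above
lvcreate -L 8G -n hudson_slave_vm01 vg.01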

Because of the partition limit, you may instead prefer to use disk images:

[vmim.vm.2]
HostName="pong"
store_type="kpartx"
root="/dev/mapper/d430-sysLenny"
#root="/dev/sbird/vm001sys"
swap="/dev/mapper/d430-swapLenny"
#swap="/dev/sbird/vm001swap"
#mount="/mnt/vm001.yokel.org"
mac=["02:01:63:22:51:13"]
#ip=192.168.0.28

The system can also support raw images, specified as:

[vmim.vm.2]
store_type="partition"
HostName="exampleImageBased"
HostDisk="/dev/mapper/vg.01-hudson_slave_vm01"
partition=1

These configuration fields are applied to the template for the Xen configuration, which is stored in the setting:

[vmim.local]
xenconftemplate="/etc/vmimagemanager/vmimagemanager-xen.template"
