Alternatives
One alternative is LXC (Linux Containers), a lightweight virtualization method that runs multiple isolated units (containers, akin to chroot). KVM provides much stronger isolation than LXC, but the latter performs better.

Installation
Configuration files policy
A useful convention, before editing a configuration file for the first time, is to make a copy of it in the same directory where it is located, with a .bak-default extension added. I have authored a script /opt/bak/sbin/bak which backs up the configuration into /root/bak-HOSTNAME-HOSTID.tgz (readable only by root). This convention also lets you see at a glance what changed in each configuration file with respect to the default version shipped by upstream:

# vimdiff -o file.conf file.conf.bak-default
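The convention can be wrapped in a tiny helper. This is only a sketch (the name bakdefault is made up here, it is not my /opt/bak/sbin/bak script):

```shell
#!/bin/sh
# Hypothetical helper ("bakdefault" is an illustrative name): back up a
# config file next to itself with a .bak-default suffix, but only the
# first time, so the copy always reflects the pristine upstream version.
bakdefault() {
    for f in "$@"; do
        [ -e "$f.bak-default" ] || cp -p "$f" "$f.bak-default"
    done
}
```

Run it as bakdefault /etc/ssh/sshd_config before the first edit; repeated runs will not clobber the original backup.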
Bare OS
Use the 64-bit alternate install CD from the LTS Ubuntu release download page. Be sure to press F4 and select "Install a command-line system" in order to get a minimal installation, and preferably select expert mode. Set up LVM to partition sda. LVM provides VMs with direct filesystem access, speeding up disk I/O considerably. Allocate enough space for logical partitions in the volume group: a swap partition (e.g. equal to the amount of memory) and the rest of the space as a single root partition (at least 5 GB). Be sure to install openssh-server as an additional package. Set the system clock to UTC. Re-enter the BIOS and reset the boot order if you changed it in order to boot from the DVD.

Network
After first boot, configure the network (static IP and bridge interface so that guests can have full LAN access):

# apt-get install bridge-utils
Change these lines in /etc/network/interfaces:
auto eth0
iface eth0 inet dhcp
to:
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address SERVER_IP_ADDRESS_EG_192.168.1.1
    netmask NETMASK_EG_255.255.255.0
    network NETWORK_EG_192.168.1.0
    broadcast BROADCAST_EG_192.168.1.255
    gateway GATEWAY_EG_192.168.1.254
    dns-nameservers DNS_EG_192.168.1.254
    bridge_ports eth0
    bridge_fd 9
    bridge_hello 2
    bridge_maxage 12
    bridge_stp off
bridge_fd is the forwarding delay for interfaces joining the bridge: how long it takes before the interface is able to do anything. During this time the bridge discovers other bridges and checks that no loops are created. For a better description and the rationale, read up on the spanning tree protocol; see brctl(8) for an introduction. Here the spanning tree protocol is turned off with bridge_stp off, since no loops are possible in this topology. See bridge-utils-interfaces(5).
After this change you can either reboot:
# reboot
or just reload the network:
# /etc/init.d/networking restart
Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces
Reconfiguring network interfaces...
ssh stop/waiting
ssh start/running, process 2292
Waiting for br0 to get ready (MAXWAIT is 20 seconds).
ssh stop/waiting
ssh start/running, process 2559
[ OK ]
# ifconfig
# brctl show
# brctl show br0
Note virbr0 is another bridge created automatically when installing KVM, but that's only for NAT connectivity of VMs, so br0 still needs to be created.
KVM installation
Make sure there is hardware support for virtualization:

# apt-get install cpu-checker
# kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
# apt-get install libvirt-bin
Add your administrative user (root in my case) to the relevant groups:

# adduser root libvirtd
# adduser root kvm
If you want to create Ubuntu-based VMs as described below, install this handy script too:
# apt-get install python-vm-builder
Usage
Managing the VMs
Use virsh, the virtual shell:

# virsh --connect qemu:///system
Get a list of available commands:
virsh # help
Show running VMs:

virsh # list

or all VMs, including stopped ones:

virsh # list --all
Update VM configuration (to be done before the first boot and at every configuration edit):
virsh # define /etc/libvirt/qemu/vm1.xml
Start/shutdown/suspend/resume/pull the power plug of a VM:
virsh # start/shutdown/suspend/resume/destroy vm1
Log in to the VM:

# ssh MY_LOGIN_NAME@VM_IP_ADDRESS
password: TEMPORARY_PASSWORD
Remove a VM:
virsh # shutdown vm1
virsh # undefine vm1
Edit configuration, e.g. change cores, memory, etc:
# virsh edit vm1
Mount/change a CD-ROM ISO (keep ISO images in /var/lib/libvirt/images):
# virsh attach-disk vm1 /path/to/image.iso hdc --driver file --type cdrom --mode readonly
Remove a CD-ROM iso:
# virsh attach-disk vm1 " " hdc --driver file --type cdrom --mode readonly
Creating an LVM-Based Ubuntu VM (headless installation)
In this example I am creating a simple Ubuntu JeOS machine named lemon, with 2 GB memory, dual core and a 200 GB disk mapped to one logical volume. For more details see vmbuilder(1):

# lvcreate -n lemon -L 421888 VOLUME_GROUP
# vmbuilder kvm ubuntu --libvirt qemu:///system --suite=precise --flavour=virtual --arch=amd64 \
  --mirror=http://de.archive.ubuntu.com/ubuntu -o --ip=IP_EG_192.168.1.2 \
  --gw=GATEWAY_EG_192.168.1.254 --mask=NETMASK_EG_255.255.255.0 \
  --dns=DNS_EG_192.168.1.252 --user=YOUR_USERNAME --name="YOUR NAME" --pass=YOUR_PASSWORD \
  --addpkg=acpid --addpkg=openssh-server --mem=2048 --hostname=lemon \
  --bridge=br0 --raw=/dev/VOLUME_GROUP/lemon --part=/tmp/vmbuilder.partition
If you specify neither --rootsize nor --swapsize, vmbuilder defaults to 4 GB and 1 GB respectively and leaves the rest as free space, which is probably not what you want. For advanced partition schemes you had better use --part.
option | meaning
---|---
kvm | hypervisor
ubuntu | distro
--libvirt qemu:///system | needed if you want to use virsh to manage your virtual machines
--arch | i386 or amd64
--mem | virtual RAM in megabytes
--bridge | bridge interface to connect the VM to
--ip | static IP address
--gw | gateway address
--mask | netmask
--dns | DNS server IP
--raw | raw device to create the partitions in, e.g. an LVM logical volume
--rootsize | root FS size in MB, default 4096. Ignored if --part is used.
--swapsize | swap size in MB, default 1024. Ignored if --part is used.
--part | text file containing the partition layout, e.g.: root 10240, swap 2048, /data 409600
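For example, the /tmp/vmbuilder.partition file passed via --part could be created like this (sizes in megabytes, matching the layout in the table):

```shell
# Write a vmbuilder partition layout: 10 GB root, 2 GB swap, 400 GB /data.
cat > /tmp/vmbuilder.partition <<'EOF'
root 10240
swap 2048
/data 409600
EOF
```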
Finally, start the new VM:

# virsh start lemon
Cloning a VM
First, make sure the VM is shut down:

# virsh list --all
 Id Name                 State
----------------------------------
  3 vm1                  running
# virsh shutdown vm1
# virsh list --all
 Id Name                 State
----------------------------------
  - vm1                  shut off
# lvcreate -L1024G -n vm1clone VOLUME_GROUP_NAME
# virt-clone -o vm1 -n vm1clone -f /dev/VOLUME_GROUP_NAME/vm1clone
If you are cloning from a smaller to a bigger virtual disk, you need to extend both the partition and the filesystem in order for the guest OS to make use of all space. First, use virt-manager to log in to the VM and re-create the partition using fdisk, then reboot and resize it, as explained here:
# fdisk /dev/vda
p
d
1
n
p
1
p
w
# reboot
# resize2fs /dev/vda1
In any case, the ext3/4 online resize algorithm can be slow if your partition is big. A slightly faster approach to cloning a small VM into a big disk is to start from an empty filesystem and copy the directory structure through the virtual network. But formatting a big partition (e.g. 1 TB) can also be slow, so do not expect miracles. See also this serverfault page about speeding up formatting for large partitions.
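Copying the directory structure can be done with a tar pipe; this is only a sketch, with the host name and paths as illustrative placeholders:

```shell
# Copy a directory tree with a tar pipe, preserving permissions. Across the
# network, put ssh in the middle of the pipe, for example:
#   tar -C /data -cf - . | ssh root@newvm 'tar -C /data -xpf -'
# ('newvm' and the paths are illustrative placeholders.)
copytree() {  # copytree SRC_DIR DST_DIR
    tar -C "$1" -cf - . | tar -C "$2" -xpf -
}
```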
Snapshotting a VM
Please refer to the Ubuntu Wiki for background information and details. Here is an example of snapshotting the vm1 VM and mounting the snapshot read-only (e.g. for backup purposes):

# lvcreate -s -n vm1-snap -L 2G VOLUME_GROUP/vm1
Logical volume "vm1-snap" created
# kpartx -av /dev/VOLUME_GROUP/vm1-snap
add map VOLUME_GROUP-vm1--snap1 (252:11): 0 2147472747 linear /dev/VOLUME_GROUP/vm1-snap 63
# mkdir -p /vm-mnt/vm1-snap
# mount -t ext4 -o ro /dev/mapper/VOLUME_GROUP-vm1--snap1 /vm-mnt/vm1-snap
... now modify vm1 and see that vm1-snap remains the same ...
You can see how much space the snapshot is using:
# lvs
LV       VG    Attr   LSize Origin Snap%  Move Log Copy%  Convert
...
vm1      volum owi-ao 1.00t
vm1-snap volum swi-ao 2.00g vm1    0.01
...
# umount /vm-mnt/vm1-snap
# kpartx -d /dev/VOLUME_GROUP/vm1-snap
# lvremove /dev/VOLUME_GROUP/vm1-snap
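The snapshot/mount/archive/cleanup sequence above can be wrapped in a small script. This is only a sketch (VOLUME_GROUP, the 2G snapshot size and the /root backup path are assumptions), with a DRY_RUN switch so the commands can be reviewed before touching LVM:

```shell
#!/bin/sh
# Sketch: snapshot a VM's logical volume, mount it read-only, archive it,
# then tear everything down. VOLUME_GROUP and the backup path are placeholders.
# Set DRY_RUN=1 to print the commands instead of executing them.
snapbackup() {  # snapbackup VM_NAME
    vm=$1
    run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }
    run lvcreate -s -n "$vm-snap" -L 2G "VOLUME_GROUP/$vm"
    run kpartx -av "/dev/VOLUME_GROUP/$vm-snap"
    run mkdir -p "/vm-mnt/$vm-snap"
    run mount -t ext4 -o ro "/dev/mapper/VOLUME_GROUP-$vm--snap1" "/vm-mnt/$vm-snap"
    run tar -C "/vm-mnt/$vm-snap" -czf "/root/$vm-backup.tgz" .
    run umount "/vm-mnt/$vm-snap"
    run kpartx -d "/dev/VOLUME_GROUP/$vm-snap"
    run lvremove -f "/dev/VOLUME_GROUP/$vm-snap"
}
```

Running DRY_RUN=1 snapbackup vm1 prints exactly the command sequence shown in this section.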
Creating an LVM-based CentOS VM (headless installation)
This is rather specific. I needed to run the old CentOS 5.5 in a VM. We will create a new logical volume for holding ISO images to install OSes from, format it with XFS (which is optimized for large files) and mount it on /var/lib/libvirt/images/. We will also download an old CentOS 5.5 image from multiple sites, using a download accelerator:

# lvcreate -n images -L 20g VOLUME_GROUP
Logical volume "images" created
# lvs
LV     VG    Attr   LSize  Origin Snap% Move Log Copy% Convert
images volum -wi-a- 20.00g
root   volum -wi-ao  5.28g
swap_1 volum -wi-a- 31.97g
# apt-get install xfsprogs
# mkfs.xfs -b size=4096 /dev/VOLUME_GROUP/images
# cd /etc
# cp fstab fstab.bak-default
# echo '/dev/mapper/VOLUME_GROUP-images /var/lib/libvirt/images/ xfs defaults,noatime,nodev,nosuid,noexec 0 99' >>fstab
# mount /var/lib/libvirt/images/
# apt-get install aria2
# aria2c http://download.filesystems.org/linux/centos/CentOS-5.5-i386-bin-DVD.iso http://mirror.teklinks.com/centos/5.5/isos/i386/CentOS-5.5-i386-bin-DVD.iso http://media.kombinasi.net/~download/iso/centos/CentOS-5.5-i386-bin-DVD/CentOS-5.5-i386-bin-DVD.iso ftp://ftp.sandy.ru/pub/Linux/CentOS/CentOS-5.5-i386-bin-DVD/CentOS-5.5-i386-bin-DVD.iso ftp://nephi.unice.fr/linux/centos/distrib/5.5/CentOS-5.5-i386-bin-DVD.iso
Now let's create an LVM-based CentOS 5.5 VM:
# lvcreate -L12G -n centos-5.5 VOLUME_GROUP
# apt-get install virtinst
# virt-install -n centos-5.5 -r 1024 --vcpus=2 --disk path=/dev/VOLUME_GROUP/centos-5.5 \
  -c /var/lib/libvirt/images/CentOS-5.5-i386-bin-DVD.iso --graphics vnc --noautoconsole \
  --os-type linux --os-variant=rhel5.4 --network=bridge:br0
Starting install...
Creating domain... | 0 B 00:00
Domain installation still in progress. You can reconnect to the console to complete the installation process.
Enable root login, because it is needed to connect via virt-manager from your client machine and complete the graphical installation:
# sudo passwd root
Then, from your client machine:

$ virt-manager
Host reboot
Every now and then, e.g. because of system updates or for stability reasons, the host OS will need to be rebooted, which implies that all the guest OSs must be restarted too: unfortunately, pausing a guest and resuming its execution state after a reboot of the host is not implemented in KVM. To mitigate this problem, you can schedule a nightly reboot of the host using this script, which first shuts down all guests (issuing the shutdown requests in parallel to reduce overall downtime) and checks their state before rebooting the host:

#!/bin/bash
# Safely stops all VMs before rebooting the system.
# Useful to schedule a nightly reboot with cron(8).
# See: https://lists.ubuntu.com/archives/ubuntu-server/2011-May/005663.html
for vm in $(virsh -q list | awk '{ print $2 }'); do
{
virsh shutdown "$vm";
# WAIT until the machine is really powered off
until [ "$(virsh -q list --all | awk -v vm="$vm" '$2 == vm { print $3" "$4 }')" = 'shut off' ]; do
sleep 3
done
} &
done
# Wait until all the machines are powered off.
wait
# Now all VMs are shut down.
reboot "$@"
I called this script kvmsafe-reboot; it is implemented as a reboot(8) wrapper, so it should be used in place of reboot. Remember to set the autostart flag on all domains you want to come up after rebooting:
# for i in vm1 vm2 vm3...; do virsh autostart $i; done
Also make sure every VM has the acpid daemon running.
Monitoring host and/or VM performance
There are many tools for this. A simple text-mode solution is sysstat. You can also install a graph generator:

# apt-get install zenity gnuplot xsltproc
# cd /usr/local/bin
# gunzip -c /usr/sysstat/examples/sargraph.gz >sargraph
# chmod 755 sargraph
To use it, from your client machine type:
$ ssh -X host_machine sargraph
Look at the sar(1) manual page for a clear explanation of the various output parameters.
Backups
In my GitHub repository I am sharing a script to back up the system and some of the VMs on an external USB disk; put it e.g. in /usr/local/sbin/snapbak.sh. Its configuration must be in /usr/local/etc/snapbak.conf (or change the path at the beginning of snapbak.sh); this is where you configure e.g. which VMs to back up. You may want to label the USB disk (e.g. as SNAPBAKUSB) so that it can still be mounted automatically if the device name changes:

# e2label /dev/sdc1 SNAPBAKUSB
then add this line to /etc/fstab:
# echo 'LABEL=SNAPBAKUSB /mnt/hdd-usb ext3 noauto,defaults 0 0' >>/etc/fstab
Note: after a warm reboot, the USB disk was not recognized and could not be mounted, so backups would fail unless someone went into the server room and manually disconnected and reconnected the disk. I have added usb_storage to /etc/modules to solve this problem.