Sunday, 19 February 2012

Xen Part 6: Guest Installation

I haven't managed to solve the DPMS issue yet, but the workaround (disabling monitor power saving) seems to be doing the trick in the interim. So, time to plough on and get a domU up and running.

Update: this method didn't result in a working Ubuntu Oneiric guest on my dom0 configuration. Part 7 of this series details the kernel upgrade I required to fix some of the underlying problems, and Part 8 provides revised instructions for the guest installation on the new kernel.

Guest Network Configuration

Debian advises the manual creation of a network bridge rather than using Xen's scripts, so we'll follow that path.

# apt-get install bridge-utils
# brctl addbr guestbr
# brctl addif guestbr eth0

Replace eth0 with the interface that you wish to bridge to (if you don't know, check ifconfig). You can add multiple interfaces if you like.

This is all well and good, but if you're using NetworkManager for your connection, you're likely to have just lost Internet connectivity! We need to take the interface out of NetworkManager's hands and configure the bridge to come up automatically.
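
Exactly how you do that depends on your setup, but on a stock Debian install NetworkManager's ifupdown plugin will leave alone any interface declared in /etc/network/interfaces, provided it's told that declared interfaces aren't managed. A minimal sketch of /etc/NetworkManager/NetworkManager.conf (check your existing file rather than copying this blindly):

# vim /etc/NetworkManager/NetworkManager.conf

[ifupdown]
managed=false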

# vim /etc/network/interfaces

It'll need to look something like this, substituting the appropriate interface name(s), etc:

auto lo guestbr
iface lo inet loopback
iface eth0 inet manual
iface guestbr inet dhcp
  bridge_ports eth0

Finally, perform this step to avoid an error later on when starting the domU ("missing vif-script"):


# chmod +x /etc/xen/scripts/*


Time to reboot and reclaim your networking capabilities. Hopefully!
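
Once you're back up, a quick sanity check is worthwhile (assuming the bridge is named guestbr as above). The bridge should list your physical interface as a member, and should now be the device holding your IP address:

# brctl show guestbr
# ip addr show guestbr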


If you're uncertain what to put in the interfaces file, there are countless resources online which can help. If you encounter networking issues with Xen specifically, the XenSource networking page is an invaluable reference.


PCI Passthrough with VT-d

I intend to dedicate a graphics card to one of the guests via VT-d, the preferred method. For this we need xen-pciback in the dom0 kernel. Fedora compiles xen-pciback as a module, whereas Debian builds it into the kernel. There are pluses and minuses to each approach, but from our perspective the Debian approach makes configuration marginally simpler.
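
For reference, if your dom0 kernel does ship xen-pciback as a module (the Fedora situation), the device can be handed over at runtime instead of via the kernel command line. This is a rough sketch only, using the sysfs interface and the BDF we find with lspci below; exact paths can vary by kernel version, and the unbind step only applies if a driver is currently bound:

# modprobe xen-pciback
# echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
# echo 0000:01:00.0 > /sys/bus/pci/drivers/pciback/new_slot
# echo 0000:01:00.0 > /sys/bus/pci/drivers/pciback/bind

On Debian, with pciback built in, the command line method below is all we need.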

Add the following to the GRUB_CMDLINE_LINUX entry in /etc/default/grub, where the PCI ID (in BDF notation) comes from lspci:

$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Sandy Bridge Integrated Graphics Controller (rev 09)
01:00.0 VGA compatible controller: ATI Technologies Inc Device 6718

# vim /etc/default/grub

GRUB_CMDLINE_LINUX="xen-pciback.hide=(01:00.0)"


Then issue the necessary:

# update-grub

Reboot and verify it worked (you'll get some output if it did):

# xl pci-list-assignable-devices
0000:01:00.0


LVM Preparation

I'm using LVM2 for my domu partitions because a) I want high performance, and b) I want to be able to grow the size of the domus. Using partitions would give me a) but not b). Using loopback images, i.e. files, would give me b) but not a). Using LVM gives me the best of both worlds.

The partition I have dedicated to domus (/dev/md3) needs to be configured to use LVM (pvcreate), with a volume group (vgcreate), and the necessary logical volumes (lvcreate), which will act like normal partitions for each domu.

# pvcreate /dev/md3
  Physical volume "/dev/md3" successfully created
# pvdisplay | grep "PV Size"
  PV Size               833.72 GiB / not usable 888.00 KiB
# vgcreate xendomu /dev/md3
  Volume group "xendomu" successfully created
# vgdisplay | grep "VG Size"
  VG Size               833.72 GiB

LV Preparation

It's not necessary to create an LV for the installation method we're using below, but I include this as a reference. Nothing to see here; skip to the next section. 

# lvcreate -L 50G -n ubuntu xendomu
  Logical volume "ubuntu" created
# lvdisplay | grep "LV Size"

  LV Size                50.00 GiB

There's a 50GB partition for my first domu: Ubuntu. Before the OS can be installed, the LV needs to be formatted. I'm going to be using ext3. I could have chosen ext4, the successor filesystem with superior performance in key aspects, but I've heard about a lot of odd bugs relating to PV guests on ext4, so I'm staying safe and sticking with ext3. 

# mkfs.ext3 /dev/xendomu/ubuntu
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
4096000, 7962624, 11239424

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

If we wanted, we could mount the new LV like so:

# mkdir -p /domu/ubuntu
# mount /dev/xendomu/ubuntu /domu/ubuntu
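
And since growability was half the point of choosing LVM, here's roughly what enlarging the volume later looks like. A sketch only: the filesystem must be unmounted (domu shut down), and +10G is just an arbitrary example:

# lvextend -L +10G /dev/xendomu/ubuntu
# e2fsck -f /dev/xendomu/ubuntu
# resize2fs /dev/xendomu/ubuntu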

To permanently delete the LV:

# umount /domu/ubuntu
# lvchange -an /dev/xendomu/ubuntu
# lvremove /dev/xendomu/ubuntu 

Guest Installation Methods

There are different ways we could go about installing the domu systems. Here are a few methods:
  • Manually burn an installation CD, reboot the system and install it into the LV. Of course, no seasoned Linux user would consider a solution involving a reboot. Reboots are those things performed daily by users of Microsoft Windows.
  • Manually install Debian with debootstrap, or CentOS with rpmstrap, etc.
Either of those manual methods would then require you to manually create a config file for the installation and issue an xm create -f /path/to/config command. This is unnecessarily laborious, so here are some simpler methods:
  • Use a GUI: virt-manager is a very cool solution. It's based on libvirt, a virtualisation abstraction layer which can sit on top of either KVM or Xen. I played with this under Fedora. The problem was, I found it to be quite buggy. Most of the bugs can be worked around, but in some senses I was left wondering if it wasn't simpler to use the CLI in the first place.
  • Use the complementary CLI tool, virt-install. It's developed by the same team as virt-manager and uses the same backend, so it can be seen as virt-manager with neither the GUI nor the bugs (there's a rough example after this list).
  • Use xen-create-image, part of xen-tools. This is what we'll be doing below.
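
For completeness, a virt-install invocation for a PV guest looks roughly like the following. This is a hedged sketch and not what we'll run below: the --location URL needs to point at a netboot installer tree for your chosen release and mirror, and the name, RAM and vCPU values are purely illustrative:

# virt-install --connect xen:/// --paravirt \
    --name ubuntu --ram 1024 --vcpus 2 \
    --disk path=/dev/xendomu/ubuntu \
    --network bridge=guestbr --nographics \
    --location http://gb.archive.ubuntu.com/ubuntu/dists/oneiric/main/installer-amd64/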

First, make sure the prerequisites are installed:

# apt-get install xen-tools debootstrap

xen-create-image can be run with a mass of command line arguments, which define all the aspects of our domu. A more workable approach is to amend its configuration file, which holds the default value for every parameter. When we finally run the command, it will take its settings from the amended configuration file, with any parameters we supply acting as overrides. 

xen-create-image Configuration

# vim /etc/xen-tools/xen-tools.conf

1) Storage type. Tell the script that we're using LVM storage and provide the VG name:

lvm = xendomu

Note that EVMS is supported. I haven't tried this yet, but it sounds like a superb way of managing your storage. 

2) Installation method. The installation method section gives us a number of interesting options:
  • debootstrap (for Debian systems)
  • rpmstrap (for CentOS, Fedora etc.)
  • rinse (an alternative for CentOS, Fedora etc.) 
  • from an existing installation directory
  • from an existing tar file
As I'm installing Ubuntu, which is a Debian-based OS, I'll be using debootstrap, which happens to be the default - so nothing to change here.

3) Disk. The tool is going to create a Logical Volume (LV) of the specified size on the VG we specified earlier, i.e. the below will result in xen-create-image running a command like lvcreate -L 50G -n <hostname>-disk xendomu

size = 50Gb

4) Memory. Choose how much RAM to allocate to the domu. I'll give Ubuntu 8GB for the time being - this can be changed later. Also, since I'm allocating so much RAM, I don't see the point in enabling a swap file. If you're allocating much less RAM, you may want to leave this enabled and possibly even increase its size (swap = 1024Mb, for example). Note that swap space is assigned by creating a dedicated LV on the provided VG.

memory = 8192Mb
noswap = 1

5) Distribution. Pick the distribution to install. This field takes the codename of the distribution version, e.g. oneiric for Ubuntu 11.10 Oneiric Ocelot. Other examples are centos5 for CentOS 5 and squeeze for Debian 6 stable. You can find the distributions supported by your copy of xen-tools in the hook script directory: /usr/lib/xen-tools

dist = oneiric

6) Bridge. Set the bridge name to the one created earlier.


bridge = guestbr 

7) Root password. By default, a generated root password is set. I'd rather set my own.

passwd = 1

8) Architecture. I doubt this is necessary, but I'm setting it explicitly just in case. If you don't have a 64-bit system and dom0 OS, ignore this.

arch = amd64
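
Pulling those eight settings together, the lines I've changed or confirmed in /etc/xen-tools/xen-tools.conf now read:

lvm = xendomu
size = 50Gb
memory = 8192Mb
noswap = 1
dist = oneiric
bridge = guestbr
passwd = 1
arch = amd64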

Guest Installation

Now that's all set up, all we need to do is invoke the command. Note that I've provided a few parameters:
  • hostname is mandatory, and according to the manpage should really be fully qualified, e.g. hostname.domain.com. I'm being rebellious.
  • vcpus sets the number of virtual CPUs to expose to the guest (the default is 1). I have a 4-core HT system, exposing 8 cores. I'll probably pin one of them to dom0 later on, so for the time being I'll just expose 7 to this domu.
  • ip sets the guest's IP address; use this if DHCP is disabled.
# xen-create-image --hostname=ace2x1 --vcpus=7

This will take some time, as it sets up the LV and downloads reams of data from the mirror you specified. 
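
As an aside, the dom0 pinning mentioned above is normally done on the Xen command line rather than per guest. A sketch, assuming Debian's GRUB 2 handling of Xen (GRUB_CMDLINE_XEN in /etc/default/grub) and that you want dom0 restricted to a single, pinned vCPU (to fully dedicate that core you'd also keep the guest off it with a cpus= line in its config):

# vim /etc/default/grub

GRUB_CMDLINE_XEN="dom0_max_vcpus=1 dom0_vcpus_pin"

# update-grub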

It Didn't Work!

It didn't work for me either. Look in /var/log/xen-tools for some logs explaining what went wrong.

1) If you find the error: 

"We are trying to configure an installation of <distribution> in <location> - but there is no hook directory for us to use. This means that we would not know how to configure this installation."

Then you've provided a distribution value for which xen-tools doesn't have the necessary hook files. Let's see which distributions xen-tools supports:

# dpkg --status xen-tools | grep Version
Version: 4.2-1
# ls /usr/lib/xen-tools/ | grep natty
natty.d
# ls /usr/lib/xen-tools/ | grep oneiric

So, xen-tools 4.2-1 can install Ubuntu Natty (11.04), but doesn't know about Ubuntu Oneiric (11.10). As it so happens, the version of debootstrap I have doesn't support Oneiric either. Luckily, this is all easy to resolve:

# ln -s /usr/lib/xen-tools/karmic.d /usr/lib/xen-tools/oneiric.d
# ln -s /usr/share/debootstrap/scripts/gutsy /usr/share/debootstrap/scripts/oneiric

2) If you find the error:

"E: Failed getting release file <URL>"

then you should try setting the mirror location manually.

# vim /etc/xen-tools/xen-tools.conf

For Ubuntu distributions, comment out (add a # before) the line

mirror = `xt-guess-suite-and-mirror --mirror`

and uncomment (remove the # before) the line

mirror = http://gb.archive.ubuntu.com/ubuntu/

For other distributions, you're going to have to find out the URL and set it yourself.

3) If you find other errors, alas. You're on your own.

It Worked!

Great stuff!

# mv /etc/xen/<hostname>.cfg /etc/xen/<hostname>

The domU's Xen configuration file now resides at /etc/xen/<hostname>. It's worth having a look at. Note that you can set the kernel/initrd, amount of RAM, number of CPUs, disk devices, networking settings (including MAC address and hostname), and poweroff/reboot/crash behaviour.
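
For orientation, a xen-tools-generated PV config looks roughly like this; the kernel paths, device names and values below are illustrative rather than what yours will actually contain:

kernel      = '/boot/vmlinuz-<version>-amd64'
ramdisk     = '/boot/initrd.img-<version>-amd64'
memory      = '8192'
vcpus       = '7'
name        = 'ace2x1'
disk        = [ 'phy:/dev/xendomu/ace2x1-disk,xvda2,w' ]
root        = '/dev/xvda2 ro'
vif         = [ 'mac=xx:xx:xx:xx:xx:xx' ]
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'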

# vim /etc/xen/<hostname>

We need to add the name of the network bridge we created, something like this:

vif        = [ 'mac=xx:xx:xx:xx:xx:xx,bridge=guestbr' ]
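
If you'd rather pick the MAC yourself than keep the generated one, the convention is to use Xen's 00:16:3e OUI for the first three octets and choose the remainder yourself, e.g. (an illustrative address, not one you must use):

vif        = [ 'mac=00:16:3e:aa:bb:cc,bridge=guestbr' ]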

Ensure vcpus is set correctly.

After that, fire her up!

# xm create <hostname>
# xm list
# xm console <hostname>
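
Once you're attached with xm console, Ctrl-] detaches and drops you back into dom0. From there the guest can be stopped cleanly or forcibly with the standard subcommands:

# xm shutdown <hostname>
# xm destroy <hostname>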


Next > Part 7: The Move to Wheezy
