Xen is essentially a hypervisor: a small, specialised kernel which can run multiple operating systems simultaneously on one machine. It divides these OSs into domains with two primary privilege levels. One OS, the host OS, runs in what is known as domain 0 (dom0); dom0 has complete control over the system.
One can then install guest OSs, which run in either paravirtualised or fully virtualised mode. These are heavily restricted: they either run on fully emulated hardware (full virtualisation) or use hypercalls to communicate with the Xen hypervisor (paravirtualisation). They run in what is known as domain U (domU).
So, first we install our dom0 OS (the host OS), then install Xen onto that, replacing the native Linux kernel with a Xen-enabled one. Finally, we reboot into our new Xen kernel in dom0, and proceed to create virtual machines in which to install the domU OSs (the guest OSs).
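As a taste of what we're working towards: once Xen is installed and we've rebooted into it (covered later in this series), a quick sanity check from within dom0 looks roughly like the sketch below. This assumes the xm toolstack that ships with Xen 4.0 on Debian 6; the exact output will vary with your hardware.

    # List running domains; on a fresh Xen system only dom0 exists
    xm list
    # Name          ID   Mem VCPUs      State   Time(s)
    # Domain-0       0  1024     2     r-----      45.3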
Dom0 OS
Dom0 can be more or less any Linux distro you choose, although some are better choices than others. I found Fedora 16 the simplest to get Xen running on, whilst I had difficulty with OpenSUSE and Debian.
Debian is well known as the sysadmin's favourite distro, due in large part to its legendary stability and its fantastic APT-based package management system. It seems like the perfect base on which to build a rock-solid Xen system.
I already gave Debian a try and completely failed to load the Xen kernel, due to a bug which proved elusive. Yet, aside from anything else, Debian is the distro I have the most experience with, and I prefer it over RHEL-based distros (on which I found it relatively simple to get Xen running). What you use for your dom0 is your own choice, but I have chosen to start over with Debian – and I intend to get it working. The instructions that follow assume you too are using Debian 6.
Installation of Debian
For the time being, all we have to do is install a standard Debian distro via the usual methods. If installing a Linux distribution is new to you, please seek out some tutorials on the web – there are hundreds, if not thousands, to choose from. We will be making use of a GUI, and I recommend GNOME – the default – unless you have a particular preference otherwise.
I chose to use two 1TB drives (Samsung Spinpoint F3 HD103SJ, one of the fastest 7.2krpm drives on the market) in a RAID1 mirrored configuration for redundancy. For the partitioning, I went for a simple setup: a single partition for the host OS, allowing a whole 100GB to ensure there's absolutely no possibility of running out of space; a total of 10GB of swap space (one 5GB partition on each drive, unmirrored); and finally the remainder (about 895GB) for guest OSs, which I plan to install to LVM logical volumes within that partition.
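As a preview of how those guest volumes get created later on (the installer can set up the volume group for you; the names /dev/md1, vg_guests and guest1-disk below are purely illustrative):

    # Mark the large RAID1 device as an LVM physical volume
    pvcreate /dev/md1
    # Create a volume group on it to hold guest disks
    vgcreate vg_guests /dev/md1
    # Carve out a 20GB logical volume for a first guest
    lvcreate -L 20G -n guest1-disk vg_guests
    # Confirm the new volume exists
    lvs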
Note: you do not, by any means, need to use LVM for guest OSs, nor do you need to follow my partitioning structure or sizing, nor use a RAID configuration at all. This is completely up to you: you could use a single drive – or part thereof – and just select automatic partitioning during the installation process. Alternatively, you could surpass my feeble attempts and run hardware RAID10 across four SSDs – it's your call.
If you need assistance partitioning your drives, I advise a web search for tutorials on the process, of which there are many. Briefly, however: if you want to use a RAID1 setup like mine, choose manual partitioning during the installation process. Wipe both drives if necessary (i.e. remove any existing partitions, which will of course destroy any existing data on them), and create an identical partition layout on both 1TB drives:
- 100GB partition, type: physical volume for RAID, with the bootable flag on
- 5GB partition, type: swap area (we are not going to mirror these partitions)
- <remaining space, around 895GB>, type: physical volume for RAID
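The installer handles all of this interactively, but for reference, an equivalent layout could be created non-interactively with parted, along these lines (a sketch only – the device name and sizes match my setup; repeat for the second drive, /dev/sdb):

    # New MBR partition table (destroys existing data!)
    parted -s /dev/sda mklabel msdos
    # 100GB partition for the RAID1 array holding the root filesystem
    parted -s /dev/sda mkpart primary 1MiB 100GB
    parted -s /dev/sda set 1 raid on
    parted -s /dev/sda set 1 boot on
    # 5GB swap partition (not mirrored)
    parted -s /dev/sda mkpart primary linux-swap 100GB 105GB
    # The remainder for the RAID1 array backing LVM
    parted -s /dev/sda mkpart primary 105GB 100%
    parted -s /dev/sda set 3 raid on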
Then choose to set up software RAID. We need to create two RAID1 MD devices: first create one across both 100GB partitions (2 active devices, 0 spares), then create another across both <remaining space> partitions (2 active devices, 0 spares).
Select the first (100GB) RAID device and create an ext4 filesystem on it with the root (/) mount point. Select the second (<remaining space>) RAID device and mark it as a physical volume for LVM.
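Once the installer has finished and the system is up, it's worth verifying that both arrays are healthy and that LVM sees the second one. Roughly (the md device names reflect my setup and may differ on yours):

    # Overview of all software RAID arrays and their sync status
    cat /proc/mdstat
    # Detailed health of each array
    mdadm --detail /dev/md0   # the 100GB array holding /
    mdadm --detail /dev/md1   # the large array backing LVM
    # Confirm LVM recognises the large array as a physical volume
    pvs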
Finally, in the latter stages of the installation process, install your chosen bootloader (I'd recommend GRUB if you have no preference) to the MBR of the first drive of the set, e.g. /dev/sda.
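One optional extra, since we're mirroring: with the boot code on only the first drive's MBR, the machine won't boot if that drive dies. From the installed system, GRUB can be written to the second drive as well – a quick sketch, assuming GRUB 2 (Debian 6's default) and my device names:

    # Install the GRUB boot code to the second drive too,
    # so the system can still boot if /dev/sda fails
    grub-install /dev/sdb
    # Regenerate the GRUB configuration for good measure
    update-grub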
I won't continue to give advice specifically for Fedora, but this one is worth mentioning: Fedora 16 has a bug (hopefully fixed by now) which can cause the OS not to boot when it is installed to a software RAID device. If you encounter the error "No such device" upon booting your new Fedora installation on software RAID, this may help.
Next > Part 4: Booting into dom0