Sunday, 22 January 2012

Booting Fedora 16: "No such device" error

If you installed Fedora 16 to a new software RAID device, you may find yourself unable to boot into it, receiving a "No such device" error. In my case, this turned out to be easily resolvable by adding the necessary RAID modules to the bootloader - the instructions below assume you're using the GRUB bootloader.

First, boot to rescue mode using your Fedora installation media.

Change root to your installed system:
# chroot /mnt/sysimage

Add the missing modules to GRUB:
# echo "GRUB_PRELOAD_MODULES=\"raid mdraid09 mdraid1x\"" >> /etc/default/grub

Regenerate GRUB's configuration file:
# grub2-mkconfig -o /boot/grub2/grub.cfg

Install GRUB to the appropriate location, changing /dev/sda as necessary. E.g., /dev/sda installs GRUB to the MBR of the first hard disk.
# grub2-install /dev/sda
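
Before rebooting, you can optionally sanity-check that the setting took and that the generated configuration really does preload the RAID modules (assuming the paths used above):
# grep PRELOAD /etc/default/grub
# grep 'insmod mdraid' /boot/grub2/grub.cfg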


Update 23 Jun: I'm happy to say that Fedora 17 doesn't suffer from this blight. The installation process correctly adds the insmod mdraid1x line.  

Xen Part 3: Installing the Host OS


Xen is, at its core, a specialised kernel (a hypervisor) which can run multiple operating systems simultaneously. It divides these OSs into two primary zones with different privilege levels. One OS, the host OS, runs in what is known as domain 0 (dom0). Dom0 has complete control over the system.

One can then install guest OSs, which run in either paravirtualised or fully virtualised mode. They are heavily restricted, either running on fully emulated hardware, or using hypercalls to communicate with the Xen kernel. They run in what is known as domain U (domU).

So, first we install our dom0 OS (the host OS), and install Xen onto that, replacing the native Linux kernel. Finally, we reboot into our new Xen kernel in dom0, and proceed to create virtual machines to install domU OSs (the guest OSs) into.  

Dom0 OS

Dom0 can be more or less any Linux distro you choose, although some are better choices than others. I found Fedora 16 to be the simplest to get Xen running on, whilst I had difficulty with OpenSUSE and Debian.

Debian is well known as being the sysadmin's favourite distro, due in large part to its legendary stability and its fantastic APT-based package management system. It seems like the perfect base on which to build a rock-solid Xen system.

I had already given Debian a try and completely failed to load the Xen kernel, due to a bug which proved elusive. Still, Debian is the distro I have the most experience with, and I prefer it over RHEL-based distros (which I found relatively simple to get Xen running on). What you use for your dom0 will be your own choice, but I have chosen to start over with Debian – and I intend to get it working. The instructions that follow assume you too are using Debian 6.

Installation of Debian

For the time being, all we have to do is install a standard Debian distro via the usual methods. If installing a Linux distribution is new to you, please seek out some tutorials on the web. There are hundreds, if not thousands, to choose from. 

We will be making use of a GUI, and I recommend using GNOME – the default – unless you have a particular preference otherwise.

I chose to use two 1TB drives (Samsung Spinpoint F3 HD103SJ, one of the fastest 7,200rpm drives on the market) to provide a RAID1 redundant configuration. For the partitioning, I went for a simple setup: a single partition for the host OS, allowing a generous 100GB to ensure there's no possibility of running out of space; a total of 10GB of swap space; and the remainder (about 895GB) for guest OSs, which I plan to install to LVM logical volumes within this partition.

Note: you do not, by any means, need to use LVM for guest OSs. You do not need to follow my partitioning structure or sizing. Neither do you need to use a RAID configuration. This is all completely up to you, and you could technically use a single drive - or a part thereof - and just select automatic partitioning during the installation process. Alternatively, you could surpass my feeble attempts and try to run hardware supported RAID10 across four SSDs - it's your call.

If you need assistance partitioning your drives, I advise doing a web search for tutorials on the process, of which there are many. Briefly, though: if you want to use a RAID1 setup like mine, choose manual partitioning during the installation process. Wipe both drives if necessary (i.e. remove any existing partitions, which will of course destroy any existing data on them), and create an identical partition layout on both 1TB drives:
  • 100.0GB partition, type: physical volume for RAID, with bootable flag
  • 5.0GB partition, type: swap (we are not going to mirror these partitions)
  • <remaining space, around 895GB>, type: physical volume for RAID
Then choose to set up software RAID. We need to create two RAID1 MD devices: first create one across both 100GB partitions (2 active, 0 spare), then create another across both <remaining space> partitions (2 active, 0 spare).

Select the first 100GB RAID device and create an ext4 filesystem with the root (/) mount point. Select the second <remaining space> RAID device and use it as a physical volume for LVM.
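
For the curious, what the installer does behind the scenes is roughly equivalent to the following hand-rolled commands (the device names here are examples only and will differ on your system; the installer takes care of all of this for you):
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
# mkfs.ext4 /dev/md0
# pvcreate /dev/md1
Once the installed system is up and running, cat /proc/mdstat is a quick way to confirm that both arrays are active and healthy.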

Finally, in the latter stages of the installation process, install your chosen bootloader (I'd recommend GRUB if you have no preference) to the MBR of the first drive of the set, e.g. /dev/sda.
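
The Debian installer will normally offer to do this for you towards the end of the process; should you ever need to redo it by hand from the installed system, it amounts to (assuming GRUB and /dev/sda as above):
# grub-install /dev/sda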

I won't continue to give advice specifically for Fedora, but this one is worth mentioning. Fedora 16 has a bug (hopefully fixed now) which can cause the OS not to boot when it is installed to a software RAID device. If you encounter the error "No such device" upon booting your new Fedora installation on software RAID, the fix described in the post above may help.

Next > Part 4: Booting into dom0

Xen Part 2: Hardware Requirements for Xen

You've selected Xen as your hypervisor - so far so good. But before we get ahead of ourselves, let's take a moment to understand what will be required for this to function. Can you run Xen on the hardware you already have? If not, how much would it cost?

Again, it comes down to your requirements. There are a couple of considerations:

  1. Which Operating Systems will you be installing as guests?
  2. What will you want to do with them? (For most people, read: do you want to play 3D games?)
I'll address these points in turn.

1) Guest Operating Systems

You don't need special hardware to install Xen on Linux and then add some Linux virtual machines on top. But if you want to run a Windows VM too, then I'm afraid you do need special hardware. To understand why, you'll need to understand one of the fundamental distinctions in OS virtualisation: paravirtualisation versus full virtualisation. Let me briefly explain (or skip to the section titled "Xen" if you want to cut to the chase).

Full Virtualisation

To run a guest operating system, Xen can take one of two paths. It can try to "emulate" normal computer hardware, such that the guest operating system is completely unaware that it's being run on a virtual machine. So, Xen can present the guest OS with a fake processor (let's call it MyVirtualIntelCPU, and let's tell the guest it has 100MB of L2 cache for kicks!), and a fake hard disk (let's call it XenDisk), etc. 

Since Xen handles all the interaction with the guest, it can present whatever fake interface it likes, and so long as the guest sees it as compatible hardware, it'll be okay. The problem is, since we're emulating hardware in software, this is slow. Very slow. 

What I've described is known as Full Virtualisation. You'll typically see the acronym HVM, which stands for Hardware Virtual Machine.

Paravirtualisation

Thankfully, there is another option. Imagine if the guest OS knew it was running on a hypervisor. The hypervisor and guest OS could agree on a set of parameters each should operate within, and then the guest OS could be free, to some extent, to run by itself - without the need to run everything through the hypervisor, with fake emulated hardware. Obviously, things are rather more complex, but in simple terms, paravirtualisation (you'll often see the acronym PV) allows the guest OS to run without all the emulation that HVMs require. Properly configured, PV can be pretty close to native speed. 

At the heart of PV is the concept of a "hypercall", which is a way for the guest OS to communicate with the hypervisor. This presents an obvious gotcha: the guest OS needs to have virtualisation support built-in.

Xen

Xen supports both PV and Full Virtualisation (you don't often see the acronym "FV"). When it's possible to run a guest OS with paravirtualisation, you'll typically want to do so. The following operating systems support paravirt-ops, which is the standard for paravirtualisation, originally created by Xen:
  • All Linux distributions (with kernel version >= 2.6.23)
  • FreeBSD
  • NetBSD
  • OpenSolaris (now OpenIndiana)
Some others might too - let me know of any I've missed - but generally speaking, if it isn't on the list, it isn't going to run in paravirtualised mode, in which case you'll need to run it as an HVM, and this is where the hardware requirement comes in.
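
If you want to check whether a particular Linux kernel was actually built with paravirtualisation support, one rough check (assuming, as most distros do, that the kernel config is shipped in /boot) is:
$ grep -E 'CONFIG_PARAVIRT=|CONFIG_XEN=' /boot/config-$(uname -r)
You're looking for CONFIG_PARAVIRT=y and CONFIG_XEN=y.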

Hardware Requirement

To run OSs with Full Virtualisation (HVM), you'll need a CPU which supports either:
  • AMD-V
  • Intel VT-x
To see if your processor has the necessary support, I advise you to check the processor's data sheet on the manufacturer's website. These days, most new CPUs I've seen support hardware virtualisation, but you will still need to check. For more information on processor virtualisation, read the Wikipedia article.
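
If you already have a Linux system running on the CPU in question, there's a quicker check: look for the "vmx" (Intel VT-x) or "svm" (AMD-V) flags in /proc/cpuinfo. A count greater than zero means the CPU advertises hardware virtualisation support (though it may still need enabling in the BIOS):
$ egrep -c '(vmx|svm)' /proc/cpuinfo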

So, the bottom line is that to run Windows (and some others) as a guest OS, you need your CPU to support VT-x for Intel or AMD-V for AMD.

2) Gaming and other special requirements

This mainly applies to HVM guests again, so we're primarily talking about Windows here.

If you want to have a Windows guest (HVM) to play 3D games, this adds an extra layer of complexity. Modern games are highly computationally demanding, and place a lot of strain on graphics cards. The emulated Cirrus card provided by QEMU isn't going to begin to cut the mustard. You're going to need to "pass through" your graphics card hardware to the guest OS.

In other words, instead of showing Windows a (virtual) antiquated graphics card from the Paleolithic era, you need to give your Windows guest access to a real graphics card - that ATi HD6850 sitting in your box, or whatever it happens to be. You need the host OS to ignore it, and the guest OS to use it. This case is a subset of PCI Passthrough, dubbed VGA Passthrough.

There are other cases when you'll need similar support - if you want to expose a special piece of hardware to your HVM guest, such as a particular network card or sound card, you will have to use PCI Passthrough.
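
To give a flavour of where this is heading: once everything is in place, assigning a device to a Xen guest typically boils down to a single line in the guest's configuration file under /etc/xen/. The PCI address below is just an example, and the device also has to be hidden from dom0 first (e.g. via pciback) - more on that later in the series:
pci = [ '01:00.0' ]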

Hardware Requirement

To get this working on an HVM guest such as Windows, you're going to need support for either:

  • AMD-Vi
  • Intel VT-d
These are processor extensions for Directed I/O - in other words, routing devices to particular guests. You will need to check your processor's data sheet to ensure that it supports one of these extensions. They are much less widely supported than basic VT-x/AMD-V virtualisation, so don't assume your CPU will have them, even if it's a high-end CPU. For example, Intel's Sandy Bridge i7 2600 supports VT-d, whilst the i7 2600K (the slightly more expensive, unlocked version) doesn't. Be careful.

Unfortunately, it isn't quite that simple. For VT-d to work, your motherboard needs to support it too, and this seems to be a bit of a minefield. I hope that by the time you read this, motherboard support for VT-d will have improved substantially. Make sure you buy a motherboard which has been recommended as VT-d (or AMD-Vi) capable, and if you can, verify that it has an option to enable VT-d in the BIOS.
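
Once you do have Linux running on the board with VT-d enabled in the BIOS (and, on Intel, the kernel booted with intel_iommu=on), a rough sanity check is to look for DMAR/IOMMU messages in the kernel log:
# dmesg | grep -e DMAR -e IOMMU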

I would advise going further still, given that some boards which claim to support VT-d have turned out to require additional hacks (or worse) to get it working. For the wary, I suggest picking a motherboard which has already been tested by the community and verified as working with VT-d. Xen has a list of compatible motherboards. Additionally, Paul Braren has a short list of motherboards that he has kindly taken the time and effort to verify.

For those who are interested, I picked a motherboard on Paul's list - the ASRock Z68 Extreme4 Gen3 - and coupled it with an i7 2600. Based on my testing so far, I seem to be able to configure a graphics card passthrough successfully with this combination. 

3) Other Considerations

Okay, I know I didn't have a point 3 in my original list, but this is just a general piece of advice which shouldn't need stating. With a hypervisor, you're going to be running multiple operating systems simultaneously. You have a host OS, and at least one guest OS (or else, why bother?).

Given this, you'd be well advised to base the system on some reasonably powerful hardware, crucially with a sufficient quantity of RAM. If you plan to run 3 guest OSs and have 4GB of RAM, you're going to be very restricted in terms of what you can run on each guest. Let's say you allocated 1GB of RAM to each OS - the host, and the three guests. 1GB only just meets the minimum recommended requirements for Ubuntu desktop edition. Things would get sluggish awfully quickly.

You're well advised to kit out your box with plenty of RAM - upwards of 8GB would be preferable.

Next > Part 3: Installing the Host OS

Saturday, 21 January 2012

Xen Part 1: Beyond *nix: Running a Hypervisor

It's an ugly fact: most people run Windows.

They run one computer, with one monitor, and one Operating System. Yet the simplicity of the setup belies its power: with the move to cloud computing well and truly underway, one could argue that the host OS is becoming little more than a middleman, shuffling requests back and forth between server and client, and finally rendering the results on-screen. All one needs is a web browser, and thanks to cloud computing services (such as Google Docs), the world is their oyster.

Virtualisation in Enterprise

Cloud computing providers, much like other large institutions which heavily rely on technology, make use of virtualisation technologies. The idea is to provide greater flexibility and reliability through the abstraction of the OS from the underlying hardware. An OS doesn't need to run directly on hardware: an OS can run on an OS, which itself runs on hardware. Alternatively, an OS can run on a special kernel, capable of running other OSs simultaneously. We are of course talking about hypervisors.

The world of high performance grid computing, on-demand hardware provisioning and bare-metal virtualisation can seem a world removed from our day-to-day home computing requirements. Who wants to go through all the hassle of setting up an enterprise-grade hypervisor, when a web browser (like we get on our phones and tablets) offers us all we need?

Virtualisation and Us

The thing is, not all of us fit into the single-OS thin-client paradigm. Some of us have more demanding hardware requirements. Others have pronounced security concerns. Some people simply need to run multiple OSs (such as *nix for work & reliability, and Windows for play & Photoshop - or are they one and the same thing?). Others are seeking a solution to poor reliability.

Hypervisors provide a base to run multiple operating systems simultaneously and switch between them seamlessly. It's a very similar concept to Virtual Machine software such as Virtualbox and VMWare. Windows crashed again and needs a reboot? Don't reach for the power switch - just click the button. Reload the previous snapshot. Assign your Solaris installation some more RAM so it can handle an increase in requests. Switch back to Linux. Then maximise your Windows window so you can load that darned .pptx file some ignoramus sent you via e-mail.

Virtual Machines

If you haven't spun up a VM before, try it now - it's free, easy and, above all, impressive to behold. Download and install Virtualbox (my preference), and then download a copy of whichever OS you'd care to try - the latest version of Ubuntu, for example. You'll then need to start Virtualbox, create a new VM, and finally start the VM pointing to the Ubuntu disk image (or CD if you burnt it). You will then install Ubuntu into the VM, all the time running it inside a window on your desktop, which you can resize, minimise and maximise like any other. Googling "Virtualbox tutorial" will throw up hundreds of walkthroughs to guide you through the process, and a similar search on YouTube will provide ample video walkthroughs too. The process doesn't take very long.
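
If you'd rather drive it from the command line, Virtualbox also ships with the VBoxManage tool. A minimal sketch - in which the VM name, memory size, disk size and ISO path are all just examples - might look something like this:
$ VBoxManage createvm --name "UbuntuTest" --ostype Ubuntu_64 --register
$ VBoxManage modifyvm "UbuntuTest" --memory 1024
$ VBoxManage createhd --filename UbuntuTest.vdi --size 20000
$ VBoxManage storagectl "UbuntuTest" --name "SATA" --add sata
$ VBoxManage storageattach "UbuntuTest" --storagectl "SATA" --port 0 --device 0 --type hdd --medium UbuntuTest.vdi
$ VBoxManage storageattach "UbuntuTest" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium ubuntu.iso
$ VBoxManage startvm "UbuntuTest"
Either way - GUI or command line - you end up with the same result.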

In this case, the "guest OS" (Ubuntu in the above example) is running as a VM. It is being executed by the Virtualbox code, which itself is running on your "host OS". This is all very cool, but it has a few limitations. One of the limitations is that the guest OS will be relatively slow (fast enough for most tasks, but it's going to be considerably slower than your host OS). This is because the guest OS code first has to be processed by Virtualbox before being passed to the host OS's kernel. It would be better if we could skip the middleman. This is one of the many benefits of hypervisors.

Virtual Machine Monitors 

Hypervisors (i.e. VMMs) take the VM concept and go the extra mile. They run on what is known as "bare metal" - i.e. your physical hardware. Rather than me lecturing you about them, you may as well read the excellent Wikipedia article, which expresses it better than I would.

Read that? Good. (In case you've gone down the schoolboy route of assuming you'll read it later, I'll reiterate the main point I wanted you to take from it). As you've undoubtedly noted already, there are two main types of hypervisor. One runs on top of another operating system (much like Virtualbox), and one is a cut-down version of an operating system itself (like the Linux kernel). There are many good examples of each, but of particular note, KVM is taking off in the case of the former, and Xen is the primary open-source player in the latter. However, there are many good choices, and your choice will depend on your particular requirements. It may be that the excellent OpenVZ would suit your purposes just fine. Or perhaps you might feel like VMWare's proprietary ESX is what is required. Whatever the case, you'll need to do your research: as ever, Google is your big brother.

The Case for Xen

If, like me, you determine that:
  • You want a highly performant solution - Virtualbox isn't going to be the answer
  • You want the ability to run games/advanced 3D rendering, i.e. perform VGA passthrough - that unfortunately rules out KVM for the time being (unless you have a lot of spare time)
  • You want an open source solution - scratch all of VMWare's products off the list
  • You want to run OSs other than Linux - OpenVZ isn't going to cut it
  • You've done your research
  • Xen seems to offer everything you need
then keep reading, as I try to untangle the process of setting up Xen on a Linux host, installing multiple guest OSs (Windows included), getting the management tools working, and finally preparing PCI passthrough - dedicating graphics cards to particular guests.

The final outcome - which I hope you'll be able to duplicate - will be a PC running Linux, with another Linux virtual machine for daily usage, and a Windows virtual machine for gaming. It will sit on a RAID1 redundant base, have rock-solid stability, display across a number of monitors, and generally solve the problems of the world.

So sit back, grab a beer, and guffaw at my expense as I try to walk you through this daunting process.

If it all runs like clockwork, I'll eat your hat.

Next > Part 2: Hardware Requirements for Xen