With all the relatively heavy lifting an x86 can do, having two copies of Windows run side by side - or Windows and Linux, or Linux and Linux, or BSD and Windows, or what have you - seems like a natural extension of the design. I can run 200 processes on one Windows server. Why can't an x86 run 50 Linux processes and 150 Windows processes instead?
It should be that easy, and there is a way to do it. Every OS should run above a layer that arranges the coexistence of multiple operating systems, or multiple instances of one OS.
IBM's got that very thing on midrange systems and higher (not on any with x86 CPUs). It's called a hypervisor, and I want one. I want one in my Opteron, my Athlon 64 FX, my EM64T Xeon, my Xserve G5, and my Power Mac. I can accept not having one on my PowerBook, but that's subject to change.
PCs have no hypervisors (not yet, but that's another column). Nothing is more privileged than the operating system. From the moment your OS's boot loader starts up, the operating system grabs direct, exclusive control of every chip, card, and bus in your box. The box is owned. That's dandy unless you'd like to fulfill your server or workstation's potential by running apps under whatever operating system suits each of them best.
Or perhaps you'd like the freedom to distribute workloads at a granularity beyond the process level: take two instances of operating systems and reapportion their resources to suit the work at hand, or set up a second instance of Linux as a fail-over for the first. You needn't even be interested in running multiple operating systems. Just gaining the ability to snapshot, pause, suspend to disk, or (using VMware's VirtualCenter) move an instance of Windows or Linux to a less burdened server makes a strong case for having a virtualisation layer that hosts just a single OS.
But, as if affected by some magic potion, your PC gives itself utterly to the first thing it sees. I said that's the OS. In truth, something does load before the OS, but it doesn't hypervise, supervise, or otherwise. Please wait a moment while I share my opinions on the feebleness of the x86 BIOS firmware design with my wastebasket.
To its credit, when the BIOS finishes running, it's got a map of your system's devices, buses, and ports. My problem is that it takes this list and hands it straight to one OS. No, no, no. Hand it to a hypervisor. Using configuration settings determined by an administrator, the hypervisor allocates and monitors resources, and arbitrates device access across multiple operating systems. It can log, create hooks for low-level debugging, suspend virtual machines to disk, and make one OS fail over for another. It can proclaim that this controller card belongs to OS number one, this Fibre Channel card belongs to OS number two, and this network adapter is shared between them. I've said before that I believe all systems should be virtual. I'd like a hack-proof, compact virtualisation layer to be the software your PC server or workstation boots.
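That handoff can be sketched as a toy data model. Everything here is invented for illustration, including the device names, the policy table, and the `partition()` function; no real hypervisor exposes an interface this simple:

```python
# Toy model of hypervisor device partitioning (all names hypothetical).
# The firmware's device map is handed to the hypervisor, which assigns
# each device to one guest OS, or marks it shared, per an admin's policy.

# Device map as the BIOS might report it: identifier -> description.
device_map = {
    "scsi0": "storage controller card",
    "fc0": "Fibre Channel card",
    "eth0": "network adapter",
}

# Administrator's policy: which guest owns which device.
# "shared" means the hypervisor arbitrates access between guests.
policy = {
    "scsi0": "os1",
    "fc0": "os2",
    "eth0": "shared",
}

def partition(device_map, policy):
    """Split the device map into per-guest lists plus a shared set."""
    guests = {}
    shared = []
    for dev in device_map:
        owner = policy.get(dev, "unassigned")
        if owner == "shared":
            shared.append(dev)
        else:
            guests.setdefault(owner, []).append(dev)
    return guests, shared

guests, shared = partition(device_map, policy)
# guests maps each OS to the devices it owns outright;
# shared lists devices the hypervisor must multiplex.
```

The point of the sketch is the column's argument in miniature: the BIOS already builds the map; the only missing piece is a privileged layer that consults a policy instead of handing everything to one OS.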
In most cases, I don't think a hypervisor would be used for virtualisation as it's currently defined. PC operating systems assume they have the entire machine to themselves, an assumption that hasn't changed since DOS. That needs to go, and it's about to.