[By Matthew French] One of the things I love about the IT industry is the constant hype. There is always some new idea or silver bullet that will fix everything. The hype can provide endless hours of amusement and it can be great fun to watch the clash of the old with the new, the believers versus the Luddites. And of course it provides great mental exercise — deconstructing the hype, trying to understand it, trying to debunk it and trying to see where the hype is real.
The easy bit is identifying the hype. The smoke on the Internet battlefield will litter blogs and the comments sections of online publications for years. The harder part is understanding what is real. This is so difficult that often the purveyors of hype don't understand the true benefits of what they are pushing. Unfortunately, this is a natural side effect of a point-scoring process in which combatants tend to pick the small but easy-to-understand points while ignoring the complex but important issues.
One area currently emerging from the hype is virtualisation. For the uninitiated, virtualisation is the technology that makes it possible to fool an entire operating system into thinking it is running by itself when in reality it is running inside another operating system. If you have ever used a ZX Spectrum emulator to play games from the bronze age of computing then you have used a form of virtualisation. Apart from emulation, there are hypervisors, which can roughly be described as operating systems for virtual machines, and para-virtualisation, where the hosted operating system knows it is a virtual machine but plays along, acting as if it were an independent computer.
So the obvious question: why is virtualisation important, especially since the concept is not actually new? Mainframes have been doing it for decades.
To understand this, we need to categorise virtualisation into three groups.
The first group is people who need access to another computer but don’t want to have another computer on their desk. This is quite a diverse group, from people who want to run Microsoft Office on their Apple Mac, to developers who like breaking stuff, and antivirus researchers who want to infect a machine they have complete control over.
What is curious about this group is how virtualisation on the PC has grown from almost nothing five years ago. The easy answer is that computers have become so powerful that the average laptop has more than enough horsepower to run two or three operating systems at the same time. Virtualisation has become popular because it is now possible. In a few years' time we will probably consider virtualisation on the desktop to be a utility, like a text editor or a disk formatting tool.
However, for the purposes of this discussion we have limited interest in this group. In this scenario virtualisation has immediate practical benefits which are easy to see and explain, so there is no need for hype.
The second group is administrators who want to virtualise the desktop. Here the idea is that we take all the desktops and put them onto a few big servers. However, this group fits better into the category of thin clients, which is another hyped-up technology that has been slowly bubbling ever since we left the mainframe behind. Thus it is not useful for this discussion.
Which brings us to the third area where virtualisation is important: the data centre and the server room. Here the benefits of virtualisation are far less obvious. As a result this is where the hype war is being waged.
The basic premise is simple. By consolidating dozens of physical computers onto one large server using virtualisation, it is possible to reduce cooling costs, power consumption and the amount of physical space used. Virtualisation is doing for physical servers what the iPod has done for compact discs. It has turned servers into files that we can play wherever we like. Server administrators now have a whole range of new superpowers.
For example, it is possible to move a running server from one physical computer to another in seconds, without shutting it down and without the server being aware that its physical location has moved 10km. This is a great party trick, but it also has some very real benefits. The first is that you have an effective high-availability and disaster recovery solution. When hardware starts failing, when the server room starts filling up with water or even when you just need to add more memory to the physical box, you can move the running server to new hardware with the flick of a switch.
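To see what this looks like in practice, here is a minimal sketch of a live migration using the libvirt Python bindings. The host names, connection URIs and the guest name "web01" are purely illustrative assumptions, and a real deployment would also need shared storage and a supported hypervisor on both hosts.

    # Illustrative sketch only: assumes libvirt-python is installed, both hosts
    # run a libvirt-managed hypervisor (e.g. KVM) with shared storage, and a
    # guest named "web01" exists on the source host.
    import libvirt

    src = libvirt.open("qemu+ssh://host-a.example.com/system")  # source host
    dst = libvirt.open("qemu+ssh://host-b.example.com/system")  # destination host

    dom = src.lookupByName("web01")

    # VIR_MIGRATE_LIVE copies memory across while the guest keeps running.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    print("web01 now runs on", dst.getHostname())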
A more interesting development is the ability to move servers around according to the amount of work they are doing. So when the high-energy physics team down the corridor start testing your patience with their performance testing, their busy servers can be spread out across many computers to share the load. At night, when the databases and applications are idle, they can be consolidated onto one physical server and the other hardware can be shut down, saving on power and cooling costs. Best of all, this process can be completely automatic.
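As a toy illustration of the idea, and only that, the sketch below packs virtual machines onto as few hosts as possible according to their current load. The VM names, loads and host capacity are invented, and real placement engines are far more sophisticated about memory, affinity rules and the cost of the migrations themselves.

    # Greedy first-fit packing of VM loads onto identical hosts (invented numbers).
    def consolidate(vms, host_capacity):
        hosts, loads = [[]], [0.0]
        for name, load in sorted(vms.items(), key=lambda kv: -kv[1]):
            for i, used in enumerate(loads):
                if used + load <= host_capacity:
                    hosts[i].append(name)
                    loads[i] += load
                    break
            else:
                hosts.append([name])
                loads.append(load)
        return hosts

    night_loads = {"db01": 0.10, "app01": 0.20, "app02": 0.15, "batch01": 0.30}
    print(consolidate(night_loads, host_capacity=1.0))
    # One host is enough overnight; the others can be powered down.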
Another benefit is that adding new servers only takes a few clicks of the mouse. No longer is it necessary to wait months for the purchasing department to place the order, and then several more months for the physical hardware to arrive. Capacity management is also a lot simpler. You don’t need to order twice as much memory as you think you might need. If it turns out you need the memory six months later, you can double the memory by opening a dialog box and changing a few settings.
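Again as a hedged sketch rather than a recipe, doubling a guest's memory with the libvirt Python bindings might look like the snippet below. The connection URI and guest name are assumptions, and a live increase only works if the guest's configured maximum memory already allows the new value.

    # Illustrative only: double the memory allocation of guest "web01".
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("web01")

    # info() returns [state, maxMemory, memory, nrVirtCpu, cpuTime]; sizes in KiB.
    current_kib = dom.info()[2]

    # Apply to the running guest and to its persistent configuration.
    dom.setMemoryFlags(current_kib * 2,
                       libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)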
Awesome stuff. But if you were paying attention, you should be shouting: “Hype!”
That's right. It is hype. Oh, these features exist. They are here, today. But watch out for the fine print: migrating virtual servers between physical machines requires a supported storage-area network architecture. Failing over in seconds can mean levels of network traffic that are unworkable over slow wide-area network links. Some applications don't appreciate the hardware changing underneath them while they are running. So while the technology works, it might not work for you.
Then there is another element to hype: the world has a knack for introducing new problems. One big wrinkle with virtualisation is that the moment people see they can have more servers, they will try to use more. So while you might succeed in halving the number of physical servers, you may well end up doubling the number of virtual servers.
Unfortunately, one of the biggest components of the cost of a server is the salary of the person needed to maintain it. There are patches to apply, back-ups to do, services to restart and configuration changes to make. It doesn't matter whether the servers are physical or virtual; the work still needs to be done. If you double the number of servers, you need to double the number of people looking after them. Not only have your cost savings gone out the window, there is an excellent chance your costs have gone up.
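A back-of-envelope calculation makes the trap concrete. All the figures below are invented for illustration; plug in your own hardware and staffing costs.

    # Invented annual figures: consolidation halves the hardware, but sprawl
    # doubles the number of servers that need administering.
    physical_before, physical_after = 40, 20
    virtual_after = 80
    hardware_cost = 3000   # per server per year (assumed)
    admin_cost = 5000      # per server per year (assumed)

    cost_before = physical_before * (hardware_cost + admin_cost)
    cost_after = physical_after * hardware_cost + virtual_after * admin_cost

    print(cost_before, cost_after)   # 320000 vs 460000: costs rose despite fewer boxes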
If you are nodding in agreement and have decided that virtualisation will never work, then I fear you have fallen into the Luddite's trap. The benefits of virtualisation are real, but we have to change the way we work. In a virtualised world, servers are cookies cut from the same template. If you focus on automation and repeatability then you should reap the benefits. But if you just use virtualisation because everyone else does and continue to work as you did before, then you probably won't see the benefit.
And there you have it — the real frustration of hype: knowing that there is something out there, but not knowing if it will work. If we don’t follow the hype, we feel like we are being left behind. But if we follow the hype, there will be much pain ahead. Sometimes choice can be a terrible thing.
- French is an independent consultant with more than 20 years of experience in the IT industry