Hype, er, visors

Matthew French

One of the things I love about the IT industry is the constant hype. There is always some new idea or silver bullet that will fix everything. The hype provides endless hours of amusement, and it is great fun to watch the clash of the old with the new, the believers versus the Luddites. And of course it provides great mental exercise: deconstructing the hype, trying to understand it, trying to debunk it, and trying to see where it is real.

The easy bit is identifying the hype. The smoke from the Internet battlefield will litter blogs and the comments sections of online publications for years. The harder part is understanding what is real. This is so difficult that often even the purveyors of hype don’t understand the true benefits of what they are pushing. Unfortunately, this is a natural side effect of a point-scoring process in which combatants tend to pick the small but easy-to-understand points while ignoring the complex but important issues.

One area currently emerging from the hype is virtualisation. For the uninitiated, virtualisation is the technology that makes it possible to fool an entire operating system into thinking it is running by itself when in reality it is running inside another operating system. If you have ever used a ZX Spectrum emulator to play games from the bronze age of computing then you have used a form of virtualisation. Apart from emulation, there are also hypervisors, which can roughly be described as operating systems for virtual machines, and para-virtualisation, where the hosted operating system knows it is a virtual machine but plays along, acting as if it were an independent computer.
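
To make the idea concrete, here is a minimal sketch using the Python bindings of the open-source libvirt toolkit, which talks to several hypervisors. It assumes a local QEMU/KVM host; in libvirt’s vocabulary, each guest operating system is a “domain”.

    # A minimal sketch, assuming the libvirt Python bindings and a local
    # QEMU/KVM hypervisor. What it prints depends entirely on your host.
    import libvirt

    # Connect to the local hypervisor; read-only access is enough for listing.
    conn = libvirt.openReadOnly('qemu:///system')

    # Each "domain" is a guest operating system the hypervisor is running.
    for dom_id in conn.listDomainsID():
        dom = conn.lookupByID(dom_id)
        # info() returns state, memory and virtual CPU details for the guest.
        print(dom.name(), dom.info())

    conn.close()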

So the obvious question: why is virtualisation important, especially since the concept is not actually new? Mainframes have been doing it for decades.

To understand this, we need to categorise virtualisation into three groups.

The first group is people who need access to another computer but don’t want to have another computer on their desk. This is quite a diverse group, from people who want to run Microsoft Office on their Apple Mac, to developers who like breaking stuff, and antivirus researchers who want to infect a machine they have complete control over.

What is curious about this group is how virtualisation on the PC has grown from almost nothing five years ago. The easy answer is that computers have become so powerful that the average laptop has more than enough horsepower to run two or three operating systems at the same time. Virtualisation has become popular because it is now possible. In a few years’ time we will probably consider virtualisation on the desktop to be a utility, like a text editor or a disk-formatting tool.

However, for the purposes of this discussion we have limited interest in this group. In this scenario virtualisation has immediate practical benefits which are easy to see and explain, so there is no need for hype.

The second group is administrators who want to virtualise the desktop. Here the idea is to take all the desktops and put them onto a few big servers. However, this group falls more naturally into the category of thin clients, another hyped-up technology that has been slowly bubbling away ever since we left the mainframe behind, so it is not useful for this discussion either.

Which brings us to the third area where virtualisation is important: the data centre and the server room. Here the benefits of virtualisation are far less obvious. As a result this is where the hype war is being waged.

The basic premise is simple. By consolidating dozens of physical computers onto one large server using virtualisation, it is possible to reduce cooling costs, power consumption and the amount of physical space used. Virtualisation is doing for physical servers what the iPod has done for compact discs: it has turned servers into files that we can play wherever we like. Server administrators now have a whole range of new superpowers.

For example, it is possible to move a running server from one physical computer to another in seconds, without shutting it down and without the server being aware that its physical location has moved 10km. This is a great party trick, but it also has some very real benefits. The first is that you have an effective high-availability and disaster recovery solution. When hardware starts failing, when the server room starts filling up with water, or even when you just need to add more memory to the physical box, you can move the running server to new hardware with the flick of a switch.
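
For the curious, the party trick boils down to one request to the hypervisor’s management layer. The sketch below is a hedged illustration using libvirt’s Python bindings; it assumes two hosts that share storage (such as a SAN), and the ‘standby-host’ and ‘webserver’ names are invented.

    # A hedged sketch of live migration, assuming libvirt, two hosts that
    # share storage, and a guest named 'webserver'. Names are hypothetical.
    import libvirt

    src = libvirt.open('qemu:///system')
    dst = libvirt.open('qemu+ssh://standby-host/system')

    dom = src.lookupByName('webserver')

    # VIR_MIGRATE_LIVE copies the guest's memory across while it keeps
    # running; the guest never notices it has changed physical machines.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    src.close()
    dst.close()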

A more interesting development is the ability to move servers around according to the amount of work they are doing. So when the high-energy physics team down the corridor start testing your patience with their performance testing, their busy servers can be spread out across many computers to share the load. At night, when the databases and applications are idle, they can be consolidated onto one physical server and the other hardware can be shut down, saving on power and cooling costs. Best of all, this process can be completely automatic.
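
A toy version of that night-time sweep might look like the following sketch, again using libvirt’s Python bindings. The host URIs, the five-second sampling window and the 10% idle threshold are all invented for illustration; a real scheduler would be far more careful, checking capacity on the target host among other things.

    # A toy consolidation sweep, assuming libvirt and two hosts with shared
    # storage. URIs, sampling window and threshold are all illustrative.
    import time
    import libvirt

    busy = libvirt.open('qemu+ssh://host-a/system')    # stays up overnight
    spare = libvirt.open('qemu+ssh://host-b/system')   # to be emptied and powered down

    def cpu_fraction(dom, interval=5):
        """Approximate a guest's CPU usage over a short sampling window."""
        before = dom.info()[4]          # cumulative CPU time, in nanoseconds
        time.sleep(interval)
        after = dom.info()[4]
        vcpus = dom.info()[3]
        return (after - before) / (interval * 1e9 * vcpus)

    # Move every idle guest off the spare host so it can be shut down.
    for dom_id in spare.listDomainsID():
        dom = spare.lookupByID(dom_id)
        if cpu_fraction(dom) < 0.10:
            dom.migrate(busy, libvirt.VIR_MIGRATE_LIVE, None, None, 0)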

Another benefit is that adding new servers only takes a few clicks of the mouse. No longer is it necessary to wait months for the purchasing department to place the order, and then several more months for the physical hardware to arrive. Capacity management is also a lot simpler. You don’t need to order twice as much memory as you think you might need. If it turns out you need the memory six months later, you can double the memory by opening a dialog box and changing a few settings.
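
As a hedged illustration of that last point: with libvirt, the “dialog box” amounts to a single call that grows a running guest’s memory, up to its configured maximum, without a reboot. The ‘appserver’ guest name and the new size below are made up.

    # A sketch of resizing a guest while it runs, assuming libvirt and a
    # guest named 'appserver'; the new size is invented. Units are KiB.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('appserver')

    # Grow the live memory allocation, up to the guest's configured
    # maximum, while the guest keeps running.
    dom.setMemoryFlags(8 * 1024 * 1024, libvirt.VIR_DOMAIN_MEM_LIVE)

    conn.close()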

Awesome stuff. But if you were paying attention, you should be shouting: “Hype!”

That’s right. It is hype. Oh, these features exist. They are here, today. But watch out for the fine print: migrating virtual servers between physical machines requires a supported storage-area network architecture. Failing over in seconds can mean levels of network traffic that are unworkable over slow wide-area network links. Some applications don’t appreciate the hardware changing underneath them while they are running. So while the technology works, it might not work for you.

Then there is another element to hype: the world has a knack for introducing new problems. One big wrinkle with virtualisation is that the moment people see they can have more servers, they will try to use more. So while you might succeed in halving the number of physical servers, you double the number of virtual servers.

Unfortunately, one of the biggest components of the cost of a server is the salary of the person needed to maintain it. There are patches to apply, backups to do, services to restart and configuration changes to make. It doesn’t matter whether the servers are physical or virtual: the work still needs to be done. If you double the number of servers, you need to double the number of people looking after them. Not only have your cost savings gone out the window, there is an excellent chance your costs have gone up.

If you are nodding in agreement and have decided that virtualisation will never work, then I fear you have fallen into the Luddite’s trap. The benefits of virtualisation are real, but we have to change the way we work. In a virtualised world, servers are cookies cut from the same template. If you focus on automation and repeatability, you should reap the benefits. But if you use virtualisation just because everyone else does and continue to work as you did before, then you probably won’t see the benefit.
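
By way of a final sketch, this is what cookies cut from the same template can look like with libvirt: every guest is stamped out of one definition, so any of them can be rebuilt identically on demand. The names, disk paths and sizes are illustrative, and the XML is pared down to a bare minimum.

    # A sketch of template-driven provisioning, assuming libvirt; names,
    # paths and sizes are invented for illustration.
    import libvirt

    TEMPLATE = """
    <domain type='kvm'>
      <name>{name}</name>
      <memory unit='KiB'>{mem_kib}</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/libvirt/images/{name}.img'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open('qemu:///system')
    # Stamp out identical guests from the one template.
    for name in ('web01', 'web02', 'web03'):
        conn.defineXML(TEMPLATE.format(name=name, mem_kib=2 * 1024 * 1024))
    conn.close()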

And there you have it — the real frustration of hype: knowing that there is something out there, but not knowing if it will work. If we don’t follow the hype, we feel like we are being left behind. But if we follow the hype, there will be much pain ahead. Sometimes choice can be a terrible thing.

  • French is an independent consultant with more than 20 years of experience in the IT industry

6 Comments

  1. Matthew, I think you’ve missed one aspect of the hype that always frustrates me. The inability of companies selling these solutions to avoid jargon or to explain things well.

    I’ve just finished reading some brochures. Took me a while to realise they were talking about thin clients in clusters. Didn’t help that they’d renamed their product, or that they’d invented some new terms for old and trusted concepts.

    I’m not sure if their main aim is to make me feel like a moron left behind in the dust of some new development, so they can trade on my ignorance and I just buy their stuff to stop my embarrassment. Or do they think that if they add enough buzzwords I might think they’re even cooler than the next guy? Or maybe it’s just too scary to tell it like it is, since then I could more easily compare their offering to others.

    Who knows, maybe there is a special tax rebate for companies in Silicon Valley when they use more than five three-letter acronyms for their products.

  2. Hi,

    Nice article, but there are key areas where virtualisation has huge benefits. You don’t always have to believe the hype – just implement it where practical!

    Let’s take a relatively simple scenario: a software development company.

    Software dev companies typically run multiple projects at once, and projects start and stop at various times. A software company can use virtualisation on hardware without a fancy SAN (read storage area network – a lot of hard drives in one place) – forget about the fancy bells and whistles of ‘live migration’ from one physical machine to another.

    The real bonus is that the company can instantly create an environment similar to what the customer will have, instead of buying expensive new hardware. These virtual machines (VMs) will run for the course of the project; once it is done, they can be turned off and the processing power freed up for another project. You can even ‘ship’ the entire VM to the customer via the net or on a hard drive!

    You can even have software prebuilt in the VM and copy it, so for the next project that uses the same software everything is already installed – quite nifty.

    Whilst there is a lot of FUD about virtualisation, there are huge benefits for even smaller companies.

    We are already seeing big savings in our datacentre with our own and our customers’ infrastructure. We can deliver an equivalent piece of hardware for a fraction of the cost, reduce power consumption dramatically, centralise storage onto our SAN and give customers access to storage that can grow on demand.

    There are more advantages, but the point is that virtualisation allows companies to significantly reduce their costs. Xen and VMware both have free offerings for the entry level.

    I don’t quite get the point of this article – couldn’t this be about just about any product in the world? “Product X is hyped, but some things about it are useful; you probably won’t use all the features of product X”. But anyway, seeing that virtualisation was brought up…

    >> It doesn’t matter whether they are physical or virtual servers because the work will still need to be done. If you double the number of servers, you need to double the number of people looking after them.

    Disagree. It’s obvious that the elements of the infrastructure that require the most configuration and TLC – the applications and services – will require roughly the same amount of work however many machines they happen to be distributed over. The only difference is maintaining more base OSes, which, at least in Windows’ case, pretty much take care of themselves with WSUS – I’m not familiar enough with *nix to know if they’re the same, but I assume they have a similar facility. Any extra work that might be required is, of course, offset by the faster roll-out times of new servers and the lack of driver/hardware issues. Configure once, deploy many times, all on exactly the same virtualised hardware layer.

    With MS aggressively targeting the VM space with Hyper-V, replicating more of VMware’s crown jewels for free, and making it easier to use and more accessible (eg. no requirement for SANs, as it’ll run on any hardware with a win2k8 driver), there’s going to be a point at which it doesn’t make sense not to use at least some VMs, even if it’s only for stuff like backups and disaster recovery.

    @Mark I also work in a dev-house, and VMs have long been a favourite of developers – and it doesn’t take long for a sysadmin to fall in love with virtualised infrastructure; it makes their life SO much easier.

    >But if we follow the hype, there will be much pain ahead. Sometimes choice can be a terrible thing.

    Virtualisation has moved beyond the hype stage; it’s a natural progression in the IT world, especially with memory becoming cheaper, CPUs gaining more and more cores and, of course, the big green movement happening. I think if a company hasn’t at least investigated virtualisation and understood how it will benefit/hinder them, someone needs a rap on the knuckles!

    I also don’t really buy that there’s a lot of FUD out there about VMs – yes, there’s a ton of info pushed out, but it’s at least useful to someone, somewhere. Unless you advertise your product as “this virtualisation platform virtualises machines”, you’re going to end up telling SOMEONE about features they won’t need. Pick out the points you need, and ignore the rest. If you can’t decide which bits you need, then you shouldn’t be making the decision anyway.

  4. VMs are lovely ideas. The difference, though, is that mini and mainframe environments are built for that sort of thing. I can’t assign half a CPU to a VM on an Intel box.

    Then there is sprawl: people forget that a VM is another host I have to manage.

    Then there’s that nonsense about ‘the server is only 5% utilised’. Rubbish. Recalculate that for SLA’d hours.

  5. Matthew French:

    @Dwayne – you are quite correct about the jargon and the inability to explain the product. But this would better be described as marketing. There doesn’t have to be hype for brochures to make no sense. Although one should have sympathy for the marketers who have to try and make the ordinary sound extraordinary.

    @Mark, @Greg – virtualisation is reaching the end of the hype cycle, but there is still a lot of noise in data centre environments where disaster recovery and high availability are big issues.

    Note that a SAN (or NAS) is required when you want to be able to move a running VM onto other hardware. Some of the “bare metal” VM tools are also designed to work better with a SAN – the idea being that the VM software can be stored in flash memory, so the hardware does not need any physical disks.

  6. Firstly, Matthew, I must thank you for not bringing the virtualisation vendors and their offerings into the argument, as I would have stopped reading immediately.

    Secondly, while I agree with most of your points, I don’t necessarily agree that the sudden urge to splurge on virtual machines is a bad thing. On the contrary: in my experience, businesses tend to consolidate multiple server roles onto a single server to save on the need to buy physical hardware. With the growth in virtualisation platforms, I see more purpose-built servers in the data centre, and even in the small business.