Virtualization Basics

To understand where we are today, it is helpful to know a little about how we got here. As a manager you do not need to understand the technical intricacies of virtualization, but it helps to understand the basic concepts as you evaluate options for your organization.

HISTORY

What's old is new again! If you've been around computers as long as the authors have, you'll recall the 1960s and 1970s, when we partitioned expensive mainframes into separate virtual machines to take full advantage of them. This allowed us to run multiple jobs (applications) at the same time. Because mainframes were so expensive, they were designed from the beginning to support partitioning as a way to fully leverage the investment.

Then, starting in the 1980s, we began to move to inexpensive servers based on the Intel x86 series of processing chips. We also began developing client/server applications to take advantage of the processing power that we now had on the desktop. Windows NT and, later, Linux made it easy and inexpensive to set up a new server for each new application required by the business. Users were happy because they had a dedicated server to ensure on-demand availability for each of their important applications. IT support personnel were happy because it was easy to pinpoint the source of application problems reported by the users.

But now, in the 2000s, we face a new series of challenges. The need for servers has increased even further because of the proliferation of Web-based applications accessed by both internal and external users. Continuing to add more and more servers has created a new set of problems, including:

▲ Poor investment utilization. The market research firm IDC estimates that the typical server spends 85% of its time waiting for a user request, not performing any useful work. Most applications are used only during business hours, yet servers are left running 24 hours a day. Many servers are lightly used, yet users still require their applications to be available on demand.

▲ Increased energy usage. No matter what is running on a server, energy is required to power the processor, spin the hard drive(s), and keep memory refreshed. As this energy is used and converted to heat, the amount of energy required for cooling the server room or data center goes up.

▲ Facility limitations. Increasing the number of servers may strain the ability of the local power grid to provide enough power to run and cool the facility housing the servers; it also increases the amount of physical space needed.

▲ Increased IT costs. As more servers are added, additional IT personnel are required to manage the servers. The cost of disposing of end-of-life servers can also be a major expense.

▲ Lack of disaster recovery. As the number of servers increases, it becomes more complex and resource intensive to properly protect applications from natural disasters, security threats, and equipment failure. If quick restoration of an application is required, a duplicate set of hardware must be available and ready to go. The expense of doing this makes it more likely that disaster recovery will not be done properly and that restoration objectives will not be met.

To help address these issues, virtualization of x86-based servers was first made available by VMware in 1999. As of this writing, VMware is the leader in the virtualization market. Other companies, such as Citrix and Microsoft, have entered the market with their own products to virtualize all or part of an IT infrastructure.

HOW IT WORKS

Virtualization, as most commonly applied today, essentially uses software to mimic hardware. On a server or a desktop PC, it allows multiple operating systems and multiple applications to run on a single computer. The software that makes this possible is known as a hypervisor. The hypervisor forms a layer between the physical hardware and the virtualized operating systems or applications. Sometimes the hypervisor acts as its own operating system and works directly with the hardware (a "bare-metal," or Type 1, hypervisor); in other cases, it sits on top of the operating system already installed on the machine (a "hosted," or Type 2, hypervisor). The hypervisor makes each virtual operating system or application think that the underlying hardware belongs to it alone, and it provides an isolated environment so that a problem with one virtual machine does not affect another running on the same hardware.
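
To make this less abstract, here is a minimal sketch of what managing guests through a hypervisor can look like in practice. It assumes a Linux host running the KVM/QEMU hypervisor with the libvirt-python bindings installed; neither product is discussed in the text above, so treat this purely as an illustration of one common stack. The script simply asks the hypervisor which guest virtual machines it is running on a single physical server:

    # Illustrative sketch only: assumes a Linux host running KVM/QEMU with the
    # libvirt-python bindings installed (pip install libvirt-python).
    import libvirt

    # Connect to the hypervisor managing this physical machine.
    conn = libvirt.open("qemu:///system")

    # Every guest operating system defined on this one host.
    for dom in conn.listAllDomains():
        state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
        print(f"{dom.name()}: {vcpus} vCPU(s), {mem_kib // 1024} MiB of memory")

    conn.close()

Each line of output is a separate "server" from the point of view of its users, yet all of those guests share one physical box under the hypervisor's control.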

Other types of virtualization, such as virtualizing networks, desktops, and storage, work in a similar fashion. A piece of software, sometimes assisted by specialized hardware, allows you to create a virtual resource from one or more physical resources. You can then manage and allocate this resource to users and applications as needed. Each type of resource is underutilized for its own reasons, but virtualization lets us change that.
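
As a concrete, deliberately simplified example of this pooling idea, the sketch below models a storage pool. The class and the capacity figures are hypothetical, not any vendor's product or API; the point is only that several physical disks can be presented as one virtual resource and handed out in slices as applications need them:

    # Toy illustration of resource pooling; not a real storage product or API.
    class StoragePool:
        def __init__(self, physical_disk_sizes_gb):
            # Several physical disks appear as one large virtual capacity.
            self.capacity_gb = sum(physical_disk_sizes_gb)
            self.allocated_gb = 0

        def allocate(self, app_name, size_gb):
            # Hand out a slice of the pooled capacity to an application.
            if self.allocated_gb + size_gb > self.capacity_gb:
                raise RuntimeError("pool exhausted")
            self.allocated_gb += size_gb
            remaining = self.capacity_gb - self.allocated_gb
            print(f"{app_name}: {size_gb} GB allocated, {remaining} GB left in pool")

    pool = StoragePool([500, 500, 1000])   # three physical disks, 2 TB pooled
    pool.allocate("email archive", 750)
    pool.allocate("file shares", 400)

The same pattern, pool the physical resources and then allocate virtual slices on demand, applies to networks and desktops as well.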

WHY IT'S GREEN

Virtualization offers an organization great potential benefits quite apart from its impact on the environment. Since this book is about IT's impact on the environment, however, let's look at what virtualization has to offer from a green perspective:

▲ Decreased energy use. Increasing the utilization of our computing resources and reducing the number of physical devices decreases the amount of energy required to operate them. It also decreases the amount of energy required to cool them, roughly doubling the reduction in energy usage (a back-of-envelope calculation follows this list).

▲ Reduction in toxic waste. Fewer physical devices to purchase also means fewer devices to dispose of at end of life. Most of these electronic devices contain toxic materials such as lead, cadmium, mercury, hexavalent chromium, polybrominated biphenyl (PBB), and polybrominated diphenyl ether (PBDE) flame retardants, which can be released into the environment if the devices are not recycled properly.

▲ Reduction in facility requirements. Reducing the amount of equipment also reduces the space needed to house it, which means the business can grow without having to build ever-larger data centers.
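
To put a rough number on the energy bullet above, here is a back-of-envelope calculation. Every figure in it (server count, consolidation ratio, wattage, and the assumption that cooling consumes about as much energy as the servers themselves) is an illustrative assumption, not data from the text:

    # Back-of-envelope estimate; all figures are illustrative assumptions.
    servers_before = 20      # lightly used physical servers
    servers_after = 4        # the same workloads consolidated as virtual machines
    watts_per_server = 300   # assumed average power draw per physical server
    cooling_factor = 1.0     # assume cooling uses roughly as much energy as the
                             # servers themselves (a PUE of about 2)
    hours_per_year = 24 * 365

    def annual_kwh(server_count):
        it_load_kwh = server_count * watts_per_server * hours_per_year / 1000
        return it_load_kwh * (1 + cooling_factor)   # IT load plus cooling

    saved = annual_kwh(servers_before) - annual_kwh(servers_after)
    print(f"Estimated energy saved: {saved:,.0f} kWh per year")

With these assumed figures, consolidating 20 lightly used servers onto 4 saves on the order of 84,000 kWh per year, roughly half of it in the servers and half in the cooling.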

So whether you use virtualization to save money or to be green, everyone wins.
