The Next‐Generation Virtual Data Center

The next‐generation virtual data center represents the culmination of virtualization technologies in a consolidated virtual enterprise data center. In mission‐critical environments, the cost of maintaining the highest levels of uptime can be extreme, and introducing virtualization brings several key benefits that become obvious almost immediately. The idea itself is not new: the data center concept started virtually on the mainframe, where the term hypervisor originally described the multi‐user time‐slicing technology on the "big iron" from which today's virtual‐to‐physical interface technologies take their name. Enterprises have many reasons to look at virtualization in the data center, but understanding the case for transitioning to a virtual data center starts with an analysis of the costs.

Benefits of Virtualization in the Data Center

The transition to virtual servers leads to a drastic decrease in the number of physical servers. Server consolidation accounts for a large part of the cost savings, but it isn't the only factor. Beyond the infrastructure components themselves, whose cost drops when the same server workloads are virtualized, the largest costs in the data center are power, rack space, and bandwidth. The first two are where data center costs decrease most sharply.

Power Savings

Through consolidation and greater utilization of existing server hardware, the number of physical servers can be reduced to 10 percent or less of the initial count in the environment. The power cost calculations in the first part of this series provided a means to determine the savings from that reduction in hardware. They didn't, however, address the unique power requirements a data center has for cooling, nor the physical space needed to house an enterprise server farm. A 90 percent reduction in hardware quite simply allows the same server workload to occupy one rack where it previously needed ten.
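As a rough illustration of that arithmetic, the following sketch estimates annual power savings from a 10:1 consolidation. Every figure in it, including the server count, the per‐server draw, the PUE factor standing in for cooling overhead, and the electricity rate, is an assumption chosen for the example rather than a measured value.

```python
# Illustrative estimate of annual power savings from a 10:1 consolidation.
# Every figure below (server counts, per-server draw, PUE, electricity rate)
# is an assumption for the example, not a measured value.

PHYSICAL_SERVERS = 100      # servers before consolidation
CONSOLIDATION_RATIO = 10    # roughly 10:1, a 90 percent reduction in hardware
WATTS_PER_SERVER = 400      # assumed average draw per physical server
PUE = 2.0                   # assumed power usage effectiveness; cooling roughly doubles the IT load
COST_PER_KWH = 0.10         # assumed electricity rate in dollars
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(server_count: int) -> float:
    """Annual electricity cost, including cooling overhead via PUE."""
    it_load_kw = server_count * WATTS_PER_SERVER / 1000
    return it_load_kw * PUE * HOURS_PER_YEAR * COST_PER_KWH

before = annual_power_cost(PHYSICAL_SERVERS)
# In practice the remaining virtualization hosts draw more per box,
# which narrows the gap somewhat; this keeps the draw constant for simplicity.
after = annual_power_cost(PHYSICAL_SERVERS // CONSOLIDATION_RATIO)
print(f"Before consolidation: ${before:,.0f}/year")
print(f"After consolidation:  ${after:,.0f}/year")
print(f"Estimated savings:    ${before - after:,.0f}/year")
```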

Whether the servers are physical or virtual, bandwidth needs won't vary: each application and role has its own service‐level requirements, and bandwidth depends on that factor alone. Power, however, changes dramatically in the transition from physical to virtual, with rack space close behind.

A typical data center allocates power to a rack or cage based on the needs of the server hardware inside. As physical servers multiply, so does the power needed for every component in them. In a virtual data center, the physical hardware is greatly reduced by the server consolidation that takes place during the migration to virtual servers. The power demands of the individual host servers do rise as a result of more processors and processor cores or additional RAM; however, the overall power load decreases because components such as network cards, video hardware, and motherboards are no longer duplicated. Additionally, modern servers capable of virtualization are designed to share physical resources efficiently among virtual server workloads. By embracing a virtualization strategy, a data center environment becomes much more efficient.

Hardware Utilization and Costs

When the servers in the data center are virtualized, each investment in server hardware can be fully utilized. The abundance of cheap processing power has led businesses to buy far more server than they need for even the most basic roles; even the lowest‐end servers available are overkill for basic infrastructure services. Infrastructure servers such as those running DNS or DHCP typically require very little processing power to perform effectively, but keeping those roles separate is important because so many other services rely on them working correctly. In the virtual data center, these infrastructure services can maintain the separation they need while still operating within the confines of a single physical server, as the sizing sketch below illustrates.
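The sketch below makes the point concrete: given assumed per‐role requirements and an assumed host configuration, many sets of lightly loaded infrastructure roles, each kept in its own virtual machine, fit comfortably on a single physical host. All of the numbers are illustrative.

```python
# Rough sizing sketch: how many sets of lightly loaded infrastructure roles
# (DNS, DHCP, and similar) fit on one virtualization host while each role
# keeps its own virtual machine. All capacities and requirements are assumed.

host = {"cpu_ghz": 16 * 2.5, "ram_gb": 64}   # assumed host: 16 cores at 2.5 GHz, 64 GB RAM
reserve = 0.20                               # keep 20 percent headroom for the hypervisor and spikes

roles = {
    "dns":  {"cpu_ghz": 0.2, "ram_gb": 1.0},
    "dhcp": {"cpu_ghz": 0.1, "ram_gb": 1.0},
    "file": {"cpu_ghz": 0.5, "ram_gb": 2.0},
}

usable_cpu = host["cpu_ghz"] * (1 - reserve)
usable_ram = host["ram_gb"] * (1 - reserve)

fit_by_cpu = usable_cpu / sum(r["cpu_ghz"] for r in roles.values())
fit_by_ram = usable_ram / sum(r["ram_gb"] for r in roles.values())
print(f"Full sets of these roles per host: {int(min(fit_by_cpu, fit_by_ram))}")
```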

Hardware utilization is key to the savings in a virtual data center. By making more efficient use of physical hardware, the virtual data center environment results in a significant decrease in physical servers. The virtual data center also creates a cost‐effective option for high availability and fault tolerance, and high‐performance shared storage is crucial to keeping costs low and performance high. Traditionally, Storage Area Networks (SANs) have not been used as boot devices for servers. Although boot‐from‐SAN support has been built into modern operating systems (OSs) as far back as Windows 2000, the difficulty of managing solutions based on it has kept adoption low. With virtualization, the ability to run entire virtual guest OSs on the SAN without tying them directly to the host servers has rapidly increased adoption, and less costly iSCSI SANs have begun to supplant Fibre Channel in the data center. These disk subsystems are the backbone of the next‐generation virtual data center.

Workload Balancing

Workload balancing is another concept that the virtual data center has brought into the mainstream. With workload balancing, the next‐generation virtual data center can cost‐effectively balance workloads between physical and virtual hosts in any combination. In a properly designed workload‐balancing scenario, resources can be dynamically provisioned for virtual servers: as the demands on a particular workload increase, available resources can be shifted toward it. Current technologies do not support this while the virtual servers are running, but platform changes in development will enable hot‐adding of virtual CPUs and RAM.
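A minimal sketch of the placement decision at the heart of workload balancing follows: when a virtual server's demand grows, run it on the host with the most remaining headroom. The host inventory, resource figures, and selection rule are assumptions used only to illustrate the idea.

```python
# Minimal sketch of a workload-balancing placement decision: when a virtual
# server's demand grows, run it on the host with the most free capacity.
# The host inventory and resource figures are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Host:
    name: str
    cpu_free: float   # GHz of unreserved CPU
    ram_free: float   # GB of unreserved RAM

def place(vm_cpu: float, vm_ram: float, hosts) -> Optional[Host]:
    """Return the candidate host that keeps the most headroom after placement."""
    candidates = [h for h in hosts if h.cpu_free >= vm_cpu and h.ram_free >= vm_ram]
    if not candidates:
        return None   # no capacity left: alert an operator or provision new hardware
    return max(candidates, key=lambda h: min(h.cpu_free - vm_cpu, h.ram_free - vm_ram))

hosts = [Host("host01", cpu_free=6.0, ram_free=12.0),
         Host("host02", cpu_free=14.0, ram_free=40.0)]
print(place(vm_cpu=4.0, vm_ram=8.0, hosts))   # host02, the host with more headroom
```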

Next‐generation data center hardware already has support for virtualization built in, but it is the maturity of the virtualization platforms, together with guest OSs that are aware they are virtualized, that lets modern OSs capitalize fully on the benefits of virtualization. OSs now have API support built in that enables management solutions for virtual servers to be constructed, and this low‐level support gives developers the flexibility to write the tools the task requires.
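As one concrete example of such an API, the libvirt management library exposes Python bindings that can enumerate and inspect guests on a host. The connection URI and the choice of libvirt are assumptions made for this sketch; other platforms expose comparable interfaces through their own SDKs.

```python
# One example of programmatic virtual server management: enumerating guests
# through the libvirt API's Python bindings. The connection URI and the use
# of libvirt are assumptions for this sketch; other platforms expose
# comparable interfaces through their own SDKs.

import libvirt

conn = libvirt.open("qemu:///system")            # connect to the local hypervisor
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped"
    print(f"{dom.name():20s} {status:8s} {dom.maxMemory() // 1024} MB")
conn.close()
```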

One additional clear cost advantage of moving to a virtual data center is the licensing model now embraced by the major platform vendors. Early enterprise‐level virtualization platforms were often more expensive than the servers on which they ran. The trend toward free or low‐cost platforms has dominated in recent years, however, setting up the current revolution in adoption. An enterprise no longer has to choose the most powerful servers for hosting virtual workloads simply to save on software costs. Even server OS licensing now favors virtualization, with vendors offering additional single or multiple copies of their latest products specifically for virtual use.

Transitioning to a Virtual Data Center

The provisioning of new virtual servers in the virtual data center is a process so elegant in its simplicity that it is almost hard to believe. The nature of virtual disks makes them portable, so entire libraries of testing, development, and production baseline servers can be created for nearly instantaneous deployment. Building a new virtual server doesn't require the costly and stringent controls that a physical system does; that work has already been done for each of the baseline virtual machines, and the hardware independence of virtual servers makes compatibility testing unnecessary. From a software standpoint, once a perfect example server is built virtually, it can be duplicated instantly. There are none of the slight variations between OS platforms, such as differing patch levels, that create enormous challenges in maintaining a mission‐critical data center environment.
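The sketch below shows the essence of template‐based provisioning: copy a patched, pre‐tested baseline disk image and record the new guest's settings. The paths, template name, and metadata format are hypothetical; production platforms handle this through their own management tools, but the underlying operation is just a copy of a known‐good image.

```python
# Sketch of template-based provisioning: clone a baseline virtual disk and
# record the new guest's settings. The paths, template name, and metadata
# format are hypothetical; the essential operation is copying a known-good image.

import json
import shutil
from pathlib import Path

TEMPLATE_DIR = Path("/vmstore/templates")   # library of patched, pre-tested baselines (assumed path)
GUEST_DIR = Path("/vmstore/guests")

def provision(template: str, guest_name: str, vcpus: int = 2, ram_mb: int = 4096) -> Path:
    """Create a new guest by duplicating a baseline disk image."""
    disk = GUEST_DIR / f"{guest_name}.qcow2"
    shutil.copyfile(TEMPLATE_DIR / f"{template}.qcow2", disk)
    config = {"name": guest_name, "vcpus": vcpus, "ram_mb": ram_mb, "disk": str(disk)}
    (GUEST_DIR / f"{guest_name}.json").write_text(json.dumps(config, indent=2))
    return disk

# provision("baseline-web", "web07")   # every clone starts at the same patch level
```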

Migration

The migration process can take many different forms, but for the data center there is really only one acceptable approach. The enterprise migrating to a virtual data center must be able to do so without downtime or interruption to the live environment. The migration solution must be able to analyze the running production workloads, profile them, provision appropriate virtual servers, and incrementally migrate the workloads at the times most advantageous to the company and with the least impact on the IT personnel who will be performing the migration.
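That workflow can be sketched in a few lines: profile each running workload, size a virtual server with headroom above its observed peaks, and schedule the cutover for a quiet window. The profiling data, headroom factor, and maintenance windows shown here are placeholders, not output from any real tool.

```python
# High-level sketch of the migration workflow: profile each running workload,
# size a matching virtual server with headroom above observed peaks, and
# schedule the cutover for a quiet window. All data here is illustrative.

def profile(server: str) -> dict:
    """Stand-in for a monitoring agent; returns observed peaks and quiet hours."""
    return {"name": server, "peak_cpu_ghz": 1.2, "peak_ram_gb": 3.0, "quiet_hours": "02:00-04:00"}

def size_virtual_server(p: dict, headroom: float = 1.25) -> dict:
    """Size the target VM with 25 percent headroom above observed peaks."""
    return {"name": p["name"],
            "vcpu_ghz": round(p["peak_cpu_ghz"] * headroom, 1),
            "ram_gb": round(p["peak_ram_gb"] * headroom, 1)}

def plan_migration(servers):
    """Produce an incremental, per-server migration plan."""
    plan = []
    for s in servers:
        p = profile(s)
        plan.append({"source": s, "target": size_virtual_server(p), "window": p["quiet_hours"]})
    return plan

for step in plan_migration(["dns01", "sql03", "web12"]):
    print(step)
```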

Maintaining Uptime

For a migration from a physical data center to a virtual one to be successful, it must not allow for extended downtime. This is perhaps the largest hurdle facing IT managers considering virtualization: they already understand the benefits of virtualization but must ensure that the transition does not adversely affect availability. The virtual data center requires a sophisticated set of tools and applications to transition workloads from physical to virtual. For a mission‐critical data center to exist virtually and make the most of the investments in hardware and software, it is crucial to know exactly which workloads can and should be migrated and how to do so.

A few next steps are required in moving to virtualization in the next‐generation data center. The workload analysis and profiling done during the consolidation process will play a critical role when transitioning to virtualization in the data center. You must then determine the availability needs of each workload and the architecture required to support those goals. Using third‐party software tools to gain this level of insight into the workloads is vital.

Current Trends for Future Results

Current trends toward the virtual data center are driving the development of specialized software to make the transition from physical to virtual as seamless as possible. The state‐of‐the‐art solutions available now allow for automated distribution of workloads and effective migrations from physical to virtual. The future virtual data center takes this one step further by introducing a cloud computing model to virtualization. By building on the tools available now, enterprises can plan for a completely distributed virtual data center where workloads can exist anywhere. Future virtual data center platforms will be aware of where those workloads exist and how to manage them in an automated fashion in the event of any number of disasters. Much like the workflow of any project, the future virtual data center's availability and recovery models can be built to handle any number of circumstances.

Future virtual data centers will need to dynamically shift workloads between servers, and virtual servers between physical hosts, much as routers choose the best path for traffic across the Internet. Without virtualization, this idea simply isn't possible. By analyzing current workloads and trending future utilization, software tools will greatly increase the performance and efficiency of these systems, and the human intervention that is central to today's virtualization implementations in the data center will be greatly diminished. As enterprises look to cut costs and increase uptime, the future of the data center becomes apparent in virtualization.
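A simple sketch of the utilization trending mentioned above: fit a linear trend to historical CPU samples and project it forward, so workloads can be rebalanced before a host saturates. The sample data and the 80 percent threshold are assumptions.

```python
# Sketch of utilization trending: fit a least-squares linear trend to
# historical CPU samples and project it forward so workloads can be
# rebalanced before a host saturates. Sample data and the 80 percent
# threshold are assumptions.

def forecast(samples, periods_ahead: int) -> float:
    """Project a linear trend fitted to equally spaced utilization samples."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + periods_ahead)

weekly_cpu = [0.42, 0.45, 0.47, 0.52, 0.55, 0.58]   # host CPU utilization, last six weeks
projected = forecast(weekly_cpu, periods_ahead=4)
if projected > 0.80:
    print(f"Projected {projected:.0%} in four weeks: rebalance workloads now")
else:
    print(f"Projected {projected:.0%} in four weeks: capacity remains adequate")
```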