Any business will have some holdover legacy technology to support even in a completely revamped service delivery‐based virtual infrastructure. Integrating the legacy components in as complete a way as possible will help limit the costs associated with the project overall. Businesses must be sensitive to carrying forward only the right items and understanding where the infrastructure will have to be overhauled to support desktop virtualization.
Trying to integrate technologies and solutions just for the sake of reuse can lead to greatly increased costs, particularly in terms of support, which is exactly what desktop virtualization is meant to reduce. Embracing the end user service delivery goal requires a thorough understanding of how these legacy and emerging technologies fit together so that evolving service delivery models can be built in the most cost-effective way.
The first chapter looked at the goal of end user service delivery and how desktop virtualization helped to meet that goal. To do so required a step back in thinking to evaluate how users interact with their desktops and then apply this knowledge to effectively leverage desktop virtualization. As part of that journey, we were able to determine where desktop virtualization didn't fit particularly well. This critical recognition then shaped the conversation surrounding desktop virtualization. Instead of looking at desktop virtualization as a destination in service delivery, we were able to see it as a piece of an evolving service delivery strategy.
The second chapter reviewed the legacy components that are already a part of most network infrastructures and explored where to place them in a desktop virtualization model. This important step ensured that any new infrastructure met the goals without making unnecessary purchases or forcing components into a VDI build-out where they didn't belong. The emergence of the cloud, with all of the buzz surrounding it, as a possible part of the desktop virtualization and service delivery discussion meant that we had to take this new paradigm, and the possibilities of hosted VDI, into account.
Along the way, we saw that some of the promises of hosted VDI need technologies that don't yet exist. This chapter will take a forward look so that you can see where solutions like hosted VDI will make sense. Additionally, we are going to see what critical technologies are needed in order for desktop virtualization to become a larger part of a service delivery strategy.
This chapter will build upon the lessons learned in the first two chapters to determine how to build a successful VDI that meets the goals of end user service delivery without overreaching as a solution. We are going to apply the lessons learned earlier to allow us to best leverage desktop virtualization as part of a continually developing series of service delivery methods.
One of the keys to desktop virtualization is to understand the limits of technology now while anticipating what is coming in the near future. The market is constantly changing as vendors see how businesses want to leverage desktop virtualization and then help adapt the technologies to work where they haven't quite been a good fit yet. This chapter will explore where natural progressions in technologies will enable greater use of desktop virtualization.
This chapter will conclude with a solid explanation of what can be done now to maximize an investment in desktop virtualization and what to expect in the near future. This understanding will provide a means to determine whether desktop virtualization is the right service delivery technology for your organization—or will be as the technologies underpinning it mature.
At this point, you have been able to review existing solutions in both virtual and physical infrastructures. When we began this journey, we started with a comparison of server virtualization and desktop virtualization. This seemed natural because the lessons learned with server virtualization were thought to translate to desktop virtualization in the same ways. We quickly found out that this wasn't the case. The end user experience that means so much to the success of desktop virtualization was the critical difference.
We next looked at how virtual desktops and physical desktops fit into service delivery and what this meant for your organization. Transitioning from physical desktops to virtual ones had a theoretical payoff in terms of reduced total cost of ownership and support. However, depending on how the transition occurred (that is, virtual desktop access from a physical desktop, thin clients, and so on), the return on investment and how well an organization could meet the reduced support cost goals varied wildly.
Towards the end of the second chapter, we investigated how physical desktops fit into the equation. What do you do with all of the legacy desktops that you have? This question creates a big problem for IT architects when trying to determine what solutions from the physical infrastructure can be used to support the virtual infrastructure.
Let's take a look at the options for addressing perhaps the most prevalent legacy component in the transition to a virtual desktop infrastructure: the physical desktop. There are three options available with regard to physical desktops in a virtual environment:

1. Keep the physical desktops and use them as access devices for virtual desktops
2. Replace the physical desktops with thin clients
3. Convert the physical desktops into thin clients by replacing their OS with a stripped-down one
Each of these options carries inherent consequences. The focus is to balance the competing goals of reduced total cost of ownership/support with reduced capital expenditures.
The first option really requires no additional capital expenditures for desktop replacement.
Here you only have to use a software solution to access the virtual desktops. In the case of RDP, there is nothing to install; you can start accessing virtual desktops immediately. Installing clients for other virtualization technologies is a matter of seconds per machine. This really is the simplest way to integrate legacy technology into a desktop virtualization plan.
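As a minimal sketch of how simple this option is, consider the following, which assumes a Windows endpoint with the built-in mstsc RDP client; the pool host names are placeholders, not real infrastructure:

```python
import subprocess

# Hypothetical virtual desktop hosts -- placeholders, not real infrastructure.
VDI_HOSTS = ["vdi-pool-01.example.local", "vdi-pool-02.example.local"]

def launch_rdp_session(host: str, fullscreen: bool = True) -> None:
    """Open the built-in Windows RDP client (mstsc) against a virtual desktop."""
    args = ["mstsc", f"/v:{host}"]
    if fullscreen:
        args.append("/f")  # /f starts the session full screen
    subprocess.Popen(args)

launch_rdp_session(VDI_HOSTS[0])
```

Nothing here is installed beyond what ships with Windows, which is precisely why this option carries no capital cost.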
There is one major drawback, though, to this first option. You now have a "double desktop" scenario. You have to support the physical desktop and the virtual desktop. This setup clashes with the reduced support cost goal of desktop virtualization. In fact, the single largest driver of desktop virtualization is reducing the support costs for the desktop infrastructure, and this method arguably increases those costs. For this reason, the "double desktop" option should be used with caution.
The second option requires the most up-front capital but essentially reduces the cost of supporting the physical device to near zero. Thin clients have been around for a long time and have proven to be a great mechanism for delivery of service. However, they aren't free. Although thin clients generally cost less than their desktop counterparts, they can be as expensive as a desktop, depending on the features needed.
The third option is one that has tremendous flexibility but has yet to see widespread adoption. Several vendors have created solutions that replace the operating system (OS) on PCs with a stripped-down one designed to perform the same hardened, maintenance-free function as a thin client. The obvious plus is that there is no need to purchase additional hardware.
Take note that this solution isn't quite as simple as it may seem. One of the major benefits of thin clients or physical desktops is that the OS is designed to support the hardware in the best possible or most flexible manner. On physical desktops, this comes in the form of extensive hardware driver support. On thin clients, this comes in the form of specialized hardware and software built to work together. When you try to "convert" a desktop into a thin client, you might run into problems with hardware compatibility and vendor support. Success is completely dependent on testing and selecting a solution that supports your entire desktop infrastructure.
Using desktops as thin clients is a means to embrace virtual desktops now with reduced support costs for the access device because it isn't serving in a traditional desktop sense. Earlier chapters explained how vendors have built solutions, such as Microsoft's Windows Thin PC, to perform this function with the same goal: leveraging desktops in a thin client-like manner until thin clients can replace them. This signals that the industry recognizes the growing problem of the transitional period from physical desktops to thin clients.
Using desktops in a dual desktop role—where the physical and virtual desktops are both running full OSs and application sets as needed—hasn't aligned well with the reduced support costs goal of desktop virtualization. Although this option might be great for testing purposes, it isn't the most cost‐effective way to achieve a production virtual desktop infrastructure.
I also don't recommend a wholesale shift to thin clients as a replacement for desktops unless the desktops are already due for replacement, in which case the choice can be made to replace them with thin clients. Thin clients certainly make sense for reduced support and power consumption. This is the final destination for replacing legacy desktops that we should strive for, but the capital expenditure makes it hard to justify in the context of reduced operating costs. If the infrastructure is set for a desktop hardware refresh, this option is ideal for maximum return on investment.
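A quick back-of-the-envelope comparison shows why timing matters. Every figure below is an invented assumption for illustration; substitute your own hardware quotes and help desk cost data:

```python
# Illustrative numbers only -- every figure here is an assumption, not a
# quote from this guide or any vendor.
desktops = 500
thin_client_price = 400.0    # assumed unit price for a thin client
desktop_price = 650.0        # assumed unit price for a replacement PC
support_pc = 250.0           # assumed annual support cost per physical desktop
support_tc = 150.0           # assumed annual support cost per thin client

# Replacing working desktops early is pure new capital spend.
early_capex = desktops * thin_client_price
annual_savings = desktops * (support_pc - support_tc)
print(f"Early swap-out pays back in ~{early_capex / annual_savings:.1f} years")

# At refresh time, only the hardware price *difference* is incremental,
# and here the thin client is the cheaper device.
incremental = desktops * max(thin_client_price - desktop_price, 0.0)
print(f"Incremental capex at refresh: ${incremental:,.0f}")
```

Under these assumed numbers, an early swap-out takes roughly four years to pay for itself, while a swap made at the refresh point costs nothing extra, which is exactly why the refresh cycle is the ideal trigger.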
Being able to "convert" desktops into thin clients buys businesses time to get the maximum life out of existing legacy desktops without having to expend massive amounts up front to embrace desktop virtualization. This solution, as noted before, isn't perfect. Some hardware may not work in this role due to hardware incompatibilities. Nonetheless, it does fit a wide variety of hardware with minimal costs. There are certainly costs to transition to the use of legacy desktops in this manner, including labor to install and test, but this cost is far less than hardware replacement.
Desktops are arguably going to be the largest single legacy component to consider when figuring out how to integrate what you have into what you need to support VDI. However, there is a whole host of other components that need to be taken into account. The first two chapters investigated the networking and storage requirements for VDI.
The required investment in network infrastructure upgrades to meet the user experience requirements of VDI is going to vary depending on where you are in a technology refresh cycle, much like desktop replacement. VDI requires constant connections to virtual desktops, so investments made in load balancers, redundant switches, and redundant components in general will be reusable. Also, upgrading to 10 Gigabit Ethernet will significantly improve performance as well as the effectiveness of link aggregation, a tactic that becomes useful in managing VDI's dense networking arrangements.
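To see the kind of load involved, here is a rough sizing sketch for display protocol traffic alone; the per-session bandwidth figures and the user mix are assumptions for illustration, so measure your own workloads before planning:

```python
# Per-session display bandwidth in kbps -- assumed planning figures, not
# measurements; real numbers depend heavily on protocol and workload.
SESSION_KBPS = {"task_worker": 150, "knowledge_worker": 500, "multimedia": 2000}
USERS = {"task_worker": 800, "knowledge_worker": 400, "multimedia": 100}

peak_mbps = sum(SESSION_KBPS[role] * count for role, count in USERS.items()) / 1000
headroom = 1.5  # burst and protocol overhead factor, also an assumption
print(f"Estimated peak display traffic: {peak_mbps * headroom:,.0f} Mbps")
# ~780 Mbps leaves almost nothing on a 1 GbE uplink once storage and
# management traffic share it -- hence 10 GbE and link aggregation.
```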
Storage requirements, as discussed in the preceding chapter, are going to change as well. VDI relies upon fast and reliable storage. This generally means SAN storage. For many companies that have invested already in a scalable SAN solution that also provides for upgrades to faster interconnect technologies, VDI's storage requirements aren't going to be a problem. Earlier iSCSI SANs are going to be taxed heavily by the increased load that desktop virtualization places on the shared storage.
Application virtualization is a key supporting technology set that can get lost in the discussion of desktop virtualization. Being able to separate the applications from the OS is going to be critical to the rapid deployment of applications in a cost-effective manner. Later, we will explore this in further detail; the key takeaway from this lesson is that many of these other technologies will, at least in the near term, find some use in a VDI environment.
Earlier lessons worked to balance what, and how much, of your legacy solutions to bring forward. Putting that all together here is going to be critical. As long as the focus remains on end user service delivery, you are going to be fine. Previously, the focus was on what you should keep in order to keep the infrastructure build-out costs as low as possible. In this lesson, we aren't going to focus on what to integrate now but on what makes sense as part of a long-term evolution in service delivery.
As part of a long‐term strategy, perhaps the most important component to service delivery with regard to legacy solutions is shared storage. Shared storage, particularly SAN storage, will always play a critical role in the delivery of service. This was the case even before server virtualization, when having centrally managed storage was critical to effective storage utilization and enterprise scalability. It caught on for clustering and high availability and made a natural fit for server virtualization shortly thereafter. SAN storage is the key enabler in a scalable desktop virtualization solution as well. The SAN solution that is used with desktop virtualization will have a tremendous demand placed on it in terms of both disk performance and overall storage requirements.
When planning for future growth and performance with shared storage, it is critical to implement a solution that can be upgraded to allow for faster disk and interface technologies. As the demands placed on the SAN increase, there will be a need to eliminate these components as bottlenecks. Faster interconnect technologies will enable many more virtual machines to use the same SAN. Disk performance is a key factor as well; adding more and faster disk spindles facilitates improved performance. New technologies that integrate very high-speed solid state drives for caching may also play a role, particularly as their cost diminishes.
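A rough sizing sketch makes the disk performance point concrete. Every figure below is an assumed planning number for illustration, not a vendor specification:

```python
# Rough spindle-count estimate for a VDI datastore. All figures are
# assumed planning numbers for illustration only.
desktops = 1000
iops_per_desktop = 15      # assumed steady-state IOPS per virtual desktop
storm_multiplier = 4       # assumed worst case for boot/logon storms
write_ratio = 0.7          # VDI workloads tend to be write-heavy
raid10_write_penalty = 2   # each write costs two back-end I/Os on RAID 10
iops_per_spindle = 180     # typical planning figure for a 15k rpm disk

peak = desktops * iops_per_desktop * storm_multiplier
backend = peak * (1 - write_ratio) + peak * write_ratio * raid10_write_penalty
print(f"Front-end peak: {peak:,} IOPS; back-end: {backend:,.0f} IOPS")
print(f"Spindles needed with no cache tier: {backend / iops_per_spindle:.0f}")
# Hundreds of spindles for 1,000 desktops is why SSD caching tiers and
# faster interconnects matter so much here.
```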
Shared storage is just part of the puzzle. Although the networking equipment that is currently in place to support both physical and virtual networks will work well, the demands of desktop virtualization will require upgrades in speed. The good news is that a transition can occur. The iSCSI wave brought with it a new era of interoperability that should continue for the foreseeable future. This good news means that additional infrastructure equipment can be added to increase speed without having to do a complete replacement.
Fibre Channel, by contrast, hasn't made this the most cost-effective path. Avoid purchases of proprietary technologies that may not be supported in the next 5 years. That said, the evolving technologies associated with Fibre Channel over Ethernet (FCoE) offer a transitional mechanism for moving off current Fibre Channel SAN storage. FCoE provides the administrative ease associated with iSCSI while running atop existing copper cabling and network interface cards.
Integration has been well covered, and how to bring legacy and emerging technologies together into something coherent and cost-effective has been an important part of this lesson. However, many new technologies are going to emerge that will bring challenges to integration as often as they solve them. Understanding where to invest now to avoid obsolescence will require planning.
It is time to start looking forward so that you can effectively plan for how desktop virtualization will change as technologies adapt. We have already discussed that desktop virtualization doesn't fit all scenarios. It isn't going to be a panacea for the problems of end user service delivery. This isn't because the concept of desktop virtualization is flawed. Far from it; we know that desktop virtualization is a great step towards improved service delivery. This guide has shown that VDI fits scenarios such as LAN-based access very well. The biggest problem is that the technologies required to make desktop virtualization fit a much larger set of use cases don't yet exist.
In the previous chapter, I talked about hosted VDI, which provides a great example of a service delivery model that is waiting for technology to catch up with the concept. There are many emerging vendor solutions that are beginning to capitalize on creating a cloud-based solution for desktop virtualization. This seems all well and good, but we have already determined that bandwidth and latency requirements are critical to service delivery.
Failing to deliver enough bandwidth with low levels of latency will result in a user experience that is going to seriously hinder end user productivity. How do you control the bandwidth and latency for users from all types of access devices, on all types of networks, from various locations who are trying to access a hosted VDI while having an experience that meets their expectations? That question is loaded with variables that any cloud‐based VDI solution is going to have to address.
In Chapter 2, I made it clear that any hosted VDI provider is going to need to know where a user is connecting from and ensure that the user is connected to the lowest latency, typically geographically closest, cloud facility—and based on the service level requirements, provide a near‐LAN quality experience. Right now, this can't be done with certainty. There need to be massive increases in Internet connection speeds and a reduction in the price of that Internet connectivity in order to realize this goal.
This challenge doesn't just affect hosted VDI from vendors. Building your own private cloud infrastructure is going to have the same challenges. The larger and more distributed your user base, the more difficult it will be to provide consistent experiences using virtual desktops. You have to be able to provide bandwidth on demand to the nearest data center. The technology set required for this undertaking is something we have to wait for.
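The core of any such broker, hosted or private, is steering each user to the lowest-latency facility. Here is a dependency-free sketch of that idea; the gateway names are placeholders, and a real connection broker would use far more than a single TCP handshake:

```python
import socket
import time

# Hypothetical gateway endpoints -- placeholders for a provider's real ones.
GATEWAYS = ["vdi-east.example.com", "vdi-west.example.com", "vdi-eu.example.com"]

def tcp_latency_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Estimate round-trip time from one TCP handshake; crude but illustrative."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return float("inf")  # unreachable gateways sort to the bottom

best = min(GATEWAYS, key=tcp_latency_ms)
print(f"Routing session through lowest-latency gateway: {best}")
```

Picking the fastest gateway is the easy part; guaranteeing that the chosen path sustains near-LAN latency for the life of the session is the piece that still depends on bandwidth we are waiting for.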
The increasing use of smartphones and tablet PCs as a means for accessing virtual desktops will introduce new challenges to connectivity that current networks don't address well. Although there are emerging 4G and WiMAX broadband networks, the maturity and reliability of these infrastructures and the ability of devices to support them aren't yet at a level that guarantees service delivery. Without these technologies further evolving and becoming much more robust, access to VDI from such devices is going to remain a "best effort" scenario in which consistent service delivery is highly dependent on device, location, other users, and a whole host of variables that cannot be controlled.
The possibility of using offline virtual desktops is one that has come about to help address the issues with offline or highly mobile users that have less than ideal connectivity. However, these solutions don't support all platforms, and virtual desktops consume a large amount of local storage when used in this scenario. This might work for a mobile user with a laptop, but those with a smartphone or tablet are not going to be able to get the experience they expect due to processing and storage limitations on the devices.
Thin clients are a fantastic means of accessing virtual desktops, particularly in the LAN environment. However, they aren't going to fit all end user scenarios. In the first chapter, I outlined the challenges with heavy users and VDI. Thin clients as a virtual desktop presentation solution are great for standard users, but specialized peripheral support and multiple monitor support are still lacking. Unlike a PC that can be upgraded to support newer or faster interconnects, like USB 3.0 or eSATA, a thin client is limited in upgradability. Several vendors provide multiple monitor support, but upgrading a thin client to support additional monitors isn't going to be an easy task. Support for more than two monitors is almost impossible to come by. There are some thin clients that support quad monitors, but these are far from mainstream and carry large price tags.
Storage performance and costs are going to continue to be a major factor in limiting adoption of desktop virtualization. As previous chapters discussed, the trade from local inexpensive storage on desktops to shared and much more expensive SAN storage is going to be a necessary part of any VDI transition. The problem is that a smaller company isn't necessarily going to have a SAN and/or the talent to manage one.
Shared storage is going to have to come down in price to a point where the differential between desktop storage and shared storage is far less than it is today. A company that wants to dramatically increase the number of virtual desktops that it can provide is not going to be able to do so without massive capital expenditures. SAN vendors' reliance on proprietary hard drives instead of commodity hardware means that upgrades and interoperability are a huge problem. SAN solutions will have to become further standardized and modular to provide companies with greater flexibility in VDI storage options. This isn't to say that this is true of all SAN vendors. There are some that offer modularized solutions to increase interoperability.
Management solutions for shared storage are also going to have to become much easier to work with. It has only been very recently that major storage vendors have realized the need to provide simple‐to‐manage entry‐level offerings for shared storage. SAN management will need to get to a point where an IT generalist can easily provision storage and provide onsite service to the shared storage array.
As VDI solutions mature and continue to achieve widespread adoption, management solutions must keep pace. There are a whole host of point solutions that exist to help with management, but a unified toolset that is capable of monitoring and managing all of the critical infrastructure components isn't as simple as an off‐the‐shelf purchase.
The increased demands of VDI require that many more aspects of the network infrastructure be managed in order to ensure effective service levels. In the previous chapters, we looked at what upgrades need to be made to the network infrastructure to support VDI. Let's turn our attention to how to determine whether you can manage those demands, understand the stresses placed on the infrastructure, and verify that those demands have been met.
I'm not saying that the existing management solutions can't provide crucial metrics to assist with isolating bottlenecks and determining where upgrades must be made. I'm positing that management solutions are general in nature and aren't designed specifically with desktop virtualization and end user service delivery through VDI in mind. Vendors need to create simplified management solutions that allow network teams to adapt quickly to the changes in demands that VDI places on the network and to ensure that virtual desktop service expectations are met. VDI vendors offer many of the requirements for management, but these aren't yet mature. Third‐party solution providers that specialize in this area have built great management solutions for the physical desktop space and should extend and expand these solutions so that management can occur within a single pane.
These technology enhancements that we are waiting for are going to be critical to expanding VDI adoption. The areas of storage, network infrastructure management, external connectivity, mobile devices, and overall usability of the solutions are going to need to see technology enhancements in order to increase the adoption of desktop virtualization. Without these enhancements, smaller businesses face a steep set of challenges in implementing any form of VDI, and larger businesses face scalability and management challenges with their desktop virtualization endeavors. The overall costs will also need to come down as the technical solutions evolve in order to allow a wider array of businesses to justify the costs of building a VDI versus the savings in support and ongoing management.
Determining where your investments in desktop virtualization as a service delivery model have been successful will require a period of analysis. During this time, the pre‐VDI support costs will be compared with those in a post‐VDI implementation. The cost benefits of desktop virtualization rely on a reduction in the total cost of ownership of the infrastructure. The first chapter examined how critical it is to understand the benefits of desktop virtualization and have realistic expectations of the best uses for desktop virtualization.
Understanding the goals of a desktop virtualization project is crucial to success. Likewise, the planning and execution phases are critical to success. So how do you determine that you are undertaking a desktop virtualization project with the proper understanding of what you can expect to accomplish and how to quantify that success? How do you determine that you got it right?
You need to compare the desktop virtualization solution that you have built with the legacy infrastructure it is replacing. Several problems threaten to make this comparison impossible: gathering accurate user feedback, having appropriate pre-VDI metrics on support incidents and responses, and accounting for support issues unseen prior to the VDI implementation are all going to make this task difficult.
During the pilot phase and even into a limited production VDI build out, users are going to be a key source of information on the success of the VDI implementation. Creating the user workspace in a manner that is equal to or greater than the users' current expectations was part of the original goals we outlined. Desktop virtualization is about end user service delivery. You must gather intelligence from the user base that will allow you to accurately measure whether the users think that the VDI is "better" than what they had before.
This will be a problem. The nature of surveying users makes the objective determination of success difficult. At one end of the spectrum, there will be users who are uncomfortable with the change and will perceive it as a diminished experience. At the other end, there will be users who are either excited to work with the new technology or believe that the transition will fix other unrelated problems; these users may incorrectly report increased service levels. Other users may see an increase or a decrease in the quality of their end user experience but may not report it accurately for a number of reasons, such as unfamiliarity with the way VDI works or use of the new VDI too limited to let them accurately describe their experience.
The biggest problem you are going to face with user feedback is that you are trying to compare objective costs for support before and after the use of desktop virtualization by relying on subjective user input. There will be a good core of user feedback that does help in determining whether you have succeeded in your VDI goals.
Establishing a baseline of support metrics must be done prior to any VDI pilot. I can't emphasize this enough. Although gathering and categorizing support metrics is something that should be a part of all business IT directives, we know in reality that this doesn't always happen like it should. If there is no way to determine the root cause of support issues prior to implementing a desktop virtualization project, there will be no way to determine whether the support issues that desktop virtualization is meant to reduce have actually decreased.
Part of gathering these metrics in pre‐VDI and post‐VDI environments is isolating what support metrics are meant to be reduced by VDI. Up until now, we have focused on the generalized reduction in support costs through desktop virtualization. Now we need to look more granularly at support incidents and isolate which types of support incidents are going to be reduced, eliminated, and resolved more quickly through the use of desktop virtualization. Even if you are gathering pre‐VDI support metrics, those must be of the type that can be analyzed in the context of what desktop virtualization can achieve.
One of the best ways to establish appropriate metrics is to build categories that reflect the different types of incidents that can occur in general with support. The following work well to start:

- Hardware failure
- Driver conflict
- OS problems
- Software installation
- Application crash/problems
- User training
Many incidents in these categories should be virtually eliminated with desktop virtualization (for example, driver conflicts and hardware failures). Others, such as OS problems, software installation, and application crashes/problems, will decrease under VDI. Some, as they are outside the desktop environment either physical or virtual, will stay at the same rate or may even increase (for example, user training).
The support categories I have provided are by no means exhaustive. These serve as an example of commonly encountered support issue types and how they vary wildly in the way that desktop virtualization affects them. Determining appropriate support metrics with the most appropriate types of incidents for your environment will help ensure that results can be accurately measured.
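As a sketch of the comparison itself, the snippet below tallies incident counts by category; in practice the inputs come from your help desk system, and every number here is invented purely to show the shape of the analysis:

```python
from collections import Counter

# Invented incident counts -- stand-ins for real help desk exports.
pre_vdi = Counter({"hardware_failure": 120, "driver_conflict": 45,
                   "os_problem": 200, "software_install": 140,
                   "app_crash": 160, "user_training": 80})
post_vdi = Counter({"hardware_failure": 8, "driver_conflict": 2,
                    "os_problem": 90, "software_install": 60,
                    "app_crash": 110, "user_training": 85})

print(f"{'category':<18}{'pre':>6}{'post':>6}{'change':>9}")
for category in sorted(pre_vdi):
    pre, post = pre_vdi[category], post_vdi[category]
    print(f"{category:<18}{pre:>6}{post:>6}{(post - pre) / pre:>9.0%}")
```

Even with made-up data, the pattern the text describes is visible: hardware and driver incidents collapse, OS and application incidents shrink, and user training holds steady or creeps upward.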
What do you do about new types of support that might be introduced by the transition to desktop virtualization? Most of the new support incidents that come from the VDI environment will be server related. You have moved the desktop into the data center, where the virtual desktops are now hosted on servers. Maintenance and support issues that typically fell to the desktop technician level are now going to happen at the server technician level in the data center. You have to make sure that any possible increase in data center server support costs is factored into the cost equation.
Another challenge is determining the severity and impact of a hardware or infrastructure support issue. As we discussed in previous chapters, the shared nature of VDI, particularly the shared storage on SAN, will mean that downtime and failure of a data center component is going to impact a much larger segment of the user base. Here again, planning and understanding the implications for support with VDI will mean that this metric can be estimated up front—although it can truly only be determined upon failure.
Determining how you got it right means asking the right questions about what types of support you expect to reduce or eliminate and understanding that the metrics you gather may not be as accurate as you want. Being able to understand the support costs for failures you have not yet experienced will also be critical to determining whether your VDI investment paid the dividends you are expecting.
Determining what the future holds for desktop virtualization is a difficult proposition. We have established that the purpose of undertaking a desktop virtualization transition is to provide service delivery through a mechanism that offers reduced support costs and reduced total cost of ownership over traditional physical desktops.
We know that desktop virtualization is not a fit for all current physical desktop user scenarios. The earlier proposed technological changes will expand the adoption of VDI, but is it really going to be the final solution to the problem of end user service delivery? The first chapter introduced the concept of end user workspaces. To most end users, the workspace concept is tied to what they see on their desktops. It is a collection of icons, applications, documents, and user‐specific customizations that enable a user to work in a manner that is most conducive to their needs. For this reason, desktop virtualization will continue to grow; it provides a mechanism for workspace delivery as a consistent desktop experience.
The reduced support costs and increased reliability of computing in a data center will also make desktop virtualization an attractive proposition to businesses of all sizes. This is particularly true as businesses go through the next one or two desktop refresh cycles. These refresh cycles will allow the most cost‐effective times for either partial or wholesale transition to desktop virtualization. During these PC refresh periods, at minimum, we should expect that a good portion of LAN‐based users will prove to be the best candidates for virtual desktops. These LAN‐based users will have the best chance of seamless virtual desktop experiences most closely approximating their physical desktop experiences. The task worker that doesn't have any specialized peripheral support needs or complex processing and graphics requirements is going to be the most cost‐effective type to move to a virtual desktop.
Vendor solutions now allow rich multimedia and 3D support for display of virtual desktops. The demand for this as a part of virtual desktops is going to increase as more users clamor for an experience that is similar to the physical desktop they are used to. This idea illustrates how, in the next few years, the way user workspaces are delivered will change.
Widespread desktop virtualization adoption is going to be most dependent on the availability of cost‐effective and high‐performance storage as well as networking reliability and performance enhancements. Over the next 2 to 5 years, those two categories will need to ramp up to keep pace with the demand.
Storage has been the key problem with effective widespread VDI implementation. Storage costs are continually dropping, and shared storage solutions are becoming much more affordable for small and midsize businesses. Still, the price-per-gigabyte differential between SAN and traditional desktop storage is far from parity. A major cost shift is going to have to occur, similar to the one that happened in backup with disk versus tape costs. Similarly, the way virtual desktops are stored, and the high level of duplication that occurs, is going to have to be addressed.
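A simple worked example shows the scale of that duplication; the image sizes are assumptions, and "shared base image" stands in for whatever linked-clone or layering mechanism a given VDI platform offers:

```python
# Full clones versus a shared base image. Sizes are assumed for
# illustration; "shared base image" stands in for any linked-clone scheme.
desktops = 1000
base_image_gb = 30     # assumed gold-image size
per_user_delta_gb = 4  # assumed unique writes per desktop

full_clones_tb = desktops * (base_image_gb + per_user_delta_gb) / 1024
single_image_tb = (base_image_gb + desktops * per_user_delta_gb) / 1024
print(f"Full clones:        {full_clones_tb:5.1f} TB")
print(f"Shared base image:  {single_image_tb:5.1f} TB")
# Nearly 30 TB of the full-clone footprint is the same OS bits copied
# 1,000 times -- the duplication that has to be engineered away.
```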
Desktop virtualization isn't the only way to deliver services to end users. Application virtualization is showing strong growth as a technology set in and of itself. Combining application virtualization with desktop virtualization allows for further decreased support costs.
When you break down the end user experience to its components on a traditional physical desktop, you end up with three categories:

- The desktop OS
- The applications
- The user's data and settings
A virtual desktop is going to have the requirements of all three categories. One might say this is necessary for the sake of end user experience and familiarity with working in this way. However, being able to isolate these component categories will allow IT architects to have future flexibility.
The desktop OS is really nothing more than a platform upon which a user interacts with the applications and data they need to perform their specific job function. Being able to provide the user with a workspace that marries the desktop OS with the applications and with the data/settings while being able to separate them will bring a much higher level of supportability to the desktop ecosystem. Encapsulating applications using virtualization is a critical step to making this vision a reality.
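A toy model makes that separation concrete; the class and field names below are illustrative, not any product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """Toy model of a decoupled workspace -- names are illustrative only."""
    os_image: str                  # generalized, replaceable OS layer
    applications: list = field(default_factory=list)  # virtualized app packages
    user_profile: str = ""         # roaming data and settings

    def compose(self) -> str:
        apps = ", ".join(self.applications) or "none"
        return f"{self.os_image} + [{apps}] + {self.user_profile}"

ws = Workspace("win7-gold-v12",
               ["office-2010.appv", "erp-client.appv"],
               r"\\filer\profiles\jsmith")
print(ws.compose())
# Swapping os_image for a newer build touches neither the applications nor
# the profile -- the decoupling that lets the OS fade into the background.
```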
Delivering applications is one of the biggest challenges faced by IT personnel. The use of application virtualization along with user storage that isn't local to the virtual desktop will enable the virtual desktop to be a generalized OS that fades into the background. However, this idea isn't exactly working in practice as well as it does in theory. This focus on delivering applications, data, and the user state instead of the desktop allows you to uncouple the desktop and enables more flexible alternative delivery methods.
We have explored the best use cases of VDI employing existing technologies. We have also determined that VDI is certainly going to be a part of emerging service delivery models because it helps reduce support costs and drives down total cost of ownership in the long term. Nonetheless, we need to remember that desktop virtualization isn't a fit for all uses and all users. PC refresh cycles, required storage upgrades, and the number of remote and offline users all factor into where and how much of a role VDI plays in service delivery. Make sure that the investments made today in desktop virtualization have clearly defined goals and are measurable. Being able to understand the limitations of VDI and where those limitations are likely to be removed in the near future will help you understand the best means to future-proof your investment in VDI. At some point, VDI is going to transition into a service delivery technology that addresses virtually all of the current challenges.
What does this mean to your business? Plan for VDI and don't be afraid to make an investment in desktop virtualization. There are many areas in which it makes great business sense to do so, but be aware that treating desktop virtualization as the end-all, be-all of service delivery is going to lead to missed goals due to unrealistic expectations. As long as the models you develop and the reasons for the transition to VDI are understood and foremost in all decisions, VDI will be a powerful part of a complete end user service delivery solution.