Wednesday, April 8, 2009

VMware and How It Affects Storage

Note: Because the many guest OSes on a VMware server share one or a few HBAs, the I/O becomes small-block random I/O, so the requirement for IOPS is higher than the requirement for bandwidth.


"VMware Changes Everything"


That's a lovely marketing phrase, but when it comes to storage, it both does and it doesn't. What you really need to understand is how VMware can affect your storage environment, as well as the effect that storage has on your VMware environment. Once you do, you'll realize that it's really just a slightly different take on what storage administrators have always battled. First, some background.

Some Server Virtualization Facts

  1. The trend of server virtualization is well under way, and it's moving rapidly from test/dev environments into production. Some organizations are implementing it very aggressively. For example, I know one company whose basic philosophy is: "it goes in a VM unless it can absolutely be proven it won't work, and even then we will try it there first."
  2. While a lot of people think that server consolidation is the primary motivating factor behind the VMware trend, I have found that many companies are also driven by disaster recovery, since replicating VMs is so much easier than building duplicate servers at a DR site.
  3. About 85% of all virtual environments are connected to a SAN, down from nearly 100% a short time ago. Why? Because NFS is making a lot of headway, and that makes sense: it's easier to address some of the VMware storage challenges with NFS than with traditional Fibre Channel LUNs.
  4. VMware changes the way servers talk to storage. For example, it forces the use of more advanced file systems like VMFS. VMFS is essentially a clustered file system, which is needed to support some of the more attractive/advanced things you want to do with VMware, like VMotion.

Storage Challenges in a VMware Environment

  1. Application performance is dependent on storage performance. This isn't news to most storage administrators. What's different is that VMware combines a number of different workloads all talking through the same HBA(s), so the workload as seen by the storage array becomes a highly random, usually small-block I/O workload. These workloads are far more sensitive to latency than they are demanding of bandwidth. The storage design in a VMware environment therefore needs to provide for this type of workload across multiple servers. Again, something that storage administrators have done in the past for Exchange servers, for example, but on a much larger scale.
  2. End-to-end visibility from VM to physical disk is very difficult for storage admins to obtain with current SRM software tools. These tools were typically designed on the assumption that there was a one-to-one correspondence between a server and the application that ran on it. Obviously this isn't the case with VMware, so reporting for things like chargeback becomes a challenge. This also affects troubleshooting and change management, since the clear lines of demarcation between server administration and storage administration are now blurred by things like VMFS, VMotion, etc.
  3. Storage utilization can decrease significantly. This is due to a couple of factors. First, VMware requires extra storage overhead to hold VM memory/swap files and the like so that it can perform operations such as VMotion. Second, VMware admins tend to request very large LUNs to hold their VMFS file systems and to keep a pool of storage for rapidly deploying new VMs, which means a large pool of unused storage sits on the VMware servers waiting to be allocated. Finally, there is a ton of redundancy in the VMs: think about how many copies of Windows are sitting around in all those VMs. That isn't new, but VMware certainly exposes it as an issue.
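The aggregate small-block workload in item 1 can be sized with simple arithmetic. Below is a minimal sketch in Python; every figure in it (per-VM IOPS, read ratio, RAID-5 write penalty, per-spindle IOPS) is an illustrative assumption, not a measurement from any real environment:

```python
# Rough IOPS sizing for a consolidated VMware workload.
# All numbers are illustrative assumptions.

def backend_iops(front_end_iops, read_ratio, raid_write_penalty):
    """Translate host-visible IOPS into back-end disk IOPS."""
    reads = front_end_iops * read_ratio
    writes = front_end_iops * (1 - read_ratio)
    return reads + writes * raid_write_penalty

vms = 20                 # VMs sharing the same HBAs / array (assumed)
iops_per_vm = 75         # small-block random IOPS per VM (assumed)
read_ratio = 0.7         # 70% reads (assumed)
raid5_penalty = 4        # RAID-5: each host write costs ~4 disk I/Os

total_front_end = vms * iops_per_vm
total_back_end = backend_iops(total_front_end, read_ratio, raid5_penalty)

disk_iops = 180          # rough rule of thumb for one 15k RPM spindle
spindles = -(-total_back_end // disk_iops)   # ceiling division

print(f"front-end IOPS: {total_front_end}")          # 1500
print(f"back-end IOPS (RAID-5): {total_back_end:.0f}")  # 2850
print(f"spindles needed: {spindles:.0f}")            # 16
```

The point of the exercise: twenty modest VMs already demand more spindles than their capacity alone would suggest, which is why latency and IOPS, not gigabytes, drive the design.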

Some Solutions to these Challenges

As I see it, there are three technical solutions to the challenges posed above.

  1. Advanced storage virtualization - Thin provisioning helps with the issue of empty storage pools on the VMware servers. Block storage virtualization provides the flexibility to move VMware's underlying storage around to address issues of performance, storage array end of lease, etc. Data deduplication reduces the redundancy inherent in the environment.
  2. Cross-domain management tools - Tools that can view storage all the way from the VM to the physical disk and correlate issues among the VM, server, network, SAN, and storage array are beginning to come onto the market, and they will be a necessary part of any successful large VMware rollout.
  3. Virtual HBAs - These are beginning to make their way onto the market and will help existing tools work in a VMware environment.
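To make the thin-provisioning and deduplication arguments concrete, here is a back-of-the-envelope sketch in Python. Every figure (VM count, image sizes, LUN sizes) is an assumed, illustrative value, not data from a real deployment:

```python
# Back-of-the-envelope math for the utilization problem and two of
# the fixes above. Every number is an illustrative assumption.

vms = 30
os_image_gb = 12         # near-identical Windows system files per VM (assumed)
unique_data_gb = 25      # data unique to each VM (assumed)
datastore_gb = 2048      # each large LUN handed to the VMware team (assumed)
datastores = 2

consumed = vms * (os_image_gb + unique_data_gb)   # space VMs actually use
provisioned = datastore_gb * datastores           # fat-provisioned pool

# Fat provisioning: the empty pool still occupies array capacity.
fat_util = consumed / provisioned

# Thin provisioning only backs written blocks, so the idle pool is free.
thin_backed = consumed

# Ideal dedup additionally keeps a single copy of the shared OS image.
deduped = os_image_gb + vms * unique_data_gb

print(f"fat utilization: {fat_util:.0%}")                      # ~27%
print(f"thin-provisioned backing: {thin_backed} GB")           # 1110 GB
print(f"after dedup: {deduped} GB")                            # 762 GB
```

Even with generous assumptions, fat provisioning leaves roughly three-quarters of the array idle, and deduplicating the shared OS images cuts the backed data by another third. That is the quantitative case behind solution 1.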

Conclusion

Organizations need to realize that added complexity brings added management challenges, and that cross-domain teams encompassing VMware admins, network admins, and SAN/storage admins will be necessary for any large VMware rollout to succeed. However, the promise of server virtualization to reduce hardware costs and simplify disaster recovery is just too attractive for many companies to ignore, and the move to server virtualization over the last year shows that a lot of folks are being drawn in. Unfortunately, unless they understand some of the challenges I outlined above, they may be in for some tough times and learn these lessons the hard way.

--joerg

Reposted from: http://joergsstorageblog.blogspot.com/2008/06/vmware-and-how-it-effects-storage.html
