Wednesday, July 20, 2011

VMware ESXi 5.0 New Features and vSphere 5.0 New Features

UPDATE: see my newer blog entry for the latest (Sept-2012) VMware ESXi 5.1 info.

I recently posted a blog about the expected VMware ESXi 5.0 new features. Since that posting just a couple of months ago, VMware has publicly released the details of the new features in VMware ESXi 5.0 and vSphere 5.0, so this is a follow-up blog covering the "official" new features in ESXi 5.x ...

New Features in VMware ESXi 5.0

Before getting into the new features in vSphere 5 / ESXi 5, I want to start with some helpful terminology, in case you are not familiar with it, since it will come up a few times in the coming paragraphs as it relates to significant new features in the VMware products: ".VIB" files. You may have seen these VIBs before when upgrading VMware (e.g., as described in my how-to upgrade VMware ESXi 4.0 to 4.1 blog).

A .vib file is a vSphere Installation Bundle (VIB) that, from what I have been able to deduce, is essentially just a Debian (Linux) package; not surprisingly, these .vib files may also be referred to as "Software Package" files. These .vib files come into play as VMware (and partners / providers) package their solutions, drivers, CIM providers, and applications designed to extend the ESXi platform in some way.

Those .vib files are going to play an important role for the new Image Builder and Auto Deploy features, as any given ESXi Image is composed of VIB files (software packages) for 1) the core hypervisor, 2) Drivers, 3) CIM providers, and 4) other plug-in components. And, .vib files can include information about relations with other .vib files (e.g., ones they depend on and/or may conflict with).
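To make that dependency/conflict idea concrete, here is a small Python sketch of the kind of validation an image-composition tool has to perform over a set of VIBs before building an image. The package names and metadata fields below are invented for illustration; this is not VMware's actual VIB format or tooling.

```python
# Hypothetical model: validate a set of VIBs (software packages) before
# composing an ESXi image. "depends" and "conflicts" mimic the kind of
# inter-package relationship info a VIB can carry.

def validate_image(vibs):
    """Return a list of problems: missing dependencies or conflicts."""
    names = {v["name"] for v in vibs}
    problems = []
    for v in vibs:
        for dep in v.get("depends", []):
            if dep not in names:
                problems.append(f"{v['name']} is missing dependency {dep}")
        for bad in v.get("conflicts", []):
            if bad in names:
                problems.append(f"{v['name']} conflicts with {bad}")
    return problems

image = [
    {"name": "esx-base", "depends": [], "conflicts": []},
    {"name": "net-driver-acme", "depends": ["esx-base"], "conflicts": []},
    {"name": "cim-provider-acme", "depends": ["net-driver-acme"], "conflicts": []},
]
print(validate_image(image))  # a consistent set -> no problems reported
```

A real depot would of course also track versions and acceptance levels, but the basic "every dependency present, no conflicts present" check is the core of it.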

ESXi 5.0 : Image Builder (NEW)

Image Builder is a new set of command-line utilities that allows administrators to create custom ESXi images that include third-party components required for specialized hardware (like drivers and CIM, or Common Information Model, providers). These utilities are used to create images suitable for different types of deployment, such as ISO-based installation, PXE-based installation (Preboot Execution Environment), and Auto Deploy (a new mechanism for provisioning ESXi hosts under vSphere 5.0).
Image Builder is designed as a PowerShell snap-in component and is bundled with PowerCLI.

Image Builder is meant to help overcome limitations that exist where the standard ESXi image (ISO), with its base providers and base drivers, does not have all the drivers or CIM providers for your specific hardware and/or does not otherwise include the vendor-specific plug-in components you require. Put another way, Image Builder will allow you to create and manage image profiles and build customized ESXi boot images (e.g., installable .ISO images or bundles ready for PXE installation). These images can then be placed in a "Depot" (a repository containing your image profiles and relevant VIBs), and these depots can exist either as a web-server depot or as a ZIP-file-encapsulated depot.

Ultimately, Image Builder will allow you to clone and modify an existing image profile, selecting the various VIBs you want to incorporate (be they drivers, ESXi VIB files, and/or OEM VIBs), and generate an ISO image (for a CD-ROM/DVD-ROM) and/or a PXE-bootable image. Whew! Got that? Well, this should at least give you an idea of what the new feature is for... and how it also goes along with the next new feature...
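The clone-and-modify workflow can be sketched in plain Python to show the idea. To be clear: the real tooling is a PowerCLI snap-in, not this code, and the profile/package names here are made up; this just models "clone a profile, add a VIB, leave the original untouched."

```python
# Conceptual model of the Image Builder clone-and-modify workflow.
# An image profile is treated as a named list of VIBs (software packages).
import copy

def clone_profile(profile, new_name):
    """Deep-copy a profile so edits to the clone never touch the original."""
    new = copy.deepcopy(profile)
    new["name"] = new_name
    return new

base = {"name": "ESXi-5.0-standard", "vibs": ["esx-base", "misc-drivers"]}
custom = clone_profile(base, "ESXi-5.0-acme")
custom["vibs"].append("net-driver-acme")  # add a hypothetical OEM driver VIB

print(base["vibs"])    # original profile is unchanged
print(custom["vibs"])  # clone carries the extra driver
```

The deep copy is the important design point: the stock profile stays pristine, and every customization lives in your own named profile, which you can then export as an ISO or PXE bundle.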

ESXi 5.0 / vSphere 5 : Auto Deploy (NEW)

Since I mentioned it in the prior paragraph, here is a bit more discussion about the new "Auto Deploy" construct. Quoted from the VMware "What's new in vSphere 5" PDF, "Auto Deploy is a new deployment and patching model for new vSphere hosts running the ESXi hypervisor. Deploy more vSphere hosts in minutes and update them more efficiently than ever before."

So what exactly does that mean?

Auto Deploy is based on PXE boot (i.e., the ability to boot an ESXi host over the network) and works with Image Builder, vCenter Server, and Host Profiles. It works like this: PXE boots the server, the ESXi image profile is loaded into host memory via the Auto Deploy server, configuration information is applied (using an Answer File / Host Profile; answer files are a new 5.0 construct), and the host is placed/connected in vCenter.
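The step where Auto Deploy decides which image and configuration a booting host receives is rule-based. A rough Python sketch of that matching (with an invented rule syntax; the real rules engine and its attribute names are VMware's, not shown here) looks like this:

```python
# Toy model of Auto Deploy's rule matching: when a host PXE-boots, the
# first rule whose pattern matches the host's attributes decides which
# image profile and host profile it gets. Rules and names are illustrative.

rules = [
    {"match": {"vendor": "Dell"}, "image": "ESXi-5.0-dell", "host_profile": "rack-a"},
    {"match": {}, "image": "ESXi-5.0-standard", "host_profile": "default"},  # catch-all
]

def assign(host):
    """Return (image_profile, host_profile) for the first matching rule."""
    for rule in rules:
        if all(host.get(k) == v for k, v in rule["match"].items()):
            return rule["image"], rule["host_profile"]
    return None

print(assign({"vendor": "Dell"}))  # vendor-specific image and profile
print(assign({"vendor": "HP"}))    # falls through to the catch-all rule
```

Order matters: the catch-all rule with an empty pattern has to come last, or it would swallow every host.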

This has substantial advantages: it requires no boot disk on each connected host, and theoretically you can use it to quickly deploy a large number of ESXi hosts over the network and share a standard ESXi image across many hosts, since the ESXi host image has effectively been "de-coupled" from the physical server. Secondarily, this implies that you could recover a (failed) host without recovering the physical hardware and/or restoring from a backup [NOTE: an assumption here is that you have another server available with matching hardware].

This new (Auto Deploy) approach has a lot to offer!  It should allow you to set up a single boot image that can be shared across hosts; and, with every reboot, each host starts up with a consistent image (keeping your servers all in sync with regard to their ESXi setup).  This will likely replace a lot of custom scripting and other home-spun solutions currently in existence.  I can see this being a HUGE advantage when dealing with massive server farms.

With all "state information" being stored off the host and in vCenter Server (rules store; host configs saved in host profiles; custom host settings saved in "answer files"), one has to wonder (well, I certainly wonder) whether this is putting "all your eggs in one basket" a bit. Also, this clearly implies one serious Enterprise-grade setup for your network and hardware (can you say, redundancy and fault-tolerance!?).  Every step of Auto Deploy (to me at least) will certainly require a rock-solid infrastructure: from 1) PXE boot, to 2) the booting host contacting the Auto Deploy server, to 3) the Auto Deploy rules engine determining what image profile, host profile (and cluster) is being dealt with, to 4) pushing the appropriate image to the host requesting it and applying the proper host profile, to 5) placing the host into a cluster... that all sounds a bit intensive to me.

Well, that is a summary of what this new "Auto Deploy" is all about... hope it at least gets your interest.  I have no way to really test this out much in my environment, as I do not have piles of servers sitting around to play with, etc.

ESXi 5.0 : Firewall (NEW)

Quoted from the VMware "What's new in ESXi 5" web page: "The ESXi 5.0 management interface is protected by a service-oriented and stateless firewall, which you can configure using the vSphere Client or at the command line with esxcli interfaces. A new firewall engine eliminates the use of iptables and rule sets define port rules for each service. For remote hosts, you can specify the IP addresses or range of IP addresses that are allowed to access each service."
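The "per-service allowed IP ranges" idea the quote describes can be modeled in a few lines of Python. The service names and networks below are examples I made up, not ESXi's actual rule set or its esxcli syntax; this just shows what a stateless, service-oriented allowlist check does.

```python
# Illustrative model of a per-service firewall allowlist: each service
# lists the networks permitted to reach it, and a connection is checked
# against its service's list. Names and ranges are examples only.
import ipaddress

allowed = {
    "sshServer": ["10.0.0.0/24"],   # management subnet only
    "ntpClient": ["0.0.0.0/0"],     # any address
}

def is_allowed(service, ip):
    """True if `ip` falls inside any network allowed for `service`."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(net)
               for net in allowed.get(service, []))

print(is_allowed("sshServer", "10.0.0.15"))    # inside the subnet
print(is_allowed("sshServer", "192.168.1.5"))  # outside -> blocked
```

"Stateless" here just means each check stands alone; there is no connection-tracking table, unlike an iptables-style stateful firewall.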

I guess this is something I should get excited about. I did always find it (a bit) concerning that ESXi had no built-in firewalling in previous versions of the product. Has it ever caused me an issue? Well, no... but, I also have another layer of hardware firewalls around my servers that has (so far) done the job just fine. But, more security is generally a good thing, and presuming it does not impact performance (which seems very unlikely), I see no reason why I would not use it.

New (Version 8) Virtual-Machine format with 3D (Windows Aero) and USB 3.0 Support : Great News for us Software Developer Types!

Since I use VMware ESXi for a lot of software-development-related tasks (including testing software under various operating systems for consistency, etc.), I was thrilled to see support for 3D graphics (though non-hardware-accelerated) for Windows Aero (this is going to be quite handy for my Windows 7 x32/x64 testing).

And, I am definitely looking forward to the USB 3.0 device support. There are some limitations with this support, but I still hope to take advantage of any increased throughput to my external USB3 devices.  Again, quoted from VMware's "What's new in ESXi 5.0" web page: "ESXi 5.0 features support for USB 3.0 devices in virtual machines with Linux guest operating systems. USB 3.0 devices attached to the client computer running the vSphere Web Client or the vSphere Client can be connected to a virtual machine and accessed within it. USB 3.0 devices connected to the ESXi host are not supported at this time."

So, the wording of that paragraph could certainly be better.  What it sounds like is that VMware was unable to implement support for USB 3.0 devices under Microsoft Windows-based guest virtual machines at this time.  And, although Linux-guest VMs can connect to a USB 3.0 device, that "connection" is essentially just a "pipe" of sorts that passes data from a local USB 3.0 connection (local to a machine running the vSphere Client) over the network to our ESXi-hosted VM.  So, to say that another way... it looks like we will be able to connect USB 3.0 devices to Linux-guest VMs only, and even then we will be passing that USB 3.0 traffic over our network.  Wow, that surely diminishes my initial excitement with this feature, but at least I can still "attach" a USB 3.0 device to an ESXi-hosted VM (not quite the same as plugging that device into my server directly, though!)

I would expect this USB 3.0 support to improve with ESXi 5.1 or a later release.  I can understand why this approach would have been taken, as it would be much simpler to implement.  The fact is, if a local machine (running the vSphere Client) has access to a USB 3.0 device, we already have the two things we need: a network connection to ESXi, and a computer and OS that supports USB 3.0 devices and can "talk" to them... so, what easier way to implement some type of support in ESXi (VMs) than to pipe data across the network to those VMs?  But, if that was the reasoning, I do not fully "get" why only Linux VMs would support this.  Maybe I am just confused.  I need to do some testing and see what I run into.

Other new virtual-hardware features:
  • 32-way virtual SMP
  • 1TB virtual machine RAM
  • UEFI virtual BIOS. Virtual machines running on ESXi 5.0 can boot from and use the Unified Extensible Firmware Interface (UEFI).
  • Smart card reader support for virtual machines (implemented *somewhat* like the USB 3.0 support, in that this refers to smart card readers attached to the client computer running the vSphere Web Client or the vSphere Client); those smart card readers can be connected to one or more virtual machines running under ESXi. And, the virtual machine remote console (in the vSphere client products) supports connecting smart card readers to multiple VMs, making them useful for smart card authentication to VMs.
  • Apple Mac OS X Server guest OS support — but, before you get TOO excited, this new vSphere 5.0 support for Apple Mac OS X Server 10.6 ("Snow Leopard") as a guest operating system is restricted to running only on certain Apple Xserve model hardware. I guess if you are "an Apple shop", this will be a neat feature.  Otherwise, it is a bit lackluster from where I sit... I would have very much liked to run OS X 10.6 or 10.7 and such inside a hosted VM so I could use it for testing websites under Safari and other things.  Oh well.

Other New and Enhanced ESXi / vSphere Features

In no particular order...
  • Swap to SSD. (I am very interested in this, as I hope to make use of my Intel SSDs that have done so well already on my desktops and existing ESXi 4.1 server.) vSphere 5.0 provides new SSD handling and optimization whereby the VMkernel automatically recognizes and tags SSD devices (local to ESXi or available on the network) and can apparently use this information to allow the ESXi swap to extend to those SSD devices, enabling memory over-commitment while minimizing the performance impact.
  • Support for SNMP v2, with full monitoring for all hardware on the host.
  • Secure Syslog (system message logging) enhancements.
  • Enhanced Unified CLI Framework, including the ability to use the esxcli framework both remotely as part of vSphere CLI and locally on the ESXi Shell (formerly Tech Support Mode).
  • Profile-Driven Storage, Storage I/O control, Network I/O Control, and Distributed Switch for additional Enterprise-Level performance and control features.
  • And, oh yeah... you may have noticed the mention (throughout this blog) of the vSphere Web Client... that too is new, and as the name implies, it is a browser-based implementation of the vSphere Client... and, it is (at this time) a limited subset of what the full vSphere Windows UI / Client contains.  I expect this will eventually evolve into the full-fledged vSphere client over time, thus making the management client available on all clients (and OS's), via modern web browsers.
The fact is, there are a LOT of new features in this latest release of ESXi 5.x and vSphere 5.x that are worth checking out.  If you need more details, I suggest going to VMware's website and checking out all these products, whitepapers, announcements, and such.


Anonymous said...

Just SUCKS, the new RAM limit per CPU. Seems silly to me.

Have you seen any improvements to this issue?

Mike Eberhart said...

Yes,... agreed. The ramifications of the new licensing policy are severe in some cases. I am following this subject regularly, and have yet to see any change. I plan to do a posting about the limit(s) to which you refer (for others that are not aware yet); hopefully I get to that soon.

I will admit that, although I very much like some of the new features in ESXi 5, I may well leave one of my primary development machines on v4.1 just because of the RAM-limits. Not sure yet.

Thanks for reading; hopefully you and I (and many others) see a more REASONABLE per-CPU RAM allotment. I hope VMware gives in to pressure. Part of me wonders if they set it this low to gauge reaction, knowing full well that they really are willing to go with double the amount (of RAM) they specified... thus, create a ruckus and then come in later with some concessions and look good for "hearing the community". Who knows. Stay posted.

metz said...

Any rumors on the RAM limit?

Mike Eberhart said...

I have been watching the VMware forums and other sites, but so far I have not seen any mention of increasing the vRAM limit (per CPU) for the free version of ESXi above the 8GB that version 5's new license imposes. So, if you have a single-CPU server, the 8GB vRAM limit (across all running VMs) applies... 2 processors gets you 16GB of vRAM available for all running VMs. This does rather suck, as it is a step backwards (in capability) for many of us looking to move from ESXi 4/4.1 up to 5. In fact, it may just stop us from moving forward, or push us into looking at something like XenServer. 8GB per socket with a max of 4 sockets just is not that great, even for a FREE version. Their "Essentials" is obviously where they are pushing (trying to push) us to.

Essentials is $495 and if I understand right, you are buying the right to a vRAM max of 24GB/processor (vs. 8GB in the Free version)... to a total of 48GB for any SINGLE SERVER (and the right to install Essentials on 3 servers... thus, theoretical max vRAM entitlement of 144GB for $495).
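To sanity-check that math (these per-socket numbers are as I understand the 5.0 licensing at the time of writing; verify against VMware's own licensing docs before buying):

```python
# vRAM entitlement arithmetic, per my reading of the vSphere 5.0 licensing.
FREE_PER_CPU_GB = 8         # free ESXi: vRAM per CPU socket
ESSENTIALS_PER_CPU_GB = 24  # Essentials: vRAM per CPU socket

print(1 * FREE_PER_CPU_GB)             # free, single-CPU server
print(2 * FREE_PER_CPU_GB)             # free, dual-CPU server
print(2 * ESSENTIALS_PER_CPU_GB)       # Essentials, dual-CPU: the 48GB per-server figure
print(3 * 2 * ESSENTIALS_PER_CPU_GB)   # Essentials across 3 dual-CPU servers: 144GB total
```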

I am torn!
I will PROBABLY give in and just buy Essentials since I have enough VMs setup that I do not want to waste any time moving to another platform. Fact is, $500 worth of *time* can be chewed up quickly. IF you have more time than money budgeted for an upgrade, perhaps move to a free alternative like Xen.

Unknown said...

Awesome post........