Deploying Windows Azure Pack Websites–Overview

There have been plenty of blog posts about using Windows Azure Pack for Infrastructure as a Service, but not many about using the websites feature.  I’ve been working with the websites functionality since Windows Azure Services for Windows Server, and thought I’d give an overview of how this feature works.  In this blog I assume you’ve already set up the tenant & administrative portals and the associated bits and pieces that go along with that.  Part 1 (this one) will give an overview of the websites feature and its components, and some of the things you need to consider for deployment.  In part 2 we’ll have a look at the deployment process.

The Windows Azure Pack websites feature lets you run high-density, scalable, modern websites with an experience similar to Windows Azure websites. It supports .NET, PHP and Node.js, and provides integration with a number of source control systems. In addition, through the tenant portal you can also provision MS SQL or MySQL databases to use with your websites.

The WAP websites feature is made up of several components:

  • The websites controller, which provisions and manages all the other roles in the websites infrastructure. The controller is the first role to be provisioned.
  • The management server (sometimes known as the REST server), which provides a REST interface to the websites feature that the WAP portal can consume. The configuration process for the websites controller also configures the management server.
  • Web workers, the servers that actually process the web requests. They can be either shared or reserved: shared workers, as the name implies, share their resources across multiple tenants and websites, while reserved workers are dedicated to a single tenant. For redundancy you should have multiple workers.
  • Front end servers, which take web requests from clients and route them to the correct worker servers. The front end servers use a modified version of Application Request Routing to provide this functionality. There should be at least two front end servers for redundancy, and you will need to load balance them in some way (Windows NLB seems to work just fine, but you may have a hardware device you can use).
  • File servers, which provide the content store for all the website content and certificates.
  • The publisher, which allows tenants to deploy their content to their websites via FTP, Visual Studio and WebMatrix.
  • The websites runtime database, which provides a configuration store for the websites feature and runs on MS SQL Server. The runtime database needs to be contactable by all the servers in the websites infrastructure, and SQL mixed mode authentication must be used. If you use a named instance, be sure to start the SQL Browser and open UDP port 1434 (and create a program exclusion for the SQL instance process) – there’s a rough sketch of this below.
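
As a rough sketch of that last point, something like the following (run on the SQL server that hosts the runtime database) should start the SQL Browser and open up the firewall. The instance path is just a placeholder, so adjust it for your environment:

# Start the SQL Browser service and make it start automatically
Set-Service -Name SQLBrowser -StartupType Automatic
Start-Service -Name SQLBrowser

# Allow SQL Browser traffic on UDP 1434
New-NetFirewallRule -DisplayName "SQL Browser (UDP 1434)" -Direction Inbound -Protocol UDP -LocalPort 1434 -Action Allow

# Program exclusion for the named instance process (example path - adjust for your instance)
New-NetFirewallRule -DisplayName "SQL Server (websites runtime instance)" -Direction Inbound -Program "C:\Program Files\Microsoft SQL Server\MSSQL11.WEBSITES\MSSQL\Binn\sqlservr.exe" -Action Allow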

Of these components, only the front end servers & the publisher are exposed to the Internet.  All other components are purely internal.  For the front end servers, you will expose only the load balanced IP address.

When I talk about the Websites feature, I like to break it down into the management components (controller, management server, publisher, runtime DB) and the delivery components (front ends, workers, file servers).

Websites Overview

One of the key things you’ll need to do before you deploy the websites feature is choose a DNS name for your hosted websites. By default, each website you create gets a unique DNS name in the format <website name>.<your custom domain name>. For example, websites created in Windows Azure get a name like <something>.azurewebsites.net. You could use something like websites.yourdomain.com, or register a custom domain – there are no special requirements placed on this. You do need to consider how easy it will be for someone to type those URLs, so don’t choose something too complex or with too many layers.

Tenants can also map other domain names to their websites once they are provisioned (assuming your plan allows this), but the sites will always have your custom domain name as well.

You’ll then need to set up DNS records in this domain that will allow tenants to publish content, and users to access content.  The DNS records you’ll require are detailed here.
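
As an illustration, if the zone is hosted on Windows Server 2012 DNS, the records could be created with something like the following. The zone name, record names and IP addresses are purely examples – check the linked documentation for the exact set of records you need:

# Wildcard record pointing at the load balanced IP of the front end servers (example values)
Add-DnsServerResourceRecordA -ZoneName "websites.yourdomain.com" -Name "*" -IPv4Address 203.0.113.10

# Publishing endpoint pointing at the publisher server (example values)
Add-DnsServerResourceRecordA -ZoneName "websites.yourdomain.com" -Name "publish" -IPv4Address 203.0.113.11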

When deploying the websites feature, if you are using NAT to present the public facing components, you’ll also need to set up an internal DNS zone that has the same records as the external facing ones, but pointing at the internal IP addresses. If you don’t do this, you’ll find things like deploying new websites from the gallery will fail.

You’ll also need to consider certificates. You’ll need to provide a default certificate for the websites feature, which is used if an SSL session is requested and there is no corresponding SSL certificate loaded, and also for Git publishing. This certificate needs to be a wildcard certificate which includes the names *.<your custom DNS suffix> and *.scm.<your custom DNS suffix>. You’ll also need to provide a certificate for the publisher – you can reuse the default certificate if you want to, or you can provision a new certificate with the publish.<your custom DNS suffix> name explicitly in it.
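
For a lab environment (production deployments should obviously use a certificate from a proper CA), you could generate a suitable self-signed wildcard certificate with something like this – the DNS suffix is just an example:

# Self-signed wildcard certificate covering the default and SCM names (lab use only)
New-SelfSignedCertificate -DnsName "*.websites.yourdomain.com", "*.scm.websites.yourdomain.com" -CertStoreLocation Cert:\LocalMachine\My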

The other thing you’ll need to think about is redundancy. As noted above, you should build redundancy into your front end servers by load balancing them, and provision multiple worker servers. You should also plan for redundancy with your content store (the file server). You could go as far as a file server cluster, or try something simpler like using DFS to replicate the content. Another alternative to consider is a NAS or SAN that exposes a Windows SMB file share.
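
Whichever option you choose, the websites feature ultimately just needs a Windows SMB share to use as the content store. As a rough sketch (the share name, path and group are all placeholders for whatever you use in your environment):

# Create the content share on the file server (names and accounts are examples only)
New-Item -Path C:\WebSites -ItemType Directory
New-SmbShare -Name WebSites -Path C:\WebSites -FullAccess "DOMAIN\WebSitesAdmins"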

Static IPs in Windows Azure

If you’ve used Windows Azure virtual machines, you’ll be aware that virtual machines you create in virtual networks don’t get static IP addresses – they get what is effectively an infinite DHCP lease on an IP address. That infinite lease only exists as long as your cloud service is running: if you shut it down and start another virtual machine on that virtual network, it could be given that IP address. For services that run the entire time this is usually OK, but it doesn’t give any predictability, nor does it give you any control over which IP address is allocated.

A new version of the PowerShell cmdlets for Windows Azure was released earlier this year, and the changelog lists four new virtual network cmdlets:

Get-AzureStaticVNetIP
Set-AzureStaticVNetIP
Remove-AzureStaticVNetIP
Test-AzureStaticVNetIP

These cmdlets haven’t been officially announced as far as I can see, nor is their behaviour documented, but fortunately there is some PowerShell help available. Even so, it’s taken a bit of playing around to figure out exactly what you can do with them. From what I’ve managed to figure out, you have to use Set-AzureStaticVNetIP during provisioning of a new VM – it doesn’t seem to work on existing VMs, even if they are deallocated/stopped.

Basically, you just add it into your standard VM provisioning PowerShell:

New-AzureVMConfig -Name $name -InstanceSize $size -ImageName $img |
Add-AzureProvisioningConfig -Windows -AdminUserName $admin -Password $pwd |
Set-AzureSubnet -SubnetNames $subnet |
Set-AzureStaticVNetIP -IPAddress 172.16.1.10 |
New-AzureVM -ServiceName $newSvc -VNetName $vnet -AffinityGroup $ag

This will start a new VM, with the desired IP address.  Examining the virtual machine once it’s provisioned shows that the machine is still allocated its IP address via DHCP. It looks like this just creates a static reservation for the VM somewhere in the Windows Azure fabric.
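
Based on the PowerShell help, you can also check the reservation afterwards, and test whether an address is free before you use it. Something like this seems to work (using the same variables as above):

# Check which static IP (if any) is reserved for the VM
Get-AzureVM -ServiceName $newSvc -Name $name | Get-AzureStaticVNetIP

# Test whether an address in the virtual network is available
Test-AzureStaticVNetIP -VNetName $vnet -IPAddress 172.16.1.20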

Hopefully Microsoft will document this more fully in time, and maybe even bring the functionality into the Windows Azure portal.  But at least there is now a solution for fixed IP addresses in Windows Azure.  And I’m also making an assumption here that this is going to be supported long term by Microsoft (I feel pretty safe in making that assumption though – this is good functionality).

A PowerShell DSC solution for firewall rules

In my last post I started using PowerShell DSC to configure my VMs ready to run the Azure Pack websites role. My next step was to try to configure firewall rules using the Script resource. However I ran into a couple of problems, one of which I couldn’t solve – I’d love any feedback to help figure it out.

Using the Import-Module functionality didn’t work at first. However a quick tweet to Jeffrey Snover (why not go straight to the top?) helped sort that out – it’s a bug, and you need to invoke it as & "Import-Module" <module name> instead.
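
To make that concrete, here’s a rough sketch of the kind of Script resource I was trying to build, with the workaround in place. The rule name and module are just examples:

Script EnableExampleRule
{
    SetScript = {
        # Workaround for the Import-Module bug inside DSC script blocks
        & "Import-Module" NetSecurity
        Enable-NetFirewallRule -Name "FPS-NB_Session-Out-TCP"
    }
    TestScript = { (Get-NetFirewallRule -Name "FPS-NB_Session-Out-TCP").Enabled -eq "True" }
    GetScript  = { @{ Result = (Get-NetFirewallRule -Name "FPS-NB_Session-Out-TCP").Enabled } }
}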

The other problem I had was that the size of my MOF file was getting too large, which was causing errors when running Start-DscConfiguration. I haven’t managed to resolve this – one thing I had some success with was ensuring that GetScript actually did return a hash table.

So, I ended up building a custom resource to do the firewall configuration which is available here.

Some notes on using it:

1. Extract the folder to \Windows\System32\WindowsPowerShell\v1.0\Modules\PSDesiredStateConfiguration\PSProviders\

2. Use Ensure = "Present" to ensure that the rule is enabled, and Ensure = "Absent" to ensure that the rule is disabled.  I’m not sure this is strictly correct semantically, but it’s consistent with other resources.

3. Use the friendly name (FirewallRules) in your configuration script

4. The solution only enables and disables existing firewall rules.  Use the Name parameter in your script to pass the rule name in, e.g. Name = "FPS-NB_Session-Out-TCP" (see the sketch after this list).
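
Putting those notes together, a minimal sketch of a configuration using the resource might look like this (the configuration name is arbitrary, and I’m assuming just the Name and Ensure properties described above):

Configuration FirewallConfig
{
    Node localhost
    {
        # Ensure the File and Printer Sharing NB session rule is enabled
        FirewallRules FPSNbSessionOut
        {
            Ensure = "Present"
            Name   = "FPS-NB_Session-Out-TCP"
        }
    }
}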

A couple of things I learned building this:

1. Read the documentation, and then read it again.

2. Use the Write-Verbose cmdlet in your script to write debugging information to the console (and use the -Verbose switch for Start-DscConfiguration).  This is suggested in the documentation and is good advice.

3. The functions stand alone – so state or information won’t get passed between the functions automagically.  You need to check your state (or whatever) when you need it.

I’d still like to understand how and why my MOF file becomes too large in my scripted approach, because I feel like that is a better and easier way to do it.

Playing with PowerShell Desired State Configuration

One of the challenges I’ve set myself over the last nine months or so is that I try to do as much configuration work as possible with PowerShell instead of the GUI tools.  It’s a great way to save time, but also to force me to get more used to PowerShell.  One of the new features announced at TechEd North America is Desired State Configuration (DSC).  DSC allows you to specify a configuration, and then tell PowerShell to “make it so”.  I’m doing some playing around with the Windows Azure Pack in Windows Server 2012 R2 Preview, so what better time to also have a play with DSC?

From what I’ve seen so far, DSC is a declarative style of building configurations, focussing on the end state, rather than the process of getting to the end state.  This means you can define a simple configuration like:

Node localhost
{
    Package WebPI
    {
        Ensure    = "Present"
        Path      = "$Env:SystemDrive\Source\WebPlatformInstaller_amd64_en-US.msi"
        Name      = "Web Platform Installer"
        ProductId = "458707CD-9D7A-477F-B925-02242A29673B"
    }
}

This is a simple configuration that defines that a server needs to have the Web Platform Installer application installed.

The Package keyword (or "resource", to use its official name) tells DSC that we’re looking at an application that needs to be in place.  There are a number of in-box resources for DSC, documented here.  The Ensure = "Present" line defines that the software should be in place, and the Path line defines where the software source is if it needs to be installed.  The ProductId is the detection mechanism – how DSC figures out whether it needs to do anything.  If this strikes you as very similar to System Center Configuration Manager’s application model, you’re not the only one!

So what does this have to do with Azure Pack?  Well, if you’ve had a look at it you’ll see there is a bunch of pre-configuration to do on the machines you use if you’re doing a distributed install, especially for the web hosting component.  I decided to use the opportunity to learn DSC as well as Azure Pack.  What I’ve built is an initial configuration file that will enable all the appropriate operating system components, and install the Web Platform Installer so I can use it to download the Azure Pack components.  I could simply do all the configuration with WebPI, but where’s the learning opportunity in that?
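
To give a flavour of what that configuration file looks like, here’s a cut-down sketch. The WindowsFeature names below are illustrative rather than the full list the websites role actually needs, so treat it as a shape rather than a copy-and-paste:

Configuration AzurePackWebConfig
{
    Node localhost
    {
        # Example OS components - the real configuration enables a longer list
        WindowsFeature WebServer
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        WindowsFeature NetFx45
        {
            Ensure = "Present"
            Name   = "NET-Framework-45-Core"
        }

        # Install the Web Platform Installer from the local source directory
        Package WebPI
        {
            Ensure    = "Present"
            Path      = "$Env:SystemDrive\Source\WebPlatformInstaller_amd64_en-US.msi"
            Name      = "Web Platform Installer"
            ProductId = "458707CD-9D7A-477F-B925-02242A29673B"
        }
    }
}

# Generate the MOF file into .\AzurePackWebConfig
AzurePackWebConfig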

To set this up, you will need to create a directory in C:\ called Source, and then download the Web Platform Installer from here and save it into that directory.  Now you can use DSC to configure your machine.  Start by executing the PowerShell configuration file (available here):

.\AzurePackDSC.ps1

If it’s worked correctly, you will see a subdirectory created called AzurePackWebConfig with a MOF file in it.  Then you can apply the configuration:

Start-DscConfiguration -Verbose -Wait -Path .\AzurePackWebConfig

And then wait as the configuration is applied.  And it’s as easy as that.

My next challenge is to modify the DSC configuration to include the firewall configuration by using the “Script” resource.  Stay tuned for a further blog post!

What Microsoft needs to improve in System Center

In the first part of this two part series, I looked at what Microsoft are doing well in System Center.  In this second part, I’ll take a look at some of the things that need improvement.

DISCLAIMER: I worked for Microsoft for nearly five years in a technical sales role, selling System Center & Windows Server.  I was never privy to long-term strategy or roadmap when I was there, and what I am going to talk about here is all based on publicly available information, and my own thoughts.  The intent of these blog posts is to stimulate discussion, and certainly not to belittle the hard work that I know the System Center teams do.

Release cycles

Microsoft needs to up the pace of introducing new features – they simply can’t afford to stick with a two-to-four year release cycle any more.  There are still a lot of things that they need to do, and they need to do them faster.  There are positive signs with the SP1 release, which actually adds a lot of new functionality, and even though it didn’t ship simultaneously with Windows 8/Server 2012, the release timeframe is less than a year after the initial release.  If Microsoft can keep that pace up then that will work.  The risk they run, since System Center 2012 is now one “product”, is that they’ll always be waiting for the slowest component to finish developing.  I’d suggest a six-monthly “feature release” to go along with their quarterly rollup releases.

Operations Manager

I talked about the awesome APM functionality that got wrapped into Operations Manager in the first part of this series, which makes Operations Manager really useful to .NET application owners.  There were some minor changes around the fringes in the rest of the main product, but effectively Microsoft delivered the same product as last time, with the addition of an acquisition (AVIcode), a component they licensed (network monitoring – at least I believe this is licensed from EMC) and two they built from scratch (Java monitoring and basic dashboarding).  So in the three years since the last release, we’ve seen very little progress in the main product.

I still can’t believe that in what is effectively the fifth release of the product we still don’t have decent root cause analysis, we don’t have alert suppression for dependent components, we don’t have decent capacity analysis, and we’re still stuck with the broken reporting system that they’ve built.  It’s just too hard to get decent data out of the product – dashboards help, but they are limited in the widgets they support, and only support short term data.  And the resource requirements keep going up and up, while console performance seems to go down.

If Microsoft want to build a cloud management stack, they should start looking at some of the competition a bit closer.  VMware are well positioned with vCenter Operations if they can integrate the suite a bit tighter and extend its reach.  Also see the section below on acquisitions.

Configuration Manager

I’m glad the Configuration Manager team made the changes they did in the 2012 release, because prior to that they had basically delivered the same product for six or seven years (fundamentally, was SMS 2003 that different from 2007 or 2007 R2/R3?).  The pace of change in Configuration Manager is glacial, and when you look at what 2012 ships that is fundamentally different, it’s basically the admin UI (a great improvement) and the AppModel (also a great addition).  That’s it.  They couldn’t even manage to ship the cross platform support until SP1.  And how is it possible that after seventeen years of shipping the product, Microsoft still don’t have a native asset management solution?

I’m glad to see them tying in to Intune a bit more, as this might let them introduce new functionality faster.  It’s lucky for them that the licensing terms for customers in Enterprise Agreements make it an easy choice, and also lucky for them that their traditional competitor Altiris has been ruined by Symantec.

Client Licensing

Microsoft’s strategy with client licensing appears to be to win with Configuration Manager, and lose everything else.  By tying products like Service Manager & Orchestrator to either the Enterprise CAL suite or the System Center Client Management Suite, they are basically dooming the additional components of System Center to obscurity on the desktop.  The area where they could, and should, be competing is with Service Manager, but the licensing model is the complete opposite of just about every other service management vendor out there: Microsoft license per managed device, not per analyst logon.  I’ve worked on a few deals where the price to license on this basis was double or triple the cost of the nearest (and more capable) competitor.  And then we would have had to add a third party solution for asset management, which the competitor did natively.  If Microsoft really want those products to succeed, they need to fix client licensing.  I’d fix it by just moving the entire System Center suite into the Core CAL.

Acquisitions

Microsoft needs to get their cheque book out and start making some acquisitions to fill the gaps in the product suite.  I’d start with Provance to fill their asset management gap, and probably pick up one of the analytics vendors like Bay Dynamics to help with some of the reporting and analysis.

There’s also the problem of missed acquisitions – Microsoft really should have picked up vKernel before Quest got them (and if missing that was a problem, maybe they should have picked up Quest before Dell got them, and gained a complete VDI stack with third party clients as well).  They should also never have let Odyssey Software go to Symantec – why would you let your competitor pick up an important product add-on?

Microsoft has too much of a “we’ll build it ourselves” focus, and they need to get out of this way of thinking.  They can be much faster if they acquire some new capability rather than having to build it.

Multi-tenancy

Everyone else in the industry seems to see the trend of moving a lot of IT operations to managed service providers (and you would think Microsoft would get this, with all their cloud messaging), but Microsoft persist in building solutions that aren’t multi-tenantable, and that are designed to run pretty much solely on a customer’s premises.  The only concession they seem to have made is the Service Provider Foundation feature in Orchestrator SP1.  If Microsoft want to truly be a cloud tools provider, they need multi-tenancy.

Console Performance

I’m sure I’m not the only one who has noticed that the console performance of just about all the System Center products is not the best.  The exceptions would be Orchestrator (light and fast) and, surprisingly, Configuration Manager.  Everything else is like wading through treacle.  If you want IT pros to enjoy using your products, make sure the consoles are light, fast and snappy.

Private Cloud

The private cloud story from Microsoft is good, but when you look at some of the components it’s also frustrating.  For instance, why should I implement an additional service management solution to deal with self service when I already have a service management solution in place?  Why do I need Operations Manager in place to discover information about my fabric?  Surely VMM already knows all that?  Sometimes it feels like the private cloud stack has too much complexity, or complexity for complexity’s sake.  And the marketing message needs some work – what does “Foundation for the future” mean?  If I’m building private cloud, who cares about “Cloud on your terms”?  I’ve talked about the marketing feature that is “Cross Platform from the metal up” in my Marketing Features post.

Cross Platform

I mentioned cross platform in the first part of the series.  What I would really like Microsoft to do is commit publicly to dates when they are going to support the newest versions of the products they manage, be it vSphere, RedHat, HP-UX, whatever.  Rather than leave customers guessing, the date should be made explicit to give some predictability – customers like that.

They also need to ensure that the cross platform story is consistent.  If I can manage an OS with Operations Manager, I should probably be able to manage it with Configuration Manager, and also deploy it with Virtual Machine Manager.  If an OS is supported on Hyper-V, it should be supported in System Center, and vice versa.

Conclusions

As I mentioned in part one, there’s a lot to like in System Center, but there are also areas where there are challenges.  I believe Microsoft have the resources & talent to fix the areas where there are challenges, the question is do they see the customer value in doing so?

What Microsoft are doing right with System Center 2012

This is post one of a two post series.  In this post I’ll look at what Microsoft are doing well in the System Center 2012 product set, and in part two I’ll take a look at what needs to be improved.  I’ll try to take into account some of the improvements that are coming in the SP1 release, which is due early 2013.  Some of the products will appear in both categories, as they have both good and bad aspects.

DISCLAIMER: I worked for Microsoft for nearly five years in a technical sales role, selling System Center & Windows Server.  I was never privy to long-term strategy or roadmap when I was there, and what I am going to talk about here is all based on publicly available information, and my own thoughts.  The intent of these blog posts is to stimulate discussion, and certainly not to belittle the hard work that I know the System Center teams do.

Acquisition Integration

Microsoft have made relatively few acquisitions in the System Center space (for more on this, see part two), with Opalis in 2009 and AVIcode in 2010 being the only ones I can think of.  What they have done really successfully is integrate those products well into the family.  Some organisations seem to struggle a bit getting acquisitions integrated (VMware seem to have challenges in this area, but that may be to do with how many things they’ve acquired), but Microsoft seem to have a good model here.

The AVICode acquisition in 2010 was a nice move by Microsoft to take the monitoring of .NET applications deeper and get right inside the application.  This has been well integrated into the Operations Manager component of System Center as the Application Performance Monitoring (APM) feature.  The value that APM adds for anyone who is building and deploying .NET web applications is massive, and is worth deploying System Center to get.  The additional capability that SP1 brings to the APM suite (including Windows Services, and SharePoint) extends that value further.  If you’re running .NET applications and you’re not using System Center 2012 Application Performance Monitoring, you should be.

The Opalis acquisition in 2009 was probably the biggest surprise by Microsoft, and added real value to the System Center suite.  The release of System Center 2012 Orchestrator was the first full Microsoft release of the Opalis IP, and it was done well.  Orchestrator works very well, is very stable, and is well integrated into System Center and into other products as well (see Cross Platform below).  The challenge Microsoft has in this space is to inspire its OEMs and partners to increase the pace of integration pack releases for Orchestrator.  For instance, the Cisco UCS integration pack is still at version 0.1, and we are now eight months past the release of System Center 2012.  Also, a bit more clarity around using the SDK to build code based integration packs would be helpful.
The great thing about Orchestrator is that for a person who is familiar with the Microsoft toolset it’s probably a much quicker learning curve than tools like Chef or Puppet.  They could probably learn a thing or two about building a community from those tools though.

Integration

In addition to integrating the acquisitions, Microsoft has clearly also focussed on making sure the System Center stack is more tightly integrated.  When you look at the disparate pieces that existed before the System Center 2012 release, you can see how much work they’ve done.  There is still some work to do here, but you can see the fruits of this in Orchestrator & Service Manager.  I’m looking forward to seeing where this goes in future releases.

Configuration Manager & Windows Intune integration

The Service Pack 1 release of Configuration Manager adds some integration with the Windows Intune cloud management platform.  This is a smart move, as the Configuration Manager team are just not fast enough at delivering support for new operating systems (or new features in general), whereas the Intune team can be.  The Intune team are already shipping support for delivering software to Android & iOS, but do not yet have a complete MAM/MDM strategy.  However, if you’re a customer who is currently using System Center Configuration Manager, this integration will probably be a really welcome addition.

Cross Platform

There’s good and bad to talk about in cross-platform.  What’s good is that components like Operations Manager lead the way for the Microsoft stack, and have for quite a while.  Their cross platform story has had several years to mature, and their support for multiple operating systems gets better with each release, and extending out into non-OS focussed areas like Java in the 2012 release.  It’s nice to see Microsoft start to extend the cross platform support into Configuration Manager (even if they are 10 years behind the competition, and more limited in their range).  It’s also great to see Microsoft continuing to ship cross platform integration packs for Orchestrator, or even Virtual Machine Manager adding support for other hypervisors (even though I’ve written about this before).  As I said in the Marketing Features post, you have to be serious about cross platform – and in general Microsoft are demonstrating that they are.

Scripting & Automation

One thing Microsoft have clearly had a mandate about is ensuring that the tools are not only useful, but also usable at the command line.  It’s great to see PowerShell across just about the entire System Center family (Configuration Manager & Orchestrator are missing, but it appears SP1 remedies at least Configuration Manager).  This means that driving the tools on an automated basis is much easier, and allows for greater scale and hands-off management.  And having lots of pre-built integration packs for Orchestrator helps.

Service Based Thinking

Microsoft introduced a service based model in Operations Manager, and has now extended it through to the deployment model in Virtual Machine Manager.  I think people are still getting their heads around what it means to really deploy at a service level – management is kind of there already – but deployment seems to be stuck at “we need this many VMs, and then we’ll stack our app and its various components on them”.  Cloud based applications running on things like Azure or CloudFoundry are kind of the model here.  Microsoft have enabled internal organisations to start to use this way of thinking; the question is how long will it take to adopt it?

Server licensing

System Center & Windows Server have basically led the way for virtualisation friendly licensing models (in contrast to their competition, who have had a few stumbles in this area).  It’s really a shame some of the other teams have chosen to make such crazy licensing decisions (seriously, SQL – you don’t compete with Oracle by making your licensing as stupid as theirs).

Overall there’s a lot to like in System Center.  What I’ve covered is some of the bits that I think Microsoft are doing really well outside of the overall product functionality.  In part two I’ll take a look at some of the places they need to improve.

Marketing Features

A month or so back I made a couple of comments on twitter about building what I called “marketing features”.  I thought I’d try to clarify my thoughts a bit further.

My definition of a marketing feature is a feature that looks really great on a marketing checklist, but has limited customer use, either because the feature is actually useless, or because its implementation makes it so.

Multi-hypervisor management is a great example of this – see how easy it is to make a comparison between two different product stacks:

                              Microsoft    VMware
Multi-hypervisor management   Yes          No

Wow, doesn’t that look bad?  Of course, multi-hypervisor management could actually be a really useful piece of technology, and the reason I call it out for both VMware and Microsoft is the limited use case.  Microsoft are particularly guilty of this in System Center 2012 Virtual Machine Manager, and in their case it’s because the implementation makes it so.

Microsoft made System Center 2012 generally available in April 2012, with support only for managing vSphere 4.0 & 4.1.  VMware made vSphere 5.0 generally available in August 2011, eight months earlier.
Microsoft are still yet to make System Center 2012 SP1 generally available with support for vSphere 5.1 (but curiously not 5.0).  SP1 is expected to be released to the public next month, which is seventeen months after the GA of vSphere 5.0 – and it still can’t manage that platform!

From a technology use case, any customer who wanted to do cross-hypervisor management would have either been locked to a maximum of vSphere 4.1 for almost a year and a half, or had to choose a different tool.

VMware don’t get a free ride here either.  Recently they released vCenter Multi-Hypervisor Manager, but only included support for Hyper-V in Windows Server 2008 R2, not the much more capable Windows Server 2012 release, and likewise have made no public commitment to when they will support it.  And talk about a missed marketing opportunity – what a great story “VMware can manage Windows Server 2012 before Microsoft can” would have been…

In my opinion, if you’re going to do cross-platform support, you have to be serious about doing it.  Serious about doing it would mean either:

  • Releasing support for the latest versions of managed products when you release your product, or
  • Having a public commitment to support the latest version of the managed product in a short timeframe (60-90 days).

If you’re not going to do that, then you’re pretty much building a marketing feature.  Drop the feature and invest that money building a feature you can support.  If your product isn’t architected to allow you to make this happen, then either fix it, or drop the feature.  Your customers aren’t served well by you building marketing features, they are served by you building features that they can use.

Edit: VMware have just posted a blog talking about complexity when managing vSphere via Virtual Machine Manager.  They have a point – if you’re doing full time management of your vSphere environment through VMM you’ll be missing out, since Microsoft can never keep up with the vCenter features.
However in some ways that misses the point (probably quite deliberately, I think).  The key is the text that is highlighted in the second link: “With System Center 2012, you can more easily and efficiently manage your applications and services across multiple hypervisors”.  Note that it talks about managing applications and services, not about managing multiple hypervisors, and that is important.  Really it’s about the overlay capability that you might get with System Center 2012 across the top of vSphere – things like Self Service, Service-based Deployment, Server App-V and Private Cloud.