Update Rollup 3 for Azure Pack Websites is available

Microsoft today released Update Rollup 3 for System Center 2012 R2, and along with it rollups for Azure Pack & the Azure Pack Websites feature.  The KB article specifically for the Websites feature is here, and is mostly bugfixes.  There were a couple of fixes that caught my attention:

  • PHP 5.5 is now available to select from the portal
  • The installation bug with WebPI 5.0 where FSRM was not being installed on Workers & Publishers is now fixed
  • The temporary file quota can now be adjusted.  There is no setting for this in the portal, so I assume it will be a PowerShell change.
  • ISAPI Classic mode is now available to select in the portal (previously it was a PowerShell only feature)

I’ll be updating my systems over the next few days and will report back with any issues.  If you need to review the update instructions, they are available here.  The websites update is pretty straightforward as long as you have internet access from the controller – just update the controller using Windows Update, and it takes care of the rest.

What is the Azure Pack Websites security model?

A customer recently asked me how Azure Pack websites are isolated and how one website is prevented from interfering with another website running on the same worker.  Microsoft don’t really document this anywhere, so I had to do a bit of digging.

The first part of the answer actually comes from the screenshot I had in an earlier post about where the websites actually are.


If you open Task Manager and check the user running the w3wp process for your site, you’ll see a user corresponding to the name of the site.  However, if you look in the local user database you won’t find any users with those names.  The websites are running under “virtual accounts”, which were introduced in Windows Server 2008 R2 and are documented here.  They are part of the same family of technology as Managed Service Accounts, but probably less well known.
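You can confirm this for yourself from an elevated PowerShell prompt on a worker.  A quick sketch (assuming PowerShell 4.0 or later for the -IncludeUserName parameter):

```powershell
# List each w3wp.exe instance with the account it runs under -
# the UserName column shows names matching your site names.
Get-Process -Name w3wp -IncludeUserName |
    Select-Object Id, UserName

# The same names will NOT appear in the local user database,
# confirming they are virtual accounts rather than real users.
net user
```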
You can configure an IIS website to use a virtual account by setting the identity of its application pool to “ApplicationPoolIdentity”.  As we saw in the “where are my websites?” post, an Azure Pack website doesn’t appear in IIS Manager, but if you look in the applicationHost.config file for the website you’ll see the entry below, showing that it is using ApplicationPoolIdentity.

<add name="website1" managedRuntimeVersion="v4.0">
  <processModel identityType="ApplicationPoolIdentity" />
</add>

A plan that includes the web sites service has a set of quotas assigned to it, which can be customised for each web site mode (Intro shared, Basic shared, Reserved).  The quota limit for memory is implemented as a Win32 job object limit, which you can view using Process Explorer.  If you open the properties of one of the website w3wp.exe instances, you can see the limits on the Job page.  The memory quota appears in the Max Working Set property, and if you change an instance to a mode with a different quota you’ll see this change reflected once the process has been restarted in the new mode.


The CPU & network quotas are tracked by the “Resource Metering” service, and this information is sent to the controller, which is responsible for enforcing any quota.  The default action when a quota is exceeded is to do nothing, but if you’ve configured the plan to stop the site, the controller will take care of this.

The next level of isolation and protection is the application of quotas through FSRM on the workers.  On a worker running instances of websites you can run the PowerShell cmdlet “get-fsrmquota”, and this will show you something like the following:

Description     :
Disabled        : False
MatchesTemplate : True
Path            : C:\inetpub\temp\DWASFiles\Sites\website2
PeakUsage       : 361472
Size            : 209715200
SoftLimit       : False
Template        : DynamicWasTempQuota
Threshold       :
Usage           : 361472
PSComputerName  :

This shows that each website is limited to 200MB of temporary disk space on the worker, and that it is a hard limit (SoftLimit = False).  So a runaway or malicious process could only ever get an individual website instance to use 200MB of disk space on the worker.  As you can also see, this quota is created from a template called “DynamicWasTempQuota”, which is created by Azure Pack (you can have a look at it using get-fsrmquotatemplate).
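To see the quota for every site on a worker at once, something like the following should work – a sketch using the FileServerResourceManager cmdlets that ship with the FSRM role:

```powershell
# List the per-site temporary file quotas on a worker, with the
# limit and current usage converted to MB for readability.
Get-FsrmQuota |
    Where-Object { $_.Path -like '*\DWASFiles\Sites\*' } |
    Select-Object Path,
        @{Name='LimitMB'; Expression={[math]::Round($_.Size / 1MB)}},
        @{Name='UsageMB'; Expression={[math]::Round($_.Usage / 1MB, 2)}},
        SoftLimit, Template
```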

If you’re using a Windows file server for the web sites feature you can also explore the limits that are applied at the file server, which correspond to the “Subscription Storage Space” value in the plan quota.  Note that the Size value is in bytes – in the example below it is 2^50 bytes (1PB), which effectively means the quota is unlimited:

Description     :
Disabled        : False
MatchesTemplate : False
Path            : C:\WebSites\c9d2cf1a9d6b00b7d177\1f68e081ae3c489cbd98cdffcf86de1d
PeakUsage       : 74752
Size            : 1125899906842624
SoftLimit       : False
Template        :
Threshold       :
Usage           : 74752
PSComputerName  :

The last piece of the security and isolation provided in Windows Azure Pack websites is an IP filtering function (as documented here) that is used to prevent the service from launching a denial of service against itself.  This is implemented using a custom filter driver called RsFilter that is deployed as part of WAP on workers.  You can see the filter loaded if you run fltmc on a worker, and if you look in the registry you can see the entries for the IP filtering at HKLM:\System\CurrentControlSet\Services\RsFilter
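A quick way to check both from an elevated prompt on a worker (the exact value names under the RsFilter key may vary between WAP versions):

```powershell
# Confirm the RsFilter minifilter is loaded.
fltmc filters | findstr /i RsFilter

# Inspect the IP filtering entries WAP writes to the registry.
Get-ChildItem HKLM:\System\CurrentControlSet\Services\RsFilter |
    Get-ItemProperty
```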

Compare the block list configured in the portal:


which is then transferred to the workers’ registry:


That’s as much information as I’ve been able to find about the security and isolation model, hopefully that comes in handy for someone.

Error upgrading Azure Pack websites to Update 2

This is a recent one I struck building a test lab for Azure Pack websites.  When I upgraded the controller to update 2 of the Windows Azure Pack websites feature, I ended up with the update getting stuck in a loop and logging the following error in the Applications and Services Logs\Microsoft\Windows\WebSitesUpdate\Operational log:

Unexpected exception encountered during action CreateOrUpdateFeed Error occurred during the offlining process of the Web Sites product dependencies. WebPICmd.exe returned –1
at Microsoft.Web.Hosting.UpdateService.PowershellHelper.Execute(String powershellFilePath, Dictionary`2 arguments)
at Microsoft.Web.Hosting.UpdateService.UpdateManager.CreateOrUpdateFeed()
at Microsoft.Web.Hosting.UpdateService.UpdateManager.DoWork(Object unused)
The publisher has been disabled and its resource is not available. This usually occurs when the publisher is in the process of being uninstalled or upgraded

This was an interesting one, as the error indicated that something was going wrong when webpicmd.exe was running.  WebPICmd is the command line interface to the Web Platform Installer, so it should just be downloading and installing content.  I checked the basics – that web access was available from the controller, that DNS was resolving correctly, and that I could use webpicmd to list available downloads (webpicmd /list /listoption:all) – and everything worked fine.  So the problem wasn’t WebPI itself.

Fortunately the websites upgrade process creates some logs in “C:\Program Files\IIS\Microsoft Web Sites\Logs”, and the key log in there is “CreateOrUpdateFeedWebPI.log”.  This log helped me work out where things were going wrong.  In the log was the entry “ProductId WFastCgi2Py27 not found”.  If you compare this to the official feed at http://go.microsoft.com/?LinkId=9845550, there is definitely an entry there for WFastCgiPy27.  So it appeared that something had become out of sync between the offline feed that the update was using and the official feed.
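If you want to check the official feed for a product ID yourself, a sketch like this should do it (assuming the feed keeps its usual Atom layout with a productId element per entry; substitute whatever ID your log complains about):

```powershell
# Download the official feed and look for a given ProductId.
$feedUrl = 'http://go.microsoft.com/?LinkId=9845550'
$feed = [xml](New-Object System.Net.WebClient).DownloadString($feedUrl)

$feed.feed.entry |
    Where-Object { $_.productId -eq 'WFastCgiPy27' } |
    Select-Object productId, title
```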

The solution for this was to delete %systemdrive%\HostingOfflineFeed and let the upgrade process run again.  Make sure to back up the HostingOfflineFeed folder before deleting it, just in case.

So, in summary, to fix upgrade issues:

– Troubleshoot using the Applications and Services Logs\Microsoft\Windows\WebSitesUpdate\Operational log
– Troubleshoot using the logs in “C:\Program Files\IIS\Microsoft Web Sites\Logs” on the websites controller
– If you think the problem is with the offline feed, try deleting it and starting again.
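A minimal sketch of the backup-and-delete step, run on the websites controller:

```powershell
# Back up the offline feed, then remove it so the next update
# pass rebuilds it from the official feed.
$feed = "$env:SystemDrive\HostingOfflineFeed"
Copy-Item $feed "$feed.bak" -Recurse
Remove-Item $feed -Recurse -Force
```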

Windows Azure Pack Websites – Where are my sites?

Another quick post, this time looking at some of the insides of the Windows Azure Pack websites feature, and answering a question that I get asked occasionally – where has my website gone?

If you’re a typical IT Pro who has been working with IIS for a few years, your natural instinct when troubleshooting anything going wrong on Windows Azure Pack websites will be to dive into IIS Manager.  However, if you open it up on a worker, you’ll be in for a nasty surprise:


No running websites at all? What about Application Pools?


Nothing?  So where is my website?

The thing to understand about Windows Azure Pack websites is that the websites run as part of the Windows Process Activation Service (WAS), are created dynamically on a worker, and simply don’t appear in IIS Manager.  So your sites are there, just not managed through IIS Manager.  So how do you find them?

There are two ways that I’ve found.  The first is to open Task Manager on a particular worker and go to the Details view.  Look for a username that corresponds to your website name and you can tell whether an instance of your site is running on that worker.


The other way to find the site (and this can be slightly more useful) is to use Windows Explorer and navigate to %systemdrive%\inetpub\temp\DWASFiles\Sites.  Under here you will find directories corresponding to each website.
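The same listing from PowerShell, as a quick sketch:

```powershell
# List the per-site directories WAS creates on a worker.
Get-ChildItem "$env:SystemDrive\inetpub\temp\DWASFiles\Sites" -Directory |
    Select-Object Name, LastWriteTime
```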


The reason this can be more useful is that if you drill into that directory, you’ll see a lot more of the bits and pieces of that website – like config files (under “Config”) and a link to the website content (usually “VirtualDirectory0”) – so you can compare what you think your website should be serving against what the worker is delivering.  You can also see if strange things are happening – I had a website that was never able to display content, and we were able to observe large numbers of files being compiled in the “Temporary ASP.NET Files” directory.

You’ll also find other interesting stuff in the %systemdrive%\inetpub\temp\DWASFiles directory, the most interesting usually being the “log” directory, which lets you see the logs for the sites running on the worker.

I needed this information when I first started with Azure Pack and needed to do some troubleshooting, so hopefully someone else will find it useful.

How does gallery publishing work in Azure Pack websites?

A quick blog post, as I’ve been troubleshooting an issue with gallery publishing and there wasn’t a lot of information available out there.  My problem was that I would run through the wizard to deploy a preconfigured web application through the gallery, which would work fine, but publishing would eventually fail.  In the end I had to break out some network traces to figure out what was going on (along with the information in this page in the documentation).

So, what happens when you choose to deploy a new website using the gallery feature?

Once you’ve made the request, the first thing that happens is that an empty website is created.  This is a call from the tenant API to the websites management server.  As part of this process, a new directory is created on the file server for the content.  If you’ve got your permissions wrong in any way, this process will fail and the whole thing falls over.

The content for the website is then downloaded.  This content is downloaded by the tenant portal, and you can track what is happening by looking in C:\users\MgmtSvc-TenantSite\Appdata\Local\Temp.  The download is a zip file with a random name (you can open it and inspect the contents if you like).  If you need to track where the download is coming from, grab a copy of the XML source for the gallery feed.  The link for the gallery feed is available in the settings tab of the web sites cloud page in the Azure Pack admin portal (the default is here).  Look for the <installerURL> section of the application you’re after to find the download link.
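You can also pull the download link out programmatically.  A sketch, assuming the feed follows the usual Web App Gallery schema (installers/installer/installerFile/installerURL per entry) – the feed URL and ‘WordPress’ title are placeholders, so substitute your own feed link from the admin portal and the app you’re troubleshooting:

```powershell
# Download the gallery feed and extract the installerURL for an app.
$feedUrl = '<your gallery feed URL from the admin portal>'
$feed = [xml](New-Object System.Net.WebClient).DownloadString($feedUrl)

$feed.feed.entry |
    Where-Object { $_.title -like '*WordPress*' } |
    ForEach-Object { $_.installers.installer.installerFile.installerURL }
```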

Once the download is complete, the tenant portal makes a connection to the publisher on port 8172 (the default publishing port) to upload the content.  The publisher writes the content to the file server as it receives it.  Once this is complete the site is basically ready to use.

To ensure website gallery resources work, you’ll need to confirm the following:

  • Internet access and internet name resolution from the tenant portal
  • The tenant portal can resolve the IP address of the publisher.  Depending on your topology this might need to be an internal IP address that the tenant portal can access, in which case you’ll need to plan for that in your DNS structure.  You may end up with an internal DNS zone for the websites feature as well as a corresponding external DNS zone for tenants
  • Tenant portal can access the publisher on port 8172
  • Permissions on the file server are correct – if you’ve used the websites controller to deploy the file server you’ll be fine.  If you’re using a NAS or a file cluster, then check your permissions are correct.
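The name resolution and port checks above can be sketched from the tenant portal server like this (the publisher host name is a placeholder – substitute your own):

```powershell
# Placeholder publisher host name - use your own.
$publisher = 'publish.websites.contoso.com'

# Can the tenant portal resolve the publisher?
Resolve-DnsName $publisher

# Is the publishing port reachable?
Test-NetConnection -ComputerName $publisher -Port 8172
```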

Hopefully this information will be useful for someone out there.

Deploying Windows Azure Pack Websites–Deployment

In part one of this series, we had a quick overview of the Windows Azure Pack websites feature, and in part two we looked at getting the prerequisites in place.  In this post we’ll go through the deployment process.

Although there are a lot of moving parts in the websites feature, the deployment process is very simple.  The general process goes something like this:

  1. Deploy the controller using the Web Platform Installer
  2. Configure the controller using the website, which provisions the management (REST) server and optionally configures the file server
  3. Register the management server with the Azure Pack admin portal
  4. Deploy the remaining roles (Front Ends, Workers & Publisher) from the Azure Pack admin portal

To start with step 1, install the Web Platform Installer and then select the controller component, which is called “Windows Azure Pack: Web Sites v2”


Since I prefer to do as much as possible from the command line, to deploy the controller you can execute the following once you have WebPI installed:

cd "C:\Program Files\Microsoft\Web Platform Installer\"
.\webpicmd.exe /install /products:HostingPrimaryControllerBootstrapper_v2 /AcceptEula

Once installation is finished, Internet Explorer will start and you will be taken to the configuration page.  The first thing you’ll need to do is ignore the certificate warnings and then get on with configuration.

Start by configuring the database connection and your custom DNS suffix.  Be aware that there doesn’t seem to be a way to change the custom DNS suffix after installation, so make sure you get it right the first time.

01. Websites Config

Next, define the name of the server that is going to be the websites management server (REST API server).  Provide a username & password to configure the management server, the file server, front ends & publisher.  In the prerequisites post I mentioned that you need to set these credentials to be the same on all of these roles.  You’ll also provide the credentials used to install the worker roles.

02. Websites Config

You’ll also need to set the registration credentials for the REST API server – the default username is CloudAdmin, with whatever password you want.

02s. Websites Config

For the file server, you get options for whether you’re going to let the setup process configure the file server for you (this works if you are just going to have a single server), preconfigure a Windows file server, or use a non-Windows file server (like a NAS or your SAN).  Whichever you choose, you’ll need to provide the server name and the three sets of credentials (FileShareOwner, FileShareUser, CertificateShareUser).  The share paths will be automatically populated for you based on the server name, but change them if you need to.

03. Websites Config

03a. Websites Config

03b. Websites Config

Choose whether to join the CEIP, and whether to use Microsoft Update.  You should opt in to Microsoft Update, as updates to the websites components are delivered through Windows Update.

04. Websites Config

Finalise the setup process and, all going well, you’ll get three green ticks.

05. Websites Config

Now you have three configured components – the controller, the management server and the file server.

The next step is to head to the Azure Pack Admin portal and register the management server.  Log in to the Admin portal, and choose “Web Site Clouds” from the left hand bar.  Click the link below “Connect to a web site cloud”

01. WebSitesRegistration

Enter a meaningful display name, the name of your management server (prefixed with https://) and the CloudAdmin credentials you created during deployment.  Assuming you got everything right, you’ll get a green tick beside the password and you’ll be able to connect.

02. WebSitesRegistration

Now when you click the “Clouds” link at the top you’ll see your registered web site cloud.  All that remains is to provision the rest of the roles.

03. WebSitesRegistration

To do this, click your web site cloud from the Clouds pane and you’ll be taken into the detailed view.  You’ll be able to see the current roles provisioned and what status they are in.  Note that if you chose either the preconfigured Windows file server, or non-Windows file server options during deployment, your file servers will not show here.

04. WebSitesRegistration

To provision the roles, just click the “Add Role” button at the bottom of the screen.  You can choose to add a new worker, frontend, or publisher.

05. WebSitesRegistration

The dialog for each role is basically the same – enter the name of the server you wish to add.  The web worker role has an additional option that lets you choose what sort of worker you wish to provision – shared or reserved.

06. WebSitesRegistration

Once you click the tick, a process will be triggered on the controller that will deploy the selected role to the selected server.  Do this for each server that you are deploying.  If you want to see where things are at with the deployment of a role, you can highlight the server in the list and click the “Role Log” button.  This will bring up a log of what the controller is attempting to do.

08. Role Log

The deployment is reasonably resilient, and if it fails it will attempt to autorepair as best it can.  It will retry a number of times, and then reboot the server if things are still failing.  This does seem to be quite normal, and if the controller is able to reboot the VM it tends to be able to finish deployment.  Where you’ll run into trouble is if you haven’t enabled the required firewall rules so the controller can’t talk to the server or transfer the required files.  The deployment is agent based – the first thing that happens is that the controller pushes the source files for the Web Farm Agent to C:\windows\temp on the destination server, and then installs it.  From then it appears to do all communication via the Web Farm Agent (which is an HTTP listener on port 8173).

That’s it for deployment, in the next post we’ll briefly look at the post-deployment tasks.

Deploying Windows Azure Pack Websites–Prerequisites

In the first post of this series, we looked at the architecture of the Windows Azure Pack websites feature.  In this post we’ll look at the prerequisites for the websites feature.

The first thing you’ll need is a SQL database server for the runtime databases.  You can share this server with the SQL databases for the Azure Pack portal if you like.  As with the Azure Pack runtime, you can provision the databases on an AlwaysOn cluster, but the feature does not natively support AlwaysOn, so you must manually add the databases to an Availability Group and ensure the user accounts are created on each node correctly.  And, similarly to the Azure Pack portal, the SQL instance must allow SQL authentication, and you’ll require a SQL account that has sysadmin rights on the server.

You should also plan to provision a SQL database server to host the databases that tenants provision as part of their websites.  At the very least this should be a separate SQL Server instance on the same server that hosts the runtime databases, or, for higher security, a completely separate server.  The tenant database service does natively support AlwaysOn, although you will need to configure SQL to allow contained databases.  You might also choose to provision MySQL if you have that requirement – the process is documented on TechNet.
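Enabling contained databases on the tenant instance can be sketched from PowerShell like this (assuming the SQL Server module is available for Invoke-Sqlcmd, you have sysadmin rights, and ‘TENANTSQL’ is a placeholder for your instance name):

```powershell
# Enable contained database authentication on the tenant SQL instance.
Invoke-Sqlcmd -ServerInstance 'TENANTSQL' -Query @"
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
"@
```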

You’ll also need a number of virtual machines to host the features.  You’ll need a minimum of six VMs (controller, management server, file server, front end, worker, publisher), but you should allow for redundancy in the front ends and workers at the very least.  The great thing about the websites feature is that all the deployment is either done with the Web Platform Installer or triggered remotely, so you can start with essentially blank VMs.  Each VM can be a basically default install of Windows Server 2012 R2 (or Windows Server 2012 if you need to).  The only things you need to do to them are to enable the required firewall rules and disable UAC for remote connections.  Do this for all machines that are going to be part of the websites service (technically you don’t need to do this to the controller).

Microsoft give you the netsh & reg commands for enabling the required firewall rules, but I prefer to use the equivalent PowerShell commands:

get-netfirewallrule | where {$_.DisplayGroup -eq "File and Printer Sharing"} | enable-netfirewallrule
get-netfirewallrule | where {$_.DisplayGroup -eq "Windows Management Instrumentation (WMI)"} | enable-netfirewallrule
New-ItemProperty -path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\system -Name LocalAccountTokenFilterPolicy -PropertyType Dword -value 1

The easiest way to execute these commands across all the machines is to use PowerShell remoting if you can (invoke-command -ComputerName <list of computer names> -ScriptBlock {<commands from above>}).
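Put together, that looks something like the sketch below – the server names are placeholders, so substitute your own:

```powershell
# Placeholder server names for the websites roles - use your own.
$servers = 'wapweb-mgmt','wapweb-fs','wapweb-fe1','wapweb-wkr1','wapweb-pub'

Invoke-Command -ComputerName $servers -ScriptBlock {
    # Enable the firewall rule groups the controller needs.
    Get-NetFirewallRule |
        Where-Object { $_.DisplayGroup -in ('File and Printer Sharing',
                        'Windows Management Instrumentation (WMI)') } |
        Enable-NetFirewallRule

    # Disable UAC remote restrictions for local accounts.
    New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System `
        -Name LocalAccountTokenFilterPolicy -PropertyType Dword -Value 1 -Force
}
```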

The controller is deployed using the Web Platform Installer, but all other roles are deployed by a remote push from the controller.  To do the push you need to provide administrative credentials as part of the controller configuration.  You must use two sets of credentials: one for the workers, and another for all the other roles.  These can be local accounts with Administrator privilege or domain accounts with Administrator privilege.  If you choose the local account option, ensure the passwords for the accounts are the same on all the required boxes.

You will also need to have a password ready that you will use as part of the websites feature configuration, which creates a local account on the management server that the Azure Pack portal will use to perform its operations.  The account is by default called CloudAdmin.

If you have chosen to use a clustered file server, a NAS, or some other non-Windows server to host the file server content share, you will need to preconfigure it.  This process is reasonably complex and is documented on TechNet.  It is crucial to get this part correct – if you have permission issues on the file server, the websites feature will be non-functional.

Once you’ve got these things in place, and you’ve got your chosen DNS suffix you can begin deployment.  We’ll look at this in the next post.