What is the Azure Pack Websites security model?

A customer recently asked me how Azure Pack websites are isolated and how one website is prevented from interfering with another website running on the same worker.  Microsoft don’t really document this anywhere, so I had to do a bit of digging.

The first part of the answer comes from the screenshot in an earlier post about where the websites actually are.


When you open Task Manager and look at the user running the w3wp process for your site, you’ll see a user corresponding to the name of the site.  However, if you look in the local user database you won’t find any users with those names.  The websites are running using “virtual accounts”, which were introduced in Windows Server 2008 R2 and are documented here.  They are part of the same family of technology as Managed Service Accounts, but are probably less well known.
You can configure an IIS website to use a virtual account by setting the identity of the application pool to “ApplicationPoolIdentity”.  As we saw in the “where are my websites?” post, you don’t get to see an Azure Pack website in IIS Manager, but if you look in the applicationHost.config file for the website you’ll see the entry below, showing that the Azure Pack website is using ApplicationPoolIdentity.

<add name="website1" managedRuntimeVersion="v4.0">
  <processModel identityType="ApplicationPoolIdentity" />
</add>
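
If you’d rather not open the file by hand, a quick way to check is to search the per-site config on a worker from PowerShell.  This is just a sketch: the path follows the DWASFiles layout I cover in the “where are my websites?” post, the exact config file name may differ on your build, and “website1” is an example site name.

# Sketch: confirm which identity type a site's application pool is using on a worker
$site = "website1"
$config = "$env:SystemDrive\inetpub\temp\DWASFiles\Sites\$site\Config\applicationhost.config"
Select-String -Path $config -Pattern "identityType"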

A plan that includes the web sites service has a set of quotas assigned to it, which can be customised for each web site mode (Intro shared, Basic shared, Reserved).  The quota limit for memory is implemented as a Win32 job object limit, which you can view using Process Explorer.  If you open the properties of one of the website w3wp.exe instances, you can see the limits on the Job page.  The quota limit for memory shows up in the Max Working Set property, and if you change an instance to a mode with a different quota you’ll see the change reflected once the process has been restarted in the new mode.


The CPU & network quotas are tracked by the “Resource Metering” service, and this information is sent to the controller, which is what actually enforces any quota.  The default action when a quota is exceeded is to do nothing, but if you’ve chosen to stop the site instead, the controller will take care of this.

The next level of isolation and protection is the application of quotas through FSRM on the workers.  On a worker running instances of websites you can run the PowerShell cmdlet “Get-FsrmQuota”, and this will show you something like the output below:

Description     :
Disabled        : False
MatchesTemplate : True
Path            : C:\inetpub\temp\DWASFiles\Sites\website2
PeakUsage       : 361472
Size            : 209715200
SoftLimit       : False
Template        : DynamicWasTempQuota
Threshold       :
Usage           : 361472
PSComputerName  :

This shows that each website is limited to 200MB of disk space on the worker, and that this is a hard limit (SoftLimit = False).  So a runaway or malicious process on a worker could only ever get an individual website instance to use 200MB of disk space.  As you can also see, the quota comes from a template called “DynamicWasTempQuota”, which Azure Pack creates (you can have a look at it using Get-FsrmQuotaTemplate).
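
If you want to see all of the per-site quotas on a worker in one hit, a short sketch using the same FSRM cmdlets looks like this (the property names come straight from the output above):

# Sketch: list the per-site temp quotas on a worker, with the size converted to MB
Get-FsrmQuota | Where-Object { $_.Path -like "*\DWASFiles\Sites\*" } |
    Select-Object Path, @{Name="SizeMB"; Expression={$_.Size / 1MB}}, SoftLimit, Template

# Inspect the template Azure Pack created
Get-FsrmQuotaTemplate -Name "DynamicWasTempQuota"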

If you’re using a Windows file server for the web sites feature you can also explore the limits that are applied at the file server, which correspond to the “Subscription Storage Space” value in the plan quota (the Size value below is in bytes, and SoftLimit = False again marks it as a hard limit):

Description     :
Disabled        : False
MatchesTemplate : False
Path            : C:\WebSites\c9d2cf1a9d6b00b7d177\1f68e081ae3c489cbd98cdffcf86de1d
PeakUsage       : 74752
Size            : 1125899906842624
SoftLimit       : False
Template        :
Threshold       :
Usage           : 74752
PSComputerName  :
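
You can pull these out with the same cmdlet on the file server.  A sketch (C:\WebSites is the content root shown in the output above and may be different in your deployment):

# Sketch: show the per-subscription storage quotas on the file server, converted to GB
Get-FsrmQuota | Where-Object { $_.Path -like "C:\WebSites\*" } |
    Select-Object Path, @{Name="SizeGB"; Expression={[math]::Round($_.Size / 1GB, 2)}}, SoftLimit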

The last piece of the security and isolation provided in Windows Azure Pack websites is an IP filtering function (as documented here) that is used to prevent the service from launching a denial of service against itself.  This is implemented using a custom filter driver called RsFilter that is deployed to the workers as part of WAP.  You can see the filter loaded if you run fltmc on a worker, and if you look in the registry you can see the entries for the IP filtering at HKLM:\System\CurrentControlSet\Services\RsFilter.
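
A quick way to check both of these on a worker (a sketch only; the layout of the values under the RsFilter key isn’t documented, so just dump whatever is there):

# Sketch: confirm the RsFilter minifilter is loaded, then dump its registry configuration
fltmc filters | findstr /i RsFilter
Get-ItemProperty -Path "HKLM:\System\CurrentControlSet\Services\RsFilter"
Get-ChildItem -Path "HKLM:\System\CurrentControlSet\Services\RsFilter" -Recurse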

Compare the block list configured in the portal:


with the same list that is then transferred to the registry on the workers:


That’s as much information as I’ve been able to find about the security and isolation model; hopefully it comes in handy for someone.


Windows Azure Pack Websites – Where are my sites?

Another quick post, this time to look at some of the insides of the Windows Azure Pack websites feature, and to answer a question that I get asked occasionally – where has my website gone?

If you’re a typical IT Pro and have been working with IIS for a few years, your natural instinct when troubleshooting anything going wrong on Windows Azure Pack websites will be to dive into IIS Manager.  However, if you open it up on a worker, you’ll be in for a nasty surprise:


No running websites at all? What about Application Pools?


Nothing?  So where is my website?

The thing to understand about Windows Azure Pack websites is that the websites run as part of the Windows Process Activation Service (WAS), are created dynamically on a worker, and simply don’t appear in IIS Manager.  So your sites are there, just not visible through IIS.  So how do you find them?

There are two ways that I’ve found.  The first is to look at the running processes on a particular worker (Task Manager’s Details view works fine).  Look for a user name that corresponds to your website name and you can tell whether an instance of your site is running on that worker.
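
If you’d rather do this from PowerShell, a minimal sketch (the -IncludeUserName switch needs PowerShell 4.0 or later and an elevated prompt):

# Sketch: list the w3wp processes on a worker with the account each one runs as
Get-Process -Name w3wp -IncludeUserName | Select-Object Id, UserName, WorkingSet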


The other way to find the site (and this can be slightly more useful) is to use Windows Explorer and navigate to <system drive>:\inetpub\temp\DWASFiles\Sites.  Under here you will find directories corresponding to each website.
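
Or, as a one-line sketch from PowerShell:

# Sketch: list the per-site directories DWAS has created on this worker
Get-ChildItem -Path "$env:SystemDrive\inetpub\temp\DWASFiles\Sites" -Directory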


The reason this can be more useful is that if you drill into that directory, you’ll see a lot more of the bits and pieces of that website – like config files (under “Config”) and a link to the website content (usually “VirtualDirectory0”) – so you can compare what you think your website should be serving against what the worker is delivering.  You can also see if strange things are happening – I had a website that was never able to display content, and we were able to observe large numbers of files being compiled in the “Temporary ASP.NET files” directory.

You’ll also find other interesting stuff in the <system drive>:\inetpub\temp\DWASFiles directory, the most interesting usually being the “log” directory, which lets you see the logs for the sites running on the worker.

I needed this information when I first started with Azure Pack and needed to do some troubleshooting, so hopefully someone else will find it useful.

How does gallery publishing work in Azure Pack websites?

A quick blog post, as I’ve been troubleshooting an issue with gallery publishing and there wasn’t a lot of information available out there.  My problem was that I would run through the wizard to deploy a preconfigured web application through the gallery, and the wizard itself would complete fine, but publishing the content would eventually fail.  I had to break out some network traces (along with the information in this page in the documentation) to figure out what was going on in the end.

So, what happens when you choose to deploy a new website using the gallery feature?

Once you’ve made the request, the first thing that happens is that an empty website is created.  This is a call from the tenant API to the websites management server.  As part of this process, a new directory is created on the file server for the content.  If you’ve got your permissions wrong in any way, this process will fail and the whole thing falls over.

The content for the website is then downloaded.  This content is downloaded by the tenant portal, and you can track what is happening by looking in C:\Users\MgmtSvc-TenantSite\AppData\Local\Temp.  The download is a zip file with a random name (you can open it and inspect the contents if you like).  If you need to track where the download is coming from, grab a copy of the XML source for the gallery feed.  The link for the gallery feed is available in the settings tab of the web sites cloud page in the Azure Pack admin portal (the default is here).  Look for the <installerURL> element for the application you’re interested in to find the download link.
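
If you want to pull the download links out of the feed quickly, something like the sketch below works ($feedUrl stands in for whatever your web sites cloud settings tab shows as the gallery feed):

# Sketch: list every <installerURL> entry in the gallery feed
$feedUrl = "<your gallery feed URL>"
(Invoke-WebRequest -Uri $feedUrl -UseBasicParsing).Content |
    Select-String -Pattern "<installerURL>[^<]+</installerURL>" -AllMatches |
    ForEach-Object { $_.Matches.Value }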

Once the download is complete, the tenant portal makes a connection to the publisher on port 8172 (the default publishing port) to upload the content.  The publisher writes the content to the file server as it receives it.  Once this is complete the site is basically ready to use.

To make sure website gallery resources work, the things you’ll need to confirm are:

  • Internet access and internet name resolution from the tenant portal
  • Tenant portal can resolve the IP address for the publisher.  Depending on your topology this might need to be an internal IP address that the tenant portal can access, in which case you’ll need to plan for that in your DNS structure.  You may end up with an internal DNS zone for the websites feature to use as well as a corresponding external DNS zone for tenants
  • Tenant portal can access the publisher on port 8172 (a quick connectivity check is sketched after this list)
  • Permissions on the file server are correct – if you’ve used the websites controller to deploy the file server you’ll be fine.  If you’re using a NAS or a file cluster, double-check the permissions.
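
For the connectivity items, a couple of quick checks run from the tenant portal server (a sketch; publish.websites.contoso.com stands in for your own publisher name, and Test-NetConnection needs Server 2012 R2 or later):

# Sketch: confirm the tenant portal can resolve the publisher name and reach port 8172
Resolve-DnsName -Name "publish.websites.contoso.com"
Test-NetConnection -ComputerName "publish.websites.contoso.com" -Port 8172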

Hopefully this information will be useful for someone out there.

Deploying Windows Azure Pack Websites–Overview

There have been plenty of blog posts about using Windows Azure Pack for Infrastructure as a Service, but not many about using the websites feature.  I’ve been working with the websites functionality since Windows Azure Services for Windows Server, and thought I’d give an overview of how this feature works.  In this blog I assume you’ve already set up the tenant & administrative portals and the associated bits and pieces that go along with that.  Part 1 (this one) will give an overview of the websites feature and its components, and some of the things you need to consider for deployment.  In part 2 we’ll have a look at the deployment process.

The Windows Azure Pack websites feature lets you run high-density, scalable, modern websites with an experience that is similar to Windows Azure websites.  It supports .NET, PHP & Node.js, and provides integration with a number of source control systems.  In addition, through the tenant portal you can also provision MS SQL or MySQL databases to use with your websites.

The WAP websites feature is made up of several components:

  • The websites controller, which provisions and manages all the other roles in the websites infrastructure.  The controller is the first role to be provisioned.
  • The management server (sometimes known as the REST server) provides a REST interface to the websites feature that the WAP portal can consume.  The configuration process of the websites controller also configures the management server.
  • Web workers are servers that actually process the web requests, and can be either shared or reserved.  Shared workers, as the name implies, share their resources across multiple tenants and websites, while reserved workers are dedicated to a single tenant.  For redundancy you should have multiple workers.
  • Front end servers, which take web requests from clients and route them to the correct worker servers.  The front end servers use a modified version of Application Request Routing to provide this functionality.  There should be at least two front end servers for redundancy, and you will need to load balance them in some way (Windows NLB seems to work just fine, but you may have a hardware device you can use).
  • File servers provide the content store for all the web site content & certificates.
  • The Publisher allows tenants to deploy their content to their websites via FTP, Visual Studio and WebMatrix.
  • The websites runtime database provides a configuration store for the websites feature, and runs on Microsoft SQL Server.  The runtime database needs to be contactable by all the servers in the websites infrastructure, and SQL mixed mode authentication must be used.  If you use a named instance, be sure to start the SQL Browser service and open UDP port 1434, as well as creating a program exclusion for the SQL instance process (a firewall sketch follows this list).
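
For that last point, the firewall rules on the SQL server might look something like the sketch below (the instance path is an example and depends on your SQL version and instance name):

# Sketch: allow SQL Browser (UDP 1434) and the named instance process through the firewall
New-NetFirewallRule -DisplayName "SQL Browser (UDP 1434)" -Direction Inbound -Protocol UDP -LocalPort 1434 -Action Allow
New-NetFirewallRule -DisplayName "SQL named instance (sqlservr.exe)" -Direction Inbound -Program "C:\Program Files\Microsoft SQL Server\MSSQL11.WEBSITES\MSSQL\Binn\sqlservr.exe" -Action Allow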

Of these components, only the front end servers & the publisher are exposed to the Internet.  All other components are purely internal.  For the front end servers, you will expose only the load balanced IP address.

When I talk about the Websites feature, I like to break it down into the management components (controller, management server, publisher, runtime DB) and the delivery components (front ends, workers, file servers).

Websites Overview

One of the key things you’ll need to do before you deploy the websites feature is choose a DNS name for your hosted websites.  By default, each website you create gets a unique DNS name in the format <website name>.<your custom domain name>.  For example, websites created in Windows Azure get a name like <something>.azurewebsites.net.  You could use something like websites.yourdomain.com, or register a custom domain; there are no special requirements placed on this.  You do need to consider how easy it will be for someone to type those URLs, so don’t choose something too complex or with too many layers.

Tenants can also map other domain names to their websites once they are provisioned (assuming your plan allows this), but the sites will always keep a name under your custom domain as well.

You’ll then need to set up DNS records in this domain that will allow tenants to publish content, and users to access content.  The DNS records you’ll require are detailed here.

When deploying the websites feature, if you are using NAT to present the public-facing components you’ll also need to set up an internal DNS zone that has the same records as the external-facing zone, but referring to the internal IP addresses.  If you don’t do this, you’ll find that things like deploying new websites from the gallery will fail.
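
As an illustration only (the zone name, record names and addresses are made-up examples; adjust to match your own public records), the internal zone on a Windows DNS server might be set up like this:

# Sketch: internal copies of the public records, pointing at internal addresses
Add-DnsServerPrimaryZone -Name "websites.contoso.com" -ReplicationScope "Forest"
Add-DnsServerResourceRecordA -ZoneName "websites.contoso.com" -Name "*" -IPv4Address "10.0.0.10"        # front end load balanced VIP
Add-DnsServerResourceRecordA -ZoneName "websites.contoso.com" -Name "publish" -IPv4Address "10.0.0.11"  # publisher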

You’ll also need to consider certificates.  You’ll need to provide a default certificate for the websites feature which is used if an SSL session is requested and there is no corresponding SSL certificate loaded, and also for git publishing.  This certificate needs to be a wildcard certificate which includes the names *.<your custom DNS suffix> & *.scm.<your custom DNS suffix>.  You’ll also need to provide a certificate for the publisher – you can reuse the default certificate if you want to, or you can provision a new certificate with the publish.<your custom DNS suffix> name explicitly in it.
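
For a lab you can get going with a self-signed certificate covering those names (a sketch only: use a certificate from a proper CA for anything real, and New-SelfSignedCertificate with multiple DNS names needs Server 2012 R2 or later):

# Sketch: lab-only self-signed wildcard certificate covering the default and publisher names
New-SelfSignedCertificate -DnsName "*.websites.contoso.com", "*.scm.websites.contoso.com", "publish.websites.contoso.com" -CertStoreLocation "Cert:\LocalMachine\My"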

The other thing you’ll need to think about is redundancy.  As noted above, you should build redundancy in to your front end servers by load balancing and provision multiple worker servers.  You should also plan for redundancy with your content store (the file server).  You could go as far as a file server cluster, or try something simpler like using DFS to replicate the content.  Another alternative to consider is using a NAS or a SAN that exposes a Windows SMB file share.