Category Archives: Identity and Authentication

ADFS and Office Modern Authentication, What Could Possibly Go Wrong?

We’ve recently thrown the load balancer switch to send users to our new ADFS 4.0 farm rather than the old ADFS 2.x farm. My first baby steps in this process were documented in a prior post. It turns out that was just the beginning of this long, tortured journey. Things got very complicated when we started getting errors from users of Outlook hitting our Office 365 Exchange Online. In my prior post I explained how SAML Tracer can be helpful. However, you can’t use a browser-based HTTP debugger/tracer with a thick client like Outlook. In these cases Fiddler is your friend.

Many of the Office 2016 apps (and some of the Office 2013 apps with the right updates and registry settings) can use what Microsoft likes to call Modern Authentication. This is nothing but a lame pseudonym for OpenID Connect. OIDC, as it is abbreviated, uses a web-API-friendly exchange to authenticate users. This is in contrast with the older, well-established SAML and WS-Trust authentication protocols, which are SOAP-based. We don’t (yet) use MFA with Office 365, so the settings I discussed in the prior article don’t apply to it.

Older versions of the Office thick clients use basic authentication with Office 365. The app puts up a credential dialog and then sends the user’s credentials to the O365 service where the actual authentication against Azure AD takes place. The user credentials are protected by TLS. This means that the user has to enter their credentials each time they start the app unless they choose to have the credentials stored locally. The biggest downside to this is that those locally stored credentials can easily be harvested by malware.

How does OIDC change the authentication flow? Newer Office apps open a window that hosts a browser which the app directs to the address of the OIDC provider (OP) configured during auto-discovery. The OP puts up a web form to collect the user’s credentials and, after validating them, returns two JSON web tokens. One is an app authentication token, the other is a refresh token which can be used by the app to request a new auth token when the current one expires. Thus the user’s credentials are never stored locally. The app and refresh tokens could be replayed but they are bound to the app so their loss would be far less damaging.

The Office 365 OP is the familiar https://login.windows.net and/or login.microsoftonline.com, both of which sit in front of Azure Active Directory (AAD). Things get more complicated when ADFS is in the mix, and it really is a bit of a mess when your ADFS is using a SAML Claims Trust Provider (CTP). The UW, like many higher-ed institutions, uses the community-developed Shibboleth SAML IdP, and our ADFS is configured with it as the CTP. This means we get an authentication flow that transitions between 3 different protocols. The initial step from the Office app uses OIDC. AAD then redirects to ADFS using WS-Federation. ADFS translates the WS-Federation request into a SAML protocol call to Shibboleth, and the whole process unwinds as the security tokens are returned. As you can see there are lots of places where things can go haywire.

Our first sign of something amiss was users reporting this error when they attempted to sign on after the switch to ADFS 4.0.

[Screenshot: error page shown by ADFS, including an Activity ID]

Ah-ha, there is an Activity ID. I can look that up in the ADFS event logs to get more detail. Except that the logs didn’t say anything other than there had been an authentication failure. Not very helpful. Here is where breaking out Fiddler becomes necessary. As an aside I recommend running Fiddler from an otherwise unused machine because it captures all network traffic. Your typical workstation is going to be way too noisy which will clutter the network capture with lots of extraneous traffic. I use a virtual machine for this purpose so that nothing is running other than the app (usually Outlook) and Fiddler.

I’m not going to spend time describing how to use Fiddler. There are lots of web articles on that topic. What I discovered is that Azure AD (AAD) was sending a WS-Federation request to ADFS with a URL query parameter of:

wauth=http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password

ADFS then sends this same URI as the SAML AuthnContextClassRef to Shibboleth. This URI is not part of the SAML spec, so Shib returned an error and ADFS displayed the error page shown above.

If you recall from my prior post there is an ADFS CTP property called CustomMfaUri that gets applied in the MFA case. Unfortunately Microsoft did not create a corresponding non-MFA property. I’ve asked them to consider creating a CustomDefaultUri property. We’ll see if that gains any traction.

At this point I called Microsoft Premier Support to find out what, if anything, could be done to fix this. The support engineer took a look at the Fiddler trace and pointed out something I hadn’t noticed. Outlook 2016 was adding a “prompt=login” parameter to its OIDC login request. AAD translates this into the WS-Federation wauth parameter. He told me that this AAD behavior is configurable and that I should follow example 3 of this article https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/ad-fs-prompt-login. That indeed fixed the issue. The wauth parameter is optional in WS-Federation; password authentication is the default in its absence. The SAML AuthnRequest created by ADFS when there is no wauth has no AuthnContextClassRef, so Shib also defaults to password authentication.
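
For reference, that AAD-side change is made with the MSOnline PowerShell module. Here is a minimal sketch of the sort of change the linked article describes; the domain name is a placeholder, and the PromptLoginBehavior value shown (NativeSupport, which lets ADFS 4.0 handle prompt=login natively) is an assumption, so check the article for the value appropriate to your environment.

# Requires the MSOnline module and an existing Connect-MsolService session
# example.edu and the PromptLoginBehavior value are placeholders; see the linked article
Set-MsolDomainFederationSettings -DomainName example.edu -PromptLoginBehavior NativeSupport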

Our celebration of success was short-lived as other users continued to have similar login problems. What we discovered is that different versions of Office and Outlook 2016 have different “modern auth” behavior. The click-to-run version downloaded from the Office 365 site sends the prompt=login parameter. However, the MSI volume-licensed version we distribute to campus is an older build of Office 2016 and it instead sends a different OIDC parameter: “amr_values=pwd”. There is no AAD property to configure the behavior with this parameter; it results in the above wauth being sent to ADFS. As far as we can tell there is no update that can be applied to the MSI version to have it change its behavior to send the “prompt=login” parameter. At this point the MS support engineer had no suggestions.

I’m thinking we have 3 options. The first is to convince everyone with the MSI version to uninstall it and install the c2r version. This is a non-starter because campus IT has no real authority; there is no way to force people to upgrade. We still have thousands of people running Windows 7 and a few even with XP! The next option was writing an F5 load balancer iRule to do a URL rewrite that removes the wauth parameter. That solution would not work in the long run because we want to use AAD conditional access to do a structured roll-out of MFA, and removing the wauth would negate the requirement to use MFA. We could detect this specific wauth and only remove it, but now the iRule is becoming fairly complex. So the third option was to ask the Shibboleth team to accept the Microsoft URI as a legitimate proxy for the PasswordProtectedTransport URI which is defined by the SAML spec.

The Shib team made the change and now things are working properly. To quote Jim the lead engineer “We frequently have to make accommodations for vendor apps that are not spec compliant.” A better solution would be for Microsoft to more fully support SAML-spec IdPs as CTPs. Another solution would be to do password hash sync from AD to AAD so that AAD could handle the login without ADFS or Shibboleth in the mix. However we are concerned about introducing a second login portal to our campus users. We have enough of a problem with phishing and this would only complicate matters. We’ll see which way we end up going.


ADFS 4.0, Shibboleth and MFA

The University of Washington uses the InCommon Shibboleth SAML identity provider for web SSO. We run ADFS as a proxy between Office 365/Azure AD and our on-premises identity systems. Our ADFS is configured to use our Shib IdP as an additional “Claims Trust Provider” (CTP). We do this for two reasons: we want all web SSO to have the same login experience, and we provide multi-factor authentication through our Shib service. The problem I was working to solve was how to configure ADFS 4.0 to require MFA through our Shib instance.

We initially set up what is known as ADFS 2.0. This was a downloadable upgrade to the original version of ADFS that shipped with Windows Server 2008. ADFS normally shows a “Home Realm Discovery” (HRD) page if there is more than one CTP (with AD being the default CTP). We wanted our ADFS relying parties (RPs) to go straight to Shibboleth, so we modified the HRD page code to effect this. We also used this modified code to require MFA for certain RPs. When ADFS 3.0 was released with WS 2012 R2 it threw a monkey wrench into this design. ADFS 3.0 no longer ran as an IIS web site, so the HRD page code was no longer accessible to be modified. We discovered that you can configure RPs to go to a specific CTP, but we were stymied as to how to require MFA. In the interim ADFS 4.0 was released with WS 2016 and yet the solution to the MFA problem remained elusive. I finally opened a support request with Microsoft to seek an answer to this problem. Here is what I’ve learned.

It turns out this is a two-step solution. The first step is configuring your CTP for MFA. The second step is configuring your RPs to require MFA.

Almost all advanced ADFS settings are accessed via PowerShell. There are new parameters to the familiar PS cmdlets that are of interest. Here are those new commands and how they would be applied to solve our conundrum.

Configuring the Shibboleth CTP

You need to tell ADFS how to invoke MFA through the CTP. Our Shibboleth IdP will require MFA if it receives an AuthnRequest containing an AuthnContextClassRef with a specific URI value.
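
The exact URI is deployment-specific; as a purely hypothetical example, an IdP might signal MFA with the REFEDS MFA profile URI:

https://refeds.org/profile/mfa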

Note that the value must match whatever your Shibboleth is configured to look for; your Shib installation may use a different URI to signal MFA.

Hint: use Firefox and its SAML Tracer plugin to trace a login session. SAML Tracer helpfully decodes the SAML token so you can examine it along with the rest of the HTTP exchange.

A WS-Federation authentication request can effect this request for MFA by setting the wauth parameter to the above value. ADFS translates this WS-Fed request into a SAMLP AuthnRequest with the required AuthnContextClassRef value of this URI. If you have a WS-Fed RP that can’t be configured to send the wauth parameter, or if you need to enforce MFA at the IdP, then this won’t work. Instead you configure ADFS so it knows how to make this request using the following PowerShell.

First, to get the current CTP state you can call
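
A minimal sketch, assuming the CTP was created with a display name of “UW Shibboleth” (a placeholder; substitute the name your CTP actually uses):

# Show the current state of the claims provider trust, including CustomMFAUri
Get-AdfsClaimsProviderTrust -Name "UW Shibboleth"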

The parameter of interest is CustomMFAUri. Use the following code to set it.
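
Again a sketch, using the same placeholder display name and the hypothetical MFA URI from above; substitute the URI your IdP expects:

# Tell ADFS which AuthnContextClassRef URI signals MFA to this CTP
Set-AdfsClaimsProviderTrust -TargetName "UW Shibboleth" -CustomMFAUri "https://refeds.org/profile/mfa"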

Now ADFS knows how to ask for MFA from our SAML IdP.

Configuring Your Relying Parties

We want to send all of our RPs to our Shib CTP. Use the following PowerShell to do this.
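
A sketch, assuming the placeholder CTP display name from above; the RP identifier shown is the well-known one for the Azure AD/Office 365 trust, so substitute your own RP identifier as needed:

# Pin this relying party to a single claims provider, bypassing HRD
Set-AdfsRelyingPartyTrust -TargetIdentifier "urn:federation:MicrosoftOnline" -ClaimsProviderName @("UW Shibboleth")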

You could script this by sending the output of Get-AdfsRelyingPartyTrust to the above command. Note also that since we set a display name on the CTP we had to use that rather than the URN identifier.

The next step is configuring those RPs of which you require MFA. That is done with this command
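
Presumably along these lines, with the same placeholder RP identifier:

# Ask the claims provider (rather than ADFS itself) to perform MFA for this RP
Set-AdfsRelyingPartyTrust -TargetIdentifier "urn:federation:MicrosoftOnline" -RequestMFAFromClaimsProviders $true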

Now an incoming authentication request to the RP will result in our Shib prompting for the second factor after the entry of the correct user name and password.

I did a search on the RequestMFAFromClaimsProviders parameter after being told about its existence and didn’t find much. The MS documentation is useless and gives no examples of the use of this and the related parameters. I did find one non-MS blog post here, but it was rather general in nature. I hope these detailed instructions help those who want to use Shibboleth as their institutional identity provider via ADFS.

What is a Web Service?

The earliest computers didn’t talk to one-another. They were islands of information. A lot has changed since those pioneering days. Web services are the current state-of-the-art in computer to computer communications. I present below a brief history to illustrate and help explain this transformation from isolation to connectedness.

Let’s Talk! Connecting Computers

Networking technologies were developed to enable inter-computer communications. At that point you could connect two or more computers together but it still wasn’t easy to share information. There were initially no standard ways to represent or manipulate data.

Multiple, competing efforts progressed to standardize network communications. TCP/IP emerged as the primary way to interconnect networks and enabled the Internet. SMTP saw increasing adoption as an electronic mail protocol. Things were not as simple in the world of client-server communications. DCE/RPC and CORBA competed for attention, with Microsoft settling on the former. While providing a framework for client-server computing, these are still low-level binary network protocols that are neither easy to use nor firewall-friendly. By that I mean those protocols require a large number of TCP ports to be open, which nullifies most of the security gained from a firewall.

Web Services

The next major advancement in network communications was SOAP. SOAP is not a wire-level protocol meaning that SOAP messages can be transmitted via a variety of application layer protocols including HTTP and SMTP. SOAP also standardized on XML as the data representation model. Both of these concepts were transformational in that now you could use a set of ports that are usually left open on firewalls and the data could be interpreted without an understanding of a complex binary layout. Major vendors jumped on SOAP and produced a raft of web service specifications (WS-*). Why were these called “web services?” Because they used the same underlying protocol as the World-Wide-Web: HTTP!

Except this is not a completely accurate timeline. SOAP was developed after HTTP, and it turns out that HTTP itself makes a great client-server computing protocol. The HTTP protocol was developed by early Internet luminaries including Tim Berners-Lee, Paul Leach and Roy Fielding. The latter published a revolutionary dissertation in 2000 analyzing network-based software architectures. Within his dissertation Dr. Fielding presented Representational State Transfer, a.k.a. REST. This is an architectural pattern for producing client-server communications using the rich semantics of HTTP. However, given the large investment that had been made in the WS-* suite, it took a long time for folks to realize the inherent advantages of REST over SOAP.

RESTful Web Services

Although SOAP-based services used HTTP, they did not and cannot fully leverage all of the features of HTTP. All SOAP message exchanges use the POST HTTP verb. It doesn’t matter what you want to do, the SOAP client POSTs a request to the SOAP server. This is incredibly inefficient. The majority of network transactions are data reads (I don’t have any handy references for this but I believe it to be true). HTTP has a built-in verb for fetching data: GET. HTTP GETs are by definition idempotent and free of side-effects, and in REST they are stateless. This enables two very powerful features: scale-out and caching. Because the requests are stateless you can use a load balancer to spread them out to a farm of servers. This also enables caching of requests on intermediate nodes of the Internet such as proxies and gateways. These combined capabilities have enabled the creation of Content Delivery Networks (CDNs).

Details of REST

REST is a resource-centric architecture which gives it the following characteristics.

  • Each distinct resource is named by a unique URL path.
    • e.g. https://myservice.example.com/stuff/things/mything22
    • The leaf element is the resource name while the intervening path elements can be thought of as containers or collections; thus the leaf element name need only be unique within the specific path hierarchy.
  • CRUD (create, read, update, delete) operations map directly to the HTTP verbs POST, GET, PUT, and DELETE respectively (see the sketch after this list).
  • Stateless – as noted above this enables Internet-scale services
  • Standard MIME media-types for payload encoding (JSON, XML, etc.)
  • Searching for resources is rooted in a container path and employs URL parameters to describe the search query
    • e.g. https://myservice.example.com/stuff/things?$filter=thingnum lt 22
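
To make these conventions concrete, here is a minimal PowerShell sketch against the hypothetical service named in the examples above (the URL and payload are illustrative only):

# Read a resource (GET against its unique URL path)
$thing = Invoke-RestMethod -Uri 'https://myservice.example.com/stuff/things/mything22' -Method Get

# Create a resource (POST a JSON payload to the collection)
$body = @{ thingnum = 23; name = 'mything23' } | ConvertTo-Json
Invoke-RestMethod -Uri 'https://myservice.example.com/stuff/things' -Method Post -Body $body -ContentType 'application/json'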

While all of this is cool, REST isn’t an actual protocol. Rather, it is a set of architectural styles or conventions. Several competing implementation protocols have evolved as a result. The two dominant REST API description languages are OData and OpenAPI (formerly Swagger). The former is being pushed heavily by Microsoft, which may explain why some in the open source community prefer the latter (and I’m sure there are lots of other good reasons). In any case they both aspire to the same goals: providing a standard way for a service to describe its capabilities (the service description endpoint) and the schema of its data (the service metadata endpoint).

Examples of RESTful Web Services

Where to start? They are all around us. Facebook, Amazon, Google, Microsoft all expose resources via web services. I have code that calls the Amazon AWS Simple Queue Service for event message delivery. I am developing code to call the Microsoft Azure Active Directory Graph API (AAD Graph for short).

My employer, the University of Washington, hosts a number of RESTful web services. One that has been in use for a while is the Groups Web Service. A new middleware service is being developed to provide a standardized way to access University data. This is known as the Enterprise Integration Platform.

My next post will dive into making web service calls using the PowerShell scripting language.

Addenda

Example PowerShell and a PowerPoint deck at https://github.com/erickool/ws-powershell

Kerberos Delegation in Active Directory

The topic of Active Directory Kerberos delegation seems rather retro given that it is as old as AD itself. However, this is a very confusing and complex subject which has resulted in much misinformation out on the Internet. I am hoping that my explanation will be useful to a broad audience.

What is Kerberos Authentication?

Kerberos is an authentication protocol. It facilitates users proving their identity to services via the exchange of “tickets” mediated by the AD domain controllers. It is also a mutual authentication mechanism that allows services to prove their identities to users. Much has been written about Kerberos, so suffice it to say that it is one of the most secure authentication protocols in wide use. The protocol is defined in RFC 4120 (https://tools.ietf.org/html/rfc4120).

What is Kerberos Delegation?

Kerberos delegation is used in multi-tier application/service situations. A common scenario would be a web server application making calls to a database running on another server. The first tier is the user who browses to the web site’s URL. The second tier is the web site. The third or data tier would be the database. Delegation allows the database to know who is actually accessing its data.

One way to set this up is to run the web site using a domain service account. Let’s call this service account WebServerAcct. The database is running on a different server under its own service account. In many cases the database is run by a separate team from the web application so that the web application team must request database access for their WebServerAcct service account. The database admins would need to grant sufficient access to the WebServerAcct account for all possible actions of the web application. This means that the web application developers and/or admins determine who can access the application and by extension the data in the back end. This situation may be unacceptable to the database admins as they cannot control who ultimately has access to the data. The solution is to use Kerberos delegation.

Kerberos delegation would be configured on the WebServerAcct service account which grants it permission to delegate to the database service account. What does this actually mean? When a user accesses the web site they authenticate with Windows Integrated Authentication. This results in the WebServerAcct application receiving a request from the user that is accompanied by the user’s Kerberos ticket (I’m glossing over lots of details here in order to keep the scenario relatively simple). The user’s ticket contains a list of the user’s AD group memberships. The WebServerAcct application can examine the user’s group memberships and only allow access if the user is in a specific group. With delegation configured, the WebServerAcct service can request a Kerberos ticket to the database as the user rather than as itself. IOW, the database would receive a Kerberos ticket from the user rather than from the WebServerAcct application. This allows the database to examine the user’s groups to see if there is a membership in a group that is permitted access to the database. Without delegation the database would have no idea what user is actually accessing the data since it would have to give blanket access to the WebServerAcct account.

A concrete example of the above scenario is running SQL Server Reporting Services (SSRS) on a computer that is separate from the SQL Server database that provides the report data. The SSRS developer/admin can limit access to reports to specific users or groups. However, this does not actually grant those users/groups access to the data in the database. With delegation the database admins can control which users or groups can actually access the data rather than giving unlimited access to the SSRS service account.

Constrained Versus Unconstrained Delegation

Unconstrained delegation (a.k.a. basic delegation) was introduced with Active Directory in Windows 2000. It has the rather severe shortcoming that it allows a user/service to request delegated tickets to any other service. This capability can be abused as an elevation-of-privilege attack vector. It was, however, the only reliable way to do delegation across a domain-trust boundary until Server 2012. Constrained delegation imposes limits as to which service accounts a delegating account can delegate to. This vastly reduces the potential for abuse of the delegating service account’s privileges.

There are actually two flavors of constrained delegation.

Original Constrained Delegation

This initial form of constrained delegation was introduced in Server 2003. With this type of delegation you explicitly list the services that the first-tier account is allowed to delegate to. Using the above example, you would set constrained delegation on the WebServerAcct account. The Active Directory Users and Computers (ADUC) user property sheet has a page for configuring delegation. This form of constrained delegation may not be used across a domain/forest trust; both the middle-tier and back-end tier services must be in the same domain.1 There are two other caveats around this form of constrained delegation. 1) The delegation tab will only appear on user and computer objects that have Service Principal Names (SPNs) set. If you expect a delegation tab and it isn’t there, that means that SPNs are not configured. 2) The delegation tab has some shortcomings in supporting service accounts that are user accounts; it will only list services running as a computer’s local account (Network Service, etc.). Thus to delegate to a user-object service account one must directly edit the msDS-AllowedToDelegateTo (A2D2) attribute, as sketched below.
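
For example, a minimal sketch of editing A2D2 with the Active Directory PowerShell module; the account name and SPN are hypothetical:

# Allow WebServerAcct to delegate to a hypothetical SQL Server service
Set-ADUser -Identity WebServerAcct -Add @{'msDS-AllowedToDelegateTo'='MSSQLSvc/dbserver.example.com:1433'}
# Per the Technical Details section below, also ensure the trust bit is set
Set-ADAccountControl -Identity WebServerAcct -TrustedToAuthForDelegation $true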

SPNs are discussed in many places on the web, so I won’t dwell on them here.

Resource-Based Delegation

The new form of delegation was introduced in Server 2012. It is designed to overcome several limitations of A2D2 delegation. First, it allows for delegation across a trust. Second, it changes how delegation is controlled. Rather than configuring the middle tier account to enable delegation, you configure the data-tier (resource) account to specify who can delegate to it. Additionally it does not require domain-administrator privilege to configure. The admin who has the ability to manage the resource service account can make the delegation changes. This change introduced the msDS-AllowedToActOnBehalfOfOtherIdentity attribute which would be configured on the resource service account.
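
A sketch of the resource-based form, again with hypothetical account names; note that the change is made on the back-end (resource) account rather than the middle tier:

# On the data-tier service account, name the middle-tier accounts allowed to delegate to it
Set-ADUser -Identity DbServiceAcct -PrincipalsAllowedToDelegateToAccount (Get-ADUser WebServerAcct)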

This article is a good in-depth explanation of the Kerberos S4U2Proxy extension that enables constrained delegation and the changes introduced with Server 2012: http://windowsitpro.com/security/how-windows-server-2012-eases-pain-kerberos-constrained-delegation-part-1 (with more technical details in the second part).

I don’t believe the proffered advantages are as compelling in a real world situation. First, the domain is not a security boundary (see Security Bulletin MS02-001). I understand that there are a lot of legacy setups in the wild, but if you aren’t thinking about domain consolidation you really ought to be. Second, the data custodians/DBAs still need to control access to the databases by limiting access to specific groups. Do you really gain much by giving DBAs the additional ability to limit access to specific apps/services through this second delegation option? Regardless, there are certainly scenarios where these features will be useful.

Sensitive to Delegation?

This may be the most confusing part of Kerberos delegation. What exactly does the user account option “Account is sensitive and cannot be delegated” do? Does it control whether an account can request delegated tickets to another account? NO! It has no bearing on whether an account can do delegation! Rather, it means that a service account cannot request a delegated ticket for an account with this setting.

I think an example is in order. First, what would be a sensitive account? That means an account with elevated privilege in AD. An obvious example is Domain Admins. You would not want a service to request a delegated token for a domain admin. That would elevate the service’s privilege to that of domain admin. It is a best practice to set AD ACLs to limit the access of ordinary user accounts (e.g. ordinary users should not be able to log into and configure servers). Service accounts and systems admin accounts often need additional privileges to do what they do. Thus you should also stamp these accounts with the “Account is sensitive” setting.

A note on AD security: do not grant ordinary user accounts elevated privileges! Create clearly named separate accounts for those administrative tasks. Never use highly privileged domain or enterprise admin accounts for tasks that do not require that level of privilege! If you do server administration, browse the web, or read email with an account with EA/DA privs, the hackers will own you. ‘Nuff said.

Technical Details

All AD security principals contain the attribute userAccountControl. This attribute is a bit set, meaning that each binary digit is assigned a specific meaning. These bit values are defined in the Windows SDK in lmaccess.h. We are interested in three of these “flag” values:

#define UF_TRUSTED_FOR_DELEGATION                     0x80000
#define UF_NOT_DELEGATED                             0x100000
#define UF_TRUSTED_TO_AUTHENTICATE_FOR_DELEGATION   0x1000000

The UF_NOT_DELEGATED bit is set when you select the “Account is sensitive and cannot be delegated” checkbox.

The UF_TRUSTED_FOR_DELEGATION bit specifies unconstrained delegation. It is set when you select “Trust this user/computer for delegation to any service (Kerberos only)” in the Delegation tab. The only accounts that should have this bit set are the domain controller computer accounts. We have to trust our DCs; we’d rather not extend this level of trust to anyone else!

The UF_TRUSTED_TO_AUTHENTICATE_FOR_DELEGATION bit must be set to enable constrained delegation. It is set automatically when you add delegation through the Delegation UI.

As I mentioned earlier, the msDS-AllowedToDelegateTo attribute enables constrained delegation to the named servers/services. The entries in this attribute must match the SPN(s) set on the corresponding server or service account. If you manually modify this attribute, then you must ensure that the UF_TRUSTED_TO_AUTHENTICATE_FOR_DELEGATION bit is set on userAccountControl.

The msDS-AllowedToActOnBehalfOfOtherIdentity attribute controls the newer form of constrained delegation. It is set on the back-end data tier service account and names those middle-tier accounts that are allowed to request delegated tickets to the back-end service.

LDAP and PowerShell Techniques for Managing Delegation

It is a good policy to periodically scan your AD accounts to see which have delegation enabled. To make this an effective tool though you’d need a table of those accounts that have been granted permission to delegate. This enables spotting accounts whose delegation authorization has expired or who were never actually given administrative authorization. Similarly it is a good idea to scan privileged and service accounts to ensure that they have the “Account is sensitive” bit set.

Searching AD for accounts with one of these bits set in userAccountControl is straightforward but certainly not obvious. The first challenge is understanding LDAP query filter structure which is based on prefix notation. This means that the logical operators that combine query clauses are placed before the clauses. LDAP query clauses are enclosed in parenthesis. If you have clause A and clause B and you wanted both to be true to satisfy the query, it would be structured as (&(A)(B)) rather than the more conventional programming infix notation of (A & B).

The second hurdle is searching for specific bits in a bit set. This requires the specification of a “custom” query operator that is identified using an OID (an Object ID). OIDs are a bit like GUIDs except that they have a hierarchical namespace (digit space?) and are regulated by a standards body. At any rate the OID for doing a bit-match query clause in LDAP is “1.2.840.113556.1.4.803”. Another thing to keep in mind is that this LDAP bit-match query expects a decimal (base-10) number rather than the hexadecimal (base-16) number used in lmaccess.h.

  • Unconstrained delegation (UF_TRUSTED_FOR_DELEGATION 0x80000) = 524288 decimal
  • Sensitive to delegation (UF_NOT_DELEGATED 0x100000) = 1048576 decimal

To search for all accounts that are enabled for unconstrained delegation use the LDAP query filter of:

(userAccountControl:1.2.840.113556.1.4.803:=524288)

To search for accounts that should have “Sensitive to delegation” but don’t:

(&(name=$userPrefix)(!userAccountControl:1.2.840.113556.1.4.803:=1048576))

Note the exclamation point in front of userAccountControl. That means to find all accounts that don’t have that bit set. The $userPrefix is a placeholder for a user filter expression that would apply to your AD. We create all of our admin and service accounts with specific prefixes to make them easy to identify. Thus you could have (name=a_*) to search for all accounts that start with a_.

You can use these query filters with a tool like ldifde. I’ll show how to make these queries from PowerShell. The first example searches for user accounts starting with a specific prefix that don’t have UF_NOT_DELEGATED set.

# Find all user accounts matching the prefix that don't have "Sensitive to delegation" set
param([string]$userPrefix="a_*")
Import-Module ActiveDirectory
$filter = "(&(name=$userPrefix)(!userAccountControl:1.2.840.113556.1.4.803:=1048576))"
$users = Get-ADUser -LDAPFilter $filter -Properties userAccountControl
Write-Host "$($users.Count) accounts found without UF_NOT_DELEGATED set"
foreach ($user in $users) {
    # do something for each user or simply log the results
}

This script searches for all accounts (user, computer, gMSA, etc.) that have unconstrained delegation (UF_TRUSTED_FOR_DELEGATION) set.

# Find all accounts that are enabled for unconstrained delegation
$filter = "(userAccountControl:1.2.840.113556.1.4.803:=524288)"
$objects = Get-ADObject -LDAPFilter $filter
$objects | select Name

To search for objects with constrained delegation, you look for non-empty msDS-AllowedToDelegateTo attributes with this query filter:

$filter = "(msDS-AllowedToDelegateTo=*)"

If you want to change the userAccountControl value of accounts that are out of compliance, there is a PowerShell cmdlet for doing this.

Set-ADAccountControl

This cmdlet does not require bit-set manipulation. You list the settings you want as separate command parameters. See https://technet.microsoft.com/en-us/library/ee617249.aspx. There does not appear to be a corresponding Get-ADAccountControl, which I find a little strange.
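
For example, a sketch marking a hypothetical service account as sensitive and clearing unconstrained delegation:

# svc_web01 is a placeholder account name
Set-ADAccountControl -Identity svc_web01 -AccountNotDelegated $true -TrustedForDelegation $false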

Conclusion

Wow, this ended up being much longer than I expected. I hope that this information is useful and leads to less confusion over the topic of Kerberos delegation.

Addenda

Additional resources:

  • Microsoft’s overview of the new Server 2012 delegation features: https://technet.microsoft.com/en-us/library/jj553400(v=ws.11).aspx
  • A deep dive into the details of Kerberos: https://technet.microsoft.com/en-us/library/4a1daa3e-b45c-44ea-a0b6-fe8910f92f28

The above post updated on 2016/11/16 to clarify several points.

  1. I’ve seen references to doing constrained delegation across a domain trust using versions of Windows Server prior to 2012. However, I’ve not found a definitive explanation of how this would work. At the very least it would require an up-level trust that supports Kerberos and all of the related configuration to enable Kerberos referrals to work properly. The second addendum-linked article, which is for pre-Server 2012, says “Constrained delegation is the only delegation mode supported with protocol transition and only works in the boundary of a domain.”

Excessive CPU Use on Win8.1 Redux

The changes I made to solve the “Immersive Shell” DCOM errors did clear up the System event log. However, the high CPU usage persisted so I did more digging. I eventually found several references to problems with a Windows Backup scheduled task and its sdclt.exe process. I started Resource Monitor and saw many instances of sdclt.exe. Some were running and many more were recently terminated. There is a scheduled task that is designed to notify the user that Windows Backup has not been configured. For some reason the sdclt.exe process is repeatedly restarted and this ends up using considerable system resources.

The fix is to go to the task and disable it. The task is located in the Task Scheduler under Microsoft -> Windows -> WindowsBackup and is called ConfigNotification. Select it and disable it. Unfortunately a reboot is necessary to actually get the incessant sdclt.exe restarting to stop.
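
If you prefer to script it, this PowerShell sketch should disable the same task (run from an elevated session; the task path matches the location described above):

Disable-ScheduledTask -TaskPath "\Microsoft\Windows\WindowsBackup\" -TaskName "ConfigNotification"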

I have not found an official Microsoft acknowledgement that this is a problem nor have I seen any postulations as to why sdclt.exe is behaving in this fashion. The only common thread is that it occurs on Win8.1. Was this scheduled task introduced in Win8.1 or were there changes made to it with the Win8.1 upgrade? As far as I can tell the high CPU usage started after I installed the 2013-12-13 Windows Updates but I’ve no idea what those updates may have changed.

Regardless, I have a hunch as to what’s going on. I am one of what is certainly a very small number of people who run with User Account Control turned off. A few people turn UAC off because they don’t want to be nagged about running programs with full admin privileges. My reasons are more pragmatic. I have a home (Documents) folder that is redirected to a UNC share. I also run Visual Studio with Administrator privilege because that is the only way to enable debugging. Unfortunately folder redirection does not play nicely with UAC. This was causing all sorts of weird errors in Visual Studio. Thus I turned UAC off. There is a major Win8/Win8.1 consequence to turning UAC off: modern apps won’t run. This didn’t seem to me like much of an issue because I couldn’t stand them anyway. The reason they won’t run is they are configured to only run in a partially trusted application domain. With UAC off you can only run managed code in full trust mode. I’m guessing that the Windows Backup notification was written in partial-trust managed code. If this is the case, it certainly won’t run with UAC turned off. Apparently running the system with UAC off is not part of the Microsoft test matrix.

This brings up an old beef of mine. Why doesn’t the redirector have better support for UAC? It is a total pain that a redirection made as an ordinary (limited-privilege) user can’t be accessed by that same user with a full local administrator token. I’m sure there is some use case that I’m being protected against, but I can’t figure out what it is since the file system ACLs will still be applied. Yeah, I know I am in the extreme minority of power users who push the system to its limits. That’s the standard argument for not accommodating corner cases.

At any rate, I’m sure glad I got the CPU usage issue sorted out. Boy I can’t wait to see what surprises are in the next round of updates!


Hosting a Shibboleth SP Web Site in Azure, Conclusion

This is the last in a series of posts about using the Shibboleth Service Provider to implement SAML SSO authentication in an Azure cloud service web site. The first three posts present background information. There are two posts that are specific to using the Shibboleth SP with Azure and then there is this concluding post.

The information I am presenting has come from three sources. First is the official Azure documentation from Microsoft and Shibboleth documentation from the Shibboleth Consortium. Each discusses their respective areas but there is no overlap. My second source of information is what I learned while working on the Azure team at Microsoft. That was over a year ago and many things have changed in the interim. However, it gave me a foundation in understanding how an Azure cloud service works. Finally I did a lot of experimenting, trying things, seeing what worked and what didn’t. I’ve found nothing on the web about hosting the Shibboleth SP in Azure so I believe I am blazing a trail of sorts. I hope this information may be of some value to others. Having said that I must offer some caveats.

The first caveat is to simply underscore the fact that Azure is being updated by Microsoft at a frantic pace. My explorations that produced this web application were done in June 2013. Some of the details will certainly change or become obsolete over time. For example, I noted that an Azure web site did not support SSL. I’ve since seen an announcement of an SSL preview for the web site feature.

Unresolved and Unexplained Issues

Mysterious Site ID in IIS

I used the remote desktop to look at IIS Manager on my Azure role instance and saw that the web site ID was 1273337584. I thought “it’s assigning a random number” and expected it to change with subsequent deployments. It didn’t. So I deleted the deployment and redeployed instead of just doing an update. It remained the same number. Then I deleted the entire cloud service and created a new one with a new name. The web site ID remained the same. What can I conclude from this? Nothing really. I don’t know if this is a fixed number, used for all web role instances (remember that each role instance gets its own VM, so there is no chance of site ID collisions), or if there is some algorithm that could be related to my subscription or some other value.

I looked into using appcmd to read the site ID assigned by Azure thinking I could then modify the shibboleth2.xml file on the fly. Then I discovered that the web site hasn’t been created at the time the startup script is running. The only option is to have a method override in my app code for the role start notification. This is a bit of a chicken-and-egg problem because I’d have to restart the Shibboleth service after updating the config file and might also have to restart the web site – from within the web site code. So this issue remains without a good resolution.

Azure Load Balancer Affinity

An SSO exchange involves several browser redirects, first initiated by the SP code, then by the IdP code back to the SP. In between there is a user-interactive log in. All of the Azure documentation stresses that your web applications must be stateless. If you have multiple instances of a web role running for load-handling reasons, you have no control over which instance will receive a request. Will this cause problems during the authentication sequence?

I found one non-Microsoft blog post that said the Azure load balancer would have session affinity for a given source-IP/port pair. My own understanding from when I worked in Azure was that the load balancer would maintain a session for up to one minute of inactivity. I’ve seen no official confirmation of either notion. I’ve not spun up more than one instance yet to test this issue. Considering that Microsoft provides an SSO federation service in Azure’s Access Control Service, which uses the same sort of redirect sequence for authentication (more actually because of the extra federation hops), I’d have to believe that this is not an issue. It would be nice to know for sure though.

Conclusion

Of course this raises the question: why doesn’t Microsoft natively support SAML authentication? That is, why isn’t there a Windows Identity Foundation protocol handler for the SAML profiles and bindings? That would eliminate the need to jump through these hoops. I’ve asked some of my former Microsoft colleagues who are in a position to know and have received no response. I know the official line is to not comment on unreleased products or product plans, so the lack of a response is not surprising.

There is also the option of updating the Shibboleth SP implementation so it can act as a WIF protocol handler. It is open source and community developed. I might be able to contribute. Stay tuned.


Code Details for Hosting a Shibboleth SP Web Site in Azure

Continuing my series on hosting a Shibboleth SP web site in Azure, here is the entire install-shib.cmd startup file I used.

rem batch file to install the Shibboleth SP
echo running install-shib batch file >> %temp%\install-shib.txt 2>&1
date /t >> %temp%\install-shib.txt 2>&1

echo calling msiexec to run the Shib MSI >> %temp%\install-shib.txt 2>&1
msiexec.exe /i Shibboleth-SP\shibboleth-sp-2.5.1-win64.msi /quiet /L*v %temp%\shib-msi.txt INSTALLDIR=c:\opt\shibboleth-sp\ /norestart
if errorlevel 1 goto err1

echo calling xcopy to copy the config files >> %temp%\install-shib.txt 2>&1
xcopy /y /q Shibboleth-SP\*.xml c:\opt\shibboleth-sp\etc\shibboleth
if errorlevel 1 goto err2
echo calling xcopy to copy the key files >> %temp%\install-shib.txt 2>&1
xcopy /y /q Shibboleth-SP\*.pem c:\opt\shibboleth-sp\etc\shibboleth
if errorlevel 1 goto err2
echo calling xcopy to copy Shib DLLs that the ISAPI filter loader can't find >> %temp%\install-shib.txt 2>&1
xcopy /y /q "%systemdrive%\Program Files\Shibboleth\SP\lib\*.dll" c:\opt\shibboleth-sp\lib64\shibboleth
if errorlevel 1 goto err2

echo calling appcmd to add the ISAPI handler >> %temp%\install-shib.txt 2>&1
%windir%\System32\inetsrv\appcmd.exe set config /section:handlers /+[name='ShibbolethSP',path='*.sso',verb='*',modules='IsapiModule',scriptProcessor='C:\opt\shibboleth-sp\lib64\shibboleth\isapi_shib.dll',requireAccess='Script',responseBufferLimit='0']
rem appcmd returns 183 if the setting already exists; ignore and continue
if %errorlevel% EQU 183 goto appcmd2
if errorlevel 1 goto err3

:appcmd2
echo calling appcmd to add the ISAPI filter >> %temp%\install-shib.txt 2>&1
%windir%\System32\inetsrv\appcmd set config /section:isapiFilters /+[name='Shibboleth',path='C:\opt\shibboleth-sp\lib64\shibboleth\isapi_shib.dll',preCondition='bitness64']
if %errorlevel% EQU 183 goto appcmd3
if errorlevel 1 goto err4

:appcmd3
echo calling appcmd to remove the ISAPI filter restriction >> %temp%\install-shib.txt 2>&1
%windir%\System32\inetsrv\appcmd set config /section:isapiCgiRestriction /+[path='C:\opt\shibboleth-sp\lib64\shibboleth\isapi_shib.dll',description='ShibbolethWebServiceExtension',allowed='True']
if %errorlevel% EQU 183 goto icaclscmd
if errorlevel 1 goto err5

:icaclscmd
echo calling icacls to grant User execute to the Shib folders so the ISAPI filter will load >> %temp%\install-shib.txt 2>&1
icacls c:\opt /grant "Users":(OI)(CI)(RX)
rem if errorlevel 1 goto err6

echo calling icacls to grant NetworkService write to the Shib logging folder so the ISAPI filter can log >> %temp%\install-shib.txt 2>&1
icacls c:\opt\shibboleth-sp\var\log\shibboleth /grant "NetworkService":(OI)(CI)(RX,M)
rem if errorlevel 1 goto err6

:restart
echo restarting the Shib service to pick up the config changes >> %temp%\install-shib.txt 2>&1
net stop shibd_Default
net start shibd_Default
if errorlevel 1 goto err7

rem return a zero exit code for success
:success
exit /b 0

:err1
ECHO msiexec exited with errorlevel of %errorlevel% >> %temp%\install-shib.txt 2>&1
exit /b %errorlevel%

:err2
ECHO xcopy exited with errorlevel of %errorlevel% >> %temp%\install-shib.txt 2>&1
exit /b %errorlevel%

:err3
echo appcmd configuring handler exited with errorlevel of %errorlevel% >> %temp%\install-shib.txt 2>&1
exit /b %errorlevel%

:err4
echo appcmd configuring filter exited with errorlevel of %errorlevel% >> %temp%\install-shib.txt 2>&1
exit /b %errorlevel%

:err5
echo appcmd enabling filter exited with errorlevel of %errorlevel% >> %temp%\install-shib.txt 2>&1
exit /b %errorlevel%

:err6
echo icacls setting Shib folder perms exited with errorlevel of %errorlevel% >> %temp%\install-shib.txt 2>&1
exit /b %errorlevel%

:err7
echo restarting Shib service failed with errorlevel of %errorlevel% >> %temp%\install-shib.txt 2>&1
exit /b %errorlevel%

Here are some of the important edits that must be made to the shibboleth2.xml file. First, update the Site line inside of the ISAPI tag to read like this.

<Site id="1273337584" name="myshibbolethsp.cloudapp.net"/>

Next update the RequestMap section to name your host. Note that a Path element is not used; the entire web site is protected.

<Host name="myshibbolethsp.cloudapp.net" authType="shibboleth" requireSession="true"/>

Now set the entityID. This is the host name with the protocol prefix.

<ApplicationDefaults entityID="https://myshibbolethsp.cloudapp.net/"
 REMOTE_USER="eppn persistent-id targeted-id">

Remember to substitute your site’s DNS name and URL in the above edits.

Ensure that SSL is required.

<Sessions lifetime="28800" timeout="3600" relayState="ss:mem"
 checkAddress="false" handlerSSL="true" cookieProps="https">

You need to specify your IdP’s entityID. I am using the University of Washington’s IdP in this example.

<SSO entityID="urn:mace:incommon:washington.edu">

Last, you must specify the metadata source and signature. The UW is part of InCommon so its metadata is listed there.

<MetadataProvider type="XML" uri="http://wayf.incommonfederation.org/InCommon/InCommon-metadata.xml"
 backingFilePath="federation-metadata.xml" reloadInterval="7200">
<MetadataFilter type="Signature" certificate="incommon.pem"/>

Note the reference to the incommon.pem file. The metadata signature public key file is one of the files you must add to the Shibboleth-SP folder in your VS project.

Many more customizations of the Shibboleth SP are possible. These posts just scratch the surface WRT using SAML as an authentication protocol and with using the Shibboleth implementation as a Service Provider. There is a wealth of information on the Shibboleth.net site and on blogs and posts around the web.

In the next post I’ll discuss some issues I’ve yet to resolve.

Hosting a Shibboleth SP Web Site in Azure

I explained how to create an Azure cloud service application web role in my prior post. Now I will discuss the steps and code needed to add the Shibboleth Service Provider so that the web site can use SAML for single-sign-on authentication.

Adding an SSL Certificate to the Azure Web Role

SSL is the first layer of defense for an SSO web application. Thus you must obtain an SSL certificate for your web site’s URL. This is a whole topic in itself, so I will skip the details of how to obtain an SSL certificate. The biggest trick seems to be configuring Azure and Visual Studio to not get confused between the remote desktop certificate and the SSL certificate. The Azure instructions for configuring SSL are here. I’ll summarize the most important points below.

  1. If you are creating a test web site, then a self-signed certificate will suffice. Make sure the certificate’s subject CN is set to your web site’s full DNS name. In this example that DNS name would be myshibbolethsp.cloudapp.net.
    1. You will need to have the certificate in a PFX file so it can be uploaded to Azure. This PFX file must contain the certificate’s private key! You will also need the certificate thumbprint.
  2. Modify the cloud service’s service definition and service configuration files as shown in the Azure article.
    1. There are actually two service configuration files, a local and a cloud version. They must both be updated to contain the SSL certificate.
    2. Note that the service configuration files will already have a listing for the remote desktop certificate. The SSL cert entry should be added to this <Certificates> section.
    3. You probably don’t want a non-SSL connection, so remove the HTTP binding and endpoint and replace it with the HTTPS binding and endpoint in the service definition file.
  3. Upload the SSL certificate to your cloud service application.
    1. Go to the Azure Developers’ Portal and click on your cloud service application.
    2. Click on the CERTIFICATES item at the top right of the window.
    3. Click the UPLOAD button at the bottom of the screen.
    4. This opens the “Upload certificate” dialog. Browse for your cert’s PFX file and enter the private key password and click the OK check button.
  4. Now the SSL-modified cloud service application needs to be uploaded to Azure.
    1. I got an error when I tried to upload the app using the VS Publish command: Certificate: ‘myshibbolethsp.cloudapp.net’ with Thumbprint: <…> for Role: WebRole1 has not been uploaded to the cloud service. I had previously uploaded the certificate, but that didn’t seem to help.
    2. Rather than use VS to Publish the app, you can upload the “deployment” package using the Developer Portal. This is the option described in the Azure article above. Perhaps there is a reason they explain doing it in this fashion.
    3. First you need to build the deployment package. In VS, right-click the cloud service project and choose “Package…”. This brings up a small dialog where the Service configuration should be set to “Cloud” and the “Enable Remote Desktop” checkbox should be checked. Click Package.
    4. Go to the Developers Portal and click on your cloud service application. Go to the DASHBOARD and then click the UPDATE button at the bottom of the screen. This brings up the “Update your deployment” dialog which is very similar to the “Custom Create” shown in the Azure SSL article.
      1. Click the FROM LOCAL button to browse to the package file you just built. It should be under your VS cloud service project folder in the bin\Release\app.publish folder.
      2. Do the same thing to locate the configuration file in the same folder.
      3. Make sure the “Update the deployment even if one or more roles contain a single instance” box is checked. Click the OK check button.
      4. You can check the progress of the deployment update from the Developers’ Portal.
  5. Browse to your web site using HTTPS. Fingers crossed! It should work but HTTP should be rejected.

Adding the Shibboleth SP to the Azure Web Role

This post assumes that you have a SAML IdP that will authenticate your test Shibboleth SP. If you need to set up a test IdP, Microsoft has produced a series of videos on how to do this. Of course there is the Shibboleth IdP documentation at shibboleth.net. You should also have downloaded the Shibboleth SP 64-bit Windows/IIS 7 MSI file. Get the latest version from the download site.

  1. Install the Shibboleth SP to your development machine or a local web server. You will need several of the configuration files and it would be best to understand how it works in a local installation. The UW has some great instructions on doing this install.
  2. You have to upload the Shibboleth SP MSI and related files to Azure. To do this, add the files to the VS project.
    1. In VS create a folder under the web role project. Call that folder Shibboleth-SP.
    2. Add the Shibboleth SP MSI file to this folder.
    3. Add a properly configured shibboleth2.xml file to this folder. More on this below.
    4. Add a certificate public key PEM file for the IdP’s metadata signature.
    5. Create a text file in this folder. Name it install-shib.cmd.
    6. You would also need an attribute-map.xml file if you are doing any custom attribute mapping.
    7. All of these files must have their “Copy to output directory” property set to copy-always. You can find this setting on each file’s Properties tab.
  3. Several changes must be made to the shibboleth2.xml file. There is one change that is rather mysterious. The site ID for the cloud service web role must be set to match the site ID in IIS. I found that Azure was setting the site ID to 1273337584. I have no idea why this particular number was used. I’ll discuss this further in my follow-up post.
  4. Modify the service definition file to run the install-shib.cmd startup script. Add the line
    <Task commandLine=”Shibboleth-SP\install-shib.cmd” executionContext=”elevated” taskType=”simple” />
    inside of a <Startup> element following the instructions from Microsoft.
  5. Last, but not least, you need to fill in the startup script. I’ll post the entire script in a subsequent blog post, but here are the notes about what it does.
    1. First, it runs the Shibboleth SP MSI in unattended mode. I am using the default install path which had to be explicitly declared in the version 2.5.1 MSI. I also specify that install logging goes to the temp folder. This article describes how startup logging works.
    2. It then copies files from the Shibboleth-SP directory to the SP install directory. This includes some DLLs that are in the path but that can’t be found by the Shibboleth ISAPI filter for some reason.
    3. The SP MSI relies on the IIS 6.0 compatibility API extension to install the Shibboleth ISAPI filter. The Azure version of IIS does not have this extension installed, so I use the appcmd utility to install and configure the ISAPI filter.
    4. Some file system ACLs are missing so I use the icacls tool to set them.
    5. Finally I restart the Shibboleth service daemon so it will pick up the new Shibboleth2.xml values.
  6. Now you can build and upload this new version of your Azure web application. At this point you should be able to use either method: publish from VS or update from the Developers’ Portal. Note that there is actually a third method that could be used. VS calls out to Azure SDK tools. You could use those tools directly in case you want to have a build script automate the upload and deployment.

Now when you browse to your Azure web site you should be redirected to your IdP’s login page. Once you successfully log in you should be redirected back to your web app. There will now be session variables that contain authentication attributes such as IdP URN, user name, and so on.

Simple, huh? More details to follow. I didn’t say this was going to be easy.


Hosting a Web Site as an Azure Cloud Service

I’ve used the Shibboleth Service Provider (SP) for authentication of web applications running on my own IIS web servers. I wrote a simple ASP.Net web site in Visual Studio, configured it to run in IIS, and then added the Shibboleth SP to it. This is a fairly straightforward task with much of the work done for you by the Shibboleth SP installer. The only thing that remained to be done after the SP installation was updating some configuration files and registering my SP with my university’s IdP. Having completed this project I wondered if this process could be repeated using Azure as the web host.

Shibboleth Service Provider

The Shibboleth SP comes in two flavors: IIS and Apache. As I outlined in my prior post, there are several different options for web hosting in Azure. I could create a virtual machine running either Windows or Linux and install the Apache web server on it. I’m not going down that road for a variety of reasons that I’ve already noted. The simple web site option won’t work because it doesn’t support SSL or startup scripts. That leaves the option of exploring the Cloud Service and its IIS web role to do this Shibboleth SP hosting.

The IIS version of the Shibboleth SP is composed of two parts: an ISAPI filter DLL that intercepts requests before they reach your web application code, and a Windows service that maintains SP state. The SP is packaged as an MSI and is installed by the Windows installer. This means that there must be a way to run the MSI on your Azure web host before the web application starts. Fortunately, the Cloud Service web role can be configured to run startup scripts. There is another wrinkle to consider: the Shibboleth SP MSI uses the IIS 6.0 compatibility API to install its ISAPI filter. I did a bit of experimenting and discovered that the Azure Windows Server 2012 web role does not have the IIS 6.0 compatibility API installed. Thus additional startup steps are required.

Steps to Create an Azure Cloud Service Web App

The analog to creating a local IIS web app is to create an Azure cloud service web role. To create an Azure web site you need to have an Azure subscription. There are several ways to obtain said subscription. One option is to sign up for a 90-day free trial. If you do this you must cancel within those initial 90 days or charges will begin to accrue. If you have an MSDN subscription, that entitles you to a $100/month subsidy for Azure services. This is a good way to kick the tires and even to run a small web site. Azure only supports two different kinds of user login accounts: Live accounts and Office 365 accounts. Since MSDN also requires the use of a Live account, this is a straightforward way to get an Azure subscription.

With a subscription in hand, you can log into the Azure Management Portal using the corresponding Live or Office 365 account. The next step is to create a new web application. The Azure documentation on creating a cloud service is here.

This blog post assumes you are using Visual Studio. The steps I describe apply to both VS 2010 and VS 2012 although the later version has more built-in support.

  1. Install the Windows Azure SDK. The version that is current as of the writing of this post is 2.0. This installs VS templates and extends the VS menus with Azure-specific commands. It also installs Azure libraries and tools.
  2. Open VS and create a new project using the Visual C# Cloud template.
    1. Go with the default .Net framework version. With VS 2010 that is 4.0. With VS 2012, that is 4.5. The .Net version support is one of the biggest differences between VS versions.
    2. You will need to name the project. Use whatever name makes sense to you; this name will not be used by Azure.
    3. Leave the “Create directory for solution” checkbox checked.
    4. After you click OK the “New Windows Cloud Service” dialog will open. It lists 3 ASP.Net web role templates along with others. Choose whichever template you are familiar with. If you are not familiar with ASP.Net, then beware: all of these are complex web application templates. I chose the ASP.Net Web Role and discovered that it created several hundred web site files. Yikes! VS does have an “Empty ASP.Net web site” template, but it is not available as one of the cloud service roles. At any rate, you can accept the proposed name of WebRole1, or you can click on the name and an edit icon (a pencil) appears; click the pencil to rename the web role to something more meaningful to your web application. When you click OK, VS whirs away for a while and then presents the beginnings of a web application.
    5. Create whatever basic web site functionality you may want. Build it and run it to ensure it works.
  3. Sign up for an Azure account and log in to the Azure Management Portal
  4. Create a Cloud Service application. You can use the quick create option.
    1. You will need to choose a URL. Whatever you choose will be prepended to .cloudapp.net. It is possible to use your own full DNS name, but I won’t go into that here. Rather, you need to choose a name that is unique within the cloudapp.net namespace. For example, if you choose myshibbolethsp, then your web site’s URL will be https://myshibbolethsp.cloudapp.net.
    2. You should select a region that is close to your location to keep latency and transfer time to a minimum.
    3. If you have more than one Azure subscription, you will be asked which to use for this new cloud service.
    4. When you are done you will have an empty cloud service with no running instances.
  5. Now upload your web application to Azure. Go to the Visual Studio Solution Explorer. There will be a cloud service project in addition to the web role project. Right-click on the cloud service project and choose Publish…
    1. This opens the Publish Windows Azure Application wizard. Follow the steps in this MSDN Article to complete the upload.
    2. Choose the option to enable remote desktop. The Azure tools automatically create a remote-session encryption certificate and do the VS and Azure configuration for remote desktop.
    3. It will ask you for a storage account for debugging. You can create one if you don’t already have one. It won’t actually be used unless you add Azure debug logging to your code.
    4. Since this is a test cloud service you can select deploy to production. Staging would have a different URL, which would complicate things unnecessarily.
    5. It takes a while to upload the project packages and then start the web role. You can monitor the progress in the VS output pane.
  6. You can try to access the web site after VS says it has successfully deployed. You can also go to the Azure Developer’s Portal to monitor and/or configure your new cloud service application.

Now that you have a running Azure cloud service application, you can configure it for SAML authentication using the Shibboleth SP. I will demonstrate how to do that in my next post.

Hosting Options for a Web Application

In my prior post I discussed SAML as a popular federated authentication protocol standard. To create a SAML-protected web site, in fact, to create any web site, you need to have a web server. You can use almost any type of computer as a web server, but for reasons of reliability and load handling, you’d probably want a server-class machine. Server-class computers are much more expensive than commodity workstation machines. You would probably also be concerned about the ancillary systems such as the networking gear and power conditioning components. For these and many other reasons you may wonder if it would be advantageous to have someone else provide a web server that you could use. This type of service is commonly termed web hosting.

Editorial Note: I dislike using the term “cloud” to describe what is nothing more than hosted services. That is, services that are running in someone else’s data center. Technical people love latching on to buzzwords. These are terms that get used way beyond their technical origins and often to the point where they become meaningless. Despite my objections, it is difficult to discuss Microsoft hosting services without using the word “cloud.”

So, I’d like to create a web site whose content is protected by SAML authentication. What are my options? First off, I’ll use the Shibboleth open-source service provider as the SAML software for the web site. Then I’d like to have someone else host it so I don’t have to invest in server-class infrastructure. What are my hosting choices?

Software as a Service

Software as a Service, abbreviated SAAS, is very common and most people have used it. Imagine a software application that runs on your computer such as Microsoft Word/Office or Intuit’s TurboTax. Office 365 and TurboTax Online are SAAS versions of those programs. Other common examples of SAAS include Hotmail, Gmail, and Google Docs.

Infrastructure as a Service

There are a number of hosting providers that allow you to upload a virtual machine. They will run it for you on a virtual server and will provide the networking and related components. This is IAAS. You build your own web server, either physically or within your own virtualization environment. You then author a web application using a framework supported by your web server and configure this web server as needed to run the application. Finally you upload the virtual server image of your web server to the IAAS provider. Thus you provide and configure the operating system, the web server service, and any libraries or other programs needed by your web application. You also have to provide the health and load monitoring of your web application. You do all of the work to create a web server and someone else supplies the infrastructure to run that web server. IAAS can host nearly any type of server that communicates over a network.

Platform as a Service

PAAS falls in between SAAS and IAAS. The PAAS vendor provides a virtual environment that includes an operating system, a web server service, and related support services such as monitoring. There are no clean lines; it is a continuum, and it is difficult to say precisely where PAAS stops and SAAS begins. I like to think that if it requires writing computer code, then it is PAAS. Thus services like BlogSpot and WordPress seem to be more SAAS than PAAS.

Microsoft Windows Azure

A number of companies offer variations on these hosting services. As I mentioned earlier, Microsoft Office 365 is an example of SAAS. I used to work on Microsoft’s Azure, so I will discuss some of its hosting services. Azure provides both PAAS and IAAS options. Azure can host your virtual machines; that is their IAAS offering. Azure has two PAAS variations that they term Web Sites and Cloud Services. The web site service is new and is currently limited to vanilla ASP.Net web sites that cannot employ SSL/TLS. The cloud service variation is much more powerful and configurable. More about that in a moment. Azure also offers a growing list of supporting services, including SQL databases, highly available no-SQL storage, networking services such as VPN and event messaging, and directory services; new services are added on a nearly monthly basis.

Azure Cloud Services

An Azure cloud service allows you to create instances of two different roles: a web server role and a worker role. The idea is that a sophisticated web application will need front-end services to process web requests (the web role) and back-end services to do extended processing (the worker role). The typical scenario would have a web role field a request for some “thing” that is part of the web application. The web role would then queue up a request to the worker role to get that “thing.” The queued request would include the address to which the response should be directed. The worker role would do whatever processing is needed (say, do a cart check-out) and then post an item to the response queue with the results. A web role instance would read the response queue in between handling requests so that the results can be sent back to the requesting web browser.
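
Here is a minimal sketch of that hand-off using the queue storage client from the Azure SDK’s Microsoft.WindowsAzure.Storage library. The queue names, configuration setting name, and message format are invented for illustration:

    using Microsoft.WindowsAzure.ServiceRuntime;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    // Shared setup in both roles: connect to the storage account.
    var account  = CloudStorageAccount.Parse(
        RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
    var requests = account.CreateCloudQueueClient()
                          .GetQueueReference("checkout-requests");
    requests.CreateIfNotExists();

    // Web role: queue up the work item, naming the queue for the reply.
    requests.AddMessage(new CloudQueueMessage("cart=1234;replyTo=checkout-responses"));

    // Worker role: dequeue, process, and delete the message on success.
    CloudQueueMessage msg = requests.GetMessage();
    if (msg != null)
    {
        // ... do the cart check-out, post the result to the reply queue ...
        requests.DeleteMessage(msg);
    }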

Communications Infrastructure

Azure provides a number of mechanisms that web roles and worker roles can employ for communications.

  • Azure storage provides highly-available and fault-tolerant storage. Three copies of each datum are kept within a data center and, if needed, the data can be replicated between Azure data centers.
    • Blob storage – allows the storage of very large objects, each with a unique address. You could build a photo-sharing site using blob storage for the photos (see the sketch after this list).
    • Table storage – on-the-fly table creation using standard .Net datatypes for each table column. One element of a row must be declared to be the unique key for the table. This is much lighter-weight storage than a SQL database but it comes with limited indexing and searching capabilities and no relational operations.
    • Queue storage – supporting the standard push, pop, and peek operations and perfect for the introductory example web/worker role communications
  • Azure Drives – this is actually a variation of blob storage where you upload an NTFS-formatted virtual hard drive (VHD) to your blob storage and then mount this as a drive in your Azure role
  • SQL Azure – SQL database instances that you can create and use in your Azure cloud service roles
  • Azure Service Bus – this is a set of services that offers event messaging, queues, and message forwarding
  • Virtual Private Network – create a VPN that can be used by all of your Azure cloud service roles and can optionally connect to your local network
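
As an example of the blob storage bullet above, here is a hypothetical photo upload; the container and file names are invented, and account is the same CloudStorageAccount object from the queue sketch:

    // Upload one photo to a "photos" container in blob storage.
    // (Also requires using Microsoft.WindowsAzure.Storage.Blob.)
    var container = account.CreateCloudBlobClient()
                           .GetContainerReference("photos");
    container.CreateIfNotExists();
    var blob = container.GetBlockBlobReference("vacation/IMG_0042.jpg");
    using (var stream = System.IO.File.OpenRead(@"C:\photos\IMG_0042.jpg"))
    {
        blob.UploadFromStream(stream);
    }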

Azure Cloud Service Role Details

A cloud service role is composed of instances; at least one instance must be running for the role to be operational. This is one of the largest advantages of using Azure as a hosting provider. If the load on your application increases, you can add more instances simply by making a configuration change in the Management Portal. Azure handles the work of finding space for the new instances, starting them up, and configuring them in the load balancer. The load balancer will automatically distribute traffic to all of the running instances.
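
That configuration change amounts to editing a single attribute in the service configuration. A sketch, assuming you kept the default role name from Visual Studio:

    <!-- ServiceConfiguration.cscfg fragment: raise count to add instances. -->
    <Role name="WebRole1">
      <Instances count="3" />
    </Role>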

Azure cloud service roles have some unique characteristics that differentiate them from conventional on-premises servers.

  • Not domain joined – Azure roles are all stand-alone, so there are no shared service identities; most intra-Azure communication is secured using shared secrets
  • Role instances run as VMs and can be stopped and restarted without warning by the Azure fabric controller in cases of hardware or other failure; for this reason:
    • Applications must be stateless or store state off-machine to Azure storage or SQL Azure
    • Conventional logging to files or the event log will be lost if the instance gets recycled; you can use an Azure library to log to Azure storage (see the sketch after this list)
  • Each instance is on the public internet by virtue of the load balancer mapping internal IPs to public IPs
    • There is a per-machine Windows firewall
    • Can use Azure VPN to connect role instances together if secure direct TCP communications is needed
  • Limited out-of-the-box configuration but can install and configure additional software by the use of startup scripts
    • The startup scripts run each time an instance is recycled or updated; thus a complex startup script will slow role startup
  • Can configure per-instance remote desktop for inspection and debugging
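
To illustrate the logging point above, here is a minimal sketch of starting Windows Azure Diagnostics in a role entry point so that trace logs are shipped to table storage. The one-minute transfer period is an arbitrary choice, and the connection string setting name is the conventional default rather than a requirement:

    // In WebRole.cs (requires Microsoft.WindowsAzure.Diagnostics).
    public override bool OnStart()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
        return base.OnStart();
    }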

Azure roles are tightly coupled to ASP.Net. IIS is the default web server. You can choose the version of Windows Server you’d like to use. This also implies the version of IIS and the .Net framework. Azure currently offers the use of Windows Server 2008 SP2, Server 2008 R2, or Server 2012.

The Azure SDK extends Visual Studio with cloud service templates and publication support and provides a development simulation of the Azure run-time environment. Thus you can develop and test an Azure application on your desktop and then upload it to Azure all without leaving Visual Studio.

Next time: hosting a Shibboleth SP web application in Azure.