Category Archives: Web Services

A Brief Primer on Azure Active Directory

I recently gave an introductory talk on Azure AD at a conference. I’m posting the talk here so that others can access it.

This is the PDF of the talk: 20181017-Kool-Brown-AzureAD1

This is a link to the original PPTX file along with the PDF:

The PPTX file has a number of slide notes that expand on the content. It also has some simple animations to make the presentation a bit more interesting.

Note that I have no affiliation with Microsoft and that all of this information is gleaned from public sources. Also note that this is just a snapshot-in-time view of AAD. It is changing rapidly such that the info in this presentation will likely be out-of-date before long.

Safely Storing Azure App Connection Secrets

Microsoft’s Azure AD surfaces a wide variety of capabilities that can be accessed programmatically via RESTful web APIs. The Azure Graph and its successor the Microsoft Graph are two of the more comprehensive APIs that enable the manipulation of most AAD objects. Using these APIs requires obtaining an OAuth access token to send with API requests. AAD uses two object types to provide the mechanism for obtaining OAuth tokens: Applications and Service Principals. An Application object can span multiple AAD tenants whereas the Service Principal is the tenant-specific representation of the application.

To obtain an OAuth access token, one must call the Microsoft authorization server endpoint to request it. This call must be authenticated by providing the Application’s client ID and client secret. These two values are the app’s credentials and must be protected as you would protect any other privileged credentials. This becomes a problem if you have automated tasks that need to connect to Azure. How can these credentials be safely stored?
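As a rough sketch of that token request, the following PowerShell uses the client-credentials grant against the Microsoft identity platform v2.0 token endpoint; the tenant ID, app ID, and secret values are placeholders you would substitute:

# Request an OAuth access token using the client-credentials grant.
# $tenantId, $appId, and $clientSecret are placeholders.
$tenantId = 'your-tenant-id'
$appId = 'your-app-client-id'
$clientSecret = 'your-app-client-secret'

$body = @{
    grant_type    = 'client_credentials'
    client_id     = $appId
    client_secret = $clientSecret
    scope         = 'https://graph.microsoft.com/.default'
}
$response = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body
# The returned token is sent with subsequent API requests in the header:
#   Authorization: Bearer <token>
$token = $response.access_token

Note that this illustrates exactly why the secret must be protected: anyone holding the client ID and secret can mint tokens for the app.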

The naive programmer would just embed these values into the task’s code. This is especially egregious if the code is checked into a source library because then the secrets will be in the change history even if they are later removed from the code. The first refinement is to put the secrets into a task configuration file. That’s fine as long as the configuration file is never checked into the source library or otherwise stored in an insecure fashion.

It turns out there is a much better solution. X.509 certificates pair a public key with a securely stored private key and can be used for AAD OAuth token requests. The first step, after a suitable certificate is created [1], is to add the certificate as an app access key [2] using either the Azure Portal GUI [4] or PowerShell. Then you need to install the cert, with its private key, into the cert store of the server on which the task will run. Make certain to grant private-key access to the service account being used to run the task. Refer to the article in the second footnote for an example of using this technique in C# code.
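The certificate-creation and key-upload steps can be sketched in PowerShell as follows; the subject name, store location, and $appObjectId (the AAD Application object ID) are assumptions to be replaced with your own values:

# 1. Create a self-signed certificate in the current user's store.
$cert = New-SelfSignedCertificate -Subject 'CN=MyAadTaskCert' `
    -CertStoreLocation 'Cert:\CurrentUser\My' `
    -KeySpec Signature -KeyExportPolicy Exportable

# 2. Add the certificate's public portion to the Application as a key
#    credential. $appObjectId is a placeholder.
Import-Module AzureAD
Connect-AzureAD
New-AzureADApplicationKeyCredential -ObjectId $appObjectId `
    -Type AsymmetricX509Cert -Usage Verify `
    -Value ([System.Convert]::ToBase64String($cert.GetRawCertData())) `
    -EndDate $cert.NotAfter

Only the public key leaves the server; the private key stays in the local cert store.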

Using Certificate-based Authentication From PowerShell

The AzureAD PowerShell module wraps the functionality of the MS Graph. A connection to Azure must be made before any of the AzureAD commandlets can be called; the Connect-AzureAD commandlet [3] is used to do this. In reality, what it is doing is obtaining an OAuth access token and storing it in the PS session. I use the following bit of code to do this:

Import-Module AzureAD
# Check if there is a connection to AAD. If this call throws, then make the connection.
try {
    Get-AzureADCurrentSessionInfo | Out-Null
} catch {
    # Use cert-based auth via the AAD app. Ensure that the
    # task context account has access to the cert private key.
    Write-Output "Connecting to AAD"
    $tenantId = ''
    $appId = ''
    $certThumbprint = ''
    Connect-AzureAD -TenantId $tenantId -ApplicationId $appId -CertificateThumbprint $certThumbprint
}

First this code tries to make an AAD call (Get-AzureADCurrentSessionInfo). That call will fail and throw an exception if there is no valid OAuth token. If that is the case, then the code in the catch block will make the connection and obtain the token using the certificate. Note that you will need to fill in the missing tenantId, appId, and certThumbprint values.

The try/catch setup allows you to run this code multiple times in the same PS session such that it will only attempt to make a connection if there isn’t a valid token.

Unfortunately, there are still many functions related to Exchange Online and other Office 365 apps that are not currently manageable by the AzureAD module. For these operations one must revert to the older MSOnline PS module. It employs the Connect-MsolService commandlet to make the connection to Azure, but it does not support using certificates to store an app secret.
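For those cases, a minimal MSOnline connection looks like the following sketch; because the module cannot use a certificate, an account credential must be supplied (prompted interactively here):

Import-Module MSOnline
# Connect-MsolService does not accept a certificate thumbprint, so an
# account credential is required. For unattended tasks the credential
# must be stored securely under the task's service account, e.g. via
# Export-Clixml, which DPAPI-protects the password per user and machine.
$cred = Get-Credential
Connect-MsolService -Credential $cred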

Using a certificate as the Azure app secret store has many advantages and provides multiple levels of security. The private key is stored only on the server running the tasks, and it is protected by ACLs that grant access only to the accounts that are configured to use it. I recommend marking the private key as exportable but protecting it with a password; that way you can move the tasks and the cert to another server if needed. If you are truly paranoid you can skip the exportable option, since it is pretty easy to repeat the steps of creating a self-signed cert and adding it as a secret key to the Application object.
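Exporting the certificate with its password-protected private key can be sketched as follows; the file path and $certThumbprint are placeholders:

# Export the cert plus private key to a password-protected PFX file.
$password = Read-Host -Prompt 'PFX password' -AsSecureString
Get-ChildItem -Path "Cert:\LocalMachine\My\$certThumbprint" |
    Export-PfxCertificate -FilePath 'C:\Backup\MyAadTaskCert.pfx' -Password $password

# On the new server, import it into the local machine store:
# Import-PfxCertificate -FilePath 'C:\Backup\MyAadTaskCert.pfx' `
#     -CertStoreLocation 'Cert:\LocalMachine\My' -Password $password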


  1. A self-signed certificate is fine for this use. Creating a self-signed cert is pretty straightforward. This article explains one way to do this:
  2. This article uses a sample daemon app to describes using a certificate for API access and goes into the details of every step:
  3. The MS docs for the Connect-AzureAD commandlet also show how to do this from PowerShell although I wasn’t able to get every step of the sample to work as expected.
  4. The Azure Portal GUI for setting a certificate as an Application secret Key:

    Note the “Upload Public Key” button that enables selecting a cert file that contains the public key.

What is a Web Service?

The earliest computers didn’t talk to one another. They were islands of information. A lot has changed since those pioneering days. Web services are the current state of the art in computer-to-computer communications. I present below a brief history to illustrate and help explain this transformation from isolation to connectedness.

Let’s Talk! Connecting Computers

Networking technologies were developed to enable inter-computer communications. At that point you could connect two or more computers together but it still wasn’t easy to share information. There were initially no standard ways to represent or manipulate data.

Multiple, competing efforts progressed to standardize network communications. TCP/IP emerged as the primary way to interconnect networks and enabled the Internet. SMTP saw increasing adoption as an electronic mail protocol. Things were not as simple in the world of client-server communications. DCE/RPC and CORBA competed for attention, with Microsoft settling on the former. While providing a framework for client-server computing, these are still low-level binary network protocols that are not easy to use, nor are they firewall-friendly. By that I mean those protocols require a large number of open TCP ports, which nullifies most of the security gained from a firewall.

Web Services

The next major advancement in network communications was SOAP. SOAP is not a wire-level protocol, meaning that SOAP messages can be transmitted via a variety of application-layer protocols including HTTP and SMTP. SOAP also standardized on XML as the data representation model. Both of these concepts were transformational: now you could use a set of ports that are usually left open on firewalls, and the data could be interpreted without an understanding of a complex binary layout. Major vendors jumped on SOAP and produced a raft of web service specifications (WS-*). Why were these called “web services?” Because they used the same underlying protocol as the World Wide Web: HTTP!

Except this is not a completely accurate timeline. SOAP was developed after HTTP, and it turns out that HTTP itself makes a great client-server computing protocol. The HTTP protocol was developed by early Internet luminaries including Tim Berners-Lee, Paul Leach, and Roy Fielding. The latter published a revolutionary dissertation in 2000 analyzing network architectures. Within it, Dr. Fielding presented Representational State Transfer, a.k.a. REST: an architectural style for client-server communications built on the rich semantics of HTTP. However, given the large investment that had been made in the WS-* suite, it took a long time for folks to realize the inherent advantages of REST over SOAP.

RESTful Web Services

Although SOAP-based services use HTTP, they cannot fully leverage its features. All SOAP message exchanges use the POST HTTP verb: no matter what you want to do, the SOAP client POSTs a request to the SOAP server. This is incredibly inefficient. The majority of network transactions are data reads (I don’t have any handy references for this but I believe it to be true). HTTP has a built-in verb for fetching data: GET. HTTP GETs are by definition stateless, idempotent, and without side effects. This enables two very powerful features: scale-out and caching. Because the requests are stateless, you can use a load balancer to spread them across a farm of servers, and intermediate nodes of the Internet such as proxies and gateways can cache the responses. These combined capabilities have enabled the creation of Content Delivery Networks (CDNs).
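A simple illustration of a RESTful read is a plain HTTP GET, which any cache or load balancer along the path can reason about. This sketch uses a hypothetical endpoint:

# A stateless, idempotent read: nothing but the URL identifies the request,
# so intermediaries can cache it and load balancers can route it anywhere.
# The URL is hypothetical.
$things = Invoke-RestMethod -Method Get -Uri 'https://api.example.com/v1/things'
# Contrast with SOAP, where even a read is a POST carrying an XML envelope
# and is therefore opaque to HTTP caches.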

Details of REST

REST is a resource-centric architecture which gives it the following characteristics.

  • Each distinct resource is named by a unique URL path.
    • e.g.
    • The leaf element is the resource name while the intervening path elements can be thought of as containers or collections; thus the leaf element name need only be unique within the specific path hierarchy.
  • CRUD (create, read, update, delete) operations map directly to the HTTP verbs POST, GET, PUT, and DELETE respectively.
  • Stateless – as noted above, this enables Internet-scale services
  • Standard MIME media-types for payload encoding (JSON, XML, etc.)
  • Searching for resources is rooted in a container path and employs URL parameters to describe the search query
    • e.g.$filter=thingnum lt 22
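These conventions can be sketched with Invoke-RestMethod against a hypothetical service, using one common CRUD-to-verb mapping (POST to create, PUT to update); the base URL and payload shapes are assumptions:

$base = 'https://api.example.com/v1'   # hypothetical service

# Create
$new = Invoke-RestMethod -Method Post -Uri "$base/things" `
    -ContentType 'application/json' -Body (@{ name = 'widget' } | ConvertTo-Json)

# Read
$thing = Invoke-RestMethod -Method Get -Uri "$base/things/$($new.id)"

# Update
Invoke-RestMethod -Method Put -Uri "$base/things/$($new.id)" `
    -ContentType 'application/json' -Body (@{ name = 'widget2' } | ConvertTo-Json)

# Delete
Invoke-RestMethod -Method Delete -Uri "$base/things/$($new.id)"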

While all of this is cool, REST isn’t an actual protocol. Rather, it is a set of architectural styles or conventions, and several competing implementation protocols have evolved as a result. The two dominant REST API description languages are OData and OpenAPI (formerly Swagger). The former is being pushed heavily by Microsoft, which may explain why some in the open source community prefer the latter (and I’m sure there are lots of other good reasons). In any case, they both aspire to the same goals: providing a standard way for a service to describe its capabilities (the service description endpoint) and the schema of its data (the service metadata endpoint).

Examples of RESTful Web Services

Where to start? They are all around us. Facebook, Amazon, Google, Microsoft all expose resources via web services. I have code that calls the Amazon AWS Simple Queue Service for event message delivery. I am developing code to call the Microsoft Azure Active Directory Graph API (AAD Graph for short).

My employer, the University of Washington, hosts a number of RESTful web services. One that has been in use for a while is the Groups Web Service. A new middleware service is being developed to provide a standardized way to access University data. This is known as the Enterprise Integration Platform.

My next post will dive into making web service calls using the PowerShell scripting language.


Example PowerShell and a PowerPoint deck at