The easiest way to illustrate the purpose of this blog is to describe the scenario we are trying to solve:

‘ContosoApps’, a Microsoft cloud solution provider (CSP), sells Office 365 and additionally bundles its own cloud software and services (‘family-of-apps’) to provide added value to its customers. As part of the customer acquisition flow, the goal is for a new customer to sign up for Office 365 (which includes creating a new tenant and subscription) through ContosoApps. Additionally, as part of this onboarding process the flow should “pre-install” ContosoApps’ ‘family-of-apps’ so that all the customer’s users can take advantage of them.

Today this ‘family-of-apps’ onboarding (through the CSP pre-consent flow in Microsoft Graph) only works for applications that access Directory and Intune functionality. If the app tries to access other services behind Microsoft Graph, this pre-consent flow will not work.

This inability to use the CSP pre-consent flow then requires an admin of the customer to pre-consent each AAD application in the ‘family-of-apps’ manually by opening the application, signing-in and then consenting (accepting) the consent dialog describing the permissions required.

This manual approach is laborious and often leads to a bad user experience, which can create a negative perception of your ‘family-of-apps’.

Therefore, the objective of this blog is to provide guidance on how to use the AAD Graph to allow a tenant (customer / ISV user) admin to grant pre-consent for multiple applications (‘family-of-apps’) by consenting to a single ‘bootstrapper’ application.

That is, how you as an ISV / developer can create a bootstrap application that you can then use to onboard your AAD applications to your customers / users, saving the tenant admin from having to perform multiple consents, i.e. one for each of your AAD applications in the ‘family-of-apps’.


The approach requires a ‘bootstrap’ application that, once granted the appropriate permissions by an administrator, can record consent grants for the family-of-apps. If you as an ISV / developer happen to be a CSP as well, then the ‘bootstrap’ application can actually be pre-consented by following the how-to-set-up-a-partner-managed-application flow in this document. If you are not a CSP, or you prefer to manually configure the bootstrap application, you should skip the steps under ‘Pre-consent your app for all your customers’.

Important: When registering the bootstrap application in the Azure Portal in your organisation’s directory, you should configure the application to require ‘Access the directory as the signed-in user’ in the Windows Azure Active Directory API, as per the figure below. You also need to add a permission for the Microsoft Graph API, e.g. ‘Sign in and read user profile’; this ensures the necessary Service Principal is created in the customer’s tenant directory.


Note: It is also assumed that each of your applications (‘family-of-apps’) is already registered with AAD, either through the Application Registration experience in the Azure Portal or via the Application Registration website that can be found here.

Note: If you plan to create a web app version of the bootstrap application, remember to register it as multi-tenant.

Bootstrap Application

To help you understand the steps required to build the bootstrap application, we have included two applications you can download and modify. Once registered in your tenant’s directory (see above) and modified to include your ‘family-of-apps’, the code should work as-is. However, where the bootstrap application fits in your onboarding process will depend on how you onboard your customers. Therefore, at the very least, consider the downloadable applications as templates you can use to understand how to build your own bootstrap application.

For this blog, we are going to concentrate on the code in the native .NET Console application.

Where possible (more on that later) we are using the AAD Graph .NET SDK, which is installed as a NuGet package, to make the requests to the AAD Graph. Whilst it is possible to make requests directly against the AAD Graph REST endpoint, using the .NET SDK simplifies the code and removes the need for us to serialize and de-serialize JSON objects.

There is also a reference to the Active Directory Authentication Library, or ADAL for short. Using this library simplifies the authentication process immensely, easily allowing us to separate the authentication code from the application logic. All the code required to authenticate a user can be found in the class AuthenticationHelper.cs.

Finally, there is a reference to the Microsoft Graph Service Client SDK. Whilst the bootstrap application does not make use of the data returned by the call, there is one circumstance where a request to the Microsoft Graph is required…more on that later.


The bootstrap application (once granted the correct permissions) will be able to programmatically provision any AAD application, including providing consent (administrative consent on behalf of the organisation) if it is run in the context of a signed-in administrator in the target customer / user tenant.

The following sections provide details on how to record consent for delegated permissions (i.e. for interactive apps that run in the context of a signed-in user) and application permissions (i.e. for daemon or background services that run without any signed-in user being present). See this article for more details. The process of provisioning and recording consent will need to be done for each of the apps in the ‘family-of-apps’ that you provide.

Configuration: Once you have registered the bootstrap application there are a number of parameters that you need to modify in order for authentication to happen successfully. These parameters can all be found in the class Constants.cs. The parameters you need to modify are:

  • BootstrapClientId – generated when registering the application in the Azure Portal. This value is represented by the Application Id for the application.
  • RedirectUrl – specified when registering the application in the Azure Portal.


Authentication: As per previous comments, the logic to authenticate a user is for the most part the responsibility of the AuthenticationHelper.cs class. Its only purpose is to authenticate a user and manage the access tokens. There are 2 tokens: one for the AAD Graph resource and one for the Microsoft Graph resource. Once the user has authenticated, we cache the access token as a static resource for the lifetime of the process. This effectively means the user only needs to sign in once (maybe twice if we do need to use the Microsoft Graph resource) and we can then make use of that token for all future requests.
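The caching behaviour is worth seeing in isolation. The sketch below (Python used purely as illustration – the real code is C# using ADAL) shows the pattern: one token cached per resource for the lifetime of the process, so each resource triggers at most one sign-in:

```python
class TokenCache:
    """Caches one access token per resource for the process lifetime."""

    def __init__(self, acquire_token):
        # acquire_token(resource) stands in for the interactive ADAL call;
        # in the real application it prompts the user to sign in.
        self._acquire_token = acquire_token
        self._tokens = {}

    def get_token(self, resource):
        # Only hit the authority (i.e. prompt the user) on a cache miss,
        # so each resource triggers at most one sign-in.
        if resource not in self._tokens:
            self._tokens[resource] = self._acquire_token(resource)
        return self._tokens[resource]


# Usage: two resources (AAD Graph and Microsoft Graph) => at most two prompts.
prompts = []

def fake_sign_in(resource):
    prompts.append(resource)
    return f"token-for-{resource}"

cache = TokenCache(fake_sign_in)
cache.get_token("https://graph.windows.net")
cache.get_token("https://graph.windows.net")   # served from the cache
cache.get_token("https://graph.microsoft.com")
```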

Consent: The code for processing the consent flow can all be found in the class Request.cs.

When adding the OAuthGrant permissions you need to have at hand the following values for each of the applications in your ‘family-of-apps’:

  • AppId…this is the AppId of your AAD application
  • DisplayName…this is the name you have given to your AAD application
  • ResourceServicePrincipalId…this is the Id relevant to the API the permission belongs to, e.g. if you need to add the permission User.Read for the Microsoft Graph API then you will need the Id of the Microsoft Graph API…values for the Microsoft Graph API and the AAD Graph API can be found in the class Constants.cs
  • DelegatedPermissions…represented as a string delimited by a <space>, these are the delegated permissions required by your AAD application, e.g. if you need to read the user’s profile and send an email on behalf of the user, you will need the following scope of permissions: “User.Read Mail.Send”. Note, if the permissions cross multiple APIs, then you need to create an OAuthGrant per API.
  • AppOnlyPermissions…represented by a list of strings, these are the app-only permissions required by your AAD applications. Note, these permissions are represented by their GUID value and not their string name. To find the GUID value have a look at the AAD Application manifest file under resources.
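To make these values concrete, here is a sketch of the JSON body the bootstrap application ultimately posts to the AAD Graph oauth2PermissionGrants endpoint for the delegated permissions (Python used purely to illustrate the payload; the object ids are placeholders):

```python
def build_delegated_grant(client_sp_object_id, resource_sp_object_id, scopes):
    """Body of a POST to /oauth2PermissionGrants in the AAD Graph.

    consentType 'AllPrincipals' with a null principalId records admin
    consent on behalf of every user in the tenant. Permissions that span
    multiple APIs need one grant per resource (one per resourceId).
    """
    return {
        "clientId": client_sp_object_id,      # objectId of your app's Service Principal
        "consentType": "AllPrincipals",
        "principalId": None,                  # null => consent for all users
        "resourceId": resource_sp_object_id,  # objectId of the API's Service Principal
        "scope": " ".join(scopes),            # space-delimited delegated permissions
    }


# Placeholder ids purely for illustration.
grant = build_delegated_grant(
    "<client-sp-objectId>", "<msgraph-sp-objectId>", ["User.Read", "Mail.Send"])
```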


The assumption we have made in the bootstrap application is that the permissions you need the user to consent to belong to either the Microsoft Graph API or the AAD Graph API. Prior to consenting to these permissions, the Microsoft Graph and/or AAD Graph API applications need to be represented as a Service Principal in the target tenant. At the time of writing, it is not possible to add the Microsoft Graph API application as a Service Principal directly in code. However, the Service Principal for each API will be added just-in-time to the tenant once consent is granted to any application requesting permissions for those APIs (e.g. the AAD Graph or Microsoft Graph). Therefore, your bootstrap application should include a basic ‘read own profile’ permission to ensure the Service Principals for the Microsoft APIs are automatically created. The console application will then be able to request consent for use of the bootstrap application; it will prompt the user to sign in again with their credentials, as a new access token needs to be generated to make the request.


The next step is to add, as Service Principals, any other API applications for which your AAD applications need consented permissions. As per previous comments, we assume you will only need permissions for the Microsoft Graph API and the AAD Graph API, and therefore these are the only Service Principals we add.


The code will only add the Service Principal if the tenant does not already have it in their directory.

If you need to add more, then follow the code on how to create the Service Principal, replacing the GUID and the display name for each. Note, if you are struggling to find the GUID for the API, then the advice is to look in the manifest file associated with your AAD application, as per the app-only permission Ids discussed earlier.
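The ‘only add if missing’ behaviour is simple enough to sketch (Python for illustration; existing_sps stands in for a filtered query against the tenant’s /servicePrincipals endpoint):

```python
def ensure_service_principal(existing_sps, app_id, display_name):
    """Create a Service Principal for an API only if the tenant lacks one.

    In the real code the lookup is a filtered GET against /servicePrincipals
    and the creation is a POST; here a plain list stands in for the tenant.
    """
    for sp in existing_sps:
        if sp["appId"] == app_id:
            return sp, False   # already present in the directory: do nothing
    new_sp = {"appId": app_id, "displayName": display_name, "accountEnabled": True}
    existing_sps.append(new_sp)
    return new_sp, True


tenant_sps = []
# 00000003-0000-0000-c000-000000000000 is the well-known appId of Microsoft Graph.
_, created_first = ensure_service_principal(
    tenant_sps, "00000003-0000-0000-c000-000000000000", "Microsoft Graph")
_, created_again = ensure_service_principal(
    tenant_sps, "00000003-0000-0000-c000-000000000000", "Microsoft Graph")
```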

Next, create a collection of OAuthGrants. The contents of each OAuthGrant and the values you need were discussed earlier; the code to do so is as per below:


From here on in you should no longer have to modify any more of the code. The process to add the permissions is as follows:


You will notice that when adding the app-only permissions we don’t use the AAD Graph Client SDK. This is because, at the time of writing the bootstrap application, the SDK does not support adding AppRoleAssignments.
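Instead, the application posts the AppRoleAssignment to the REST endpoint directly; the body it sends looks roughly like this (sketched in Python; the GUIDs are placeholders you copy from the app manifest as described earlier):

```python
def build_app_role_assignment(role_id, client_sp_object_id, resource_sp_object_id):
    """Body of a POST to /servicePrincipals/{clientSpId}/appRoleAssignments.

    role_id is the GUID of the app-only permission, taken from the
    'resources' section of the AAD application manifest.
    """
    return {
        "id": role_id,                        # the app-only permission GUID
        "principalId": client_sp_object_id,   # your app's Service Principal
        "resourceId": resource_sp_object_id,  # the API's Service Principal
    }


assignment = build_app_role_assignment(
    "<app-role-guid>", "<client-sp-objectId>", "<msgraph-sp-objectId>")
```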


Okay, we are done. The bootstrap application should now have granted all the necessary permissions that your AAD applications (‘family-of-apps’) require. You should find that when a user of the tenant targeted by the bootstrap application signs in to your AAD application, the consent dialog is suppressed.

Remember, you can download either a native or a web application bootstrap here.

If you would like to know more about the specific API calls used for programmatic consent (e.g. for use in a language unsupported by the Graph SDK), see also this post by Arsen which details some of the API calls and their parameters in greater detail. You can also use HTTP tracing tools like Fiddler to view the API calls made by this code.

Finally, thank you to Stewart Adam and Denis Kisselev for collaborating on this project.

A couple of weeks ago I was given the opportunity of working with a partner to build a solution that would hopefully help them automate their expense (receipts) processing.

The scenario was simple:

  1. Upload Image (.png, .jpg).
  2. Extract Data from Image.
  3. Process Expense using extracted Data.

Whilst the scenario sounded simple, there was a need for a reliable infrastructure to enable the processing, i.e. queue the image, hand off the image for processing, track status, handle errors, and return a success or failed state.

[Note, for the purposes of this blog post I am not going to touch on how the data was extracted. This is covered in another blog post by one of my colleagues on the team, which can be found here. In my example I am going to mock the data extraction using a single callout to the Microsoft Vision API OCR (Optical Character Recognition) method.]

After several iterations the following architecture was agreed:

Receipt Processing Diagram

Walking through the steps:

  1. User takes a photo of the receipt and, using a Xamarin app, chooses to upload the image for processing (Azure Blob Storage). [Note, for the purpose of this blog, rather than a Mobile Test App (as per the diagram) I have included a simple .NET Console application which is not production ready and is for demo / PoC purposes only.]
  2. Once the image is uploaded the Xamarin app adds a message to the Expenses Queue (Azure Queue Storage) to trigger the next step in the process.
  3. The Expense Processing Function is an Azure Function that is triggered when a message is placed onto a specific Azure Queue Storage queue (QueueTrigger). When triggered, the Azure Function creates a record in a table (Azure Table Storage) to track status and to store the success or failed state of the process.
  4. The Expense Processing Function hands off the actual processing of the image to another function. Like step 2, this is managed by placing a message on a queue, which then triggers the Receipt Processing Function (Azure Function). At this point you may be wondering why there is a dotted line around the 3 boxes and why it is named Smart Services. This is to suggest that these services are isolated and have no dependencies on any other service. This was a key ask from the partner because, over time, other apps, not just the Expense Processor, may need to call the receipt processing service.
  5. Once the processing has completed (success or fail) the Receipt Processing Function hands off the result back to the calling application. To ensure isolation (as per step 4) the Receipt Processing Function simply hands off using a callback URL as defined in the original message item. This callback URL endpoint is another Azure Function, denoted in the diagram as the Processing Callback Function, whose trigger this time is an HttpTrigger.
  6. The purpose of the Processing Callback Function is to record the outcome of the Receipt Processing Function. It does so by updating the table record that was created in step 3.
  7. The Processing Callback Function also adds another message to the Expenses Queue, which in turn will again trigger the Expense Processing Function. This step is optional, but allows the Expense Processing Function to do any post-processing, such as notifying a user that their expense has been processed.

[Note, for the purposes of this blog I have not included any code relating to the Web App that was built to view and manage the outputs of the OCR processing.]

So why Azure Functions? Well, I’m not going to paraphrase the contents of the Azure Functions documentation page, but in our scenario Azure Functions fitted perfectly:

  • small discrete code classes;
  • simple bindings and triggers;
  • no complicated server infrastructure;
  • cost effective – pay as you go;
  • simple to use, simple to integrate;
  • continuous deployment;
  • scale;
  • analytics and monitoring;

Okay, that’s the summary complete. Next I am going to walk through some of the key pieces of the solution and then finally provide instructions on where to learn how to set up continuous deployment to Azure using Visual Studio Team Services.

The Solution

All the code I describe in this blog post can be found on GitHub.

There are 2 folders: a simple image uploader console application, and the Azure Functions to process the image. Note, there are 2 Azure Function solutions: ExpenseOCRCapture, which contains the Expense Processing Functions (as per the diagram) that handle the processing workflow; and SmartOCRService, which contains the Receipt Processing Function (as per the diagram) to manage the callout to the Microsoft Vision API and parse the result.

Please feel free to download the solutions and try out the code yourself. For instructions on how to deploy and run, please refer to the pre-requisite and setup instructions outlined in the readme documents in each folder.

[Note, at the time of writing to build and deploy the Azure Functions you must use Visual Studio 2017 Preview (2). For details on where to download please refer here.]

The Azure Functions

Let’s have a look at the Azure Function solutions that are in GitHub:


The expense-capture folder contains a single Visual Studio 2017 Preview (2) solution with two Azure Functions, ExpenseProcessor and OCRCallback.

Looking at the contents of ExpenseProcessor.cs:


The ExpenseProcessor function’s primary purpose is to handle the image processing workflow. The function itself is triggered by a message being added to the Azure Storage Queue, receiptQueueItem.

As well as receiptQueueItem there are several other important parameters of this function, namely:

  • receiptsTable – this is an Azure Storage Table which provides tracking status and ultimately stores the output of the OCR request.
  • ocrQueue – this is an Azure Storage Queue and provides the binding to allow this function to callout to the SmartOCRService that we will discuss later. Note, its connection property is set as SmartServicesStorage – this is an Application Setting key/value pair and should be the Azure Storage Connection string associated with the SmartOCRService storage account.
  • incontainer/receipts – this is an Azure Storage Blob that is used to store the image files for processing. Note, rather than sharing this blob with the SmartOCRService, this function generates a Shared Access Signature (SAS) which the OCR service then uses. This removes a dependency on the SmartOCRService thus allowing multiple blob stores and therefore multiple requestors.

Step 0 in the case statement is responsible for the primary activity of this function and is the one that provides the SmartOCRService with the necessary information so that it can process the image:


The method StartOCR(…) is responsible for creating a new message queue item of type OCRQueueMessage. The message has 4 properties:

  • ItemId – unique identifier for the image being processed
  • ItemType – the type of the image, in this case ‘receipt’ but in other solutions this may be ‘invoice’ or ‘order’
  • ImageUrl – the SAS which provides the SmartOCRService the location and permissions required to access the image in blob storage
  • Callback – so that the SmartOCRService knows where to respond to once the OCR processing is complete, a callback URL is provided. This is the HTTP address of the OCRCallback function which we will describe next plus the function key which provides the caller (SmartOCRService) the necessary authentication to call the function. This property requires 2 application settings key/value pairs to be added:
    • OCRCallbackKey – when creating an HttpTrigger, the creator needs to provide the AuthLevel required to call the function. This can be one of 3 levels: function (the default), anonymous, and admin. For the purpose of the OCRCallback function the auth level has been set to function (function is useful when the function is only called from another service and there is usually no user interaction). Setting the auth level to function means that on each request the requestor must provide a key. This key can be found in the OCRCallback function’s Manage tab, as shown below. Copy the value and create a new Application Setting key/value pair with the key name OCRCallbackKey.


    • BaseCallbackAddress – this is the URL where the OCRCallback function is hosted, i.e. where you have published your Azure Function. You should create a new Application Setting key/value pair with the key name BaseCallbackAddress.

To trigger the SmartOCRService, the message is simply added to the OCRQueue. An added advantage of using a queue rather than a direct request to the SmartOCRService is that the role of the ExpenseProcessor is now temporarily complete, until the OCRCallback function triggers the state change or continuation of the workflow.
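Pulling the pieces above together, the message placed on the OCRQueue looks roughly like this (a Python sketch; the /api/OCRCallback route assumes the Azure Functions default HTTP route, and all values shown are placeholders):

```python
def build_ocr_message(item_id, item_type, image_sas_url,
                      base_callback_address, callback_key):
    """Sketch of the OCRQueueMessage handed to the SmartOCRService.

    The Callback combines the BaseCallbackAddress and OCRCallbackKey
    application settings; 'code' is the query-string parameter Azure
    Functions uses to accept a function key on an HttpTrigger.
    """
    return {
        "ItemId": item_id,          # unique id of the image being processed
        "ItemType": item_type,      # e.g. "receipt", "invoice", "order"
        "ImageUrl": image_sas_url,  # SAS-protected blob location
        "Callback": f"{base_callback_address}/api/OCRCallback?code={callback_key}",
    }


message = build_ocr_message(
    "1234", "receipt", "<blob-url-with-sas>",
    "https://myfunctions.example.net", "<function-key>")
```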

Step 1 of the process provides a placeholder to communicate when the processing of the image is complete (success or failure). In this simplified case, the code simply updates the receiptsTable to highlight the final status of the process. To identify which image has been processed, the ItemId is a property of the message payload.

Step 99 of the process allows the function to handle any necessary ‘retries’. As you will see as part of the OCRSmartService, there are several non-catastrophic scenarios which we may want to handle by retrying the process. This step simply restarts the process by following the steps executed as part of Step 0.
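The case statement can therefore be read as a small state machine; this Python sketch captures the flow (the real C# function writes to the receiptsTable and ocrQueue bindings rather than collecting strings):

```python
def process_expense_message(step, actions):
    """Sketch of the ExpenseProcessor case statement."""
    if step == 0:
        # New image: create the tracking record, then hand off to the OCR service.
        actions.append("create tracking record in receiptsTable")
        actions.append("enqueue OCRQueueMessage on ocrQueue")
    elif step == 1:
        # Processing finished: persist the final success/failure status.
        actions.append("update receiptsTable with final status")
    elif step == 99:
        # Retry requested by the callback: restart the process from step 0.
        return process_expense_message(0, actions)
    return actions
```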

Now looking at the contents of the function OCRCallback (found in OCRCallback.cs):


You’ll see that its primary role is to act as a conduit between the SmartOCRService and the ExpenseProcessor. It simply takes the result of the SmartOCRService and translates it into a new workflow state. This new workflow state will be Complete, Error or Retry. In the case of Complete and Error there will be additional state captured: either the text returned by the SmartOCRService in the case of Complete, or the reason why the SmartOCRService failed (in this case the exception message) in the case of Error.

It’s important to note that this separation of concerns between the ExpenseProcessor and the SmartOCRService was a condition desired by the partner – it is imagined that over time more processors will be put in place (e.g. InvoiceProcessor, OrderProcessor, etc.) and therefore the SmartOCRService should have no dependency on the requestor.


The smart-services folder contains a Visual Studio 2017 Preview (2) solution called SmartOCRService with one Azure Function, also called SmartOCRService.

As per the previous ExpenseProcessor function, this function is triggered when a message is added to the Azure Storage Queue (QueueTrigger) described by the parameter ocrQueue.

The function’s key responsibility is to call out to the OCR service (in this case the Microsoft Cognitive Vision API) and then return the result to the requestor via the callback URL provided as a property of the message payload.


The method MakeOCRRequest is responsible for calling out to the OCR service and then determining how to handle the response:


A key thing to note within this function: as the function is dependent on the OCR service being available, it needs to handle the exception when the service is unavailable. If the service is unavailable, the requestor may want to inform the user that they need to try again later, or, in our case, automate that process by having the requestor retry the whole process automagically.

By default, if the function fails there will be a maximum of 5 retry attempts. If the last retry attempt fails, the original queue message is added to a poison-message queue. Adding the message to a poison-message queue means the message will not be acted upon again, but gives a user some notification that the message has failed.

In our case we wanted to override this behaviour by preventing the message from being added to the poison-message queue. We did this by monitoring the number of retries, so that on the last retry we threw a MaxRetryException (a custom exception), which we then in turn handled by returning a result with a new status of ‘Retry’. If we go back to the OCRCallback function above, we handled the Retry status by adding the original message back onto the SmartOCRService queue.

Note, to monitor the number of retries, add the parameter dequeueCount, of type int, to the signature of the Run(…) method.
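The retry handling reduces to a few lines; this is a Python illustration of the logic, not the actual C# function:

```python
MAX_RETRIES = 5


class MaxRetryException(Exception):
    """Custom exception thrown on the final attempt."""


def handle_attempt(dequeue_count, service_available):
    """Sketch of the SmartOCRService retry logic.

    dequeue_count mirrors the int parameter the Azure Functions runtime
    binds to the number of times the queue message has been dequeued.
    """
    if service_available:
        return "Complete"
    if dequeue_count >= MAX_RETRIES:
        # Final attempt: return a 'Retry' result to the callback rather
        # than letting the runtime move the message to the poison queue.
        try:
            raise MaxRetryException("OCR service still unavailable")
        except MaxRetryException:
            return "Retry"
    # Re-throwing makes the runtime put the message back on the queue.
    raise RuntimeError("OCR service unavailable")
```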

Continuous Integration / Deployment

At the time of writing there were certain issues setting up Continuous Integration / Deployment from within Visual Studio 2017 Preview (2), but it was a requirement of the partner that this continuous infrastructure be in place.

The original plan was to investigate how this could be done by trying out the setup ourselves within Visual Studio Team Services, but after some research on the internet we found a great blog which set out the steps perfectly.

If you are interested in using Continuous Integration / Deployment I would suggest following the steps found here.

Application Monitoring

The final requirement the partner had was to be able to monitor their Azure Functions. After several iterations it was decided that the best way to get insights into how the application was performing was to integrate Microsoft Application Insights.

If you are interested in using Application Insights inside your Azure Functions then I would suggest you read the following blog post found here.

If you find Application Insights is overkill for your projects then I would suggest having a look at the following documentation.

Managing Authentication flow within Office Add-ins has just got a whole lot easier with the introduction of the new Office UI Dialog API.

This post shows you how to create a simple Excel Add-in Command in Visual Studio 2015 that uses the new Dialog API to authenticate a user and then import their Outlook Contacts using the Microsoft Graph API.

If you are new to developing O365 add-ins then I would highly recommend you check out the Getting Started page. There are reams of examples and samples to help you get up to speed, with links to the Office SDKs and a Yeoman Generator for Office if Visual Studio is not your thing.


The following is required to complete this module:

Step 1

In this step you’ll create a new Excel Office Add-in using Visual Studio. If you haven’t done so already, make sure you have downloaded and installed the Visual Studio Office Developer Tools.

With the latest Office developer tools installed open Visual Studio and choose File-New-Project. In the dialog prompt choose Office/SharePoint under Installed Templates-Visual C#.

VS Template

Choose Excel Add-in and enter a project name.

You will then be prompted to choose the type of add-in:

Addin type

Accept the default and press Finish.

Visual Studio will create 2 projects: an Office project, which contains the add-in manifest, and a web project, which contains the content for the add-in. If you want more detail about the anatomy of an Office add-in, please refer to the following resource.

Full source code can be found here but I just want to highlight some of the key areas:

  • (Contacts.js) Following the Dialog API guidance, create a displayDialog with the appropriate parameters:

dialog code

  • (Contacts.js) Specify how to handle the Dialog event type DialogMessageReceived. Note how the result message is parsed to extract the AAD access token:

message received code

  • (Auth.js) This solution uses the ADAL JavaScript library to simplify the authentication process. For more information about these libraries please refer to the following resource. The key point here is the simplicity of authenticating with O365 credentials. The solution does not contain any login pages or credential stores – all of that is managed by ADAL based on the parameters that are provided.

auth code

  • (Manifest.xml) The Office manifest defines the requirements and the capabilities of the Office add-in. As this Excel add-in will use Add-in Commands to deliver its functionality, the manifest needs to be structured accordingly. Note the manifest contains only one control, which executes a JavaScript function rather than launching an add-in task pane.


  • Based on the definition of the manifest, this add-in has a single entry point: a button on the Home Ribbon.

ribbon button

Step 2

In this step you’ll go through the steps of registering an app in Azure AD using the Office 365 App Registration Tool. You can also register apps in the Azure Management Portal, but the Office 365 App Registration Tool allows you to register applications without having access to the Azure Management Portal. Azure AD is the identity provider for Office 365, so any service/application that wants to use Office 365 data must be registered with it.

Once you have signed in you will be asked for some details about the app:


Feel free to give the app a name of your choice but the redirect URI must match what is shown above.

Once you hit Register you will be shown a second dialog which will hopefully say “Registration Successful” and show you your Client ID. Copy this value, as you will need it to complete the Office add-in solution.

Step 3

Back in Visual Studio, open the file App.js, which can be found in the App folder:

app details

The tenant is the O365 tenant you either created as part of the app registration or your existing O365 tenant name.


Build the solution.

Ensure the Office Project properties add-in start action is set to Office Desktop Client:

office properties

Press F5.

When you click the Get Contacts button in the Office Ribbon you should be prompted with the following dialog:

Authentication screen

Enter your O365 credentials (a user belonging to the tenant you specified in App.js) and, if the user has contacts associated with the account, they should see them automatically added once the dialog is closed.
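Under the hood, once the dialog hands the access token back to the add-in, the contacts import is a single Microsoft Graph request. Sketched here in Python purely to show the shape of the call (the add-in itself makes it from JavaScript):

```python
def build_contacts_request(access_token):
    """Shape of the Microsoft Graph request used to import Outlook contacts."""
    return {
        "method": "GET",
        "url": "https://graph.microsoft.com/v1.0/me/contacts",
        "headers": {"Authorization": f"Bearer {access_token}"},  # token from the dialog
    }


request = build_contacts_request("<access-token-from-dialog>")
```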

I hope you have enjoyed this post…feel free to send me any comments.