A friend of mine asked whether there were any ‘simple’ techniques to stop / start an Azure VM from a mobile application.

Obviously they could use the Azure Portal directly from the browser app on their phone, but this was deemed ‘not-simple-enough’.

Before we get into the solution, if you want to try this for yourself you will need:

  1. An O365 Account with Flow and PowerApps enabled.
  2. An Azure Account with a single VM (v2) (make a note of the Name and its ResourceGroup Name).

So, being a big fan of Microsoft Flow I came up with the following Flow Template:

image

The trigger is simple: ‘when a new item is created in a SharePoint list’. Based on the value of the Action field, the Flow then performs the corresponding action.

To create the Flow Template, let’s first start with the SharePoint List.

Here is the SharePoint List:

image

Just two columns: Title, which is created by default when a new SharePoint list is created, and Action, which I have created as a Choice with two values, Stop and Start. That is it; that is the SharePoint List in its entirety. To create the Flow, choose the first Flow Template in the list.

image

On selecting the template you will be redirected to the Flow website. On editing the Flow Template you will be presented with the following flow:

image

Remove the last two actions so that you only see the When a new item is created action. Keep Flow open as we will be coming back to it later to complete the template.

Back in SharePoint, now click the PowerApps dropdown. By clicking this dropdown, PowerApps will automagically build you a mobile application based on your list. In 2 minutes you have a working mobile application that you can publish and then share. That is all I am going to cover in PowerApps; the application it builds gives us everything we need. Once you have published the app, go to your mobile device, download the PowerApps app for your device, and log in with the O365 Account used to create the SharePoint List; you should then see the PowerApp created above, ready to open and run. You could even pin the new app to your Home screen, thereby bypassing the need to open PowerApps first (PowerApps will open in the background).

So now you have your trigger and your mobile app. All we need to do now is add the actions for when a SharePoint List item is added with an Action of Start, and for when one is added with an Action of Stop. For this I am going to use Azure Automation Runbooks:

To create the Runbook log into the Azure Portal and open your Azure Automation Account. If you don’t have one, click +New and search for Automation. Once your Automation Account is open click on the Runbooks option under Process Automation. For the purposes of this solution we are going to use two Gallery Runbooks, so choose Browse Gallery and search for StartAzureV2Vm and StopAzureV2Vm (Graphical)…for each, choose Import. For more information on how to create or import a Runbook please follow this document.

Once you have both Runbooks in your Account, choose one of them and select Edit. For the purpose of this solution we want to stop and start a single VM, so edit each Graphical Runbook so that, once the script has connected to Azure, it gets a single VM only. Your Graphical Runbook should then look something like this:

image

Publish your Runbooks and try each of them out. You will be asked for the ResourceGroup name of the VM and the name of the single VM.

If they both run successfully we now want to add a Webhook for each Runbook so that we can run the Runbooks remotely via our Flow template. For information on how to create the Webhook for each Runbook please follow this document. Note, you only get the option to copy the Webhook Url once, so make sure you click the Copy button. For ease, I pre-populated the RESOURCEGROUPNAME and VMNAME. As a stretch goal, you may want to try passing these values as part of the request. To do that you should add 2 extra columns to your SharePoint List so that, once you have re-created the PowerApp, you will have the option of entering the names, which you can then use as part of your Flow.
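For illustration, calling the webhook from code amounts to a single HTTP POST. Here is a hedged C# sketch (the token Url is a placeholder, and the ResourceGroupName / VMName property names are hypothetical; your runbook reads whatever you send via the $WebhookData.RequestBody parameter):

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class RunbookWebhookCaller
{
    static async Task Main()
    {
        // placeholder: the one-time Url you copied when creating the Webhook
        const string webhookUrl = "https://s1events.azure-automation.net/webhooks?token=...";

        // hypothetical payload; the runbook parses it from $WebhookData.RequestBody
        var body = "{ \"ResourceGroupName\": \"my-rg\", \"VMName\": \"my-vm\" }";

        using (var client = new HttpClient())
        {
            var response = await client.PostAsync(
                webhookUrl, new StringContent(body, Encoding.UTF8, "application/json"));

            // Azure Automation returns 202 Accepted once the job is queued
            Console.WriteLine(response.StatusCode);
        }
    }
}
```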

Once you have the Webhooks for the Stop and Start Runbook, it’s now just a case of completing the Flow Template.

image

Following the Trigger, create a Condition based on the value of the Action field: if it is equal to Start then do the If yes action, else do the If no action. The action is simply an HTTP action with the Method set to POST. The Url is either the Start or the Stop Runbook Webhook Url you created previously.

Voila! You can now use a mobile app to stop and start your Azure VM.

This is obviously a very simple case, but hopefully it gives you food for thought on how you can use a mix of O365 and Azure Automation to manage your Azure Assets.

Scenario

The easiest way to illustrate the purpose of this blog is to describe the scenario we are trying to solve:

‘ContosoApps’, a Microsoft cloud solution provider (CSP), sells Office 365 and additionally bundles their cloud software and services (‘family-of-apps’) to provide value-add to their customers. As part of the customer acquisition flow, the goal is for a new customer to sign up for Office 365 (which includes creating a new tenant and subscription) through ContosoApps. Additionally, as part of this onboarding process the flow should “pre-install” ContosoApps’ ‘family-of-apps’ so that all the customer’s users can take advantage of them.

Today this ‘family-of-apps’ onboarding (through the CSP pre-consent flow in Microsoft Graph) only works for applications that access Directory and Intune functionality. If the app tries to access other services behind Microsoft Graph, this pre-consent flow will not work.

This inability to use the CSP pre-consent flow then requires an admin of the customer to pre-consent each AAD application in the ‘family-of-apps’ manually by opening the application, signing-in and then consenting (accepting) the consent dialog describing the permissions required.

This manual approach is sometimes considered laborious and often leads to a bad user experience, which in turn can create a negative perception of your ‘family-of-apps’.

Therefore, the objective of this blog is to provide guidance on how to use the AAD Graph to allow a tenant (customer / ISV user) admin to grant pre-consent for multiple applications (‘family-of-apps’) by consenting to a single ‘bootstrapper’ application.

That is, how you as an ISV / developer can create a bootstrap application that you can then use to onboard your AAD applications to your customers / users, preventing the tenant admin from having to perform multiple consents, i.e. one for each of your AAD applications in the ‘family-of-apps’.

Approach

The approach requires that there is a ‘bootstrap’ application that, once granted the appropriate permissions by an administrator, can record consent grants for the family-of-apps. If you as an ISV / developer also happen to be a CSP, then the ‘bootstrap’ application can actually be pre-consented following the how-to-set-up-a-partner-managed-application flow in this document. If you are not a CSP, or you prefer to manually configure the bootstrap application, you should skip the steps under ‘Pre-consent your app for all your customers’.

Important: When registering the bootstrap application in the Azure Portal in your organisation’s directory, you should configure the application to require ‘Access the directory as the signed-in user’ in the Windows Azure Active Directory API, as per the figure below. You also need to add a permission for the Microsoft Graph API, e.g. ‘Sign in and read user profile’; this ensures the necessary Service Principal is created in the customer’s tenant directory.

image

Note: It is also assumed that each of your applications (‘family-of-apps’) is already registered with AAD, through either the Application Registration experience in the Azure Portal or the Application Registration website that can be found here.

Note: If you plan to create a web app version of the bootstrap application, remember to register it as multi-tenant.

Bootstrap Application

To help you understand the steps required to build the bootstrap application, we have included two applications you can download and modify. Once registered in your tenant’s directory (see above) and modified to include your ‘family-of-apps’, the code should work as-is. However, how you onboard your customers will probably require you to consider where the bootstrap application fits in your onboarding process. Therefore, at the least, you should consider the downloadable applications simply as templates that you can use to understand how to build your own bootstrap application.

For this blog, we are going to concentrate on the code in the native .NET Console application.

Where possible (more on that later) we are using the AAD Graph .NET SDK, which is installed as a NuGet package, to make the requests to the AAD Graph. Whilst it is possible to make requests directly against the AAD Graph REST endpoint (graph.windows.net), using the .NET SDK simplifies the code and removes the need for us to serialize and de-serialize JSON objects.

There is also a reference to the Active Directory Authentication Library, or ADAL for short. Using this library simplifies the authentication process immensely, easily allowing us to separate the authentication code from the application logic. All the code required to authenticate a user can be found in the class AuthenticationHelper.cs.

Finally, there is a reference to the Microsoft Graph Service Client SDK. Whilst the bootstrap application does not make use of the data returned by the call, there is a circumstance where a request needs to be made…more on that later.

Code

The bootstrap application (once granted the correct permissions) will be able to programmatically provision any AAD application, including providing consent (administrative consent on behalf of the organisation) if it is run in the context of a signed-in administrator in the target customer / user tenant.

The following sections provide details on how to record consent for delegated permissions (i.e. for interactive apps that run in the context of a signed-in user) and application permissions (i.e. for daemon or background services that run without any signed-in user being present). See this article for more details. The process of provisioning and recording consent will need to be done for each of the apps in the ‘family-of-apps’ that you provide.

Configuration: Once you have registered the bootstrap application there are a number of parameters that you need to modify in order for authentication to happen successfully. These parameters can all be found in the class Constants.cs. The parameters you need to modify are listed below, with a sketch after the list:

  • BootstrapClientId – generated when registering the application in the Azure Portal. This value is represented by the Application Id for the application.
  • RedirectUrl – specified when registering the application in the Azure Portal.
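As a minimal sketch, a Constants.cs along these lines would hold those values (the client id and redirect Url here are placeholders; the two API appIds are the published well-known values):

```csharp
public static class Constants
{
    // placeholder values – use the ones from your own app registration
    public const string BootstrapClientId = "11111111-2222-3333-4444-555555555555";
    public const string RedirectUrl = "https://localhost/bootstrap";

    // well-known appIds for the two Microsoft APIs referenced later
    public const string AADGraphAppId = "00000002-0000-0000-c000-000000000000";
    public const string MicrosoftGraphAppId = "00000003-0000-0000-c000-000000000000";

    // resource identifiers used when requesting tokens
    public const string AADGraphResource = "https://graph.windows.net";
    public const string MicrosoftGraphResource = "https://graph.microsoft.com";
}
```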

image


Authentication: As per previous comments, the logic to authenticate a user is for the most part the responsibility of the AuthenticationHelper.cs class. Its only purpose is to authenticate a user and manage the access tokens. There are 2 tokens: one for the AAD resource and one for the Microsoft Graph resource. Once the user has authenticated we cache the access token as a static resource for the lifetime of the process. This effectively means the user only needs to sign in once (maybe twice if we do need to use the Microsoft Graph resource) and then we can make use of that token for all future requests.
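A stripped-down sketch of that helper using ADAL might look as follows (assuming the Constants values above; ADAL’s in-memory token cache is what lets us reuse the token for the lifetime of the process):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class AuthenticationHelper
{
    // one context for the process; its default token cache keeps tokens in memory
    private static readonly AuthenticationContext authContext =
        new AuthenticationContext("https://login.microsoftonline.com/common");

    // resource is e.g. Constants.AADGraphResource or Constants.MicrosoftGraphResource
    public static async Task<string> GetTokenAsync(string resource)
    {
        AuthenticationResult result = await authContext.AcquireTokenAsync(
            resource,
            Constants.BootstrapClientId,
            new Uri(Constants.RedirectUrl),
            new PlatformParameters(PromptBehavior.Auto));

        return result.AccessToken;
    }
}
```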

Consent: The code for processing the consent flow can all be found in the class Request.cs.

When adding the OAuthGrant permissions you need to have at hand the following values for each of the applications in your ‘family-of-apps’ (a sketch of recording a grant follows the list):

  • AppId…this is the AppId of your AAD application
  • DisplayName…this is the name you have given to your AAD application
  • ResourceServicePrincipalId…this is the Id relevant to the API the permission belongs to, e.g. if you need to add the permission User.Read for the Microsoft Graph API then you will need the Id of the Microsoft Graph API…values for the Microsoft Graph API and the AAD Graph API can be found in the class Constants.cs
  • DelegatedPermissions…represented as a string delimited by a <space>, these are the delegated permissions required by your AAD application, e.g. if you need to read the user’s profile and send an email on behalf of the user, you will need the following scope of permissions: “User.Read Mail.Send”. Note, if the permissions cross multiple APIs, then you need to create an OAuthGrant per API.
  • AppOnlyPermissions…represented by a list of strings, these are the app-only permissions required by your AAD applications. Note, these permissions are represented by their GUID value and not their string name. To find the GUID value have a look at the AAD Application manifest file under resources.
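With those values at hand, recording a delegated grant boils down to a single POST against the oauth2PermissionGrants endpoint. Here is a hedged sketch of the raw REST form (the SDK exposes an equivalent entity; note that clientId and resourceId are service principal objectIds, not appIds):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static class GrantHelper
{
    public static async Task GrantDelegatedPermissionsAsync(
        string accessToken, string tenantId,
        string clientServicePrincipalObjectId,
        string resourceServicePrincipalObjectId,
        string scope) // space-delimited, e.g. "User.Read Mail.Send"
    {
        var grant = new
        {
            clientId = clientServicePrincipalObjectId,
            consentType = "AllPrincipals", // consent on behalf of all users in the tenant
            resourceId = resourceServicePrincipalObjectId,
            scope = scope,
            startTime = DateTime.UtcNow.ToString("o"),
            expiryTime = DateTime.UtcNow.AddYears(1).ToString("o")
        };

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var url = $"https://graph.windows.net/{tenantId}/oauth2PermissionGrants?api-version=1.6";
            var response = await client.PostAsync(url,
                new StringContent(JsonConvert.SerializeObject(grant), Encoding.UTF8, "application/json"));
            response.EnsureSuccessStatusCode();
        }
    }
}
```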

image

The assumption we have made in the Bootstrap application is that the permissions you need the user to consent to belong to either the Microsoft Graph API or the AAD Graph API. Prior to consenting to these permissions, the Microsoft Graph and/or AAD Graph API applications need to be represented as a Service Principal in the target tenant. At the time of writing, it is not possible to add the Microsoft Graph API application as a Service Principal directly in code. However, the Service Principal for each API will be added just-in-time to your tenant once consent is granted to any application requesting permissions for those APIs (e.g. the AAD Graph or Microsoft Graph). Therefore, your bootstrap application should include whatever basic ‘Read own profile’ permission is available to ensure the Service Principals for the Microsoft APIs are automatically created. The console application will then be able to request consent for use of the bootstrap application; it will prompt the user to sign in again with their credentials, as a new access token needs to be generated to make the request.

image

The next step is to add, as Service Principals, any other API applications that your AAD applications need consent permissions for. As per previous comments, we assume you will only need permissions for the Microsoft Graph API and the AAD Graph API, and therefore these are the only Service Principals we add.

image

The code to add the Service Principal will only do so if the tenant does not already have the Service Principal in its directory.

If you need to add more, then follow the code on how to create the Service Principal, replacing the GUID and the Display Name for each. Note, if you are struggling to find the GUID for the API, then the advice is to look in the manifest file associated with your AAD application, as per the app-only permission Ids discussed earlier.
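A hedged sketch of that check-then-add logic using the AAD Graph .NET SDK (the token callback and variable names are illustrative):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.ActiveDirectory.GraphClient;

public static class ServicePrincipalHelper
{
    public static async Task EnsureServicePrincipalAsync(
        string accessToken, string tenantId, string appId, string displayName)
    {
        var client = new ActiveDirectoryClient(
            new Uri($"https://graph.windows.net/{tenantId}"),
            () => Task.FromResult(accessToken));

        // only add the Service Principal if the tenant does not already have it
        var existing = await client.ServicePrincipals
            .Where(sp => sp.AppId == appId).ExecuteAsync();

        if (!existing.CurrentPage.Any())
        {
            await client.ServicePrincipals.AddServicePrincipalAsync(new ServicePrincipal
            {
                AppId = appId,            // e.g. Constants.MicrosoftGraphAppId
                DisplayName = displayName,
                AccountEnabled = true
            });
        }
    }
}
```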

Next, create a collection of OAuthGrants. The contents of each OAuthGrant and the values you need were discussed earlier…the code to do so is as per below:

image

From here on you should no longer have to modify any of the code. The process to add the permissions is as follows:

image

You will notice that when adding the app-only permissions we don’t use the AAD Graph Client SDK. This is because, at the time of writing the bootstrap application, the SDK does not support adding AppRoleAssignments.
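So for app-only permissions the code drops down to the REST endpoint directly. A hedged sketch (the appRoleId is the GUID taken from the resource API’s manifest, as discussed earlier):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static class AppRoleHelper
{
    public static async Task GrantAppOnlyPermissionAsync(
        string accessToken, string tenantId,
        string clientServicePrincipalObjectId,
        string resourceServicePrincipalObjectId,
        Guid appRoleId)
    {
        var assignment = new
        {
            id = appRoleId,                                // GUID of the app-only permission
            principalId = clientServicePrincipalObjectId,  // the SP receiving the role
            principalType = "ServicePrincipal",
            resourceId = resourceServicePrincipalObjectId  // the SP that defines the role
        };

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var url = $"https://graph.windows.net/{tenantId}/servicePrincipals/" +
                      $"{clientServicePrincipalObjectId}/appRoleAssignments?api-version=1.6";
            var response = await client.PostAsync(url,
                new StringContent(JsonConvert.SerializeObject(assignment), Encoding.UTF8, "application/json"));
            response.EnsureSuccessStatusCode();
        }
    }
}
```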

Conclusion

Okay, we are done. The bootstrap application should now have granted all the necessary permissions your AAD applications (‘family-of-apps’) require. You should find that when a user of the tenant targeted by the bootstrap application signs in to your AAD application, the consent dialog is suppressed.

Remember, you can download either a native or a web application bootstrap here.

If you would like to know more about the specific API calls used for programmatic consent (e.g. for use in a language unsupported by the Graph SDK), see also this post by Arsen which details some of the API calls and their parameters in greater detail. You can also use HTTP tracing tools like Fiddler to view the API calls made by this code.

Finally, thank you to Stewart Adam and Denis Kisselev for collaborating on this project.

A couple of weeks ago I was given the opportunity of working with a partner to build a solution that would hopefully help them automate their expense (receipts) processing.

The scenario was simple:

  1. Upload Image (.png, .jpg).
  2. Extract Data from Image.
  3. Process Expense using extracted Data.

Whilst the scenario sounded simple, there was a need for a reliable infrastructure to enable the processing, i.e. queue the image, hand off the image for processing, track status, handle errors, and return a success or failed state.

[Note, for the purposes of this blog post I am not going to touch on how the data was extracted. This is covered in another blog post by one of my colleagues on the team that can be found here. In my example I am going to mock the data extraction using a single callout to the Microsoft Vision API OCR (Optical Character Recognition) method.]

After several iterations the following architecture was agreed:

Receipt Processing Diagram

Walking through the steps:

  1. User takes a photo of the receipt and, using a Xamarin app, chooses to upload the image for processing (Azure Blob Storage). [Note, for the purpose of this blog, rather than a Mobile Test App (as per diagram) I have included a simple .NET Console application which is not production ready and for demo / PoC purposes only]
  2. Once the image is uploaded the Xamarin app adds a message to the Expenses Queue (Azure Queue Storage) to trigger the next step in the process.
  3. The Expense Processing Function is an Azure Function that is triggered when a message is placed onto a specific Azure Queue Storage (QueueTrigger). When triggered the Azure Function creates a record in a table (Azure Table Storage) to track status and to store the success or failed state of the process.
  4. The Expense Processing Function hands off the actual processing of the image to another function. Like step 2, this is managed by placing a message on a queue, which then triggers the Receipt Processing Function (Azure Function). At this point you may be wondering why there is a dotted line around the 3 boxes and why it is named Smart Services. This is to suggest that these services are isolated and have no dependencies on any other service. This was a key ask by the partner because over time other apps, not just the Expense Processor, may need to call the receipt processing service.
  5. Once the processing has completed (success or fail) the Receipt Processing Function hands off the result back to the calling application. To ensure isolation (as per step 4) the Receipt Processing Function simply hands off using a callback URL as defined in the original message item. This callback Url endpoint is another Azure Function, denoted in the diagram as the Processing Callback Function, whose trigger this time is an HttpTrigger.
  6. The purpose of the Processing Callback Function is to update the state returned by the Receipt Processing Function. It does so by updating the table record that was created in step 3.
  7. The Processing Callback Function also adds another message to the Expenses Queue, which in turn will again trigger the Expense Processing Function. This step is optional, but allows the Expense Processing Function to do any post-processing, such as notifying a user that their expense has been processed.

[Note, for the purposes of this blog I have not included any code relating to the Web App that was built to view and manage the outputs of the OCR processing.]

So why Azure Functions? Well, I’m not going to paraphrase the contents of the Azure Functions documentation page, but in our scenario Azure Functions fitted perfectly:

  • small discrete code classes;
  • simple bindings and triggers;
  • no complicated server infrastructure;
  • cost effective – pay as you go;
  • simple to use, simple to integrate;
  • continuous deployment;
  • scale;
  • analytics and monitoring;

Okay, that’s the summary complete. Next I am going to walk through some of the key pieces of the solution, and then finally provide pointers on how you would go about setting up continuous deployment to Azure using Visual Studio Team Services.

The Solution

All the code I describe in this blog post can be found on GitHub.

There are 2 folders: a simple image uploader console application, and the Azure Functions to process the image. Note, there are 2 Azure Function solutions: ExpenseOCRCapture, which contains the Expense Processing Functions (as per diagram) that handle the processing workflow; and SmartOCRService, which contains the Receipt Processing Function (as per diagram) to manage the callout to the Microsoft Vision API and parse the result.

Please feel free to download the solutions and try out the code yourself. For instructions on how to deploy and run, please refer to the pre-requisite and setup instructions outlined in the readme documents in each folder.

[Note, at the time of writing to build and deploy the Azure Functions you must use Visual Studio 2017 Preview (2). For details on where to download please refer here.]

The Azure Functions

Let’s have a look at the Azure Function solutions that are in GitHub:

ExpenseOCRCapture

The expense-capture folder contains a single Visual Studio 2017 Preview (2) solution with two Azure Functions called ExpenseProcessor and OCRCallback.

Looking at the contents of ExpenseProcessor.cs:

image

The ExpenseProcessor function’s primary purpose is to handle the image processing workflow. The function itself is triggered by a message being added to the Azure Storage Queue bound to the parameter receiptQueueItem.

As well as receiptQueueItem there are several other important parameters of this function, namely (a sketch follows the list):

  • receiptsTable – this is an Azure Storage Table which provides tracking status and ultimately stores the output of the OCR request.
  • ocrQueue – this is an Azure Storage Queue and provides the binding that allows this function to call out to the SmartOCRService we will discuss later. Note, its connection property is set as SmartServicesStorage; this is an Application Setting key/value pair and should be the Azure Storage connection string associated with the SmartOCRService storage account.
  • incontainer/receipts – this is an Azure Storage Blob that is used to store the image files for processing. Note, rather than sharing this blob with the SmartOCRService, this function generates a Shared Access Signature (SAS) which the OCR service then uses. This removes a dependency on the SmartOCRService, thus allowing multiple blob stores and therefore multiple requestors.
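Putting those bindings together, the function signature looks roughly like this sketch (queue, table and container names here are illustrative; the real ones are in the GitHub solution):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.WindowsAzure.Storage.Table;

public static class ExpenseProcessor
{
    [FunctionName("ExpenseProcessor")]
    public static void Run(
        [QueueTrigger("expenses")] string receiptQueueItem,
        [Table("receipts")] CloudTable receiptsTable,
        [Queue("ocr-requests", Connection = "SmartServicesStorage")] ICollector<string> ocrQueue,
        TraceWriter log)
    {
        log.Info($"Workflow message received: {receiptQueueItem}");
        // switch on the step carried by the message:
        //   0  – create the tracking row and hand off to the SmartOCRService
        //   1  – record the final success/failure state
        //   99 – retry by re-running the Step 0 logic
    }
}
```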

Step 0 in the case statement is responsible for the primary activity of this function and is the one that provides the SmartOCRService with the necessary information so that it can process the image:

image 

The method StartOCR(…) is responsible for creating a new queue message of type OCRQueueMessage. The message has 4 properties (sketched in code after the list):

  • ItemId – unique identifier for the image being processed
  • ItemType – the type of the image, in this case ‘receipt’ but in other solutions this may be ‘invoice’ or ‘order’
  • ImageUrl – the SAS which provides the SmartOCRService the location and permissions required to access the image in blob storage
  • Callback – so that the SmartOCRService knows where to respond once the OCR processing is complete, a callback URL is provided. This is the HTTP address of the OCRCallback function (which we will describe next) plus the function key, which provides the caller (the SmartOCRService) with the necessary authentication to call the function. This property requires 2 Application Setting key/value pairs to be added:
    • OCRCallbackKey – when creating an HttpTrigger the creator needs to provide the AuthLevel required to call the function. This can be one of 3 levels: function (default), anonymous, and admin. For the purpose of the OCRCallback function the auth level has been set to function (function is useful when the function is only called from another service and there is usually no user-interaction). Setting the auth level to function means that on each request the requestor must provide a key. This key can be found in the OCRCallback function manage tab as shown below. Copy the value and create a new Application Setting key/value pair with the name of the key being OCRCallbackKey.

image

    • BaseCallbackAddress – this is the Url of where the OCRCallback function is hosted, i.e. where you have published your Azure Function, which will usually be something like https://myfunctions.azurewebsites.net/api. You should create a new Application Setting key/value pair with the name of the key being BaseCallbackAddress.
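Putting the four properties together, the message type is a simple POCO along these lines (a sketch; the real class lives in the GitHub solution):

```csharp
public class OCRQueueMessage
{
    public string ItemId { get; set; }   // unique identifier for the image being processed
    public string ItemType { get; set; } // e.g. "receipt", "invoice" or "order"
    public string ImageUrl { get; set; } // SAS Url granting the OCR service access to the blob
    public string Callback { get; set; } // e.g. {BaseCallbackAddress}/OCRCallback?code={OCRCallbackKey}
}
```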

To trigger the SmartOCRService the message is simply added to the OCRQueue. An added advantage of using a queue rather than a direct request to the SmartOCRService is that the role of the ExpenseProcessor is now temporarily complete until the OCRCallback function triggers the state change or continuation of the workflow.

Step 1 of the process provides a placeholder to communicate when the processing of the image is complete (success or failure). In this simplified case, the code simply updates the receiptsTable to highlight the final status of the process. To identify which image has been processed, the ItemId is a property of the message payload.

Step 99 of the process allows the function to handle any necessary ‘retries’. As you will see as part of the SmartOCRService, there are several non-catastrophic scenarios which we may want to handle by retrying the process. This step simply restarts the process by following the steps executed as part of Step 0.

Now looking at the contents of the function OCRCallback (found in OCRCallback.cs):

image

You’ll see that its primary role is to act as a conduit between the SmartOCRService and the ExpenseProcessor. It simply takes the result of the SmartOCRService and translates it into a new workflow state. This new workflow state will be either Complete, Error or Retry. In the case of Complete and Error there will be additional state captured: either the returned text from the SmartOCRService in the case of Complete, or the reason why the SmartOCRService failed, which in this case will be the Exception message.

It’s important to note that this separation of concerns between the ExpenseProcessor and the SmartOCRService was a desired condition of the partner; it is imagined that over time more processors will be put in place (e.g. InvoiceProcessor, OrderProcessor, etc.) and therefore the SmartOCRService should have no dependency on the requestor.

SmartOCRService

The smart-services folder contains a Visual Studio 2017 Preview (2) solution called SmartOCRService with one Azure Function, also called SmartOCRService.

As per the previous ExpenseProcessor function, this function is triggered when a message is added to the Azure Storage Queue (QueueTrigger) described by the parameter ocrQueue.

The function’s key responsibility is to call out to the OCR Service (in this case the Microsoft Cognitive Vision API) and then return the result to the requestor via the callback Url provided as a property of the message payload.

image

The method MakeOCRRequest is responsible for calling out to the OCR service and then determining how to handle the response:

image

Key thing to note within this function: as the function is dependent on the OCR Service being available, it needs to handle the exception when the service is unavailable. If the service is unavailable the requestor may want to inform the user that they need to try again later, or, in our case, automate that process by having the requestor retry the whole process automagically.

By default, if the function fails there will be a maximum of 5 retry attempts. If the last retry attempt fails then the original queue message is added to a poison-message queue. Adding the message to a poison-message queue means the message will not be acted upon again, but it provides a user some notification that the message has failed.

In our case we wanted to override this behaviour by preventing the message being added to the poison-message queue. We did this by monitoring the number of retries so that on the last retry we threw a MaxRetryException (a custom Exception), which we in turn handled by returning a result with a new status of ‘Retry’. If we go back to the previous OCRCallback function above, we handled the Retry status by adding the original message back onto the SmartOCRService queue.

Note, to monitor the number of retries the function has attempted, add the parameter dequeueCount, which is of type int, to the signature of the Run(…) method, as sketched below:
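A rough sketch of how the dequeueCount check and the custom exception fit together (the queue name and handling are illustrative; the real code is in the GitHub solution):

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public class MaxRetryException : Exception
{
    public MaxRetryException(string message) : base(message) { }
}

public static class SmartOCRService
{
    private const int MaxRetries = 5; // the runtime default before a message is poisoned

    [FunctionName("SmartOCRService")]
    public static void Run(
        [QueueTrigger("ocr-requests")] string message,
        int dequeueCount, // populated by the Functions runtime with the delivery count
        TraceWriter log)
    {
        try
        {
            // ...call out to the OCR service here; if it is unavailable and this is
            // the final attempt, raise the custom exception instead of failing...
            if (dequeueCount >= MaxRetries)
                throw new MaxRetryException($"OCR failed after {dequeueCount} attempts");
        }
        catch (MaxRetryException ex)
        {
            log.Warning(ex.Message);
            // post a result with status 'Retry' to the callback Url here, so the
            // message never reaches the poison queue and the requestor can re-queue it
        }
    }
}
```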

Continuous Integration / Deployment

At the time of writing there were certain issues setting up Continuous Integration / Deployment from within Visual Studio 2017 Preview (2), and it was a requirement of the Partner to have this continuous infrastructure in place.

The original plan was to investigate how this could be done by trying out the setup ourselves within Visual Studio Team Services, but after some research on the internet we found a great blog which set out the steps perfectly.

If you are interested in using Continuous Integration / Deployment I would suggest following the steps found here.

Application Monitoring

The final requirement the partner had was to be able to monitor their Azure Functions. After several iterations it was decided that the best way to get insights into how the application was performing was to integrate Microsoft Application Insights.

If you are interested in using Application Insights inside your Azure Functions then I would suggest you read the following blog post found here.

If you find Application Insights is overkill for your projects then I would suggest having a look at the following documentation.

A common bot use case that is often discussed is how to have a bot contact a user based on some triggered event, a lengthy job, or an external event such as a state change in the system. These types of bots are often referred to as proactive bots.

In this post I am going to show you how to create a proactive bot using the Microsoft Bot Framework and Microsoft Azure. I have decided to write the solution using C#, but all of the concepts are just as valid for Node.js.

The scenario we are going to cover is simple.

Ask a bot to notify me when an order is despatched

To follow the steps in this post you will need:

Here is an overview of the general architecture you will be building:

General Architecture

You can find all the code you need in my GitHub here.

1. Building and Registering the Bot.

The code to build the bot can be found here. To use this code you must:

  • First decide where to publish your bot. If you are using Visual Studio then an easy way to do so is to right-click the project and choose Publish. From here you can choose to publish directly to Azure. If you require additional help deploying your app then I encourage you to look at this resource. Please make a note of the website url as you will need it in the next step.
  • Next, register your bot in the Microsoft Bot Framework. Full instructions on how to do this can be found here. Important: the messaging endpoint is the url noted above plus /api/messages, e.g. https://mybot.azurewebsites.net/api/messages. You should make a note of the Bot Handle, Microsoft App ID and the Microsoft App Password (auto generated) as you will need these values for the next step.

Register Bot

  • Back in the Visual Studio project, update the web.config with the Bot Handle, Microsoft App ID and Microsoft App Password.

Web Config

  • Next, configure an Azure Storage Account. This Storage Account will be used for both the Table Storage and the Queues. If you don’t already have a Storage Account then follow the steps here to create a new account. Once created you will need the Storage Account connection string. To get this string from the Azure Portal, find your Storage Account and, under Settings—>Access Keys, copy the Connection String value and paste it as the value of the StorageConnectionString in the web.config.
  • Finally, rebuild the solution and re-publish. You can test the bot works either by adding your Bot to your favourite channel such as Skype, Slack, or Teams or by using the web chat pane – both options can be found in the My bots (choose your bot) menu option in the Microsoft Bot Framework portal.

2. Creating the Azure Functions

Azure Functions allow you to develop and deploy small pieces of code in the language of your choice in the cloud, without setting up infrastructure. If you are new to Azure Functions then I would definitely recommend having a look at this tutorial. As well as being a great getting-started guide, it will also introduce you to working with Azure Queues, which form the basis of our solution.

As per the general architecture diagram above, we will need to create 2 Azure Functions: one to capture the subscription posted from the Bot and another to capture the back office event being completed. Looking at each one in turn:

Subscription Trigger

This function is triggered when a new Queue Message is added to an Azure Queue. To create this function select the ‘+’ symbol under the Azure Function Plan and choose the QueueTrigger-CSharp template. Give your function the name SubscriptionTrigger, specify the Queue name as subscription-items, and select from the dropdown the Storage Account you created previously.

Once the function has been created, you need to allow the function to update an Azure Table. To do this select the Integrate menu option under the Azure Function – select + New Input and choose Azure Table Storage. Change both the Table parameter name and Table name to subscriptionTable, and select from the dropdown the Storage Account you created previously.

Back in the body of the function (run.csx) copy the code found here. Note, the parameter names for the types SubscriptionMessage and CloudTable need to match what you entered as part of the Integrate step. Choose Save, and I suspect you will get an error complaining about the type ConversationReference. This is a class specified in the Microsoft.Bot.Builder framework, which is available as a NuGet package. We therefore need to include a reference to this framework, and to do so we need to add a new file. The diagram below shows the new file (project.json) and the associated code:

Project Json
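If you haven’t created a project.json for a C# script function before, it looks something like this (the Bot Builder version shown is illustrative; use the 3.x version you are targeting):

```json
{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.Bot.Builder": "3.9.0"
      }
    }
  }
}
```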

Once you have added the file, try saving the function again and hopefully the code should compile successfully.

At this stage, you should be able to test your code to see if the Subscription side of the solution is working successfully. To check, go to the bot channel (e.g. web chat, Skype, Slack, etc.) of your choice and type the message #1234. You should get a response from the bot that the subscription to that order has been noted. If you look in your Azure Table Storage (try the Azure Storage Explorer tool) you should see a new row in the subscriptionTable.

Event Trigger

Like the previous function, this function is triggered when a new Queue Message is added to an Azure Queue. This time however, rather than the user triggering the new message from their bot, the message is added by a back-office system when the order has been despatched.

Therefore, as per the Subscription Trigger, create a new function using the QueueTrigger-CSharp template. The Queue name should be set as event-items and the Storage Account should be the account you set up previously.

Once the function has been saved, again as per the Subscription Trigger, use the Integrate menu item to add a new Azure Table Storage input. As this function will be reading from the table created as part of the subscription trigger, you should use the same Table name, subscriptionTable.

This function also makes use of the Microsoft.Bot.Builder type ConversationReference so as per above add a new file called project.json and add a reference to Microsoft.Bot.Builder.

Now click on the file run.csx and copy the code from here.

Finally, one thing you will notice in the code is that it references a MicrosoftAppPassword and a MicrosoftAppId. These values need to be added to the Function’s app settings. To add them go to the Function App—>Platform Features—>Application settings.

App Setting Menu

Under App settings in the task pane that opens, add the names MicrosoftAppId and MicrosoftAppPassword. The values for these 2 new keys should be the values you captured when registering your bot. Note, they should match what you have in your Bot project’s web.config file.

3. Configuring the ‘Some Back-office App’

The final project that I have included in the solution is a simple console app, the DeleteMeClient (the DeleteMe reference is meant to convey that it should be thrown away and replaced with a concrete app).

This console app is meant to represent the back-office app that is responsible for triggering the bot to tell the user their order has now been despatched.

To use the app you will need to update its app.config with the correct Storage Account connection string, as per the setting specified in the Bot project’s web.config. You should also ensure the queue name in Program.cs matches the one specified in the Event Trigger’s integration. If you have followed my recommendation this should be event-items.
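Under the covers the console app does little more than this sketch (the payload shape is illustrative; check Program.cs for the real one):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class Program
{
    static void Main()
    {
        // connection string from app.config – must match the Bot project's web.config
        var account = CloudStorageAccount.Parse("<StorageConnectionString>");
        var queue = account.CreateCloudQueueClient().GetQueueReference("event-items");
        queue.CreateIfNotExists();

        Console.Write("Order number: ");
        var orderNumber = Console.ReadLine();

        // the Event Trigger function picks this message up and notifies subscribers
        queue.AddMessage(new CloudQueueMessage($"{{ \"OrderNumber\": \"{orderNumber}\" }}"));
    }
}
```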

4. Pulling it all together!

Now that we have our Bot published and our Azure Function services configured, let’s pull it all together and play out the scenario. Note, to start any proactive scenario the user must interact with the bot first.

  • Using the Bot Channel of your choice (for illustration purposes I have configured Skype), type an order number in the format #XXXX, e.g. #1234
  • You should get a message response telling you your request has been noted.

image

  • Now run the DeleteMeClient console app and when prompted for an order number enter 1234
  • The bot will respond to you letting you know the order has despatched:

image

Try extending this simple scenario by subscribing to the same order but in a different channel, e.g. Slack or Microsoft Teams. You should see that when the order is despatched you get a notification in both channels.

5. What if the user wants to act on the notification they have been sent?

Usually when we start a dialog / conversation with a bot, we do so with the goal of achieving a desired outcome. To help a user reach that desired outcome the bot is usually designed with some aspect of workflow in mind. In this scenario that workflow is disrupted when the user is notified that the order has been despatched. Simply notifying the user is not always enough…what if the user wants to act upon the notification and get access to further actions? In a world of prescribed workflows this is not often easy. What we need to solve is how to disrupt a prescribed dialog with a notification, let the user act upon that notification, and then once they have finished let them continue with their original dialog.

We can solve this problem using Scorable Dialogs:

Scorable dialogs monitor all incoming messages to a bot and decide if they should try to handle a message. If a scorable matches against an incoming message then it handles the response to the user, rather than the message being picked up by the current dialog in the stack.

In our solution we have defined a Scorable (ScorableActions.cs) to handle when a user responds with the message ‘show actions’. In this context the Bot handles this request by creating a new dialog with the user asking them if they want to view the order or notify the customer of the order.
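To give a feel for the shape of a scorable, here is a hedged sketch of matching on ‘show actions’ (the real implementation is in ScorableActions.cs; this is illustrative only):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Scorables.Internals;
using Microsoft.Bot.Connector;

public class ShowActionsScorable : ScorableBase<IActivity, string, double>
{
    // a non-null state means this scorable wants to handle the message
    protected override Task<string> PrepareAsync(IActivity activity, CancellationToken token)
    {
        var message = activity.AsMessageActivity();
        var matched = message?.Text != null &&
            message.Text.Equals("show actions", StringComparison.OrdinalIgnoreCase);
        return Task.FromResult(matched ? message.Text : null);
    }

    protected override bool HasScore(IActivity item, string state) => state != null;

    protected override double GetScore(IActivity item, string state) => 1.0; // always win when matched

    protected override Task PostAsync(IActivity item, string state, CancellationToken token)
    {
        // push a dialog offering the 'view order' / 'notify customer' actions here
        return Task.CompletedTask;
    }

    protected override Task DoneAsync(IActivity item, string state, CancellationToken token)
        => Task.CompletedTask;
}
```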

The result: the conversation is disrupted and the user is allowed to complete an action:

image

For more details on Scorable Dialogs, I would highly recommend you watch MVP James Mann’s video and read MVP Gary Pretty’s blog post.

6. What about the disruption to the conversation flow?

The flow above is very simple. In the real world an order won’t be despatched in 5 minutes, and I suspect a bot’s applicability will be more than: tell me when an order is despatched.

My point is, the triggered event may occur after 5 minutes, 5 hours, 5 days, even 5 months or 5 years. Given this time frame, there is the potential that the user may already be mid-conversation with the bot when a notification is triggered. The complication then is how to respond back to the user with the notification and offer them further actions, without dramatically affecting the user’s current dialog and therefore their experience.

Let’s consider the following conversation flow:

  • User asks the bot to notify them when order #5674 is despatched
  • The bot immediately responds telling the user their request has been noted.

image

  • The user then starts a dialog with the bot unrelated to orders being despatched:

image

  • In the middle of this conversation, the bot responds with the notification that order 5674 has been despatched. The user then decides they want to act upon the notification they have just received, thus disrupting the original conversation flow:

image

Once they view the order, how can we remind the user where they were in the original conversation flow? To do this we make use of the frame collection associated with the bot context. This collection is a list of the current dialogs that are in play. By interrogating this collection and implementing a custom-defined function we can determine the last message that was sent, and then include it in the closing response of the Scorable dialog:

image

You can see the code in action by first looking at RootDialog.cs. This class implements a custom interface called ILastDialogMessageSentToUser.

The purpose of this interface is two-fold: to keep track of the last message sent, and to allow the Scorable implementation to filter the bot context collection based on its type:
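The interface itself can be as small as this sketch (the member name is illustrative):

```csharp
// implemented by RootDialog.cs so the scorable can both find the dialog in the
// frame collection (by type) and read back the last message it sent
public interface ILastDialogMessageSentToUser
{
    string LastMessageSentToUser { get; }
}
```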

RootDialog.cs

image

ScorableActionsDialog.cs

image

You’ll notice in each of the bot responses I have included the name of the dialog responsible for the message. This is deliberate and hopefully illustrates how Scorable Dialogs can help you traverse between dialogs.

7. Summary

In this blog you have learnt how to use the Microsoft Bot Framework and Azure to create a proactive bot. You have also learnt techniques for handling the disruption caused when notifications interrupt conversational flows.

When using the Microsoft Bot Framework, to make the bot available to a particular channel (e.g. Skype, Slack, Facebook, Teams, etc.) you need to host the Bot service on a public URL endpoint. The channel won’t be able to access your bot service if it is on a local server port hidden behind a NAT or firewall.

When designing / building / testing your code you don’t always want to keep redeploying and, more importantly, paying hosting costs. Introducing ngrok.

ngrok allows you to create secure tunnels to localhost and thereby exposes your local server to the internet.

So what I thought I would do is walk you through:

  1. configuring a simple Node.js bot using the Bot Builder SDK samples and deploying the bot to your localhost server
  2. using ngrok to expose your bot to the public internet
  3. registering the bot on the Microsoft Bot Framework
  4. adding the bot to your Skype channel


1. Create a simple Hello, World! Bot

The easiest way to do this is to use one of the sample bots found here. Follow the instructions to clone the repository and then navigate to one of the samples. If you are getting started with bots then my advice is to start with one of the Hello World samples. For the purpose of this walkthrough I used the hello-ChatConnector.

If you want to make sure your bot is working without having to register it first, try using the Bot Framework Emulator following these steps:

  1. In a Node.js console navigate to the hello-ChatConnector found at \botbuilder\Node\examples\hello-ChatConnector
  2. Run the command node app.js
  3. Make a note of the port, if you have not changed any of the app.js code the port should be 3978
  4. Open the emulator and change the Bot Url to http://localhost:3978/api/messages
  5. Try the emulator by posting a message to the chat window. You should see something like this:


image


2. Using ngrok

Given the bot is being hosted on localhost:3978, we can now use ngrok to expose this port to the public internet. To do this simply type ngrok http 3978 at a command prompt. You should see the following UI:

image

Copy the forwarding Url – this is the public endpoint for the Bot service.

For the purpose of trying this out in the emulator, copy this Url to the Bot Url as per step 1, remembering to include /api/messages.

You will notice that the emulator Url now shows an error. To solve this, open another command prompt and type ngrok http --host-header=rewrite 9000

Doing so should now enable you to run the emulator, but using public endpoints:

image


3. Register the bot on the Microsoft Bot Framework

If you haven’t already signed up to the Microsoft Bot Framework then you need to do that first. Once signed up and signed in, choose the Register a Bot tab.

The form will guide you through the steps required to register the bot. Ensure you copy the Microsoft App ID and Password somewhere safe, you will need to add these to your Bot service code. For the Messaging Endpoint use the forwarding Url captured in #2 that you used as the Bot Url.

4. Adding the bot to the Skype channel

Go back to the bot sample configured in step #1. You need to amend the app.js code file to include the Microsoft App ID and Password copied in step #3.

Once you have saved the changes follow the steps in #1 and #2 to deploy your bot to your local server and then use ngrok to make it public to the internet.

To ensure the changes you have made still work, use the emulator to test. You will need to update the Microsoft App ID and Microsoft App Password with the values captured in step #3; if you don’t, you will get a 401 Unauthorized.

Finally, go back to the Microsoft Bot Framework portal and add your bot to the Skype Channel.


image


And that is it…you have your first bot running locally being served to the public internet using ngrok and available to chat with using the Skype channel.

As a next step, you can debug your bot service within Visual Studio Code using the steps described here.

Has your customer asked you, “Which build of Office do I need in order to get that cool API or requirement set?”

Here is some great guidance that addresses that question. Check out these topics:

Ever wanted to develop for Skype for Business? Then here is a great set of resources to get you started: 

Skype Web & App SDK documentation: https://msdn.microsoft.com/en-us/skype/skypedeveloperplatform

Skype Preview program: https://skypepreview.com

Office development: https://dev.office.com/Skype

Web SDK Overview: https://msdn.microsoft.com/en-us/skype/websdk/skypewebsdk

Web SDK API keys & feature grid: https://msdn.microsoft.com/en-us/skype/websdk/apiproductkeys

Web SDK API reference: https://ucwa.skype.com/reference/WebSDK/modules/_s4b_sdk_d_.jcafe.html

UCWA 2.0 General reference: https://msdn.microsoft.com/en-us/skype/ucwa/ucwa2_0generalreference

UCWA 2.0 API reference: https://msdn.microsoft.com/en-us/skype/ucwa/ucwa2_0apireference

Skype App SDK: https://msdn.microsoft.com/en-us/skype/appsdk/skypeappsdk

Skype App iOS API reference: https://ucwa.skype.com/reference/appSDK/IOS/

Skype App Android reference: https://ucwa.skype.com/reference/appSDK/Android/

Code Samples

If you haven’t read Part 1 of this series then let me recap. Last week the Microsoft Office team announced the general availability of the Excel REST APIs for the Microsoft Graph. With these new endpoints it is now possible to interact with your existing Excel Workbooks using REST from your mobile, desktop and web apps without the need to open the Excel app – pretty awesome eh?

In my previous post I demonstrated how you can call one of the 300+ built-in functions that Excel supports out-of-the-box. Sometimes though, the built-in functions that come with Excel are not enough. Sometimes we need to build our own Excel calculations to support our own LoB reports and visualisations. In this post I will demonstrate how you can re-use that functionality within your own modern day web and mobile apps.

For the purposes of this post I have created a simple Restaurant Bill Calculator within Excel:

Excel Bill Calculator

This is a very simple example, but it is the concept of re-using the calculation logic, in this case Bill+Tip and Amount per Person, that I am trying to convey, not its complexity.

I then created a visual representation of the Excel Workbook in an ASP.NET application. Of course, I didn’t need to create a web app; I could have created just a Web API or a Console app.

Web App Calculator

So in this example we use the new Excel REST API to set the Excel Workbook named items Bill Amount, Tip, and Number in Party. We then use the Excel REST API again to get the results, by asking for the Excel Workbook named items Total Bill and Amount per Person.

In summary, complex business models no longer need to be rebuilt repeatedly; developers can leverage Excel to perform those calculations instantly and retrieve the results with simple API calls.

The full source-code for my sample can be found here. Please refer to the readme file for pre-requisites and how to configure the app based on your Azure and O365 accounts. One important note: the sample relies on you having an Excel workbook called RestaurantBillCalculator.xlsx in the root of your tenant’s OneDrive; if the file does not exist, it is uploaded when you run the application for the first time. There are obvious changes that could make the code more performant, such as a single namedItem that contains all of the fields rather than a namedItem per field, and not checking if the file exists each time.
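To make the shape of those calls concrete, here is a hedged sketch of the two requests using raw HTTP (the named-item names ‘BillAmount’ and ‘TotalBill’ are assumptions; use the names defined in your own workbook):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ExcelRestSample
{
    const string BaseUrl =
        "https://graph.microsoft.com/v1.0/me/drive/root:/RestaurantBillCalculator.xlsx:/workbook";

    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "<access token>");

            // set a named item's value (assumes a named item called 'BillAmount')
            var patch = new HttpRequestMessage(new HttpMethod("PATCH"),
                $"{BaseUrl}/names('BillAmount')/range")
            {
                Content = new StringContent("{ \"values\": [[ 40.50 ]] }", Encoding.UTF8, "application/json")
            };
            await client.SendAsync(patch);

            // read a calculated named item back (assumes a named item called 'TotalBill')
            var result = await client.GetStringAsync($"{BaseUrl}/names('TotalBill')/range");
            Console.WriteLine(result); // JSON range whose 'values' holds the calculated total
        }
    }
}
```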

If you want to start writing your own apps that use the Microsoft Graph then check out the following blog on how to get started with the Graph Excel REST API and, as always, refer to the Microsoft Graph API documentation for examples and syntax.