A couple of weeks ago I was given the opportunity to work with a partner to build a solution that would help them automate their expense (receipt) processing.

The scenario was simple:

  1. Upload Image (.png, .jpg).
  2. Extract Data from Image.
  3. Process Expense using extracted Data.

Whilst the scenario sounded simple, there was a need for a reliable infrastructure to enable the processing, i.e. queue the image, hand off the image for processing, track status, handle errors, and return a success or failed state.

[Note, for the purposes of this blog post I am not going to touch on how the data was extracted. This is covered in another blog post by one of my colleagues on the team, which can be found here. In my example I am going to mock the data extraction using a single callout to the Microsoft Vision API OCR (Optical Character Recognition) method.]
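To give a flavour of that mocked callout, a minimal request to the Vision API OCR method might look like the sketch below. The region in the endpoint and the subscription key are placeholders you would replace with your own values:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class OcrClient
{
    // Placeholder values - substitute your own region and subscription key.
    private const string OcrUri =
        "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr?language=en";
    private const string SubscriptionKey = "<your-vision-api-key>";

    public static async Task<string> ExtractTextAsync(byte[] imageBytes)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SubscriptionKey);

            using (var content = new ByteArrayContent(imageBytes))
            {
                content.Headers.ContentType =
                    new MediaTypeHeaderValue("application/octet-stream");
                HttpResponseMessage response = await client.PostAsync(OcrUri, content);
                response.EnsureSuccessStatusCode();

                // The response body is a JSON document of regions/lines/words.
                return await response.Content.ReadAsStringAsync();
            }
        }
    }
}
```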

After several iterations the following architecture was agreed:

Receipt Processing Diagram

Walking through the steps:

  1. User takes a photo of the receipt and using a Xamarin app, chooses to upload the image for processing (Azure Blob Storage). [Note, for the purpose of this blog, rather than a Mobile Test App (as per diagram) I have included a simple .NET Console application which is not production ready and for demo / PoC purposes only]
  2. Once the image is uploaded the Xamarin app adds a message to the Expenses Queue (Azure Queue Storage) to trigger the next step in the process.
  3. The Expense Processing Function is an Azure Function that is triggered when a message is placed onto a specific Azure Queue Storage (QueueTrigger). When triggered the Azure Function creates a record in a table (Azure Table Storage) to track status and to store the success or failed state of the process.
  4. The Expense Processing Function hands off the actual processing of the image to another function. Like step 2, this is managed by placing a message on a queue which then triggers the Receipt Processing Function (Azure Function). At this point you may be wondering why there is a dotted line around the 3 boxes and it is named Smart Services. This is to suggest that these services are isolated, that they have no dependencies on any other service. This was a key ask by the partner because over time other apps, not just the Expense Processor, may need to call the receipt processing service.
  5. Once the processing has completed (success or fail), the Receipt Processing Function hands off the result back to the calling application. To ensure isolation (as per step 4) the Receipt Processing Function simply hands off using a callback URL as defined in the original message item. This callback URL endpoint is another Azure Function, denoted in the diagram as Processing Callback Function, whose trigger this time is an HttpTrigger.
  6. The purpose of the Processing Callback Function is to update the state of the Receipt Processing Function. It does so by updating the table record that was created as per step 3.
  7. The Processing Callback Function also adds another message to the Expenses Queue, which in turn will again trigger the Expense Processing Function. This step is optional, but allows the Expense Processing Function to do any post-processing, such as notifying a user that their expense has been processed.
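The core of steps 1 and 2 above — uploading the image and queuing the trigger message — can be sketched as follows using the Azure Storage SDK. The container and queue names here are placeholders; the real console sample also adds error handling:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Queue;

class Uploader
{
    static async Task Main(string[] args)
    {
        // Connection string and names are placeholders for illustration.
        var account = CloudStorageAccount.Parse("<storage-connection-string>");

        // 1. Upload the receipt image to blob storage.
        CloudBlobContainer container = account.CreateCloudBlobClient()
            .GetContainerReference("incontainer");
        string itemId = Guid.NewGuid().ToString();
        CloudBlockBlob blob = container.GetBlockBlobReference($"receipts/{itemId}.jpg");
        await blob.UploadFromFileAsync(args[0]);

        // 2. Queue a message to trigger the Expense Processing Function.
        CloudQueue queue = account.CreateCloudQueueClient()
            .GetQueueReference("receiptqueue");
        await queue.CreateIfNotExistsAsync();
        await queue.AddMessageAsync(new CloudQueueMessage(itemId));
    }
}
```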

[Note, for the purposes of this blog I have not included any code relating to the Web App that was built to view and manage the outputs of the OCR processing.]

So why Azure Functions? Well I’m not going to paraphrase the contents of the Azure Functions documentation page, but in our scenario Azure Functions fitted perfectly:

  • small discrete code classes;
  • simple bindings and triggers;
  • no complicated server infrastructure;
  • cost effective – pay as you go;
  • simple to use, simple to integrate;
  • continuous deployment;
  • scale;
  • analytics and monitoring;

Okay, that’s the summary complete. Next I am going to walk through some of the key pieces of the solution, and then finally point you to instructions on how to set up continuous deployment to Azure using Visual Studio Team Services.

The Solution

All the code I describe in this blog post can be found on GitHub.

There are 2 folders: a simple image uploader console application, and the Azure Functions to process the image. Note, there are 2 Azure Function solutions: ExpenseOCRCapture, which contains the Expense Processing Functions (as per diagram) that handle the processing workflow; and SmartOCRService, which contains the Receipt Processing Function (as per diagram) that manages the callout to the Microsoft Vision API and parses the result.

Please feel free to download the solutions and try out the code yourself. For instructions on how to deploy and run, please refer to the pre-requisite and setup instructions outlined in the readme documents in each folder.

[Note, at the time of writing to build and deploy the Azure Functions you must use Visual Studio 2017 Preview (2). For details on where to download please refer here.]

The Azure Functions

Let’s have a look at the Azure Function solutions that are in GitHub:

ExpenseOCRCapture

The expense-capture folder contains a single Visual Studio 2017 Preview (2) solution with two Azure Functions called ExpenseProcessor and OCRCallback.

Looking at the contents of ExpenseProcessor.cs:

image

The ExpenseProcessor function’s primary purpose is to handle the image processing workflow. The function itself is triggered by a message being added to the Azure Storage Queue, receiptQueueItem.

As well as receiptQueueItem there are several other important parameters of this function, namely:

  • receiptsTable – this is an Azure Storage Table which provides tracking status and ultimately stores the output of the OCR request.
  • ocrQueue – this is an Azure Storage Queue and provides the binding to allow this function to callout to the SmartOCRService that we will discuss later. Note, its connection property is set as SmartServicesStorage – this is an Application Setting key/value pair and should be the Azure Storage Connection string associated with the SmartOCRService storage account.
  • incontainer/receipts – this is an Azure Storage Blob container that is used to store the image files for processing. Note, rather than sharing this blob with the SmartOCRService, this function generates a Shared Access Signature (SAS) which the OCR service then uses. This removes a dependency on the SmartOCRService, thus allowing multiple blob stores and therefore multiple requestors.
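Putting those bindings together, the ExpenseProcessor signature looks roughly like the sketch below. The queue, table and container names are illustrative placeholders; check the repository for the exact attribute values:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Queue;
using Microsoft.WindowsAzure.Storage.Table;

public static class ExpenseProcessor
{
    [FunctionName("ExpenseProcessor")]
    public static async Task Run(
        [QueueTrigger("receiptqueue")] string receiptQueueItem,
        [Table("receipts")] CloudTable receiptsTable,
        [Queue("ocrqueue", Connection = "SmartServicesStorage")] CloudQueue ocrQueue,
        [Blob("incontainer/receipts", FileAccess.Read)] CloudBlobDirectory receiptsDirectory,
        TraceWriter log)
    {
        // Update tracking state in receiptsTable, generate a SAS for the
        // uploaded image, then hand off to the SmartOCRService via ocrQueue.
    }
}
```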

Step 0 in the case statement is responsible for the primary activity of this function and is the one that provides the SmartOCRService with the necessary information so that it can process the image:

image 

The method StartOCR(…) is responsible for creating a new message queue item of type OCRQueueMessage. The message has 4 properties:

  • ItemId – unique identifier for the image being processed
  • ItemType – the type of the image, in this case ‘receipt’ but in other solutions this may be ‘invoice’ or ‘order’
  • ImageUrl – the SAS which provides the SmartOCRService the location and permissions required to access the image in blob storage
  • Callback – so that the SmartOCRService knows where to respond to once the OCR processing is complete, a callback URL is provided. This is the HTTP address of the OCRCallback function which we will describe next plus the function key which provides the caller (SmartOCRService) the necessary authentication to call the function. This property requires 2 application settings key/value pairs to be added:
    • OCRCallbackKey – when creating an HttpTrigger the creator needs to provide the AuthLevel required to call the function. This can be one of 3 levels: function (the default), anonymous, and admin. For the purpose of the OCRCallback function the auth level has been set to function (function is useful when the function is only called from another service and there is usually no user interaction). Setting the auth level to function means that on each request the requestor must provide a key. This key can be found in the OCRCallback function’s manage tab, as shown below. Copy the value and create a new Application Setting key/value pair with the name of the key being OCRCallbackKey.

image

    • BaseCallbackAddress – this is the URL where you have published your Azure Function, which will usually be something like https://myfunctions.azurewebsites.net/api. You should create a new Application Setting key/value pair with the name of the key being BaseCallbackAddress.
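For illustration, the message might be shaped like the following POCO, with the Callback value composed from the two application settings described above. The property and setting names are as listed; the exact types in the repository, and how it reads app settings, may differ:

```csharp
using System.Configuration;

public class OCRQueueMessage
{
    public string ItemId { get; set; }    // unique identifier for the image being processed
    public string ItemType { get; set; }  // e.g. "receipt", "invoice" or "order"
    public string ImageUrl { get; set; }  // SAS URL granting the OCR service access to the blob
    public string Callback { get; set; }  // where the OCR service posts its result
}

public static class CallbackBuilder
{
    // Compose the callback URL from the BaseCallbackAddress and OCRCallbackKey
    // settings, e.g. https://myfunctions.azurewebsites.net/api/OCRCallback?code=<key>
    public static string Build()
    {
        string baseAddress = ConfigurationManager.AppSettings["BaseCallbackAddress"];
        string key = ConfigurationManager.AppSettings["OCRCallbackKey"];
        return $"{baseAddress}/OCRCallback?code={key}";
    }
}
```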

To trigger the SmartOCRService the message is simply added to the OCRQueue. An added advantage of using a queue rather than a direct request to the SmartOCRService is that the role of the ExpenseProcessor is now temporarily complete until the OCRCallback function triggers the state change or continuation of the workflow.

Step 1 of the process provides a placeholder to communicate when the processing of the image is complete (success or failure). In this simplified case, the code simply updates the receiptsTable to highlight the final status of the process. To identify which image has been processed, the ItemId is a property of the message payload.

Step 99 of the process allows the function to handle any necessary ‘retries’. As you will see as part of the OCRSmartService, there are several non-catastrophic scenarios which we may want to handle by retrying the process. This step simply restarts the process by following the steps executed as part of Step 0.
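In outline, the case statement maps the message state onto those three steps. This is a simplified sketch with hypothetical helper names, not the exact repository code:

```csharp
switch (receiptItem.Step)
{
    case 0:
        // New request: create the tracking record, generate the SAS for the
        // image and queue a work item for the SmartOCRService.
        await StartOCR(receiptItem, receiptsTable, ocrQueue);
        break;

    case 1:
        // Processing complete (success or failure): persist the final status
        // so the client application can report back to the user.
        await UpdateStatus(receiptItem, receiptsTable);
        break;

    case 99:
        // Non-catastrophic failure reported by the SmartOCRService:
        // restart the workflow by re-running the Step 0 logic.
        await StartOCR(receiptItem, receiptsTable, ocrQueue);
        break;
}
```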

Now looking at the contents of the function OCRCallback (found in OCRCallback.cs):

image

You’ll see that its primary role is to act as a conduit between the SmartOCRService and the ExpenseProcessor. It simply takes the result of the SmartOCRService and translates it into a new workflow state. This new workflow state will be either Complete, Error or Retry. In the case of Complete and Error, additional state is captured: the returned text from the SmartOCRService in the case of Complete, or the reason why the SmartOCRService failed – in this case the exception message – in the case of Error.

It’s important to note that this separation of concerns between the ExpenseProcessor and the SmartOCRService was a desired condition of the partner – it is imagined that over time more processors will be put in place (e.g. InvoiceProcessor, OrderProcessor, etc.) and therefore the SmartOCRService should have no dependency on the requestor.

SmartOCRService

The smart-services folder contains a Visual Studio 2017 Preview (2) solution called SmartOCRService with one Azure Function, also called SmartOCRService.

As per the previous ExpenseProcessor function, this function is triggered when a message is added to the Azure Storage Queue (QueueTrigger) described by the parameter ocrQueue.

The function’s key responsibility is to call out to the OCR service (in this case the Microsoft Cognitive Vision API) and then return the result to the requestor via the callback URL provided as a property of the message payload.

image

The method MakeOCRRequest is responsible for calling out to the OCR service and then determining how to handle the response:

image

A key thing to note within this function: as the function is dependent on the OCR service being available, it needs to handle the exception when the service is unavailable. If the service is unavailable the requestor may want to inform the user that they need to try later, or in our case automate that process by having the requestor retry the whole process automagically.

By default, if the function fails there will be a maximum of 5 retry attempts. If the last retry attempt fails, the original queue message is added to a poison-message queue. Adding the message to a poison-message queue means it will not be acted upon again, but provides a user with some notification that the message has failed.

In our case we wanted to override this behaviour by preventing the message being added to the poison-message queue. We did this by monitoring the number of retries so that on the last retry we threw a MaxRetryException (a custom exception), which we then in turn handled by returning a result with a new status of ‘Retry’. As described for the OCRCallback function above, the Retry status is handled by adding the original message back onto the SmartOCRService queue.

Note, to monitor the number of times the function has retried, add the parameter dequeueCount, of type int, to the signature of the Run(…) method.
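A sketch of that retry guard is shown below, assuming the default limit of 5 attempts. The helper method names are hypothetical and the flow is simplified relative to the MaxRetryException handling in the repository:

```csharp
[FunctionName("SmartOCRService")]
public static async Task Run(
    [QueueTrigger("ocrqueue")] OCRQueueMessage message,
    int dequeueCount,   // supplied by the runtime: how many times this message has been dequeued
    TraceWriter log)
{
    try
    {
        string text = await MakeOCRRequest(message.ImageUrl);
        await PostToCallbackAsync(message.Callback, "Complete", text);
    }
    catch (Exception ex)
    {
        if (dequeueCount >= 5)
        {
            // Final attempt: rather than letting the runtime move the message to
            // the poison queue, report a 'Retry' status back to the caller.
            await PostToCallbackAsync(message.Callback, "Retry", ex.Message);
        }
        else
        {
            throw; // rethrow so the runtime re-queues the message for another attempt
        }
    }
}
```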

Continuous Integration / Deployment

At the time of writing there were certain issues setting up Continuous Integration / Deployment from within Visual Studio 2017 Preview (2). It was a requirement of the Partner that they needed this continuous infrastructure in place.

The original plan was to investigate how this could be done by trying out ourselves the setup within Visual Studio Team Services, but after some researching on the internet we found a great blog which set out the steps perfectly.

If you are interested in using Continuous Integration / Deployment I would suggest following the steps found here.

Application Monitoring

The final requirement the partner had was to be able to monitor their Azure Functions. After several iterations it was decided that the best way to get insights into how the application was performing was to integrate Microsoft Application Insights.

If you are interested in using Application Insights inside your Azure Functions then I would suggest you read the following blog post found here.

If you find Application Insights is overkill for your projects then I would suggest having a look at the following documentation.

A common bot use case that is often discussed is how to have a bot contact a user based on some triggered event, lengthy job, or external event such as a state change in the system. These types of bots are often referred to as proactive bots.

In this post I am going to show you how to create a proactive bot using the Microsoft Bot Framework and Microsoft Azure. I have decided to write the solution using C# but all of the concepts are just as valid for Node.js as well.

The scenario we are going to cover is simple.

Ask a bot to notify me when an order is despatched

To follow the steps in this post you will need:

Here is an overview of the general architecture you will be building:

General Architecture

You can find all the code you need in my GitHub here.

1. Building and Registering the Bot.

The code to build the bot can be found here. To use this code you must:

  • First decide where to publish your bot. If you are using Visual Studio then an easy way to do so is to right-click the project and choose publish. From here you can choose to publish directly to Azure. If you require additional help deploying your app then I encourage you to look at this resource. Please make a note of the website url – you will need this in the next step.
  • Next, register your bot in the Microsoft Bot Framework. Full instructions on how to do this can be found here. Important: the messaging endpoint is the URL noted above with /api/messages appended, e.g. https://mybot.azurewebsites.net/api/messages. You should make a note of the Bot Handle, Microsoft App ID and the Microsoft App Password (auto-generated) as you will need these values in the next step.

Register Bot

  • Back in the Visual Studio project, update the web.config with the Bot Handle, Microsoft App ID and Microsoft App Password.

Web Config

  • Next, configure an Azure Storage Account. This Storage Account will be used for both the Table Storage and Queues. If you don’t already have a Storage Account then follow these steps here to create a new account. Once created you will need the Storage Account connection string. To get this string from the Azure Portal, find your Storage Account and under Settings—>Access Keys, copy the Connection String value and paste it as the value of the StorageConnectionString in the web.config.
  • Finally, rebuild the solution and re-publish. You can test the bot works either by adding your Bot to your favourite channel such as Skype, Slack, or Teams or by using the web chat pane – both options can be found in the My bots (choose your bot) menu option in the Microsoft Bot Framework portal.

2. Creating the Azure Functions

Azure Functions allow you to develop and deploy small pieces of code in the language of your choice and in the cloud, without setting up infrastructure. If you are new to Azure Functions then I would definitely recommend having a look at this tutorial. As well as being a great getting-started guide, it will also introduce you to working with Azure Queues, which form the basis of our solution.

As per the general architecture diagram above, we will need to create 2 Azure Functions: one to capture the subscription posted from the Bot, and another to capture the back-office event being completed. Looking at each one in turn:

Subscription Trigger

This function is triggered when a new Queue Message is added to an Azure Queue. To create this function select the ‘+’ symbol under the Azure Function Plan and choose the QueueTrigger-CSharp template. Give your function the name SubscriptionTrigger, then specify the Queue name as subscription-items and select from the dropdown the Storage Account you created previously.

Once the function has been created, you need to allow the function to update an Azure Table. To do this select the Integrate menu option under the Azure Function – select + New Input and choose Azure Table Storage. Change both the Table parameter name and Table name to subscriptionTable, and select from the dropdown the Storage Account you created previously.

Back in the body of the function (run.csx) copy the code found here. Note, the parameter names for the types SubscriptionMessage and CloudTable need to match what you entered as part of the Integrate step. Choose Save and I suspect you will get an error complaining about the type ConversationReference. This is a class in the Microsoft.Bot.Builder framework, which is available as a NuGet package. We therefore need to include a reference to this framework, and to do so we need to add a new file. The diagram below shows the new file (project.json) and the associated code:

Project Json
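The project.json content is along these lines; the Bot Builder version shown is illustrative, so pick the version you are targeting:

```json
{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.Bot.Builder": "3.5.5"
      }
    }
  }
}
```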

Once you have added the file, try saving the function again and hopefully the code should compile successfully.

At this stage, you should be able to test your code to see if the subscription side of the solution is working successfully. To check, go to the bot channel (e.g. web chat, Skype, Slack, etc.) of your choice and type the message #1234. You should get a response from the bot that the subscription to that order has been noted. If you look in your Azure Table Storage (try the tool Azure Storage Explorer) you should see a new row in the subscriptionTable.

Event Trigger

Like the previous function, this function is triggered when a new Queue Message is added to an Azure Queue. This time however, rather than the user triggering the new message from their bot, the message gets added by a back-office system when the order has been despatched.

Therefore, as per the Subscription Trigger, create a new function using the QueueTrigger-CSharp template. The Queue name should be set as event-items and the Storage Account should be the account you set up previously.

Once the function has been saved, again as per the Subscription Trigger, use the Integrate menu item to add a new Azure Table Storage input. As this function will be reading from the table created as part of the subscription trigger, you should use the same Table name, subscriptionTable.

This function also makes use of the Microsoft.Bot.Builder type ConversationReference so as per above add a new file called project.json and add a reference to Microsoft.Bot.Builder.

Now click on the file run.csx and copy the code from here.

Finally, one thing you will notice in the code is that it references MicrosoftAppId and MicrosoftAppPassword. These values are required to be added to the Function’s app settings. To add them go to the Function App—>Platform Features—>Application settings.

App Setting Menu

Under App settings in the task pane that will open you should add the names MicrosoftAppId and MicrosoftAppPassword. The values for these 2 new keys should be values you captured when registering your bot. Note, they should match what you have in your Bot project’s web.config file.

3. Configuring the ‘Some Back-office App’

The final project that I have included in the solution is a simple console app, the DeleteMeClient (the DeleteMe is meant to convey that it should be thrown away and replaced with a concrete app).

This console app is meant to represent the back-office app that is responsible for triggering the bot to tell the user their order has now been despatched.

To use the app you will need to update its app.config with the correct Storage Account connection string, as per the setting specified in the Bot project’s web.config. You should also ensure the queue name in Program.cs matches the one specified in the Event Trigger’s integration. If you have followed my recommendation this should be event-items.

4. Pulling it all together!

Now that we have our Bot published and our Azure Function services configured, let’s pull it all together and play out the scenario. Note, to start any proactive scenario the user must interact with the bot first.

  • Using the Bot Channel of your choice (for illustration purposes I have configured Skype), type an order number in the format #XXXX, e.g. #1234
  • You should get a message response telling you your request has been noted.

image

  • Now run the DeleteMeClient console app and when prompted for an order number enter 1234
  • The bot will respond to you letting you know the order has been despatched:

image

Try extending this simple scenario by subscribing to the same order but in a different channel, e.g. Slack or Microsoft Teams. You should see that when the order is despatched you get a notification in both channels.

5. What if the user wants to act on the notification they have been sent?

Usually when we start a dialog / conversation with a bot we do so with the goal of achieving a desired outcome. To help a user reach that desired outcome the bot is usually designed with some aspect of workflow in mind. In this scenario that workflow is disrupted when the user is notified that the order has been despatched. Simply notifying the user is not always enough… what if the user wants to act upon the notification and get access to further actions? In a world of prescribed workflows this is not often easy. What we need to solve is how to disrupt a prescribed dialog with a notification, let the user act upon that notification, and then once they have finished let them continue with their original dialog.

We can solve this problem using Scorable Dialogs:

Scorable dialogs monitor all incoming messages to a bot and decide if they should try to handle a message. If a scorable matches against an incoming message then it can handle the response to the user, rather than the message being picked up by the current dialog in the stack.

In our solution we have defined a Scorable (ScorableActions.cs) to handle when a user responds with the message ‘show actions’. In this context the Bot handles this request by creating a new dialog with the user asking them if they want to view the order or notify the customer of the order.
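For reference, the general shape of a Bot Builder v3 scorable is shown below. This is a pared-down sketch of the pattern rather than the exact contents of ScorableActions.cs, and the class and dialog names are illustrative:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Dialogs.Internals;
using Microsoft.Bot.Builder.Scorables.Internals;
using Microsoft.Bot.Connector;

public class ShowActionsScorable : ScorableBase<IActivity, string, double>
{
    private readonly IDialogTask task;

    public ShowActionsScorable(IDialogTask task)
    {
        this.task = task;
    }

    // Inspect every incoming activity; return non-null state if we want to handle it.
    protected override Task<string> PrepareAsync(IActivity activity, CancellationToken token)
    {
        var message = activity.AsMessageActivity();
        if (message?.Text != null &&
            message.Text.Equals("show actions", StringComparison.OrdinalIgnoreCase))
        {
            return Task.FromResult(message.Text);
        }
        return Task.FromResult<string>(null);
    }

    protected override bool HasScore(IActivity item, string state) => state != null;

    protected override double GetScore(IActivity item, string state) => 1.0;

    // We won the score: interrupt the current dialog with our actions dialog.
    protected override async Task PostAsync(IActivity item, string state, CancellationToken token)
    {
        var dialog = new ScorableActionsDialog(); // hypothetical actions dialog
        var interruption = dialog.Void<object, IMessageActivity>();
        await this.task.Forward(interruption, null, (IMessageActivity)item, token);
    }

    protected override Task DoneAsync(IActivity item, string state, CancellationToken token)
        => Task.CompletedTask;
}
```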

The result: the conversation is disrupted and the user is allowed to complete an action:

image

For more details on Scorable Dialogs, I would highly recommend you watch MVP James Mann’s video and read MVP Gary Pretty’s blog post.

6. What about the disruption to the conversation flow?

The flow above is very simple. In the real world an order won’t be despatched in 5 minutes and I suspect a bot’s applicability will be more than: tell me when an order is despatched.

My point is, the triggered event may occur after 5 minutes, 5 hours, 5 days, even 5 months or 5 years. Given this time frame, there is the potential that the user may already be in mid-conversation with the bot when a notification is triggered. The complication then is how to respond back to the user with the notification and offer them further actions, without dramatically affecting the user’s current dialog and therefore their experience.

Let’s consider the following conversation flow:

  • User asks the bot to notify them when order #5674 is despatched
  • The bot immediately responds telling the user their request has been noted.

image

  • The user then starts a dialog with the bot unrelated to orders being despatched:

image

  • In the middle of this conversation, the bot responds with the notification that order 5674 has been despatched. The user then decides they want to act upon the notification they have just received, thus disrupting the original conversation flow:

image

Once they view the order, how can we remind the user where they were in the original conversation flow? To do this we make use of the frame collection associated with the bot context. This collection is a list of the current dialogs that are in play. By interrogating this collection and implementing a custom-defined function we can determine the last message that was sent and then include it in the closing response of the Scorable dialog:

image

You can see the code in action by first looking at RootDialog.cs. This class implements a custom interface called ILastDialogMessageSentToUser.

The purpose of this interface is two-fold: to keep track of the last message sent, and to allow the Scorable implementation to filter the bot context collection based on its type:

RootDialog.cs

image

ScorableActionsDialog.cs

image

You’ll notice that in each of the bot responses I have included the name of the dialog responsible for the message. This is deliberate and hopefully illustrates how Scorable Dialogs can help you traverse between dialogs.

7. Summary

In this blog you have learnt how to use the Microsoft Bot Framework and Azure to create a proactive bot. You have also learnt techniques on how to handle disruptions that notifications cause when they interrupt conversational flows.

When using the Microsoft Bot Framework, to make the bot available to a particular channel (e.g. Skype, Slack, Facebook, Teams, etc.) you need to host the Bot service on a public URL endpoint. The channel won’t be able to access your bot service if it is on a local server port hidden behind a NAT or firewall.

When designing / building / testing your code you don’t always want to keep redeploying and, more importantly, paying hosting costs – introducing ngrok.

ngrok allows you to create secure tunnels to localhost and thereby exposes your local server to the internet.

So what I thought I would do is walk you through:

  1. configuring a simple Node.js bot using the Bot Builder SDK samples and deploying the bot to your localhost server
  2. using ngrok to expose your bot to the public internet
  3. registering the bot on the Microsoft Bot Framework
  4. adding the bot to your Skype channel

 

1. Create a simple Hello, World! Bot

The easiest way to do this is to use one of the sample bots found here. Follow the instructions to clone the repository and then navigate to one of the samples. If you are getting started with bots then my advice is to start with one of the Hello World samples. For the purpose of this walkthrough I used the hello-ChatConnector.

If you want to make sure your bot is working without having to register it first, try using the Bot Framework Emulator following these steps:

  1. In a Node.js console navigate to the hello-ChatConnector found at \botbuilder\Node\examples\hello-ChatConnector
  2. Run the command node app.js
  3. Make a note of the port; if you have not changed any of the app.js code the port should be 3978
  4. Open the emulator and change the Bot Url to http://localhost:3978/api/messages
  5. Try the emulator by posting a message to the chat window. You should see something like this:
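The commands for those steps look like this, assuming the default port and the repository layout described above:

```shell
# terminal 1: run the sample bot on localhost
cd botbuilder/Node/examples/hello-ChatConnector
node app.js                 # listens on port 3978 by default

# terminal 2: tunnel the port to a public ngrok URL
ngrok http 3978
```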

 

image

 

2. Using ngrok

Given the bot is being hosted on localhost:3978, now use ngrok to expose this port to the public internet. To do this simply type ngrok http 3978 at a command prompt. You should see the following UI:

image

Copy the forwarding Url – this is the public endpoint for the Bot service.

For the purpose of trying this out in the emulator, copy this Url to the Bot Url as per #1 – remember to include /api/messages.

You should notice that the emulator URL now shows an error! To solve this, open another command prompt and type ngrok http --host-header=rewrite 9000

Doing so should now enable you to run the emulator but using public endpoints:

image

 

3. Register the bot on the Microsoft Bot Framework

If you haven’t already signed up to the Microsoft Bot Framework then you need to do that first. Once signed up and signed in, choose the Register a Bot tab.

The form will guide you through the steps required to register the bot. Ensure you copy the Microsoft App ID and Password somewhere safe; you will need to add these to your Bot service code. For the Messaging Endpoint use the forwarding URL captured in #2 that you used as the Bot Url.

4. Adding the bot to the Skype channel

Go back to the bot sample configured in step #1. You need to amend the app.js code file to include the Microsoft App ID and Password copied in step #3.

Once you have saved the changes follow the steps in #1 and #2 to deploy your bot to your local server and then use ngrok to make it public to the internet.

To ensure the changes you have made still work, use the emulator to test – you will need to update the Microsoft App ID and Microsoft App Password with the values captured in step #3. If you don’t, you will get a 401 Unauthorized.

Finally, go back to the Microsoft Bot Framework portal and add your bot to the Skype Channel.

 

image

 

And that is it… you have your first bot running locally, served to the public internet using ngrok and available to chat with using the Skype channel.

As a next step, you can debug your bot service within Visual Studio Code using the steps described here.

Has your customer asked you, “Which build of Office do I need in order to get that cool API or requirement set?”

Here is some great guidance that addresses that question. Check out these topics:

Ever wanted to develop for Skype for Business? Then here is a great set of resources to get you started: 

Skype Web & App SDK documentation: https://msdn.microsoft.com/en-us/skype/skypedeveloperplatform

Skype Preview program: https://skypepreview.com

Office development: https://dev.office.com/Skype

Web SDK Overview: https://msdn.microsoft.com/en-us/skype/websdk/skypewebsdk

Web SDK API keys & feature grid: https://msdn.microsoft.com/en-us/skype/websdk/apiproductkeys

Web SDK API reference: https://ucwa.skype.com/reference/WebSDK/modules/_s4b_sdk_d_.jcafe.html

UCWA 2.0 General reference: https://msdn.microsoft.com/en-us/skype/ucwa/ucwa2_0generalreference

UCWA 2.0 API reference: https://msdn.microsoft.com/en-us/skype/ucwa/ucwa2_0apireference

Skype App SDK: https://msdn.microsoft.com/en-us/skype/appsdk/skypeappsdk

Skype App iOS API reference: https://ucwa.skype.com/reference/appSDK/IOS/

Skype App Android reference: https://ucwa.skype.com/reference/appSDK/Android/

Code Samples

If you haven’t read Part 1 of this series then let me recap. Last week the Microsoft Office team announced the general availability of the Excel REST APIs for the Microsoft Graph. With these new endpoints it is now possible to interact with your existing Excel workbooks using REST from your mobile, desktop and web apps, without needing to open the Excel app – pretty awesome, eh?

In my previous post I demonstrated how you can call one of the 300+ built-in functions that Excel supports out-of-the-box. Sometimes though, the built-in functions that come with Excel are not enough. Sometimes we need to build our own Excel calculations to support our own LoB reports and visualisations. In this post I will demonstrate how you can re-use that functionality within your own modern day web and mobile apps.

For the purposes of this post I have created a simple Restaurant Bill Calculator within Excel:

Excel Bill Calculator

This is a very simple example, but what I am trying to convey is the re-use of the calculation logic (in this case Bill + Tip and Amount per Person), not its complexity.

I then created a visual representation of the Excel Workbook in an ASP.NET application – of course I didn’t need to create a web app, I could have created just a web API or a Console app.

Web App Calculator

So in this example we use the new Excel REST API to set the Excel Workbook named items Bill Amount, Tip, and Number in Party. We then use the same API to get the results by asking for the named items Total Bill and Amount per Person.
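Under the hood these are just two HTTP calls against the workbook's named items. Here is a minimal sketch using fetch – it assumes you have already acquired a Graph access token, that the workbook sits in the root of the user's OneDrive as described below, and that the named items are defined without spaces (e.g. BillAmount, TotalBill – the exact names in the sample may differ):

```javascript
// Sketch only: endpoint shape follows the Graph Excel API
// (…/workbook/names/{name}/range). Token acquisition is out of scope here.
const GRAPH_ROOT =
  "https://graph.microsoft.com/v1.0/me/drive/root:/RestaurantBillCalculator.xlsx:/workbook";

// Build the URL for a named item's range.
function namedItemUrl(name) {
  return `${GRAPH_ROOT}/names/${encodeURIComponent(name)}/range`;
}

// PATCH a new value into an input named item, e.g. "BillAmount".
async function setNamedItem(token, name, value) {
  return fetch(namedItemUrl(name), {
    method: "PATCH",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ values: [[value]] }),
  });
}

// GET the calculated result back out, e.g. "TotalBill".
async function getNamedItem(token, name) {
  const res = await fetch(namedItemUrl(name), {
    headers: { Authorization: `Bearer ${token}` },
  });
  const range = await res.json();
  return range.values[0][0]; // single-cell named item
}
```

The nice part of this design is that Excel recalculates the workbook server-side, so the write followed by the read is all the orchestration the web app needs.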

In summary, complex business models no longer need to be rebuilt repeatedly; developers can leverage Excel to perform those calculations instantly and retrieve the results with simple API calls.

The full source-code for my sample can be found here. Please refer to the readme file for the prerequisites and how to configure the app for your Azure and O365 accounts. One important note: the sample relies on you having an Excel workbook called RestaurantBillCalculator.xlsx in the root of your tenant’s OneDrive – if the file does not exist, it is uploaded the first time you run the application. There are obvious changes that could make the code more performant, such as using a single namedItem that contains all of the fields rather than a namedItem per field, and not checking whether the file exists on every run.

If you want to start writing your own apps that use the Microsoft Graph then check out the following blog on how to get started with the Graph Excel REST API and, as always, refer to the Microsoft Graph API documentation for examples and syntax.

It’s been a while since my first post, but I have been waiting for the big announcement that I’m sure is going to change people’s mind-sets on how they interact with Excel today – introducing the general availability of the Microsoft Graph REST APIs for Excel.

In this post I am going to give you an example of how to use Excel as a Calculation Service, demonstrating how you can call one of the 300+ Excel functions from an ASP.NET MVC application using the Graph REST APIs for Excel.

The function I am going to call is the one highlighted in the GA blog post – PMT. The Excel PMT function calculates the constant periodic payment required to pay off (or partially pay off) a loan or investment, with a constant interest rate, over a specified period. The syntax of the function is:

PMT(rate, nper, pv, [fv], [type])

Where:

rate = the interest rate per period, expressed as a decimal (e.g. 0.035 for 3.5%)

nper = the number of periods over which the loan or investment is to be paid

pv = the present value of the loan / investment

[fv] = an optional argument that specifies the future value of the loan / investment, at the end of the nper payments

[type] = an optional argument that defines whether the payment is made at the start or the end of the period.
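Before wiring up the REST call, it can help to sanity-check the maths locally. This is a sketch of the standard annuity formula behind PMT, assuming the default case of payments at the end of each period (type = 0) and no future value unless supplied – it is a local approximation for checking results, not the Graph API itself:

```javascript
// Local sketch of the PMT calculation (type = 0: payment at period end).
function pmt(rate, nper, pv, fv = 0) {
  if (rate === 0) return -(pv + fv) / nper; // no-interest edge case
  const factor = Math.pow(1 + rate, nper);  // compound growth over nper periods
  return (-(pv * factor + fv) * rate) / (factor - 1);
}

// pmt(0.035, 20, -2000) ≈ 140.72 per period
```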

This function translates to the following REST API request:

POST …/workbook/functions/pmt
{ "rate": 0.035, "nper": 20, "pv": -2000 }

Note, don’t fall into the same trap as I did when calling the function – because it is a calculation, the values in the POST request body are numbers, not strings!
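One way to guard against that trap is to validate the argument types before serialising the request body. This small helper (a sketch, not part of the sample's source) rejects strings up front:

```javascript
// Build the JSON body for POST …/workbook/functions/pmt,
// failing fast if any argument is not a JSON number.
function buildPmtBody(rate, nper, pv) {
  [rate, nper, pv].forEach((v) => {
    if (typeof v !== "number") {
      throw new TypeError("PMT arguments must be numbers, not strings");
    }
  });
  return JSON.stringify({ rate, nper, pv });
}
```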

Check out the following blog on how to get started with the Graph Excel REST API and, as always, refer to the Microsoft Graph API documentation for examples and syntax.

The full source-code demonstrating how to call the PMT function using the Graph REST API can be found here. Please refer to the readme file for the prerequisites and how to configure the app for your Azure and O365 accounts. One important note: the sample relies on you having an Excel workbook called book.xlsx in the root of your tenant’s OneDrive. To make the solution friendlier to a wider audience, why not try extending the code to automatically create a new workbook if Book.xlsx does not exist? [Hint, my next post includes the code to show you how this can be done]

In my next post I will demonstrate how you can re-use your own existing Excel workbook functions in modern-day web applications.

Managing Authentication flow within Office Add-ins has just got a whole lot easier with the introduction of the new Office UI Dialog API.

This post shows you how to create a simple Excel Add-in Command in Visual Studio 2015 that uses the new Dialog API to authenticate a user and then import their Outlook Contacts using the Microsoft Graph API.

If you are new to developing O365 Add-ins then I would highly recommend you check out the Getting Started page of dev.office.com. There are reams of examples and samples to help you get up to speed with links to the Office SDKs and a Yeoman Generator for Office if Visual Studio is not your thing.

Prerequisites

The following is required to complete this module:

Step 1

In this step you’ll create a new Excel Office Add-in using Visual Studio. If you haven’t done so already, make sure you have downloaded and installed the Visual Studio Office Developer Tools.

With the latest Office developer tools installed open Visual Studio and choose File-New-Project. In the dialog prompt choose Office/SharePoint under Installed Templates-Visual C#.

VS Template

Choose Excel Add-in and enter a project name.

You will then be prompted to choose the type of add-in:

Addin type

Accept the default and press Finish.

Visual Studio will create two projects: an Office project, which contains the add-in manifest, and a web project, which contains the content for the add-in. If you want more detail about the anatomy of an Office add-in please refer to the following resource.

Full source code can be found here but I just want to highlight some of the key areas:

  • (Contacts.js) Following the Dialog API guidance, create a displayDialog with the appropriate parameters:

dialog code

  • (Contacts.js) Specify how to handle the Dialog event type DialogMessageReceived. Note how the result message is parsed to extract the AAD access token:

message received code
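In outline, the wiring looks something like the following sketch. Note that the { status, accessToken } message shape is an assumption for illustration: the message is whatever string your dialog page passes to Office.context.ui.messageParent, so adjust the parsing to match your own payload:

```javascript
// Open the auth dialog and handle the DialogMessageReceived event.
function openAuthDialog(url) {
  Office.context.ui.displayDialogAsync(url, { height: 40, width: 40 }, (result) => {
    const dialog = result.value;
    dialog.addEventHandler(Office.EventType.DialogMessageReceived, (arg) => {
      const token = extractToken(arg.message); // parse the string sent by the dialog
      dialog.close();
      // ...use token to call the Microsoft Graph...
    });
  });
}

// Pure helper, kept separate so the parsing logic can be
// exercised outside the Office runtime.
function extractToken(message) {
  const parsed = JSON.parse(message);
  return parsed.accessToken;
}
```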

  • (Auth.js) This solution uses the ADAL JavaScript library to simplify the authentication process. For more information about these libraries please refer to the following resource. The key point here is the simplicity of authenticating with O365 credentials. The solution does not contain any login pages or credential stores – all of that is managed by ADAL based on the parameters that are provided.

auth code
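Those parameters boil down to a small configuration object passed to ADAL's AuthenticationContext. The values below are placeholders (your clientId comes from the app registration described in Step 2), and the option names follow adal.js conventions:

```javascript
// Minimal ADAL JS configuration sketch – placeholder values only.
const authConfig = {
  tenant: "yourtenant.onmicrosoft.com",             // your O365 tenant
  clientId: "00000000-0000-0000-0000-000000000000", // from the app registration
  redirectUri: "https://localhost:44300/",           // must match the registration
  endpoints: { graph: "https://graph.microsoft.com" },
  cacheLocation: "localStorage",                     // survive page reloads
};
// In the add-in: const authContext = new AuthenticationContext(authConfig);
```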

  • (Manifest.xml) The Office manifest defines the requirements and the capabilities of the Office add-in. As this Excel add-in will use Add-in Commands to deliver its functionality, the manifest needs to be structured accordingly. Note that the manifest contains only one control, which executes a JavaScript function rather than launching an add-in task pane.

manifest

  • Based on the definition of the manifest, this add-in has a single entry point: a button on the Home ribbon.

ribbon button

Step 2

In this step you’ll register an app in Azure AD using the Office 365 App Registration Tool. You can also register apps in the Azure Management Portal, but the Office 365 App Registration Tool allows you to register applications without having access to the Azure Management Portal. Azure AD is the identity provider for Office 365, so any service or application that wants to use Office 365 data must be registered with it.

Once you have signed in you will be asked for some details about the app:

registration

Feel free to give the app a name of your choice but the redirect URI must match what is shown above.

Once you hit register you will be shown a second dialog which will hopefully say “Registration Successful” and show you your Client ID. Copy this value as you will need it to complete the Office add-in solution.

Step 3

Back in Visual Studio, open the file App.js, which can be found in the App folder:

app details

Update the Client ID with the value you copied in Step 2. The tenant is either the O365 tenant you created as part of the app registration or your existing O365 tenant name.

Finish

Build the solution.

Ensure the Office Project properties add-in start action is set to Office Desktop Client:

office properties

Press F5.

When you click the Get Contacts button in the Office Ribbon you should be prompted with the following dialog:

Authentication screen

Enter your O365 credentials (a user belonging to the tenant you specified in App.js) and if the user has contacts associated with the account then they should see them automatically added once the dialog is closed.

I hope you have enjoyed this post…feel free to send me any comments.