Building a REST Service with Node.JS, DocumentDb, and TypeScript

REST services are commonly the backbone of modern applications. They provide the interface to the back-end logic, storage, security, and other services that are vital to an application, whether it's a web app, a mobile app, or both!

For me, as recently as a year ago, my solutions were typically built with the .NET stack: ASP.NET Web API based services written in C#, various middleware plugins, and backend storage that usually involved SQL Server. Those applications could then be packaged up and deployed to IIS on-premises or somewhere in the cloud such as Microsoft Azure. It's a pretty common architecture and has been a fairly successful recipe for me.

Like many developers, however, in recent years I've spent an increasing amount of time with JavaScript. My traditional web application development has slowly moved from .NET server-side solutions such as ASP.NET Web Forms and ASP.NET MVC to client-side JavaScript solutions using popular frameworks such as Angular. As I began working with the beta of Angular 2 I was first introduced to TypeScript and quickly grew to appreciate JavaScript even further.

Having spent so much time in JavaScript, I was intrigued by the idea of a unified development stack based entirely on JavaScript that wasn't necessarily tied to any particular platform. I had also started spending much more time with Microsoft Azure solutions, and the combination of a REST service built on Node.JS, Express, TypeScript, and DocumentDB seemed very attractive. With that goal in mind, I couldn't find a single all-inclusive resource that provided what I was looking for, especially with DocumentDB, so I worked through a sample project of my own, which I'm sharing in this blog post to hopefully benefit others on the same path.

A quick message on Azure DocumentDB

One of the foundations of this solution is Azure's DocumentDB service. DocumentDB is one of Azure's schema-free NoSQL/JSON database offerings. If you've had experience with MongoDB, you will find DocumentDB very familiar. In fact, DocumentDB introduced protocol compatibility with MongoDB, so your existing MongoDB apps can be quickly migrated to DocumentDB. In addition to all the benefits you might expect from a NoSQL solution, you also get the high-availability, highly scalable, low-latency benefits of the Azure platform. You can learn more about DocumentDB over at


Before getting started, there are a couple of prerequisites I suggest for the best experience while working with the source code of this project.

Visual Studio Code

If you're not using Visual Studio Code, I highly encourage you to check it out. It's a fantastic free code editor from Microsoft. Grab yourself a copy at . If you're already using another editor such as Atom or Sublime you're still good to go, but you will need to make adjustments to the debugging and development workflow for your editor. If you're using Visual Studio Code you should have a solid "F5" debugging experience with the current configuration.

Azure DocumentDB Emulator

This project is pre-configured to use the Azure DocumentDB Emulator, so no Azure subscription is required! The emulator is still in preview but is pretty solid, and it lets you evaluate DocumentDB without any costs or subscriptions. More details on the emulator and links to download can be found at .

If you'd like, you can also use a live instance of DocumentDB with your Azure subscription, but please make sure you understand the cost structure of DocumentDB, as creating additional collections within a DocumentDB database has a cost involved.

Getting Started

For this sample project we'll be building a hypothetical photo location information storage service API. Our API will accept requests to create, update, delete, and retrieve photo location information. In a future post we'll spend some more time with DocumentDB's geospatial query features, but for now we'll keep it to simple CRUD operations.

The entire project can be found over at  Feel free to clone or review the code there.

For this project we'll be making use of the following stack:

  • NodeJS - 6.10.0 (Runtime)
  • Express - 4.14.1 (Host)
  • TypeScript - 2.2.1 (Language)
  • Mocha\Chai - 3.2.0\3.5.0 (Testing)
  • Azure DocumentDB Emulator - (Storage)

Project Setup

There are some folks with pretty strong opinions regarding a project's folder structure, but I tend to side with consistency over any particular ideology. Here's the layout for this project:

-- dist
-- src
   -- data
      -- LocationData.ts
      -- LocationDataConfig.ts
      -- PhotoLocationDocument.ts
   -- routes
      -- PhotoLocationRouter.ts
   -- app.ts
   -- index.ts
-- test
   -- photolocation.test.ts
-- gulpfile.js
-- package.json
-- tsconfig.json

Let's break down some of these project elements:

  • dist - build destination for the compiled JavaScript
  • src - project source code containing our TypeScript. We'll break down each of the source files further in the post
  • test - test scripts
  • gulpfile.js - our build tasks
  • package.json - project metadata & NPM dependencies
  • tsconfig.json - TypeScript compiler configuration file


One of my pet peeves with sample projects, especially when I'm new to a technology, is unexplained dependencies. With the multitude of open source tools and libraries available, I often find myself looking up modules to find out what they do and whether I need them. This muddies the waters for those less familiar with the stack, so I find it helpful to keep dependencies to a minimum in my projects and to explain each of them and their purpose. Here's the project's package.json:

  "name": "azure-documentdb-node-typescript",
  "version": "1.0.0",
  "description": "A sample Node.JS based REST Service utilizing Microsoft Azure DocumentDB and TypeScript",
  "main": "dist/index.js",
  "scripts": {
    "test": "mocha --reporter spec --compilers ts:ts-node/register test/**/*.test.ts"
  "repository": {
    "type": "git",
    "url": "git+"
  "keywords": [],
  "author": "Joshua Carlisle (",
  "license": "MIT",
  "bugs": {
    "url": ""
  "homepage": "",
  "devDependencies": {
    "@types/chai": "^3.4.35",
    "@types/chai-http": "0.0.30",
    "@types/debug": "0.0.29",
    "@types/documentdb": "0.0.35",
    "@types/express": "^4.0.35",
    "@types/mocha": "^2.2.39",
    "@types/node": "^7.0.5",
    "chai": "^3.5.0",
    "chai-http": "^3.0.0",
    "del": "^2.2.2",
    "documentdb": "^1.10.2",
    "gulp": "^3.9.1",
    "gulp-mocha": "^4.0.1",
    "gulp-sourcemaps": "^2.4.1",
    "gulp-typescript": "^3.1.5",
    "mocha": "^3.2.0",
    "morgan": "^1.8.1",
    "ts-node": "^2.1.0",
    "typescript": "^2.2.1"
  "dependencies": {
    "body-parser": "^1.16.1",
    "express": "^4.14.1"


  • @types/* - TypeScript definition files so our TypeScript code can interact with external libraries in a defined manner (key to IntelliSense in many editors)
  • typescript - the core TypeScript module
  • gulp/del - a JavaScript-based task runner we use for TypeScript builds and any extra build needs; del is a module that deletes files for us
  • gulp-sourcemaps/gulp-typescript - helper modules that allow us to use gulp to compile our TypeScript during builds
  • mocha - a JavaScript testing framework and test runner
  • chai/chai-http - a testing assertion framework we use for creating tests; we're specifically testing HTTP REST requests
  • gulp-mocha - a gulp task to help us run mocha tests in gulp (NOTE: I'm running into issues with this one, so it remains while I sort it out, but we'll be running tests from NPM scripts - more on this further in the post)
  • ts-node - a TypeScript helper module used by mocha for tests written in TypeScript
  • documentdb - Azure's DocumentDB JavaScript module
  • express - our service host
  • body-parser - a module that helps express parse JSON body parameters automatically for us

Setting up and working with Typescript

TypeScript is fairly straightforward, especially for developers who are comfortable working with compiled languages. Essentially, TypeScript provides us with advanced language features that aren't yet available in the current version of JavaScript. It does this by compiling TypeScript down to a targeted version of JavaScript. For browser applications this is typically ES5, but for Node.JS-based applications we can reliably target ES6, which is newer but generally not available in most browsers. The end result of that process is always standard JavaScript; TypeScript is not a runtime environment. For .NET developers, this is very much akin to C# being compiled down to IL. Additionally, to make debugging easier we have the concept of sourcemaps, which map our generated JavaScript back to the original lines of TypeScript code.
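As a tiny illustration of that compile step, the typed function below (modeled loosely on the normalizePort helper we'll meet later in Index.ts) compiles to plain ES6 with all the annotations erased:

```typescript
// TypeScript adds compile-time types; the emitted ES6 JavaScript
// contains no type information at all.
function normalize(val: string | number): number {
    // The union type is checked by the compiler, but at runtime this
    // is just an ordinary JavaScript typeof check.
    return typeof val === "string" ? parseInt(val, 10) : val;
}

// The compiler emits roughly:
//   function normalize(val) {
//       return typeof val === "string" ? parseInt(val, 10) : val;
//   }
```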

Where the confusion often occurs is where, when, and how to compile. There are lots of options. For front-end UI developers webpack is a common tool. For Node.JS projects such as this, another common approach, and the one we use in this project, is gulp.

Configuring Typescript and Gulp

Getting TypeScript to compile is typically pretty straightforward, but getting it to compile properly for debugging can be another story entirely, and it's typically the pain point for many developers.

The first file we'll work with is tsconfig.json, which provides compiler options to the TypeScript compiler.

  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "outDir": "dist"
  "include": [
  "exclude": [


In the case of our project we want to target ES6 JavaScript, use commonjs as our module format, and specify which directories we want the compiler to include and exclude.


Type definition files allow TypeScript to work with external libraries that weren't originally developed in TypeScript. Most popular JavaScript libraries have had type definitions written either by the project itself or by contributions from the TypeScript community at large. They define the interfaces, types, functions, and data types that the library exposes. The way TypeScript references and works with types changed between versions 1.x and 2.x, so you may still see references to the old way. If a library doesn't already include types, ensure you're using @types/* to get the latest versions from NPM and that they are in sync with the version of the library you pulled down.
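For a feel of what a type definition provides, here's a hand-written sketch of the kind of declarations you'd find in an @types package. The module and function names are made up for illustration, and the same shapes are then implemented inline so the idea is runnable:

```typescript
// What a *.d.ts file might declare for a hypothetical "geo-utils"
// JavaScript library -- no implementation, just shapes:
//
//   declare module "geo-utils" {
//       export interface Point { lat: number; lon: number; }
//       export function distance(a: Point, b: Point): number;
//   }
//
// With that in place the compiler can type-check calls against the
// library and editors can offer IntelliSense. The same idea, inline:
interface Point { lat: number; lon: number; }

function distance(a: Point, b: Point): number {
    // Simple planar distance -- fine for illustration purposes.
    const dLat = a.lat - b.lat;
    const dLon = a.lon - b.lon;
    return Math.sqrt(dLat * dLat + dLon * dLon);
}
```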


Gulp is a JavaScript-based task runner. It has a very large community of plugins to help execute practically every conceivable task you might need for a build. To go back to a .NET comparison, gulp is akin to the features you may find in MSBuild.

var gulp = require("gulp");
var ts = require("gulp-typescript");
var mocha = require('gulp-mocha');
var tsProject = ts.createProject("tsconfig.json");
var sourcemaps = require("gulp-sourcemaps");
var del = require("del");
var path = require("path");

/* Test Tasks
WARNING: GULP MOCHA task is a work in progress and currently has issues.
NPM Script "test" (package.json) is currently more reliable.
gulp.task('test', function(){
    return gulp.src('./test/**/*.ts')
        .pipe(mocha({
            reporter: 'progress'
        }));
});
*/

/* Cleanup the DIST folder before compiling */
gulp.task('clean', function(){
    return del('dist/**/*');
});

/* Compile the Typescript */
/* IMPORTANT: The sourcemaps settings here are important or the sourcemap url and source path in the source
maps will not be correct and your breakpoints will not hit - this is especially important for subfolders in the dist folder */
gulp.task('compile', ['clean'], function () {
    var result = tsProject.src()
        .pipe(sourcemaps.init())
        .pipe(tsProject());
    return result.js
        .pipe(sourcemaps.write('.', {includeContent: false, sourceRoot: '.'}))
        .pipe(gulp.dest('dist'));
});

/* The default task will allow us to just run "gulp" from the command line to build everything */
gulp.task('default', ['compile']);


Gulp tasks are defined as JavaScript functions and can optionally declare other tasks as dependencies to be executed first, represented syntactically by the optional array of task names as the second argument to gulp.task. In our case we have a "compile" task that executes the gulp TypeScript plugin and works in conjunction with the sourcemaps plugin to compile and output our JavaScript to the dist directory. Note that the "dist" directory appears both within the gulp task and in tsconfig.json. The gulp plugin uses the base settings from tsconfig.json to execute the TypeScript compilation process but also supports gulp's "stream" concept to output files to a desired location.

TAKE NOTE: If you change the commands within the compile task you may not have full debugging support within Visual Studio Code. The settings provided ensure that the sourcemaps have the correct references and paths used by Visual Studio Code to allow breakpoints to work as expected. These few lines of code took me several hours of research and trial & error before all the stars lined up. If you have problems, the first place to start looking is your generated *.map files; ensure the paths they contain are what you expect.

Configuring Express and your API Routes

Express is a web application framework for Node.js. Express will host our REST services and provide all the middleware plumbing we need for our service.


Index.ts is essentially responsible for wiring our express-based application into the HTTP pipeline within Node.JS. It's also the entry point for our application, where we can configure options such as the port to listen on.

import * as http from 'http';
import * as debugModule from 'debug';

import App from './App';

// 'debug' exports a factory: create a namespaced logger first
// (the namespace name here is illustrative).
const debug = debugModule('api:server');

const port = normalizePort(process.env.PORT || 3000);
App.set('port', port);

const server = http.createServer(App);
server.listen(port);
server.on('error', onError);
server.on('listening', onListening);

function normalizePort(val: number | string): number | string | boolean {

    let port: number = (typeof val === 'string') ? parseInt(val, 10) : val;

    if (isNaN(port)) return val;
    else if (port > 0) return port;
    else return false;
}

function onError(error: NodeJS.ErrnoException): void {

    if (error.syscall !== 'listen') throw error;
    let bind = (typeof port === 'string') ? 'Pipe ' + port : 'Port ' + port;
    switch (error.code) {
        case 'EACCES':
            console.error(`${bind} requires elevated privileges`);
            process.exit(1);
            break;
        case 'EADDRINUSE':
            console.error(`${bind} is already in use`);
            process.exit(1);
            break;
        default:
            throw error;
    }
}

function onListening(): void {
    let addr = server.address();
    let bind = (typeof addr === 'string') ? `pipe ${addr}` : `port ${addr.port}`;
    debug(`Listening on ${bind}`);
}


NOTE: I need to give some credit to another developer for Index.ts: I know many aspects of this module were copied from somewhere, but unfortunately I can't recall where. If I do find the original source I'll make sure to update the code and this post with the developer's name.


App.ts sets up an instance of express and provides us a location to configure our middleware, with some additional plumbing such as parsing JSON, URL encoding, and some basic logging. Also key to our application is the configuration of the routes used by our API and the code that handles them. Other possible middleware components could be authentication, authorization, and caching, just to name a few.
/*  Express Web Application - REST API Host  */
import * as path from 'path';
import * as express from 'express';
import * as logger from 'morgan';
import * as bodyParser from 'body-parser';
import PhotoLocationRouter from './routes/PhotoLocationRouter';

class App {

    public express: express.Application;

    constructor() {
        this.express = express();
        this.middleware();
        this.routes();
    }

    private middleware(): void {
        this.express.use(logger('dev'));
        this.express.use(bodyParser.json());
        this.express.use(bodyParser.urlencoded({extended: false}));
    }

    private routes(): void {
        let router = express.Router();
        this.express.use('/api/v1/photolocations', PhotoLocationRouter);
    }
}

export default new App().express;



The core of how our service handles requests at a particular endpoint is managed within the router. We define the functions that execute for a given HTTP verb and any appropriate parameters:

this.router.get("/:id", this.GetPhotoLocation);"/", this.AddPhotoLocation);

For more complex applications it can be beneficial to follow this separation of routes from the app. In the case of our sample project we're following REST conventions, using GET/POST/PUT/DELETE appropriately for the different CRUD operations.

public GetPhotoLocation(req: Request, res: Response){

    let query: string =;
    var data: LocationData = new LocationData();

    data.GetLocationAsync(query).then( requestResult => {
        res.status(200).json(requestResult);
    }).catch( e => {
        res.status(404).send({
            message: e.message,
            status: res.statusCode});
    });
}


Each operation has a Request and a Response object to interact with. Through the request object we can access parameters from either the URL or the body. Our body-parser middleware neatly packs the body data into a JSON object (given the correct content-type header), and any URL parameters get extracted from the URL and packed into a property named according to the pattern provided in the route. In the above example we're able to access the "id" parameter. The response object allows us to respond with specific, appropriate HTTP status codes and any appropriate payload of data.

/* src/routes/PhotoLocationRouter.ts */
import { Router, Request, Response, NextFunction } from 'express';
import { LocationData } from '../data/LocationData';
import { PhotoLocationDocument } from '../data/PhotoLocationDocument';

export class PhotoLocationRouter {

    public router: Router;

    constructor() {
        this.router = Router();
        this.init();
    }

    public GetPhotoLocation(req: Request, res: Response){

        let query: string =;
        var data: LocationData = new LocationData();

        data.GetLocationAsync(query).then( requestResult => {
            res.status(200).json(requestResult);
        }).catch( e => {
            res.status(404).send({
                message: e.message,
                status: res.statusCode
            });
        });
    }

    public AddPhotoLocation(req: Request, res: Response){

        var doc: PhotoLocationDocument = <PhotoLocationDocument>req.body;
        var data: LocationData = new LocationData();

        data.AddLocationAsync(doc).then( requestResult => {
            res.status(201).json(requestResult);
        }).catch( e => {
            res.status(400).send({
                message: e.message,
                status: res.statusCode
            });
        });
    }

    public UpdatePhotoLocation(req: Request, res: Response){

        var doc: PhotoLocationDocument = <PhotoLocationDocument>req.body;
        var data: LocationData = new LocationData();

        data.UpdateLocationAsync(doc).then( requestResult => {
            res.status(200).json(requestResult);
        }).catch( e => {
            res.status(404).send({
                message: e.message,
                status: 404
            });
        });
    }

    public DeletePhotoLocation(req: Request, res: Response){

        let query: string =;
        var data: LocationData = new LocationData();

        data.DeletePhotoLocationAsync(query).then( requestResult => {
            res.status(200).json(requestResult);
        }).catch( e => {
            res.status(404).send({
                message: e.message,
                status: 404
            });
        });
    }

    private init(): void {
        this.router.get("/:id", this.GetPhotoLocation);"/", this.AddPhotoLocation);
        this.router.put("/", this.UpdatePhotoLocation);
        this.router.delete("/:id", this.DeletePhotoLocation);
    }
}

const photoLocationRouter = new PhotoLocationRouter();

export default photoLocationRouter.router;


NOTE: I dislike peppering my blog posts with disclaimers, but this configuration is intended for development use only. Additional configuration options can and should be applied for production applications, especially around security and threat hardening. I say this because of a recent Express vulnerability that left a lot of Node.JS sites unnecessarily exposed. For a good place to start, check out Helmet over at

Working with DocumentDB

DocumentDB is a NoSQL solution that at its core stores JSON documents. There are various best practices around working with DocumentDB, but sticking with the basic concepts, we'll be creating, updating, deleting, and querying those JSON documents. Having recently worked with DocumentDB within the .NET Framework using LINQ, this was a bit of a departure, but luckily DocumentDB supports standard SQL queries, so my years of working with relational databases paid off once again! It was great foresight from Microsoft to make use of SQL instead of implementing yet another query language (I'm pointing at you, SharePoint CAML!)

DATABASE AND COLLECTION CREATION: This sample project uses a DocumentDB database and collection. The code does NOT create those for you, so the expectation is that you will do this step up front. The database is named photolocations and the collection locations. Because DocumentDB collections and databases have a direct cost involved (outside the emulator, of course), I'm not a fan of code that generates these entities for you; I'd rather have the process be explicit.
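Pre-creating the database and collection matters because DocumentDB addresses everything through hierarchical resource links. The helper below is purely illustrative, but the link format follows DocumentDB's REST conventions and shows exactly what the data layer later builds by string concatenation:

```typescript
// DocumentDB resource links are hierarchical paths:
//   database:   dbs/{databaseId}
//   collection: dbs/{databaseId}/colls/{collectionId}
//   document:   dbs/{databaseId}/colls/{collectionId}/docs/{documentId}
function databaseLink(databaseId: string): string {
    return `dbs/${databaseId}`;
}

function collectionLink(databaseId: string, collectionId: string): string {
    return `${databaseLink(databaseId)}/colls/${collectionId}`;
}

function documentLink(databaseId: string, collectionId: string, docId: string): string {
    return `${collectionLink(databaseId, collectionId)}/docs/${docId}`;
}

// For this project: database "photolocations", collection "locations".
const locationsCollection = collectionLink("photolocations", "locations");
```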

Defining our document

As an initial step we're going to define that JSON document within our project so we can work with it in a consistent way. In addition to our own properties, DocumentDB adds some standard properties that are part of every document; we can think of them as system fields. To better support these fields, the DocumentDB framework requires us to implement interfaces for new documents (which supply the id) and for retrieved documents, which have additional fields such as the etag used for optimistic concurrency checks.

/* src/data/PhotoLocationDocument.ts */
import {NewDocument, RetrievedDocument} from 'documentdb';

export class PhotoLocationDocument implements NewDocument<PhotoLocationDocument>,
        RetrievedDocument<PhotoLocationDocument> {

    /* NewDocument Interface */
    id: string;

    /* RetrievedDocument Interface (system fields populated by DocumentDB) */
    _self: string;
    _ts: string;
    _rid?: string;
    _etag?: string;
    _attachments?: string;

    /* Photo Location Properties */
    tags: string[];
    address: {
        street: string,
        city: string,
        zip: string,
        country: string
    };
    geoLocation: {
        type: string;
        coordinates: number[];
    };
}


ADVICE: To keep things simple we're exposing the DocumentDB JSON document directly from the REST service. There are many scenarios where you may want a separate model with different or fewer properties returned from the REST calls, based on your application's needs.

DocumentDB operations

The core heavy-lifting class within DocumentDB is DocumentClient. All operations such as queries, updates, inserts, and deletes are done through the DocumentClient class. These functions make heavy use of callbacks, so to make our own data layer friendlier to work against, we wrapped all of our calls in Promises. Note that many of the classes and interfaces we work with are imported from the DocumentDB module and provided to us through those all-important type definitions.

/* src/data/LocationData.ts */
import {DocumentClient, SqlQuerySpec, RequestCallback, QueryError, RequestOptions, SqlParameter, RetrievedDocument} from 'documentdb';
import {LocationDataConfig} from './LocationDataConfig';
import {PhotoLocationDocument} from './PhotoLocationDocument';

export class LocationData {

    private _config: LocationDataConfig;
    private _client: DocumentClient;

    constructor() {
        this._config = new LocationDataConfig();
        this._client = new DocumentClient(, {masterKey: this._config.authKey}, this._config.connectionPolicy);
    }

    public GetLocationAsync = (id: string) => {

        var that = this;

        return new Promise<PhotoLocationDocument>((resolve, reject) => {

            var options: RequestOptions = {};
            var params: SqlParameter[] = [{name: "@id", value: id}];

            var query: SqlQuerySpec = { query: "select * from heros where = @id",
                                        parameters: params };

            that._client.queryDocuments<PhotoLocationDocument>(that._config.collectionUrl, query, options)
                .toArray((error: QueryError, result: RetrievedDocument<PhotoLocationDocument>[]): void => {
                    if (error) { reject(error); }
                    else if (result.length > 0) {
                        resolve(result[0]);
                    } else {
                        reject({message: 'Location not found'});
                    }
                });
        });
    }

    public AddLocationAsync = (photoLocation: PhotoLocationDocument) => {

        var that = this;

        return new Promise<PhotoLocationDocument>((resolve, reject) => {

            var options: RequestOptions = {};

            that._client.createDocument<PhotoLocationDocument>(that._config.collectionUrl, photoLocation, options,
                (error: QueryError, resource: PhotoLocationDocument, responseHeaders: any): void => {
                    if (error) { reject(error); }
                    else { resolve(resource); }
                });
        });
    }

    public UpdateLocationAsync = (photoLocation: PhotoLocationDocument) => {

        var that = this;

        return new Promise<PhotoLocationDocument>((resolve, reject) => {

            var options: RequestOptions = {};
            var documentLink = that._config.collectionUrl + '/docs/' +;

            that._client.replaceDocument<PhotoLocationDocument>(documentLink, photoLocation, options,
                (error: QueryError, resource: PhotoLocationDocument, responseHeaders: any): void => {
                    if (error) { reject(error); }
                    else { resolve(resource); }
                });
        });
    }

    public DeletePhotoLocationAsync = (id: string) => {

        var that = this;

        return new Promise<PhotoLocationDocument>((resolve, reject) => {

            var options: RequestOptions = {};
            var documentLink = that._config.collectionUrl + '/docs/' + id;

            that._client.deleteDocument(documentLink, options,
                (error: QueryError, resource: any, responseHeaders: any): void => {
                    if (error) { reject(error); }
                    else { resolve(resource); }
                });
        });
    }
}

Unit Testing

Cards on the table: I'm very new to JavaScript testing frameworks, but I found the test-first approach greatly reduced the debugging and development cycles. I created the test plans first, calling the service and letting the tests fail. I defined all my expectations (assertions) for the service and then developed the service until it passed all the tests. I found Mocha/Chai pretty straightforward, and I'm looking forward to spending more time with the framework in the coming months and sharing that experience.

NOTE: I would not use my test plans for this project as a model for your own. They work, but they require more teardown and cleanup than I would like, and I suspect introducing mocking around DocumentDB would be beneficial; for now the tests actually perform operations against DocumentDB. I place these firmly in the "place to start" category.

Development & Debugging

Throughout the development process I made heavy use of a Chrome plugin called Postman. Postman allows me to test various REST calls and adjust the body, headers, etc. as needed. I know there are other tools such as cURL that provide the same set of features, but I've found the features and user interface of Postman superior to most other solutions I've worked with, adding a great bit of efficiency to my development cycle.

TIP: Chrome has a nasty habit of automatically redirecting localhost traffic to SSL if anything else runs SSL on localhost - it's a feature called HSTS. In our case the emulator runs under SSL on localhost, so I battled this "feature" constantly. This is internal to Chrome and must be reset by going to a settings page within Chrome, chrome://net-internals/#hsts, and entering "localhost" to delete the domain. Even worse, this setting doesn't stick and routinely gets added back. The "fix" is to either run your service under SSL as well, add a host header for your app locally so it's on a different host than localhost, or use another browser such as Firefox or IE for testing. I know this is a safety feature, but it's very annoying for us developers, and I wish there was a way to disable it permanently for given domains.


Parting thoughts

So we've covered the basics of building a simple REST service based on Node.JS, DocumentDB, and TypeScript. In an upcoming post I'll build on this solution to demonstrate more features of DocumentDB, especially the geospatial query features, and we'll also explore adding OAuth authentication to our service to safeguard access to our data.

Azure Functions & Office 365

Azure Functions seems to be taking the Azure community by storm in the last few months. Even prior to General Availability (GA) I saw the developer buzz quickly building during the public preview and for good reason!

What are Azure Functions?

Azure Functions are small units of code that can be executed in response to numerous types of events, built on a server-less architecture. That's a bit of a mouthful, so let's break it down.


The "functions" part should be pretty self-evident. A function should ideally be a discrete unit of work, not an entire application. That's not to say entire applications can't be built from groups of Azure Functions - an approach typically referred to as a microservice architecture. The takeaway is that a function should be a discrete unit of work: let your function do one thing and do it really well.


Azure Functions can be executed based on events from several different types of resources. Some of the most popular include:

  • Listening to an Azure Storage Queue
  • Responding to an Http request (think REST service end point)
  • Executing on a predefined schedule.


You may be thinking to yourself, "how can this not be running on a server!" Well, of course there are servers involved! Server-less is a natural extension of the concept of PaaS (Platform as a Service). PaaS is intended to abstract away the complexities of managing the underlying OS and hardware to allow a closer focus on the application. However, in traditional Azure PaaS offerings such as Azure App Service there remains a need to consider server resources such as RAM and CPU, and how an application scales in response to demand requires additional consideration. With a server-less architecture such as Azure Functions, the entire server is abstracted away. Applications simply define their performance requirements, and the underlying infrastructure, referred to generally as dynamic compute, ensures those requirements are met. This may sound like a very expensive proposition, but Microsoft Azure has implemented it in such a way that in many common scenarios it turns out to be much cheaper than traditional App Service offerings.

It is important to understand, though, that the underlying infrastructure of Azure Functions is Azure App Service. You can choose a consumption plan, where you only pay for the resources you consume, or you can have Azure Functions run under the resources of a standard App Service plan.

There are scenarios where running Azure Functions within the context of a dedicated App Service plan makes sense, so it is fully supported, but for the majority of scenarios the consumption-based plan is often the better choice.

The development experience

It should be mentioned that Azure Functions only recently reached GA, and the development experience hasn't completely caught up. Until recently, all code was written in an online editor within the Azure portal using either C# or JavaScript; alternatively, a Git repository could be monitored for deployments. Recently a preview of a Visual Studio project type was made available, which provides development and deployment through Visual Studio and allows for a local instance of the Azure Functions runtime for debugging. Only C# is currently supported for debugging, but the project type is still pre-release and debugging support for additional languages is promised.

The development experience for Azure Functions is evolving and improving quickly. Microsoft has the stated goal of supporting not only C# and JavaScript but also Python, PHP, Bash, Batch, PowerShell, and F#. The entire runtime has been open sourced, so technically speaking the Azure Functions runtime could be self-hosted within any environment.

So Azure Functions are awesome - where does Office 365 fit?

With the exceedingly low (and sometimes free) costs of entry associated with Azure Functions, there are many opportunities within Office 365 to very quickly get value.

Timer Job Replacement

Custom timer jobs were very common in traditional on-premises SharePoint development; needing "x" to occur within SharePoint every "y" days is an exceedingly common scenario. For obvious reasons, custom timer jobs are not available in Office 365, which does not allow the deployment of any custom server code. The security and stability requirements of a multi-tenant SharePoint service such as Office 365 make it infeasible. Sometimes you could find workarounds in the form of SharePoint workflows, and Microsoft Flow may also be an option for recurring scheduled tasks. Many times, though, you may have requirements that don't fit well within the feature set of either of those tools, such as very specific logic that is easier to implement in custom code. With Azure Functions, custom logic can be executed AND that code can access Office 365 data directly through frameworks like SharePoint CSOM or Microsoft Graph. Because you are only billed for code that actually executes, this is very economical for infrequently run jobs.
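A timer job replacement might be sketched as below, using the Azure Functions Node.js programming model (a handler receiving a context object and timer info, with the CRON schedule defined separately in function.json). The Graph call is illustrative only; the function and binding names are assumptions, not the author's actual implementation.

```typescript
// A timer-triggered Azure Function sketch (Node.js model). The schedule
// lives in function.json, e.g. "schedule": "0 0 2 * * *" for 2 AM daily.

interface TimerInfo {
  isPastDue: boolean;
}

interface FunctionContext {
  log: (msg: string) => void;
  done: (err?: Error) => void;
}

export function run(context: FunctionContext, myTimer: TimerInfo): void {
  if (myTimer.isPastDue) {
    context.log("Timer is running late - a previous execution overran.");
  }

  // In a real job this is where you would call CSOM or Microsoft Graph,
  // e.g. GET https://graph.microsoft.com/v1.0/sites/... (hypothetical).
  context.log("Nightly cleanup job executed.");
  context.done();
}
```

Because the handler is just a function, it can be exercised locally with a stubbed context before ever deploying to Azure.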


Webhooks

Webhooks are a standard concept used throughout the industry for HTTP-based notifications. Originally available for OneDrive and Outlook in Office 365, webhooks are now available in SharePoint as well. Webhooks are often compared conceptually to event receivers: custom code can be executed based on activity within a SharePoint list or library. There are some differences between webhooks and traditional event receivers or Remote Event Receivers, but generally speaking, if you do not need to respond to the "-ing" events such as ItemUpdating, then webhooks may be a good choice for you. They are simpler to implement than the legacy WCF requirements of Remote Event Receivers and don't carry the additional hosting requirements of WCF-based web services. As with timer jobs, you only pay when something actually executes, so it is very economical.


Elevated Permissions

RunWithElevatedPrivileges was a common tool for developers in traditional full-trust environments to execute server-side code that the current user might not normally have permission to run. Azure Functions can, under the right authentication configuration (and of course the right safeguards in your code), execute logic under elevated permissions. A common scenario might be a site provisioning request. Azure Functions can be accessed through HTTP requests from JavaScript like any other REST-based endpoint.
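An HTTP-triggered provisioning endpoint could look something like the sketch below (Node.js model, TypeScript). The request shape, function name, and the omitted elevated SharePoint call are all assumptions for illustration; real code would need its own authorization checks before doing anything privileged.

```typescript
// An HTTP-triggered Azure Function sketch for a site provisioning request.
// Authentication settings and the provisioning call itself are assumed.

interface HttpRequest {
  body?: { siteTitle?: string };
}

interface HttpContext {
  res?: { status: number; body: string };
  done: (err?: Error) => void;
}

export function provisionSite(context: HttpContext, req: HttpRequest): void {
  const title = req.body && req.body.siteTitle;
  if (!title) {
    context.res = { status: 400, body: "siteTitle is required" };
  } else {
    // Here the function would call SharePoint with app-only (elevated)
    // credentials to create the site - omitted in this sketch.
    context.res = { status: 202, body: `Provisioning request accepted for "${title}"` };
  }
  context.done();
}
```

A client-side page could then POST to this endpoint with fetch or $.ajax just like any other REST service.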


Pricing

Azure Functions pricing can be found at . At the time of this blog post (January 2017), the first million executions are FREE and additional executions are $0.20 per million, plus any associated storage costs. The functions themselves are also billed on resource consumption, based largely on execution duration and memory use. Like everything with Azure, there are a lot of cost formulas to work out, so do your homework ahead of time!
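To make the per-execution component concrete, here is a back-of-the-envelope calculation using the figures quoted above (first million executions free, $0.20 per million after). The resource-consumption (GB-seconds) and storage charges are deliberately ignored, so treat the result as a lower bound, not a quote.

```typescript
// Rough cost model for the per-execution component of Azure Functions
// pricing: first 1M executions free, $0.20 per million thereafter.
// GB-s consumption charges and storage are NOT included here.

const FREE_EXECUTIONS = 1000000;
const PRICE_PER_MILLION = 0.20;

export function executionCost(executions: number): number {
  const billable = Math.max(0, executions - FREE_EXECUTIONS);
  return (billable / 1000000) * PRICE_PER_MILLION;
}
```

For example, three million executions in a month works out to two billable million, or about forty cents, which is what makes infrequently run jobs so economical.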


Azure Functions make a lot of sense when it comes to Office 365. For those interested in seeing how Azure Functions are implemented from the development side, I have some upcoming blog posts that cover a couple of real-world scenarios.


B&R can help you leverage Azure in your solutions!



Getting Started With Modern SharePoint Development

"If you don't hate SharePoint development you're not doing SharePoint development." … said everyone

This phrase was on a t-shirt back at the first and only Office Developer Conference (ODC) in 2008. I can only guess it was meant to bring attention to the plight of the SharePoint development community at the time. Having worked with SharePoint since 2004, with no real development tools and little or no documentation, it struck a chord with me. Unfortunately, it still rings true for many SharePoint developers in 2017. Despite many improvements over the years, SharePoint development remains a source of frustration for its legions of developers. It should come as no surprise that many developers who have toiled over the years to reach a level of proficiency now feel left behind as a different model, that of client-side development, emerges as the new de facto standard. In one of the ultimate ironies, the developers who have been accused of not being "real" developers, spending most of their time in Content Editor web parts and SharePoint Designer writing scripts, may in many ways find themselves better equipped for the coming transition.

Traditional SharePoint development, with its heavy use of full-trust solutions built on top of the SharePoint server-side object model, has been steadily falling out of favor relative to client-side development that makes use of various JavaScript frameworks in concert with SharePoint REST services and the JavaScript Object Model (JSOM). Part of this has been out of necessity with the growing prevalence of Office 365, which does not allow developers to deploy custom server code to SharePoint. It also follows a general industry trend toward client-side development and the user experience that comes along with it. Client-side development, however, comes with its own set of challenges. The tools and frameworks that have been part of front-end web development for years often seem very foreign to server-side SharePoint developers. To add insult to injury, many of the tools don't integrate well with Visual Studio, our traditional development platform of choice.

Some may see this modern development shift in a negative light: yet another methodology to learn, yet another investment. I would argue, however, that the failures of previous development experiences were caused by the inability to bring SharePoint development on par with standard web development. By embracing standard development methodologies, SharePoint no longer has to meet that bar itself but instead simply requires improved integration and best practices. To be fair, traditional SharePoint development is not going anywhere; for on-premises deployments this model will likely be supported as long as there is an on-premises version of SharePoint. But the future is here now, and it's time to get ready.

Code Editing: Visual Studio Code

For many developers, their code editor of choice plays a major role in their success. There is little worse than constant friction with your development tools; it makes everything harder and slower. Traditionally the editor of choice, at least on Windows, has been Visual Studio. Although Visual Studio 2015 has improved its support for modern tools such as package managers and task runners, it has generally been slow to respond, and for many developers the support has a bolted-on feel. Some actions are triggered during builds, some run through task runners, some have obvious configuration while others hide behind custom dialogs and wizards. Some functionality comes from community projects while some was added in service packs. I still prefer Visual Studio for what it was built for: traditional Web API, MVC, and Windows work, essentially .NET development. But when it comes to front-end UI development I now prefer Visual Studio Code, Microsoft's free open source code editor.

TIP: Visual Studio 2017 has a release candidate available. If your organization has standardized on Visual Studio, I would encourage you to explore what new features are available in Visual Studio 2017 and see whether the client-side development experience improves.

Visual Studio Code has many of the features developers would expect, including IntelliSense, debugging support, an extensions ecosystem, built-in support for Git, and advanced editing features like peeking and code snippets, along with support for a huge number of languages. Visual Studio Code has more in common with editors such as Sublime and Atom than with Visual Studio proper. In many ways it's a very lightweight editor, but with optional access to very powerful features and extensions. Whereas Visual Studio has release schedules measured in months if not years, updates to Visual Studio Code, including new features and bug fixes, ship monthly. The experience is a departure from Visual Studio, so it does take some getting used to, but most will find the streamlined, simplified interface, combined with access to powerful development features, a joy to work with.

This may seem like a strange place to start, but beyond the fact that it's a great code editor, you will find that many of the tutorials, code examples, and communities you will visit on your journey have embraced Visual Studio Code. Having attempted to apply tutorials written for Visual Studio Code to Visual Studio proper, I can tell you it's an unneeded learning distraction. Lastly, a clean break from Visual Studio might facilitate the conceptual shift from server-side to client-side development.

Language: JavaScript

There is no way around it: you will need to become proficient at JavaScript first and foremost. JavaScript will be the foundation for many of your client-based solutions, and possibly even some of your server solutions through Node.js, which I cover later in this post. There are countless resources online for learning JavaScript, so teaching it is beyond the scope of this post, but it's the place to start after downloading Visual Studio Code.

Ensure you are comfortable with the built-in methods and standard data types, functions, closures, callbacks, and promises, just to name a few. You should have a solid understanding of how to make REST calls to a server. As beneficial as frameworks can be, there will often be times when a framework is overkill for your solution, and you cannot go wrong with a solid foundation in vanilla JavaScript.
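As a small taste of those fundamentals, here is a sketch touching closures and promises with no framework in sight. The helper names are made up for illustration; the point is the language mechanics, not any particular API.

```typescript
// Closure: makeCounter keeps `count` private; each call to the returned
// function sees and updates the same captured variable.
export function makeCounter(): () => number {
  let count = 0;
  return () => ++count;
}

// Promises: wrap a callback-style async operation (here, setTimeout) in
// a Promise so callers can use .then() chains or async/await.
export function delayedDouble(n: number): Promise<number> {
  return new Promise((resolve) => {
    setTimeout(() => resolve(n * 2), 10);
  });
}
```

The same promise shape is what you get back from REST helpers like fetch, which is why being comfortable with it pays off everywhere.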

Language: TypeScript

TypeScript is a superset of JavaScript that compiles into standard JavaScript. With TypeScript you gain access to many of the benefits of modern strongly typed programming languages. Some see TypeScript as a short-term crutch to help .NET developers transition to the loosely typed and dynamic nature of JavaScript, but I would argue it opens the door to the full set of benefits that modern languages and their compilers provide.

Like many programming languages, JavaScript has been changing and evolving over the years. New features and language constructs become available, bringing enhancements to the core language. Those enhancements go through industry standards bodies for ratification, and eventually the specification is implemented in the various browsers, all at the release schedule and whims of the browser vendors. If this sounds like a long pipeline, it is!

This is where tools such as TypeScript come into play. They give us access to the latest JavaScript language enhancements today, through a process called transpiling. TypeScript is transpiled back down to a desired version of JavaScript that is more widely supported, so you can still maintain broad browser support within your application. For traditional C# developers this means access to many of the same language constructs: classes, interfaces, strong types, and async/await, just to name a few. Just as .NET is compiled before running, TypeScript is transpiled before execution, most commonly automatically in the background as you work. Visual Studio Code has built-in support for TypeScript, and tools like Gulp and Webpack can manage the process for you as well.
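A short example of what those constructs look like in practice. The `Task`/`TaskList` names are invented for illustration; everything here transpiles down to plain JavaScript with the type annotations erased.

```typescript
// Interfaces and typed classes: the compiler enforces the shapes at
// build time, then emits ordinary JavaScript.

interface Task {
  id: number;
  title: string;
  complete: boolean;
}

export class TaskList {
  private tasks: Task[] = [];

  add(title: string): Task {
    const task: Task = { id: this.tasks.length + 1, title, complete: false };
    this.tasks.push(task);
    return task;
  }

  // Passing a string id here is rejected by the compiler before the
  // code ever runs - the kind of error vanilla JavaScript finds at runtime.
  complete(id: number): boolean {
    const task = this.tasks.find((t) => t.id === id);
    if (!task) return false;
    task.complete = true;
    return true;
  }
}
```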

Despite the fact that it's transpiled back down to standard JavaScript, there are tangible benefits during the development cycle, with arguably fewer unexpected runtime errors from typing issues. TypeScript will also feel more comfortable for developers coming from languages such as C#. It's important to understand, however, that learning TypeScript does not negate the need to understand the underlying JavaScript language, but it may improve your development experience and the potential stability of your application.

TypeScript is not the only player on the field, though. Tools such as Babel and CoffeeScript are popular choices as well. My recommendation of TypeScript comes not only from its growing popularity but from its support by industry leaders such as Microsoft and Google. In fact, many modern libraries and frameworks, such as Angular 2 and the popular Office UI Fabric React components, have been built from the ground up using TypeScript. TypeScript is quickly becoming an industry standard.

UI Frameworks

JavaScript frameworks and libraries will play an important role in your client-side development efforts. To be clear, I'm referring to frameworks and libraries that affect the overall application architecture: the approach you take for your user interface, how you handle data binding, how you manage your application logic, not simply the library you might use, for example, to render charts. This decision is akin to deciding between ASP.NET Forms and ASP.NET MVC for a traditional .NET web application.

Choose poorly and you may get more overhead than benefit from the framework. Choose one that ultimately becomes unpopular and you risk having to support premature legacy code or incur expensive migration and update costs. The last several years have seen an absolute flurry of frameworks come and go. To make matters more difficult, some don't end cleanly, instead stuttering and restarting as their popularity waxes and wanes within the community. To say the scene is a bit volatile would be an understatement! The cycle does show some signs of slowing, yet the decisions remain difficult. With all of these challenges, where do you place your investment so it's least likely to fail?

I've worked with many different JavaScript frameworks, from the early years of Ajax (I'm even guilty of using UpdatePanels a time or two... shudder!) and pure jQuery implementations to, in more recent years, Knockout, Handlebars, Backbone, Ember, Angular, React, and most recently Angular 2. Putting all my cards on the table, I've only used Knockout, Angular 1/2, and React in production-level applications; the others have been smaller-scale efforts and experiments, but I feel I've gotten a good sense of the overall landscape.

In the end, my two recommendations continue to be Angular 2 and React, each where it makes sense. Attempting to compare and contrast the two is much like comparing apples to oranges, so instead I'll explain the niche in which I have found success with each.

UI Frameworks: React

The line between framework and library blurs a bit when it comes to React. Since its focus is the view, most refer to it as a library, as you often need other libraries to manage common application tasks. For me, the sweet spot for React has been smaller, display-focused components. React seems a very good fit for the new SharePoint Framework, which is essentially SharePoint's new web part framework (although it may become more in the future). Additionally, the SharePoint team has taken a great interest in React, with supported open source projects such as the Office UI Fabric React components, and official Office 365 applications such as Delve are built on React. In general, React is a slimmer, less prescriptive framework but requires additional libraries for common needs like routing, web service calls, and more advanced forms handling. For Single Page Applications (SPAs) or larger applications in general, I prefer the additional feature set that Angular 2 provides.

UI Frameworks: Angular 2

Angular in many ways broke new ground by combining several concepts emerging at the time, including components, data binding, and dependency injection, into arguably one of the most popular frameworks to date. With it, though, came complexity, a steep learning curve, and, in some scenarios, performance limitations. Angular 2 worked to increase performance (by all accounts successfully) and to make implementation choices clearer and more standard; some would say a more prescribed architecture. Larger web applications benefit from the consistency provided by the framework, with its different components well vetted and tested.

The pain point for Angular 2 has been its radical departure from the 1.x framework. For those still on Angular 1.x, it is highly suggested to use the 1.5 component architecture, which aligns more closely with Angular 2 and makes migration less of an issue. Although in development for years, Angular 2 is relatively new but quickly gaining in popularity. Even the Azure portal team has chosen Angular 2 for some of its latest modules, including the Azure Functions dashboard (most of the original Azure portal is written in Knockout/TypeScript).

NOTE: There is currently an issue with Angular 2 and the SharePoint Framework: multiple Angular 2 components are not supported on a single page. This is not a SharePoint Framework issue but an issue with how Angular 2 is optimized and loaded for page performance. If you want to use Angular with the SharePoint Framework, it is suggested you use Angular 1.5 until this issue is resolved. Members of the SharePoint developer community are actively working with the Angular 2 team to remove this limitation.

Tools: webpack

Webpack at its core is a module bundler. Module bundlers handle the complexity of dependency management across JavaScript files and libraries; the files are then bundled together to reduce overall page load time and eliminate unused code. Other common client-side bundlers include RequireJS and SystemJS. What makes Webpack more than a bundler is its ability to process additional file types through custom loaders. Loaders can manage other file types such as CSS and HTML; in the case of CSS and JavaScript, the files can then be further optimized and minified. Loaders can also compile TypeScript and convert images to base64. Tasks that in the past required multiple tools, such as Grunt or Gulp alongside a module bundler, can now be achieved with a single tool. Like many development tools, Webpack runs on Node.js and is configured through JavaScript configuration files as part of your build process.
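A minimal configuration might look like the sketch below (written as a webpack.config.ts for consistency with the rest of this post, though plain JavaScript is more common). The entry point, output names, and the choice of ts-loader are placeholders; ts-loader is one of several loaders that can hand .ts files to the TypeScript compiler during bundling.

```typescript
// Minimal webpack configuration sketch: one entry, one bundle, and a
// loader rule that transpiles TypeScript as part of the build.

export default {
  entry: "./src/main.ts",
  output: {
    filename: "bundle.js",
    // Placeholder; in a real config this is typically __dirname + "/dist".
    path: "/absolute/path/to/dist",
  },
  resolve: {
    // Let imports omit extensions for both .ts and .js modules.
    extensions: [".ts", ".js"],
  },
  module: {
    rules: [
      // Hand .ts files to the TypeScript compiler via ts-loader.
      { test: /\.ts$/, loader: "ts-loader" },
    ],
  },
};
```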

Tools: Node.js

Node.js is a JavaScript runtime environment built on Chrome's V8 JavaScript engine. Not only is it the foundation for many popular development tools such as Webpack and Gulp, it is also rapidly growing in popularity as an application host. Many application services that you might have written with solutions such as ASP.NET Web API can instead be written in JavaScript (or TypeScript!) on Node.js. There is something to be said for developing client-side and server-side solutions with the same language and toolset: JavaScript.
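To show how little ceremony a Node.js service needs, here is a tiny sketch using only the built-in http module. The routing is factored into a pure function so it can be exercised without starting a listener; the endpoint path is illustrative, not a real API.

```typescript
import * as http from "http";

// Pure routing function: easy to test without binding a port.
export function route(url: string): { status: number; body: string } {
  if (url === "/api/ping") {
    return { status: 200, body: JSON.stringify({ ok: true }) };
  }
  return { status: 404, body: JSON.stringify({ error: "not found" }) };
}

// Wire the router into Node's built-in HTTP server.
export function createServer(): http.Server {
  return http.createServer((req, res) => {
    const { status, body } = route(req.url || "/");
    res.writeHead(status, { "Content-Type": "application/json" });
    res.end(body);
  });
}

// createServer().listen(3000);  // uncomment to serve on port 3000
```

Real services would layer Express or a similar framework on top, but the core idea, JavaScript end to end, is already visible here.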

The developer tools, however, are where most developers will first turn to Node.js. Node.js uses a package manager called npm that hosts thousands of JavaScript packages. All of your development dependencies, from tools such as Webpack and Gulp to the libraries your application requires such as jQuery and Angular, can be managed from a single JSON configuration file. For those familiar with NuGet in Visual Studio, this falls into the same family.
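That single JSON configuration file is package.json. A sketch is below; the package name, scripts, and version numbers are illustrative only, roughly matching versions current in early 2017.

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "scripts": {
    "build": "webpack"
  },
  "dependencies": {
    "jquery": "^3.1.1"
  },
  "devDependencies": {
    "typescript": "^2.1.0",
    "webpack": "^2.2.0"
  }
}
```

Running `npm install` restores everything listed here, and `npm run build` invokes the build task, much as NuGet restore and MSBuild would in a Visual Studio project.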

Extra Credit - Additional SharePoint Framework Tools

A lot of attention has been given lately to the SharePoint Framework, which makes additional tool suggestions. Although still in preview and only available in Office 365 (and potentially future Feature Packs for SharePoint 2016), it's a likely preview of things to come. None of these tools are technically required for the SharePoint Framework, but they do make the process of creating, building, and packaging SharePoint Framework solutions easier.

  • Gulp - Gulp is a task runner that in many ways is comparable to MSBuild. It is based on Node.js, so Gulp tasks are written in JavaScript. Many of the tasks that are typical for Gulp are already managed through Webpack, but the SharePoint Framework has some specific build and packaging tasks for solutions.
  • Yeoman - Yeoman creates project templates. In many ways it accomplishes the same thing as the New Project wizard in Visual Studio. It's a good tool and can bring consistency to your projects.
  • Git - Git is a popular source control system. Both Visual Studio and Visual Studio Code have Git integration, but only Visual Studio offers other integrations such as TFS and VSO. Git also integrates very tightly with many Azure deployment schemes.

Parting Suggestions

I advise not trying to take all of this on at once. Instead, start by downloading Visual Studio Code, then work on your core JavaScript and TypeScript concepts. Don't get pulled into too much module loading and packaging when getting started with TypeScript; you can simply compile your TypeScript from the command line at first. TypeScript will lead naturally into learning Webpack. All of this builds toward working with the frameworks, where the foundation you've built should let you concentrate entirely on the frameworks rather than getting caught up or confused in the tools and packaging that come along with them.

Capturing Comments for Nintex’s Lazy Approval in Office 365

Providing solutions that improve communication between employees is always a win. With Nintex workflows, one of the challenges many developers have with Lazy Approval on Office 365 is capturing the comments provided by Approvers in the email back to the original submitter. A task email is sent to an Approver, and that Approver can only reply with the approval/rejection keywords; any other comments in the email are disregarded by the system. There is also no option to CC the original submitter in the initial task email, which means the only way for an Approver to email comments back to the original submitter is by manually copying them into the approval/rejection email, and that would not capture the comments along with the task. Frustration with this issue can be seen on the Nintex UserVoice site, where it is one of the top-voted requests. While comments cannot easily be captured when the Approver replies by email, they can be captured when the Approver responds through a customized Nintex task form. Simply put, the comments are captured in a workflow variable and added to the body of the email.

To accomplish this, start by creating your form and workflow. For this example, I am using the simple workflow below:

The first step is to create the text variable that will capture the comments from the custom Nintex task form. Click "Variables" from the workflow ribbon and then click "New" to create the text variable. I'm calling mine "TaskComments".

After that has been completed, go into your "Assign a task" action and click on the "Edit Task Form" button in the ribbon.

You will be presented with the standard Nintex form that you can customize. Next, create a panel at the bottom of the form and assign a rule to hide the panel at all times. The rule should have these settings:

Add a "Calculated Value" field to the panel. This field will assign the comment to the "TaskComments" variable we created earlier. Add the Named Control called "Comment" to the Formula and set the Connected to field to the variable "TaskComments". The settings should look like this:

Save everything. Your form should now look something like this:

Now all that is left is to add the variable to our email. Go into your "Send an Email" action on the "Rejected" branch and add the "TaskComments" variable to the body of the email. It should look something like this:

Do the same with the Approved email and you're done. Publish the workflow and you're good to go!


Hopefully this helps you address the need for greater feedback collection in your approval workflows! If you have any questions, or a tip to share, reach out and connect with us!


Need help improving and scaling your workflow processes?

Track and Visualize Compliance Events as Part of Nintex Workflows

Like most consulting groups that build a lot of workflows for process automation, we work with a lot of approval processes to help formalize important reviews and decisions while supporting compliance activities.  We recently received an interesting request to update an existing process we helped build and maintain.  In this particular approval process, the financial transactions were being reviewed and approved, but the stakeholders really needed to be able to view the pending and approved transactions. 

Adding a calendar to visualize when approvals have taken place was a big win with the client. The requirements were simple: events would have either a 'Pending' or 'Approved' state, Approvers could approve via email, and the events would appear on a calendar color-coded by approval state. Because the pending documents are stored in a separate library from final documents, and to support color coding and full-featured calendar connections, we chose to use a calendar list rather than the limited calendar view format of a traditional SharePoint library.

Nintex's LazyApproval is a great solution for approval workflows, and setting up the workflow was simple. Once a user adds a new document and sets the event date, the workflow creates a calendar event and then creates a task. The first step in the workflow is a filter that checks whether the Created date is equal to the Modified date; this tells me the item was just added to the library without any metadata. I then check whether a calendar item has already been created, which would be the case if an item had been rejected and updates were applied to the original item. If not, the calendar item is created in the 'Pending' state. On approval, the calendar item is set to 'Approved'; if rejected, it stays 'Pending'.

The workflow looks like this:

The way I keep the Calendar Events in sync with the documents is by using the Document ID. I added a text field called "Document ID" and write the Document ID value on creation of the calendar item. That makes it easy to query.

Setting up the calendar requires a trick in the use of the ribbon option 'Calendar Overlay'.

Normally, separate SharePoint calendars are used to overlay onto a single calendar. Using that method, 3 separate calendars would be required for this solution, one for each state of the event. The trick is to add a separate calendar view as a separate calendar. By using this method, there is no need for separate calendars, only for separate calendar views.

Once you enter your site into the Web URL and click 'Resolve', the List and List View dropdowns should populate. Simply choose them and when your calendar items meet the criteria for the list view, they will appear with the color you chose.

The result is a single calendar with multiple color-coded entries. Events can even span multiple days or specific times depending on your start/end dates and times.

While this is just a subset of the total workflow, it illustrates a great technique for supporting your compliance activities. Creative uses of workflows with SharePoint apps demonstrate the full power of the platform and of the client's investment, generating a win on many levels. Utilizing what is available in the box, combined with thinking out of the box, is key to happy clients.

If you would like to learn more about this solution, or how we can help automate different aspects of your business, please contact us.
