Azure Active Directory Premium Features – Why You Want It

Azure Active Directory provides a cloud-based solution for user account and identity management. While the free and basic editions may meet the requirements of organizations that only need Azure AD to maintain user accounts, most of the time, businesses need more from their account and identity management solution and as a result, turn to the Azure AD premium editions (known as Premium P1 and Premium P2).

There are a few features that both the premium and basic editions share that you can’t get with the free edition:

Service Level Agreement

The SLA guarantees a minimum amount of uptime and provides a framework for holding Microsoft accountable for any outages. It makes sense that this isn't available with the free service, since there is no service fee to credit back in the first place. The SLA is calculated based on how many minutes of downtime occur and the number of users impacted.
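
To make that concrete: at the time of writing, the published Azure AD SLA measures downtime in user-minutes, and the uptime percentage is calculated as (total user-minutes – downtime user-minutes) ÷ total user-minutes × 100. As an illustration with made-up numbers, a 10,000-user tenant has roughly 432,000,000 user-minutes in a 30-day month; a 30-minute outage affecting 2,000 of those users adds 60,000 downtime user-minutes, which works out to about 99.986% uptime – above the 99.9% commitment, so no service credit would be due.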

Branding

The ability to use your organization’s branding on logon pages and access panels. This is a nice touch because it creates a more uniform and polished look across applications, and it also provides an identifiable interface for your end users. It can be confusing for an end user to land on a generic logon page and wonder whether they are in the right place.

Password Self Service

One of the most useful (and most heavily used) features is the self-service password reset for cloud accounts. This allows users to reset their password whenever they need to without having to contact their help desk or IT department. Depending on the business, password resets can consume as much as 50% of the help desk’s bandwidth, so adding this feature could provide an immediate ROI through the support time saved alone.

While the basic edition includes all of the features listed above, and those are enough to satisfy the needs of most smaller organizations, it falls short of providing a truly seamless transition between all applications, both on-premises and cloud-based. This is because the free and basic editions limit the number of applications that have an SSO experience to 10 per user, whereas premium has no limit. Additionally, the two premium editions have the following features that provide a seamless user experience between on-premises and the cloud:

  • Self-service group and app management / Self-service application additions / Dynamic groups
  • Self-service password reset / change / unlock with write-back to the on-premises Active Directory
  • Device objects two-way synchronization between on-premises directories and Azure AD (Device write-back)
  • Multi-Factor Authentication (Cloud and on-premises (MFA Server))

With the premium editions, changes to accounts and groups only need to be made in one place because everything is automatically synchronized. For example, whether a user is trying to log on to their on-premises SharePoint environment or log in to their mail using mail.office365.com, if the multi-factor authentication feature is enabled, the user will be presented with the same prompt. To the user, it feels like a unified system.

Another premium feature that can be very useful is the availability of dynamic groups and conditional access based on group, location, and device state. An AD administrator can end up spending a lot of time managing group memberships. Applications with complex security structures, such as SharePoint, can have hundreds if not thousands of groups, and usually a handful of Active Directory administrators are the only ones who can add and remove users from them. This leaves the AD admins inundated with requests to change group memberships. With conditional access and dynamic groups, administrators only need to set up rules based on user information – for example, all users from Germany will see “X” folder or all users in the Sales department can contribute to “Y” site. This saves the admins from having to update group membership altogether; instead, they can focus on making sure that users’ account attributes are up to date.
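
As a rough illustration, the Azure AD dynamic membership rules behind those two examples could look like the following (the attribute values are assumptions about how your directory is populated; the rules are configured under the group's dynamic membership settings):

(user.country -eq "Germany")
(user.department -eq "Sales")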

As security concerns keep mounting and data breaches keep occurring all too often, companies are struggling to do more to ensure all sensitive data stays protected. Multi-factor authentication, another premium feature, provides an extra layer of protection by requiring a secondary authentication method (such as a phone call, text message, or mobile app verification) when users attempt to log in.

If you’re looking to take things a step further, then you will want to look at the identity protection features of the Premium P2 edition. With this edition, Azure AD uses machine learning to alert you to suspicious activities, detect events that are out of the ordinary, and report on its findings. Going even further, you can develop risk-based policies that automatically respond when certain alerts are triggered, ensuring that the system ‘always has your back’. These features go well above and beyond the capabilities of traditional AD running on your on-premises servers. By leveraging the Microsoft Cloud’s AI and machine learning capabilities, you have access to advanced threat protection.

While this article just scratches the surface of Azure AD and its features, Microsoft has put together the following table to help you understand all of the various features and differences between the different versions: https://azure.microsoft.com/en-us/pricing/details/active-directory/

The Azure Active Directory feature offerings can be overwhelming and can be configured in several different ways depending on business requirements. If you’re considering Azure AD Premium, let B&R Business Solutions make sure all of the features that you are paying for and care about are fully leveraged and configured correctly the first time. Contact us today by completing our contact form.

cloud-network-concept_CTA.jpg

B&R can help you evaluate and plan for implementing Azure!

Protecting and Classifying Your Data using Azure Information Protection

The Azure Information Protection (AIP) client is a much-welcomed improvement over the previous Azure RMS sharing application. The AIP client can be downloaded for free and is supported on Windows 7 and later and macOS 10.8 and later. The AIP app also supports mobile devices running iOS or Android, where it replaces the RMS sharing app on both platforms.

The AIP client provides enhanced usability for the everyday user to protect and classify files in a simple and straightforward manner. The AIP client can protect most file types out of the box, and users can easily protect other file types such as images, PDFs, music, and videos through the AIP client. The user can also use the AIP client to protect sensitive emails. In this article, I am going to explain how users can protect and classify files by using the AIP client within Microsoft Office Word, Excel, and PowerPoint 2016. We will then touch on configuring Azure Information Protection labels and policies within the Azure portal.

Azure Information Protection Requirements

Let’s use a real-world business use case as the foundation for this walkthrough.  This will provide a real example that can be replicated throughout your own organization if desired.  Here is a bulleted breakdown of the requirements:

  • All Office files and emails created by the Finance Management group must be automatically classified as confidential
  • The AIP policy should be scoped to the Azure AD group BR Management Team and should not affect all users in the organization
  • When a user that belongs to the BR Management Team group creates a new email the email should be automatically classified as confidential and protected
  • Emails that are classified as confidential cannot be forwarded
  • Users can override the recommended label classification but should be warned when doing so
  • A watermark should be applied to all files and emails classified as confidential in the footer
  • Protected data should be accessible offline

Now that we have gone through the requirements for the use case, let’s jump into how we can accommodate all of them in our final solution. It is worth mentioning that there are some prerequisites for using the AIP client that I will not be covering in this article. Please find that information in the getting started with AIP article found here.

Let’s begin with what the user sees within Office 2016 when AIP has been activated and installed. As you can see in the screenshot below from Word, the AIP client is an add-on to Office 2016. Once installed, you will see the Protect button in the ribbon.

aip-1.png

If you click on the Show Bar option, you will notice the sensitivity settings bar shown in the screenshot below. Sensitivity labels can be set manually by an end user, or automatically based on the file/email content. Labels belong to a default AIP global policy which includes all users within your organization’s Azure AD. The different default sensitivity labels are also shown in the screenshot below. These labels can be customized, and new labels can be created, through the Azure Information Protection resource in the Azure portal.

aip-2.png

Additionally, AIP administrators have the ability in the Azure portal to create scoped policies. These scoped policies can be created for specific groups of users and for edge cases where customized labels and protection are required. For example, all users in a specific department such as finance management may require a stricter set of standards for labeling and classification because of the sensitivity of the files and emails they deal with daily.

Configuring AIP Policies

Below I have created a new scoped policy called Finance Management Confidential and selected the appropriate management team group. This is important to note because this is the group of users who will get the Finance Management Confidential AIP policy. When we customize this policy, we are customizing what the selected group of users will see in their sensitivity bars throughout the Office 2016 applications. Additional labels and sub-labels can be created specifically for the selected group of users.

aip-3.png

As you can see in the image above I have created a new sub-label under the Confidential label.  Sub-labels provide a further level of classification that can be scoped to a subset of users within your organization. 

In the sub-label configuration image below, I have configured the footer text to show the text “confidential”. This is also where you can set up Azure protection for the specific AIP label that you are creating.

aip-4.png

Once you have selected Azure RMS under the protection heading, you can begin to configure the different Azure RMS permissions. Here we will make sure that data classified with this sub-label cannot be printed or forwarded. With the protection configured, we can save the sub-label; it is now officially configured with AIP, and all files classified with it will be automatically protected with the permissions set up in the previous step. Once you have saved the sub-label to the policy, make sure that you publish your scoped policy.

aip-5.png

Using AIP in Office 2016

Once the policy has been published, it will be pushed to the users detailed in the policy. Users who belong to this policy will see that all files they create or open will have the recommended sub-label that was created in the previous steps. If the user hovers over the recommended labeling, a tooltip description pops up that provides valuable information to the users when they are deciding the classification of the document. It’s important to be concise and spend some extra time on the descriptions of your organizational labels. These will help guide users in making the right decision when classifying new files.

aip-6.png

Of course, you can always force the classification and labeling of files and emails instead of recommending a label. This is useful when using conditions with your policy. For example, you can force the label of a document or email if a condition detects sensitive data such as social security numbers or credit card numbers. Forcing could potentially label a file erroneously, causing additional administrative overhead. In most cases, it is better to provide a recommendation and specify in the policy that the user be warned when reclassifying a file to a less restrictive label – such as reclassifying a file recommended as confidential down to public. This produces an auditable action confirming that the user in fact acknowledged that they were reclassifying the file.

Once the file is labeled it will inherit all the classification and protection rules that were applied while editing the policy in the Azure portal.  This includes any protection that was setup for the labels by administrators.  The image below shows a Word document that has been classified by the sub-label Finance Management that was created earlier in this article.  Notice the classification in the left-hand corner of the image below and the footer text which was automatically applied after selecting the recommended label.

aip-7.png

Using the AIP client, the user can decide to downgrade a classification if needed. Users will be prompted, as shown in the image below, when they set a lower classification label. This will deter users from simply declassifying files that may be sensitive. The user acknowledgement is an auditable action.

aip-8.png

Users can manually set up custom Azure RMS permissions if needed by selecting the AIP Protect button in the ribbon within their favorite Office 2016 application.

aip-9.png

The one disadvantage with using this method is that users will only be able to configure permissions for one level of rights. To clarify, if you want to provide two groups of users with two different levels of permissions – for example, read only and edit – you will need to use the Protect Document button within Office 2016. To do this, first select File, then Info, then select the Protect button as shown in the image below. You will notice that the custom confidential AIP sub-label that we configured is also showing up in the Restricted Access context menu.

aip-10.png

A user could easily select a label from here if they wanted to. To get around the issue with applying multi-level custom permissions, users can select the Restricted Access menu item. Using the permissions dialog box that pops up, users can assign multiple levels of permissions to users and groups.

aip-11.png

Now let’s open Outlook as a user who belongs to the finance management group. As you can see in the image below, the policy is automatically recommended on all new emails. The classification behavior in the Outlook 2016 client is similar to the rest of the AIP-supported Office applications (Word/Excel/PowerPoint). Once the label is selected, all policies are applied to that email.

aip-12.png

Conclusion

The Azure Information Protection client provides the easiest way to classify and protect files and emails when creating or editing them from within the Office desktop applications. The client is just one piece of the entire puzzle that is AIP. The real key is in the planning and creation of meaningful labels and classification policies for your users. This helps drive users to begin using these classification policies with ease. I must say from past experience: the less the users have to think about, the better. If the classification labels are clear and help guide the user, then users are more likely to engage. Additionally, forcing users to classify files and emails isn’t always the answer, except in specific highly sensitive scenarios. The AIP client is constantly being improved and added to; in fact, a new version with new capabilities was pushed out just this week and can be downloaded here.

 
calltoaction-paas.png

B&R can help you leverage Azure Information Protection

Getting More from Your Microsoft Cloud Hosting

Why Use a Microsoft Cloud Solution Provider (CSP) Such as B&R?

Using a Microsoft Cloud Solution Provider (CSP) can help you get the most out of your cloud hosting experience. More and more, Microsoft is making an effort to drive customers to partners that have the title of ‘Cloud Solution Provider’, or CSP for short. The CSP program is a relatively new (two years old) component of the overall Microsoft partner program that allows partners such as B&R Business Solutions to provide licenses and a variety of services to customers through one of two models:

Direct

The partner has a direct relationship with Microsoft and procures the licenses the customer needs directly from Microsoft and then acts as a trusted adviser for the customer. In this role, the partner provisions any services and licenses needed, bills the customer for the licenses (and any other services bundled with them), monitors the services the customer is using, and provides support for the customer.

Indirect

The partner acts as a reseller and account management is handed off to a distributor who has the relationship with Microsoft. With this approach, the partner is able to leverage the resources of the distributor to provision the licenses and services, and the distributor bills the customer and provides the support and monitoring services.

When B&R became a CSP, we elected to go with the direct model. This means that customers that use B&R can be sure that B&R stays engaged and has the provisioning, support, and billing capabilities that are up to Microsoft standards in-house. Additionally, you can be sure that you are working directly with B&R employees, and not a distributor – ensuring that we build a relationship directly between our customers and our team members.

Let’s break down the benefits of using a Microsoft CSP a bit further:

Savings

If you are purchasing your Office 365 licenses or Azure subscription directly through the office365.com or Azure.com web sites, you are paying list price to Microsoft for the services. With the CSP program, B&R is able to provide discounts on your licenses and consumption that are not available through the ‘web direct’ programs.

Better Terms

When you sign up with B&R for your licenses or Azure consumption, you can pay on NET terms. Additionally, there are no early termination fees for the removal of Office 365 licenses (unlike when you go web direct and are charged a fee for removing a license prior to its renewal date).

Simplicity

While you may just decide to use B&R for your O365 & Azure subscriptions, if you use B&R for managed services or project-based consulting services, everything appears on one invoice. No more chasing down multiple vendors – you have one place to go for everything, and B&R has a variety of bundles that can further simplify things (and save you money) – check out http://www.bandrsolutions.com/managed-services.

Support

It can be frustrating trying to get the right individuals to support your organization during critical times. With the CSP program, B&R is your trusted partner – and your first line of support to help get you back up and running. The talented team at B&R will work with you on any issues you are experiencing, and if needed, B&R has access to ‘Signature Cloud Support’ – a higher level of support for Microsoft CSP partners – which in turn means quicker time to resolution and access to excellent Microsoft resources.

Expertise

B&R has been working with Office 365 along with the Azure platform & infrastructure services for many years, and has one of the most talented teams anywhere (the team includes 2 current MVPs and 2 former MVPs). If you want to implement Office 365 and Azure right – the first time – then it makes sense to partner with the best, and that’s exactly what you will get with the B&R Team.

As a CSP, B&R Business Solutions is going to ensure that your organization gets the best possible support and works with some of the most experienced individuals in the industry – all while being rewarded with a simplified approach and cost savings.

Interested in the CSP program? Looking to save money? Want to provide your organization with a higher level of support? Then contact B&R Business Solutions today – we can start by taking a look at your current (or proposed) cloud spend and immediately let you know how the CSP program can save you money and make recommendations based on our experience. There’s no charge for this assessment, and we’re confident you will be glad you reached out!

 
calltoaction-msp.png

Worry-free Managed Services with Predictable Pricing

Extending Internal Business Solutions to Azure

As cloud technologies continue to evolve and mature, there is an exciting opportunity that we are seeing more frequently: leveraging Azure’s platform services to build and deliver secure business applications for internal company use. While this is a natural progression for organizations already adopting cloud services and technologies like Office 365, we are now seeing this model adopted by companies still primarily running traditional on-premises data centers and applications. There are a lot of advantages to this approach, so in this post we will attempt to make the case for taking your first steps toward using cloud services to support your internal business solutions.

The key points we will cover in this post are:

  • Infinite capacity
  • Consumption based pricing
  • Redundancy immediately available
  • Enhanced insights to further optimize costs

Infinite Capacity

One of the core premises of cloud services is infinite capacity, and it should not be discounted. From the early days of development, through initial launch, to long-term use, there is no need to worry about having enough capacity on hand to satisfy the application. There is no fear of having to add additional capacity to your virtual machine hosts and SANs. Over the years, I cannot count the number of projects that have been delayed because of operational capacity issues. These issues are eliminated completely. Likewise, as your app needs to scale out, it can do so easily without having to rework anything.

Consumption Based Pricing

Another core premise of cloud services is paying for only what you use. When moving your business solutions from the virtual machine (VM) hosted model to one built on Azure platform services – leveraging services like Azure Storage, Web Apps, and Functions – the cost to run our solutions is minimized. We only pay for the processing cycles our solution uses; there is no longer a need to pay for the idle time between requests. Also, unlike traditional on-premises solutions, we do not need to budget for the total available disk space (or worse, the raw disk space of an underlying disk array), but only for what we consume this month. This offers a cost-effective way to approach capacity planning and also encourages good data cleansing and archiving habits.

Redundancy Immediately Available

For those who do not work for a Fortune 500 company with access to geographically distributed data centers and real-time redundancy, you will be pleased to find that you have immediate access to services across data centers with intelligent services to handle synchronization and failover. While redundancy can come at higher utilization costs, the costs are still very reasonable and should be significantly lower than adding the capabilities to your local data centers.

Enhanced Insights to Further Optimize the Costs

If all of this wasn’t enticing enough, there are tools offered by Microsoft and ISVs that can provide rich operational metrics to show where your compute and storage costs are and how they can be optimized to save money. This allows you to maximize your investment and continue to leverage the tools while keeping costs under control. We typically do a quarterly review with the customer subscriptions we manage to ensure that services and consumption are optimized for their goals and budgets.

Closing

If you have not already started to look at how you can integrate cloud services into your application development, now is the time. If your organization has an active MSDN subscription, it normally comes with a $150 per month credit to get you started. In our experience that can easily handle dev instances for several projects.

If you are interested, but do not know where to start, engage B&R's Architects to help provide a detailed analysis and roadmap matching your application needs to the appropriate Azure Platform Services and estimate the associated operational costs.

 
calltoaction-paas.png

Need help planning for Azure?

Building a REST Service with Node.JS, DocumentDb, and TypeScript

REST Services are commonly the backbone for modern applications.  They provide the interface to the back-end logic, storage, security, and other services that are vital for an application whether it is a web app, a mobile app, or both!  

For me, as recently as a year ago, my solutions were typically built with the .NET stack: ASP.NET Web API based services written in C#, various middleware plugins, and backend storage that usually involved SQL Server. Those applications could then be packaged up and deployed to IIS on-premises or somewhere in the cloud such as Microsoft Azure. It's a pretty common architecture and has been a fairly successful recipe for me.

Like many developers, however, in recent years I've spent an increasing amount of time with JavaScript. My traditional web application development has slowly moved from .NET server-side solutions such as ASP.NET Forms & ASP.NET MVC to client-side JavaScript solutions using popular frameworks such as Angular. As I began working with the beta of Angular 2 I was first introduced to TypeScript and quickly grew to appreciate JavaScript even further.

Spending so much time in JavaScript, though, I was intrigued by the idea of a unified development stack based entirely on JavaScript that wasn't necessarily tied directly to any particular platform. I also started spending much more time with Microsoft Azure solutions, and the combination of a REST-based service built on Node.JS, Express, TypeScript, and DocumentDB seemed very attractive. With that goal in mind, I couldn't find a single all-inclusive resource that provided what I was looking for, especially with DocumentDB, so I worked through a sample project of my own, which I'm sharing in this blog post to hopefully benefit others on the same path.

A quick message on Azure DocumentDB

One of the foundations of this solution is Azure's DocumentDB service. DocumentDB is one of Azure's schema-free NoSQL \ JSON database offerings. If you've had experience with MongoDB you will find DocumentDB very familiar; in fact, DocumentDB introduced protocol compatibility with MongoDB so your existing MongoDB apps can be quickly migrated to DocumentDB. In addition to all the benefits you might expect from a NoSQL solution, you also get the high availability, high scalability, and low latency benefits of the Azure platform. You can learn more about DocumentDB over at https://docs.microsoft.com/en-us/azure/documentdb/documentdb-introduction.

Prerequisites

Before getting started there are a couple of prerequisites I suggest for the best experience while working with the source code of this project.

Visual Studio Code

If you're not using Visual Studio Code I highly encourage you to check it out. It's a fantastic free code editor from Microsoft. Grab yourself a copy at https://code.visualstudio.com/. If you're already using another editor such as Atom or Sublime you're still good to go, but you will need to make adjustments for your own debugging and development workflow in that editor. If you're using Visual Studio Code you should have a solid "F5" debugging experience with the current configuration.

Azure DocumentDB Emulator

This project is pre-configured to use the Azure DocumentDB Emulator so no Azure Subscription is required!  The emulator is still in preview but is pretty solid and saves a lot of money for those just wishing to evaluate DocumentDB without any costs or subscriptions. More details on the emulator and links to download can be found at https://docs.microsoft.com/en-us/azure/documentdb/documentdb-nosql-local-emulator . 

If you'd like, you can also use a live instance of DocumentDB with your Azure subscription, but please make sure you understand the cost structure of DocumentDB, as creating additional collections within a DocumentDB database has a cost involved.

Getting Started

For this sample project we'll be building a hypothetical photo location information storage service API. Our API will accept requests to create, update, delete, and get photo location information. In a future post we'll spend some more time with DocumentDB's geospatial query features, but for now we'll keep it to simple CRUD operations.
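
To make the data concrete, here is the general shape of a photo location document the service will store (the values are made up; the matching TypeScript definition appears later in PhotoLocationDocument.ts):

{
    "id": "42",
    "name": "Golden Gate Overlook",
    "tags": ["bridge", "sunset"],
    "address": {
        "street": "Battery East Rd",
        "city": "San Francisco",
        "zip": "94129",
        "country": "USA"
    },
    "geoLocation": {
        "type": "Point",
        "coordinates": [-122.4783, 37.8199]
    }
}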

The entire project can be found over at https://github.com/joshdcar/azure-documentdb-node-typescript.  Feel free to clone\review the code there. 

For this project we'll be making use of the following stack:

  • NodeJS - 6.10.0 (Runtime)
  • Express - 4.14.1 (Host)
  • TypeScript - 2.2.1 (Language)
  • Mocha\Chai - 2.2.39\3.4.35 (Testing)
  • Azure DocumentDB Emulator - 1.11.136.2 (Storage)

Project Setup

There are some folks with pretty strong opinions about a project's folder structure, but I tend to side more with consistency than any particular ideology.

-- dist
-- src
+ -- data
 + -- LocationData.ts
 + -- LocationDataConfig.ts
 + -- PhotoLocationDocument.ts
+ -- routes
 + -- PhotoLocationRouter.ts
+ -- app.ts
+ -- index.ts
-- test
+ -- photolocation.test.ts
-- gulpfile.js
-- package.json
-- tsconfig.json

Let's break down some of these project elements:

  • dist - build destination for compiled javascript
  • src  - project source code containing our typescript. We'll be breaking down each of the source files further down in the post
  • test - test scripts
  • gulpfile.js - our build tasks
  • package.json - project metadata & NPM dependencies
  • tsconfig.json - typescript configuration file

NPM PACKAGES - PACKAGE.JSON

One of my pet peeves with sample projects, especially when I’m new to a technology, is unexplained dependencies. With the multitude of open source tools and libraries I often find myself looking up modules to find out what they do and whether I need them. This muddies the waters for those not as familiar with the stack, so I find it helpful to keep dependencies to a minimum in my projects and to explain each of them and their purpose.

{
  "name": "azure-documentdb-node-typescript",
  "version": "1.0.0",
  "description": "A sample Node.JS based REST Service utilizing Microsoft Azure DocumentDB and TypeScript",
  "main": "dist/index.js",
  "scripts": {
    "test": "mocha --reporter spec --compilers ts:ts-node/register test/**/*.test.ts"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/joshdcar/azure-documentdb-node-typescript.git"
  },
  "keywords": [],
  "author": "Joshua Carlisle (www.joshcarlisle.io)",
  "license": "MIT",
  "bugs": {
    "url": "https://github.com/joshdcar/azure-documentdb-node-typescript/issues"
  },
  "homepage": "https://github.com/joshdcar/azure-documentdb-node-typescript#readme",
  "devDependencies": {
    "@types/chai": "^3.4.35",
    "@types/chai-http": "0.0.30",
    "@types/debug": "0.0.29",
    "@types/documentdb": "0.0.35",
    "@types/express": "^4.0.35",
    "@types/mocha": "^2.2.39",
    "@types/node": "^7.0.5",
    "chai": "^3.5.0",
    "chai-http": "^3.0.0",
    "del": "^2.2.2",
    "documentdb": "^1.10.2",
    "gulp": "^3.9.1",
    "gulp-mocha": "^4.0.1",
    "gulp-sourcemaps": "^2.4.1",
    "gulp-typescript": "^3.1.5",
    "mocha": "^3.2.0",
    "morgan": "^1.8.1",
    "ts-node": "^2.1.0",
    "typescript": "^2.2.1"
  },
  "dependencies": {
    "body-parser": "^1.16.1",
    "express": "^4.14.1"
  }
}


  • @types/*  - these are typescript definition files so our typescript code can interact with external libraries in a defined manner (key to intellisense in many editors)
  • Typescript - core typescript module
  • gulp/del - javascript based task runner we use for typescript builds and any extra needs. Del is a module that deletes files for us
  • gulp-sourcemaps/gulp-typescript - helper modules to allow us to use gulp to compile our typescript during builds
  • Mocha - A javascript testing framework and test runner
  • chai/chai-http - a testing assertion framework we use for creating tests. We're specifically testing out http REST requests
  • Gulp-mocha - A gulp task to help us run mocha tests in gulp (NOTE: I'm running into issues with this one, so it remains while I sort it out, but we'll be running tests from npm scripts - more on this further in the post)
  • Ts-node - A typescript helper module used by Mocha for tests written in TypeScript
  • Documentdb: Azure's DocumentDB Javascript Module
  • Express - our service host
  • Body-parser - a module that helps express parse JSON parameters automatically for us.

Setting up and working with Typescript

TypeScript is fairly straightforward, especially for developers who are comfortable working with compiled languages. Essentially, TypeScript provides us with advanced language features that aren't yet available in the current version of JavaScript. It does this by compiling TypeScript down to a targeted version of JavaScript. For browser applications this is typically ES5, but for Node.JS based applications we can reliably target ES6, which is newer but generally not available in most browsers. The end result of that process is always standard JavaScript; TypeScript is not actually a runtime environment. For .NET developers, this is very much akin to C# being compiled down to IL. Additionally, to make debugging easier we have the concept of sourcemaps, which map our generated JavaScript back to the original lines of TypeScript code.
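
As a minimal illustration of that compile step (example code, not part of the project), here is a typed TypeScript function and roughly the ES6 output the compiler emits – the type annotations simply disappear:

// TypeScript source
function describeLocation(name: string, tags: string[]): string {
    return `${name} [${tags.join(', ')}]`;
}

// Compiled ES6 output (approximately) - same code with the type annotations erased
// function describeLocation(name, tags) {
//     return `${name} [${tags.join(', ')}]`;
// }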

Where the confusion often occurs is where\when\how to compile. There are lots of options. For front-end UI developers, webpack is a common tool. For Node.js projects such as this, another common approach, and the one we make use of in this project, is gulp.

Configuring Typescript and Gulp

Getting TypeScript to compile is typically pretty straightforward, but getting it to compile properly for debugging can be another story entirely, and that is typically the pain point for many developers.

The first file we'll be working with is the tsconfig.json which provides compiler options to the typescript compiler.

{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "outDir": "dist"
  },
  "include": [
    "src/**/*.ts"
  ],
  "exclude": [
    "node_modules"
  ]
}


In the case of our project we wanted to target JavaScript ES6, use commonjs for our module format, and specify which directories we want the compiler to include and exclude.

@TYPES - WORKING WITH EXTERNAL LIBRARIES

Type definition files allow TypeScript to work with external libraries that were not originally developed in TypeScript. Most popular JavaScript libraries have had types defined either by the project itself or by contributions from the TypeScript community at large. They define the interfaces, types, functions, and data types that the library exposes. The way in which TypeScript references and works with types changed from 1.x to 2.x, so you may still see references to the old way. Ensure that you're using @types/* to get the latest versions of your types from NPM, and that they are in sync with the version of the library you pulled down from NPM, if your library doesn't already include types.
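
For example, a hand-written definition file for a hypothetical library (not one used in this project) would look something like this – it describes shapes to the compiler but contains no implementation:

// photo-metadata.d.ts (hypothetical library, for illustration only)
declare module "photo-metadata" {
    export interface ExifData {
        camera: string;
        takenOn: Date;
    }
    export function readExif(path: string): ExifData;
}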

GULP

Gulp is a JavaScript based task runner. It has a very large community of plugins to help execute practically every conceivable task you might need for a build. To go back to a .NET comparison, Gulp is akin to the features you may find in MSBuild.

var gulp = require("gulp");
var ts = require("gulp-typescript");
var mocha = require('gulp-mocha');
var tsProject = ts.createProject("tsconfig.json");
var sourcemaps = require("gulp-sourcemaps");
var del = require("del");
var path = require("path");

/* Test Tasks
WARNING:  GULP MOCHA task is a work in progress and currently has issue.
NPM Script "Test" (package.json) currently more reliable
 */
gulp.task('test', function(){
    return gulp.src('./test/**/*.ts')
    .pipe(tsProject()).js
    .pipe(gulp.dest('.'))
    .pipe(mocha({
        reporter: 'progress'
    }))
})

/* Cleanup the DIST folder before compiling */
gulp.task('clean', function(){
    return del('dist/**/*');
})

/* Compile the Typescript */
/* IMPORTANT: The Sourcemaps settings here are important or the sourcemap url and source path in the source
maps will not be correct and your breakpoints will not hit - this is especially important for subfolders in the dist folder   */
gulp.task('compile', ['clean'], function () {
var result = tsProject.src()
                    .pipe(sourcemaps.init())
                    .pipe(tsProject()).js
                    .pipe(sourcemaps.write('.', {includeContent:false, sourceRoot: '.'})) 
                    .pipe(gulp.dest('dist'));
});

/* The default task will allow us to just run "gulp" from the command line to build everything */
gulp.task('default', ['compile']);


Gulp tasks are defined as JavaScript functions and can optionally declare other tasks they depend on, which are executed first; syntactically, this is the optional array of task names passed as the second argument to gulp.task. In our case we have a "compile" task that executes the gulp TypeScript plugin and works in conjunction with the sourcemaps plugin to compile and output our JavaScript to the dist directory. Note that we reference the "dist" directory both within the gulp task and in the tsconfig file. The gulp plugin uses the base settings from tsconfig.json to execute the TypeScript compilation process but also supports gulp's "stream" concept to output files to a desired location.

TAKE NOTE: If you change the commands within the compile task you may not have full debugging support within Visual Studio Code. The settings provided ensure that the sourcemaps have the correct references and paths used by Visual Studio Code to allow breakpoints to work as expected. These few lines of code took me several hours of research and trial & error to get all the stars to line up. If you have problems, the first place to start looking is your generated *.map files; ensure the paths they contain are what you expect.
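
For reference, the Visual Studio Code side of that F5 experience lives in .vscode/launch.json. The repository includes its own configuration; the sketch below shows the typical shape for a project like this (the preLaunchTask assumes a matching entry in tasks.json that runs the gulp build) – the important bits are sourceMaps being enabled and outFiles pointing at the compiled dist folder:

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "request": "launch",
            "name": "Launch REST Service",
            "program": "${workspaceRoot}/dist/index.js",
            "preLaunchTask": "compile",
            "sourceMaps": true,
            "outFiles": ["${workspaceRoot}/dist/**/*.js"]
        }
    ]
}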

Configuring Express and your API Routes

Express is a web application framework for Node.js. Express will host our REST services and provide all the middleware plumbing we need for our service.

Index.ts

Index.ts is essentially responsible for wiring our Express based application into the HTTP pipeline within Node.js. It's also the entry point for our application, where we can configure options such as the port to listen on.

import * as http from 'http';
import * as debug from 'debug';

import App from './App';

const debugLog = debug('ts-express:server'); // named debug logger used in onListening below

const port = normalizePort(process.env.PORT || 3000);
App.set('port', port);

const server = http.createServer(App);
server.listen(port);
server.on('error', onError);
server.on('listening', onListening);

function normalizePort(val: number|string): number|string|boolean{

    let port: number = (typeof val === 'string') ? parseInt(val,10): val;

    if(isNaN(port)) return val;
    else if(port > 0 ) return port;
    else return false;

}

function onError(error: NodeJS.ErrnoException): void{

     if (error.syscall !== 'listen') throw error;
  let bind = (typeof port === 'string') ? 'Pipe ' + port : 'Port ' + port;
  switch(error.code) {
    case 'EACCES':
      console.error(`${bind} requires elevated privileges`);
      process.exit(1);
      break;
    case 'EADDRINUSE':
      console.error(`${bind} is already in use`);
      process.exit(1);
      break;
    default:
      throw error;
  }
}

function onListening(): void {
  let addr = server.address();
  let bind = (typeof addr === 'string') ? `pipe ${addr}` : `port ${addr.port}`;
  debugLog(`Listening on ${bind}`);
}


NOTE: I need to give some credit to another developer for Index.ts, because I know many aspects of this module were copied from somewhere, but unfortunately I can't recall the source. If I do find it, I'll make sure to update the source and this post with the developer's name.

App.ts

App.ts sets up an instance of Express and provides us a location to configure our middleware, with some additional plumbing such as JSON parsing, URL encoding, and some basic logging. Also key to our application is the configuration of the routes used by our API and the code that is going to handle them. Other possible middleware components include authentication, authorization, and caching, to name a few.

/*  Express Web Application - REST API Host  */
import * as path from 'path';
import * as express from 'express';
import * as logger from 'morgan';
import * as bodyParser from 'body-parser';
import PhotoLocationRouter from './routes/PhotoLocationRouter';

class App{

    public express: express.Application;
    
    constructor(){
        this.express = express();
        this.middleware();
        this.routes();
    }
    
    private middleware(): void{
        this.express.use(logger('dev'));
        this.express.use(bodyParser.json());
        this.express.use(bodyParser.urlencoded({extended: false}));
    }

    private routes(): void{
        let router = express.Router();
        this.express.use('/api/v1/photolocations', PhotoLocationRouter);
    }

}

export default new App().express;


PhotoLocationRouter.ts

The core of how our service handles requests at a particular endpoint is managed within the router. We define the functions that execute for a given HTTP verb and any appropriate route parameters:

this.router.get("/:id", this.GetPhotoLocation),
this.router.post("/", this.AddPhotoLocation),
this.router.put("/",this.UpdatePhotoLocation),
this.router.delete("/:id",this.DeletePhotoLocation)

For more complex applications it can be beneficial to follow this separation of routes from the app. In the case of our sample project we're following REST conventions, using GET/POST/PUT/DELETE appropriately for the different CRUD operations.

public GetPhotoLocation(req:Request, res: Response){

    let query:string = req.params.id;
    var data:LocationData = new LocationData();

    data.GetLocationAsync(query).then( requestResult => {
        res.status(200).send(requestResult);
    }).catch( e => {
        res.status(404).send({
            message: e.message,
            status: res.status
        });
    });

}

Each operation has a Request and Response object to interact with. In the request object we can access the parameters from either the URL or the body. Our body-parser middleware neatly packs the body data into a JSON object (given the correct content-type header), and any URL parameters get extracted from the URL and packed into a property named based on the pattern provided in the route. In the above example we're able to access the "id" parameter. The response object allows us to respond with specific, appropriate HTTP codes and any appropriate payload of data.
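
As a quick illustration of that mapping (hypothetical values), given the /api/v1/photolocations route registered in App.ts:

// GET  /api/v1/photolocations/42   ->  req.params.id === "42"
// POST /api/v1/photolocations      ->  req.body is the parsed JSON document
//      (requires the Content-Type: application/json header so body-parser kicks in)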

import { Router, Request, Response, NextFunction} from 'express';
import { LocationData } from '../data/LocationData';
import { PhotoLocationDocument } from  '../data/PhotoLocationDocument';

export class PhotoLocationRouter {

    router:Router;

    constructor(){
        this.router = Router();
        this.init();
    }

    public GetPhotoLocation(req:Request, res: Response){

        let query:string = req.params.id;
        var data:LocationData = new LocationData();

        data.GetLocationAsync(query).then( requestResult => {
            res.status(200).send(requestResult);
        }).catch( e => {
                res.status(404).send({
                    message: e.message,
                    status: res.status
                });
        });

    }

    public AddPhotoLocation(req:Request, res: Response){

        var doc: PhotoLocationDocument = <PhotoLocationDocument>req.body;
        var data:LocationData = new LocationData();

        data.AddLocationAsync(doc).then( requestResult => {
            res.status(200).send(requestResult);
        }).catch( e => {
                res.status(404).send({
                    message: e.message,
                    status: res.status
                });
        });

    }

    public UpdatePhotoLocation(req:Request, res: Response){

        var doc: PhotoLocationDocument = <PhotoLocationDocument>req.body;
        var data:LocationData = new LocationData();

        data.UpdateLocationAsync(doc).then( requestResult => {
            res.status(200).send(requestResult);
        }).catch( e => {
                res.status(404).send({
                    message: e.message,
                    status: 404
                });
        });

    }

    public DeletePhotoLocation(req:Request, res: Response){

            let query:string = req.params.id;
            var data:LocationData = new LocationData();

            data.DeletePhotoLocationAsync(query).then( requestResult => {
                res.status(204).send();
                }).catch( e => {
                res.status(404).send({
                        message: e.message,
                        status: 404
                    });
            });

    };

    init(){
        this.router.get("/:id", this.GetPhotoLocation),
        this.router.post("/", this.AddPhotoLocation),
        this.router.put("/",this.UpdatePhotoLocation),
        this.router.delete("/:id",this.DeletePhotoLocation)
    }

}

const photoLocationRouter = new PhotoLocationRouter();
photoLocationRouter.init();

export default photoLocationRouter.router;


NOTE: I dislike peppering my blog posts with disclaimers, but this configuration is intended for development use only. Additional configuration options can\should be applied for production applications, especially around security and threat hardening. I say this because of a recent Express vulnerability that left a lot of Node.JS sites unnecessarily exposed. For a good place to start, check out Helmet over at https://www.npmjs.com/package/helmet
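
For instance, wiring Helmet into the middleware() method of App.ts is normally a one-liner – a sketch, assuming you have installed the helmet and @types/helmet packages:

// npm install helmet @types/helmet --save
import * as helmet from 'helmet';

// inside App.middleware():
this.express.use(helmet());  // applies a sensible set of security-related HTTP headers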

Working with DocumentDB

DocumentDB is a NoSQL solution that at its core stores JSON documents. There are various best practices around working with DocumentDB, but sticking with the basic concepts, we'll be creating, updating, deleting, and querying those JSON documents. Having worked with DocumentDB recently within the .NET Framework using LINQ, this was a bit of a departure, but luckily DocumentDB supports standard SQL queries, so my years of working with relational databases paid off once again! This was great foresight from Microsoft to make use of SQL instead of implementing yet another query language (I'm pointing at you, SharePoint CAML!)

DATABASE AND COLLECTION CREATION: This sample project uses a DocumentDB database and collection. The code does NOT create those for you, so the expectation is that you will do this step up front. The database is named photolocations and the collection locations. Because DocumentDB collections and databases have a direct cost involved (outside the emulator, of course), I'm not a fan of code that generates these entities for you; I'd rather have the process be explicit.
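
If you would rather script that one-time setup than click through the emulator's data explorer or the portal, the documentdb module can do it. A rough sketch (error handling kept minimal; the endpoint and key shown are placeholders for your emulator or Azure values):

import { DocumentClient } from 'documentdb';

const host = 'https://localhost:8081';          // emulator endpoint (placeholder)
const masterKey = '<your emulator master key>'; // placeholder

const client = new DocumentClient(host, { masterKey: masterKey });

// Create the database, then the collection inside it
client.createDatabase({ id: 'photolocations' }, {}, (dbErr) => {
    if (dbErr) { throw dbErr; }
    client.createCollection('dbs/photolocations', { id: 'locations' }, {}, (collErr) => {
        if (collErr) { throw collErr; }
        console.log('photolocations database and locations collection created');
    });
});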

Defining our document

As an initial step we're going to define that JSON document within our project so we can work with it in a consistent way. In addition to our own properties, DocumentDB additionally has some standard properties that are part of every document. We can think of them as system fields. To better support these fields, the DocumentDB type definitions require us to implement these interfaces for new documents (such as the id) and for retrieved documents, which have additional fields such as the etag used for optimistic concurrency checks.

import {NewDocument, RetrievedDocument} from 'documentdb';

export class PhotoLocationDocument implements NewDocument<PhotoLocationDocument>,
                                              RetrievedDocument<PhotoLocationDocument> {
    /* NewDocument Interface */
    id:string;

    /* RetrievedDocument Interface */
    _ts:string;
    _self:string;

    /* Photo Location Properties */
    name:string;
    tags: string[];
    address: {
        street: string,
        city: string,
        zip: string,
        country: string
    };
    geoLocation: {
        type: string;
        coordinates: number[];
    }

}


ADVICE: To keep things simple we're exposing the DocumentDB JSON document directly from the REST service. There are many scenarios where you may want to have a separate model with different\fewer properties returned from the REST calls, based on your application's needs.
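
If you do go that route, a slimmer response model plus a small mapping function is usually all it takes – a sketch (the PhotoLocation interface and toPhotoLocation helper are hypothetical, not part of the sample project):

import { PhotoLocationDocument } from './PhotoLocationDocument';

// Response shape without the DocumentDB system fields (_ts, _self)
export interface PhotoLocation {
    id: string;
    name: string;
    tags: string[];
}

export function toPhotoLocation(doc: PhotoLocationDocument): PhotoLocation {
    return { id: doc.id, name: doc.name, tags: doc.tags };
}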

DocumentDB operations

The core heavy-lifting class within DocumentDB is DocumentClient. All operations such as queries, updates, inserts, and deletes are done through the DocumentClient class. These functions make heavy use of callbacks, so to make our own data layer functions friendlier to work against, we wrapped all of our calls within Promises. Note that many of the classes and interfaces we are working with are imported from the DocumentDB module and provided to us through those all-important type definitions.

import {DocumentClient, SqlQuerySpec, RequestCallback, QueryError, RequestOptions, SqlParameter, RetrievedDocument} from 'documentdb'
import {LocationDataConfig } from './LocationDataConfig';
import {PhotoLocationDocument } from './PhotoLocationDocument';

export class LocationData{

    private _config:LocationDataConfig;
    private _client: DocumentClient;

    constructor(){

        this._config = new LocationDataConfig();
        this._client = new DocumentClient(this._config.host, {masterKey: this._config.authKey}, this._config.connectionPolicy); 
                
    }

    public GetLocationAsync = (id:string) => {

        var that = this;

        return new Promise<PhotoLocationDocument>((resolve, reject) => {

            var options:RequestOptions = {};
            var params: SqlParameter[] = [{name: "@id", value: id }];

            // "locations" after FROM is just an alias for the collection root in DocumentDB SQL
            var query: SqlQuerySpec = { query: "SELECT * FROM locations WHERE locations.id = @id",
                                        parameters: params };
                                                    
            this._client.queryDocuments(this._config.collectionUrl,query)
                        .toArray((error:QueryError, result:RetrievedDocument<PhotoLocationDocument>[]): void =>{
                            
                            if (error){ reject(error); }
                            
                            if(result.length > 0){
                                resolve(<PhotoLocationDocument>result[0]);
                            }
                            else
                            {
                                reject({message: 'Location not found'});
                            }
                        });                                                         

        });

    }

    public AddLocationAsync = (photoLocation: PhotoLocationDocument) => {

        var that = this;

        return new Promise<PhotoLocationDocument>((resolve, reject) => {

                var options:RequestOptions = {};

                that._client.createDocument<PhotoLocationDocument>(that._config.collectionUrl, photoLocation, options, 
                        (error: QueryError, resource: PhotoLocationDocument, responseHeaders: any): void => {
                            if(error){
                                reject(error);
                            }
                            resolve(resource);
                });

        });

    }

    public UpdateLocationAsync = (photoLocation: PhotoLocationDocument) => {

        var that = this;

        return new Promise<PhotoLocationDocument>((resolve,reject) =>{

            var options:RequestOptions = {};
            var documentLink = that._config.collectionUrl + '/docs/' + photoLocation.id;

            that._client.replaceDocument<PhotoLocationDocument>(documentLink, photoLocation, options, 
                        (error: QueryError, resource: PhotoLocationDocument, responseHeaders: any): void => {
                            if(error){
                                reject(error);
                            }
                            resolve(resource);
                });

        });

    }

        public DeletePhotoLocationAsync = (id:string) => {

            var that = this;

            return new Promise<PhotoLocationDocument>((resolve, reject) => {

                    var options:RequestOptions = {};
                    var documentLink = that._config.collectionUrl + '/docs/' + id;
                
                    that._client.deleteDocument(documentLink, options, 
                        (error: QueryError, resource: any, responseHeaders: any): void => {
                            if(error){
                                reject(error);
                            }
                            resolve(resource);
                    });
            });

    }

}


Unit Testing

Cards on the table: I'm fairly new to JavaScript testing frameworks, but I found the test-first approach greatly reduced the debugging and development cycles. I created the test plans first, calling the service and letting them fail. I defined all my expectations (assertions) for the service and then developed the service until it passed all the tests. I found using Mocha \ Chai pretty straightforward and I'm looking forward to spending more time with the framework in the coming months and sharing that experience.

NOTE: I would not use my test plans for this project as a model for your own. They work, but they require more teardown and cleanup than I would like, and I suspect introducing the concept of mocking when working against DocumentDB would be beneficial. For now the tests are actually performing operations against DocumentDB. I place these firmly in the "place to start" category.
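
For orientation, here is a minimal sketch of what a Mocha \ Chai-HTTP test against the service could look like (this is not the project's actual test file; the import path is an assumption, and it still exercises the emulator-backed service):

import * as chai from 'chai';
import * as chaiHttp from 'chai-http';
import app from '../src/app';

chai.use(chaiHttp);
const expect = chai.expect;

describe('GET /api/v1/photolocations/:id', () => {

    it('returns 404 for an id that does not exist', (done) => {
        chai.request(app)
            .get('/api/v1/photolocations/does-not-exist')
            .end((err, res) => {
                expect(res).to.have.status(404);  // chai-http status assertion
                done();
            });
    });

});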

Development & Debugging

Throughout the development process I made heavy use of a Chrome plugin called Postman. Postman allows me to test various REST calls and adjust the body\headers\etc. as needed. I know there are other tools such as cURL that provide the same set of features, but I've found the features and user interface of Postman superior to most other solutions I've worked with, adding a great bit of efficiency to my development cycle.

TIP: Chrome has a nasty habit of automatically redirecting localhost traffic to SSL if anything else runs SSL on localhost – it's a feature called HSTS. In our case the emulator runs under SSL on localhost, so I battled this "feature" constantly. This is internal to Chrome and must be reset by going to a settings page within Chrome, chrome://net-internals/#hsts, and entering "localhost" to delete the domain. Even worse, this setting doesn't stick and routinely gets added back. The "fix" is to either run your service under SSL as well, add a host header for your app locally so it's on a different host than localhost, or use another browser such as Firefox or IE for testing. I know this is a safety feature, but it's very annoying for us developers and I wish there were a way to disable it permanently for given domains.

 

Parting thoughts

So we've learned the basics of building a simple REST service based on Node.JS, DocumentDB, and TypeScript. In an upcoming post I'll build on this solution to demonstrate more features of DocumentDB, especially the geospatial query features, and we'll also explore adding OAuth authentication to our service to safeguard access to our data.