Channel: Colin's ALM Corner

Aurelia: Object Binding Without Dirty Checking


Over the past few weeks I have been developing a Web UI using Aurelia by Rob Eisenberg. It’s really well thought out – though it’s got a steep learning curve at the moment since the documentation is still very sparse. Of course it hasn’t officially released yet, so that’s understandable!

TypeScript

I love TypeScript – if it wasn't for TypeScript, I would really hate Javascript development! Aurelia is written in ES6 and ES7, which is transpiled to ES5. You can easily write Aurelia apps in TypeScript – you can transpile in Gulp if you want to – otherwise Visual Studio will transpile to Javascript for you anyway. Since I use TypeScript, I also use Mike Graham's TypeScript Aurelia sample repos. He has some great samples there if you're just getting started with Aurelia/TypeScript. Code for this post comes from the "aurelia-vs-ts" solution in that repo.

Binding in Aurelia

Aurelia has many powerful features out the box – and most of its components are pluggable too – so you can switch out components as and when you need to. Aurelia allows you to separate the view (html) from the view-model (a Javascript class). When you load a view, Aurelia binds the properties of the view-model with the components in the view. This works beautifully for primitives – Aurelia knows how to create a binding between an HTML element (or property) and the object property. Let’s look at home.html and home.ts to see how this works:

<template>
  <section>
    <h2>${heading}</h2>

    <form role="form" submit.delegate="welcome()">
      <div class="form-group">
        <label for="fn">First Name</label>
        <input type="text" value.bind="firstName" class="form-control" id="fn" placeholder="first name">
      </div>
      <div class="form-group">
        <label for="ln">Password</label>
        <input type="text" value.bind="lastName" class="form-control" id="ln" placeholder="last name">
      </div>
      <div class="form-group">
        <label>Full Name</label>
        <p class="help-block">${fullName | upper}</p>
      </div>
      <button type="submit" class="btn btn-default">Submit</button>
    </form>
  </section>
</template>

This is the view (html) for the home page (views\home.html). You bind to variables in the view-model using the ${var} syntax – as in ${heading} and ${fullName | upper}. You can also bind attributes directly – value.bind="firstName" binds the value of the input box to the "firstName" property. The ${fullName | upper} expression pipes the bound value through a value converter that upper-cases it, and submit.delegate="welcome()" binds the welcome() function to the form's submit action. I don't want to get into all the Aurelia binding capabilities here – that's for another discussion.

Here’s the view-model (views\home.ts):

export class Home {
    public heading: string;
    public firstName: string;
    public lastName: string;

    constructor() {
        this.heading = "Welcome to Aurelia!";
        this.firstName = "John";
        this.lastName = "Doe";
    }

    get fullName() {
        return this.firstName + " " + this.lastName;
    }

    welcome() {
        alert("Welcome, " + this.fullName + "!");
    }
}

export class UpperValueConverter {
    toView(value) {
        return value && value.toUpperCase();
    }
}

The code is very succinct – and easy to test. Notice the absence of any “binding plumbing”. So how does the html know to update when values in the view-model change? (If you’ve ever used Knockout you’ll be wondering where the observables are!)
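For contrast, here's roughly what the same view-model would look like in Knockout – a purely illustrative sketch (not from this project) just to show the explicit plumbing that Aurelia lets you skip:

declare var ko: any;    // assume knockout.js is loaded globally

export class KnockoutHome {
    heading = ko.observable("Welcome to Knockout!");
    firstName = ko.observable("John");
    lastName = ko.observable("Doe");

    // computed values have to be declared explicitly so Knockout can track their dependencies
    fullName = ko.computed(() => this.firstName() + " " + this.lastName());
}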

Dirty Binding

The bindings for heading, firstName and lastName are primitive bindings – in other words, when Aurelia binds the html to the property, it creates an observer on the property so that when the property is changed, a notification of the change is triggered. It’s all done under the covers for you so you can just assume that any primitive on any model will trigger change notifications to anything bound to them.
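Conceptually, the observation works something like the sketch below – a deliberately simplified illustration of the idea, not Aurelia's actual observer implementation:

// swap the plain field for a getter/setter pair and notify a subscriber on assignment
function observeProperty(obj: any, name: string, onChange: (newValue: any, oldValue: any) => void) {
    var currentValue = obj[name];
    Object.defineProperty(obj, name, {
        get: function () { return currentValue; },
        set: function (newValue) {
            if (newValue !== currentValue) {
                var oldValue = currentValue;
                currentValue = newValue;
                onChange(newValue, oldValue);
            }
        }
    });
}

var model = { firstName: "John" };
observeProperty(model, "firstName", (n, o) => console.log(o + " -> " + n));
model.firstName = "Jane";   // logs "John -> Jane"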

However, if you’re not using a primitive, then Aurelia has to fall-back on “dirty binding”. Essentially it sets up a polling on the object (every 120ms). You’ll see this if you put a console.debug into the getter method:

get fullName() {
    console.debug("Getting fullName");
    return this.firstName + " " + this.lastName;
}

Here’s what the console looks like when you browse (the console just keeps logging forever and ever):

[screenshot: the console repeatedly logging "Getting fullName"]

Unfortunately there simply isn’t an easy way around this problem.

Declaring Dependencies

Jeremy Danyow did, however, leverage the pluggability of Aurelia and wrote aurelia-computed – a plugin for observing computed properties without dirty checking. This is now incorporated into Aurelia and is plugged in by default.

This plugin allows you to specify dependencies explicitly – thereby circumventing the need to dirty check. Here are the changes we need to make:

  1. Add a definition for the declarePropertyDependencies() method in Aurelia.d.ts (only necessary for TypeScript)
  2. Add an import to get the aurelia-binding libs
  3. Register the dependency

Add these lines to the bottom of the aurelia.d.ts file (in the typings\aurelia folder):

declare module "aurelia-binding" {
    function declarePropertyDependencies(moduleType: any, propName: string, deps: any[]): void;
}

This just lets Visual Studio know about the function for compilation purposes.

Now change home.ts to look as follows:

import aub = require("aurelia-binding");

export class Home {
    public heading: string;
    public firstName: string;
    public lastName: string;

    constructor() {
        this.heading = "Welcome to Aurelia!";
        this.firstName = "John";
        this.lastName = "Doe";
    }

    get fullName() {
        console.debug("Getting fullName");
        return this.firstName + " " + this.lastName;
    }

    welcome() {
        alert("Welcome, " + this.fullName + "!");
    }
}

aub.declarePropertyDependencies(Home, "fullName", ["firstName", "lastName"]);

export class UpperValueConverter {
    toView(value) {
        return value && value.toUpperCase();
    }
}

The import of "aurelia-binding" at the top and the declarePropertyDependencies call after the class are the lines I added. The declarePropertyDependencies call is the important one – it explicitly registers that the "fullName" property of the Home class depends on "firstName" and "lastName". Now any time either firstName or lastName changes, the value of "fullName" is recalculated. Bye-bye polling!

Here’s the console output now:

[screenshot: the console showing "Getting fullName" logged only 4 times]

We can see that the fullName getter is called 4 times. This is a lot better than polling the value every 120ms. (I’m not sure why it’s called 4 times – probably to do with how the binding is initially set up. Both firstName and lastName change when the page loads and they are instantiated to “John” and “Doe” so I would expect to see a couple firings of the getter function at least).

Binding to an Object

So we’re ok to bind to primitives – but we get stuck again when we want to bind to objects. Let’s take a look at app-state.ts (in the scripts folder):

import aur = require("aurelia-router");

export class Redirect implements aur.INavigationCommand {
    public url: string;
    public shouldContinueProcessing: boolean;

    /**
      * Application redirect (works with approuter instead of current child router)
      *
      * @url the url to navigate to (ex: "#/home")
      */
    constructor(url) {
        this.url = url;
        this.shouldContinueProcessing = false;
    }

    navigate(appRouter) {
        appRouter.navigate(this.url, { trigger: true, replace: true });
    }
}

class AppState {
    public isAuthenticated: boolean;
    public userName: string;

    /**
      * Simple application state
      *
      */
    constructor() {
        this.isAuthenticated = false;
    }

    login(username: string, password: string): boolean {
        if (username == "Admin" && password == "xxx") {
            this.isAuthenticated = true;
            this.userName = "Admin";
            return true;
        }
        this.logout();
        return false;
    }

    logout() {
        this.isAuthenticated = false;
        this.userName = "";
    }
}

var appState = new AppState();
export var state = appState;

The AppState is a static global object that tracks the state of the application. This is a good place to track the logged-in user, for example. I've added the userName property (and the code that sets and clears it in login() and logout()) so that we can expose AppState.userName. Let's open nav-bar.ts (in views\controls) and add a getter so that the nav-bar can display the logged-in user's name:

import auf = require("aurelia-framework");
import aps = require("scripts/app-state");

export class NavBar {
    static metadata = auf.Behavior.withProperty("router");

    get userName() {
        console.debug("Getting userName");
        return aps.state.userName;
    }
}

We can now bind to userName in the nav-bar.html view:

<template>
  <nav class="navbar navbar-default navbar-fixed-top" role="navigation">
    <div class="navbar-header">
      <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
        <span class="sr-only">Toggle Navigation</span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
      </button>
      <a class="navbar-brand" href="#">
        <i class="fa fa-home"></i>
        <span>${router.title}</span>
      </a>
    </div>

    <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
      <ul class="nav navbar-nav">
        <li repeat.for="row of router.navigation" class="${row.isActive ? 'active' : ''}">
          <a href.bind="row.href">${row.title}</a>
        </li>
      </ul>

      <ul class="nav navbar-nav navbar-right">
        <li><a href="#">${userName}</a></li>
        <li class="loader" if.bind="router.isNavigating">
          <i class="fa fa-spinner fa-spin fa-2x"></i>
        </li>
      </ul>
    </div>
  </nav>
</template>

I’ve added line 24. Of course we’ll see polling if we run the solution as is. So we can just declare the dependency, right? Let’s try it:

import auf = require("aurelia-framework");
import aub = require("aurelia-binding");
import aps = require("scripts/app-state");

export class NavBar {
    static metadata = auf.Behavior.withProperty("router");

    get userName() {
        return aps.state.userName;
    }
}

aub.declarePropertyDependencies(NavBar, "userName", [aps.state.userName]);

Seems to compile and run – but the value of userName is never updated!

It turns out that we can only declare dependencies to the same object (and only to primitives) using declarePropertyDependencies. Seems like we’re stuck.

The Multi-Observer

I posed this question on the gitter discussion page for Aurelia. The guys working on Aurelia (and the community) are very active there – I’ve been able to ask Rob Eisenberg himself questions! Jeremy Danyow is also active on there (as is Mike Graham) so getting help is usually quick. Jeremy quickly verified that declarePropertyDependencies cannot register dependencies on other objects. However, he promptly whacked out the “Multi-Observer”. Here’s the TypeScript for the class:

import auf = require("aurelia-framework");

export class MultiObserver {
    static inject = [auf.ObserverLocator];

    constructor(private observerLocator: auf.ObserverLocator) {
    }

    /**
     * Set up dependencies on an arbitrary object.
     * 
     * @param properties the properties to observe
     * @param callback the callback to fire when one of the properties changes
     * 
     * Example:
     * export class App {
     *      static inject() { return [MultiObserver]; }
     *      constructor(multiObserver) {
     *        var session = {
     *          fullName: 'John Doe',
     *          User: {
     *            firstName: 'John',
     *            lastName: 'Doe'
     *          }
     *        };
     *        this.session = session;
     *
     *        var disposeFunction = multiObserver.observe(
     *          [[session.User, 'firstName'], [session.User, 'lastName']],
     *          () => session.fullName = session.User.firstName + ' ' + session.User.lastName);
     *      }
     *    }
     */
    observe(properties, callback) {
        var subscriptions = [], i = properties.length, object, propertyName;
        while (i--) {
            object = properties[i][0];
            propertyName = properties[i][1];
            subscriptions.push(this.observerLocator.getObserver(object, propertyName).subscribe(callback));
        }

        // return dispose function
        return () => {
            while (subscriptions.length) {
                subscriptions.pop()();
            }
        }
    }
}

Add this file to a new folder called “utils” under “views”. To get this to compile, you have to add this definition to the aurelia.d.ts file (inside the aurelia-framework module declaration):

interface IObserver {
    subscribe(callback: Function): void;
}

class ObserverLocator {
    getObserver(object: any, propertyName: string): IObserver;
}

Now we can use the multi-observer to register a callback when any property on any object changes. Let’s do this in the nav-bar.ts file:

import auf = require("aurelia-framework");
import aub = require("aurelia-binding");
import aps = require("scripts/app-state");
import muo = require("views/utils/multi-observer");

export class NavBar {
    static metadata = auf.Behavior.withProperty("router");
    static inject = [muo.MultiObserver];

    dispose: () => void;
    userName: string;

    constructor(multiObserver: muo.MultiObserver) {
        // set up a dependency on the session router object
        this.dispose = multiObserver.observe([[aps.state, "userName"]],() => {
            console.debug("Setting new value for userName");
            this.userName = aps.state.userName;
        });
    }

    deactivate() {
        this.dispose();
    }
}

We register the function to execute when the value of the property on the object changes – we can execute whatever code we want in this callback.

Here’s the console after logging in:

[screenshot: the console output after logging in]

There’s no polling – the view-model is bound to the userName primitive on the view-model. But whenever the value of userName on the global state object changes, we get to update the value. We’ve successfully avoided the dirty checking!

One last note: we store the function returned when registering the dependency callback in a field called "dispose". We can then simply call this function when we want to unregister the callback (to free up resources). I've put the call in the deactivate() method, which is the method Aurelia calls on the view-model when navigating away from it. In this case it's not really necessary, since the nav-bar is "global" and we won't navigate away from it. But if you use the multi-observer in a view-model that is going to be unloaded (or navigated away from), be sure to put the dispose call somewhere sensible.

A big thank you to Jeremy Danyow for his help!

Happy binding!


Aurelia – Debugging from within Visual Studio


In my last couple of posts I've spoken about the amazing Javascript framework, Aurelia, that I've been coding in. Visual Studio is my IDE of choice – not only because I'm used to it but because it's just a brilliant editor – even for Javascript, Html and other web technologies. If you're using VS for web development, make sure that you install Web Essentials – as the name implies, it's essential!

Debugging

One of the best things about doing web development in VS – especially if you have a lot of Javascript – is the ability to debug from within VS. You set breakpoints in your script, run your site in IE, and presto! you're debugging. You can see the call stack, autos, set watches – it's really great. Unfortunately, until recently I haven't been able to debug Aurelia projects in VS. We'll get to why that is shortly – but I want to take a small tangent to talk about console logging in Aurelia. It's been the lifesaver I've needed while working out why debugging Aurelia wasn't working.

Console

Few developers actually make use of the browser console while developing – which is a shame, since the console is really powerful. The easiest way to see it in action is to open an Aurelia project, locate app.ts (yes, I'm using TypeScript for my Aurelia development) and add a console.log() call to the code:

import auf = require("aurelia-framework");
import aur = require("aurelia-router");

export class App {
    static inject = [aur.Router];

    constructor(private router: aur.Router) {
        console.log("in constructor");
        this.router.configure((config: aur.IRouterConfig) => {
            config.title = "Aurelia VS/TS";
            config.map([
                { route: ["", "welcome"], moduleId: "./views/welcome", nav: true, title: "Welcome to VS/TS" },
                { route: "flickr", moduleId: "./views/flickr", nav: true },
                { route: "child-router", moduleId: "./views/child-router", nav: true, title: "Child Router" }
            ]);
        });
    }
}

The console.log call in the constructor is the line I added. Here it is in IE's console when I run the solution:

[screenshot: the log entry in IE's console]

(To access the console in Chrome or in IE, press F12 to bring up “developer tools” – then just open the console tab). Here’s the same view in Chrome:

[screenshot: the same log entry in Chrome's console]

There are a couple of logging methods: log(), info(), warn(), error() and debug(). You can also group entries together and do a host of other useful debugging tricks, like timing or logging stack traces.
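For example, a few of those tricks – these are all standard console API calls, so this snippet is just an illustration rather than anything Aurelia-specific:

// group related entries so they can be collapsed in the console
console.group("Router setup");
console.log("mapping routes...");
console.log("done");
console.groupEnd();

// time a block of code
console.time("configure");
// ... do some work ...
console.timeEnd("configure");   // logs "configure: <n>ms"

// log a message together with the current stack trace
console.trace("who called me?");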

Logging an Object

Besides simply logging a string message you can also log an object. I found this really useful for inspecting objects I was working with – usually VS lets you inspect objects, but since I couldn't access the object in VS, I did it in the console. Let's change the console.log line to console.log("In constructor: %O", this). The "%O" argument tells the console to log a hyperlink to the object that you can then use to inspect it. Here is the same console output, this time with "%O" (note: you have to have the console open for this link to actually expand – otherwise you'll just see a log entry, but won't be able to inspect the object properties):

[screenshot: the console output with an expandable object link]

You can now expand the nodes in the object tree to see the properties and methods of the logged object.

Aurelia Log Appenders

If you’re doing a lot of debugging, then you may end up with dozens of view-models. Aurelia provides a LogManager class – and you can add any LogAppender implementation you want to create custom log collectors. (I do this for Application Insights so that you can have Aurelia traces sent up to App Insights). Aurelia also provides an out-of-the-box ConsoleLogAppender. Here’s how you can add it (and set the logging level) – I do this in main.ts just before I bootstrap Aurelia:

auf.LogManager.addAppender(new aul.ConsoleAppender());
auf.LogManager.setLevel(auf.LogManager.levels.debug);
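
As an aside, a custom appender is just a class that implements the log-level methods. Here's a rough sketch of one – the method shapes mirror what the ConsoleAppender does, but treat the exact signatures (and the logger.id property) as my assumptions rather than documented API:

export class TraceAppender {
    // each method receives the logger that raised the message plus the logged arguments
    debug(logger, ...rest) { this.send("debug", logger.id, rest); }
    info(logger, ...rest) { this.send("info", logger.id, rest); }
    warn(logger, ...rest) { this.send("warn", logger.id, rest); }
    error(logger, ...rest) { this.send("error", logger.id, rest); }

    private send(level: string, source: string, args: any[]) {
        // forward the entry to whatever collector you like (App Insights, your own API, etc.)
        console.log("[" + level + "] [" + source + "]", args);
    }
}

You'd register it the same way as the ConsoleAppender: auf.LogManager.addAppender(new TraceAppender());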

Now we can change the app.ts file to create a logger specifically for the class – anything logged to this will be prepended by the class name:

import auf = require("aurelia-framework");
import aur = require("aurelia-router");

export class App {
    private logger: auf.Logger = auf.LogManager.getLogger("App");

    static inject = [aur.Router];

    constructor(private router: aur.Router) {
        this.logger.info("Constructing app");

        this.router.configure((config: aur.IRouterConfig) => {
            this.logger.debug("Configuring router");
            config.title = "Aurelia VS/TS";
            config.map([
                { route: ["", "welcome"], moduleId: "./views/welcome", nav: true, title: "Welcome to VS/TS" },
                { route: "flickr", moduleId: "./views/flickr", nav: true },
                { route: "child-router", moduleId: "./views/child-router", nav: true, title: "Child Router" }
            ]);
        });
    }
}

I set up a logger for the class as a private field, which I then use for the info call in the constructor and the debug call in the router configuration callback. Here's the console output:

[screenshot: the console output with [App]-prefixed info and debug entries]

You can see how the “info” and the “debug” are colored differently (and info has a little info icon in the left gutter) and both entries are prepended with “[App]” – this makes wading through the logs a little bit easier. Also, when I want to switch the log level, I just set it down to LogManager.levels.error and no more info or debug messages will appear in the console – no need to remove them from the code.

Why Can’t VS Debug Aurelia?

Back to our original problem: debugging Aurelia in Visual Studio. Here’s what happens when you set a breakpoint using the skeleton app:

[screenshot: a breakpoint in Visual Studio warning that no symbols have been loaded]

Visual Studio says that “No symbols have been loaded for this document”. What gives?

The reason is that Visual Studio cannot debug modules loaded using system.js. Let’s look at how Aurelia is bootstrapped in index.html:

<body aurelia-app>
    <div class="splash">
        <div class="message">Welcome to Aurelia</div>
        <i class="fa fa-spinner fa-spin"></i>
    </div>
    <script src="jspm_packages/system.js"></script>
    <script src="config.js"></script>

    <!-- jquery layout scripts -->
    <script src="Content/scripts/jquery-1.8.0.min.js"></script>
    <script src="Content/scripts/jquery-ui-1.8.23.min.js"></script>
    <script src="Content/scripts/jquery.layout.min.js"></script>

    <script>
    //System.baseUrl = 'dist';
    System.import('aurelia-bootstrapper');
    </script>
</body>

You can see that system.js is being used to load Aurelia and all its modules – it will also be the loader for your view-models. I’ve pinged the VS team about this – but haven’t been able to get an answer from anyone as to why this is the case.

Switching the Loader to RequireJS

Aurelia (out of the box) uses jspm to load its packages – and it's a great tool. Unfortunately, if you want to debug with VS you'll have to find another module loader. Fortunately Aurelia allows you to swap out your loader! I got in touch with Mike Graham via the Aurelia gitter discussion page – and he was kind enough to point me in the right direction – thanks Mike!

Following some examples by Mike Graham, I was able to switch from system.js to requirejs. The switch is fairly straightforward – here are the steps:

  1. Create a bundled require-compatible version of aurelia using Mike’s script and add it to the solution as a static script file. Updating the file means re-running the script and replacing the aurelia-bundle. Unfortunately this is not as clean an upgrade path as jspm, where you’d just run “jspm update” to update the jspm packages automatically.
  2. Change the index.html page to load require.js and then configure it.
  3. Make a call to load the Aurelia run-time using requirejs.
  4. Fix relative paths to views in router configurations – though this may not be required for everyone, depending on how you’re referencing your modules when you set up your routes.

Here’s an update index page that uses requirejs:

<body aurelia-main>
    <div class="splash">
        <div class="message">Welcome to Aurelia AppInsights Demo</div>
        <i class="fa fa-spinner fa-spin"></i>
    </div>

    <script src="Content/scripts/core-js/client/core.js"></script>
    <script src="Content/scripts/requirejs/require.js"></script>
    <script>
        var baseUrl = window.location.origin
        console.debug("baseUrl: " + baseUrl);
        require.config({
            baseUrl: baseUrl + "/dist",
            paths: {
                aurelia: baseUrl + "/Content/scripts/aurelia",
                webcomponentsjs: baseUrl + "/Content/scripts/webcomponentsjs",
                dist: baseUrl + "/dist",
                views: baseUrl + "/dist/views",
                resources: baseUrl + "/dist/resources",
            }
        });

        require(['aurelia/aurelia-bundle-latest']);
    </script>
</body>

Instead of loading system.js, you now load core.js and require.js. Then a script block (which could be placed into its own file) configures requirejs – it sets the baseUrl as well as some paths. You'll have to play with these until requirejs can successfully locate all of your dependencies and view-models. The final require call loads the Aurelia runtime bundle via requirejs – this then calls your main or app class, depending on how you configure the <body> tag (either as aurelia-main or aurelia-app).

Now that you’re loading Aurelia using requirejs, you can set breakpoints in your ts file (assuming that you’re generating symbols through VS or through Gulp/Grunt):

[screenshot: a breakpoint being hit in a TypeScript file in Visual Studio]

Voila – you can now debug Aurelia using VS!
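If you're generating the JavaScript through Gulp rather than letting Visual Studio transpile, it's the source maps that let those breakpoints bind. Here's a hedged sketch of such a Gulp task using gulp-typescript and gulp-sourcemaps – the plugin choice, task name and paths are my assumptions, not the exact setup from the sample repo:

var gulp = require("gulp");
var ts = require("gulp-typescript");
var sourcemaps = require("gulp-sourcemaps");

gulp.task("build-system", function () {
    var tsResult = gulp.src("src/**/*.ts")
        .pipe(sourcemaps.init())                          // start collecting source map info
        .pipe(ts({ target: "ES5", module: "amd" }));      // transpile to ES5 AMD modules

    return tsResult.js
        .pipe(sourcemaps.write("."))                      // emit .js.map files next to the output
        .pipe(gulp.dest("dist"));
});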

Conclusion

When you’re doing Aurelia development using Visual Studio, you’re going to have to decide between the ease of package update (using jspm) or debugging ability (using requirejs). Using requirejs requires (ahem) a bit more effort since you need to bundle Aurelia manually, and I found getting the requirejs paths correct proved a fiddly challenge too. However, the ability to set breakpoints in your code in VS and debug is, in my opinion, worth the effort. I figure you’re probably not going to be updating the Aurelia framework that often (once it stabilizes after release) but you’ll be debugging plenty. Also, don’t forget to use the console and log appenders! Every tool in your arsenal makes you a better developer.

Happy debugging!

P.S. If you know how to debug modules that are loaded using system.js from VS, please let the rest of us know!

Aurelia, Karma and More VS Debugging Goodness


In my previous post I walked through how to change Aurelia to load modules via Require.js so that you can set breakpoints and debug from VS when you run your Aurelia project. In this post I want to share some tips about unit testing your Aurelia view-models.

Unit Testing Javascript

If you aren’t yet convinced of the value of unit testing, please read my post about why you absolutely should be. Unfortunately, unit testing Javascript in Visual Studio (and during automated builds) is a little harder to do than running unit tests on managed code. This post will show you some of the techniques I use to unit test Javascript in my Aurelia project – though of course you don’t need to be using Aurelia to make use of these techniques. If you want to see the code I’m using for this post, check out this repo.

But I’ve already got tests!

This post isn’t going to go too much into how to unit test – there are hundreds of posts about how to test. I’m going to assume that you already have some unit tests. I’ll discuss the following topics in this post:

  • Basic Karma/Jasmine overview
  • Configuring Karma and RequireJS
  • Running Karma from Gulp
  • Using a SpecRunner.html page to enable debugging unit tests
  • Fudges to enable PhantomJS
  • Code Coverage
  • Running tests in your builds (using TeamBuild)
  • Karma VS Test adapter

Karma and Jasmine

There are many JavaScript testing frameworks out there. I like Jasmine as a (BDD) testing framework, and I like Karma (which used to be called Testacular) as a test runner. One of the things I like about Karma is that you can run your tests in several browsers – it also has numerous “reporters” that let you track the tests, and even add code coverage. Aurelia itself uses Karma for its testing.

Configuring Karma and RequireJS

To configure karma, you have to set up a karma config file – by convention it’s usually called karma.conf.js. If you use karma-cli, you can run “karma init” to get karma to lead you through a series of questions to help you set up a karma config file for the first time. I wanted to use requirejs, mostly because using requirejs means I can set breakpoints in Visual Studio and debug. So I made sure to answer “yes” for that question. Unfortunately, that opens a can of worms!

The reason for the "can of worms" is that karma tries to serve all the files necessary for the test run – but if they are AMD modules, then you can't just "serve" them – they need to be loaded by requirejs. In order to do that, we have to fudge the karma startup a little. We specify the files that should be served in the karma.conf.js file, being careful to mark the module files with "included: false". This flag tells karma to serve a file when it is requested, but not to execute it (think of it as treating the file as static text rather than a JavaScript file to execute). Then we create a "test-main.js" file to configure requirejs, load the modules and then launch karma.

Here’s the karma.conf.js file:

// Karma configuration
module.exports = function (config) {
    config.set({
        basePath: "",

        frameworks: ["jasmine", "requirejs", "sinon"],

        // list of files / patterns to load in the browser
        files: [
            // test specific files
            "test-main.js",
            "node_modules/jasmine-sinon/lib/jasmine-sinon.js",

            // source files
            { pattern: "dist/**/*.js", included: false },

            // test files
            { pattern: 'test/unit/**/*.js', included: false },

            // framework and lib files
            { pattern: "Content/scripts/**/*.js", included: false },
        ],

        // list of files to exclude
        exclude: [
        ],

        // available reporters: https://npmjs.org/browse/keyword/karma-reporter
        reporters: ["progress"],

        // web server port
        port: 9876,

        // enable / disable colors in the output (reporters and logs)
        colors: true,

        // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
        logLevel: config.LOG_DEBUG,

        // enable / disable watching file and executing tests whenever any file changes
        autoWatch: true,

        // available browser launchers: https://npmjs.org/browse/keyword/karma-launcher
        browsers: ["Chrome"],

        // Continuous Integration mode
        // if true, Karma captures browsers, runs the tests and exits
        singleRun: true
    });
};

Notes:

  • frameworks: tells karma which frameworks to use when running the tests – jasmine (the test framework), requirejs (for loading modules) and sinon (for mocking). These are installed using "npm install" with karma-jasmine, karma-requirejs and karma-sinon respectively
  • test-main.js and jasmine-sinon.js: files that the tests need – test-main to configure the modules for the test, and jasmine-sinon for the sinon matchers. Since these files are not marked "included: false", karma executes them on load
  • the dist pattern: all the source files we are testing, marked "included: false" so that karma serves them but doesn't execute them (requirejs will load them when they are requested)
  • the test/unit pattern: all the test (spec) files to run (again, served but not executed)
  • the Content/scripts pattern: libraries (including the Aurelia framework)

Here’s the test-main.js file:

var allTestFiles = [];
var allSourceFiles = [];

var TEST_REGEXP = /(spec|test)\.js$/i;
var SRC_REGEXP = /dist\/[a-zA-Z]+\/[a-zA-Z]+.js$/im;

var normalizePathToSpecFiles = function (path) {
    return path.replace(/^\/base\//, '').replace(/\.js$/, '');
};

var normalizePathToSourceFiles = function (path) {
    return path.replace(/^\/base\/dist\//, '').replace(/\.js$/, '');
};

var loadSourceModulesAndStartTest = function () {
    require(["aurelia/aurelia-bundle"], function () {
        require(allSourceFiles, function () {
            require(allTestFiles, function () {
                window.__karma__.start();
            });
        });
    });
};

Object.keys(window.__karma__.files).forEach(function (file) {
    if (TEST_REGEXP.test(file)) {
        allTestFiles.push(normalizePathToSpecFiles(file));
    } else if (SRC_REGEXP.test(file)) {
        allSourceFiles.push(normalizePathToSourceFiles(file));
    }
});

require.config({
    // Karma serves files under /base, which is the basePath from your config file
    baseUrl: "/",

    paths: {
        test: "/base/test",
        dist: "/base/dist",
        views: "/base/dist/views",
        resources: "/base/dist/resources",
        aurelia: "/base/Content/scripts/aurelia",
    },

    // dynamically load all test files
    deps: ["aurelia/aurelia-bundle"],

    // we have to kickoff jasmine, as it is asynchronous
    callback: loadSourceModulesAndStartTest
});

Notes:

  • TEST_REGEXP and SRC_REGEXP: regex patterns that match test (spec) files and source files respectively
  • normalizePathToSpecFiles and normalizePathToSourceFiles: normalize the paths to test or source files. This is necessary since the paths that requirejs uses are different from the base path that karma sets up
  • loadSourceModulesAndStartTest: loads the modules in order of dependency – starting with the Aurelia bundle (framework), then the sources, and then the test files – and finally starts the karma engine ourselves, since we're hijacking the default start to load everything via requirejs
  • the forEach over window.__karma__.files: hooks into the list of files karma serves and builds the spec and source module lists, normalizing the file paths
  • require.config: sets up the paths for requirejs, tells requirejs that the most "basic" dependency is the Aurelia bundle (deps), and tells it to execute our custom launch function (callback) once that dependency is loaded

To be honest, figuring out the final path and normalize settings was a lot of trial and error. I turned karma logging up to debug, and then just played around until karma was serving all the files and requirejs was happy with path resolution. You'll have to play around with these paths yourself for your project structure.

Running Karma Tests from the CLI

Now we can run the karma tests: simply type “karma start” and karma will fire up and run the tests: you should see the Chrome window popping up (assuming you’re using the Chrome karma launcher) and a message telling you that the tests were run successfully.

Running Karma from Gulp

Now that we have the tests running from the karma CLI, we can easily run them from within Gulp. We are using Gulp to transpile TypeScript to Javascript, compile LESS files to CSS and do minification and any other “production-izing” we need – so running tests in Gulp makes sense. Also, this way we make sure we’re using the latest sources we have instead of old stale code that’s been lying around (especially if you forget to run the gulp build tasks!). Here are the essential bits of the “unit-test” target in Gulp:

var gulp = require('gulp');
var karma = require("karma").server;

gulp.task("unit-test", ["build-system"], function () {
    return karma.start({
        configFile: __dirname + "/../../karma.conf.js",
        singleRun: true
    });
});

Notes:

  • We import Gulp and the karma server. I didn't install gulp-karma – rather, I just rely on "pure" karma.
  • We create a task called "unit-test" that first calls "build-system" before invoking karma
    • The build-system task transpiles TypeScript to JavaScript – we make sure that we generate un-minified files and source maps in this task (so that later on we can set breakpoints and debug)
    • We tell karma where to find the karma config file (the path is specified relative to __dirname, the directory the Gulp script runs from)
    • We tell karma to perform a single run, rather than keeping the browsers open and running the tests every time a file changes (a watch-style variant is sketched below)
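Here's that watch-style variant – a small sketch (not from the original gulpfile) that leaves karma running so the tests re-run on every change:

var gulp = require("gulp");
var karma = require("karma").server;

gulp.task("unit-test-watch", ["build-system"], function () {
    return karma.start({
        configFile: __dirname + "/../../karma.conf.js",
        singleRun: false    // keep the browsers open and re-run the tests on file changes
    });
});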

We can now run “gulp unit-test” from the command line, or we can execute the gulp “unit-test” task from the Visual Studio Task Runner Explorer (which is native to VS 2015 and can be installed into VS 2013 via an extension):

[screenshot: the unit-test task running in the Task Runner Explorer]

Debugging Tests

Now that we can run the tests from Gulp, we may want to debug while testing. In order to do that, we'll need to make sure the tests can run in IE (since VS will break on code running in IE). The karma launcher creates its own dynamic page to launch the tests, so we're going to need to code an html page ourselves if we want to be able to debug tests. I created a "SpecRunner.html" page in my unit-test folder:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Jasmine Spec Runner v2.2.0</title>

    <link rel="stylesheet" href="../../node_modules/jasmine-core/lib/jasmine-core/jasmine.css">
    <script src="/Content/scripts/jquery-2.1.3.js"></script>
    
    <!-- source files here... -->
    <script src="/Content/scripts/core-js/client/core.js"></script>
    <script src="/Content/scripts/requirejs/require.js"></script>
    
    <script>
        var baseUrl = window.location.origin;
        require.config({
            baseUrl: baseUrl + "/src",
            paths: {
                jasmine: baseUrl + "/node_modules/jasmine-core/lib/jasmine-core/jasmine",
                "jasmine-html": baseUrl + "/node_modules/jasmine-core/lib/jasmine-core/jasmine-html",
                "jasmine-boot": baseUrl + "/node_modules/jasmine-core/lib/jasmine-core/boot",
                "sinon": baseUrl + "/node_modules/sinon/pkg/sinon",
                "jasmine-sinon": baseUrl + "/node_modules/jasmine-sinon/lib/jasmine-sinon",
                aurelia: baseUrl + "/Content/scripts/aurelia",
                webcomponentsjs: baseUrl + "/Content/scripts/webcomponentsjs",
                dist: baseUrl + "/dist",
                views: baseUrl + "/dist/views",
                resources: baseUrl + "/dist/resources",
                test: "/test"
            },
            shim: {
                "jasmine-html": {
                    deps: ["jasmine"],
                },
                "jasmine-boot": {
                    deps: ["jasmine", "jasmine-html"]
                }
            }
        });

        // load Aurelia and jasmine...
        require(["aurelia/aurelia-bundle"], function() {
            // ... then jasmine...
            require(["jasmine-boot"], function () {
                // .. then jasmine plugins...
                require(["sinon", "jasmine-sinon"], function () {
                    // build a list of specs
                    var specs = [];
                    specs.push("test/unit/aurelia-appInsights.spec");

                    // ... then load the specs
                    require(specs, function () {
                        // finally we can run jasmine
                        window.onload();
                    });
                });
            });
        });
    </script>
</head>

<body>
</body>
</html>

Notes:

  • The require.config block configures requirejs for the tests
    • the base url is the window location – when debugging from VS this is usually http://localhost followed by some port
    • the paths section lets requirejs resolve all the modules we want to load, as well as some jasmine-specific libs
    • we need to shim a couple of jasmine libs to let requirejs know about their dependencies
  • The nested require calls load the dependencies in order – the Aurelia bundle, then jasmine-boot, then the jasmine plugins – so that requirejs loads them correctly
  • We build an array of all our test spec files
  • After loading the test specs, we trigger window.onload(), which kicks off the jasmine run

Again you see that we hijack the usual Jasmine startup so that we can get requirejs to load all the sources, libs and tests before launching the test runner. Now we set the SpecRunner.html page to be the startup page for the project, and hit F5:

[screenshot: the jasmine spec runner running in IE]

Now that we can finally run the tests from VS in IE, we can set a breakpoint, hit F5 and we can debug!

[screenshot: debugging a unit test in Visual Studio]

PhantomJS – mostly harmless*, er, headless

While debugging in IE or launching Chrome from karma is great, there are situations where we may want to run our tests without the need for an actual browser (like on the build server). Fortunately there is a tool that allows you to run “headless” tests – PhantomJS. And even better – there’s a PhantomJS launcher for karma! Let’s add it in:

Run "npm install karma-phantomjs-launcher --save-dev" to install the PhantomJS launcher for karma. Then change the launcher config in the karma.conf.js file from ["Chrome"] to ["PhantomJS"] and run karma. Unfortunately, this won't work: you'll likely see an error like this:

TypeError: 'undefined' is not a function (evaluating 'Array.prototype.forEach.call.bind(Array.prototype.forEach)')

This sounds like a native JavaScript problem – perhaps since Aurelia uses ES6 (and even ES7) features we need a more modern engine. Let's try installing PhantomJS2 (the karma launcher that uses the experimental PhantomJS 2, a more modern version of PhantomJS). That seems to get us a little further:

ReferenceError: Can't find variable: Map

Hmm. Map is, again, an ES6 structure. Fortunately there is a library with ES5 polyfills for some of the newer ES6 structures like Map: harmony-collections. We run "npm install harmony-collections --save-dev" to install the harmony-collections package, and then reference it in the files section of the karma.conf.js file (just after the jasmine-sinon entry):

"node_modules/harmony-collections/harmony-collections.min.js",

We get a bit further, but there is still something missing:

ReferenceError: Can't find variable: Promise

Again a little bit of searching leads to another node package: so we run “npm install promise-polyfill --save-dev” and again reference the file (just after the harmony-collections reference):

"node_modules/promise-polyfill/Promise.min.js",

Success! We can now run our tests headless.

In another system I was coding, I ran into a further problem with the “find” method on arrays. Fortunately, we can polyfill the find method too! I didn’t find a package for that – I simply added the polyfill from here into one of my spec files.
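For reference, the polyfill is only a few lines. Here's a minimal sketch along the lines of the standard MDN-style polyfill – the <any> casts are just to keep the ES5 TypeScript lib happy, since it doesn't know about find():

if (!(<any>Array.prototype).find) {
    (<any>Array.prototype).find = function (predicate: (value: any, index: number, array: any[]) => boolean, thisArg?: any): any {
        // walk the array and return the first element the predicate accepts
        for (var i = 0; i < this.length; i++) {
            if (predicate.call(thisArg, this[i], i, this)) {
                return this[i];
            }
        }
        return undefined;
    };
}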

Code Coverage

So we can now run tests from karma, from Gulp, from the Task Runner Explorer, and from VS using F5 – and we can run them headless using PhantomJS2. If we add in a coverage reporter, we can even get some code coverage analysis: run "npm install karma-coverage --save-dev". That installs a new reporter, which we need to add to the reporters section of karma.conf.js:

reporters: ["progress", "coverage"],

coverageReporter: {
    dir: "test/coverage/",
    reporters: [
        { type: 'lcov', subdir: 'report-lcov' },
        { type: 'text-summary', subdir: '.', file: 'coverage-summary.txt' },
        { type: 'text' },
    ]
},

We add the reporter in (just after “progress”). We also configure what sort of coverage information formats we want and which directory the output should go to. Since the coverage requires our code to be instrumented, we need to add in a preprocessor (just above reporters):

preprocessors: {
    "dist/**/*.js": ["coverage"]
},

This tells the coverage engine to instrument all the js files in the dist folder. Any other files we want to calculate coverage from, we’ll need to add in the glob pattern.

[screenshot: the coverage summary from the text reporter]

The output in the image above is from the "text" reporter. For more detailed coverage reports, we browse to test/coverage/report-lcov/lcov-report/index. We can then click through the folders and then down to the files, where we'll be able to see exactly which lines our tests covered (or missed):

[screenshot: the lcov HTML report showing line-by-line coverage]

This will help us discover more test opportunities.

Running Tests in Team Builds

With all the basics in place, we can easily include the unit tests into our builds. If you’re using TFS 2013 builds, you can just add a PowerShell script into your repo and then add that script as a pre- or post-test script. Inside the PowerShell you simply invoke “gulp unit-test” to run the unit tests via Gulp. I wanted to be a bit fancier, so I also added code to inspect the output from the test run and the coverage to add them into the build summary:

[screenshot: test results and coverage summary in the build report]

The full PowerShell script is here.

Seeing Tests in Visual Studio

Finally, just in case we don’t have enough ways of running the tests, we can install the Visual Studio Karma Test Adapter. This great adapter picks up the tests we’ve configured in karma and displays them in the test explorer window, where we can run them:

[screenshot: the karma tests listed in the Visual Studio Test Explorer]

Conclusion

Unit testing your front-end view-model logic is essential if you're going to deploy quality code. Enabling a good experience for unit testing requires a little bit of thought and some work – but once you've got the basics in place, you'll be good to go. Ensuring quality as you code means you'll have better quality down the road – and that means more time for new features and less time fixing bugs. Using Gulp and Karma enables continuous testing, and augmenting these with the techniques I've outlined you can also debug tests, run tests several ways and even integrate the tests (and coverage) into your builds.

Happy testing!

* Mostly Harmless– from the Hitchhikers Guide to the Galaxy by Douglas Adams

Why You Should Switch to Build VNext


Now that VNext builds are in Preview, you should be moving your build definitions over from the "old" XAML definitions to the new VNext definitions. Besides the fact that I suspect XAML builds will be deprecated at some point, the VNext builds are just much better, in almost every respect.

Why Switch?

There are several great reasons to switch to (or start using) the new VNext builds. Here’s a (non-exhaustive) list of some of my favorites:

  1. Build is now an orchestrator, not another build engine. This is important – VNext build is significantly different in architecture from the old XAML engine. Build VNext is basically just an orchestrator. That means you can orchestrate whatever build engine (or mechanism) you already have – no need to lose current investments in engines like Ant, CMake, Gradle, Gulp, Grunt, Maven, MSBuild, Visual Studio, Xamarin, XCode or any other existing engine. "Extra" stuff – like integrating with work items, publishing drops and test results and other "plumbing" – is handled by Build.VNext.
  2. Edit build definitions in the Web. You no longer have to download, edit or – goodness – learn a new DSL. You can stitch together fairly complex builds right in Web Access.
  3. Improved Build Reports. The build reports are much improved – especially Test Results, which are now visible on the Web (with nary a Visual Studio in sight).
  4. Improved logging. Logging in VNext builds is significantly better – the logs are presented in a console window, and not hidden somewhere obscure.
  5. Improved Triggers. The triggers have been improved – you can have multiple triggers for the same build, including CI triggers (where a checkin/commit triggers the build) and scheduled triggers.
  6. Improved Retention Policies. Besides being able to specify multiple rules, you can now also use “days” to keep builds, rather than “number” of builds. This is great when a build is being troublesome and produces a number of builds – if you were using “number of builds” you’d start getting drop-offs that you don’t really want.
  7. Composability. Composing builds from the Tasks is as easy as drag and drop. Setting properties is a snap, and properties such as “Always Run” make the workflow easy to master.
  8. Simple Customization. Have scripts that you want to invoke? No problem – drag on a “PowerShell” or “Bat” Task. Got a one-liner that needs to execute? No problem – use the “Command Line” task and you’re done. No mess, no fuss.
  9. Deep Customization. If the Tasks aren’t for you, or there isn’t a Task to do what you need, then you can easily create your own.
  10. Open Source Toolbox. Don’t like the way an out-of-the-box Task works? Simply download its source code from the vso-agent-tasks Github repo, and fix it! Of course you can share your awesome Tasks once you’ve created them so that the community benefits from your genius (or madness, depending on who you ask!)
  11. Cross Platform. The cross-platform agent will run on Mac or Linux. There’s obviously a windows agent too. That means you can build on whatever platform you need to.
  12. Git Policies. Want to make sure that a build passes before accepting merges into a branch? No problem – set up a VNext build, and then add a Policy to your branch that forces the build to run (and pass) before merges are accepted into the branch (via Pull Requests). Very cool.
  13. Auditing. Build definitions are now stored as JSON objects. Every change to the build (including changing properties) is kept in a history. Not only can you see who changed what when, but you can do side-by-side comparisons of the changes. You can even enter comments as you change the build to help browse history.
  14. Templating. Have a brilliant build definition that you want to use as a template for other builds? No problem – just save your build definition as a template. When you create a new build next time, you can start from the template.
  15. Deployment. You can now easily deploy your assets. This is fantastic for Continuous Delivery – not only can you launch a build when someone checks in (or commits) but you can now also include deployment (to your test rigs, obviously!). Most of the deployment love is for Azure – but since you can create your own Tasks, you can create any deployment-type Task you want.
  16. Auto-updating agents. Agents will auto-update themselves – no need to update every agent in your infrastructure.
  17. Build Pools and Queues. No more limitations on “1 TPC per Build Controller and 1 Controller per build machine”. Agents are xcopyable, and live in a folder. That means you can have as many agents (available to as many pools, queues and TPCs as you want) on any machine. The security and administration of the pools, queues and agents is also better in build vNext.
  18. Capabilities and Demands. Agents will report their “capabilities” to TFS (or VSO). When you create builds, the sum of the capabilities required for each Task is the list of “demands” that the build requires. When a build is queued, TFS/VSO will find an agent that has capabilities that match the demands. A ton of capabilities are auto-discovered, but you can also add your own. For example, I added “gulp = 0.1.3” to my build agent so that any build with a “Gulp” task would know it could run on my agent. This is a far better mechanism of matching agents to builds than the old “Agent Tags”.

Hopefully you can see that there are several benefits to switching. Just do it! It’s worth noting that there are also Hosted VNext agents, so you can run your VNext builds on the “Hosted” queue too. Be aware though that the image for the agent is “stock”, so it may not work for every build. For example, we’re using TypeScript 1.5 beta, and the hosted agent build only has TypeScript extensions 1.4, so our builds don’t work on the Hosted agents.

Environment Variables Name Change

When you use a PowerShell script Task, the script is invoked in a context that includes a number of predefined environment variables. Need access to the build number? No problem – just look up $env:BUILD_BUILDNUMBER. It's way easier to use the environment variables than to remember how to pass parameters to the scripts. Note – the prefix "TF_" has been dropped – so if you have PowerShell scripts that you were invoking as pre- or post-build or test scripts in older XAML builds, you'll have to update the names.

Just a quick tip: if you directly access $env:BUILD_BUILDNUMBER in your script, then you have to set the variable yourself before testing the script in a PowerShell console. I prefer to use the value as a default for a parameter – that way you can easily invoke the script outside of Team Build to test it. Here's an example:

Param(
  [string]$pathToSearch = $env:BUILD_SOURCESDIRECTORY,
  [string]$buildNumber = $env:BUILD_BUILDNUMBER,
  [string]$searchFilter = "VersionInfo.",
  [regex]$pattern = "\d+\.\d+\.\d+\.\d+"
)
 
if (($buildNumber -match $pattern) -ne $true) {
    . . .
}

See how I default the $pathToSearch and $buildNumber parameters using the $env: variables? Invoking the script myself when testing is then easy – I just supply values for the parameters explicitly.

Node Packages – Path too Long

I have become a fan of the node package manager – npm. Doing some web development recently, I have used it a lot. The one thing I have against it (admittedly this is peculiar to npm on Windows) is that the node_modules path can get very deep – way longer than the good ol' 260 character limit.

This means that any Task that does a wild-card search on folders is going to error when there's a node_modules folder in the workspace. So you have to explicitly specify the paths to files like the sln or test assemblies (for the VSBuild and VSTest Tasks respectively) – you can't use the "**/*.sln" path wildcard (**) because it will try to search in the node_modules folder, and will error out when the path gets too long. No big deal – I just specify the path using the repo browser dialog. I was also forced to check "Continue on Error" on the VSBuild Task – the build actually succeeds (after writing a "Path too long" error in the log), but because the Task outputs the "Path too long" error to stderr, the Task fails.

[screenshot: the VSBuild Task settings]

EDIT: If you are using npm and run into this problem, you can uncheck "Restore NuGet Packages" (the VSBuild Task internally does a wild-card search for packages.config, and this is what is throwing the path too long error as it searches the node_modules folder). You'll then need to add a "NuGet Installer" Task before the VSBuild Task and explicitly specify the path to your sln file.

[screenshot: the VSBuild Task with "Restore NuGet Packages" unchecked]

[screenshot: the NuGet Installer Task]

Migrating from XAML Builds

Migrating may be too generous a word – you have to re-create your builds. Fortunately, moving our build from XAML to VNext didn't take all that long, even with the relatively large customizations we had – but I was faced with Task failures due to the path limit, and so I had to change the defaults and explicitly specify paths wherever there was a "**/" wildcard. Also, the npm Task itself has a bug that will soon be fixed – for now I'm getting around that by invoking "npm install" as part of a "Command Line" Task (don't forget to set the working directory):

[screenshot: the Command Line Task invoking npm install]

No PreGet Tasks

At first I had npm install before my “UpdateVersion” script – however, the UpdateVersion script searches for files with a matching pattern using Get-ChildItem. Unfortunately, this errors out with “path too long” when it goes into the node_modules directory. No problem, I thought to myself – I’ll just run UpdateVersion before npm install. That worked – but the build still failed on the VSBuild Task. So I set “Continue on Error” on the VSBuild Task – and I got a passing build!

I then queued a new build – and the build failed. The build agent couldn’t even get the sources because – well, “Path too long”. Our XAML build actually had a “pre-Pull” script hook so that we could delete the node_modules folder (using RoboCopy which can handle too long paths). However, VNext builds cannot execute Tasks before getting sources. Fortunately Chris Patterson, the build PM, suggested that I run the delete at the end of the build.

Initially I thought this was a good idea – but then I thought, “What if the build genuinely fails – like failed tests? Then the ‘delete’ task won’t be run, and I won’t be able to build again until I manually delete the agent’s working folder”. However, when I looked at the Tasks, I saw that there is a “Run Always” checkbox on the Task! So I dropped a PowerShell Task at the end of my build that invokes the “CleanNodeDirs.ps1” script, and check “Always Run” so that even if something else in the build fails, the CleanNodeDirs script always runs. Sweet!

CleanNodeDirs.ps1

To clean the node_modules directory, I initially tried "rm -rf node_modules". But it fails – guess why? "Path too long". After searching around a bit, I came across a way to use RoboCopy to delete folders. Here's the script:

Param(
  [string]$srcDir = $env:BUILD_SOURCESDIRECTORY
)

try {
    if (Test-Path(".\empty")) {
        del .\empty -Recurse -Force
    }
    mkdir empty

    robocopy .\empty "$srcDir\src\Nwc.Web\node_modules" /MIR > robo.log
    del .\empty -Recurse -Force
    del robo.log -Force

    Write-Host "Successfully deleted node_modules folder"
    exit 0
} catch {
    Write-Error $_
    exit 1
}

Build.VNext Missing Tasks

There are a couple of critical Tasks that are still missing:

  1. No “Associate Work Items” Task
  2. No “Create Work Item on Build Failure” Task
  3. No “Label sources” Tasks

These will no doubt be coming soon. It’s worth working on converting your builds over anyway – when the Tasks ship, you can just drop them into your minty-fresh builds!

Conclusion

You really need to be switching over to Build VNext – even though it's still in preview, it's pretty powerful. The authoring experience is vastly improved, and the Task library is going to grow rapidly – especially since it's open source. I'm looking forward to what the community is going to come up with.

Happy building!

My First VSO Extension: Retry Build


Visual Studio Online (VSO) and TFS 2015 keep getting better and better. One of the coolest features to surface recently is the ability to add (supported) extensions to VSO. My good friend Tiago Pascoal managed to hack VSO to add extensions a while ago, but it was achieved via browser extensions, not through a supported VSO extensibility framework. Now Tiago can add his extensions in an official manner!

TL;DR – if you just want the code for the extension, then just go to this repo.

retry-build-screenshot.png

Retry Build

I was recently playing with Build VNext and got a little frustrated that there was no way to retry a build from the list of completed builds in Web Access. I had to click the build definition to queue it. I found this strange, since the build explorer in Visual Studio has an option to retry a build. I was half-way through writing a mail to the Visual Studio Product team suggesting that they add this option, when I had an epiphany: I can write that as an extension! So I did…

I started by browsing to the Visual Studio Extensions sample repo on Github. I had to join the Visual Studio Partner program, which took a while since I signed up using my email address but added my work Visual Studio account (instead of my personal account). Switching the account proved troublesome, but I was able to get it sorted with help from Will Smythe on the Product team. Make sure you’re the account owner and that you specify the correct VSO account when you sign up for the partner program!

Next I cloned the repo and took a look at the code – it looked fairly straightforward, especially since all I wanted to do with this extension was add a menu command – no new UI at all.

I followed the instructions for installing the “Contribution Point Guide” so that I could test that extensions worked on my account, as well as actually see the properties of the extension points. It’s a very useful extension to have when you’re writing extensions (does that sound recursive?).

TypeScript

I’m a huge TypeScript fan, so I wanted to write my extension in TypeScript. There is a TypeScript sample in the samples repo, so I got some hints from that. There is also a “Delete branch” sample that adds a menu command (really the only thing I wanted to do), so I started from that sample and wrote my extension.

Immediately I was glad I had decided to use TypeScript – the d.ts (definition files) for the extension frameworks and services is very cool – getting IntelliSense and being able to type the objects that were passed around made discovery of the landscape a lot quicker than if I was just using plain JavaScript.

The code turned out to be easy enough. However, when I ran the extension, I kept getting a ’define’ is not defined error. We’ll come back to that. Let’s first look at main.ts to see the extension:

import {BuildHttpClient} from "TFS/Build/RestClient";
import {getCollectionClient} from "VSS/Service";
var retryBuildMenu = (function () {
    "use strict";

    return <IContributedMenuSource> {
        execute: (actionContext: any) => {
            var vsoContext = VSS.getWebContext();
            var buildClient = getCollectionClient(BuildHttpClient);

            VSS.ready(() => {
                // get the build
                buildClient.getBuild(actionContext.id, vsoContext.project.name).then(build => {
                    // and queue it again
                    buildClient.queueBuild(build, build.definition.project.id).then(newBuild => {
                        // and navigate to the build summary page
                        // e.g. https://myproject.visualstudio.com/DefaultCollection/someproject/_BuildvNext#_a=summary&buildId=1347
                        var buildPageUrl = `${vsoContext.host.uri}/${vsoContext.project.name}/_BuildvNext#_a=summary&buildId=${newBuild.id}`;
                        window.parent.location.href = buildPageUrl;
                    });
                });
            });
        }
    };
}());

VSS.register("retryBuildMenu", retryBuildMenu);

Notes:

  1. Lines 1/2: imports of framework objects – these 2 lines were causing an error for me initially
  2. Line 3: the name of this function is only used on line 27 when we register it
  3. Line 6: we’re returning an IContributedMenuSource struct
  4. Line 7: the struct has an ‘execute’ method that is invoked when the user clicks on the menu item
  5. Line 9: we get a reference to what is essentially the build service
  6. Line 13: using the build id (a property I discovered on the actionContext object using the Contribution Point sample extension) we can get the completed build object
  7. Line 15: I simply pop the build back onto the queue – all the other information is already in the build object (like branch, configuration and so on) from the previous queuing of the build
  8. Line 18: I build an url that points to the summary page for the new build
  9. Line 19: redirect the browser to the new build url
  10. Line 13/15: note the use of the .then() syntax – these methods return promises (good async programming), so we use .then() to execute once the async operation has completed
  11. Line 27: registering the extension using the name (1st arg) which is the name we use in the extension.json file, and the function name we specified on line 3 (the 2nd arg)

It was, in fact, simpler than I thought it would be. I was expecting to have to create a new build object from the old build object – turns out that wasn’t necessary at all. So I had my code and was ready to run – except that I ran into a snag. When I ran my code, I kept getting the ’define’ is not defined error. To understand why, we need to quickly understand how the extensions are organized.

Anatomy of an Extension

A VSO extension consists of a couple of key files: the extension.json, the main.html and the main.ts or main.js file.

  • extension.json – the manifest file for the extension – used to register the extension
  • main.html – the main loading page for the extension – used to bootstrap the extension
  • main.js (or main.ts) – the main script entry point for the extension – used to provide the starting point for any extension logic

The “Build Inspector” sample has a main.ts, but this file doesn’t really do much – it only redirects to the main page of the extension’s custom UI, so there are no imports or requires. I was therefore at a bit of a loss as to why I was getting what looked like a require error when my extension was loaded. Here’s the html for the main.html page of the sample “Delete Branch” extension:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Delete Branch</title>
</head>
<body>
    <script src="sdk/scripts/VSS.SDK.js"></script>
    <script src="scripts/main.js"></script>
    <script>
        VSS.init({ setupModuleLoader: true });
    </script>
</body>
</html>

You’ll see that the main.js file is imported in line 9, and then we’ve told VSO to use the module loader – necessary for any “require” work. So I was still baffled – here we’re telling the framework that we’re going to be using “require” and I’m getting a require error! (Remember, since the sample doesn’t use any requires in the main.js, it doesn’t error). My main.html page looked exactly the same – then I looked at the items.html page of the sample “Build Inspector” extension and got an idea: I needed to require my main module, not just load it. Here’s what my main.html ended up looking like:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Retry Build</title>
</head>
<body>
    <script src="sdk/scripts/VSS.SDK.js"></script>
    <p>User will never see this</p>
    <script type="text/javascript">
        // Initialize the VSS sdk
        VSS.init({
            setupModuleLoader: true,
        });

        // Wait for the SDK to be initialized, then require the script
        VSS.ready(function () {
            require(["scripts/main"], function (main) { });
        });
    </script>
</body>
</html>

You can see how instead of just importing the main.js script (like the “Delete Branch” sample) I “require” the main script on line 18. Once I had that, no more errors and I was able to get the extension to work.

Once I had that worked out, I was able to quickly publish the extension to Azure, change the url in the extension.json file to point to my Azure site url, and I was done! The code is in this repo.

Conclusion

Writing extensions for VSO is fun, and having a good sample library to start from is great. The “Contribution Points” sample is clever – letting you test the extension loading as well as giving very detailed information about the various hooks and properties available for extensions. Finally, the TypeScript definitions make navigating the APIs available a snap. While my first extension is rather basic, I am really pleased with the extensibility framework that the Product team have devised.

Happy customizing!

PaaS and Time Compression in the Cloud

Recently I got to write a couple of articles which were posted on the Northwest Cadence Blog. I am not going to reproduce them here, so please read them on the NWCadence blog from the links below.

PaaS

The first, PaaS Architecture: Designing Apps for the Cloud covers some of the considerations you’ll need to make if you’re porting your applications to the Cloud. Moving your website to a VM with IIS is technically “moving to the cloud”, but this is IaaS (infrastructure as a service) rather than PaaS (platform as a service). If you’re going to unlock true scale from the cloud, you’re going to have to move towards PaaS.

Unfortunately, you can’t simply push any site into PaaS, since you’ll need to consider where and how your data is stored, how your authentication is going to work, and most importantly, how your application will handle scale out – being spread over numerous servers simultaneously. The article deals with some of these and other considerations you’ll need to make.

Time Compression

The second article, Compressing Time: A Competitive Advantage is written off the back of Joe Weinman’s excellent white paper Time for the Cloud. Weinman asserts that “moving to the cloud” is not a guarantee of success in and of itself – companies must strategically utilize the advantages cloud computing offers. While there are many cloud computing advantages, Weinman focuses on what he calls time compression – the cloud’s ability to speed time to market as well as time to scale. Again, I consider some of the implications you’ll need to be aware of when you’re moving applications to the cloud.

Happy cloud computing!

Enable SAFe Features in Existing Team Projects After Upgrading to TFS 2015

TFS 2015 has almost reached RTM! If you upgrade to CTP2, you’ll see a ton of new features, not least of which are significant backlog and board improvements, the identity control, Team Project rename, expanded features for Basic users, the new Build Engine, PRs and policies for Git repos and more. Because of the schema changes required for Team Project rename, this update can take a while. If you have large databases, you may want to run the “pre-upgrade” utility that will allow you to prep your server while it’s still online and decrease the time required to do the upgrade (which will need to be done offline).

SAFe Support

The three out of the box templates have been renamed to simply Scrum, Agile and CMMI. Along with the name change, there is now “built in” support for SAFe. This means if you create a new TFS 2015 team project, you’ll have 3 backlogs – Epic, Feature and “Requirement” (where Requirement will be Requirement, User Story or PBI for CMMI, Agile and Scrum respectively). In Team Settings, teams can opt in to any of the 3 backlogs. Also, Epics, Features and “Requirements” now have an additional “Value Area” field which can be Business or Architectural, allowing you to track Business vs Architectural work.

Where are my Epics?

After upgrading my TFS to 2015, I noticed that I didn’t have Epics. I remember when upgrading from 2012 to 2013, when you browsed to the Backlog a message popped up saying, “Some features are not available” and a wizard walked you through enabling the “backlog feature”, adding in missing work items and configuring the process template settings. I was expecting the same behavior when upgrading to TFS 2015 – but that didn’t happen. I pinged the TFS product team and they told me that, “Epics are not really a new ‘feature’ per se – just a new backlog level, so the ‘upgrade’ ability was not built in.” If you’re on VSO, your template did get upgraded, so you won’t have a problem – however, for on-premises Team Projects you have to apply the changes manually.

Doing it Manually

Here are the steps for adding SAFe support to your existing TFS 2013 Agile, Scrum or CMMI templates:

  1. Add the Epic work item type
  2. Add the “Value Area” field to Features and “Requirements”
  3. Add the “Value Area” field to the Feature and “Requirement” form
  4. Add the Epic category
  5. Add the Epic Product Backlog
  6. Set the Feature Product Backlog parent to Epic Backlog
  7. Set the work item color for Epics

It’s a whole lot of “witadmin” and XML editing – never fun. Fortunately for you, I’ve created a script that will do it for you.

Isn’t there a script for that?

Here’s the script – but you can download it from here.

<#
.SYNOPSIS

Author: Colin Dembovsky (http://colinsalmcorner.com)
Updates 2013 Templates to 2015 base templates, including addition of Epic Backlog and Area Value Field.


.DESCRIPTION

Adds SAFe support to the base templates. This involves adding the Epic work item (along with its backlog and color settings) as well as adding 'Value Area' field to Features and Requirements (or PBIs or User Stories).

This isn't fully tested, so there may be issues depending on what customizations of the base templates you have already made. The script attempts to add in values, so it should work with your existing customizations.

To execute this script, first download the Agile, Scrum or CMMI template from the Process Template Manager in Team Explorer. You need the Epic.xml file for this script.

.PARAMETER tpcUri

The URI to the Team Project Collection - defaults to 'http://localhost:8080/tfs/defaultcollection'

.PARAMETER project

The name of the Team Project to upgrade

.PARAMETER baseTemplate

The name of the base template. Must be Agile, Scrum or CMMI

.PARAMETER pathToEpic

The path to the WITD xml file for the Epic work item

.PARAMETER layoutGroupToAddValueAreaControlTo

The name of the control group to add the Value Area field to in the FORM - defaults to 'Classification' (Agile and CMMI) and 'Details' (Scrum). Leave this as $null unless you've customized your form layout.

.PARAMETER pathToWitAdmin

The path to witadmin.exe. Defaults to 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\witadmin.exe'

.EXAMPLE

Upgrade-Template -project FabrikamFiber -baseTemplate Agile -pathToEpic '.\Agile\WorkItem Tracking\TypeDefinitions\Epic.xml'

#>

param(
    [string]$tpcUri = "http://localhost:8080/tfs/defaultcollection",

    [Parameter(Mandatory=$true)]
    [string]$project,

    [Parameter(Mandatory=$true)]
    [ValidateSet("Agile", "Scrum", "CMMI")]
    [string]$baseTemplate,

    [Parameter(Mandatory=$true)]
    [string]$pathToEpic,

    [string]$layoutGroupToAddValueAreaControlTo = $null,

    [string]$pathToWitAdmin = 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\witadmin.exe'
)

if (-not (Test-Path $pathToEpic)) {
    Write-Error "Epic WITD not found at $pathToEpic"
    exit 1
}

if ((Get-Alias -Name witadmin -ErrorAction SilentlyContinue) -eq $null) {
    New-Alias witadmin -Value $pathToWitAdmin
}

$valueAreadFieldXml = '
<FIELD name="Value Area" refname="Microsoft.VSTS.Common.ValueArea" type="String">
    <REQUIRED />
    <ALLOWEDVALUES>
        <LISTITEM value="Architectural" />
        <LISTITEM value="Business" />
    </ALLOWEDVALUES>
    <DEFAULT from="value" value="Business" />
    <HELPTEXT>Business = delivers value to a user or another system; Architectural = work to support other stories or components</HELPTEXT>
</FIELD>'
$valueAreaFieldFormXml = '<Control FieldName="Microsoft.VSTS.Common.ValueArea" Type="FieldControl" Label="Value area" LabelPosition="Left" />'

$epicCategoryXml = '
<CATEGORY name="Epic Category" refname="Microsoft.EpicCategory">
  <DEFAULTWORKITEMTYPE name="Epic" />
</CATEGORY>'

$epicBacklogXml = '
    <PortfolioBacklog category="Microsoft.EpicCategory" pluralName="Epics" singularName="Epic" workItemCountLimit="1000">
      <States>
        <State value="New" type="Proposed" />
        <State value="Active" type="InProgress" />
        <State value="Resolved" type="InProgress" />
        <State value="Closed" type="Complete" />
      </States>
      <Columns>
        <Column refname="System.WorkItemType" width="100" />
        <Column refname="System.Title" width="400" />
        <Column refname="System.State" width="100" />
        <Column refname="Microsoft.VSTS.Scheduling.Effort" width="50" />
        <Column refname="Microsoft.VSTS.Common.BusinessValue" width="50" />
        <Column refname="Microsoft.VSTS.Common.ValueArea" width="100" />
        <Column refname="System.Tags" width="200" />
      </Columns>
      <AddPanel>
        <Fields>
          <Field refname="System.Title" />
        </Fields>
      </AddPanel>
    </PortfolioBacklog>'
$epicColorXml = '<WorkItemColor primary="FFFF7B00" secondary="FFFFD7B5" name="Epic" />'

#####################################################################
function Add-Fragment(
    [System.Xml.XmlNode]$node,
    [string]$xml
) {
    $newNode = $node.OwnerDocument.ImportNode(([xml]$xml).DocumentElement, $true)
    [void]$node.AppendChild($newNode)
}

function Add-ValueAreaField(
    [string]$filePath,
    [string]$controlGroup
) {
    $xml = [xml](gc $filePath)

    # check if the field already exists
    if (($valueAreaField = $xml.WITD.WORKITEMTYPE.FIELDS.ChildNodes | ? { $_.refname -eq "Microsoft.VSTS.Common.ValueArea" }) -ne $null) {
        Write-Host "Work item already has Value Area field" -ForegroundColor Yellow
    } else {
        # add field to FIELDS
        Add-Fragment -node $xml.WITD.WORKITEMTYPE.FIELDS -xml $valueAreadFieldXml

        # add field to FORM
        # find the control group (e.g. "Classification") to add the field to
        $classificationGroup = (Select-Xml -Xml $xml -XPath "//Layout//Group[@Label='$controlGroup']").Node
        Add-Fragment -node $classificationGroup.Column -xml $valueAreaFieldFormXml

        # save and upload the updated definition
        $xml.Save((gi $filePath).FullName)
        witadmin importwitd /collection:$tpcUri /p:$project /f:$filePath
    }
}
#####################################################################

$defaultControlGroup = "Classification"
switch ($baseTemplate) {
    "Agile" { $wit = "User Story" }
    "Scrum" { $wit = "Product Backlog Item"; $defaultControlGroup = "Details" }
    "CMMI"  { $wit = "Requirement" }
}
# a [string] parameter defaults to "", so check for empty rather than $null
if (-not [string]::IsNullOrEmpty($layoutGroupToAddValueAreaControlTo)) {
    $defaultControlGroup = $layoutGroupToAddValueAreaControlTo
}

Write-Host "Exporting requirement work item type $wit" -ForegroundColor Cyan
witadmin exportwitd /collection:$tpcUri /p:$project /n:$wit /f:"RequirementItem.xml"

Write-Host "Adding 'Value Area' field to $wit" -ForegroundColor Cyan
Add-ValueAreaField -filePath ".\RequirementItem.xml" -controlGroup $defaultControlGroup

Write-Host "Exporting work item type Feature" -ForegroundColor Cyan
witadmin exportwitd /collection:$tpcUri /p:$project /n:Feature /f:"Feature.xml"

Write-Host "Adding 'Value Area' field to Feature" -ForegroundColor Cyan
Add-ValueAreaField -filePath ".\Feature.xml" -controlGroup $defaultControlGroup

if (((witadmin listwitd /p:$project /collection:$tpcUri) | ? { $_ -eq "Epic" }).Count -eq 1) {
    Write-Host "Process Template already contains an Epic work item type" -ForegroundColor Yellow
} else {
    Write-Host "Adding Epic" -ForegroundColor Cyan
    witadmin importwitd /collection:$tpcUri /p:$project /f:$pathToEpic
}

witadmin exportcategories /collection:$tpcUri /p:$project /f:"categories.xml"
$catXml = [xml](gc "categories.xml")
if (($catXml.CATEGORIES.ChildNodes | ? { $_.name -eq "Epic Category" }) -ne $null) {
    Write-Host "Epic category already exists" -ForegroundColor Yellow
} else {
    Write-Host "Updating categories" -ForegroundColor Cyan
    Add-Fragment -node $catXml.CATEGORIES -xml $epicCategoryXml
    $catXml.Save((gi ".\categories.xml").FullName)
    witadmin importcategories /collection:$tpcUri /p:$project /f:"categories.xml"

    Write-Host "Updating ProcessConfig" -ForegroundColor Cyan
    witadmin exportprocessconfig /collection:$tpcUri /p:$project /f:"processConfig.xml"
    $procXml = [xml](gc "processConfig.xml")

    Add-Fragment -node $procXml.ProjectProcessConfiguration.PortfolioBacklogs -xml $epicBacklogXml
    Add-Fragment -node $procXml.ProjectProcessConfiguration.WorkItemColors -xml $epicColorXml

    $featureCat = $procXml.ProjectProcessConfiguration.PortfolioBacklogs.PortfolioBacklog | ? { $_.category -eq "Microsoft.FeatureCategory" }
    $parentAttrib = $featureCat.OwnerDocument.CreateAttribute("parent")
    $parentAttrib.Value = "Microsoft.EpicCategory"
    $featureCat.Attributes.Append($parentAttrib)

    $procXml.Save((gi ".\processConfig.xml").FullName)
    witadmin importprocessconfig /collection:$tpcUri /p:$project /f:"processConfig.xml"
}

Write-Host "Done!" -ForegroundColor Green

Running the Script

To run the script, just make sure you’re a Team Project administrator and log in to a machine that has witadmin.exe on it. Then open Team Explorer, connect to your server, and click Settings. Then click “Process Template Manager” and download the new template (Agile, Scrum or CMMI) to a folder somewhere. You really only need the Epic work item WITD. Make a note of where the Epic.xml file ends up.

Then you’re ready to run the script. You’ll need to supply:

  • (Optional) The TPC Uri (default is http://localhost:8080/tfs/defaultcollection)
  • The Team Project name
  • The path to the Epic.xml file
  • The name of the base template – either Agile, Scrum or CMMI
  • (Optional) The path to witadmin.exe (defaults to C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\witadmin.exe)
  • (Optional) The name of the group you want to add the “Value Area” field to on the form – default is “Classification”

You can run Get-Help .\Upgrade-TemplateTo2015.ps1 to get help and examples.
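For example (the collection URL, project name and Epic.xml path here are illustrative – adjust them to your environment):

.\Upgrade-TemplateTo2015.ps1 -tpcUri "http://localhost:8080/tfs/defaultcollection" `
    -project "FabrikamFiber" `
    -baseTemplate Agile `
    -pathToEpic ".\Agile\WorkItem Tracking\TypeDefinitions\Epic.xml"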

Bear in mind that this script is a “best-effort” – make sure you test it in a test environment before going gung-ho on your production server!

Results

After running the script, you’ll be able to create Epic work items:

image

You’ll be able to opt in/out of the Epic backlog in the Team Settings page:

image

You’ll see “Value Area” on your Features and “Requirements”:

image

Happy upgrading!

Release Management 2015 with Build vNext: Component to Artifact Name Matching and Other Fun Gotchas

I’ve been setting up a couple of VMs in Azure with a TFS demo. Part of the demo is release management, and I finally got to upgrade Release Management to the 2015 release. I wanted to test integrating with the new build vNext engine. I faced some “fun” gotchas along the way. Here are my findings.

Fail: 0 artifacts(s) found

After upgrading Release Management server, I gleefully associated a component with my build vNext build. I was happy when the build vNext builds appeared in the drop-down. I simply selected “\” as the location of the component – I have several folders that I want to use via scripts, so I usually just specify the root of the drop folder.

I then queued the release – and the deployment failed almost instantly. “0 artifact(s) found corresponding to the name ‘FabFiber’ for BuildId: 91”. After a bit of head-scratching and head-to-table-banging, I wondered if the error was hinting at the fact that RM is actually looking for a published artifact named “FabFiber” in my build. Turns out that was correct.

Component Names and Artifact Names

To make a long story short: you have to match the component name in Release Management with the artifact name in your build vNext “Publish Artifact” task. This may seem like a good idea, but for me it’s a pain, since I usually split my artifacts into Scripts, Sites, DBs etc. and publish each as a separate artifact so that I get a neat folder layout for my artifacts. Since I use PowerShell scripts to deploy, I used to specify the root folder “\” for the component location and then used Scripts\someScript.ps1 as the path to the script. So I had to go back to my build and add a PowerShell script to first put all the folders into a “root” folder for me and then use a single “Publish Artifacts” task to publish the neatly laid out folder structure. I looked at this post from my friend Ricci Gian Maria to get some inspiration!

Here’s the script that I created and checked into source control:

param(
    [string]$srcDir,
    [string]$targetDir,
    [string]$fileFilter = "*.*"
)

if (-not (Test-Path $targetDir)) {
    Write-Host "Creating $targetDir"
    mkdir $targetDir
}

Write-Host "Executing xcopy /y '$srcDir\$fileFilter' $targetDir"
xcopy /y "$srcDir\$fileFilter" $targetDir

Write-Host "Done!"

Now I have a couple of PowerShell tasks that copy the binaries (and other files) into the staging directory – which I am using as the root folder for my artifacts. I configure the msbuild arguments to publish the website webdeploy package to $(build.stagingDirectory)\FabFiber, so I don’t need to copy it, since it’s already in the staging folder. For the DB components and scripts:

  • I configure the copy scripts to copy my DB components (dacpacs and publish.xmls) so I need 2 scripts which have the following args respectively:
    • -srcDir MAIN\FabFiber\FabrikamFiber.CallCenter\FabrikamFiber.Schema\bin\$(BuildConfiguration) -targetDir $(build.stagingDirectory)\db -fileFilter *.dacpac
    • -srcDir MAIN\FabFiber\FabrikamFiber.CallCenter\FabrikamFiber.Schema\bin\$(BuildConfiguration) -targetDir $(build.stagingDirectory)\DB -fileFilter *.publish.xml
  • I copy the scripts folder directly from the workspace into the staging folder using these arguments:
    • -srcDir MAIN\FabFiber\DscScripts -targetDir $(build.stagingDirectory)\Scripts
  • Finally, I publish the artifacts like so:

image

Now my build artifact (yes, a single artifact) looks as follows:

image

Back in Release Management, I made sure I had a component named “FabFiber” (to match the name of the artifact from the Publish Artifact task). I then also supplied “\FabFiber” as the root folder for my components:

image

That at least cleared up the “cannot find artifact” error.

A bonus of this is that you can now use server drops for releases instead of having to use shared folder drops. Just remember that if you choose to do this, you have to set up a ReleaseManagementShare folder. See this post for more details (see point 7). I couldn’t get this to work for some reason so I reverted to a shared folder drop on the build.

Renaming Components Gotcha

During my experimentation I renamed the component in Release Management that I was using in the release. This caused some strange behavior when trying to create releases: the build version picker was missing:

image

I had to open the release template and set the component from the drop-down everywhere that it was referenced!

image

Once that was done, I got the build version picker back:

image

Deployments started working again – my name is Boris, and I am invincible!

image

The Parameter is Incorrect

A further error I encountered had to do with the upgrade from RM 2013. At least, I think that was the cause. The deployment would copy the files to the target server, but when the PowerShell task was invoked, I got a failure stating (helpfully – not!), “The parameter is incorrect.”

image

At first I thought it was an error in my script – turns out that all you have to do to resolve this one is re-enter the password in the credentials for the PowerShell task in the release template. All of them. Again. Sigh… Hopefully this is just me and doesn’t happen to you when you upgrade your RM server.

Conclusion

I have to admit that I have a love-hate relationship with Release Management. It’s fast becoming more of a HATE-love relationship though. The single compelling feature it brings to the table is the approval workflow – beyond that, the client is slow, the workflows are clunky and debugging is a pain.

I really can’t wait for the release of Web-based Release Management that will use the same engine as the build vNext engine, which should mean a vastly simpler authoring experience! Also the reporting and charting features we should see around releases are going to be great.

For now, the best advice I can give you regarding Release Management is to make sure you invest in agent-less deployments using PowerShell scripts. That way your upgrade path to the new Web-based Release Management will be much smoother and you’ll be able to reuse your investments (i.e. your scripts).

Perhaps your upgrade experiences will be happier than mine – I can only hope, dear reader, I can only hope.

Happy releasing!


Azure Web Apps, Kudu Console and TcpPing

I was working with a customer recently that put a website into Azure Web Apps. This site needed to connect to their backend databases (which they couldn’t move to Azure because legacy systems still needed to connect to it). We created an Azure VNet and configured site-to-site connectivity that created a secure connection between the Azure VNet and their on-premises network.

We then had to configure point-to-site connections for the VNet so that we could put the Azure Web App onto the VNet. This would (in theory) allow the website to access their on-premises resources such as the database. We also had to upgrade the site to Standard pricing in order to do this.

We had to reconfigure the site-to-site gateway to allow dynamic routing in order to do this, which meant deleting and recreating the gateway. A bit of a pain, but not too bad. We then configured static routing from the on-premises network to the point-to-site addresses on the VNet.

Ping from Azure Web App?

Once we had that all configured, we wanted to test connectivity. If we had deployed a VM, it would have been simple – just open a cmd prompt and ping away. However, we didn’t have a server, since we were deploying an Azure Web App. So initially we deployed a dummy Azure Web App onto the VNet to test the connection. This became a little bit of a pain. However, I remembered reading about Kudu and decided to see if that would be easier.

Kudu to the Rescue

If you browse to http://<yoursite>.scm.azurewebsites.net (where <yoursite> is the name of your Azure Web App) then you’ll see the Kudu site.

image

Once you’ve opened the Kudu site, you can do all sorts of interesting things (see this blog post and this Scott Hanselman and David Ebbo video). If you open the Debug console (you can go CMD or PowerShell) then you get to play! I opened the CMD console and typed “help” – to my surprise I got a list of commands I could run:

image

Unfortunately I didn’t see anything that would help me with testing connectivity. However, I remembered that I had read somewhere about the command “tcpping”. So I tried it:

tcpping <enter>

image

Looks promising! Even better than the “ping” command, you can also test for a specific port, not just the IP address. So if I want to test whether my site can reach my database server on port 1443, no problem:

tcpping 192.168.0.1:1443 <enter>

image

Hmm, seems that address isn’t working.

After troubleshooting for a while, we managed to sort the problem and tcpping gave us a nice “Success” message, so we knew we were good to go. Kudu saved us a lot of time!

Happy troubleshooting!

Build vNext and SonarQube Runner: Dynamic Version Script

SonarQube is a fantastic tool for tracking technical debt, and it’s starting to make some inroads into the .NET world as SonarSource collaborates with Microsoft. I’ve played around with it a little to start getting my hands dirty.

Install Guidance

If you’ve never installed SonarQube before, then I highly recommend this eGuide. Just one caveat that wasn’t too clear: you need to create the database manually before running SonarQube for the first time. Just create an empty database (with the required collation) and go from there.
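If you’re scripting the setup, creating that empty database can be as simple as something like the following (a sketch only – it assumes a local SQL Server instance, the SQLPS/SqlServer module for Invoke-Sqlcmd, and a database name of your choosing; SonarQube needs a case-sensitive, accent-sensitive collation):

# create an empty, case-sensitive / accent-sensitive database for SonarQube
Invoke-Sqlcmd -ServerInstance "localhost" -Query "CREATE DATABASE SonarQube COLLATE SQL_Latin1_General_CP1_CS_AS"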

Integrating into TeamBuild vNext – with Dynamic Versioning

Once you’ve got the server installed and configured, you’re ready to integrate with TeamBuild. It’s easy enough using a Build vNext Command Line task. However, one thing bugged me as I was setting this up – hard-coding the version number. I like to version my assemblies from the build number using a PowerShell script. Here’s the 2015 version of the script (since the environment variable names have changed):

Param(
  [string]$pathToSearch = $env:BUILD_SOURCESDIRECTORY,
  [string]$buildNumber = $env:BUILD_BUILDNUMBER,
  [string]$searchFilter = "AssemblyInfo.*",
  [regex]$pattern = "\d+\.\d+\.\d+\.\d+"
)
 
if ($buildNumber -match $pattern -ne $true) {
    Write-Error "Could not extract a version from [$buildNumber] using pattern [$pattern]"
    exit 1
} else {
    try {
        $extractedBuildNumber = $Matches[0]
        Write-Host "Using version $extractedBuildNumber in folder $pathToSearch"
 
        $files = gci -Path $pathToSearch -Filter $searchFilter -Recurse

        if ($files){
            $files | % {
                $fileToChange = $_.FullName  
                Write-Host "  -> Changing $($fileToChange)"
                
                # remove the read-only bit on the file
                sp $fileToChange IsReadOnly $false
 
                # run the regex replace
                (gc $fileToChange) | % { $_ -replace $pattern, $extractedBuildNumber } | sc $fileToChange
            }
        } else {
            Write-Warning "No files found"
        }
 
        Write-Host "Done!"
        exit 0
    } catch {
        Write-Error $_
        exit 1
    }
}

So now that my dlls get versions matching my build number, why not SonarQube too? I used the same idea and wrapped the “begin” call in a PowerShell script that can read the build number too:

Param(
  [string]$buildNumber = $env:BUILD_BUILDNUMBER,
  [regex]$pattern = "\d+\.\d+\.\d+\.\d+",
  [string]$key,
  [string]$name
)
 
$version = "1.0"
if ($buildNumber -match $pattern -ne $true) {
    Write-Verbose "Could not extract a version from [$buildNumber] using pattern [$pattern]" -Verbose
} else {
    $version = $Matches[0]
}

Write-Verbose "Using args: begin /v:$version /k:$key /n:$name" -Verbose
$cmd = "MSBuild.SonarQube.Runner.exe"

& $cmd begin /v:$version /k:$key /n:$name

I drop this into the same folder as the MsBuild.SonarQube.Runner.exe so that I don’t have to fiddle with more paths. Here’s the task in my build:

image
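For reference, outside of a build you could invoke the wrapper roughly like this (the script file name and the key/name values below are made up – the build number falls back to the BUILD_BUILDNUMBER environment variable):

# hypothetical file name for the wrapper script shown above
.\SonarQubeBegin.ps1 -key "FabrikamFiber" -name "Fabrikam Fiber"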

The call to the SonarQube runner “end” doesn’t need any arguments, so I’ve left that as a plain command line call:

image

Now when the build runs, the version number passed to SonarQube matches the version number of my assemblies which I can tie back to my builds. Sweet!

image

One more change you could make is to specify the key and name arguments as variables. That way you can manage them as build variables instead of managing them in the call to the script on the task.

Finally, don’t forget to install SonarLint, the Roslyn-based SonarQube extension for Visual Studio. This will give you the same analysis that SonarQube uses, right inside VS.

Happy SonarQubing!

Developing a Custom Build vNext Task: Part 1

I love the new build engine in VSO / TFS 2015. You can get pretty far with the out of the box tasks, but there are cases where a custom task improves the user experience. The “Microsoft” version of this is SonarQube integration – you can run the SonarQube MSBuild Runner by using a “Command Line” task and calling the exe. However, there are two tasks on the Microsoft Task Github repo that clean up the experience a little – SonarQube PreBuild and SonarQube PostTest. A big benefit of the tasks is that they actually “wrap” the exe within the task, so you don’t need to install the runner on the build machine yourself.

One customization I almost always make in my customers’ build processes is to match binary versions to the build number. In TFS 2012, this required a custom Windows Workflow activity – a real pain to create and maintain. In 2013, you could enable it much more easily by invoking a PowerShell script. The same script can be invoked in Build vNext by using a PowerShell task.

The only downside to this is that the script has to be in source control somewhere. If you’re using TFVC, then this isn’t a problem, since all your builds (within a Team Project Collection) can use the same script. However, for Git repos it’s not so simple – you’re left with dropping the script into a known location on all build servers or committing the script to each Git repo you’re building. Neither option is particularly appealing. However, if we put the script “into” a custom build task for Build vNext, then we don’t have to keep the script anywhere else!

TL;DR

I want to discuss creating a task in some detail, so I’m splitting this into two posts. This post will look at scaffolding a task and then customizing the manifest and PowerShell implementation. In the next post I’m going to show the node implementation (along with some info on developing in TypeScript and VS Code) and how to upload the task.

If you just want the task, you can get the source at this repo.

Create a Custom Task

In order to create a new task, you need to supply a few things: a (JSON) manifest file, an icon and either a PowerShell or Node script (or both). You can, of course, create these by hand – but there’s an easier way to scaffold the task: tfx-cli. tfx-cli is a cross-platform command line utility that you can use to manage build tasks (including creating, deleting, uploading and listing). You’ll need to install both node and npm before you can install tfx-cli.
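tfx-cli itself is an npm package and installs globally (node and npm must already be installed):

# install the cross-platform build task utility
npm install -g tfx-cli

# running tfx with no arguments shows the help screen
tfx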

tfx login

Once tfx-cli is installed, you should be able to run “tfx” and see the help screen.

image

You could authenticate each time you want to perform a command, but it will soon get tedious. It’s far better to cache your credentials.

For VSO, it’s simple. Log in to VSO and get a Personal Access Token (pat). When you type “tfx login” you’ll be prompted for your VSO url and your pat. Easy as pie.

For TFS 2015, it’s a little more complicated. You first need to enable basic authentication on your TFS app tier’s IIS. Then you can log in using your Windows account (note: the tfx-cli team is working on NTLM authentication, so this is just a temporary hack).

Here are the steps to enable basic auth on IIS:

  • Open Server Manager and make sure that the Basic Auth feature is installed (under the Security node)

image

  • If you have to install it, then you must reboot the machine before continuing
  • Open IIS and find the “Team Foundation Server” site and expand the node. Then click on the “tfs” app in the tree and double-click the “Authentication” icon in the “Features” view to open the authentication settings for the app.

image

  • Enable “Basic Authentication” (note the warning!)

image

  • Restart IIS

DANGER WILL ROBINSON, DANGER! This is insecure since the passwords are sent in plaintext. You may want to enable https so that the channel is secure.

tfx build tasks create

Once login is successful, you can run “tfx build tasks create” – you’ll be prompted for some basic information, like the name, description and author of the task.

>> tfx build tasks create
Copyright Microsoft Corporation

Enter short name > VersionAssemblies
Enter friendly name > Version Assemblies
Enter description > Match version assemblies to build number
Enter author > Colin Dembovsky

That creates a folder (with the same name as the “short name”) that contains four files:

  • task.json – the json manifest file
  • VersionAssemblies.ps1 – the PowerShell implementation of the task
  • VersionAssemblies.js – the node implementation of the task
  • icon.png – the generic icon for the task

Customizing the Task Manifest

The first thing you’ll want to do after getting the skeleton task is edit the manifest file. Here you’ll set things like:

  • demands – a list of demands that must be present on the agent in order to run the task
  • visibility – should be “Build” or “Release” or both, if the task can be used in both builds and releases
  • version – the version number of your task
  • minimumAgentVersion – the minimum agent version this task requires
  • instanceNameFormat – this is the string that appears in the build tasks list once you add it to a build. It can be formatted to use any of the arguments that the task uses
  • inputs – input variables
  • groups – used to group input variables together
  • execution – used to specify the entry points for either Node or PowerShell (or both)
  • helpMarkDown – the markdown that is displayed below the task when added to a build definition

Inputs and Groups

The inputs all have the following properties:

  • name – reference name of the input. This is the name of the input that is passed to the implementation scripts, so choose wisely
  • type – type of input. Types include “pickList”, “filePath” (which makes the control into a source folder picker) and “string”
  • label – the input label that is displayed to the user
  • defaultValue – a default value (if any)
  • required – true or false depending on whether the input is mandatory or not
  • helpMarkDown – the markdown that is displayed when the user clicks the info icon next to the input
  • groupName – specify the name of the group (do not specify if you want the input to be outside a group)

The groups have the following format:

  • name – the group reference name
  • displayName – the name displayed on the UI
  • isExpanded – set to true for an open group, false for a closed group

Another note: the markdown needs to be on a single line (since JSON doesn’t allow multi-line values) – so if your help markdown is multi-line, you’ll have to replace line breaks with ‘\n’.
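A quick way to do that conversion (a sketch – it assumes your help markdown lives in a file such as help.md):

# read the markdown as a single string and escape the line breaks for JSON
$markdown = Get-Content .\help.md -Raw
$singleLine = $markdown -replace "`r`n", '\n' -replace "`n", '\n'
$singleLine | clip   # copy to the clipboard, then paste into helpMarkDown in task.json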

Of course, browsing the tasks on the Microsoft vso-agent-tasks repo lets you see what types are available, how to structure the files and so on.

VersionAssembly Manifest

For the version assembly task I require a couple of inputs:

  1. The path to the root folder where we start searching for files
  2. The file pattern to match – any file in the directory matching the pattern should have the build version replaced
  3. The regex to use to extract a version number from the build number (so if the build number is MyBuild_1.0.0.3, then we need regex to get 1.0.0.3)
  4. The regex to use for the replacement in the files – I want this under advanced, since most of the time this is the same as the regex specified previously

I also need the build number – but that’s an environment variable that I will get within the task scripts (as we’ll see later).

Here’s the manifest file:

{
  "id": "5b4d14d0-3868-11e4-a31d-3f0a2d8202f4",
  "name": "VersionAssemblies",
  "friendlyName": "Version Assemblies",
  "description": "Updates the version number of the assemblies to match the build number",
  "author": "Colin Dembovsky (colinsalmcorner.com)",
  "helpMarkDown": "## Settings\nThe task requires the following settings:\n\n1. **Source Path**: path to the sources that contain the version number files (such as AssemblyInfo.cs).\n2. **File Pattern**: file pattern to search for within the `Source Path`. Defaults to 'AssemblyInfo.*'\n3. **Build Regex Pattern**: Regex pattern to apply to the build number in order to extract a version number. Defaults to `\\d+\\.\\d+\\.\\d+\\.\\d+`.\n4. **(Optional) Regex Replace Pattern**: Use this if the regex to search for in the target files is different from the Build Regex Pattern.\n\n## Using the Task\nThe task should be inserted before any build tasks.\n\nAlso, you must customize the build number format (on the General tab of the build definition) in order to specify a format in such a way that the `Build Regex Pattern` can extract a build number from it. For example, if the build number is `1.0.0$(rev:.r)`, then you can use the regex `\\d+\\.\\d+\\.\\d\\.\\d+` to extract the version number.\n",
  "category": "Build",
  "visibility": [
    "Build"
  ],
  "demands": [],
  "version": {
    "Major": "0",
    "Minor": "1",
    "Patch": "1"
  },
  "minimumAgentVersion": "1.83.0",
  "instanceNameFormat": "Version Assemblies using $(filePattern)",
  "groups": [
    {
      "name": "advanced",
      "displayName": "Advanced",
      "isExpanded": false
    }
  ],
  "inputs": [
    {
      "name": "sourcePath",
      "type": "filePath",
      "label": "Source Path",
      "defaultValue": "",
      "required": true,
      "helpMarkDown": "Path in which to search for version files (like AssemblyInfo.* files)." 
    },
    {
      "name": "filePattern",
      "type": "string",
      "label": "File Pattern",
      "defaultValue": "AssemblyInfo.*",
      "required": true,
      "helpMarkDown": "File filter to replace version info. The version number pattern should exist somewhere in the file."
    },
    {
      "name": "buildRegex",
      "type": "string",
      "label": "Build Regex Pattern",
      "defaultValue": "\\d+\\.\\d+\\.\\d+\\.\\d+",
      "required": true,
      "helpMarkDown": "Regular Expression to extract version from build number. This is also the default replace regex (unless otherwise specified in Advanced settings)."
    },
    {
      "name": "replaceRegex",
      "type": "string",
      "label": "Regex Replace Pattern",
      "defaultValue": "",
      "required": false,
      "helpMarkDown": "Regular Expression to replace with in files. Leave blank to use the Build Regex Pattern.",
      "groupName": "advanced"
    }
  ],
  "execution": {
    "Node": {
      "target": "versionAssemblies.js",
      "argumentFormat": ""
    },  
    "PowerShell": {
      "target": "$(currentDirectory)\\VersionAssemblies.ps1",
      "argumentFormat": "",
      "workingDirectory": "$(currentDirectory)"
    }
  }
}

The PowerShell Script

Since I am more proficient in PowerShell than in Node, I decided to tackle the PowerShell script first. Also, I have a script that does this already! You can see the full script in my Github repo – but here’s the important bit – the parameters declaration:

[CmdletBinding(DefaultParameterSetName = 'None')]
param(
    [string][Parameter(Mandatory=$true)][ValidateNotNullOrEmpty()] $sourcePath,
    [string][Parameter(Mandatory=$true)][ValidateNotNullOrEmpty()] $filePattern,
    [string][Parameter(Mandatory=$true)][ValidateNotNullOrEmpty()] $buildRegex,
    [string]$replaceRegex,
    [string]$buildNumber = $env:BUILD_BUILDNUMBER
)

Notes:

  • Line 3-5: these are the mandatory inputs. The name of the argument is the same as the name property of the inputs from the manifest file
  • Line 6: the optional input (again with the name matching the input name in the manifest)
  • Line 7: the build number is passed into the execution context as a predefined variable that is set in the environment, which I read here

While any of the predefined variables can be read anywhere in the script, I like to put the value as the default for a parameter. This makes debugging the script (executing it outside of the build environment) so much easier, since I can invoke the script and pass in the value I want to test with (as opposed to first setting an environment variable before I call the script).
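For example, to test the script locally (outside of a build) you might run something like this – the path and build number are illustrative:

.\VersionAssemblies.ps1 -sourcePath "C:\src\MyApp" `
    -filePattern "AssemblyInfo.*" `
    -buildRegex "\d+\.\d+\.\d+\.\d+" `
    -buildNumber "MyBuild_1.0.0.5"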

Once I had the inputs (and the build number) I just pasted the existing script. I’ve included lots of “Write-Verbose -Verbose” calls so that if you set “system.debug” to “true” in your build variables, the task spits out some diagnostics. Write-Host calls end up in the console when the build is running.

Wrap up

In this post I covered how to use tfx-cli to scaffold a task, then customize the manifest and implement a PowerShell script.

In the next post I’ll show you how to write the node implementation of the task, using TypeScript and VS Code. I’ll also show you how to upload the task and use it in a build.

Happy customizing!

Developing a Custom Build vNext Task: Part 2

In part 1 I showed you how to scaffold a task using tfx-cli, how to customize the manifest and how to implement the PowerShell script for my VersionAssemblies task. In this post I’ll show you how I went about developing the Node version of the task and how I uploaded the completed task to my TFS server.

VS Code

I chose to use VS Code as the editor for my tasks. Partly because I wanted to become more comfortable using VS Code, and partly because the task doesn’t have a project file – it’s just some files in a folder – perfect for VS Code. If you’re unfamiliar with VS Code, then I highly recommend this great intro video by Chris Dias.

Restructure and Initialize a Git Repo

It was time to get serious. I wanted to publish the task to a Git repo, so I decided to reorganize a little. I wanted the root of my repo to have a README.md file and then have a folder per task. Each task should also have a markdown file. So I created a folder called cols-agent-tasks and moved the VersionAssemblies task into a subfolder called Tasks. Then I initialized the Git repo.
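From a PowerShell prompt, initializing the repo amounts to something like this (the add/commit are optional at this point since VS Code’s Git pane handles commits, and the message is just an example):

cd cols-agent-tasks
git init
git add .
git commit -m "Scaffold VersionAssemblies task"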

Next I right clicked on my cols-agent-tasks folder and selected “Open with Code” to open the folder. Here’s how it looked:

image

See the Git icon on the left? Clicking on it allows you to enter a commit message and commit. You can also diff files and undo changes. Sweet.

Installing Node Packages

I knew that the VSO agent has a client library (vso-task-lib) from looking at the out of the box tasks in the vso-agent-tasks repo. I wanted to utilize that in my node task. The task lib is a node package, and so I needed to pull the package down from npm. So I opened up a PowerShell prompt and did an “npm init” to initialize a package.json file (required for npm) and walked through the wizard:

image

Since I chose MIT for the license, I added a license file too.

Now that the package.json file was initialized, I could run “npm install vso-task-lib --save-dev” to install the vso-task-lib and save it as a dev dependency (since it’s not meant to be bundled with the completed task):

image

The command also installed the q and shelljs libraries (which are dependencies for the vso-task-lib). I also noticed that a node_modules folder had popped up (where the packages are installed) and double checked that my .gitignore was in fact ignoring this folder (since I don’t want these packages committed into my repo).

TypeScript Definitions Using tsd

I was almost ready to start diving into the code – but I wanted to make sure that VS Code gave me intellisense for the task library (and other libs). So I decided to play with tsd, a node TypeScript definition manager utility. I installed it by running “npm install tsd -g” (the -g is for global, since it’s a system-wide util). Once it’s installed, you can run “tsd” to see what you can do with the tool.

I first ran “tsd query vso-task-lib” to see if there is a TypeScript definition for vso-task-lib. The font color was a bit hard to see, but I saw “zero results”. Bummer – the definition isn’t on DefinitelyTyped yet. So what about q and shelljs? Both of them returned results, so I installed them (using the --save option to save the definitions I’m using to a json file):

image
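The commands were along these lines:

# query DefinitelyTyped for available definitions
tsd query q
tsd query shelljs

# install them and record them in tsd.json
tsd install q shelljs --save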

On disk I could see a couple new files:

  • The “typings” folder with the definitions as well as a “global” definition file, tsd.d.ts, including q and shelljs (and node, which is installed when you install the type definition for shelljs)
  • A tsd.json file in the root (which aggregates all the definitions)

Opening the versionAssemblies.js file, I was able to see intellisense for q, shelljs and node:

image

So what about the vso-task-lib? Since there was no definition on DefinitelyTyped, I had to import it manually. I copied the vso-task-lib.d.ts from the Microsoft repo into a folder called vso-task-lib in the typings folder and then updated the tsd.d.ts file, adding another reference to this definition file. I also ended up declaring the vso-task-lib module (since it doesn’t actually declare an export) so that the imports worked correctly. Here’s the tsd.d.ts file:

/// <reference path="q/Q.d.ts" />
/// <reference path="node/node.d.ts" />
/// <reference path="vso-task-lib/vso-task-lib.d.ts" />
/// <reference path="shelljs/shelljs.d.ts" />

declare module "vso-task-lib" {
    export = VsoTaskLib;
}

Converting versionAssemblies.js to TypeScript

Things were looking good! Now I wanted to convert the versionAssemblies.js file to TypeScript. I simply changed the extension from js to ts, and I was ready to start coding in TypeScript. But of course TypeScript files need to be transpiled to Javascript, so I hit “Ctrl-Shift-B” (muscle memory). To my surprise, VS Code informed me that there was “No task runner configured”.

image

So I clicked on “Configure Task Runner” and VS Code created a .settings folder with a tasks.json file. There were some boilerplate examples of how to configure the task runner. After playing around a bit, I settled on the second example – which supposedly runs the TypeScript compiler using a tsconfig.json in the root folder:

{
    "version": "0.1.0",

    // The command is tsc. Assumes that tsc has been installed using npm install -g typescript
    "command": "tsc",

    // The command is a shell script
    "isShellCommand": true,

    // Show the output window only if unrecognized errors occur.
    "showOutput": "silent",

    // Tell the tsc compiler to use the tsconfig.json from the open folder.
    "args": ["-p", "."],

    // use the standard tsc problem matcher to find compile problems
    // in the output.
    "problemMatcher": "$tsc"
}

Now I had to create a tsconfig.json file in the root, which I did. VS Code knows what to do with json files, so I just opened a { and was pleasantly surprised with schema intellisense for this file!

image

I configured the “compilerOptions” and set the module to “commonjs”. Now pressing “Ctrl-Shift-B” invokes the task runner which transpiles my TypeScript file to Javascript – I saw the .js file appear on disk. Excellent! Setting sourceMaps to true will provide source mapping so that I can later debug.

One caveat – the build doesn't automatically happen when you save a TypeScript file. You can configure gulp and then enable a watch so that when you change a TypeScript file the build kicks in – but I decided that was too complicated for this project. I just configured the keyboard shortcut “Ctrl-s” to invoke “workbench.action.tasks.build” in addition to saving the file (you can configure keyboard shortcuts by clicking File->Preferences->Keyboard Shortcuts. Surprise surprise it’s a json file…)

Implementing the Build Task

Everything I’d done so far was just setup stuff. Now I was ready to actually code the task!

Here’s the complete script:

import * as tl from 'vso-task-lib';
import * as sh from 'shelljs';

tl.debug("Starting Version Assemblies step");

// get the task vars
var sourcePath = tl.getPathInput("sourcePath", true, true);
var filePattern = tl.getInput("filePattern", true);
var buildRegex = tl.getInput("buildRegex", true);
var replaceRegex = tl.getInput("replaceRegex", false);

// get the build number from the env vars
var buildNumber = tl.getVariable("Build.BuildNumber");

tl.debug(`sourcePath :${sourcePath}`);
tl.debug(`filePattern : ${filePattern}`);
tl.debug(`buildRegex : ${buildRegex}`);
tl.debug(`replaceRegex : ${replaceRegex}`);
tl.debug(`buildNumber : ${buildNumber}`);

if (replaceRegex === undefined || replaceRegex.length === 0){
    replaceRegex = buildRegex;
}
tl.debug(`Using ${replaceRegex} as the replacement regex`);

var buildRegexObj = new RegExp(buildRegex);
if (buildRegexObj.test(buildNumber)) {
    var versionNum = buildRegexObj.exec(buildNumber)[0];
    console.info(`Using version ${versionNum} in folder ${sourcePath}`);
    
    // get a list of all files under this root
    var allFiles = tl.find(sourcePath);

    // Now matching the pattern against all files
    var filesToReplace = tl.match(allFiles, filePattern, { matchBase: true });
    
    if (filesToReplace === undefined || filesToReplace.length === 0) {
        tl.warning("No files found");
    } else {
        for(var i = 0; i < filesToReplace.length; i++){
            var file = filesToReplace[i];
            console.info(`  -> Changing version in ${file}`);
            // replace all occurrences by adding g to the pattern
            sh.sed("-i", new RegExp(replaceRegex, "g"), versionNum, file);
        }
        console.info(`Replaced version in ${filesToReplace.length} files`);
    }
} else {
    tl.warning(`Could not extract a version from [${buildNumber}] using pattern [${buildRegex}]`);
}

tl.debug("Leaving Version Assemblies step");

Notes:

  • Lines 1-2: import the library references
  • Line 4: using the task library to log to the console
  • Lines 6-13: using the task library to get the inputs (matching names from the task.json file) as well as getting the build number from the environment
  • Lines 15-19: more debug logging
  • Lines 21-23: default the replace regex to the build regex if the value is empty
  • Line 26: compile the regex pattern into a regex object
  • Line 27: test the build number to see if we can extract a version number using the regex pattern
  • Lines 28-29: extract the version number from the build number and write the value to the console
  • Line 32: get a list of all files in the sourcePath (recursively) using the task library method
  • Line 35: filter the files to match the filePattern input, again using a task library method
  • Lines 37-38: check if there are files that match – warn if there aren’t any
  • Line 40: for each file that matches,
  • Lines 41-45: use shelljs’s sed() method to do the regex replacement inline
  • Line 45: I use the “g” option when compiling the regex to indicate that all matches should be replaced (as opposed to just the 1st match)
  • Line 47: log to the console how many files were updated
  • The remainder is just logging

Using the task library really made developing the task straightforward. The setup involved in getting intellisense to work was worth the effort!
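
For reference, the inputs the script reads (sourcePath, filePattern, buildRegex and replaceRegex) map to entries in the task.json manifest roughly like this – a trimmed sketch, not the actual manifest from the repo:

{
    "name": "VersionAssemblies",
    "friendlyName": "Version Assemblies",
    "inputs": [
        { "name": "sourcePath", "type": "filePath", "label": "Source Path", "required": true },
        { "name": "filePattern", "type": "string", "label": "File Pattern", "required": true },
        { "name": "buildRegex", "type": "string", "label": "Build Regex", "required": true },
        { "name": "replaceRegex", "type": "string", "label": "Replace Regex", "required": false }
    ],
    "execution": {
        "Node": {
            "target": "versionAssemblies.js"
        }
    }
}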

Debugging from VS Code

Now that I had the code written, I wanted to test it. VS Code to the rescue again! Click on the “debug” icon on the left, and then click the gear icon at the top of the debug pane:

image

That creates a new file called launch.json in the .settings folder. What – a json file? Who would have guessed! Here’s my final file:

{
    "version": "0.1.0",
    // List of configurations. Add new configurations or edit existing ones.
    // ONLY "node" and "mono" are supported, change "type" to switch.
    "configurations": [
        {
            // Name of configuration; appears in the launch configuration drop down menu.
            "name": "Launch versionAssemblies.js",
            // Type of configuration. Possible values: "node", "mono".
            "type": "node",
            // Workspace relative or absolute path to the program.
            "program": "Tasks/VersionAssemblies/versionAssemblies.ts",
            // Automatically stop program after launch.
            "stopOnEntry": false,
            // Command line arguments passed to the program.
            "args": [],
            // Workspace relative or absolute path to the working directory of the program being debugged. Default is the current workspace.
            "cwd": ".",
            // Workspace relative or absolute path to the runtime executable to be used. Default is the runtime executable on the PATH.
            "runtimeExecutable": null,
            // Optional arguments passed to the runtime executable.
            "runtimeArgs": ["--nolazy"],
            // Environment variables passed to the program.
            "env": { 
                "BUILD_BUILDNUMBER": "1.0.0.5",
                "INPUT_SOURCEPATH": "C:\\data\\ws\\col\\ColinsALMCornerCheckinPolicies",
                "INPUT_FILEPATTERN": "AssemblyInfo.*",
                "INPUT_BUILDREGEX": "\\d+\\.\\d+\\.\\d+\\.\\d+",
                "INPUT_REPLACEREGEX": ""
            },
            // Use JavaScript source maps (if they exist).
            "sourceMaps": true,
            // If JavaScript source maps are enabled, the generated code is expected in this directory.
            "outDir": "."
        },
        {
            "name": "Attach",
            "type": "node",
            // TCP/IP address. Default is "localhost".
            "address": "localhost",
            // Port to attach to.
            "port": 5858,
            "sourceMaps": false
        }
    ]
}

I changed the name and program settings, and added some environment variables to simulate the values that the build agent is going to pass into the task. Finally, I set “sourceMaps” to true and the output dir to “.” so that I could debug my TypeScript files. Now I just press F5:

image

The debugger is working – but my code isn’t! Looks like I’m missing a node module – minimatch. No problem – just run “npm install minimatch --save-dev” to add the module and run again. Another module not found – this time shelljs. Run “npm install shelljs --save-dev” and start again. Success! I can see watches in the left window, hover over variables to see their values, and start stepping through my code.

image

My code ended up being perfect. Just kidding – I had to sort out some errors, but at least debugging made it a snap.

Uploading the Task

In part 1 I introduced tfx-cli. I now returned to the command line in order to test uploading the task. I changed to the cols-agent-tasks\Tasks directory and ran

tfx build tasks upload .\VersionAssemblies

The upload succeeded, so now I could test the task in a build!

Testing a Windows Build

Testing the Windows build was fairly simple. I opened up an existing hosted build and replaced the PowerShell task that called my original version assemblies script with a brand new shiny “VersionAssemblies” task:

image

The run worked perfectly too – I was able to see the version change in the build output. Just a tip – setting “system.debug” to “true” in the build variables causes the task to log verbose output.

image

Testing a Linux build using Docker

Now I wanted to test the task in a Linux build. I’ve installed a couple of Ubuntu VMs before, so I was prepared to spin one up when I came across an excellent post by my friend and fellow ALM MVP Rene van Osnabrugge. Rene shows how you can quickly spin up a cross-platform build agent in a Docker container – and even provides the Dockerfile to do it in 1 line! The timing was perfect – I downloaded Docker Toolbox and installed a docker host (I couldn’t get the Hyper-V provider to work, so I had to resort to VirtualBox), then grabbed Rene’s Dockerfile and in no time at all I had a build agent on Ubuntu ready to test!

Here’s my x-plat agent in the default pool:

image

Note the Agent.OS “capability”. In order to target this agent, I’m going to add it as a demand for the build:

image

Here’s the successful run:

image

I committed, pushed to GitHub and now I can rest my weary brain!

Conclusion

Creating a custom task is, if not simple, at least easy. The agent architecture has been well thought out and overall custom task creation is a satisfying process, both for PowerShell and for Node. I look forward to seeing what custom tasks start creeping out of the woodwork. Hopefully task designers will follow Microsoft’s lead and make them open source.

Happy building!

Continuous Deployment with Docker and Build vNext


I really like the idea of Docker. If you’re unfamiliar with Docker, then I highly recommend Nigel Poulton’s Docker Deep Dive course on Pluralsight. Containers have been around for quite a while in the Linux world, but Microsoft is jumping on the bandwagon with Windows Server Containers too. This means that getting to grips with containers is a good idea – I think it’s the way of the future.

tl;dr

If you’re just after the task, then go to my Github repo. You can get some brief details in the section below titled “Challenge 2: Publishing to Docker (a.k.a The Publish Build Task)”. If you want the full story read on!

Deploying Apps to Containers

After hacking around a bit with containers, I decided to see if I could deploy some apps into a container manually. Turns out it’s not too hard. You need to have (at least) the Docker client, some code and a Dockerfile. Then you can just call a “docker build” (which creates an image) and then “docker run” to deploy an instance of the image.
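
In its simplest form, that manual flow is just two commands (the image name and ports here are illustrative):

# build an image from the Dockerfile in the current directory
docker build -t mywebapp .

# run a container from that image, mapping a host port to the port the app listens on
docker run -d -p 80:5004 mywebapp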

Once I had that working, I wanted to see if I could bundle everything up into a build in Build vNext. That was a little harder to do.

Environment and Tools

You can run a Docker host in Azure but I wanted to be able to work locally too. So here is how I set up my local environment (on my Surface Pro 3):

  1. I enabled Hyper-V (you can also use VirtualBox) so that I can run VMs
  2. I installed Docker Toolbox (note: Docker Toolbox bundles VirtualBox – so you can use that if you don’t have or want Hyper-V, but otherwise, don’t install it)
  3. I then created a Docker host using “docker-machine create”. I used the hyper-v driver. This creates a Tiny Core Linux Docker host running the boot2docker image.
  4. I set my environment variables to default to my docker host settings
  5. I could now execute “docker” commands – a good command to sanity check your environment is “docker info” (steps 3-5 are sketched just after this list)
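
For reference, here’s roughly what steps 3-5 look like from the console (the machine name “docker” and the hyperv driver match my setup – adjust for yours):

# create a boot2docker-based host named "docker" using the Hyper-V driver
docker-machine create --driver hyperv docker

# show the environment variables the docker client needs in order to talk to that host
docker-machine env docker

# sanity check - should print details about the docker host
docker info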

Aside: PowerShell to Set Docker Environment

I love using PowerShell. If you run “docker env” you get some settings that you could just “cat” to your profile (if you’re in Unix). However, the commands won’t work in PowerShell. So I created a small function that I put into my $PROFILE that I can run whenever I need to do any Docker stuff. Here it is:

function Set-DockerEnv {
    Write-Host "Getting docker environment settings" -ForegroundColor Yellow
    docker-machine env docker | ? { $_.Contains('export') } | % { $_.Replace('export ', '') } | `
        ConvertFrom-Csv -Delimiter "=" -Header "Key","Value" | % { 
            [Environment]::SetEnvironmentVariable($_.Key, $_.Value)
            Write-Host "$($_.Key) = $($_.Value)" -ForegroundColor Gray
        }
    Write-Host "Done!" -ForegroundColor Green
}

Now I can just run “Set-DockerEnv” whenever I need to set the environment.

VS Tools for Docker

So I have a Docker environment locally – great. Now I need to be able to deploy something into a container! Since I (mostly) use Visual Studio 2015, I installed the VS Tools for Docker extension. Make sure you follow the install instructions carefully – the preview toolset is a bit picky. I wanted to play with Windows containers, but for starters I was working with Unix containers, so I needed some code that could run on Unix. Fortunately, ASP.NET 5 can! So I did a File –> New Project and created an ASP.NET 5 Web application (this is a boilerplate MVC 6 application). Once I had the project created, I right-clicked the project and selected “Publish” to see the publish page. You’ll see the “Docker Containers” target:

image

You can select an Azure Docker VM if you have one – in my case I wanted to deploy locally, so I checked “Custom Docker Host” and clicked OK. I entered in the server url for my local docker host (tcp://10.0.0.19:2376) and left all the rest as default. Clicking “Validate Connection” however, failed. After some trial and error, I realized that the default directory for certificates for the “docker-machine” command I used is different to the default directory the VS Tools for Docker expects. So I just supplied additional settings for “Auth Options” and voila – I could now validate the connection:

image

Here are the settings for “Auth Options” in full:

--tls --tlscert=c:\users\colin\.docker\machine\certs\cert.pem --tlskey=c:\users\colin\.docker\machine\certs\key.pem

I specifically left the Dockerfile setting to (auto generate) to see what I would get. Here’s what VS generated:

FROM microsoft/aspnet:1.0.0-beta6

ADD . /app

WORKDIR /app

ENTRYPOINT ["./kestrel"]

Notes:

  1. The FROM tells Docker which image to base this container on – it’s defaulting to the official image for ASP.NET 5 from Docker hub (the public Docker image repository)
  2. ADD is copying all the files in the current directory (.) to a folder on the container called “/app”
  3. WORKDIR is changing directory into “/app”
  4. ENTRYPOINT tells Docker to run this command every time a container based on this image is fired up

Aside: Retargeting OS

Once you’ve generated the Dockerfile, you need to be careful if you want to deploy to a different OS (specifically Windows vs non-Windows). Rename the Dockerfile (in the root project directory) to “Docker.linux” or something and then clear the Dockerfile setting. VS will then auto generate a Dockerfile for deploying to Windows containers. Here’s the Windows flavored Dockerfile, just for contrast:

FROM windowsservercore

ADD . /app

WORKDIR /app

ENTRYPOINT ["cmd.exe", "/k", "web.cmd"]

VS is even smart enough to nest your Dockerfiles in your solution!

 image

So I could now publish successfully from VS. Next up: deploying from Team Build!

Docker Publish from Team Build vNext

(Just to keep things a little simpler, I’m going to use “build” interchangeably with “build vNext” or even “team build”. I’ve switched over completely from the old XAML builds – so should you.)

If you’ve looked at the build tasks on VSO, you’ll notice that there is a “Docker” task:

image

It’s a little unsatisfying, to be blunt. You can (supposedly) deploy a docker image – but there’s no way to get your code into the image (“docker build” the image) in the first place. Secondly, there doesn’t appear to be any security or advanced settings anywhere. Clicking “More Information” takes you to a placeholder markdown file – so no help there. Admittedly the team are still working on the “official” Docker tasks – but I didn’t want to wait!

Prep: PowerShell Docker Publishing from the console

Taking a step back and delving into the files and scripts that VS Tools for Docker generated for me, I decided to take a stab at deploying using the PowerShell script in the PublishProfiles folder of my solution. I created a publish profile called “LocalDocker”, and sure enough VS had generated 3 files: the pubxml file (settings), a PowerShell publish file and a shell script publish file.

image

To invoke the script, you need 3 things:

  1. The path to the files that are going to be published
  2. The path to the pubxml file (contains the settings for the publish profile)
  3. (Optional) A hashtable of overrides for your settings

I played around with the PowerShell script in my console – the first major flaw in the script is that it assumes you have already “packed” the project. So you first have to invoke msbuild with some obscure parameters, and only then can you invoke the Publish script. Also, the publish script does some hocus pocus, assuming that the script name and the pubxml file name are the same, and the way it works out the Dockerfile location is a bit voodoo too. It works nicely when you’re publishing from VS – but I found it not to be very “build friendly”.

I tried it in build vNext anyway. I managed to invoke the “LocalDocker-publish.ps1” file, but could not figure out how to pass a hashtable to a PowerShell task (to override settings)! Besides, even if it worked, there’d be a lot of typing and you have to know what the keys are for each setting. Enter the custom build task!

The way I saw it, I had to:

  1. Compile the solution in such a way that it can be deployed into a Docker container
  2. Create a custom task that could invoke the PowerShell publish script, either from a pubxml file or some settings (or a combination)

Challenge 1: Building ASP.NET 5 Apps in Build vNext

Building ASP.NET 5 applications in build vNext isn’t as simple as you would think. I managed to find part of my answer in this post. You have to ensure that the dnvm is installed and that you have the correct runtimes. Then you have to invoke “dnu restore” on all the projects (to install all the dependencies). That doesn’t get you all the way though – you need to also “dnu pack” the project.

Fortunately, you can invoke a simple script to get the dnvm and install the correct runtime. Then you have to fight with supplying some parameters to msbuild that tell it to pack the project for you. This took me dozens of iterations to get right – and even when I got it right, the “kestrel” command in my containers was broken. For a long time I thought that my docker publish task was broken – turns out I could fix it with a pesky msbuild argument. Be that as it may, my pain is your gain – hence this post!

You need to add this script to your source repo and invoke it in your build using a PowerShell task (note this is modified from the original in this post):

param (
    [string]$srcDir = $env:BUILD_SOURCESDIRECTORY
)

# bootstrap DNVM into this session.
&{$Branch='dev';iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.ps1'))}

# load up the global.json so we can find the DNX version
$globalJsonFile = (Get-ChildItem -Path $srcDir -Filter "global.json" -Recurse | Select -First 1).FullName
$globalJson = Get-Content -Path $globalJsonFile -Raw -ErrorAction Ignore | ConvertFrom-Json -ErrorAction Ignore

if($globalJson)
{
    $dnxVersion = $globalJson.sdk.version
}
else
{
    Write-Warning "Unable to locate global.json to determine the DNX version - using 'latest'"
    $dnxVersion = "latest"
}

# install DNX
# only installs the default (x86, clr) runtime of the framework.
# If you need additional architectures or runtimes you should add additional calls
& $env:USERPROFILE\.dnx\bin\dnvm install $dnxVersion -r coreclr
& $env:USERPROFILE\.dnx\bin\dnvm install $dnxVersion -Persistent

# run DNU restore on all project.json files in the src folder, including 2>&1 to redirect stderr to stdout for badly behaved tools
Get-ChildItem -Path $srcDir -Filter project.json -Recurse | ForEach-Object {
    Write-Host "Running dnu restore on $($_.FullName)"
    & dnu restore $_.FullName 2>&1
}

Notes:

  1. Lines 9,10: I had to change the way that the script searched for the “global.json” (where the runtime is specified). So I’m using BUILD_SOURCESDIRECTORY which the build engine passes in.
  2. Line 25: I added the coreclr (since I wanted to deploy to Linux)
  3. Lines 29-32: Again I changed the search path for the “dnu” commands

Just add a PowerShell task into your build (before the VS Build task) and browse your repo to the location of the script:

image

Now you’re ready to call msbuild. In the VS Build task, set the solution to the xproj file (the project file for the ASP.NET project you want to publish). Then add the following msbuild arguments:

/t:Build,FileSystemPublish /p:IgnoreDNXRuntime=true /p:PublishConfiguration=$(BuildConfiguration) /p:PublishOutputPathNoTrailingSlash=$(Build.StagingDirectory)

You’re telling msbuild:

  1. Run the build and FileSystemPublish targets
  2. Ignore the DNX runtime (this fixed my broken kestrel command)
  3. Use the $(BuildConfiguration) setting as the configuration (Debug, Release etc.)
  4. Publish (dnu pack) the site to the staging directory

That now gets you ready to publish to Docker – hooray!

Challenge 2: Publishing to Docker (a.k.a The Publish Build Task)

A note on the build agent machine: the agent must have access to the docker client, as well as your certificate files. You can supply the paths to the certificate files in the Auth options for the task, but you’ll need to make sure the docker client is installed on the build machine.

Now that we have some deployable code, we can finally turn our attention to publishing to Docker. I wanted to be able to specify a pubxml file, but then override any of the settings. I also wanted to be able to publish without a pubxml file. So I created a custom build task. The task has 3 internal components:

  1. The task.json – this specifies all the parameters (including the pubxml path, the path to the “packed” solution, and all the overrides)
  2. A modified version of the publish PowerShell script that VS Tools for Docker generated when I created a Docker publish profile
  3. A PowerShell script that takes the supplied parameters, creates a hashtable, and invokes the publish PowerShell script

The source code for the task is on Github. I’m not going to go through all the details here. To get the task, you’ll need to do the following:

  1. Clone the repo (git clone https://github.com/colindembovsky/cols-agent-tasks.git)
  2. Install node and npm
  3. Install tfx-cli (npm install -g tfx-cli)
  4. Login (tfx login)
  5. Upload (tfx build tasks upload pathToDockerPublish)

The pathToDockerPublish is the path to Tasks/DockerPublish in the repo.

Once you’ve done that, you can then add a Docker Publish task. Set the path to your pubxml (if you have one), the path to the Dockerfile you want to use, and set the Pack Output Path to $(Build.StagingDirectory) (or wherever the code you want to deploy to the container is). If you don’t have a pubxml file, leave “Path to Pubxml” empty – you’ll have to supply all the other details. If you have a pubxml but want to override some settings, just enter those settings accordingly. The script will take the pubxml setting unless you supply a value in an override.

image

In this example, I’m overriding the docker image name (using the build number as the tag) and specifying “Build only” as false. That means the image will be built (using “docker build”) and a container will be spun up. Set this value to “true” if you just want to build the image without deploying a container. Here are all the settings:

  • Docker Server Url – url of your docker host
  • Docker image name – name of the image to build
  • Build Only – true to just run “docker build” – false if you want to execute “docker run” after building
  • Host port – the port to open on the host
  • Container port – the port to open on the container
  • Run options – additional arguments passed to the “docker run” command
  • App Type – can be empty or “Web”. Only required for ASP.NET applications (sets the server.urls setting)
  • Create Windows Container – set to “true” if you’re targeting a Windows docker host
  • Auth Options – additional arguments to supply to docker commands (for example --tlscert)
  • Remove Conflicting Containers – removes containers currently running on the same port when set to “true”

Success

Once the build completes, you’ll be able to see the image in “docker images”.

image

image

If you’ve set “build only” to false you’ll be able to access your application!

image

Happy publishing!

Docker DevOps


Recently I attended the MVP Summit in Redmond. This is an annual event where MVPs from around the world converge on Microsoft to meet with each other and various product teams. It’s a highlight of the year (and one of the best benefits of being an MVP).

image

The ALM MVPs have a tradition – we love to hear what other MVPs have been doing, so we have a pre-Summit session where we get 20-minute slots to share anything that’s interesting. This year I did a slideware chat entitled “Docker DevOps”. It was just a collection of thoughts that I have on what Docker means for DevOps. I’d like to put some of those thoughts down in this post.

Docker Means Containers

Docker isn’t actually a technology per se. It’s just a containerization manager that happened to be at the right place at the right time – it’s made containers famous.

Container technology has been around for a fairly long time – most notably in the Linux kernel. Think of containers as the evolution of virtualization. When you have a physical server, it can be idle a lot of the time. So virtualization became popular, allowing us to create several virtual machines (VMs) on a single server. Apps running on the VM don’t know they’re on a VM – the VM has abstracted the physical hardware. Now most developers and IT Pros take virtualization for granted.

Containers take the abstraction deeper – they abstract the OS too. Containers are running instances of images. The base layer of the image is typically a lightweight OS – only the bare essentials needed to run an app. Typically that means no UI or anything else that isn’t strictly needed. Images are also immutable. Under the hood, when you change an image, you actually create a differencing layer on top of the current layer. Containers also share layers – for example, if two containers have an ubuntu14.04 base layer, and then one has nginx and another has MySQL, there’s only one physical copy of the ubuntu14.04 image on disk. Shipping containers means just shipping the differencing top layers, which makes them easily portable.

Windows Containers

So what about Windows containers? Windows Server 2016 TP 4 (the latest release at the time of this article) has support for Windows containers – the first OS from Microsoft to support containerization. There are two flavors of Windows container – Windows Server containers and Hyper-V containers. Windows Server container processes are visible on the host, while Hyper-V containers are completely “black box” as far as the host is concerned – that makes the Hyper-V containers “more secure” than Windows Server containers. You can switch the mode at any time.

Windows container technology is still in its infancy, so there are a few rough edges, but it does show that Microsoft is investing in container technology. Another glaring sign is the fact that you can already create Docker hosts in Azure (both for Windows and Linux containers). Microsoft is also actively working on open-source Docker.

What Containers Mean For You

So what does it all mean for you? Here’s the rub – just like you’ve probably not installed a physical server for some years because of virtualization, I predict that pretty soon you won’t even install and manage VMs anymore. You’ll have a “cloud of hosts” somewhere (you won’t care where) and have the ability to spin up containers to your heart’s content. In short, it’s the way of the future.

So here are some things you need to be thinking about if you want to ride the wave of the future:

  • Microservices
  • Infrastructure as Code
  • Immutable machines
  • Orchestration
  • Docker Repositories
  • It works on my machine

Microservices

The most important architectural change that containers bring is microservices. In order to use containers effectively, you have to (re-)architect your applications into small, loosely coupled services (each deployed into its own container). This makes each individual service simpler, but moves quite a bit of complexity into managing the services. Coordinating all these microservices is a challenge. However, I believe that the complexity at the management level is – well, more manageable. If done correctly, microservices can be deployed without much (or any) impact to other services, so you can isolate issues, deploy smaller units more frequently and gain scalability in the parts of the overall application that require it, as and when they require it (this is the job of the orchestration engine – something I’ll talk to later). This is much better than having to deploy an entire monolithic application every time.

So what about networking between the containers? Turns out that Docker is pretty good at managing how containers talk to each other (via Docker Swarm and Docker Compose). Each container must define which ports it exposes (if any). You can also link containers, so that they can communicate with each other. Furthermore, you have to explicitly define a “mapping” between the container ports and the host ports in order for the container to be exposed outside its host machine. So you have tight control over the surface area of each container (or group of containers). But it’s another thing to manage.
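
As a rough sketch (the image and container names here are hypothetical), the linking and port mapping looks like this:

# run a database container - it exposes a port but is not mapped to the host
docker run -d --name db mydbimage

# run the web container linked to the db container, mapping
# container port 80 to port 8080 on the host
docker run -d --name web --link db:db -p 8080:80 mywebimage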

Infrastructure as Code

When you create a Docker image, you specify it in a Dockerfile. The Dockerfile contains instructions in text that tell the Docker engine how to build up an image. The starting layer is typically the (minimal) OS. Then follow instructions to install dependencies that the top app layers will need. Finally, the app itself is added.
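
A minimal sketch of that layering (the base image and packages are just illustrative):

# base layer: a minimal OS
FROM ubuntu:14.04

# dependencies the app needs
RUN apt-get update && apt-get install -y nginx

# finally, the app itself
ADD . /app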

Specifying your containers in this manner forces you to express your infrastructure as code. This is a great practice, whether you’re doing it for Docker or not. After you’ve described your infrastructure as code, you can automate building the infrastructure – so Infrastructure as Code is a building block for automation. Automation is good – it allows rapid and reliable deployment, which means better quality, faster. It does mean that you’re going to have to embrace DevOps – and have both developers and operations (or better yet your DevOps team) work together to define and manage the infrastructure code. In this brave new world, no-one is installing OSs or anything else using GUIs. Script it baby, script it!

Immutable Machines

Containers are essentially immutable. Under the hood, if you change a container, you actually freeze the current top layer (so that it’s immutable) and add a new “top layer” with the changes (this is enabled by Union File Systems). In fact, if you do it correctly, you should never have a reason to change a container once it’s out of development. If you really do need to change something (or say, deploy some new code for your app within the container), you actually throw away the existing container and create a new one. Don’t worry though – Docker is super efficient – which means that you won’t need to rebuild the entire image from scratch – the interim layers are stored in the Docker engine, so Docker is smart enough to just use the common layers again and just create a new differencing layer for the new image.

Be that as it may, there is a shift in thinking about containers in production. They should essentially be viewed as immutable. Which means that your containers have to be stateless. That obviously won’t work for databases or any other persistent data. So Docker has the concept of data volumes, which are special directories that can be accessed (and shared) by containers but that are outside the containers themselves. This means you have to really think about where the data are located for containers and where they live on the host (since they’re outside the containers). Migrating or upgrading data is a bit tricky with so many moving parts, so it’s something to think about carefully.
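
For example (the paths and image name are hypothetical), a data volume maps a host directory into the container so that the data outlives the container:

# mount the host folder /data/mydb into the container at /var/lib/mydb
docker run -d --name db -v /data/mydb:/var/lib/mydb mydbimage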

Orchestration

So let’s imagine that you’ve architected an application composed of several microservices that can be deployed independently. You can spin them all up on a single machine and then – hang on, a single machine? Won’t that hit resource limitations pretty quickly? Yes it will. And what about the promise of scale – that if a container comes under pressure I can just spin another instance (container) up and voila – I’ve scaled out? Won’t that depend on how much host resources are available? Right again.

This is where tools like Docker Swarm come into play. Docker Swarm allows you to create and access a pool of Docker hosts. Ok, that’s great for deploying apps. But what about monitoring the resources available? And wouldn’t it be nice if the system could auto-scale? Enter Apache Mesos and Mesosphere (there are other products in this space too). Think of Mesos as a distributed kernel. It aggregates a bunch of machines – be they physical, virtual or cloud – into what appears to be a single machine that you can program against. Mesosphere is then a layer on top of Mesos that further abstracts, allowing much easier consumption and use of the Datacenter OS (dcos), which enables highly available, highly automated systems. Mesos uses containers natively, so Docker works in Mesos and Mesosphere. If you’re going to build scalable apps, then you are going to need an orchestration engine like Mesosphere. And it runs in Azure too!

Docker Repositories

Docker enables you to define a container (or image) using a Dockerfile. This file can be shared via some code repository. Then developers can code against that container (by building it locally) and when it’s ready for production, Ops can pull the file down and build the container. Sounds like a great way to share and automate! Docker repositories allow you to share Dockerfiles in exactly this manner. There are public repos, like DockerHub, and you can of course create (or subscribe) to private repos. This means that you get to share base images from official partners (for example, if you need nginx, no need to build it yourself – just pull down the official image from DockerHub that the nginx guys themselves have built). It also means that you have a great mechanism for moving code from dev to QA to Production. Just share the images in a public (or private) repo, and if a tester wants to test they can just spin up a container or two for themselves. The containers run exactly the same wherever they are run, so it could be the developer’s laptop, in a Dev/Test lab or in Production. And since only the delta’s are actually moved around (common images are shared) it’s quick and efficient to share code in this manner.
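
A couple of illustrative commands (the registry and image names are made up):

# pull the official nginx image from Docker Hub
docker pull nginx

# tag a locally built image and push it to a registry so that other environments can pull it
docker tag mywebimage myregistry/mywebimage:1.0.0.8
docker push myregistry/mywebimage:1.0.0.8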

It Works on My Machine

“It works on my machine!” The classic exclamation heard from developers the world over every time a bug is filed. And we all laugh since we know that between your dev environment and Production lie a whole slew of differences. Except that now, since the containers run the same wherever they are run, if it works in the developer’s container, it works in the Prod container.

Of course there are ways the containers may differ – for example, most real-world containers will have environment variables that have different values in different environments. But containers actually allow “It works on my machine” to become a viable statement once more.

Conclusion

Containers are the way of the future. You want to make sure that you’re getting on top of containers early (as in now) so that you don’t get left behind. Start re-architecting your application into microservices, and start investigating hosting options, starting with Docker and Docker Compose and moving towards dcos like Mesosphere. And be proud, once more, to say, “It works on my machine!”

Happy containering!

WebDeploy, Configs and Web Release Management


It’s finally here – the new web-based Release Management (WebRM). At least, it’s here in preview on VSTS (formerly VSO) and should hopefully come to TFS 2015 in update 2.

I’ve blogged frequently about Release Management, the “old” WPF tool that Microsoft purchased from InCycle (it used to be called InRelease). The tool was good in some ways, and horrible in others – but it always felt like a bit of a stop-gap while Microsoft implemented something truly great – which is what WebRM is!

One of the most common deployment scenarios is deploying web apps – to IIS or to Azure. I blogged about using the old tool along with WebDeploy here. This post is a follow-on – how to use WebDeploy and WebRM correctly.

First I want to outline a problem with the out-of-the-box Tasks for deploying web apps. Then I’ll talk about how to tokenize the build package ready for multi-environment deployments, and finally I’ll show you how to create a Release Definition.

Azure Web App Deployment Task Limitations

If you create a new Release Definition, there is an “Azure Web App Deployment” Task. Why not just use that to deploy web apps? There are a couple of issues with this Task:

  1. You can’t use it to deploy to IIS
  2. You can’t manage different configurations for different environments (with the exception of connection strings)

The Task is great in that it uses a predefined Azure Service Endpoint, which abstracts credentials away from the deployment. However, the underlying implementation invokes an Azure PowerShell cmdlet Publish-AzureWebsiteProject. This cmdlet works – as long as you don’t intend to change any configuration except the connection strings. Have different app settings in different environments? You’re hosed. Here’s the Task UI in VSTS:

image

The good:

  • You select the Azure subscription from the drop-down – no messing with passwords
  • You can enter a deployment slot

The bad:

  • You have to select the zip file for the packaged site – no place for handling configs
  • Additional arguments – almost impossible to figure out what to put here. You can use this to set connection strings if you’re brave enough to figure it out

The ugly:

  • Web App Name is a combo-box, but it’s never populated, so you have to type the name yourself (why is it a combo-box then?)

In short, this demos nicely, but you’re not really going to use it for any serious deployments – unless you’ve set the app settings on the slots in the Azure Portal itself. Perhaps this will work for you – but if you change a setting value (or add a new setting) you’re going to have to manually update the slot using the Portal. Not a great automation story.

Config Management

So besides not being able to use the Task for IIS deployments, your biggest challenge is config management. Which is ironic, since building a WebDeploy package actually handles the config well – it places config into a SetParameters.xml file. Unfortunately the Task (because it is calling Publish-AzureWebsiteProject under the hood) only looks for the zip file – it ignores the SetParameters file.

So I got to thinking – and I stole an idea from Octopus Deploy: what if the deployment would just automagically replace any config setting value with any correspondingly named variable defined in the Release Definition for the target Environment? That would mean you didn’t have to edit long lists of arguments at all. Want a new value? Just add it to the Environment variables and the deployment takes care of it for you.

The Solution

The solution turned out to be fairly simple:

For the VS Solution:

  1. Add a parameters.xml file to your Website project for any non-connection-string settings you want to manage, using tokens for values
  2. Create a publish profile that inserts tokens for the website name and any db connection strings

For the Build:

  1. Configure a Team Build to produce the WebDeploy package (and cmd and SetParameters files) using the publish profile
  2. Configure the Build to upload the zip and supporting files as the output

For the Release:

  1. Write a script to do the parameter value substitution (replacing tokens with actual values defined in the target Environment) into the SetParameters file
  2. Invoke the cmd to deploy the Website

Of course, the “parameter substituting script” has to be checked into the source repo and also included as a build output in order for you to use it in the Release.

Creating a Tokenized WebDeploy Package in a Team Build

Good releases start with good packages. Since the same package is going to be deployed to multiple environments, you cannot “hardcode” any config settings into the package. So you have to create the package in such a way that it has tokens for any config values that the Release pipeline will replace with Environment specific values at deployment time. In my previous WebDeploy and Release Management post, I explain how to add the parameters.xml file and how to create a publish profile to do exactly that. That technique stays exactly the same as far as the VS solution goes.

Here’s my sample parameters.xml file for this post:

<?xml version="1.0" encoding="utf-8" ?>
<parameters>
  <parameter name="CoolKey" description="The CoolKey setting" defaultValue="__CoolKey__" tags="">
    <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/appSettings/add[@key='CoolKey']/@value">
    </parameterEntry>
  </parameter>
</parameters>

Note how I’m sticking with the double-underscore pre- and post-fix as the token, so the value (token) for CoolKey is __CoolKey__.
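
For context, the XPath in the parameterEntry above targets an appSettings entry in web.config along these lines (the value shown is just a local dev value – it gets replaced by the token when the package is built):

<configuration>
  <appSettings>
    <add key="CoolKey" value="local dev value" />
  </appSettings>
</configuration>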

Once you’ve got a parameters.xml file and a publish profile committed into your source repo (Git or TFVC – either one works fine), you’re almost ready to create a Team Build (vNext Build). You will need the script that “hydrates” the parameters from the Environment variables. I’ll cover the contents of that script shortly – let’s assume for now that you have a script called “Replace-SetParameters.ps1” checked into your source repo along with your website. Here’s the structure I use:

image

Create a new Build Definition – select Visual Studio Build as the template to start from. You can then configure whatever you like in the build, but you have to do 3 things:

  1. Configure the MSBuild arguments as follows in the “Visual Studio Build” Task:
    1. /p:DeployOnBuild=true /p:PublishProfile=Release /p:PackageLocation="$(build.StagingDirectory)"
    2. The name of the PublishProfile is the same name as the pubxml file in your solution
    3. The package location is set to the build staging directory
    4. image
  2. Configure the “Copy and Publish Build Artifacts” Task to copy the staging directory to a server drop:
    1. image
  3. Add a new “Publish Build Artifact” Task to copy the “Replace-SetParameters.ps1” script to a server drop called “scripts”:
    1. image

 

I like to version my assemblies so that my binary versions match my build number. I use a custom build Task to do just that. I also run unit tests as part of the build. Here’s my entire build definition:

image

Once the build has completed, the Artifacts look like this:

image

image

image

Here’s what the SetParameters file looks like if you open it up:

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="IIS Web Application Name" value="__SiteName__" />
  <setParameter name="CoolKey" value="__CoolKey__" />
  <setParameter name="EntityDB-Web.config Connection String" value="__EntityDB__" />
</parameters>

The tokens for SiteName and EntityDB both come from my publish profile – the token for CoolKey comes from my parameters.xml file.

Now we have a package that’s ready for Release!

Filling in Token Values

You can see how the SetParameters file contains tokens. We will eventually define values for each token for each Environment in the Release Definition. Let’s assume that’s been done already – then how does the release pipeline perform the substitution? Enter PowerShell!

When you execute PowerShell in a Release, any Environment variables you define in the Release Definition are created as environment variables that the script can access. So I wrote a simple script to read in the SetParameters file, use Regex to find any tokens and replace the tokens with the environment variable value. Of course I then overwrite the file. Here’s the script:

param(
    [string]$setParamsFilePath
)
Write-Verbose -Verbose "Entering script Replace-SetParameters.ps1"
Write-Verbose -Verbose ("Path to SetParametersFile: {0}" -f $setParamsFilePath)

# get the environment variables
$vars = gci -path env:*

# read in the setParameters file
$contents = gc -Path $setParamsFilePath

# perform a regex replacement
$newContents = "";
$contents | % {
    $line = $_
    if ($_ -match "__(\w+)__") {
        $setting = gci -path env:* | ? { $_.Name -eq $Matches[1]  }
        if ($setting) {
            Write-Verbose -Verbose ("Replacing key {0} with value from environment" -f $setting.Name)
            $line = $_ -replace "__(\w+)__", $setting.Value
        }
    }
    $newContents += $line + [Environment]::NewLine
}

Write-Verbose -Verbose "Overwriting SetParameters file with new values"
sc $setParamsFilePath -Value $newContents

Write-Verbose -Verbose "Exiting script Replace-SetParameters.ps1"

Notes:

  • Line 2: The only parameter required is the path to the SetParameters file
  • Line 8: Read in all the environment variables – these are populated according to the Release Definition
  • Line 11: Read in the SetParameters file
  • Line 15: Loop through each line in the file
  • Line 17: If the line contains a token, then:
    • Line 18-22: Find the corresponding environment variable, and if there is one, replace the token with the value
  • Line 27: Overwrite the SetParameters file

Caveats: this can be a little dangerous, since the environment variables that are in scope include more than just the ones you define in the Release Definition. For example, the environment includes a “UserName” variable, which is set to the build agent user name. So if you need to define a username variable, make sure you name it “WebsiteUserName” or something else that’s going to be unique.

Creating the Release Definition

We now have all the pieces in place to create a Release Definition. Each Environment is going to execute (at least) 2 tasks:

  • PowerShell – to call the Replace-SetParameters.ps1 script
  • Batch Script – to invoke the cmd file to publish the website

The PowerShell task is always going to be exactly the same – however, the Batch Script arguments are going to change slightly depending on if you’re deploying to IIS or to Azure.

I wanted to make sure this technique worked for IIS as well as for Azure (both deployment slots and “real” sites). So in this example, I’m deploying to 3 environments: Dev, Staging and Production. I’m using IIS for dev, to a staging deployment slot in Azure for Staging and the “real” Azure site for Production.

Here are the steps to configure the Release Definition:

  1. Go to the Release hub in VSTS and create a new Release Definition. Select “Empty” to start with an empty template.
    1. Enter a name for the Release Definition and change “Default Environment” to Dev
    2. image
  2. Click “Link to a Build Definition” and select the build you created earlier:
    1. image
  3. Click “+ Add Tasks” and add a PowerShell Task:
    1. For the “Script filename”, browse to the location of the Replace-SetParameters.ps1 file:
    2. image
    3. For the “Arguments”, enter the following:
      1. -setParamsFilePath $(System.DefaultWorkingDirectory)\CoolWebApp\drop\CoolWebApp.SetParameters.xml
      2. Of course you’ll have to fix the path to set it to the correct SetParameters file – $(System.DefaultWorkingDirectory) is the root of the Release downloads. Then there is a folder with the name of the Build (e.g. CoolWebApp), then the artifact name (e.g. drop), then the path within the artifact source.
  4. Click “+ Add Tasks” and add a Batch Script Task:
    1. For the “Script filename”, browse to the location of the WebDeploy cmd file:
    2. image
    3. Enter the correct arguments (discussed below).
  5. Configure variables for the Dev environment by clicking the ellipses button on the Environment tile and selecting “Configure variables”
    1. Here you add any variable values you require for your web app – these are the values that you tokenized in the build:
    2. image
    3. Azure sites require a username and password – I’ll cover those shortly.

The Definition should now look something like this:

image

Cmd Arguments and Variables

For IIS, you don’t need username and password for the deployments. This means you’ll need to configure the build agent to run as an identity that has permissions to invoke WebDeploy. The SiteName variable is going to be the name of the website in IIS plus the name of your virtual application – something like “Default Web Site/cool-webapp”. Also, you’ll need to configure the Agent on the Dev environment to be an on-premise agent (so select an on-premise queue) since the hosted agent won’t be able to deploy to your internal IIS servers.

For Azure, you’ll need the website username and password (which you can get by downloading the Publish profile for the site from the Azure Portal). They’ll need to be added as variables in the environment, along with another variable called “WebDeploySiteName” (which is required only if you’re using deployment slots). The SiteName is going to be the name of the site in Azure. Of course you’re going to “lock” the password field to make it a secret. You can use the Hosted agent for Environments that deploy to Azure.

Here are the 2 batch commands – the first is for local deployment to IIS, the 2nd for deployment to Azure:

  • /Y /M:http://$(WebDeploySiteName)/MsDeployAgentService
  • /Y /M:https://$(WebDeploySiteName).scm.azurewebsites.net:443/msdeploy.axd /u:$(AzureUserName) /p:$(AzurePassword) /a:Basic

For IIS deployments, you can set WebDeploySiteName to be the name or IP of the target on-premises server. Note that you’ll have to have WebDeploy remote agent running on the machine, with the appropriate permissions for the build agent identity to perform the deployment.

For Azure, the WebDeploySiteName is of the form “siteName[-slot]”. So if you have a site called “MyWebApp”, and you just want to deploy to the site, then WebDeploySiteName will be “MyWebApp”. If you want to deploy to a slot (e.g. Staging), then WebDeploySiteName must be set to “MyWebApp-staging”. You’ll also need to set the SiteName to the name of the site in Azure (“MyWebApp” for the site, “MyWebApp__slot” for a slot – e.g. “MyWebApp__staging”). Finally, you’ll need “AzureUserName” and “AzurePassword” to be set (according to the publish settings for the site).
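
Putting that together, the Batch Script Task for the Staging slot ends up executing something like this (the package and site names are illustrative – the cmd file name comes from the WebDeploy package your build produced):

CoolWebApp.deploy.cmd /Y /M:https://MyWebApp-staging.scm.azurewebsites.net:443/msdeploy.axd /u:$(AzureUserName) /p:$(AzurePassword) /a:Basic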

Cloning Staging and Production Environments

Once you’re happy with the Dev Environment, clone it to Staging and update the commands and variables. Then repeat for Production. You’ll now have 3 Environments in the Definition:

image

Also, if you click on “Configuration”, you can see all the Environment variables by clicking “Release variables” and selecting “Environment Variables”:

image

That will open a grid so you can see all the variables side-by-side:

image

Now you can ensure that you’ve set each Environment’s variables correctly. Remember to set approvals on each environment as appropriate!

2 More Tips

If you want to trigger the Release every time the linked Build produces a new package, then click on Triggers and enable “Continuous Deployment”.

You can get the Release number to reflect the Build package version. Click on General and change the Release Name format to:

$(Build.BuildNumber)-$(rev:r)

Now when you release 1.0.0.8, say, your release will be “1.0.0.8-1”. If you trigger a new release with the same package, it will be numbered “1.0.0.8-2” and so on.

Conclusion

WebRM is a fantastic evolution of Release Management. It’s much easier to configure Release Definitions, to track logs to see what’s going on and to configure deployment Tasks – thanks to the fact that the Release agent is the same as the Build agent. As far as WebDeploy goes, I like this technique of managing configuration – I may write a custom Build Task that bundles the PowerShell and Batch Script into a single task – that will require less argument “fudging” and bundle the PowerShell script so you don’t have to have it in your source repo. However, the process is not too difficult to master even without a custom Task, and that’s pleasing indeed!

Happy releasing!


Config Per Environment vs Tokenization in Release Management


In my previous post I experimented with WebDeploy to Azure websites. My issue with the out-of-the-box Azure Web App Deploy task is that you can specify the WebDeploy zip file, but you can’t specify any environment variables other than connection strings. I showed you how to tokenize your configuration and then use some PowerShell to get values defined in the Release to replace the tokens at deploy time. However, the solution still felt like it needed some more work.

At the same time that I was experimenting with Release Management in VSTS, I was also writing a Hands On Lab for Release Management using the PartsUnlimited repo. While writing the HOL, I had some debate with the Microsoft team about how to manage environment variables. I like a clean separation between build and deploy. To achieve that, I recommend tokenizing configuration, as I showed in my previous post. That way the build produces a single logical package (this could be a number of files, but logically it’s a single output) that has tokens instead of values for environment config. The deployment process then fills in the values at deployment time. The Microsoft team were advocating hard-coding environment variables and checking them into source control – a la “infrastructure as code”. The debate, while friendly, quickly seemed to take on the feel of an unwinnable debate like “Git merge vs rebase”. I think having both techniques in your tool belt is good, allowing you to select the one which makes sense for any release.

Config Per Environment vs Tokenization

There are then (at least) two techniques for handling configuration. I’ll call them “config per environment” and “tokenization”.

In “config per environment”, you essentially hard-code a config file per environment. At deploy time, you overwrite the target environment config with the config from source control. This could be an xcopy operation, but hopefully something a bit more intelligent – like an ARM Template param.json file. When you define an ARM template, you define parameters that are passed to the template when it is executed. You can also then define a param.json file that supplies the parameters. For example, look at the FullEnvironmentSetup.json and FullEnvironmentSetup.param.json file in this folder of the PartsUnlimited repo. Here’s the param.json file:

{
    "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "WebsiteName": {
            "value": ""
        },
        "PartsUnlimitedServerName": {
            "value": ""
        },
        "PartsUnlimitedHostingPlanName": {
            "value": ""
        },
        "CdnStorageAccountName": {
            "value": ""
        },
        "CdnStorageContainerName": {
            "value": ""
        },
        "CdnStorageAccountNameForDev": {
            "value": ""
        },
        "CdnStorageContainerNameForDev": {
            "value": ""
        },
        "CdnStorageAccountNameForStaging": {
            "value": ""
        },
        "CdnStorageContainerNameForStaging": {
            "value": ""
        }
    }
}

You can see how the parameters match the parameters defined in the template json file. In this case, since the repo is public, the values are just empty strings – but you can imagine how you could define “dev.param.json” and “staging.param.json” and so on – each environment gets its own param.json file. Then at deploy time, you specify to the release which param.json file to use for that environment in the Deploy Azure Resource Group task.

I’m still not sure I like hard-coding values and committing them to source control. The Microsoft team argued that this is “config as code” – but I still think that defining values in Release Management constitutes config as code, even if the code isn’t committed into source control. I’m willing to concede if you’re deploying to Azure using ARM – but I don’t think too many people are at present. Also, there’s the issue of sensitive information going to source control – in this case, the template actually requires a password field (not defined in the param file) – are you going to hardcode usernames/passwords into source control? And even if you do, if you just want to change a value, you need to create a new build since there’s no way to use the existing build – which is probably not what you want!

Let’s imagine you’re deploying your web app to IIS instead of Azure. How do you manage your configuration in that case? “Use config transformations!” you cry. The problem – as I pointed out in my previous post – is that if you have a config transform for each environment, you have to build a package for each environment, since the transformation occurs at build time, not at deploy time. Hence my preference for a single transform that inserts tokens into the WebDeploy package at build time that can be filled in with actual values at deploy time. This is what I call “tokenization”.

So when do you use config-per-environment and when do you use tokenization? I think that if you’ve got ARM templates, use config-per-environment. It’s powerful and elegant. However, even if you’re using ARM, if you have numerous environments, and environment configs change frequently, you may want to opt for tokenization. When you use config-per-environment, you’ll have to queue a new build to get the new config files into the drop that the release is deploying – while tokenization lets you change the value in Release Management and re-deploy an existing package. So if you prefer not to rebuild your binaries just to change an environment variable, then use tokenization. Also, if you don’t want to store usernames/passwords or other sensitive data in source control, then tokenization is better – sensitive information can be masked in Release Management. Of course you could do a combination – storing some config in source code and then just using Release Management for defining sensitive values.

Docker Environment Variables

As an aside, I think that Docker encourages tokenization. Think about how you wouldn’t hard-code config into the Dockerfile – you’d “assume” that certain environment variables are set. Then when you run an instance of the image, you would specify the environment variable values as part of the run command. This is (conceptually anyway) tokenization – the image has “placeholders” for the config that are “filled in” at deploy time. Of course, nothing stops you from specifying a Dockerfile per environment, but it would seem a bit strange to do so in the context of Docker.
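
A hypothetical run command that supplies config at deploy time looks like this (the names and values are made up):

docker run -d -p 80:80 -e "CoolKey=prod-value" -e "EntityDB=Server=proddb;Database=app" mywebimage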

You, dear reader, will have to decide which is better for yourself!

New Custom Build Tasks – Replace Tokens and Azure WebDeploy

So I still like WebDeploy with tokenization – but the PowerShell-based solution I hacked out in my previous post still felt like it could use some work. I set about seeing if I could wrap the PowerShell scripts into custom Tasks. I also felt that I could improve on the arguments passed to the WebDeploy cmd file – specifically for Azure Web Apps. Why should you download the Web App publishing profile manually if you can specify credentials to the Azure subscription as a Service Endpoint? Surely it would be possible to suck down the publishing profile of the website automatically? So I’ve created two new build tasks – Replace Tokens and Azure WebDeploy.

Replace Tokens Task

I love how Octopus Deploy automatically replaces web.config keys if you specify matching environment variables in a deployment project. I did something similar in my previous post with some PowerShell. The Replace Tokens task does exactly that – using some Regex, it will replace any matching token with the environment variable (if defined) in Release Management. It will work nicely on the WebDeploy SetParams.xml file, but could be used to replace tokens on any file you want. Just specify the path to the file (and optionally configure the Regex) and you’re done. This task is implemented in node, so it’ll work on any platform that the VSTS agent can run on.
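
Conceptually the replacement logic is simple. Here's a rough node sketch of the idea (this is not the actual task source – the double-underscore token format and the file handling are assumptions):

var fs = require('fs');

// read the target file and replace __TokenName__ with the matching environment variable
var filePath = process.argv[2];
var content = fs.readFileSync(filePath, 'utf8');

content = content.replace(/__(\w+)__/g, function (match, tokenName) {
    // leave the token untouched if no matching environment variable is defined
    return process.env[tokenName] !== undefined ? process.env[tokenName] : match;
});

fs.writeFileSync(filePath, content);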

Azure WebDeploy Task

I did indeed manage to work out how to get the publishing username and password of an Azure website from the context of an Azure subscription. So now you drop a “Replace Tokens” task to replace tokens in the SetParams.xml file, and then drop an Azure WebDeploy task into the Release. This looks almost identical to the out-of-the-box “Azure Web App Deployment” task except that it will execute the WebDeploy command using the SetParams.xml file to override environment variables.

Using the Tasks

I tried the same hypothetical deployment scenario I used in my previous post – I have a website that needs to be deployed to IIS for Dev, to a staging deployment slot in Azure for staging, and to the production slot for Production. I wanted to use the same tokenized build that I produced last time, so I didn’t change the build at all. Using my two new tasks, however, made the Release a snap.

Dev Environment

Here’s the definition in the Dev environment:

image

You can see the “Replace Tokens” task – I just specified the path to the SetParams.xml file as the “Target File”. The environment variables look like this:

image

Note how I define the app setting (CoolKey), the connection string (EntityDB) and the site name (the IIS virtual directory name of the website). The "Replace Tokens" task finds the corresponding tokens and replaces them with the values I've defined.

To publish to IIS, I can just use the “Batch Script” task:

image

I specify the path to the cmd file (produced by the build) and then add the arguments “/Y” to do the deployment (as opposed to a what-if) and use the “/M” argument to specify the IIS server I’m deploying to. Very clean!
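
For illustration, the batch script invocation boils down to something like this (the cmd file name and server name are hypothetical):

MyWebApp.deploy.cmd /Y /M:http://devwebserver/MSDeployAgentService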

Staging and Production Environments

For the staging environment, I use the same “Replace Tokens” task. The variables, however, look as follows:

image

The SiteName variable has been removed. This is because the “Azure WebDeploy” task will work out the site name internally before invoking WebDeploy.

Here’s what the Azure WebDeploy task looks like in Staging:

image

The parameters are as follows:

  • Azure Subscription – the Azure subscription Service Endpoint – this sets the context for the execution of this task
  • Web App Name – the name of the Web App in Azure
  • Web App Location – the Azure region that the site is in
  • Slot – the deployment slot (leave empty for production slot)
  • Web Deploy Package Path – the path to the webdeploy zip, SetParams.xml and cmd files

Internally, the task connects to the Azure subscription using the Endpoint credentials. It then gets the web app object (via the name) and extracts the publishing username/password and site name, taking the slot into account (the site name is different for each slot). It then replaces the SiteName variable in the SetParameters.xml file before calling WebDeploy via the cmd (which uses the zip and the SetParameters.xml file). Again, this looks really clean.

The production environment is the same, except that the Slot is empty, and the variables have production values.

IIS Web Application Deployment Task

After my last post, a reader tweeted me to ask why I don’t use the out-of-the-box IIS Web Application Deployment task. The biggest issue I have with this task is that it uses WinRM to remote to the target machine and then invokes WebDeploy “locally” in the WinRM session. That means you have to install and configure WinRM on the target machine before deploying. On the plus side, it does allow you to specify the SetParameters.xml file and even override values at deploy time. It can work against Azure Web Apps too. You can use it if you wish – just remember to use the “Replace Tokens” task before to get environment variables into your SetParameters.xml file!

Conclusion

Whichever method you prefer – config per environment or tokenization – Release Management makes the choice a purely philosophical one. Thanks to its customizable architecture, there's not much technical difference between the two approaches when it comes to defining the Release Definition. That, to my mind, is a good indication that Release Management in VSTS is a fantastic tool.

So make your choice and happy releasing!

Building VS 2015 Setup Projects in Team Build

Remember when Visual Studio had a setup project template? And then it was removed? Then you moved to WiX and after learning it for 3 months and still being confused, you just moved to Web Apps?

Well, everyone complained about the missing setup project template and MS finally added it back as an extension. That works great if you build out of Visual Studio – but what about automated builds? It turns out they don't understand the setup project, so you have to do some tweaking to get it to work.

Setup Project Options

There are a couple of options if you’re going to use setup projects.

  1. ClickOnce. This is a good option if you don't have a deployment solution that can deploy new versions of your application (such as System Center). It requires some fudging on the builds to get versioning to work in an automated fashion. At least it's free.
  2. WiX. Free and very powerful, but really hard to learn and you end up programming in XML – which is a pain. However, if you need your installer to do “extra” stuff (like create a database during install) then this is a good option. Automation is also complicated because you have to invoke Candle.exe and Light.exe to “build” the WiX project.
  3. VS Setup Projects. Now that they’re back in VS, you can use these projects to create installers. You can’t do too much crazy stuff – this just lays down the exe’s and gets you going. It’s easy to maintain, but you need to tweak the build process to build these projects. Also free.
  4. InstallShield and other 3rd party paid installer products. These are typically powerful, but expensive. Perhaps the support you get is worth the price, but you’ll have to decide if the price is worth the support and other features you don’t get from the other free solutions.

Tweaking Your Build Agent

Unfortunately you won't be able to build setup projects on the Hosted build agent, since these tweaks require access to the agent machine. So if you've got your own build agent, here's what you have to do:

  1. Install Visual Studio 2015 on the build machine.
  2. Install the extension onto your build machine.
  3. Configure the build agent service to run under a known user account (not local service, but some user account on the machine).
  4. Apply a registry hack – you have to edit HKCU\SOFTWARE\Microsoft\VisualStudio\14.0_Config\MSBuild\EnableOutOfProcBuild to have a DWORD of 0 (I didn't have the key, so I just added it) – see the command sketch just after this list. If you don't do this step, then you'll probably get an obscure error like this: "ERROR: An error occurred while validating. HRESULT = '8000000A'"
  5. Customize the build template (which I’ll show below).
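
If you'd rather script the registry tweak from step 4 than use regedit, something like this should do it – run it as the user account the build agent runs under, since the key lives in HKCU:

reg add "HKCU\SOFTWARE\Microsoft\VisualStudio\14.0_Config\MSBuild" /v EnableOutOfProcBuild /t REG_DWORD /d 0 /f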

It’s fairly nasty, but once you’ve done it, your builds will work without users having to edit the project file or anything crazy.

Customizing the Build Definition

You’ll need to configure the build to compile the entire solution first, and then invoke Visual Studio to create the setup package.

Let’s walk through creating a simple build definition to build a vdproj.

  1. Log in to VSTS or your TFS server and go to the build hub. Create a new build definition and select the Visual Studio template. Select the source repo and set the default queue to the queue that your build agent is connected to.
  2. Just after the Visual Studio Build task, add a step and select the “Command Line” task from the Utility section.
  3. Enter the path to devenv.com for the Tool parameter (this is typically “C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\devenv.com”).
  4. The arguments have the following format: solutionPath /build configuration projectPath
    1. solutionPath is the path to the solution file
    2. configuration is the config (debug, release etc.)
    3. projectPath is the path to the vdproj file
  5. Finally, expand the “Advanced” group and set the working folder to the path of the sln file and check the “Fail on Standard Error” checkbox.

Here’s an example:

image

For reference, here’s how my source is structured:

image

You can then publish the setup exe or msi if you need to. You can run tests or scripts or anything else during the build (for simplicity I deleted the unit test task in the above example).

I now have a successful build:

image

And the msi is in my drop, ready to be deployed in Release Management:

image

Happy setup building!

Staging Servers Must Die – Or Must They?

Edith Harbaugh published a thought-provoking post called Staging Servers Must Die with the byline "so continuous delivery may live." She asserts something I'd never really considered before: that separate, cascading Dev, QA, Staging and Prod environments are a hangover from Waterfall development.

Agility = No Build or Staging?

Harbaugh makes some bold assertions about what she calls "DevOps 2.0". First, she states that teams should ditch the concept of a build (which she calls antiquated). Developers should be checking source into their mainline and deploying immediately to Prod – with feature flags. The flag for a newly deployed feature defaults to "off for everyone" – no need to keep staging in sync with Prod, and no delay. The QA team is then given access to the feature, then beta customers, and slowly the number of users with access to the feature is increased until everyone has it and the feature is "live".

She calls out four problems with cascading environments. The first is time: she argues that a pipeline of environments slows delivery, since builds have to be queued and then progressively moved through the pipeline. Secondly, staging environments increase costs since they require infrastructure. Thirdly, she says that the effectiveness of staging environments is moot since they can almost never reproduce production exactly. Finally, she recounts bad experiences where she needed users to test on staging servers, and users continually logged into Prod instead of Staging (or vice-versa), so the effectiveness of having a staging environment was eclipsed by user confusion.

Feature Flags

I think that Harbaugh's view of feature flags may be a tad biased, since she is the CEO of LaunchDarkly, a product that allows developers to introduce and manage feature flags. Still, feature flags are a great solution to some of the challenges she lists. However, feature flags are hard to code and manage (which is why she has a product that helps teams manage them).

LaunchDarkly is a really neat idea – in your code, you call an API that queries LaunchDarkly to determine if this feature is on for this user. Then you can manage which users have which features outside the app in LaunchDarkly – great for A/B testing or releasing features to beta customers and so on.

Feature flags always sound great in theory, but how do you manage database schema differences? How do you fix a bug (what bug?) – do you need a feature flag for the bug fix? What about load testing a new feature – do you do that against Prod?

Agility

So are feature flags and ditching builds and staged environments the way to increase agility and progress to "DevOps 2.0"? It may be in some cases, but I don't think so. Automated deployment doesn't make you DevOps – DevOps is far more than just that.

Here are my thoughts on what you should be thinking about in your DevOps journey.

Microservices

You may be able to go directly to microservices, but even if you can't (and in some cases you probably shouldn't), you should be thinking about breaking large, monolithic applications into smaller, loosely-coupled components. Besides better architecture, isolating components gives you deployment granularity. That is, you can deploy a component of your application without having to deploy the entire application. This makes for much faster cycles, since teams that complete functionality in one component can deploy immediately without waiting for teams that are working on other components to be ready to deploy. Smaller, more frequent, asynchronous deployments are far better than large, infrequent, synchronized deployments.

Automated Builds with Automated Testing

This has always seemed so fundamental to me – I battle to understand why so many dev teams do not have builds and unit tests. This is one of my biggest disagreements with Harbaugh – when a developer checks in, the code should trigger a build that not only compiles, but goes through a number of quality checks. The most non-negotiable is unit testing with coverage analysis – that way you have some measure of code quality. Next, consider static code analysis, and better yet, integration with SonarQube or some other technical debt management system.

Every build should produce metrics about the quality of your code – tests passed/failed, coverage percentage, maintainability indexes and so on. You should know these things about your code – deploying directly to production (even with feature switches) bypasses any sort of quality analysis on your code.

Your build should also produce a deployable package – that is environment agnostic. You should be able to deploy your application to any environment, and have the deployment process take care of environment specific configuration.

Beyond unit testing, you should be creating automated integration tests. These should be running on an environment (we'll discuss that shortly) so that you're getting quality metrics back frequently. These tests typically take longer to run than unit tests, so they should at least be run on a schedule if you don't want them running on each check-in. Untested code should never be deployed to production – that means you're going to have to invest in keeping your test suites sharp – treat your test code as "real" code and help it to help you!

Automated Deployment with Infrastructure As Code

Harbaugh does make a good point – that cascading dev/test/staging type pipelines originate in Waterfall. I constantly try to get developers to separate branch from environment in their minds – it’s unfortunate that we have dev/test/prod branches and dev/test/prod environments – that makes developers think that the code on a branch is the code in the environment. This is almost never the case – I usually recommend a dev/prod branching structure and let the build track which code is in which environment (with proper versioning and labeling of course).

So we should repurpose our cascading environments – call them integration and load or something appropriate if you want to. You need somewhere to run all these shiny tests you’ve invested in. And go Cloud – pay as you use models mean that you don’t have to have hardware idling – you’ll get much more efficient usage of environments that are spun up/down as you need them. However, if you’re spinning environments up and down, you’ll need to use Infrastructure as Code in some form to automate the deployment and configuration of your infrastructure – ARM Templates, DSC scripts and the like.
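
As a taste of what Infrastructure as Code looks like, here's a minimal DSC sketch (the node name and feature list are hypothetical) that ensures IIS and ASP.NET are present on a web server:

Configuration WebFrontEnd {
    Node "webserver01" {
        WindowsFeature IIS {
            Ensure = "Present"
            Name   = "Web-Server"
        }
        WindowsFeature AspNet45 {
            Ensure = "Present"
            Name   = "Web-Asp-Net45"
        }
    }
}

# compile the configuration to a MOF and push it to the node
WebFrontEnd -OutputPath .\WebFrontEnd
Start-DscConfiguration -Path .\WebFrontEnd -Wait -Verbose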

You’ll then also need a tool for managing deployments in the pipeline – for example, Release Management. Release Management allows you to define tasks – that can deploy build outputs or run tests or do whatever you want to – in a series of environments. You can automate the entire pipeline (stopping when tests fail) or insert manual approval points. You can then configure triggers, so when a new build is available the pipeline automatically triggers for example. Whichever tool you use though, you’ll need a way to monitor what builds are where in which pipelines. And you can of course deploy directly to Production when it is appropriate to do so, so the pipeline won’t slow critical bugfixes if you don’t want it to.

Load Testing

So what about load and scale testing? Doing this via feature switches is almost impossible if you don’t want to bring your production environment to a grinding halt. If you’re frequently doing this, then consider replication of your databases so that you always have an exact copy of production that you can load test against. Of course, most teams can use a subset of prod and extrapolate results – so you’ll have to decide if matching production exactly is actually necessary for load testing.

Having large enough datasets should suffice – load testing should ideally be a relative operation. In other words, you're not testing for an absolute number, like how many requests per second your site can handle. Rather, you should be baselining and comparing runs. Execute load tests on the current code to set a baseline, then implement some performance improvement, then re-run the tests. You now compare the two runs to see if your tweaks were effective. This way, you don't necessarily need an exact copy of production data or environments – you just need to run the tests with the same data and environment so that comparisons make sense.

A/B Testing

Of course feature switches can be manipulated and managed in such a way as to enable A/B testing – having some users go to "version A" and some to "version B" of your application. It's still possible to do A/B testing without deploying to production – for example, using deployment slots in Azure. In an Azure site, you'd create a staging slot on your production site. The staging slot can have the same config as your production slot or have different config, so it could point to production databases if necessary. Then you'd use Traffic Manager to divert some percentage of traffic to the staging slot until you're happy (the users will be unaware of this – they go to the production URL and are none the wiser that they've been redirected to the staging slot). Then just swap the slots – instant deployment, no data loss, no confusion.

Conclusion

Staging environments shouldn’t die – they should be repurposed, rising like a Phoenix out of the ashes of Waterfall’s cremation. Automated builds with solid automated testing (which requires staging infrastructure) should be what you’re aiming for. That way, you can deploy into production quickly with confidence, something that’s hard to do if you deploy directly to production, even with feature switches.

Happy staging!

DevOps is a Culture, Not a Team

This post was originally posted on our Northwest Cadence blog – but I feel it's a really important post, so I'm cross-posting it here!

I recently worked at a customer that had created a DevOps team in addition to their Ops and Development teams. While I appreciated the attempt to improve their processes, inwardly I was horrified. Just as DevOps is not a product, I think it is bad practice to create a DevOps team. DevOps is a culture and a mindset that should pervade every member of your organization – beyond even developers and operations.

What is DevOps?

So how do you define DevOps? Donovan Brown, a DevOps product manager at Microsoft, defines DevOps succinctly as follows: DevOps is the union of people, process, and products to enable continuous delivery of value to end users. Unfortunately, since the name is an amalgamation of development and operations, most organizations get developers and ops to collaborate, and then boldly declare, “We do DevOps!”

Wider than Dev and Ops

However, true DevOps is a culture that should involve everyone that is involved in delivery of value to end users. This means that business should understand their role in DevOps. Testers and Quality Assurance (QA) should understand their role in DevOps. Project Management Offices (PMOs), Human Resources (HR) and any other part of the organization that touches on the flow of value should be aware of their role in DevOps.

That’s why creating a DevOps team is a fundamentally bad decision. It distances people outside the “DevOps” team from being involved in the culture of DevOps. If there’s a DevOps team, and I’m not on it, why should I worry about DevOps? In the same manner, DevOps that is confined solely to dev and ops is indicative of the culture not pervading the organization. To fully benefit from DevOps, the entire organization needs to embrace the mindset and culture of DevOps.

DevOps Values

What then is a DevOps culture? What values should be upheld by people as they improve their processes and utilize tools to aid in implementing DevOps practices? Here are a few:

  1. Whatever we do should ultimately deliver value to the end users
    1. This is absolutely key to good DevOps – everyone, from stakeholders to developers, to testers and ops should be thinking of how to deliver value. If you can’t ship it, it’s not delivered value – so fix it until you can deliver.
  2. There’s no such thing as a DevOps Hero
    1. DevOps is not the domain of a single individual or team. Everyone needs to buy in to the culture, and everyone needs to own it. And we need to build a culture of “team”, an ubuntu for value, within the entire organization.
  3. If we touch it, we own it
    1. If developers hand off their code to testers, then they ultimately assume “someone else” will check their code. If a developer is responsible for the after hours support calls, they’re more likely to ensure good quality. Of course enlisting the help of some testers will help that effort!
  4. We should examine everything we do for efficiencies
    1. Sometimes we need to step back and examine why we do certain things. Before we automated our deployments, we needed a change control board to wade through pages of "installation instructions". Now we've automated deployments – so do we still need the documentation or the checkpoint? We could go faster if we removed the "legacy" processes.
  5. We should be allowed to experiment
    1. Will automated deployment help us deliver value faster? Perhaps yes, perhaps no. We'll never find out if we never have the permission (and time) to experiment. And we can learn from failed experiments too – so we should value experimentation.

Everyone has a responsibility in DevOps

DevOps is more than just developers and ops getting together and automating a few things. While this is certainly a fast-track to better DevOps, the DevOps mindset has to widen to include other teams not traditionally associated with DevOps – like HR, PMOs and Testers.

Human Resources (HR)

Human Resources should be looking for people who are passionate about delivering value. DevOps culture is built faster when people have passion and care about their end users. Don't just inspect a candidate's technical competency – get a feel for how much of a team player they are and how much they will take responsibility for what they do. Don't hire people who just want to clock in and out and do as little as possible. Also, you may have to "trim the fat" – get rid of positions that are not delivering value.

A further boost for developing DevOps culture is the right working environment. Make your workplace a place people love – but make sure they don’t burn out too! Force them to go home and decompress with friends and family. If your teams are always working overtime, it’s an indication that something isn’t right. Find and improve the erroneous practice so that your team members can have a life – this will reinforce the passion and loyalty they have to delivering value.

PMOs (Project Management Offices)

The PMO needs to rethink in many areas – especially in utilization. Most PMOs strive to make sure that every team member is running at 100% utilization. However, there are problems with this approach:

  1. Humans don’t multitask
    1. Humans don’t multitask – they switch quickly. However, unlike computers that can switch context perfectly to give the illusion of multitasking, humans have fallible memories. Switching costs, since our memories are not perfect. If there are no gaps in our schedules, we will inevitably run late since we don’t usually account for the cost of switching
  2. No time for thinking, discussion and experimenting
    1. If you’re at 100% utilization, you inevitably feel like you don’t have time to think. You can’t get involved in discussions with other team members about how to best solve a problem. And you can’t reflect on what is working and what is not. You certainly won’t have time to experiment. Over the long run, this will hamper delivery of value, since you won’t be innovating.
  3. High utilization increases wait-time
    1. The busier a resource is, the longer you have to wait to access it. A simple formula proves this – wait time = % busy / % idle. So if a resource is 50% utilized, wait time is 50/50 = 1 unit. If that same resource is 90% utilized, the wait time is 90/10 = 9 units. So you have to wait 9 times longer to access a resource that's busy 90% of the time than when it's busy 50% of the time. Long wait times mean longer cycle times and lower throughput.

PMOs need to embrace the innovative nature of DevOps – and that means giving team members time in their schedules. And it means embracing uncertainty – don’t be afraid to trust the team.

Testers

As Infrastructure as Code, Continuous Integration (CI) and Continuous Deployment (CD) speed the delivery time, testers need to jump in and start automating their testing efforts. In fact, just as I think that a DevOps team is a bad idea, I think that a Testing team is just as bad. Testers should be part of the development/operations team, not a separate entity. And traditional “manual” testers need to beef up on their automation skills, since manual testing becomes a bottleneck in the delivery pipeline. Remember, testers that “find bugs” are not thinking DevOps – testers that aim to automate their tests so that results are faster and more accurate are thinking about real quality improvement – and that means they’re thinking about delivering value to the end users.

Conclusion

DevOps is not a team or a product – it is a culture that needs to pervade everyone in the organization. Everyone – from HR to PMOs to Testers, not just developers and ops – needs to embrace DevOps values, making sure that value is being delivered continually to their end users.

AppInsights Analytics in the Real World

Ever since Application Insights (AppInsights) was released, I’ve loved it. Getting tons of analytics about site usage, performance and diagnostics – pretty much for free – makes adding Application Performance Monitoring (APM) to you application a no-brainer. If you aren’t using AppInsights, then you really should be.

APM is the black sheep of DevOps – most teams are concentrating on getting continuous integration and deployment and release management, which are critical pillars of DevOps. But few teams are taking DevOps beyond deployment into APM, which is also fundamental to successful DevOps. AppInsights is arguably the easiest, least-friction method of quickly and easily getting real APM into your applications. However, getting insights from your AppInsights data has not been all that easy up until now.

Application Insights Analytics

A few days ago Brian Harry wrote a blog post called Introducing Application Insights Analytics. Internally, MS was using a tool called Kusto to do log analytics for many systems – including Visual Studio Team Services (VSTS) itself. (Perhaps Kusto is a reference to the ocean explorer Jacques Cousteau – as in, Kusto lets you explore oceans of data?) MS then productized their WPF Kusto app into the web-based Application Insights Analytics. App Insights Analytics adds phenomenal querying and visualizations onto AppInsights telemetry, allowing you to really dig into the data AppInsights logs. Later on I'll show you some really simple queries that we use to analyze our usage data.

Brian goes into detail about how fast the Application Insights Analytics engine is – and he should know, since they process terabytes worth of telemetry. Our telemetry is nowhere near that large, so performance of the query language isn't that big a deal for us. What is a big deal is the analytics and visualizations that the engine makes possible.

In this post I want to show you how to get AppInsights into a real world application. Northwest Cadence has a Knowledge Library application, and in order to generate tracing diagnostics and usage telemetry, we added AppInsights. Here are some of the lessons we learned about AppInsights along the way.

Configuring AppInsights

We have 4 sites that we deploy the same code to – there are 2 production sites, Azure Library and Knowledge Library, and each has a dev environment too. By default the AppInsights key is configured in ApplicationInsights.config. We wanted to have a separate AppInsights instance for each site, so we created 4 in Azure. Now we had the problem of where to set the key so that each site logs to the correct AppInsights instance.

Server-side telemetry is easy to configure. Add an app setting called “AIKey” in the web.config. In a startup method somewhere, you make a call to the Active TelemetryConfig:

Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active.InstrumentationKey = WebConfigurationManager.AppSettings["AIKey"];

This call then sets the AIKey for all server-side telemetry globally. But what about client side?

For that we added a static getter to a class like this:

private static string aiKey;
public static string AIKey
{
    get
    {
        if (string.IsNullOrEmpty(aiKey))
        {
            aiKey = WebConfigurationManager.AppSettings.Get("AIKey");
        }
        return aiKey;
    }
}

In the master.cshtml file, we added the client-side script for AppInsights and made a small modification to get the key injected in instead of hard-coded:

<script type="text/javascript">
    var appInsights=window.appInsights||function(config){
        function s(config){t[config]=function(){var i=arguments;t.queue.push(function(){t[config].apply(t,i)})}}var t={config:config},r=document,f=window,e="script",o=r.createElement(e),i,u;for(o.src=config.url||"//az416426.vo.msecnd.net/scripts/a/ai.0.js",r.getElementsByTagName(e)[0].parentNode.appendChild(o),t.cookie=r.cookie,t.queue=[],i=["Event","Exception","Metric","PageView","Trace"];i.length;)s("track"+i.pop());return config.disableExceptionTracking||(i="onerror",s("_"+i),u=f[i],f[i]=function(config,r,f,e,o){var s=u&&u(config,r,f,e,o);return s!==!0&&t["_"+i](config,r,f,e,o),s}),t
    }({
        instrumentationKey: "@Easton.Web.Helpers.Utils.AIKey"
    });

    window.appInsights=appInsights;
    appInsights.trackPageView();
</script>
You can see how we’re using Razor syntax to get the AIKey static property value for the instrumentationKey value.

The next thing we wanted was to set the application version (assembly version) and site type (either KL for “Knowledge Library” or Azure for “Azure Library”). Perhaps this is a bit overkill since we have 4 separate AppInsights instances anyway, but if we decide to consolidate at some stage we can do so and preserve partitioning in the data.

Setting telemetry properties for every log entry is a little harder – there used to be an IConfigurationInitializer interface, but it seems it was deprecated. So we implemented an ITelemetryInitializer instance:

public class AppInsightsTelemetryInitializer : ITelemetryInitializer
{
    string appVersion = GetApplicationVersion();
    string siteType = GetSiteType();

    private static string GetSiteType()
    {
        return WebConfigurationManager.AppSettings["SiteType"];
    }

    private static string GetApplicationVersion()
    {
        return typeof(AppInsightsTelemetryInitializer).Assembly.GetName().Version.ToString();
    }

    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.Component.Version = appVersion;
        telemetry.Context.Properties["siteType"] = siteType;
    }
}

In order to tell AppInsights to use the initializer, you need to add an entry to the ApplicationInsights.config file:

<TelemetryInitializers>
  ...
  <Add Type="Easton.Web.AppInsights.AppInsightsTelemetryInitializer, Easton.Web"/>
</TelemetryInitializers>

Now the version and siteType properties are added to every server-side log. Of course we could add additional “global” properties using the same code if we needed more.

Tracing

Last week we had an issue with our site. There's a signup process in which we generate an access code; customers then enter the access code and enable integration with their Azure Active Directory so that their users can authenticate against their AAD when logging into our site. Customers started reporting that the access code "wasn't found". The bug turned out to be that a static variable on a base class is shared across all child instances – so our Azure Table data access classes were pointing to the incorrect tables. (We fixed the issue using a curiously recurring generic base class – a topic for another day.) The issue had us stumped for a while.

Initially I thought, “I can debug this issue quickly – I have AppInsights on the site so I can see what’s going on.” Turns out that there wasn’t any exception for the issue – the data access searched for an entity and couldn’t find it, so it reported the “access code not found” error that our customers were seeing. I didn’t have AppInsights tracing enabled – so I immediately set about adding it.

First, you install the Microsoft.ApplicationInsights.TraceListener package from NuGet. Then you can pepper your code with trace calls to System.Diagnostics.Trace – each one is sent to AppInsights by the TraceListener.
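
From the NuGet Package Manager Console that's just:

Install-Package Microsoft.ApplicationInsights.TraceListener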

We decided to create an ILogger interface and a base class that just did a call to System.Diagnostics.Trace. Here’s a snippet:

public abstract class BaseLogger : ILogger
{
    public virtual void TraceError(string message)
    {
        Trace.TraceError(message);
    }

    public virtual void TraceError(string message, params object[] args)
    {
        Trace.TraceError(message, args);
    }

    public virtual void TraceException(Exception ex)
    {
        Trace.TraceError(ex.ToString());
    }

    // ... TraceInformation and TraceWarning methods same as above

    public virtual void TraceCustomEvent(string eventName, IDictionary<string, string> properties = null, IDictionary<string, double> metrics = null)
    {
        var propertiesStr = "";
        if (properties != null)
        {
            foreach (var key in properties.Keys)
            {
                propertiesStr += string.Format("{0}{1}{2}", key, properties[key], Environment.NewLine);
            }
        }


        var metricsStr = "";
        if (metrics != null)
        {
            foreach (var key in metrics.Keys)
            {
                metricsStr += string.Format("{0}{1}{2}", key, metrics[key], Environment.NewLine);
            }
        }

        Trace.TraceInformation("Custom Event: {0}{1}{2}{1}{3}", eventName, Environment.NewLine, propertiesStr, metricsStr);
    }
}

The TraceInformation and TraceError methods are pretty straightforward – the TraceCustomEvent was necessary to enable custom telemetry. Using the logger to add tracing and exception logging is easy. We inject an instance of our AppInsightsLogger (more on this later) and then we can use it to log. Here’s an example of our GET videos method (we use NancyFx which is why this is an indexer method):

Get["/videos"] = p =>
{
    try
    {
        logger.TraceInformation("[/Videos] Returning {0} videos", videoManager.Videos.Count);
        return new JsonResponse(videoManager.Videos, new EastonJsonNetSerializer());
    }
    catch (Exception ex)
    {
        logger.TraceException(ex);
        throw;
    }
};

Custom Telemetry

Out of the box you get a ton of great logging in AppInsights – page views (including browser type, region, language and performance) and server side requests, exceptions and performance. However, we wanted to start doing some custom analytics on usage. Our application is multi-tenant, so we wanted to track the tenantId as well as the user. We want to track each time a user views a video so we can see which users (across which tenants) are accessing which videos. Here’s the call we make to log that a user has accessed a video:

logger.TraceCustomEvent("ViewVideo", new Dictionary<string, string>() { { "TenantId", tenantId }, { "User", userId }, { "VideoId", videoId } });

The method in the AppInsightsLogger is as follows:

public override void TraceCustomEvent(string eventName, IDictionary<string, string> properties = null, IDictionary<string, double> metrics = null)
{
    AppInsights.TrackEvent(eventName, properties, metrics);
}

Pretty simple.
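
For completeness, the AppInsightsLogger doesn't need much more than that. Here's a minimal sketch – the TelemetryClient field backing the AppInsights.TrackEvent call is an assumption, since only the override is shown above:

public class AppInsightsLogger : BaseLogger
{
    // assumption: a TelemetryClient instance backs the AppInsights.TrackEvent call above
    private readonly TelemetryClient AppInsights = new TelemetryClient();

    // the Trace* methods inherited from BaseLogger are picked up by the AppInsights TraceListener,
    // so only TraceCustomEvent (shown above) talks to the TelemetryClient directly
}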

Analytics Queries

Now that we’re getting some telemetry, including requests and custom events, we can start to query. Logging on to the Azure Portal I navigate to the AppInsights instance and click on the Analytics button in the toolbar:

image

That will open the AppInsights Analytics page. Here I can start querying my telemetry. There are several “tables” that you can query – requests, traces, exceptions and so on. If I want to see the performance percentiles of my requests in 1 hour bins for the last 7 days, I can use this query which calculates the percentiles and then renders to a time chart:

image

requests
| where timestamp >= ago(7d)
| summarize percentiles(duration,50,95,99) by bin (timestamp, 1h)
| render timechart

The query syntax is fairly “natural” though I did have to look at these help docs to get to grips with the language.

Sweet!

You can even join the tables. Here’s an example from Brian Harry’s post that correlates exceptions and requests:

image

requests
| where timestamp > ago(2d)
| where success == "False"
| join kind=leftouter (
    exceptions
    | where timestamp > ago(2d)
) on operation_Id
| summarize exceptionCount=count() by operation_Name
| order by exceptionCount asc

Note that I did have some trouble with the order by direction – it could be a bug (this is still in preview) or maybe I just don't understand the ordering well enough.

Here are a couple of queries against our custom telemetry:

image

customEvents
| where timestamp > ago(7d)
| where name == "ValidateToken"
| extend user = tostring(customDimensions.User), tenantId = tostring(customDimensions.TenantId)
| summarize logins = dcount(user) by tenantId, bin(timestamp, 1d)
| order by logins asc

Again, the ordering direction seems odd to me.

I love the way that the customDimensions (which is just a json snippet) is directly addressable. Here’s what the json looks like for our custom events:

image

You can see how the “siteType” property is there because of our ITelemetryInitializer.

Visualizations

After writing a couple of queries, we can then add a visualization by adding a render clause. You've already seen "render timechart" above – but there's also piechart, barchart and table. Here's a query that renders a stacked bar chart showing user views (per tenant) in hourly bins:

customEvents
| where timestamp >= ago(7d)
| extend user = tostring(customDimensions.User), videoId = tostring(customDimensions.VideoId), tenantId = tostring(customDimensions.TenantId)
| summarize UserCount = dcount(user) by tenantId, bin (timestamp, 1h)
| render barchart

image

This is just scratching the surface, but I hope you get a feel for what this tool can bring out of your telemetry.

Exporting Data to PowerBI

The next step is to make a dashboard out of the queries that we’ve created. You can export to Excel, but for a more dynamic experience, you can also export to PowerBI. I was a little surprised that when I clicked “Export to PowerBI” I got a text file. Here’s the same bar chart query exported to PowerBI:

/*
The exported Power Query Formula Language (M Language ) can be used with Power Query in Excel 
and Power BI Desktop. 
For Power BI Desktop follow the instructions below: 
 1) Download Power BI Desktop from https://powerbi.microsoft.com/en-us/desktop/ 
 2) In Power BI Desktop select: 'Get Data' -> 'Blank Query'->'Advanced Query Editor' 
 3) Paste the M Language script into the Advanced Query Editor and select 'Done' 
*/


let
Source = Json.Document(Web.Contents("https://management.azure.com/subscriptions/someguid/resourcegroups/rg/providers/microsoft.insights/components/app-insights-instance/api/query?api-version=2014-12-01-preview", 
[Query=[#"csl"="customEvents| where timestamp >= ago(7d)| extend user = tostring(customDimensions.User), videoId = tostring(customDimensions.VideoId), tenantId = tostring(customDimensions.TenantId)| summarize UserCount = dcount(user) by tenantId, bin (timestamp, 1h)| render barchart"]])),
SourceTable = Record.ToTable(Source), 
SourceTableExpanded = Table.ExpandListColumn(SourceTable, "Value"), 
SourceTableExpandedValues = Table.ExpandRecordColumn(SourceTableExpanded, "Value", {"TableName", "Columns", "Rows"}, {"TableName", "Columns", "Rows"}), 
RowsList = SourceTableExpandedValues{0}[Rows], 
ColumnsList = SourceTableExpandedValues{0}[Columns],
ColumnsTable = Table.FromList(ColumnsList, Splitter.SplitByNothing(), null, null, ExtraValues.Error), 
ColumnNamesTable = Table.ExpandRecordColumn(ColumnsTable, "Column1", {"ColumnName"}, {"ColumnName"}), 
ColumnsNamesList = Table.ToList(ColumnNamesTable, Combiner.CombineTextByDelimiter(",")), 
Table = Table.FromRows(RowsList, ColumnsNamesList), 
ColumnNameAndTypeTable = Table.ExpandRecordColumn(ColumnsTable, "Column1", {"ColumnName", "DataType"}, {"ColumnName", "DataType"}), 
ColumnNameAndTypeTableReplacedType1 = Table.ReplaceValue(ColumnNameAndTypeTable,"Double",Double.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType2 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType1,"Int64",Int64.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType3 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType2,"Int32",Int32.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType4 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType3,"Int16",Int16.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType5 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType4,"UInt64",Number.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType6 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType5,"UInt32",Number.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType7 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType6,"UInt16",Number.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType8 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType7,"Byte",Byte.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType9 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType8,"Single",Single.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType10 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType9,"Decimal",Decimal.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType11 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType10,"TimeSpan",Duration.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType12 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType11,"DateTime",DateTimeZone.Type,Replacer.ReplaceValue,{"DataType"}),
ColumnNameAndTypeTableReplacedType13 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType12,"String",Text.Type,Replacer.ReplaceValue,{"DataType"}),
ColumnNameAndTypeTableReplacedType14 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType13,"Boolean",Logical.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType15 = Table.ReplaceValue(ColumnNameAndTypeTableReplacedType14,"SByte",Logical.Type,Replacer.ReplaceValue,{"DataType"}), 
ColumnNameAndTypeTableReplacedType16 = Table.SelectRows(ColumnNameAndTypeTableReplacedType15, each [DataType] is type), 
ColumnNameAndTypeList = Table.ToRows(ColumnNameAndTypeTableReplacedType16), 
TypedTable = Table.TransformColumnTypes(Table, ColumnNameAndTypeList) 
in
TypedTable

Ah, so I’ll need PowerBI desktop. No problem. Download it, open it and follow the helpful instructions in the comments at the top of the file:

image

Now I can create visualizations, add custom columns – do whatever I would normally do in PowerBI.

One thing I did want to do was fix up the nasty “tenantId”. This is a guid which is the Partition Key for an Azure Table that we use to store our tenants. So I just added a new Query to the report to fetch the tenant data from the table. Then I was able to create a relationship (i.e. foreign key) that let me use the tenant name rather than the nasty guid in my reports:

image

Here’s what the relationship looks like for the “Users Per Tenant Per Hour Query”:

image

Once I had the tables in, I could create reports. Here's a performance report:

image

One tip – when you add the “timestamp” property, PowerBI defaults to a date hierarchy (Year, Quarter, Month, Day). To use the timestamp itself, you can just click on the field in the axis box and select “timestamp” from the values:

image

Here’s one of our usage reports:

image

And of course, once I’ve written the report, I can just upload it to PowerBI to share with the team:

image

Look ma – it’s the same report!

Conclusion

If you’re not doing APM, then you need to get into AppInsights. If you’re already using AppInsigths, then it’s time to move beyond logging telemetry to actually analyzing telemetry and gaining insights from your applications using AppInights Analytics.

Happy analyzing!
