
Continuous Deployment of Service Fabric Apps using VSTS (or TFS)


Azure’s Service Fabric is breathtaking – the platform allows you to create truly “born in the cloud” apps that can really scale. The platform takes care of the plumbing for you so that you can concentrate on business value in your apps. If you’re looking to create cloud apps, then make sure you take some time to investigate Service Fabric.

Publishing Service Fabric Apps

Unfortunately, most of the samples (like this getting started one or this more real-world one) don’t offer any guidance around continuous deployment. They just wave their hands and say, “Publish from Visual Studio” or “Publish using PowerShell”. Which is all well and good – but how do you actually do proper DevOps with Service Fabric apps?

Publishing apps to Service Fabric requires that you package the app and then publish it. Fortunately VSTS allows you to fairly easily package the app in an automated build and then publish the app in a release.

There are two primary challenges to doing this:

  1. Versioning. Versioning is critical to Service Fabric apps, so your automated build is going to have to know how to version the app (and its constituent services) correctly
  2. Publishing – new vs upgrade. The out-of-the-box publish script (that you get when you do a File->New Service Fabric App project) needs to be invoked differently for new apps as opposed to upgrading existing apps. In the pipeline, you want to be able to publish the same way – whether or not the application already exists. Fortunately a couple modifications to the publish script do the trick.

Finally, the cluster should be created or updated on the fly during the release – that’s what the ARM templates do.

To demonstrate a Service Fabric build/release pipeline, I’m going to use a “fork” of the original VisualObjects sample from the getting started repo (it’s not a complete fork since I just wanted this one solution from the repo). I’ve added an ARM template project to demonstrate how to create the cluster using ARM during the deployment and then I’ve added two publishing profiles – one for Test and one for Prod. The ARM templates and profiles for both Test and Prod are exactly the same in the repo – in real life you’ll have a beefier cluster in Prod (with different application parameters) than you will in test, so the ARM templates and profiles are going to look different. Having two templates and profiles gives you the idea of how to separate environments in the Release, which is all I want to demonstrate.

This entire flow works on TFS as well as VSTS, so I’m just going to show you how to do this using VSTS. I’ll call out differences for TFS when necessary.

Getting the Code

The easiest way is to just fork this repo on Github. You can of course clone the repo, then push it to a VSTS project if you prefer. For this post I’m going to use code that I’ve imported into a VSTS repo. If you’re on TFS, then it’s probably easiest to clone the repo and push it to your TFS server.

Setting up the Build

Unfortunately the Service Fabric SDK isn’t installed on the hosted agent image in VSTS, so you’ll have to use a private agent. Make sure the Service Fabric SDK is installed on the build machine. Use this help doc to get the bits.

The next thing you’ll need is my VersionAssemblies custom build task. I’ve bundled it into a VSTS marketplace extension. If you’re on VSTS, just click “Install” – if you’re on TFS, you’ll need to download it and upload it. You’ll only be able to do this on TFS 2015 Update 2 or later.

Now go to your VSTS account and navigate to the Code hub. Create a new Build definition using the Visual Studio template. Select the appropriate source repo and branch (I’m just going to use master) and select the queue with your private agent. Select Continuous Integration to queue the build whenever a commit is pushed to the repo:

image

 

Change the name of the build – I’ve called mine “VisualObjects”. Go to the General tab and change the build number format to be

1.0$(rev:.r)

This will give the build number 1.0.1, then 1.0.2, 1.0.3 and so on.

Now we want to change the build so that it will match the ApplicationTypeVersion (from the application manifest) and all the Service versions within the ServiceManifests for each service within the application. So click “Add Task” and add two “VersionAssembly” tasks. Drag them to the top of the build (so that they are the first two tasks executed).

Configure the first one as follows:

image

Configure the second one as follows:

image

The first task finds the ApplicationManifest.xml file and replaces the version with the build number. The second task recursively finds all the ServiceManifest.xml files and then also replaces the version number of each service with the build number. After the build, the application and service versions will all match the build number.
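Under the hood the versioning is just a find-and-replace on the manifest XML. If you want to see roughly what the two tasks do (or do it by hand), here's a minimal PowerShell sketch – the search path and regexes are illustrative, not the task's actual settings:

# rough sketch of stamping the build number into the Service Fabric manifests
$buildNumber = $env:BUILD_BUILDNUMBER   # e.g. 1.0.3

# application manifest: ApplicationTypeVersion and the ServiceManifestRef versions
Get-ChildItem -Path .\src -Filter ApplicationManifest.xml -Recurse | ForEach-Object {
    (Get-Content $_.FullName) -replace '(ApplicationTypeVersion|ServiceManifestVersion)="[^"]+"', ('$1="' + $buildNumber + '"') |
        Set-Content $_.FullName
}

# service manifests: the ServiceManifest (and package) Version attributes
Get-ChildItem -Path .\src -Filter ServiceManifest.xml -Recurse | ForEach-Object {
    (Get-Content $_.FullName) -replace 'Version="[^"]+"', ('Version="' + $buildNumber + '"') |
        Set-Content $_.FullName
}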

The next 3 tasks should be “NuGet Installer”, “Visual Studio Build” and “Visual Studio Test”. You can leave those as is.

Add a new “Visual Studio Build” task and place it just below the test task. Configure the Solution parameter to the path of the .sfproj in the solution (src/VisualObjects/VisualObjects/VisualObjects.sfproj). Set the MSBuild Arguments parameter to “/t:Package”. Finally, add $(BuildConfiguration) to the Configuration parameter. This task invokes Visual Studio to package the Service Fabric app:

image

Now you’ll need to do some copying so that we get all the files we need into the artifact staging directory, ready for publishing. Add a couple “Copy” tasks to the build and configure them as follows:

image

This copies the Service Fabric app package to the staging directory.

image

This copies the Scripts folder to the staging directory (we’ll need this in the release to publish the app).

image

image

These tasks copy the Publish Profiles and ApplicationParameters files to the staging directory. Again, these are needed for the release.

You’ll notice that there isn’t a copy task for the ARM project – that’s because the ARM project automagically puts its output into the staging directory for you when building the solution.

You can remove the Source Symbols task if you want to – it’s not going to harm anything if it’s there. If you really want to keep the symbols you’ll have to specify a network share for the symbols to be copied to.

Finally, make sure that your “Publish Build Artifacts” task is configured like this:

image

Of course you can also choose a network folder rather than a server drop if you want. The tasks should look like this:

image

Run the build to make sure that it’s all happy. The artifacts folder should look like this:

image

Setting up the Release

Now that the app is packaged, we’re almost ready to define the release pipeline. There’s a decision to make at this point: to ARM or not to ARM. In order to create the Azure Resource Group containing the cluster from the ARM template, VSTS will need a secure connection to the Azure subscription (follow these instructions). This connection is service principal based, so you need to have an AAD backing your Azure subscription and you need to have permissions to add new applications to the AAD (being an administrator or co-admin will work – there may be finer-grained RBAC roles for this, I’m not sure). However, if you don’t have an AAD backing your subscription or can’t create applications, you can manually create the cluster in your Azure subscription. Do so now if you’re going to create the cluster(s) manually (one for Test, one for Prod).

To create the release definition, go to the Release hub in VSTS and create a new (empty) Release. Select the VisualObjects build as the artifact link and set Continuous Deployment. This will cause the release to be created as soon as a build completes successfully. (If you’re on TFS, you will have to create an empty Release and then link the build in the Artifacts tab). Change the name of the release to something meaningful (I’ve called mine VisualObjects, just to be original).

Change the name of the first environment to “Test”. Edit the variables for the environment and add one called “AdminPassword” and another called “ClusterName”. Set the admin password to some password and padlock it to make it a secret. The name that you choose for the cluster is the DNS name that you’ll use to address your cluster. In my case, I’ve selected “colincluster-test” which will make the URL of my cluster “colincluster-test.eastus.cloudapp.azure.com”.

image

Create or Update the Cluster

If you created the cluster manually, skip to the next task. If you want to create (or update) the cluster as part of the deployment, then add a new “Azure Resource Group Deployment” task to the Test environment. Set the parameters as follows:

  • Azure Connection Type: Azure Resource Manager
  • Azure RM Subscription: set this to the SPN connection you created from these instructions
  • Action: Create or Update Resource Group
  • Resource Group: a name for the resource group
  • Location: the location of your resource group
  • Template: browse to the TestServiceFabricClusterTemplate.json file in the drop using the browse button (…)
  • Template Parameters: browse to the TestServiceFabricClusterTemplate.parameters.json file in the drop using the browse button (…)
  • Override Template Parameters: set this to -adminPassword (ConvertTo-SecureString '$(AdminPassword)' -AsPlainText -Force) -dnsName $(ClusterName)

You can override any other parameters you need to in the Override parameters setting. For now, I’m just overriding the adminPassword and dnsName parameters.
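If you want to see what the task boils down to, here's a rough PowerShell equivalent (the resource group name and password are placeholders, and it assumes the AzureRM module and an authenticated session):

# rough equivalent of the Azure Resource Group Deployment task (sketch only)
Login-AzureRmAccount
New-AzureRmResourceGroup -Name "sf-test-rg" -Location "East US" -Force

New-AzureRmResourceGroupDeployment -ResourceGroupName "sf-test-rg" `
    -TemplateFile .\TestServiceFabricClusterTemplate.json `
    -TemplateParameterFile .\TestServiceFabricClusterTemplate.parameters.json `
    -adminPassword (ConvertTo-SecureString '<your admin password>' -AsPlainText -Force) `
    -dnsName "colincluster-test"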

image

Replace Tokens

The Service Fabric profiles contain the cluster connection information. Since you could be creating the cluster on the fly, I’ve tokenized the connection setting in the profile files as follows:

<?xml version="1.0" encoding="utf-8"?>
<PublishProfile xmlns="http://schemas.microsoft.com/2015/05/fabrictools">
  <!-- ClusterConnectionParameters allows you to specify the PowerShell parameters to use when connecting to the Service Fabric cluster.
       Valid parameters are any that are accepted by the Connect-ServiceFabricCluster cmdlet.
       
       For a remote cluster, you would need to specify the appropriate parameters for that specific cluster.
         For example: <ClusterConnectionParameters ConnectionEndpoint="mycluster.westus.cloudapp.azure.com:19000" />

       Example showing parameters for a cluster that uses certificate security:
       <ClusterConnectionParameters ConnectionEndpoint="mycluster.westus.cloudapp.azure.com:19000"
                                    X509Credential="true"
                                    ServerCertThumbprint="0123456789012345678901234567890123456789"
                                    FindType="FindByThumbprint"
                                    FindValue="9876543210987654321098765432109876543210"
                                    StoreLocation="CurrentUser"
                                    StoreName="My" />

  -->
  <!-- Put in the connection to the Prod cluster here -->
  <ClusterConnectionParameters ConnectionEndpoint="__ClusterName__.eastus.cloudapp.azure.com:19000" />
  <ApplicationParameterFile Path="..\ApplicationParameters\TestCloud.xml" />
  <UpgradeDeployment Mode="Monitored" Enabled="true">
    <Parameters FailureAction="Rollback" Force="True" />
  </UpgradeDeployment>
</PublishProfile>

You can see that there is a __ClusterName__ token (the highlighted line). You’ve already defined a value for cluster name that you used in the ARM task. Wouldn’t it be nice if you could simply replace the token called __ClusterName__ with the value of the variable called ClusterName? Since you’ve already installed the Colin's ALM Corner Build and Release extension from the marketplace, you get the ReplaceTokens task as well, which does exactly that! Add a ReplaceTokens task and set it as follows:

image
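For the curious, the token substitution the task performs is conceptually something like this (a simplified sketch – the real task has more options, and the file path here is just an example):

# simplified sketch of what ReplaceTokens does: swap __Name__ tokens for variable values
$profilePath = ".\PublishProfiles\TestCloud.xml"    # example path
$content = Get-Content $profilePath -Raw

foreach ($match in ([regex]'__(\w+)__').Matches($content)) {
    # build/release variables surface as environment variables, e.g. ClusterName
    $value = (Get-Item -Path ("env:" + $match.Groups[1].Value) -ErrorAction SilentlyContinue).Value
    if ($value) {
        $content = $content.Replace($match.Value, $value)
    }
}

Set-Content -Path $profilePath -Value $content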

IMPORTANT NOTE! The templates I’ve defined are not secured. In production, you’ll want to secure your clusters. The connection parameters then need a few more tokens like the ServerCertThumbprint and so on. You can also make these tokens that the ReplaceTokens task can substitute. Just note that if you make any of them secrets, you’ll need to specify the secret values in the Advanced section of the task.

Deploying the App

Now that we have a cluster, a profile that can connect to the cluster, and a package ready to deploy, we can invoke the PowerShell script to deploy! Add a “Powershell Script” task and configure it as follows:

  • Type: File Path
  • Script filename: browse to the Deploy-FabricApplication.ps1 script in the drop folder (under drop/SFPackage/Scripts)
  • Arguments: Set to -PublishProfileFile ../PublishProfiles/TestCloud.xml -ApplicationPackagePath ../Package

The script needs to take at least the PublishProfile path and then the ApplicationPackage path. These paths are relative to the Scripts folder, so expand Advanced and set the working folder to the Scripts directory:

image
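If you ever want to test this outside the release, you can invoke the script by hand from a copy of the drop on a machine with the Service Fabric SDK installed – something like this (paths assume the drop layout from the build above):

cd .\drop\SFPackage\Scripts
.\Deploy-FabricApplication.ps1 -PublishProfileFile ..\PublishProfiles\TestCloud.xml -ApplicationPackagePath ..\Package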

That’s it! You can now run the release to deploy it to the Test environment. Of course you can add other tasks (like Cloud Load Tests etc.) and approvals. Go wild.

Changes to the OOB Deploy Script

I mentioned earlier that this technique has a snag: if the release creates the cluster (or you’ve created an empty cluster manually) then the Deploy script will fail. The reason is that the profile includes an <UpgradeDeployment> tag that tells the script to upgrade the app. If the app exists, the script works just fine – but if the app doesn’t exist yet, the deployment will fail. So to work around this, I modified the OOB script slightly. I just query the cluster to see if the app exists, and if it doesn’t, the script calls the Publish-NewServiceFabricApplication cmdlet instead of the Publish-UpgradedServiceFabricApplication. Here are the changed lines:

$IsUpgrade = ($publishProfile.UpgradeDeployment -and $publishProfile.UpgradeDeployment.Enabled -and $OverrideUpgradeBehavior -ne 'VetoUpgrade') -or $OverrideUpgradeBehavior -eq 'ForceUpgrade'

# check if this application exists or not
$ManifestFilePath = "$ApplicationPackagePath\ApplicationManifest.xml"
$manifestXml = [Xml] (Get-Content $ManifestFilePath)
$AppTypeName = $manifestXml.ApplicationManifest.ApplicationTypeName
$AppExists = (Get-ServiceFabricApplication | ? { $_.ApplicationTypeName -eq $AppTypeName }) -ne $null

if ($IsUpgrade -and $AppExists)

Lines 1 to 185 of the script are original (I show line 185 as the first line of this snippet). The if statement alters slightly to take $AppExists into account – the remainder of the script is as per the OOB script.
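To make the flow concrete, the tail end of the modified script branches roughly like this (a sketch only – these are the same SDK cmdlets the OOB script already calls, and the real calls pass through all the application and upgrade parameters exactly as before):

if ($IsUpgrade -and $AppExists)
{
    # the app is already in the cluster: do the (monitored) upgrade, as the OOB script does
    Publish-UpgradedServiceFabricApplication -ApplicationPackagePath $ApplicationPackagePath  # plus the OOB upgrade parameters
}
else
{
    # no app in the cluster yet (e.g. a freshly created cluster): plain publish
    Publish-NewServiceFabricApplication -ApplicationPackagePath $ApplicationPackagePath  # plus the OOB publish parameters
}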

image

Now that you have the Test environment, you can clone it to the Prod environment. Change the parameter values (and the template and profile paths) to make them prod-specific and you’re done! One more tip: if you change the release name format (under the General tab) to $(Build.BuildNumber)-$(rev:r) then you’ll get the build number as part of the release number.

Here you can see my cluster with the Application Version matching the build number:

image

Sweet! Now I can tell which build was used for my application right from my cluster!

See the Pipeline in Action

A fun demo to do is to deploy the app and then open up the VisualObjects url – that will be at clustername.eastus.cloudapp.azure.com:8082/VisualObjects (where clustername is the name of your cluster). You should see the bouncing triangles.

Then you can edit src/VisualObjects/VisualObjects.ActorService/VisualObjectActor.cs in Visual Studio or in the Code hub in VSTS. Look around line 50 for visualObject.Move(false); and change it to visualObject.Move(true). This will cause the triangle to start rotating. Commit the change and push it to trigger the build and the release. Then monitor the Service Fabric UI to see the upgrade trigger (from the release) and watch the triangles to see how they are upgraded in the Service Fabric rolling upgrade.

Conclusion

Service Fabric is awesome – and creating a build/release pipeline for Service Fabric apps in VSTS is a snap thanks to an amazing build/release engine – and some cool custom build tasks!

Happy releasing!


Updating XAML Release Builds after Upgrading Release Management Legacy from 2013 to 2015


You need to get onto the new Release Management (the web-based one) in VSTS or TFS 2015 Update 2. The new version is far superior to the old version for numerous reasons – it uses the new Team Build cross-platform agent, has a much simpler UI for designing releases, has better logging etc. etc.

However, I know that lots of teams are invested in Release Management “legacy”. Over the weekend I helped a customer upgrade their TFS servers from 2013 to 2015.2.1. Part of this included upgrading their Release Management Server from 2013 to 2015. This customer has been using Release Management since it was still InRelease! They have a large investment in their current release tools, so they need it to continue working so that they can migrate over time.

The team also trigger releases in Release Management from their XAML builds. Unfortunately, their builds started breaking once we upgraded the Release Management client on the build servers. The build error was something like: “Invalid directory”. (Before upgrading the client, the release step failed saying that the build service needed to be set up as a Service User – which it was. This error is misleading – it’s an indication that you need to upgrade the RM Client on the build machine).

Upgrading XAML Build Definitions

It turns out that the Release Management XAML templates include a step that reads the registry to obtain the location of the Release Management client binaries. This registry key has changed from RM 2013 to 2015, so you have two options:

  1. If you used the older ReleaseGitTemplate.12.xaml or ReleaseTfvcTemplate12.xaml files from RM 2013, then you can replace them with the updated release management templates that ship with the Release Management client (find them in \Program Files (x86)\Microsoft Visual Studio 14.0\ReleaseManagement\bin)
  2. If you customized your own templates (or customized the RM 2013 templates), you need to update your release template

Fortunately updating existing templates to work with the new RM client is fairly trivial. Here are the steps:

  1. Check out your existing XAML template
  2. Open it in Notepad (or using the XML editor in VS)
  3. Find the task with DisplayName “Get the Release Management install directory”. One of the arguments is a registry key – it will be something like HKEY_LOCAL_MACHINE\Software\Microsoft\ReleaseManagement\12.0\Client\. Replace this key with this value: HKEY_LOCAL_MACHINE\Software\WOW6432Node\Microsoft\ReleaseManagement\14.0\Client\ (you can sanity-check the new key with the snippet after this list)
  4. The task just below is for finding the x64 directory – you can do the same replacement in this task.
  5. Commit your changes and check in
  6. Build and release
  7. Party
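If you want to confirm that the new key actually exists on your build agents before editing the template, a quick check from PowerShell will do it (this just dumps the key’s values – if the key is missing, upgrade the RM client on that machine first):

# run on the build agent: the updated template reads this key to find the RM client binaries
Get-ItemProperty -Path 'HKLM:\Software\WOW6432Node\Microsoft\ReleaseManagement\14.0\Client'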

Thanks to Jesse Arens for this great find!

On a side note – the ALM Rangers have a project that will help you port your “legacy” RM workflows to the new web-based releases. You can find it here.

Happy releasing! (Just move off XAML builds and Release Management legacy as soon as possible – for your own sanity!)

DotNet Core, VS 2015, VSTS and Docker


I unashamedly love Docker. Late last year I posted some thoughts I had on Docker DevOps. In this post I’m going to take a look at Docker DevOps using DotNet Core 1.0.0, Docker Tools for Visual Studio, Docker for Windows and VSTS.

Just before I continue – I’m getting tired of typing “DotNet Core 1.0.0” so for the rest of this post when I say “.NET Core” I mean “DotNet Core 1.0.0” (unless otherwise stated!).

Highlights

For those of you that just want the highlights, here’s a quick summary:

  • .NET Core does indeed run in a Docker container
  • You can debug a .Net Core app running in a container from VS
  • You can build and publish a DotNet Core app to a docker registry using VSTS Build
  • You can run a Docker image from a registry using VSTS Release Management
  • You can get frustrated by the lack of guidance and the amount of hacking required currently

So what’s the point of this anyway? Well I wanted to know if I could create the following workflow:

  • Docker for Windows as a Docker host for local dev
  • Visual Studio with the Docker Tools for VS for local debugging within a container
  • VSTS for building (and publishing) a Docker image with an app
  • VSTS for releasing (running) an image

This is a pretty compelling workflow, and I was able to get it running relatively easily. One of the biggest frustrations was the lack of documentation and the immaturity of some of the tooling.

Grab a cup of coffee (or tea or chai latte – or if it’s after hours, a good IPA) and I’ll take you on my journey!

Docker Host in Azure

I started my journey with this article: Deploy ASP.NET Core 1.0 apps to a Docker Container (aka the VSTS Docker article). While certainly helpful, there are some issues with this article. Firstly, it’s designed for ASP.NET Core 1.0.0-rc1-update1 and not the RTM release (1.0.0). This mainly had some implications for the Dockerfile, but wasn’t too big an issue. The bigger issue is that it’s a little ambiguous in places, and the build instructions were quite useless. We’ll get to that later.

After skimming the article, I decided to first stand up a Docker host in Azure and create the service connections that VSTS requires for performing Docker operations. Then, I figured, I’d be ready to start coding and I’ll have a host to deploy to.

Immediately I hit a strange limitation – the Docker image in Azure can only be created using “classic” and not “Resource Group” mode. I ended up deciding that wasn’t too big a deal, but it’s still frustrating that the official image isn’t on the latest tech within Azure.

The next challenge was getting Docker secured. I followed the VSTS Docker article’s link to instructions on how to protect the daemon socket. I generated the TLS certificates and keys without too much fuss. However, I ran into issues ensuring that the Docker daemon starts with the keys! The article doesn’t tell you how to do that (it tells you how to start Docker manually), so I had to scratch around a bit. I found that you could set the daemon startup options by editing /etc/default/docker, so I opened it up and edited the DOCKER_OPTS to look like this:

DOCKER_OPTS="--tlsverify --tlscacert=/var/docker/ca.pem --tlscert=/var/docker/server-cert.pem --tlskey=/var/docker/server-key.pem -H=0.0.0.0:2376"

Of course I copied the pem files to /var/docker. I then restarted the Docker service.

It didn’t work. After a long time of hacking, searching, sacrificing chickens and anything else I could think of to help, I discovered that the daemon ignores the /etc/default/docker file altogether! Perhaps it’s just the Azure VM and Linux distro I’m on? Anyway, I had to edit the /etc/systemd/system/docker.service file. I changed the ExecStart command and added an EnvironmentFile command in the [Service] section as follows:

EnvironmentFile=/etc/default/docker

ExecStart=/usr/bin/docker daemon $DOCKER_OPTS

Now when I restart the service (using sudo service docker restart) the daemon starts correctly and is protected with the keys.
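To check that the daemon really is listening with TLS, you can point a Docker client at it with the certificates – something along these lines (the hostname and cert paths are placeholders, and as you'll see below the client and server API versions also need to line up):

docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=tcp://my-docker-host.cloudapp.net:2376 info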

I could now run docker commands on the machine itself. However, I couldn’t run commands from my local machine (which is running Docker for Windows) because:

Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.23)

I tried in vain to upgrade the Docker engine on the server, but could not for the life of me do it. The apt packages are on 1.23, and so eventually I gave up. I can run Docker commands by ssh-ing to the host machine if I really need to, so while irritating, it wasn’t a show-stopper.

.NET Core and Docker in VS

Now that I (finally) had a Docker host configured, I installed Docker Tools for Visual Studio onto my Visual Studio 2015.3. I also installed the .NET Core 1.0 SDK. I then did a File->New->Project and created an ASP.NET project – you know, the boilerplate one. I then followed the instructions from the VSTS article and right-clicked the project and selected “Add->Docker support”. This created a DockerTask.ps1 file, a Dockerfile (and Dockerfile.debug) and some docker-compose yml files. Great! However, nothing worked straight away (argh!) so I had to start debugging the scripts.

I kept getting this error: .\DockerTask.ps1 : The term '.\DockerTask.ps1' is not recognized as the name of a cmdlet, function, script file, or operable program. After lots of hacking, I finally found that there is a path issue somewhere. I opened up the Properties\Docker.targets file and edited the <DockerBuildCommand>: I changed “.\DockerTask.ps1” to the full path – c:\projects\docker\TestApp\src\TestApp\DockerTask.ps1. I did the same for the <DockerCleanCommand>. This won’t affect the build, but other developers who share this code will have to have the same path structure for this to work. Gah!

Now the command was being executed, but I was getting this error: No machine name(s) specified and no “default” machines exist. I again opened the DockerTask.ps1 script. It’s checking for a machine name to invoke docker-machine commands, but it’s only supposed to do this if the Docker machine name is specified. For Docker for Windows, you don’t have to use docker-machine, so the script makes provision for this by assuming Docker for Windows if the machine name is empty. At least, that’s what it’s supposed to do. For some reason, this line in the script is evaluating to true, even when $Machine was set to ‘’ (empty string):

if (![System.String]::IsNullOrWhiteSpace($Machine))

So I commented out the entire if block since I’ve got Docker for Windows and don’t need it to do any docker-machine commands.

Now at least the build operation was working, and I could see VS creating an image in my local Docker for Windows:

image

Next I tried debugging an app in the container. No dice. The issue seemed to be that the container couldn’t start on port 80. Looking at the Dockerfile and the DockerTask.ps1 files, I saw that the port is hard-coded to 80. So I changed the port to 5000 (making it a variable in the ps1 script and an ARG in my Dockerfile). Just remember that you have a Dockerfile.debug as well – and that the ports are hard-coded in the docker-compose.yml and docker-compose.debug.yml files too. The image name is also hardcoded all over the place to “username/appname”. I tried to change it, but ended up reverting back. This only affects local dev, so I don’t really care that much.

At this point I could get the container to run in Release, so I knew Docker was happy. However, I couldn’t debug. I was getting this error:

image

Again a bit of googling led me to enable volume sharing in Docker for Windows (which is disabled by default). I clicked the moby in my task bar, opened the Docker settings and enabled volume sharing on my c drive:

image

Debugging then actually worked – the script starts up a container (based on the image that gets created when you build) and attaches the remote debugger. Pretty sweet now that it’s working!

image

In the above image you can see how I’m navigating to the About page (the url is http://docker:5000) and VS is spewing logging into the console showing the server (in the container) responding to the request.

One more issue – the clean command wasn’t working. I kept getting this error: The handle could not be duplicated during redirection of handle 1. Turns out some over-eager developer had the following line in function Clean() in the DockerTask.ps1 file:

Invoke-Expression "cmd /c $shellCommand `"*>&1`"" | Out-Null

I changed *>&1 to 2>&1 like this:

Invoke-Expression "cmd /c $shellCommand `"2>&1`"" | Out-Null

And now the clean was working great.

So I could get an ASP.NET Core 1.0 app working in VS in a container (with some work). Now for build and release automation in VSTS!

Build and Release in VSTS

In order to execute Docker commands during build or release in VSTS, you need to install the Docker extension from the marketplace. Once you’ve installed it, you’ll get some new service endpoint types as well as a Docker task for builds and releases. You need two connections: one to a Docker registry (like DockerHub) and one for a Docker host. Images are built on the Docker host and published to the registry during a build. Then an image can be pulled from the registry and run on the host during a release. So I created a new private DockerHub repo (using the account that I created on DockerHub to get access to Docker for Windows). This info I used to create the Docker registry endpoint. Next I copied all the keys I created on my Azure Docker host and created a service endpoint for my Docker host. The trick here was the URL – initially I had “http://my-docker-host.cloudapp.net:2376” but that doesn’t work – it has to be “tcp://my-docker-host.cloudapp.net:2376”.

The cool thing about publishing to the repo is that you can have any number of hosts pull the image to run it!

Now I had the endpoints ready for build/deploy. I then added my solution to a Git repo and pushed to VSTS. Here’s the project structure:

image

I then set up a build. In the VSTS Docker article, they suggest just two Docker tasks: the first with a “Build” action and the second with a “Push” action. However, I think this is meant to copy the source to the image and have the image do a dotnet restore – else how would it work? Instead, I wanted the build to do the dotnet restore and publish (and test) and then just have the output bundled into the Docker image (as well as uploaded as a build drop). So I had to include two “run command” tasks and a publish build artifacts task. Here’s what my build definition ended up looking like:

image

The first two commands are fairly easy – the trick is setting the working directory (to the folder containing the project) and the correct framework and runtimes for running inside a Docker container:

image

 

image

You’ll see that I output the published site to $(Build.ArtifactStagingDirectory)/site/app which is important for the later Docker commands.
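For reference, the two command tasks end up invoking something along these lines (the framework moniker and output path are from my setup – treat them as placeholders for whatever is in your project.json):

dotnet restore
dotnet publish -c $(BuildConfiguration) -f netcoreapp1.0 -o $(Build.ArtifactStagingDirectory)/site/app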

I also created two variables (the values of which I got from the DockerTask.ps1 script) for this step:

image

For building the Docker image, I specified the following arguments:

image

I use the two service endpoints I created earlier and set the action to “Build an Image”. I then specify the path to the Dockerfile – initially I browsed to the location in the src folder, but I want the published app so I changed this to the path in the artifact staging directory (otherwise Docker complains that the Dockerfile isn’t within the context). I then specify a repo/tag name for the Image Name, and use the build number for the version. Finally, the context is the folder which contains the “app” folder – the Dockerfile needs to be in this location. This location is used as the root for any Dockerfile COPY commands.
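Put together, the build task is effectively doing the equivalent of the following (the repo name and paths are placeholders for my DockerHub repo and drop layout):

docker build -f $(Build.ArtifactStagingDirectory)/site/Dockerfile -t <your-dockerhub-repo>/testapp:$(Build.BuildNumber) $(Build.ArtifactStagingDirectory)/site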

Next step is publishing the image – I use the same endpoints, change the action to “Push an image” and specify the same repo/tag name:

image

Now after running the build, I can see the image in my DockerHub repo (you can see how the build number and tag match):

image

Now I could turn to the release. I have a single environment release with a single task:

image

I named the ARG for the port in my Dockerfile APP_PORT, so I make sure it’s set to 5000 in the “Environment Variables” section. The example I followed had the HOST_PORT specified as well – I left that in, though I don’t know if it’s necessary. I linked the release to the build, so I can use the $(Build.BuildNumber) to specify which version (tag) of the container this release needs to pull.
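In other words, the release task does the moral equivalent of this on the Docker host (the image name is a placeholder, and the port mapping assumes the container listens on 5000):

docker run -d -e "APP_PORT=5000" -p 5000:5000 <your-dockerhub-repo>/testapp:$(Build.BuildNumber)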

Initially the release failed while attempting to download the drops. I wanted the drops to enable deployment of the build somewhere else (like Azure webapps or IIS), so this release doesn’t need them. I configured this environment to “Skip artifact download”:

image

Lo and behold, the release worked after that! Unfortunately, I couldn’t browse to the site (connection refused). After a few moments of thinking, I realized that the Azure VM probably didn’t allow traffic on port 5000 – so I headed over to the portal and added a new endpoint (blegh – ARM network security groups are so much better):

image

After that, I could browse to the ASP.NET Core 1.0 App that I developed in VS, debugged in Docker for Windows, source controlled in Git in VSTS, built and pushed in VSTS build and released in VSTS Release Management. Pretty sweet!

image

Conclusion

The Docker workflow is compelling, and having it work (nearly) out the box for .NET Core is great. I think teams should consider investing into this workflow as soon as possible. I’ve said it before, and I’ll say it again – containers are the way of the future! Don’t get left behind – start learning Docker today and skill up for the next DevOps wave – especially if you’re embarking on .NET Core dev!

Happy Dockering!

Running the New DotNet Core VSTS Agent in a Docker Container


This week I finally got around to updating my VSTS extension (which bundles x-plat VersionAssembly and ReplaceTokens tasks) to use the new vsts-task-lib, which is used by the new DotNet Core vsts-agent. One of the bonuses of the new agent is that it can run in a DotNet Core Docker container! Since I am running Docker for Windows, I can now (relatively) easily spin up a test agent in a container to run tests – a precursor to running the agent in a container as the de-facto method of running agents!

All you need to do this is a Dockerfile with a couple of commands that do the following:

  1. Install Git
  2. Create a non-root user and switch to it (since the agent won’t run as root)
  3. Copy the agent tar.gz file and extract it
  4. Configure the agent to connect it to VSTS

Pretty simple.

The Dockerfile

Let’s take a look at the Dockerfile (which you can find here in Github) for an agent container:

FROM microsoft/dotnet:1.0.0-core

# defaults - override them using --build-arg
ARG AGENT_URL=https://github.com/Microsoft/vsts-agent/releases/download/v2.104.0/vsts-agent-ubuntu.14.04-x64-2.104.0.tar.gz
ARG AGENT_NAME=docker
ARG AGENT_POOL=default

# you must supply these to the build command using --build-arg
ARG VSTS_ACC
ARG PAT

# install git
#RUN apt-get update && apt-get -y install software-properties-common && apt-add-repository ppa:git-core/ppa
RUN apt-get update && apt-get -y install git

# create a user
RUN useradd -ms /bin/bash agent
USER agent
WORKDIR /home/agent

# download the agent tarball
#RUN curl -Lo agent.tar.gz $AGENT_URL && tar xvf agent.tar.gz && rm agent.tar.gz
COPY *.tar.gz .
RUN tar xzf *.tar.gz && rm -f *.tar.gz
RUN bin/Agent.Listener configure --url https://$VSTS_ACC.visualstudio.com --agent $AGENT_NAME --pool $AGENT_POOL --acceptteeeula --auth PAT --token $PAT --unattended

ENTRYPOINT ./run.sh

Notes:

  • Line 1: We start with the DotNet Core 1.0.0 image
  • Lines 4-6: We create some arguments and set defaults
  • Lines 9-10: We create some args that don’t have defaults
  • Line 14: Install Git
    • This installs Git 2.1.4 from the official jessie packages. We should be installing Git 2.9, but the only way to install it from a package source is to add a package source (line 13, which I commented out). Unfortunately apt-add-repository is inside the package software-properties-common, which introduces a lot of bloat to the container, so I decided against it. The VSTS agent will work with Git 2.1.4 (at least at present) so I was happy to leave it at that.
  • Line 17: create a user called agent
  • Line 18: switch to the agent user
  • Line 19: switch to the agent home directory
  • Line 23: Use this to download the tarball as part of building the container. Do it if you have enough bandwidth. I ended up downloading the tarball and putting it in the same directory as the Dockerfile and using Line 24 to copy it to the container
  • Line 24: Extract the tarball and then delete it
  • Line 25: Run the command to configure the agent in an unattended mode. This uses the args supplied through the file or from the docker build command to correctly configure the agent.
  • Line 27: Set an entrypoint – this is the command that will be executed when you run the container.

Pretty straightforward. To build the image, just cd to the Dockerfile folder and download the agent tarball (from here) if you’re going to use Line 23 (otherwise if you use Line 22, just make sure Line 4 has the latest release URL for Ubuntu 14.04 or use the AGENT_URL arg to supply it when building the image). Then run the following command:

docker build . --build-arg VSTS_ACC=myVSTSAcc --build-arg PAT=abd64… --build-arg AGENT_POOL=docker -t colin/agent
  • Mandatory: VSTS_ACC (which is the first part of your VSTS account URL – so for https://myVSTSAcc.visualstudio.com the VSTS_ACC is myVSTSAcc)
  • Mandatory: PAT – your Personal Auth Token
  • Optional: AGENT_POOL – the name of the agent pool you want the agent to register with
  • Optional: AGENT_NAME – the name of the agent
  • Optional: AGENT_URL – the URL to the Ubuntu 14.04 agent (if using Line 22)
  • The –t is the tag argument. I use colin/agent.

This creates a new image that is registered with your VSTS account!

Now that you have an image, you can simply run it whenever you need your agent:

> docker run -it colin/agent:latest

Scanning for tool capabilities.
Connecting to the server.
2016-07-28 17:56:57Z: Listening for Jobs

After the docker run command, you should see the agent listening for jobs.

Gotcha – Self-Updating Agent

One issue I did run into is that I had downloaded agent 2.104.0. When the first build runs, the agent checks to see if there’s a new version available. In my case, 2.104.1 was available, so the agent updated itself. It also restarts – however, if it’s running in a container, when the agent stops, the container stops. The build fails with this error message:

The job has been abandoned because agent docker did not renew the lock. Ensure agent is running, not sleeping, and has not lost communication with the service.

Running the container again starts it with the older agent again, so you get into a loop. Here’s how to break the loop:

  1. Run docker run -it --entrypoint=/bin/bash colin/agent:latest
    1. This starts the container but just creates a prompt instead of starting the agent
  2. In the container, run “./run.sh”. This will start the agent.
  3. Start a build and wait for the agent to update. Check the version in the capabilities pane in the Agent Queue page in VSTS. The first build will fail with the above “renew lock” error.
  4. Run a second build to make sure the agent is working correctly.
  5. Now exit the container (by pressing Ctrl-C and then typing exit).
  6. Commit the container to a new image by running docker commit --change='ENTRYPOINT ./run.sh' <containerId> colin/agent:latest (you can get the containerId by running docker ps)
  7. Now when you run the container using docker run -it colin/agent:latest your agent will start and will be the latest version. From there on, you’re golden!

Conclusion

Overall, I was happy with how (relatively) easy it was to get an agent running in a container. I haven’t yet tested actually compiling a DotNet Core app – that’s my next exercise.

Happy Dockering!

Parallel Testing in a Selenium Grid with VSTS


There are several different types of test – unit tests, functional tests, load tests and so on. Generally, unit tests are the easiest to implement and have a high return on investment. Conversely, UI automation tests tend to be incredibly fragile, hard to maintain and don’t always deliver a huge amount of value. However, if you carefully design a good UI automation framework (especially for web testing) you can get some good mileage.

Even if you get a good framework going, you’re going to want to find ways of executing the tests in parallel, since they can take a while. Enter Selenium Grid. Selenium is a “web automation framework”. Under the hood, Selenium actually runs as a server which accepts ReST commands – those commands are wrapped by the Selenium client, so you usually don’t see the HTTP calls yourself. However, since the server is capable of driving a browser via HTTP, you can run tests remotely – that is, have tests run on one machine that execute over the wire against a browser on another machine. This allows you to scale your test infrastructure (by adding more machines). This is exactly what Selenium Grid does for you.

My intention with this post isn’t to show how you can create Selenium tests. Rather, I’ll show you some ins and outs of running Selenium tests in parallel in a Grid in a VSTS build/release pipeline.

Components for Testing in a Grid

There are a few moving parts we’ll need to keep track of in order for this to work:

  1. The target site
  2. The tests
  3. The Selenium Hub (the master that coordinates the Grid)
  4. The Selenium Nodes (the machines that are going to be executing the tests)
  5. Selenium drivers
  6. VSTS Build
  7. VSTS Release

The cool thing about a Selenium grid is that, from a test perspective, you only need to know about the Selenium Grid hub. The nodes register with the hub and await commands to execute (i.e. run tests). The tests themselves just target the hub: “Hey Hub, I’ve got this test I want you to run for me on a browser with these capabilities…”. The hub then finds a node that meets the required capabilities and executes the tests remotely (via HTTP) on the node. Pretty sweet. This means you can scale the grid out without having to modify the tests at all.

Again lifting Selenium’s skirts we’ll see that a Selenium Node receives an instruction (like “use Chrome and navigate to google.com”). The node uses a driver for each browser (Firefox doesn’t have a driver, since Selenium knows how to drive it “natively”) to drive the browser. So when you configure a grid, you need to configure the Selenium drivers for each browser you want to test with on the grid (and by configure I mean copy to a folder).

Setting Up a Selenium Grid

In experimenting with the grid, I decided to set up a two-machine Grid in Azure. Selenium Server (used to run the Hub and the Nodes) is a java application, so it’ll run wherever you can run Java. So I spun up a Resource Group with two VMs (Windows 2012 R2) and installed Java (and added the bin folder to the Path), Chrome and Firefox. I then downloaded the Selenium Server jar file, the IE driver and the Chrome driver (you can see the instructions on installing here). I put the IE and Chrome drivers into c:\selenium\drivers on both machines.

I wanted one machine to be the Hub and run Chrome/Firefox tests, and have the other machine run IE/Firefox tests (yes, Nodes can run happily on the same machine or even on the same machine as the Hub). There are a myriad of options you can specify when you start a Hub or Node, so I scripted a general config that I thought would work as a default case.

To start the Hub, I created a one-line bat file in c:\selenium\server folder (where I put the server jar file):

java -jar selenium-server-standalone-2.53.1.jar -role hub

This command starts up the Hub using port 4444 (the default) on the machine. Don’t forget to open the firewall for this port!

Configuring the nodes took a little longer to work out. The documentation is a bit all over the place (and ambiguous) so I eventually settled on putting some config in a JSON file and some I pass in to the startup command. Here’s the config JSON I have for the IE node:

{
  "capabilities":
  [
    {
      "browserName": "firefox",
      "platform": "WINDOWS",
      "maxInstances": 1
    },
    {
      "browserName": "internet explorer",
      "platform": "WINDOWS",
      "maxInstances": 1
    }
  ],
  "configuration":
  {
    "nodeTimeout": 120,
    "nodePolling": 2000,
    "timeout": 30000
  }
}

This is only the bare minimum of config that you can specify – there are tons of other options that I didn’t need to bother with. All I wanted was for the Node to be able to run Firefox and IE tests. In a similar vein I specify the config for the Chrome node:

{
  "capabilities": [
    {
      "browserName": "firefox",
      "platform": "WINDOWS",
      "maxInstances": 1
    },
    {
      "browserName": "chrome",
      "platform": "WINDOWS"
    }
  ],
  "configuration":
  {
    "nodeTimeout": 120,
    "nodePolling": 2000,
    "timeout": 30000
  }
}

You can see this is almost identical to the config of the IE node, except for the second browser type.

I saved these files as ieNode.json and chromeNode.json in c:\selenium\server\configs respectively.

Then I created a simple PowerShell script that would let me start a node:

param(
    $port,
    [string]$hubHost,
    $hubPort = 4444,

    [ValidateSet('ie', 'chrome')]
    [string]$browser,

    [string]$driverPath = "configs/drivers"
)

$hubUrl = "http://{0}:{1}/grid/register" -f $hubHost, $hubPort
$configFile = "./configs/{0}Node.json" -f $browser

java -jar selenium-server-standalone-2.53.1.jar -role node -port $port -nodeConfig $configFile -hub $hubUrl -D"webdriver.chrome.driver=$driverPath/chromedriver.exe" -D"webdriver.ie.driver=$driverPath/IEDriverServer.exe"

So now I can run the following command on the Hub machine to start a node:

.\startNode.ps1 -port 5555 -hubHost localhost -browser chrome -driverPath c:\selenium\drivers

This will start a node that can run Chrome/Firefox tests using drivers in the c:\selenium\drivers path, running on port 5555. On the other machine, I copied the same files and just ran this command:

.\startNode.ps1 -port 5556 -hubHost 10.4.0.4 -browser ie -driverPath c:\selenium\drivers

This time the node isn’t on the same machine as the Hub, so I used the Azure vNet internal IP address of the Hub – I also specified I want IE/Firefox tests to run on this node.

Of course I made sure that all these files are in source control!

Again I had to make sure that the ports I specify were allowed in the Firewall. I just created a single Firewall rule to allow TCP traffic on ports 4444, 5550-5559 on both machines.

image
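If you prefer scripting the firewall rule over clicking through the GUI, something like this on each grid machine should do the trick (the rule name is arbitrary):

# allow inbound traffic for the Selenium hub (4444) and node ports (5550-5559)
New-NetFirewallRule -DisplayName "Selenium Grid" -Direction Inbound -Action Allow -Protocol TCP -LocalPort 4444,"5550-5559"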

I also opened those ports in the Azure network security group that both machines’ network cards are connected to:

image

Now I can browse to the Selenium console of my Grid:

image

Now my hub is ready to run tests!

Writing Tests for Grid Execution

The Selenium Grid is capable of running tests in parallel, spreading tests across the grid. Spoiler alert: it doesn’t run tests in parallel. I can hear you now thinking, “What?!? You just said it can run tests in parallel, and now you say it can’t!”. Well, the grid can spread as many tests as you throw at it – but you have to parallelize the tests yourself!

It turns out that you can do this in Visual Studio 2015 and VSTS – but it’s not pretty. If you open up the Test Explorer Window, you’ll see an option to “Run Tests In Parallel” in the toolbar (next to the Group By button):

image

Again I hear you thinking: “Just flip the switch! Easy!” Whoa, slow down, Dear Reader – it’s not that easy. You have to consider the unit of parallelization. In other words – what does it mean to “Run Tests In Parallel”? Well, Visual Studio runs different assemblies in parallel. Which means that you have to have at least two test projects (assemblies) in order to get any benefit.

In my case I had two tests that I wanted to run against three browsers – IE, Chrome and Firefox. Of course if you have several hundred tests, you probably have them in different assemblies already – hopefully grouped by something meaningful. In my case I chose to group the tests by browser. Here’s what I ended up doing:

  1. Create a base (abstract) class that contains the Selenium test methods (the actual test code)
  2. Create three additional projects – one for each browser type – that contains a class that derives from the base class
  3. Run in parallel

It’s a pity really – since the Selenium libraries abstract the test away from the actual browser. That means you can run the same test against any browser that you have a driver for! However, since we’re going to run tests in a Hub, we need to use a special driver called a RemoteWebDriver. This driver is going to connect the test to the hub using “capabilities” that we define (like what browser to run in).

Let’s consider an example test. Here’s a test I created to check the Search functionality of my website:

protected void SearchTest()
{
    driver.Navigate().GoToUrl(baseUrl + "/");
    driver.FindElement(By.Id("search-box")).Clear();
    driver.FindElement(By.Id("search-box")).SendKeys("tire");

    driver.FindElement(By.Id("search-link")).Click();

    // check that there are 3 results
    Assert.AreEqual(3, driver.FindElements(By.ClassName("list-item-part")).Count);
}

This is a pretty simple test – and as I mentioned before, this post isn’t about the Selenium tests themselves as much as it is about running the tests in a build/release pipeline – so excuse the simple nature of the test code. However, I have to show you some code in order to show you how I got the tests running successfully in the grid.

You can see how the code assumes that there is a “driver” object and that it is instantiated. There’s also a “baseUrl” object. Both of these are essential to running tests in the grid: the driver is an instantiated RemoteWebDriver object that connects us to the Hub, while baseUrl is the base URL of the site we’re testing.

The base class is going to instantiate a RemoteWebDriver for each test (in the test initializer). Each child (test) class is going to specify what capabilities the driver should be instantiated with. The driver constructor needs to know the URL of the grid hub as well as the capabilities required for the test. Here’s the constructor and test initializer in the base class:

public abstract class PartsTests
{
    private const string defaultBaseUrl = "http://localhost:5001";
    private const string defaultGridUrl = "http://10.0.75.1:4444/wd/hub";

    protected string baseUrl;
    protected string gridUrl;

    protected IWebDriver driver;
    private StringBuilder verificationErrors;
    protected ICapabilities capabilities;
    public TestContext TestContext { get; set; }

    public PartsTests(ICapabilities capabilities)
    {
        this.capabilities = capabilities;
    }

    [TestInitialize]
    public void SetupTest()
    {
        if (TestContext.Properties["baseUrl"] != null) //Set URL from a build
        {
            baseUrl = TestContext.Properties["baseUrl"].ToString();
        }
        else
        {
            baseUrl = defaultBaseUrl;
        }
        Trace.WriteLine($"BaseUrl: {baseUrl}");

        if (TestContext.Properties["gridUrl"] != null) //Set URL from a build
        {
            gridUrl = TestContext.Properties["gridUrl"].ToString();
        }
        else
        {
            gridUrl = defaultGridUrl;
        }
        Trace.WriteLine($"GridUrl: {gridUrl}");

        driver = new RemoteWebDriver(new Uri(gridUrl), capabilities);
        verificationErrors = new StringBuilder();
    }

    [TestCleanup]
    public void Teardown()
    {
        try
        {
            driver.Quit();
        }
        catch (Exception)
        {
            // Ignore errors if unable to close the browser
        }
        Assert.AreEqual("", verificationErrors.ToString());
    }

    ...
}

The constructor takes an ICapabilities object which will allow us to specify how we want the test run (or at least which browser to run against). We hold on to these capabilities. The SetupTest() method then reads the “gridUrl” and the “baseUrl” from the TestContext properties (defaulting values if none are present). Finally it creates a new RemoteWebDriver using the gridUrl and capabilities. The Teardown() method calls the driver Quit() method, which closes the browser (ignoring errors) and checks that there are no verification errors. Pretty standard stuff.

So how do we pass in the gridUrl and baseUrl? To do that we need a runsettings file – this sets the value of the parameters in the TestContext object.

I added a new XML file to the base project called “selenium.runsettings” with the following contents:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <TestRunParameters>
    <Parameter name="baseUrl" value="http://localhost:5001" />
    <Parameter name="gridUrl" value="http://localhost:4444/wd/hub" />
  </TestRunParameters>
</RunSettings>

Again I’m using default values for the values of the parameters – this is how I debug locally. Note the “/wd/hub” on the end of the grid hub URL.

Now I can set the runsettings file in the Test menu:

image

 

So what about the child test classes? Here’s what I have for the Firefox tests:

[TestClass]
public class FFTests : PartsTests
{
    public FFTests()
        : base(DesiredCapabilities.Firefox())
    {
    }

    [TestMethod]
    [TestCategory("Firefox")]
    public void Firefox_AddToCartTest()
    {
        AddToCartTest();
    }

    [TestMethod]
    [TestCategory("Firefox")]
    public void Firefox_SearchTest()
    {
        SearchTest();
    }
}

I’ve prepended the test name with Firefox_ (you’ll see why when we run the tests in the release pipeline). I’ve also added the [TestClass] and [TestMethod] attributes as well as [TestCategory]. This is using the MSTest framework, but the same will work with nUnit or xUnit too. Unfortunately the non-MSTest frameworks don’t have the TestContext, so you’re going to have to figure out another method of providing the baseUrl and gridUrl to the test. The constructor is using a vanilla Firefox capability for this test – you can instantiate more complex capabilities if you need them here.

Just for comparison, here’s my code for the ChromeTests file:

[TestClass]
public class ChromeTests : PartsTests
{
    public ChromeTests()
        : base(DesiredCapabilities.Chrome())
    {
    }

    [TestMethod]
    [TestCategory("Chrome")]
    public void Chrome_AddToCartTest()
    {
        AddToCartTest();
    }

    [TestMethod]
    [TestCategory("Chrome")]
    public void Chrome_SearchTest()
    {
        SearchTest();
    }
}

You can see it’s almost identical except for the class name prefix, the [TestCategory] and the capabilities in the constructor.

Here’s my project layout:

image

At this point I can run tests (in parallel) against my “local” grid (I started a hub and two nodes locally to test). Next we need to put all of this into a build/release pipeline.

Creating a Build

You could run the tests during the build, but I wanted my UI tests to be run against a test site that I deploy to, so I felt it more appropriate to run the tests in the release. Before we get to that, we have to build the application (and test code) so that it’s available in the release.

I committed all the code to source control in VSTS and created a build definition. Here’s what the tasks look like:

image

The first three steps are for building the application and the solution – I won’t bore you with the details. Let’s look at the next five steps though:

The first three “Copy Files” steps copy the binaries for the three test projects I want (one each for IE, Chrome and Firefox):

image

In each case I’m copying the compiled assemblies of the test project (from, say, test/PartsUnlimitedSelenium.Chrome\bin\$(BuildConfiguration) to $(Build.ArtifactStagingDirectory)\SeleniumTests).

The fourth copy task copies the runsettings file:

image

The final task publishes the $(Build.ArtifactStagingDirectory) to the server:

image

After running the build, I have the following drop:

image

The “site” folder contains the webdeploy package for my site – but the important bit here is that all the test assemblies (and the runsettings file) are in the SeleniumTests folder.

Running Tests in the Release

Now that we have the app code (the site) and the test assemblies in a drop location, we’re ready to define a Release. In the release for Dev I have the following steps:

image

I have all the steps that I need to deploy the application (in this case I’m deploying to Azure). Again, that’s not the focus of this post. The Test Assemblies task is the important step to look at here:

image

It turns out to be pretty straightforward. I just make sure that “Test Assembly” includes all the assemblies I want to execute – remember you need at least two in order for “Run In Parallel” to have any effect. For Filter Criteria I’ve excluded IE tests – IE tests seem to fail for all sorts of arbitrary reasons that I couldn’t work out – you can leave this empty (or put in a positive expression) if you want to only run certain tests. I specify the path to the runsettings file, and then in “Override TestRun Parameters” I specify the gridUrl and baseUrl that I want to test in this particular environment. I’ve used variables that I define on the environment so that I can clone this for other environments if I need to.

Now when I release, I see that the tests run as part of the release. Clicking on the Tests tab I see the test results. I changed the Outcome filter to show Passed tests and configured the columns to show the “Date started” and “Date completed”. Sure enough I can see that the tests are running in parallel:

image

Now you can see why I wanted to add the prefix to the test names – this lets me see exactly which browsers are behaving and which aren’t (ahem, IE).

Final Thoughts

Running Selenium Tests in a Grid in VSTS is possible – there are a few hacks required though. You need to create multiple assemblies in order to take advantage of the grid scalability, and this can lead to lots of duplicated and error-prone code (for example, when I initially created the Firefox tests, I copied the Chrome class and forgot to change the prefix and [TestCategory], which led to interesting results). There are probably other ways of dividing your tests into multiple assemblies, and then you could pass the browser in as a Test Parameter and have multiple runs – but then the runs wouldn’t be simultaneous across browsers. A final gotcha is that the runsettings only work for MSTest – if you’re using another framework, chances are you’ll end up creating a json file that you read when the tests start.

You can see that there are challenges whichever way you slice it up. Hopefully the work the test team is doing in VSTS/TFS will improve this story at some stage.

For now, happy Grid testing!

DacPac Change Report Task for VSTS Builds


Most development requires working against some kind of database. Some teams choose to use Object Relational Mappers (ORMs) like Entity Framework. I think that should be the preferred method of dealing with databases (especially code-first), but there are times when you just have to work with a database schema.

Recently I had to demo ReadyRoll in VSTS. I have to be honest that I don’t like the paradigm of ReadyRoll – migration-based history seems like a mess compared to model-based history (which is the approach that SSDT takes). That’s a subject for another post (some day) or a discussion over beers. However, there was one thing that I really liked – the ability to preview database changes in a build. The ReadyRoll extension on the VSTS marketplace allows you to do just that.

So I stole the idea and made a task that allows you to see SSDT schema changes from build to build.

Using the Task

Let’s consider the scenario: you have an SSDT project in source control and you’re building the dacpac in a Team Build. What the task does is allow you to see what’s changed from one build to the next. Here’s what you need to do:

  1. Install Colin’s ALM Corner Build Tasks Extension from the VSTS Marketplace
  2. Edit the build definition and go to Options. Make sure “Allow Scripts to Access OAuth Token” is checked, since the task requires this. (If you forget this, you’ll see 403 errors in the task log).
  3. Make sure that the dacpac you want to compare is being published to a build drop.
  4. Add a “DacPac Schema Compare” task

That’s it! Here’s what the task looks like:

image

Enter the following fields:

  1. The name of the drop that your dacpac file is going to be published to. The task will look up the last successful build and download the drop in order to get the last dacpac as the source to compare.
  2. The name of the dacpac (without the extension). This is typically the name of the SSDT project you’re building.
  3. The path to the compiled dacpac for this build – this is the target dacpac path and is typically the bin folder of the SSDT project.

Now run your build. Once the build completes, you’ll see a couple new sections in the Build Summary:

image

The first section shows the schema changes, while the second shows a SQL-CMD file so you can see what would be generated by SqlPackage.exe.

Now you can preview schema changes of your SSDT projects between builds! As usual, let me know here, on Twitter or on Github if you have issues with the task.

Happy building!

Load Balancing DotNet Core Docker Containers with nginx


Yes, I’ve been playing with Docker again – no big surprise there. This time I decided to take a look at scaling an application that’s in a Docker container. Scaling and load balancing are concepts you have to get your head around in a microservices architecture!

Another consideration when load balancing is of course shared state – things like session and cache data. Redis is a popular mechanism for that (and since we’re talking Docker I should mention that there’s a Docker image for Redis) – but for this POC I decided to keep the code very simple so that I could see what happens on the networking layer. So I created a very simple .NET Core ASP.NET Web API project and added a single MVC page that could show me the name of the host machine. I then looked at a couple of load balancing options and started hacking until I could successfully (and easily) load balance three Docker container instances of the service.

The Code

The code is stupid simple – for this POC I’m interested in configuring the load balancer more than anything, so that’s ok. Here’s the controller that we’ll be hitting:

namespace NginXService.Controllers
{
    public class HomeController : Controller
    {
        // GET: /<controller>/
        public IActionResult Index()
        {
            // platform agnostic call
            ViewData["Hostname"] = Environment.GetEnvironmentVariable("COMPUTERNAME") ??
                Environment.GetEnvironmentVariable("HOSTNAME");

            return View();
        }
    }
}

Getting the hostname is a bit tricky for a cross-platform app, since *nix systems and Windows use different environment variables to store the hostname. Hence the null-coalescing (??) expression.

Here’s the View:

@{
    <h1>Hello World!</h1>
    <br/>

    <h3>Info</h3>
    <p><b>HostName:</b> @ViewData["Hostname"]</p>
    <p><b>Time:</b> @string.Format("{0:yyyy-MM-dd HH:mm:ss}", DateTime.Now)</p>
}

I had to change the Startup file to add the MVC route. I just changed the app.UseMvc() line in the Configure() method to this:

app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});

Finally, here’s the Dockerfile for the container that will be hosting the site:

FROM microsoft/dotnet:1.0.0-core

# Set the Working Directory
WORKDIR /app

# Configure the listening port
ARG APP_PORT=5000
ENV ASPNETCORE_URLS http://*:$APP_PORT
EXPOSE $APP_PORT

# Copy the app
COPY . /app

# Start the app
ENTRYPOINT dotnet NginXService.dll

Pretty simple so far.
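To turn that Dockerfile into an image, I publish the app first and then build with the published output as the context (a sketch – the output folder and image tag are my own choices, and the tag needs to match whatever the docker-compose file references later; the Dockerfile’s COPY . /app expects the published site to be the build context):

# publish the site – this produces NginXService.dll and its dependencies
dotnet publish -c Release -o .\publish

# drop the Dockerfile alongside the published output and build the image
copy .\Dockerfile .\publish\
docker build --build-arg APP_PORT=5000 -t colin/nginxservice:latest .\publish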

Proxy Wars: HAProxy vs nginx

After doing some research it seemed to me that the serious contenders for load balancing Docker containers boiled down to HAProxy and nginx (with corresponding Docker images here and here). In the end I decided to go with nginx for two reasons: firstly, nginx can be used as a reverse proxy, but it can also serve static content, while HAProxy is just a proxy. Secondly, the nginx website is a lot cooler – seemed to me that nginx was more modern than HAProxy (#justsaying). There’s probably as much religious debate about which is better as there is about git rebase vs git merge. Anyway, I picked nginx.

Configuring nginx

I quickly pulled the image for nginx (docker pull nginx) and then set about figuring out how to configure it to load balance three other containers. I used a Docker volume to keep the config outside the container – that way I could tweak the config without having to rebuild the image. Also, since I was hoping to spin up numerous containers, I turned to docker-compose. Let’s first look at the nginx configuration:

worker_processes 1;

events { worker_connections 1024; }

http {

    sendfile on;

    # List of application servers
    upstream app_servers {

        server app1:5000;
        server app2:5000;
        server app3:5000;

    }

    # Configuration for the server
    server {

        # Running port
        listen [::]:5100;
        listen 5100;

        # Proxying the connections
        location / {

            proxy_pass         http://app_servers;
            proxy_redirect     off;
            proxy_set_header   Host $host;
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Host $server_name;

        }
    }
}

This is really a bare-bones config for nginx. You can do a lot in the config. This config does round-robin load balancing, but you can also configure least-connected balancing (least_conn), provide weighting for each server and more. For the POC, there are a couple of important bits:

  • Lines 10-16: this is the list of servers that nginx is going to be load balancing. I’ve used aliases (app1, app2 and app3, all on port 5000) which we’ll configure through docker-compose shortly.
  • Lines 22-23: the nginx server itself will listen on port 5100.
  • Line 26, 28: we’re passing all traffic on to the configured servers.

I’ve saved this config to a file called nginx.conf and put it into the same folder as the Dockerfile.

Configuring the Cluster

To configure the whole cluster (nginx plus three instances of the app container) I use the following docker-compose yml file:

version: '2'

services:
  app1:
    image: colin/nginxservice:latest
  app2:
    image: colin/nginxservice:latest
  app3:
    image: colin/nginxservice:latest

  nginx:
    image: nginx
    links:
     - app1:app1
     - app2:app2
     - app3:app3
    ports:
     - "5100:5100"
    volumes:
     - ./nginx.conf:/etc/nginx/nginx.conf

That’s 20 lines of code to configure a cluster – pretty sweet! Let’s take a quick look at the file:

  • Lines 4-9: Spin up three containers using the image containing the app (that I built separately, since I couldn’t figure out how to build and use the same image multiple times in a docker-compose file).
  • Line 12: Spin up a container based on the stock nginx image.
  • Lines 13-16: Here’s the interesting bit: we tell docker to create links between the nginx container and the other containers, aliasing them with the same names. Docker creates internal networking (so it’s not exposed publicly) between the containers. This is very cool – the nginx container can reference app1, app2 and app3 (as we did in the nginx config file) and docker takes care of figuring out the IP addresses on the internal network.
  • Line 18: map port 5100 on the nginx container to an exposed port 5100 on the host (remember we configured nginx to listen on the internal 5100 port).
  • Line 20: map the nginx.conf file on the host to /etc/nginx/nginx.conf within the container.

Now we can simply run docker-compose up to run the cluster!

image

You can see how docker-compose pulls the logs into a single stream and even color-codes them!

The one thing I couldn’t figure out was how to do a docker build on an image and use that image in another container within the docker-compose file. I could just have three build directives, but that felt a bit strange to me since I wanted to supply build args for the image. So I ended up doing the docker build to create the app image and then just using the image in the docker-compose file.
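So the workflow ends up being: build the app image once (as above), then let compose do the rest – the image tag is the one referenced in the docker-compose file:

# bring up nginx plus app1, app2 and app3 (add -d to run detached)
docker-compose up

# tear the whole cluster down again when you’re done
docker-compose down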

Let’s hit the index page and then refresh a couple times:

image

image

image

You can see in the site (via the hostname) as well as in the logs how requests are being round-robined across the containers:

image

Conclusion

Load balancing containers with nginx is fairly easy to accomplish. Of course the app servers don’t need to be running .NET apps – nginx doesn’t really care, since it’s just directing traffic. However, I was pleased that I could get this working so painlessly.

Happy load balancing!

Using Release Management to Manage Ad-Hoc Deployments


Release Management (RM) is awesome – mostly because it works off the amazing cross platform build engine. Also, now that pricing is announced, we know that it won’t cost an arm and a leg!

When I work with customers to adopt RM, I see two kinds of deployments: repeatable and ad-hoc. RM does a great job at repeatable automation – that is, it is great at doing the same thing over and over. But what about deployments that are different in some way every time? Teams love the traceability that RM provides – not just the automation logs, but also the approvals. It would be great if you could track ad-hoc releases using RM.

The Problem

The problem is that RM doesn’t have a great way to handle deployments that are slightly different every time. Let’s take a very typical example: ad-hoc SQL scripts. Imagine that you routinely perform data manipulation on a production database using SQL scripts. How do you audit what script was run and by whom? “We can use RM!” I hear you cry. Yes you can – but there are some challenges.

Ad-hoc means (in this context) different every time. That means that the script you’re running is bound to change every time you run the release. Also, depending on how dynamic you want to go, even the target servers could change – sometimes you’re executing against server A, sometimes against server B, sometimes both. “Just make the script name or server name a variable that you can change at queue time,” I hear you say. Unfortunately, unlike builds, you can’t specify parameter values at queue time. You could create a release in draft and then edit the variables for that run of the release, but this isn’t a great experience since you’re bound to forget things – and you’ll have to do this every time you start a release.

A Reasonable Solution

I was at a customer that was trying to convert to RM from a home-grown deployment tool. Besides “repeatable” deployments, their tool was handling several hundred ad-hoc deployments a month, so they had to decide whether to keep the ad-hoc deployments in the home-grown tool or migrate them to RM. So I mailed the Champs List – a mailing list direct to other MVPs and the VSTS Product Group at Microsoft (being an MVP has to have some benefits, right?) – and asked them what they do for ad-hoc deployments. It turns out that they use ad-hoc deployments to turn feature switches on and off, and they run their ad-hoc deployments with RM – and while I didn’t get a lot of detail, I did get some ideas.

I see three primary challenges for ad-hoc release definitions:

  1. What to execute
     • Where does the Release get the script to execute? You could create a network share and put a script in there called “adhoc.sql” and get the release to execute that script every time it runs. Tracking changes is then a challenge – but we’re developers and already know how to track changes, right? Yes: source control. So source control the script – that way every time the release runs, it gets the latest version of the script and runs that. Now you can track what executed and who changed the script. And you can even perform code-review prior to starting the release – bonus!
  2. What to execute
     • Is there an echo here? Well no – it’s just that if the release is executing the same script every time, there’s a danger that it could well – execute the same script. That means you have to either write your script defensively – that is, in such a manner that it is idempotent (has the same result no matter how many times you run it) or you have to keep a record of whether or not the script has been run before, say using a table in a DB or an Azure table or something. I favor idempotent scripts, since I think it’s a good habit to be in when you’re automating stuff anyway – so for SQL that means doing “if this record exists, skip the following steps” kind of logic or using MERGE INTO etc.
  3. Where to execute
     • Are you executing against the same server every time or do the targets need to be more dynamic? There are a couple of solutions here – you could have a text doc that has a list of servers, and the release definition reads in the file and then loops, executing the script against the target servers one by one. This is dynamic, but dangerous – what if you put in a server that you don’t mean to? Or you could create an environment per server (if you have a small set of servers this is ok) and then set each environment to manual deployment (i.e. no trigger). Then when you’re ready to execute, you create the release, which just sits there until you explicitly tell it which environment (server) to deploy to.

    Recommended Steps

    While it’s not trivial to set up an ad-hoc deployment pipeline in RM, I think it’s feasible. Here’s what I’m going to start recommending to my customers:

    1. Create a Git repository with a well-defined script (or root script)
    2. Create a Release that has a single artifact link – to the Git repo you just set up, on the master branch
    3. Create an environment per target server. In that environment, you’re conceptually just executing the root script (this could be more complicated depending on what you do for ad-hoc deployments). All the server credentials etc. are configured here so you don’t have to do them every time. You can also configure approvals if they’re required for ad-hoc scripts. Here’s an example where a release definition is targeting (potentially) ServerA, ServerB and/or ServerC. This is only necessary if you have a fixed set of target servers and you don’t always know which server you’re going to target:
      1. image
      2. Here I’ve got an example of copying a file (the root script, which is in a Git artifact link) to the target server and then executing the script using the WinRM SQL task. These tasks are cloned to each server – of course the server name (and possibly credentials) are different for each environment – but you only have to set this up once.
    4. Configure each environment to have a manual trigger (under deployment conditions). This allows you to select which server (or environment) you’re deploying to for each instance of the release:
      1. image
    5. Enable a branch policy on the master branch so that you have to create a Pull Request (PR) to get changes into master. This forces developers to branch the repo, modify the script and then commit and create a PR. At that point you can do code review on the changes before queuing up the release.
    6. When you’ve completed code review on the PR, you then create a release. Since all the environments are set to manual trigger, you now can go and manually select which environment you want to deploy to:
      1. image
      2. Here you can see how the status on each environment is “Not Deployed”. You can now use the deploy button to manually select a target. You can of course repeat this if you’re targeting multiple servers for this release.

    Conclusion

    With a little effort, you can set up an ad-hoc release pipeline. This gives you the advantages of automation (since the steps and credentials etc. are already set up) as well as auditability and accountability (since you can track changes to the scripts as well as any approvals). How do you, dear reader, handle ad-hoc deployments? Sound off in the comments!

    Happy releasing!


    End to End Walkthrough: Deploying Web Applications Using Team Build and Release Management


    I’ve posted previously about deploying web applications using Team Build and Release Management (see Config Per Environment vs Tokenization in Release Management and WebDeploy, Configs and Web Release Management). However, reviewing those posts recently at customers I’ve been working with, I’ve realized that these posts are a little outdated: you need pieces of both to form a full picture, and the scripts that I wrote for those posts are now encapsulated in Tasks in my marketplace extension. So in this post I’m going to do a complete end-to-end walkthrough of deploying web applications using Team Build and Release Management. I’ll be including handling configs – arguably the hardest part of the whole process.

    Overview

    Let’s start with an overview of the process. I like to think of three distinct “areas” – source control, build and release. Conceptually you have tokenized config in source control (more on how to do this coming up). Then you have a build that takes in the source control and produces a WebDeploy package – a single tokenized package that is potentially deployable to multiple environments. The build should not have to know about anything environment specific (such as the correct connection string for Staging or Production, for example). Then release takes in the package and (conceptually) performs two steps: inject environment values for the tokens, and then deploy using WebDeploy. Here’s a graphic (which I’ll refer to as the Flow Diagram) of the process:

    image

    I’ve got some details in this diagram that we’ll cover shortly, so don’t worry about the details for now. The point is to clearly separate responsibilities (especially for build and release): build compiles source code, runs tests and performs static code analysis and so on. It shouldn’t have to know anything about environments – the output of the build is a single package with “holes” for environment values (tokens). Release will take in this single package and deploy it to each environment, plugging environment values into the holes as it goes. This guarantees that the bits you test in Dev and Staging are the same bits that get deployed to Prod!

    Deep Dive: Configuration

    Let’s get down into the weeds of configuration. If you’re going to produce a single package from build, then how should you handle config? Typically you have (at least) a connection string that you want to be different for each environment. Beyond that you probably have appSettings as well. If the build shouldn’t know about these values when it’s creating the package, then how do you manage config? Here are some options:

    1. Create a web.config for each environment in source control
      • Pros: All your configs are in source control in their entirety
      • Cons: Lots of duplications – and you have to copy the correct config to the correct environment; requires a re-build to change config values in release management
    2. Create a config transform for each environment
      • Pros: Less duplication, and you have all the environment values in source control
      • Cons: Requires a project (or solution) config per environment, which can lead to config bloat; requires that you create a package per environment during build; requires a re-build to change config values in release management
    3. Tokenize using a single transform and parameters.xml
      • Pros: No duplication; enables a single package that can be deployed to multiple environments; no rebuild required to change config values in release management
      • Cons: Environment values aren’t in source control (though they’re in Release Management); learning curve

    Furthermore, if you’re targeting Azure, you can use the same techniques as targeting IIS, or you can use Azure Resource Manager (ARM) templates to manage your configuration. This offloads the config management to the template and you assume that the target Azure Web App is correctly configured at the time you run WebDeploy.

    Here’s a decision tree to make this a bit easier to digest:

    image

    Let’s walk through it:

    • If you’re deploying to Azure, and using ARM templates, just make sure that you configure the settings correctly in the template (I won’t cover how to do this in this post)
    • If you’re deploying to IIS (or you’re deploying to Azure and don’t have ARM templates or just want to manage config in the same manner as you would for IIS), you should create a single publish profile using right-click->Publish (on the web application) called “Release”. This should target the release configuration and you should tokenize the connection strings in the wizard (details coming up)
    • Next, if you have appSettings, you’ll have to create a parameters.xml file (details coming up)
    • Commit to source control

    For the remainder of this post I’m going to assume that you’re deploying to IIS (or to Azure and handling config outside of an ARM template).

    Creating a Publish Profile

    So what is this publish profile and why do you need it? The publish profile enables you to:

    • provide a single transform (via the Web.release.config) that makes your config release-ready (removing the debug compilation property, for example)
    • tokenize the connection strings

    To create the profile, right-click your web application project and select “Publish…”. Then do the following:

    • Select “Custom” to create a new custom profile. Name this “Release” (you can name this anything, but you’ll need to remember the name for the build later)
    • image
    • On the Connection page, change the Publish method to “Web Deploy Package”. Type anything you want for the filename and leave the site name empty. Click Next.
    • image
    • On the Settings page, select the configuration you want to compile. Typically this is Release – remember that the name of the configuration here is how the build will know which transform to apply. If you set this to Release, it will apply Web.Release.config – if you set it to Debug it will apply Web.Debug.config. Typically you want to specify Release here since you’re aiming to get this site into Prod (otherwise why are you coding at all?) and you probably don’t want debug configuration in Prod!
    • You’ll see a textbox for each connection string you have in your Web.config. You can either put a single token in or a tokenized connection string. In the example below, I’ve used a single token (“__AppDbContextStr__”) for the one connection string and a tokenized string (“Server=__DbServer__;Initial Catalog=__DbName__;User Name=__DbUser__;Password=__DbPassword__”) for the other (just so you can see the difference). I’m using double underscore pre- and post-fix for the tokens:
    • image
    • Now click “Close” (don’t hit publish). When prompted to save the profile, select yes. This creates a Release.pubxml file in the Properties folder (the name of the file is the name of the profile you selected earlier):
    • image

    Creating a parameters.xml File

    The publish profile takes care of the connection strings – but you will have noticed that it doesn’t ask for values for appSettings (or any other configuration) anywhere. In order to tokenize anything in your web.config other than connection strings, you’ll need to create a parameters.xml file (yes, it has to be called that) in the root of your web application project. When the build runs, it will use this file to expose properties for you to tokenize (it doesn’t actually transform the config at build time).

    Here’s a concrete example: in my web.config, I have the following snippet:

    <appSettings>
      ...
      <add key="Environment" value="debug!" />
    </appSettings>

    There’s an appSetting key called “Environment” that has the value “debug!”. When I run or debug out of Visual Studio, this is the value that will be used. If I want this value to change on each environment I target for deployment, I need to add a parameters.xml file to the root of my web application with the following xml:

    <?xml version="1.0" encoding="utf-8" ?>
    <parameters>
      <parameter name="Environment" description="doesn't matter" defaultvalue="__Environment__" tags="">
        <parameterentry kind="XmlFile" scope="\\web.config$" match="/configuration/appSettings/add[@key='Environment']/@value">
        </parameterentry>
      </parameter>
    </parameters>

    Lines 3-6 are repeated for each parameter I want to configure. Let’s take a deeper look:

    • parameter name (line 3) – by convention it should be the name of the setting you’re tokenizing
    • parameter description (line 3) – totally immaterial for this process, but you can use it if you need to. Same with tags.
    • parameter defaultvalue (line 3) – this is the token you want injected – notice the double underscore again. Note that this can be a single token or a tokenized string (like the connection strings above)
    • parameterentry match (line 4) – this is the xpath to the bit of config you want to replace. In this case, the xpath says: in the “configuration” element, find the “appSettings” element, then find the “add” element with the key attribute = ‘Environment’ and replace its value attribute with the defaultvalue.

    Here you can see the parameters.xml file in my project:

    image

    To test your transform, right-click and publish the project using the publish profile (for this you may want to specify a proper path for the Filename in the Connection page of the profile). After a successful publish, you’ll see 5 files. The important files are the zip file (where the bits are kept – this is all the binary and content files, no source files) and the SetParameters.xml file:

    image
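    Incidentally, you can produce the same package from the command line – which is effectively what the build will do later – with something like this (a sketch; the project name WebDeployMe and the output path are assumptions for this example):

    msbuild .\WebDeployMe.csproj /p:Configuration=Release /p:DeployOnBuild=true /p:PublishProfile=Release /p:PackageLocation=C:\temp\package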

    Opening the SetParameters.xml file, you’ll see the following:

    <?xml version="1.0" encoding="utf-8"?>
    <parameters>
      <setParameter name="IIS Web Application Name" value="Default Web Site/WebDeployMe_deploy" />
      <setParameter name="Environment" value="__Environment__" />
      <setParameter name="DefaultConnection-Web.config Connection String" value="__AppDbContextStr__" />
      <setParameter name="SomeConnection-Web.config Connection String" value="Server=__DbServer__;Initial Catalog=__DbName__;User Name=__DbUser__;Password=__DbPassword__" />
    </parameters>

    You’ll see the tokens for the appSetting (Environment, line 4) and the connection strings (lines 5 and 6). Note how the tokens live in the SetParameters.xml file, not in the web.config file! In fact, if you dive into the zip file and view the web.config file, you’ll see this:

    <connectionStrings>
      <add name="DefaultConnection" connectionString="$(ReplacableToken_DefaultConnection-Web.config Connection String_0)" providerName="System.Data.SqlClient" />
      <add name="SomeConnection" connectionString="$(ReplacableToken_SomeConnection-Web.config Connection String_0)" providerName="System.Data.SqlClient" />
    </connectionStrings>
    <appSettings>
      ...
      <add key="Environment" value="debug!" />
    </appSettings>

    You can see that there are placeholders for the connection strings, but the appSetting is unchanged from what you have in your web.config! As long as your connection strings have placeholders and your appSettings are in the SetParameters.xml file, you’re good to go – don’t worry, WebDeploy will still inject the correct values for your appSettings at deploy time (using the xpath you supplied in the parameters.xml file).

    Deep Dive: Build

    You’re now ready to create the build definition. There are some additional build tasks which may be relevant – such as creating dacpacs from SQL Server Data Tools (SSDT) projects to manage database schema changes – that are beyond the scope of this post. As for the web application itself, I like to have builds do the following:

    • Version assemblies to match the build number (optional, but recommended)
    • Run unit tests, code analysis and other build verification tasks
    • Create the WebDeploy package

    To version the assemblies, you can use the VersionAssemblies task from my build tasks extension in the marketplace. You’ll need the ReplaceTokens task for the release later, so just install the extension even if you’re not versioning. To show the minimum setup required to get the release working, I’m skipping unit tests and code analysis – but this is only for brevity. I highly recommend that unit testing and code analysis become part of every build you have.

    Once you’ve created a build definition:

    • Click on the General tab and change the build number format to 1.0.0$(rev:.r). This makes the first build have the number 1.0.0.1, the second build 1.0.0.2 etc.
    • Add a VersionAssemblies task as the first task. Set the Source Path to the folder that contains the projects you want to version (typically the root folder). Leave the rest defaulted.
      • image
    • Leave the NuGet restore task as-is (you may need to edit the solution filter if you have multiple solutions in the repo)
    • On the VS Build task, edit the MSBuild Arguments parameter to be /p:DeployOnBuild=true /p:PublishProfile=Release /p:PackageLocation=$(build.artifactstagingdirectory)
      • This tells MSBuild to publish the site using the profile called Release (or whatever name you used for the publish profile you created) and place the package in the build artifact staging directory
      • image
    • Now you should put in all your code analysis and test tasks – I’m omitting them for brevity
    • The final task should be to publish the artifact staging directory, which at this time contains the WebDeploy package for your site
      • image

    Run the build. When complete, the build drop should contain the site zip and SetParameters.xml file (as well as some other files):

    image

    You now have a build that is potentially deployable to multiple environments.

    Deep Dive: Release

    In order for the release to work correctly, you’ll need to install some extensions from the Marketplace. If you’re targeting IIS, you need to install the IIS Web App Deployment Using WinRM Extension. For both IIS and Azure deployments, you’ll need the ReplaceTokens task from my custom build tasks extension.

    There are a couple of ways you can push the WebDeploy package to IIS:

    • Use the IIS Web App Deployment using WinRM task. This is fairly easy to use, but requires that you copy the zip and SetParameters files to the target server before deploying.
    • Use the cmd file that gets generated with the zip and SetParameters files to deploy remotely (there’s an example invocation below). This requires you to know the cmd parameters and to have the WebDeploy remote agent running on the target server.
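    If you do go the cmd route, the invocation looks something like this (a sketch – the cmd file name matches your package name, and the server name and credentials are placeholders):

    # what-if run first (/T), then the real deployment (/Y) against the WebDeploy remote agent on the target server
    .\WebDeployMe.deploy.cmd /T /M:webserver01 /U:MYDOMAIN\deployUser /P:p@ssw0rd
    .\WebDeployMe.deploy.cmd /Y /M:webserver01 /U:MYDOMAIN\deployUser /P:p@ssw0rd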

    I recommend the IIS task generally – unless for some or other reason you don’t want to open up WinRM.

    So there’s some configuration required on the target IIS server:

    • Install WebDeploy
    • Install WebDeploy Remote Agent – for using the cmd. Note: if you install via Web Platform Installer you’ll need to go to Programs and Features and Modify the existing install, since the remote agent isn’t configured when installing WebDeploy via WPI
    • Configure WinRM – for the IIS task. You can run “winrm quickconfig” to get the service started (see the sketch after this list). If you need to deploy using certificates, then you’ll have to configure that too (I won’t cover that here)
    • Firewall – remember to open ports for WinRM (5985 or 5986 for default WinRM HTTP or HTTPS respectively) or 8172 for the WebDeploy remote agent (again, this is the default port)
    • Create a service account that has permissions to copy files and “do stuff” in IIS – I usually recommend that this user account be a local admin on the target server
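    Here’s roughly what that server prep looks like in an elevated PowerShell session (a sketch – the firewall rule name and service account are examples only):

    # enable the WinRM service and create the default HTTP listener (port 5985)
    winrm quickconfig -q

    # open the WinRM HTTP port in the Windows firewall (Server 2012 and later)
    New-NetFirewallRule -DisplayName "WinRM HTTP" -Direction Inbound -Protocol TCP -LocalPort 5985 -Action Allow

    # add the deployment service account to the local Administrators group (example account name)
    net localgroup Administrators MYDOMAIN\svc-deploy /add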

    Once you’ve done that, you can create the Release Definition. Create a new release and specify the build as the primary artifact source. For this example I’m using the IIS task to deploy (and create the site in IIS – this is optional). Here are the tasks you’ll need and their configs:

    • Replace Tokens Task
      • Source Path – the path to the folder that contains the SetParameters.xml file within the drop
      • Target File Pattern – set to *.SetParameters.xml
      • image
    • Windows Machine File Copy Task
      • Copy the drop folder (containing the SetParameters.xml and website zip files) to a temp folder on the target server. Use the credentials of the service account you created earlier. I recommend using variables for all the settings.
      • image
    • (Optional) WinRM – IIS Web App Management Task
      • Use this task to create (or update) the site in IIS, including the physical path, host name, ports and app pool. If you have an existing site that you don’t want to mess with, then skip this task.
      • image
    • WinRM – IIS Web App Deployment Task
      • This task takes the (local) path of the site zip file and the SetParameters.xml file and deploys to the target IIS site.
      • image
      • You can supply extra WebDeploy args if you like – there are some other interesting switches under the MSDeploy Additional Options section.

    Finally, open the Environment variables and supply name/value pairs for the values you want injected. In the example I’ve been using, I’ve got a token for Environment and then I have a tokenized connection string with tokens for server, database, user and password. These are the variables that the Replace Tokens task uses to inject the real environment-specific values into the SetParameters file (in place of the tokens). When WebDeploy runs, it transforms the web.config in the zip file with the values that are in the SetParameters.xml file. Here you can see the variables:

    image

    You’ll notice that I also created variables for the Webserver name and admin credentials that the IIS and Copy Files tasks use.

    You can of course do other things in the release – like run integration tests or UI tests. Again for brevity I’m skipping those tasks. Also remember to make the agent queue for the environment one that has an agent that can reach the target IIS server for that environment. For example I have an agent queue called “webdeploy” with an agent that can reach my IIS server:

    image

    I’m now ready to run the deployment. After creating a release, I can see that the tasks completed successfully! Of course the web.config is correct on the target server too.

    image

    Deploying to Azure

    As I’ve noted previously, if you’re deploying to Azure, you can put all the configuration into the ARM template (see an example here – note how the connection strings and Application Insights appSettings are configured on the web application resource). That means you don’t need the publish profile or parameters.xml file. You’ll follow exactly the same process for the build (just don’t specify the PublishProfile argument). The release is a bit easier too – you first deploy the resource group using the Azure Deployment: Create or Update Resource Group task like so:

    image

    You can see how I override the template parameters – that’s how you “inject” environment specific values.

    Then you use a Deploy AzureRM Web App task (no need to copy files anywhere) to deploy the web app like so:

    image

    I specify the Azure Subscription – this is an Azure ARM service endpoint that I’ve preconfigured – and then the website name and optionally the deployment slot. Here I am deploying to the Dev slot – there are a couple extensions in the marketplace that allow you to swap slots (usually after you’ve smoke-tested the non-prod slot to warm it up and ensure it’s working correctly). This allows you to have zero downtime. The important bit here is the Package or Folder argument – this is where you’ll specify the path to the zip file.

    Of course if you don’t have the configuration in an ARM template, then you can just skip the ARM deployment task and run the Deploy AzureRM Web App task. There is a parameter called SetParameters file (my contribution to this task!) that allows you to specify the SetParameters file. You’ll need to do a Replace Tokens task prior to this to make sure that environment specific values are injected.

    For a complete walkthrough of deploying a Web App to Azure with an ARM template, look at this hands-on-lab.

    Conclusion

    Once you understand the pieces involved in building, packaging and deploying web applications, you can fairly easily manage configuration without duplicating yourself – including connection strings, appSettings and any other config – using a publish profile and a parameters.xml file. Then using marketplace extensions, you can build, version, test and package the site. Finally, in Release Management you can inject environment specific values for your tokens and WebDeploy to IIS or to Azure.

    Happy deploying!

    Managing Config for .NET Core Web App Deployments with Tokenizer and ReplaceTokens Tasks


    Last week I posted an end-to-end walkthrough about how to build and deploy web apps using Team Build and Release Management – including config management. The post certainly helps you if you’re on the .NET 4.x Framework – but what about deploying .NET Core apps?

    The Build Once Principle

    If you’ve ever read any of my blogs you’ll know I’m a proponent of the “build once” principle. That is, your build should be taking source code and (after testing and code analysis etc.) producing a single package that can be deployed to multiple environments. The biggest challenge with a “build once” approach is that it’s non-trivial to manage configuration. If you’re building a single package, how do you deploy it to multiple environments when the configuration is different on those environments? I present a solution in my walkthrough – use a publish profile and a parameters.xml file to tokenize the configuration file during build. Then replace the tokens with environment values at deploy time. I show you how to do that starting with the required source changes, how the build works and finally how to craft your release definition for token replacements and deployment.

    AppSettings.json

    However, .NET Core apps are a different kettle of fish. There is no web.config file (by default). If you File->New Project and create a .NET Core web app, you’ll get an appsettings.json file. This is the “new” web.config if you will. If you then go to the .NET Core documentation, you’ll see that you can create multiple configuration files using “magic” names like appsettings.dev.json and appsettings.prod.json (these are loaded up during Startup.cs). I understand the appeal of this approach, but to me it feels like having multiple web.config files which you replace at deployment time (like web.dev.config and web.prod.config). I’m not even talking about config transforms – just full config files that you keep in source control and (conceptually) overwrite during deployment. So you’re duplicating code – which is bad juju.

    I got to thinking about how to handle configuration for .NET Core apps, and after mulling it over and having a good breakfast chat with fellow MVP Scott Addie, I thought about tokenizing the appsettings.json file. If I could figure out a clean way to tokenize the file at build time, then I could use my existing ReplaceTokens task (part of my marketplace extension) during deploy time to fill in environment specific values. Unfortunately there’s no config transform for JSON files, so I decided to create a Tokenizer task that could read in a JSON file and then auto-replace values with tokens (based on the object hierarchy).

    Tokenizer Task

    To see this in action, I created a new .NET Core Web App in Visual Studio. I then added a custom config section. I ended up with an appsettings.json file that looks as follows:

    {
      "ConnectionStrings": {
        "DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=aspnet-WebApplication1-26e8893e-d7c0-4fc6-8aab-29b59971d622;Trusted_Connection=True;MultipleActiveResultSets=true"
      },
      "Tricky": {
        "Gollum": "Smeagol",
        "Hobbit": "Frodo"
      },
      "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
          "Default": "Debug",
          "System": "Information",
          "Microsoft": "Information"
        }
      }
    }
    

    Looking at this config, I can see that I might want to change the ConnectionStrings.DefaultConnection as well as the Tricky.Gollum and Tricky.Hobbit settings (yes, I’m reading the Lord of the Rings – I’ve read it about once a year since I was 11). I may want to change Logging.LogLevel.Default too.

    Since the file is JSON, I figured I could create a task that reads the file in and then walks the object hierarchy, replacing values with tokens as it goes. But I realized that you may not want to replace every value in the file, so the task would have to take an explicit include (for only replacing certain values) or exclude list (for replacing all but certain values).

    I wanted the appsettings file to look like this once the tokenization had completed:

    {
      "ConnectionStrings": {
        "DefaultConnection": "__ConnectionStrings.DefaultConnection__"
      },
      "Tricky": {
        "Gollum": "__Tricky.Gollum__",
        "Hobbit": "__Tricky.Hobbit__"
      },
      "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
          "Default": "__Logging.LogLevel.Default__",
          "System": "Information",
          "Microsoft": "Information"
        }
      }
    }
    

    You can see the tokens on the highlighted lines.

    After coding for a while on the plane (#RoadWarrior) I was able to create a task for tokenizing a JSON file (perhaps in the future I’ll make more file types available – or I’ll get some Pull Requests!). Having recently added unit tests for my Node tasks, I was able to bang this task out rather quickly.
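    The task itself is a Node task, but conceptually it boils down to something like this rough PowerShell sketch (not the actual task code – just the idea of walking the object hierarchy and swapping included values for __dotted.path__ tokens):

    # Sketch: replace the values of selected properties in a JSON file with __path__ tokens
    function Set-JsonTokens {
        param($node, [string]$prefix, [string[]]$includeFields)
        foreach ($prop in $node.PSObject.Properties) {
            $path = if ($prefix) { "$prefix.$($prop.Name)" } else { $prop.Name }
            if ($null -ne $prop.Value -and $prop.Value.GetType().Name -eq 'PSCustomObject') {
                # nested object - recurse with the dotted path as the new prefix
                Set-JsonTokens -node $prop.Value -prefix $path -includeFields $includeFields
            } elseif ($includeFields -contains $path) {
                # leaf value that we want tokenized
                $prop.Value = "__${path}__"
            }
        }
    }

    $json = Get-Content .\appsettings.json -Raw | ConvertFrom-Json
    Set-JsonTokens -node $json -prefix '' -includeFields @(
        'ConnectionStrings.DefaultConnection', 'Tricky.Gollum', 'Tricky.Hobbit', 'Logging.LogLevel.Default')
    $json | ConvertTo-Json -Depth 10 | Set-Content .\appsettings.json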

    The Build Definition

    With my shiny new Tokenize task, I was ready to see if I could get the app built and deployed. Here’s what my build definition looks like:

    image

    The build tasks perform the following operations:

    1. Run dotnet with argument “restore” (restores the package dependencies)
    2. Tokenize the appsettings.json file
    3. At this point I should have Test, Code Analysis etc. – I’ve omitted these quality tasks for brevity
    4. Run dotnet with arguments “publish src/CoreWebDeployMe --configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory)/Temp” (I’m publishing the folder that contains my .NET Core web app with the BuildConfiguration and placing the output in the Build.ArtifactStagingDirectory/Temp folder)
    5. Zip the published folder (the zip task comes from this extension)
    6. Remove the temp folder from the staging directory (since all the files I need are now in the zip)
    7. Upload the zip as a build drop

    The Tokenize task is configured as follows:

    image

    Let’s look at the arguments:

    • Source Path – the path containing the file(s) I want to tokenize
    • File Pattern – the mini-match pattern for the file(s) within the Source Path I want to tokenize
    • Tokenize Type – I only have json for now
    • IncludeFields – the list of properties in the json file that I want the Tokenizer to tokenize
    • ExcludeFields – I could have used a list of properties I wanted to exclude from tokenization here instead of using the Include Fields property

    Once the build completes, I now have a potentially deployable .NET Core web application with a tokenized appsettings file. I could have skipped the zip task and just uploaded the site unzipped, but uploading lots of little files takes longer than uploading a single larger file. Also, I was thinking about the deployment – downloading a single larger file (I guessed) was going to be faster than downloading a bunch of smaller files.

    The Release

    I was expecting to have to unzip the zip file, replace the tokens in the appsettings.json file and then re-zip the file before invoking WebDeploy to push the zip file to Azure. However, the AzureRM WebDeploy task recently got updated, and I noticed that what used to be “Package File” was now “Package File or Folder”. So the release turned out to be really simple:

    1. Unzip the zip file to a temp folder using an inline PowerShell script (why is there no complementary Unzip task from the Trackyon extension?)
    2. Run ReplaceTokens on the appsettings.json file in the temp folder
    3. Run AzureRM WebDeploy using the temp folder as the source folder

    image

    Here’s how I configured the PowerShell task:

    image

    The script takes in the sourceFile (the zip file) as well as the target path (which I set to a temp folder in the drop folder):

    param(
      $sourceFile,
      $targetPath)
    
    Expand-Archive -Path $sourceFile -DestinationPath $targetPath -Force
    

    My first attempt deployed the site – but the ReplaceTokens task didn’t replace any tokens. After digging a little I figured out why – the default regex pattern – __(\w+)__ – doesn’t work when the token names have periods in them. So I just updated the regex to __(\w+[\.\w+]*)__ (which reads “find double underscore, followed by a word, followed by a period and word repeated 0 or more times, ending with double underscore”).
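    You can check the difference quickly in PowerShell:

    # the default pattern doesn't match a token that contains a period...
    "__Tricky.Gollum__" -match '__(\w+)__'            # False
    # ...but the tweaked pattern does
    "__Tricky.Gollum__" -match '__(\w+[\.\w+]*)__'    # True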

    image

    That got me closer – one more change I had to make was replacing the period with underscore in the variable names on the environment:

    image

    Once the ReplaceTokens task was working, the Deploy task was child’s play:

    image

    I just made sure that the “Package or Folder” was set to the temp path where I unzipped the zip file in the first task. Of course at this point the appsettings.json now contains real environment-specific values instead of tokens, so WebDeploy can go and do its thing.

    Conclusion

    It is possible to apply the Build Once principle to .NET Core web applications, with a little help from my friends Tokenizer and ReplaceTokens in the build and release respectively. I think this approach is fairly clean – you get to avoid duplication in source code, build a single package and deploy to multiple environments. Of course my experimentation is available to you for free from the tasks in my marketplace extension! Sound off in the comments if you think this is useful (or horrible)…

    Happy releasing!

    You Suck: Or, How to Process Criticism


    image

    Recently I received some criticism from a customer. Sometimes I find it difficult to process criticism – I justify or argue or dismiss. Some of that is my personality – I like to be right! Part of that is the fact that I strive for excellence, so when I’m told I missed the mark, it can feel like I’m being told I’m a failure. You, dear reader, probably strive for perfection – but let’s face facts: we’re not perfect. If you’re like me and you have a difficult time receiving criticism, then this post is for you – hopefully you can learn something from how I process.

    I’m Not Perfect

    This one’s tough. My natural instinct when receiving criticism is to justify. For example, the criticism might be, “You didn’t finish when you said you would.” My inclination is to retort: “Well, you weren’t clear enough on what you wanted,” or something like that. However, the most critical key to successfully processing criticism is to remain teachable – and that means acknowledging that I missed the mark. I have to tell myself not to argue, not to justify. I have to take a step back and see the situation from the other perspective.

    I Have Blind Spots

    That leads to the second critical principle – I have blind spots. No matter how much I stare in the mirror to gel my hair to perfection, I still can’t see what’s going on with that stubborn crown on the back of my head! Even if I’m prone to introspection and self-improvement, I’m going to miss stuff. About me. If I reject criticism outright, I’ll never get a chance to see into those blind spots. I have to let criticism be a catalyst to stepping back and honestly assessing what I said or did from someone else’s perspective. I can only improve if there’s something to work on – so I have to let criticism surface things that I can work on.

    I Am Not Defined By a Moment

    This is a big one for me – I can tend to take criticism hard, so it becomes overwhelming. I have to realize that even if I blow it, that moment (or engagement) doesn’t define me. I’m more than this one moment. I may have gotten this thing wrong, but I get a lot of things right too! Remembering previous moments where I got things right helps me process moments when I get things wrong.

    I Can’t Always Win

    Sometimes, no matter how hard I try, I can’t win. Someone is going to be disappointed in something I did or said. Most of the time I don’t set out to disappoint, but life happens. Expectations aren’t clear, or are just different, or communication fails. Things beyond my control happen. I have to admit that I lost a round – as long as I get up and keep on going!

    Learning is a Team Sport

    Sometimes criticism is deserved. Sometimes it isn’t. And sometimes it’s hard to tell the difference. I make sure I surround myself with people that know and love me – that way, when I’m criticized I have a team I can go to. I like to make my team diverse – my colleagues of course, but also my friends and family. Even if the criticism is work-related, sometimes having a “personal” perspective can help process a “professional” issue. I also make sure I get someone who’s more experienced than me who can mentor me through a situation.

    Often criticism has some meat and some bones. Take the meat, spit out the bones. My team helps me to sort the meat from the bones. They help me to keep things in perspective.

    Make it Right

    Finally, if it’s appropriate to do so, make it right. Sometimes I can take some criticism and just improve, learn and get better. Sometimes I may need to make things right. My team helps me figure out “action items” – things I can do to improve, but also things that I can do to make it right. This doesn’t always apply, but I like to look for things to do or say that will make things right. Although doing this without justifying myself is challenging for me!

    Conclusion

    Unless you’re particularly reclusive, you’ll get criticized at some point. Learning how to embrace and deal with criticism is an important skill to learn. If you use it as a chance to learn and improve, and surround yourself with people who can coach and encourage you, you can process criticism positively and become better!

    Happy learning!

    * Image by innoxiuss used under Creative Commons

    DevOps Drives Better Architecture–Part 1 of 2


    (Read part 2 here)

    I haven’t blogged for a long while – it’s been a busy few months!

    One of the things I love about being a DevOps consultant is that I have to be technically proficient – I can’t help teams develop best practices if I don’t know the technology to at least a reasonable depth – but I also get to be a catalyst for change. I love the cultural dynamics of DevOps. After all, as my friend Donovan Brown says, “DevOps is the union of people, processes and tools…”. When you involve people, then you get to watch (or, in my case, influence) culture. And it fascinates me.

    I only recently read Continuous Delivery by Jez Humble and David Farley. I was pleased at how many of their insights I’ve been advocating “by instinct” over my years of ALM and DevOps consulting. Reading their book sparked the thoughts that I’ll put into this two-part post.

    This part will introduce the thought that DevOps and architecture are symbiotic – good architecture makes for good DevOps, and good DevOps drives good architecture. I’ll look at Builds and Source Control in particular. In part 2, I’ll discuss infrastructure as code, database design, automated testing and monitoring and how they relate to DevOps and vice versa.

    Tools, tools, tools

    Over the past few months, the majority of my work has been to help teams implement Build/Release Pipelines. This seems inevitable to me given the state of DevOps in the market in general – most teams have made (or are in the process of making) a shift to agile, iterative frameworks for delivering their software. As they get faster, they need to release more frequently. And if they’ve got manual builds and deployments, the increasing frequency becomes a frustration because they can’t seem to deploy fast enough (or consistently enough). So teams are starting to want to automate their build/release flows.

    It’s natural, therefore, to immediately look for a tool to help automation. And for a little help from your friends at Northwest Cadence to help you do it right!

    Of course my tool of choice for build/release pipelines is Visual Studio Team Services (VSTS) or Team Foundation Server (TFS) for a number of reasons:

    1. The build agent is cross platform (it’s built on .NET Core, so runs wherever .NET Core runs)
    2. The build agent is also the release agent
    3. The build agent can run tests 
    4. The task-based system has a good Web-based UI, allowing authoring from wherever you have a browser
    5. The logging is great – allowing fast debugging of build issues
    6. Small custom logic can easily be handled with inline scripts
    7. If you can script it, the agent can do it – whether it’s bat, ps1 or sh 
    8. Extensions are fairly easy to create
    9. There is a large and thriving marketplace for extensions

    Good Architecture Means Easier DevOps

    Inevitably implementing build automation impacts how you organize your source control. And implementing a release pipeline impacts how you test. And implementing continuous deployment impacts IT, since there’s suddenly a need to be able to spin up and configure and tear down environments on the fly. I love seeing this progression – but it’s often painful for the teams I’m working with. Why? Because teams start realizing that if their architecture was better, it would make other parts of the DevOps pipeline far easier to implement.

    For example, if you start automating releases, pretty soon you start wanting to run automated tests since your tests start becoming the bottleneck to delivery. At this point, if you’ve used good architectural principles like interfaces and inversion of control, writing unit tests is far easier. If you haven’t, you have a far harder time writing tests.

    Good architecture can make DevOps easier for you and your team. We’ve been told to do these things, and often we’ve found reasons not to do them (“I don’t have time to make an interface for everything!” or “I’ll refactor that class to make it more testable in the next sprint” etc. etc.). Hopefully I can show you how these architectural decisions, if done with DevOps in mind, will not only make your software better but help you to implement better DevOps, more easily!

    The Love Triangle: Source Control, Branches and Builds

    I really enjoy helping teams implement their first automated builds. Builds are so foundational to good DevOps – and builds tend to force teams to reevaluate their code layout (structure), dependencies and branching strategy.

    Most of the time, the teams have their source code in some sort of source control system. Time and time again, the teams that have a good structure and simple branching strategies have a far easier time getting builds to work well.

    Unfortunately, most repositories I look at are not very well structured. Or the branches represent environments so you see MAIN/DEV/PROD (which is horrible even though most teams don’t know why – read on if this is you). Or they have checked binaries into source control instead of using a package manager like NuGet. Or they have binary references instead of project references.

    Anyway, as we get the build going, we start to uncover potential issues most teams don’t even know they have (like missing library references or conflicting package versions). After some work and restructuring, we manage to get a build to compile. Whoop!

    Branching

After the initial elation and once the party dies down, we take a look at the branching strategy. “We need a branch for development, then we promote to QA, and once QA signs off we promote to PROD. So we need branches for each of these environments, right?” This is still a very pervasive mindset. However, DevOps – specifically release pipelines – should operate on a simple principle: build once, deploy many times. In other words, the bits that you deploy to DEV should be the same bits that you deploy to QA and then to PROD. Don’t just take my word for it: read Continuous Delivery – the authors emphasize this over and over again. You can’t do that if you have to merge and build each time you want to promote code between environments. So how do you track what code is where while still supporting parallel development?

    Builds and Versioning

    I advocate for a master/feature branch strategy. That is, you have your stable code on master and then have multiple feature branches (1 to n at any one time) that developers work on. Development is done on the feature branch and then merged into master via Pull Request when it’s ready. At that point, a build is queued which versions the assemblies and tags the source with the version (which is typically the build number).

That’s how you keep track of what code is where – by the versions and tags that your build creates. That way, you can do hotfixes directly onto master even if you’ve already merged code that is in the pipeline but not yet in production. For example, say you have 1.0.0.6 in prod and you merge some code in for a new feature. The build kicks in and produces version 1.0.0.7, which gets automatically deployed to the DEV environment for integration testing. While that’s going on, you get a bug report from PROD. Oh no! We’ve already merged in code that isn’t yet in PROD, so how do we fix it on master?!?

It’s easy – we know that 1.0.0.6 is in PROD, so we branch the code using tag 1.0.0.6 (which the build tagged in the repo when it ran the 1.0.0.6 build). We then fix the issue on this hotfix branch and queue a build off of it – a new build, 1.0.0.8. We take a quick look at this and fast-track it through until it’s deployed and business can continue. In the meantime, we can abandon the 1.0.0.7 build that’s currently in the deployment pipeline. We merge the hotfix branch back to master and do a new build – 1.0.0.9 – that now has the hotfix as well as the new feature. No sweat.
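
As a rough sketch, that hotfix flow in plain Git commands looks something like this (the branch name is illustrative, and the version tags are the ones the build applied):

# create a hotfix branch from the tag the 1.0.0.6 build applied
git checkout -b hotfix/1.0.0.6 1.0.0.6

# commit the fix and push - CI picks this branch up and produces build 1.0.0.8
git commit -am "Fix critical production issue"
git push -u origin hotfix/1.0.0.6

# once 1.0.0.8 has been fast-tracked to PROD, merge the fix back into master (producing 1.0.0.9)
git checkout master
git merge hotfix/1.0.0.6
git push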

“Hang on. Pull Requests? Feature branches? That sounds like Git.” Yes, it does. If you’re not on Git, then you’d better have a convincing reason not to be. You can do a lot of this with TFVC, but it’s just harder. So just get to Git. And as a side benefit, you’ll get a far richer code review experience (in the form of Pull Requests) so your quality is likely to improve. And merging is easier. And you can actually cherry-pick. I could go on and on, but there are enough Git bigots out there that I don’t need to add my voice too. But get to Git. Last word. Just Do It.

    Small Repos, Microservices and Package Management

    So you’re on Git and you have a master/feature branch branching strategy. But you have multiple components or layers and you need them to live together for compilation, so you put them all into one repo, right? Wrong. You need to separate out your components and services into numerous small repos. Each repo should have Continuous Integration (CI) on it. This change forces teams to start decomposing their monolithic apps into shared libraries and microservices. “Wait – what? I need to get into microservices to get good DevOps?” I hear you yelling. Well, another good DevOps principle is releasing small amounts of change often. And if everything is mashed together in a giant repo, it’s hard to do that. So you need to split up your monoliths into smaller components that can be independently released. Yet again, good architecture (loose coupling, strict service boundaries) promotes good DevOps – or is it DevOps finally motivating you to Do It Right™ like you should have all along?

You’ve gone ahead and split out shared code and components. But now your builds don’t work because your common code (your internal libraries) is suddenly in different repos. Yes, you’re going to need a package management tool. Now, as a side benefit, teams can opt in to changes in common libraries rather than being forced to update project references. This is a great example of how good DevOps influences good development practices! Even if you just use a file share as a NuGet source (you don’t necessarily need a full-blown package management system) you’re better off.
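
As a rough illustration (the share path and feed name here are made up), pointing a nuget.config at a plain file share is enough to get a shared internal feed going:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- hypothetical internal feed hosted on a file share -->
    <add key="TeamPackages" value="\\fileserver\nuget-packages" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>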

    Conclusion

    In this post, we’ve looked at how good source control structure, branching strategies, loosely coupled architectures and package management can make DevOps easier. Or perhaps how DevOps pushes you to improve all of these. As I mentioned, good architecture and DevOps are symbiotic, feeding off each other (for good or bad). So make sure it’s for good! Now go and read part 2 of this post.

    Happy architecting!

    DevOps Drives Better Architecture–Part 2 of 2


    In part 1 I introduced some thoughts as to how good architecture makes DevOps easier. And how good DevOps drives better architecture – a symbiotic relationship. I discussed how good source control structure, branching strategies, loosely coupled architectures and package management can make DevOps easier. In this post I’ll share some thoughts along the same lines for infrastructure as code, database design and management, monitoring and test automation.

    Infrastructure as Code

    Let’s say you get your builds under control, you’re versioning, you get your repos sorted and you get package management in place. Now you’re starting to produce lots and lots of builds. Unfortunately, a build doesn’t add value to anyone until it’s in production! So you’ll need to deploy it. But you’ll need infrastructure to deploy to. So you turn to IT and fill in the forms and wait 2 weeks for the servers…

    Even if you can quickly spin up tin (or VMs more likely) or you’re deploying to PaaS, you still need to handle configuration. You need to configure your servers (if you’re on IaaS) or your services (on PaaS). You can’t afford to do this manually each time you need infrastructure. You’re going to need to be able to automatically spin up (and down) resources when you need them.

    Spinning Up From Scratch

I was recently at a customer that was using AWS VMs for their DevTest infrastructure. We were evaluating whether we could replicate their automated processes in Azure. The problem was that they hadn’t scripted the creation of their environments from scratch – they would manually configure a VM until it was “golden” and then use that as a base image for spinning up instances. Now that they wanted to change cloud hosts, they couldn’t do it easily because someone would have to spin up an Azure VM and manually configure it. If they had instead scripted and automated their configuration, we could have used the existing scripts to quickly spin up test machines on any platform or host. Sometimes you don’t know you need something until you actually need it – so do the right things early and you’ll be better off in the long run. Get into the habit of configuring via code rather than via UI.

    Deploy Whole Components – Always

    In Continuous Delivery, Humble and Farley argue that it’s easier to deploy your entire system each time than trying to figure out what the delta is and only deploy that. If you craft your scripts and deployments so that they are idempotent, then this shouldn’t be a problem. Try to prefer declarative scripting (such as PowerShell DSC) over imperative scripting (like pure PowerShell). Not only is it easier to “read” the configuration, but the system can check if a component is in the required state, and if it is, just “no-op”. Make sure your scripts work irrespective of the initial state of the system.
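
To give a feel for the declarative style, here’s a minimal PowerShell DSC sketch (the feature and folder are illustrative): it describes the desired end state, and the Local Configuration Manager only acts on a node that isn’t already in that state:

Configuration WebServerConfig
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost"
    {
        # ensure IIS is installed - a no-op if it already is
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        # ensure the deployment folder exists
        File SiteRoot
        {
            Ensure          = "Present"
            Type            = "Directory"
            DestinationPath = "C:\inetpub\wwwroot\MyApp"
        }
    }
}

# compile the MOF and apply the configuration
WebServerConfig -OutputPath .\dsc
Start-DscConfiguration -Path .\dsc -Wait -Verbose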

If you change a single class, should you just deploy that assembly? It’s far easier to deploy the entire component (be that a service or a web app) than trying to work out what changed and what needs to be deployed. Tools can also help here – WebDeploy, for example, only deploys files that are different. You build the entire site and it calculates at deployment time what the differences are. The same goes for SSDT for database schema changes.

    Of course, getting all the requirements and settings correct in a script in source control is going to mean that you need to cooperate and team up with the IT guys (and gals). You’re going to need to work together to make sure that you’re both happy with the resulting infrastructure. And that’s good for everyone.

    Good Database Design and Management

    Where does your logic live?

    How does DevOps influence database design and management? I used to work for a company where the dev manager insisted that all our logic be in stored procedures. “If we need to make a change quickly,” he reasoned, “then we don’t need to recompile, we can just update the SP!” Needless to say, our code was virtually untestable, so we just deployed and hoped for the best. And spent a lot of time debugging and fighting fires. It wasn’t pretty.

Stored procedures are really hard to test reliably. And they’re hard to code and debug. So you’re better off leaving your database to store data. Placing logic into components or services lets you test the logic without having to spin up databases with test data – using mocks, fakes or doubles lets you abstract away where the data is stored and test the logic of your apps. And that makes DevOps a lot easier since you can test a lot more during the build phase. And the earlier you find issues (builds are “earlier” than releases), the less it costs to fix them – and the easier they are to fix.

    Managing Schema

What about schema? Even if you don’t have logic in your database in the form of stored procedures, you’re bound to change the schema at some stage. Don’t do it using manual scripts. Start using SQL Server Data Tools (SSDT) for managing your schema. Would you change the code directly on a webserver to implement new features? Of course not – you want to have source control and testing etc. So why don’t we treat databases the same way? Most teams seem happy to “just fix it on the server” and hope they can somehow replicate changes made on the DEV databases to QA and PROD. If that’s you – stop it! Get your schema into an SSDT project and turn off DDL-write permissions so that the only way to change a database schema is to change the project, commit and let the pipeline make the change.

The advantage of this approach is that you get a full history (in source control) of changes made to your schema. Also, sqlpackage calculates the diff at deployment time between your model and the target database and only updates what it needs to in order to make the database match the model. It’s idempotent and completely uncaring as to the start state of the database – which means hassle-free deployments.
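
For example (server and database names are placeholders), a release step can hand the dacpac built from the SSDT project to sqlpackage, which computes the diff and applies only the required changes:

sqlpackage.exe /Action:Publish `
  /SourceFile:"MyDatabase.dacpac" `
  /TargetServerName:"myserver.database.windows.net" `
  /TargetDatabaseName:"MyDatabase" `
  /p:BlockOnPossibleDataLoss=true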

    Automated Testing

    I’ve already touched on this topic – using interfaces and inversion of control makes your code testable, since you can easily mock out external dependencies. Each time you have code that interacts with an external system (be it a database or a web API) you should abstract it as an interface. Not only does this uncouple your development pace from the pace of the external system, it allows you to much more easily test your application by mocking/stubbing/faking the dependency. Teams that have well-architected code are more likely to test their code since the code is easier to test! And tested code produces fewer defects, which means more time delivering features rather than fighting fires. Once again, good architecture is going to ease your DevOps!
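
A tiny C# sketch of the idea (the types are invented for the example): because the service depends on an interface, a unit test can pass in a hand-rolled fake or a mock – no database required:

public class Customer { public bool IsVip { get; set; } }

public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class DiscountService
{
    private readonly ICustomerRepository repo;

    public DiscountService(ICustomerRepository repo) { this.repo = repo; }

    public decimal GetDiscount(int customerId)
    {
        // pure logic - testable with a fake ICustomerRepository during the build
        return repo.GetById(customerId).IsVip ? 0.1m : 0m;
    }
}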

Once you’ve invested in unit testing, you’ll want to start doing some integration testing. This requires code to be deployed to environments so that it can actually hit the external systems. If everything is a huge monolithic app, then as tests fail you won’t know why they failed. Smaller components will let you more easily isolate where issues occur, leading to faster mean time to detection (MTTD). And if you set up so that you can deploy components independently (since they’re loosely coupled, right!) then you can recover quickly, leading to faster mean time to recovery (MTTR).

    You’ll want to have integration tests that operate “headlessly”. Prefer API calls and HTTP requests over UI tests since UI tests are notoriously hard to create correctly and tend to be fragile. However, if you do get to UI tests, then good architecture can make a big difference here too. Naming controls uniquely means UI test frameworks can find them more easily (and faster) so that UI testing is faster and more reliable. The point surfaces again that DevOps is making you think about how you structure even your UI!

    Monitoring

    Unfortunately, very few teams that I come across have really good monitoring in place. This is often the “forgotten half-breed” of the DevOps world – most teams get source code right, test right and deploy right – and then wash their hands. “Prod isn’t my responsibility – I’m a dev!” is a common culture. However, good monitoring means that you’re able to more rapidly diagnose issues, which is going to save you time and effort and keep you delivering value (debugging is not delivering value). So you’ll need to think about how to monitor your code, which is going to impact on your architecture.

Logging is just monitoring 1.0. What about utilization? How do you monitor how many resources your code is consuming? And how do you know when to spin up more resources for peak loads? Can you even do that – or do your web services require affinity? Ensuring that your code can run on 1 or 100 servers will make scaling a lot easier.

But beyond logging and performance monitoring, there’s a virtually untapped wealth of what I call “business monitoring” that very few (if any) teams seem to take advantage of. If you’re developing an e-commerce app, how can you monitor what product lines are selling well? And can you correlate user profiles to spending habits? The data is all there – if you can tap into it. Application Insights, coupled with analytics and PowerBI, can empower a new level of insight that your business didn’t even know existed. DevOps (which includes monitoring) will drive you to architect good “business monitoring” into your apps.
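
As a sketch of what that can look like (the event and property names are invented), a custom event sent through the Application Insights SDK is all it takes to start capturing business telemetry:

using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public class CheckoutService
{
    private readonly TelemetryClient telemetry = new TelemetryClient();

    public void RecordPurchase(string productLine, string customerSegment)
    {
        // custom dimensions can later be sliced in Analytics or PowerBI
        telemetry.TrackEvent("ProductPurchased", new Dictionary<string, string>
        {
            { "ProductLine", productLine },
            { "CustomerSegment", customerSegment }
        });
    }
}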

    Build and Release Responsibilities

One more nugget that’s been invaluable for successful pipelines: know what builds do and what releases do. Builds should take source code (and packages) as inputs, run quality checks such as code analysis and unit testing, and produce packaged binaries as outputs. These packages should be “environment agnostic” – that is, they should not need to know about environments or be tied to environments. Similarly your builds should not need connection strings or anything like that, since the testing that occurs during a build should be unit tests that are fast and have no external dependencies.

    This means that you’ll have to have packages that have “holes” in them where environment values can later be injected. Or you may decide to use environment variables altogether and have no configuration files. However you do it, architecting configuration correctly (and fit for purpose, since there can be many correct ways) will make deployment far easier.

    Releases need to know about environments. After all, they’re going to be deploying to the environments, or perhaps even spinning them up and configuring them. This is where your integration and functional tests should be running, since some infrastructure is required.

    Conclusion

    Good architecture makes good DevOps a lot easier to implement – and good DevOps feeds back into improving the architecture of your application as well as your processes. The latest trend of “shift left” means you need to be thinking about more than just solving a problem in code – you need to be thinking beyond just coding. Think about how the code you’re writing is going to be tested. And how it’s going to be configured on different environments. And how it’s going to be deployed. And how you’re going to spin up the infrastructure you need to run it. And how you’re going to monitor it.

    The benefits, however, of this “early effort” will pay off many times over in the long run. You’ll be faster, leaner and meaner than ever. Happy DevOps architecting!

    Running Selenium Tests in Docker using VSTS Release Management


    The other day I was doing a POC to run some Selenium tests in a Release. I came across some Selenium docker images that I thought would be perfect – you can spin up a Selenium grid (or hub) container and then join as many node containers as you want to (the node container is where the tests will actually run). The really cool thing about the node containers is that the container is configured with a browser (there are images for Chrome and Firefox) meaning you don’t have to install and configure a browser or manually run Selenium to join the grid. Just fire up a couple containers and you’re ready to test!

    The source code for this post is on Github.

    Here’s a diagram of the components:

    image

    The Tests

    To code the tests, I use Selenium WebDriver. When it comes to instantiating a driver instance, I use the RemoteWebDriver class and pass in the Selenium Grid hub URL as well as the capabilities that I need for the test (including which browser to use) – see line 3:

    private void Test(ICapabilities capabilities)
    {
        var driver = new RemoteWebDriver(new Uri(HubUrl), capabilities);
        driver.Navigate().GoToUrl(BaseUrl);
        // other test steps here
    }
    
    [TestMethod]
    public void HomePage()
    {
        Test(DesiredCapabilities.Chrome());
        Test(DesiredCapabilities.Firefox());
    }
    

    Line 4 includes a setting that is specific to the test – in this case the first page to navigate to.

    When running this test, we need to be able to pass the environment specific values for the HubUrl and BaseUrl into the invocation. That’s where we can use a runsettings file.

    Test RunSettings

    The runsettings file for this example is simple – it’s just XML and we’re just using the TestRunParameters element to set the properties:

<?xml version="1.0" encoding="utf-8" ?>
<RunSettings>
  <TestRunParameters>
    <Parameter name="BaseUrl" value="http://bing.com" />
    <Parameter name="HubUrl" value="http://localhost:4444/wd/hub" />
  </TestRunParameters>
</RunSettings>

You can of course add other settings to the runsettings file for the other environment-specific values you need to run your tests. To test the settings in VS, make sure to go to Test->Test Settings->Select Test Settings File and browse to your runsettings file.
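
One way the HubUrl and BaseUrl properties used in the test above can be wired up to these parameters is via the MSTest TestContext – a minimal sketch, assuming that’s how the test class reads them:

public TestContext TestContext { get; set; }

private string HubUrl => TestContext.Properties["HubUrl"].ToString();
private string BaseUrl => TestContext.Properties["BaseUrl"].ToString();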

    The Build

The build is really simple – in my case I just build the test project. Of course in the real world you’ll be building your application as well as the test assemblies. The key here is to ensure that you upload the test assemblies as well as the runsettings file (covered above) to the drop. The runsettings file can be uploaded using two methods: either copy it using a Copy Files task into the artifact staging directory – or you can mark the file’s properties in the solution to “Copy Always” to ensure it’s copied to the bin folder when you compile. I’ve selected the latter option.

    Here’s what the properties for the file look like in VS:

    image

    Here’s the build definition:

    image

    The Docker Host

    If you don’t have a docker host, the fastest way to get one is to spin it up in Azure using the Azure CLI – especially since that will create the certificates to secure the docker connection for you! If you’ve got a docker host already, you can skip this section – but you will need to know where the certs are for your host for later steps.

    Here are the steps you need to take to do that (I did this all in my Windows Bash terminal):

1. Install node and npm
2. Install the azure-cli using “npm install -g azure-cli”
3. Run “azure login” and log in to your Azure account
4. Don’t forget to set your subscription if you have more than one
5. Create an Azure Resource Group using “azure group create <name> <location>”
6. Run “azure vm image list -l westus -p Canonical” to get a list of the Ubuntu images. Select the Urn of the image you want to base the VM on and store it – it will be something like “Canonical:UbuntuServer:16.04-LTS:16.04.201702240”. I’ve saved the value into $urn for the next command.
    7. Run the azure vm docker create command – something like this:

        azure vm docker create --data-disk-size 22 --vm-size "Standard_d1_v2" --image-urn $urn --admin-username vsts --admin-password $password --nic-name "cd-dockerhost-nic" --vnet-address-prefix "10.2.0.0/24" --vnet-name "cd-dockerhost-vnet" --vnet-subnet-address-prefix "10.2.0.0/24" --vnet-subnet-name "default" --public-ip-domain-name "cd-dockerhost"  --public-ip-name "cd-dockerhost-pip" --public-ip-allocationmethod "dynamic" --name "cd-dockerhost" --resource-group "cd-docker" --storage-account-name "cddockerstore" --location "westus" --os-type "Linux"

Here’s the run from within my bash terminal:

image

    Here’s the result in the Portal:

    image

    Once the docker host is created, you’ll be able to log in using the certs that were created. To test it, run the following command:

    docker -H $dockerhost --tls info

image

    I’ve included the commands in a fish script here.

    The docker-compose.yml

    The plan is to run multiple containers – one for the Selenium Grid hub and any number of containers for however many nodes we want to run tests in. We can call docker run for each container, or we can be smart and use docker-compose!

    Here’s the docker-compose.yml file:

    hub:
      image: selenium/hub
      ports:
        - "4444:4444"
    chrome-node:
      image: selenium/node-chrome
      links:
        - hub
    
    ff-node:
      image: selenium/node-firefox
      links:
        - hub
    
    
Here we define three containers – named hub, chrome-node and ff-node. For each container we specify what image should be used (this is the image that is passed to a docker run command). For the hub, we map the container port 4444 to the host port 4444. This is the only port that needs to be accessible outside the docker host. The node containers don’t need to map ports since we’re never going to target them directly. To connect the nodes to the hub, we simply use the links keyword and specify the name(s) of the containers we want to link to – in this case, we’re linking both nodes to the hub container. Internally, the node containers will use this link to wire themselves up to the hub – we don’t need to do any of that plumbing ourselves – really elegant!

    The Release

    The release requires us to run docker commands to start a Selenium hub and then as many nodes as we need. You can install this extension from the marketplace to get docker tasks that you can use in build/release. Once the docker tasks get the containers running, we can run our tests, passing in the hub URL so that the Selenium tests hit the hub container, which will distribute the tests to the nodes based on the desired capabilities. Once the tests complete, we can optionally stop the containers.

    Define the Docker Endpoint

    In order to run commands against the docker host from within the release, we’ll need to configure a docker endpoint. Once you’ve installed the docker extension from the marketplace, navigate to your team project and click the gear icon and select Services. Then add a new Docker Host service, entering your certificates:

    image

    Docker VSTS Agent

We’re almost ready to create the release – but you need an agent that has the docker client installed so that it can run docker commands! The easiest way to do this is to run the VSTS agent docker image on your docker host. Here’s the command:

    docker -H $dockerhost --tls run --env VSTS_ACCOUNT=$vstsAcc --env VSTS_TOKEN=$pat --env VSTS_POOL=docker -it microsoft/vsts-agent

    I am connecting this agent to a queue called docker – so I had to create that queue in my VSTS project. I wanted a separate queue because I want to use the docker agent to run the docker commands and then use the hosted agent to run the tests – since the tests need to run on Windows. Of course I could have just created a Windows VM with the agent and the docker bits – that way I could run the release on the single agent.

    The Release Definition

Create a new Release Definition and start from the empty template. Set the build to the build that contains your tests so that the tests become an artifact for the release. Conceptually, we want to spin up the Selenium containers for the test, run the tests and then (optionally) stop the containers. You also want to deploy your app, typically before you run your tests – I’ll skip the deployment steps for this post. You can do all three of these phases on a single agent – as long as the agent has docker (and docker-compose) installed and VS 2017 to run tests. Alternatively, you can do what I’m doing and create three separate phases – the docker commands run against a docker-enabled agent (the VSTS docker image that we just got running) while the tests run off a Windows agent. Here’s what that looks like in a release:

    image

    Here are the steps to get the release configured:

    1. Create a new Release Definition and rename the release by clicking the pencil icon next to the name
    2. Rename “Environment 1” to “Test” or whatever you want to call the environment
    3. Add a “Run on agent” phase (click the dropdown next to the “Add Tasks” button)
    4. Set the queue for that phase to “docker” (or whatever queue you are using for your docker-enabled agents)
      1. image
    5. In this phase, add a “Docker-compose” task and configure it as follows:
      1. image
      2. Change the action to “Run service images” (this ends up calling docker-compose up)
      3. Uncheck Build Images and check Run in Background
      4. Set the Docker Host Connection
    6. In the next phase, add tasks to deploy your app (I’m skipping these tasks for this post)
    7. Add a VSTest task and configure it as follows:
      1. image
      2. I’m using V2 of the Test Agent task
      3. I update the Test Assemblies filter to find any assembly with UITest in the name
      4. I point the Settings File to the runsettings file
      5. I override the values for the HubUrl and BaseUrl using environment variables
      6. Click the ellipses button on the Test environment and configure the variables, using the name of your docker host for the HubUrl (note also how the port is the port from the docker-compose.yml file):
      7. image
    8. In the third (optional) phase, I use another Docker Compose task to run docker-compose down to shut down the containers
      1. image
      2. This time set the Action to “Run a Docker Compose command” and enter “down” for the Command
      3. Again use the docker host connection

    We can now queue and run the release!

    My release is successful and I can see the tests in the Tests tab (don’t forget to change the Outcome filter to Passed – the grid defaults this to Failed):

    image

    Some Challenges

    Docker-compose SSL failures

    I could not get the docker-compose task to work using the VSTS agent docker image. I kept getting certificate errors like this:

    SSL error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)

I did log an issue on the VSTS Docker Tasks repo, but I’m not sure if this is a bug in the extension or the VSTS docker agent. I was able to replicate this behavior locally by running docker-compose. What I found is that I can run docker-compose successfully if I explicitly pass in the ca.pem, cert.pem and key.pem files as command arguments – but if I specify them using environment variables, docker-compose fails with the SSL error. I was able to run docker commands successfully using the Docker tasks in the release – but that would mean running three commands (assuming I only want three containers) in the pre-test phase and another three in the post-test phase to stop each container. Here’s what that would look like:

    image

    You can use the following commands to run the containers and link them (manually doing what the docker-compose.yml file does):

    run -d -P --name selenium-hub selenium/hub

    run -d --link selenium-hub:hub selenium/node-chrome

    run -d --link selenium-hub:hub selenium/node-firefox

    To get the run for this post working, I just ran the docker-compose from my local machine (passing in the certs explicitly) and disabled the Docker Compose task in my release.
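
For reference, passing the certs explicitly to docker-compose looks something like this (the cert paths and the $dockerhost value are placeholders for your own):

docker-compose -H $dockerhost --tlsverify \
  --tlscacert=./certs/ca.pem \
  --tlscert=./certs/cert.pem \
  --tlskey=./certs/key.pem \
  up -d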

    Running Tests in the Hosted Agent

    I also could not get the test task to run successfully using the hosted agent – but it did run successfully if I used a private windows agent. This is because at this time VS 2017 is not yet installed on the hosted agent. Running tests from the hosted agent will work just fine once VS 2017 is installed onto it.

    Pros and Cons

    This technique is quite elegant – but there are pros and cons.

    Pros:

    • Get lots of Selenium nodes registered to a Selenium hub to enable lots of parallel testing (refer to my previous blog on how to run tests in parallel in a grid)
    • No config required – you can run tests on the nodes as-is

    Cons:

• Only Chrome and Firefox tests are supported, since there are only docker images for these browsers. Technically you could join any node you want to the hub container if you wanted other browsers, but at that point you may as well configure the hub outside docker anyway.

    Conclusion

    I really like how easy it is to get a Selenium grid up and running using Docker. This should make testing fast – especially if you’re running tests in parallel. Once again VSTS makes advanced pipelines easy to tame!

    Happy testing!

    Easy Config Management when Deploying Azure Web Apps from VSTS


A good DevOps pipeline should utilize the principle of build once, deploy many times. In fact, I’d go so far as to say it’s essential for a good DevOps pipeline. That means that you have to have a way to manage your configuration in such a way that the package coming out of the build process is tokenized somehow, so that when you release to different environments you can inject environment-specific values. Easier said than done – until now.

    Doing it the Right but Hard Way

Until now, I’ve recommended that you use WebDeploy to do this. You define a publish profile to handle connection strings and a parameters.xml file to handle any other config you want to tokenize during build. This produces a WebDeploy zip file along with a (now tokenized) SetParameters.xml file. Then you use the ReplaceTokens task from my VSTS build/release task pack extension and inject the environment values into the SetParameters.xml file before invoking WebDeploy. This works, but it’s complicated. You can read a full end-to-end walkthrough in this post.

    Doing it the Easy Way

    A recent release to the Azure Web App deploy task in VSTS has just dramatically simplified the process! No need for parameters.xml or publish profiles at all.

    Make sure your build is producing a WebDeploy zip file. You can read my end to end post on how to add the build arguments to the VS Build task – but now you don’t have to specify a publish profile. You also don’t need a parameters.xml in the solution. The resulting zip file will deploy (by default) with whatever values you have in the web.config at build time.

    Here’s what I recommend:

    /p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactStagingDirectory)"

    You can now just paste that into the build task:

    image

You can see the args (in the new build UI). This tells VS to create the WebDeploy zip and put it into the artifact staging directory. The Publish Artifact Drop task uploads anything that’s in the artifact staging directory (again, by default) – which at the time it runs should be the WebDeploy files.

    The Release

Here’s where the magic comes in: drop in an Azure App Service Deploy task. Set its version to 3.* (preview). You’ll see a new section called “File Transforms & Variable Substitution Options”. Just enable the “XML Override substitution” option.

    image

That’s it! Except for defining the values we want to use for the substitution. To do this, open the web.config and look at your app setting keys or connection string names. Create a variable that matches the name of the setting and enter a value. In my example, I’m using Azure B2C, so I need a key called “ida:Tenant” – I just created a variable with that name and set the value for the DEV environment. I did the same for the other web.config variables:

    image
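
For context, the relevant section of the web.config looks something like this (the values and the second key are invented for illustration) – the task matches release variable names against these keys and swaps in the values at deploy time:

<appSettings>
  <add key="ida:Tenant" value="dev-tenant.onmicrosoft.com" />
  <add key="ida:ClientId" value="00000000-0000-0000-0000-000000000000" />
</appSettings>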

    Now you can run your release!

    Checking the Web.Config Using Kudu

Once the release had completed, I wanted to check whether the values had been set. I opened up the web app in the Azure portal, but there were no app settings defined there. I suppose that makes sense – the substitutions are made to the web.config itself. So I just opened the Kudu console for the web app and cat’ed the web.config by typing “cat Web.config”. I could see that the environment values had been injected!

    image

    Conclusion

    It’s finally become easy to manage web configs using the VSTS Azure Web App Deploy task. No more publish profiles, parameters.xml files, SetParameters.xml files or token replacement. It’s refreshingly clean and simple. Good job VSTS team!

I did note that there is also the possibility of injecting environment-specific values into a json file – so if you have .NET Core apps, you can easily inject values at deploy time.

    Happy releasing!


    New Task: Tag Build or Release


    I have a build/release task pack in the marketplace. I’ve just added a new task that allows you to add tags to builds or releases in the pipeline, inspired by my friend and fellow MVP Rene van Osnabrugge’s excellent post.

    Here are a couple of use cases for this task:

    1. You want to trigger releases, but only for builds on a particular branch with a particular tag. This trigger only works if the build is tagged during the build. So you could add a TagBuild task to your build that is only run conditionally (for example for buildreason = Pull Request). Then if the condition is met, the tag is set on the build and the release will trigger in turn, but only for builds that have the tag set.
      1. image
    2. You want to tag a build from a release once a release gets to a certain environment. For example, you can add a TagBuild task and tag the primary build once all the integration tests have passed in the integration environment. That way you can see which builds have passed integration tests simply by querying the tags.
      1. image

    Of course you can use variables for the tags – so you could tag the build with the release(s) that have made it to prod by specifying $(Release.ReleaseNumber) as the tag value.

    There are of course a ton of other use cases!

    Tag Types

    You can see the tag type matrix for the “tag type” (which can be set to Build or Release) in the docs.

    Conclusion

    Let me know if you have issues or feedback. Otherwise, happy taggin’!

    Testing in Production: Routing Traffic During a Release


    DevOps is a journey that every team should at least have started by now. Most of the engagements I have been on in the last year or so have been in the build/release automation space. There are still several practices that I think teams must invest in to remain competitive – unit testing during builds and integration testing during releases are crucial foundations for more advanced DevOps, which I’ve blogged about (a lot) before. However, Application Performance Monitoring (APM) is also something that I believe is becoming more and more critical to successful DevOps teams. And one application of monitoring is hypothesis driven development.

    Hypothesis Driven Development using App Service Slots

    There are some prerequisites for hypothesis driven development: you need to have metrics that you can measure (I highly, highly recommend using Application Insights to gather the metrics) and you have to have a hypothesis that you can quickly test. Testing in production is the best way to do this – but how do you manage that?

    If you’re deploying to Azure App Services, then it’s pretty simple: create a deployment slot on the Web App that you can deploy the “experimental” version of your code to and divert a small percentage of traffic from the real prod site to the experimental slot. Then monitor your metrics. If you’re happy, swap the slots, instantly promoting the experiment. If it does not work, then you’ve failed fast – and you can back out.

    Sounds easy. But how do you do all of that in an automated pipeline? Well, you can already deploy to a slot using VSTS and you can already swap slots using OOB tasks. What’s missing is the ability to route a percentage of traffic to a slot.

    Route Traffic Task

To quote Professor Farnsworth, “Good news everyone!” There is now a VSTS task in my extension pack that allows you to configure a percentage of traffic to a slot during your release – the Route Traffic task. To use it, just deploy the new version of the site to a slot and then drop in a Route Traffic task to route a percentage of traffic to the staging site. At this point, you can approve or reject the experiment – in both cases, take the traffic percentage for the slot down to 0 (so that 100% of traffic goes to the production slot) and then, if the experiment is successful, swap the slots.
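
If you want to inspect or tweak the same routing outside the pipeline, the Azure CLI exposes it too – something like this (resource and slot names are placeholders):

az webapp traffic-routing set --resource-group my-rg --name my-webapp --distribution blue=23

# later, send all traffic back to the production slot
az webapp traffic-routing clear --resource-group my-rg --name my-webapp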

    What He Said – In Pictures

    To illustrate that, here’s an example. In this release I have DEV and QA environments (details left out for brevity), and then I’ve split prod into Prod-blue, blue-cleanup and Prod-success. There is a post-approval set on Prod-blue. For both success and failure of the experiment, approve the Prod-blue environment. At this stage, blue-cleanup automatically runs, turning the traffic routing to 0 for the experimental slot. Then Prod-success starts, but it has a pre-approval set that you can approve only if the experiment is successful: it swaps the slots.

    Here is the entire release in one graphic:

    image

    In Prod-blue, the incoming build is deployed to the “blue” slot on the web app:

    image

    Next, the Route Traffic task routes a percentage of traffic to the blue slot (in this case, 23%):

    image

    If you now open the App Service in the Azure Portal, click on “Testing in Production” to view the traffic routing:

    image

    Now it’s time to monitor the two slots to check if the experiment is successful. Once you’ve determined the result, you can approve the Prod-blue environment, which automatically triggers the blue-cleanup environment, which updates the traffic routing to route 0% traffic to the blue slot (effectively removing the traffic route altogether).

    image

    Then the Prod-success environment is triggered with a manual pre-deployment configured – reject to end the experiment (if it failed) or approve to execute the swap slot task to make the experimental site production.

    image

    Whew! We were able to automate an experiment fairly easily using the Route Traffic task!

    Conclusion

    Using my new Route Traffic task, you can easily configure traffic routing into your pipeline to conduct true A/B testing. Happy hypothesizing!

    Aurelia, Azure and VSTS


I am a huge fan of Aurelia – and that was even when I was working with it in the beta days. I recently had to do some development to display d3 graphs, and needed a simple SPA app. Of course I decided to use Aurelia. During development, I was again blown away by how well thought out Aurelia is – and using some new (to me) tooling, the experience was super. In this post I’ll walk through the tools that I used as well as the build/release pipeline that I set up to host the site in Azure.

    Tools

    Here are the tools that I used:

    1. aurelia-cli to create the project, scaffold and install components, build and run locally
    2. VS Code for frontend editing, with a great Aurelia extension
    3. Visual Studio 2017 for coding/running the API
    4. TypeScript for the Aurelia code
    5. Karma (with phantomJS) and Istanbul for frontend testing and coverage
    6. .NET Core for the Aurelia host as well as for an API
    7. Azure App Services to host the web app
    8. VSTS for Git source control, build and release

    The Demo App and the Challenges

    To walk through the development process, I’m going to create a stupid-simple app. This isn’t a coding walkthrough per se – I want to focus on how to use the tooling to support your development process. However, I’ll demonstrate the challenges as well as the solutions, hopefully showing you how quickly you can get going and do what you do best – code!

    The demo app will be an Aurelia app with just a REST call to an API. While it is a simple app, I’ll walk through a number of important development concepts:

    1. Creating a new project
    2. Configuring VS Code
    3. Installing components
    4. Building, bundling and running the app locally
    5. Handling different configs for different environments
    6. Automated build, test and deployment of the app

    Creating the DotNet Projects

    There are some prerequisites to getting started, so I installed all of these:

    • nodejs
    • npm
    • dotnet core
    • aurelia-cli
    • VS Code
    • VS 2017

    Once I had the prereqs installed, I created a new empty folder (actually I cloned an empty Git repo – if you don’t clone a repo, remember to git init). Since I wanted to peg the dotnet version, I created a new file called global.json:

    {
      "sdk": {
        "version": "1.0.4"
      }
    }
    
    I also created a .gitignore (helpful tip: if you open the folder in Visual Studio and use Team Explorer->Settings->Repository Settings, you can create a default .gitignore and .gitattributes file).

    Then I created a new dotnet webapi project to “host” the Aurelia app in a folder called frontend and another dotnet project to be the API in a folder called API:

    image

    The commands are:

    mkdir frontend
    cd frontend
    dotnet new webapi
    cd ..
    mkdir API
    cd API
    dotnet new webapi
    

    I then opened the API project in Visual Studio. Pressing save prompted me to create a solution file, which I did in the API folder. I also created an empty readme.txt file in the wwwroot folder (I’ll explain why when we get to the build) and changed the Launch URL in the project properties to “api/values”:

    image

    When I press F5 to debug, I see this:

    image

    Creating the Aurelia Project

I was now ready to create the Aurelia skeleton. The last time I used Aurelia, there was no such thing as the aurelia-cli – so it was a little bumpy getting started. I found using the cli and the project structure it creates for building/bundling made development smooth as butter. So I cd’d back to the frontend folder and ran the aurelia-cli command to create the Aurelia project: au new --here. The “--here” is important because it tells the aurelia-cli to create the project in this directory without creating another subdirectory. A wizard then walked me through some choices: here are my responses:

    • Target platform: .NET Core
    • Transpiler: TypeScript
    • Template: With minimum minification
    • CSS Processor: Less
    • Unit testing: Yes
    • Install dependencies: Yes

    That created the Aurelia project for me and installed all of the nodejs packages that Aurelia requires. Once the install completed, I was able to run by typing “au run”:

    image

    Whoop! The skeleton is up, so it’s time to commit!

You can find the repo I used for this post here. There are various branches – start is the start of the project up until now – in other words, the absolute bare skeleton of the project.

    Configuring VS Code

    Now that I have a project structure, I can start coding. I’ve already got Visual Studio for the API project, which I could use for the frontend editing, but I really like doing nodejs development in VS Code. So I open up the frontend folder in VS Code.

    I’ve also installed some VS Code extensions:

1. VSCode Great Icons – makes the icons in the file explorer purdy (don’t forget to configure your preferences after you install the extension!)
2. TSLint – lints my TypeScript as I code
3. aurelia – palette commands and html intellisense

    Configuring TSLint

There is already an empty tslint.json file in the root of the frontend project. Once you’ve installed the VS Code TSLint extension, you’ll see lint warnings in the status bar – though you first have to configure which rules you want to run. I usually start by extending the tslint:latest rules. Edit the tslint.json file to look like this:

    {
      "extends": ["tslint:latest"],
      "rules": {
        
      }
    }
    

    Now you’ll see some warnings and green squigglies in the code:

    image

    I don’t care about the type of quotation marks (single or double) and I don’t care about alphabetically ordering my imports, so I override those rules:

    {
      "extends": ["tslint:latest"],
      "rules": {
        "ordered-imports": [
          false
        ],
        "quotemark": [
          false
        ]
      }
    }
    

    Of course you can put whatever ruleset you want into this file – but making a coding standard for your team that’s enforced by a tool rather than in a wiki or word doc is a great practice! A helpful tip is that if you edit the json file in VS Code you get intellisense for the rules – and you can see the name of the rule in the warnings window.

    Installing Components

    Now we can use the aurelia cli (au) to install components. For example, I want to do some REST calls, so I want to install the fetch-client:

    au install aurelia-fetch-client whatwg-fetch

    This not only adds the package, but amends the aurelia.json manifest file (in the aurelia_project folder) so that the aurelia-fetch-client is bundled when the app is “compiled”. I also recommend installing whatwg-fetch which is a fetch polyfill. Let’s create a new class which is a wrapper for the fetch client:

    import { autoinject } from 'aurelia-framework';
    import { HttpClient } from 'aurelia-fetch-client';
    
    const baseUrl = "http://localhost:1360/api";
    
    @autoinject
    export class ApiWrapper {
        public message = 'Hello World!';
        public values: string[];
    
        constructor(public client: HttpClient) {
    		client.configure(config => {
    			config
    				.withBaseUrl(baseUrl)
    				.withDefaults({
    					headers: {
    						Accept: 'application/json',
    					},
    				});
    		});
    	}
    }
    

    Note that (for now) we’re hard-coding the baseUrl. We’ll address config shortly.

We can now import the ApiWrapper, inject it, and call the values endpoint:

    import { autoinject } from 'aurelia-framework';
    import { ApiWrapper } from './api';
    
    @autoinject
    export class App {
      public message = 'Hello World!';
      public values: string[];
    
      constructor(public api: ApiWrapper) {
        this.initValues();
      }
    
      private async initValues() {
        try {
          this.values = await this.api.client.fetch("/values")
            .then((res) => res.json());
        } catch (ex) {
          console.error(ex);
        }
      }
    }
    

    Here’s the updated html for the app.html page:
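
A minimal sketch of what that template might look like (assuming we simply bind the message and repeat over the fetched values):

<template>
  <h1>${message}</h1>
  <ul>
    <li repeat.for="value of values">${value}</li>
  </ul>
</template>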

Nothing too fancy – but shown here for completeness. I’m not going to make a full app here, since that’s not the goal of this post.

    Finally, we need to enable CORS on the Web API (since it does not allow CORS by default). Add the Microsoft.AspNet.Cors package to the API project and then add the services.AddCors() and app.UseCors() lines (see this snippet):

    public void ConfigureServices(IServiceCollection services)
    {
      // Add framework services.
      services.AddMvc();
    	services.AddCors();
    }
    
    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
      loggerFactory.AddConsole(Configuration.GetSection("Logging"));
      loggerFactory.AddDebug();
    
      app.UseCors(p => p.AllowAnyOrigin().AllowAnyMethod());
      app.UseMvc();
    }
    

    Now we can get this when we run the project (using “au run”):

    image

    If you’re following along in the repo code, the changes are on the “Step1-AddFetch” branch.

    Running Locally

    Running locally is trivial. I end up with Visual Studio open and pressing F5 to run the backend API project – the frontend project is just as trivial. In VSCode, with the frontend folder open, just hit ctrl-shift-p to bring up the command palette and then type/select “au run --watch” to launch the frontend “build”. This transpiles the TypeScript to JavaScript, compiles Less (or SASS) to css, bundles and minifies all your html, compiled css and JavaScript into a single app-bundle.js in wwwroot\scripts. It also minifies and bundles Aurelia and its dependencies into vendor-bundle.js, using the settings from the aurelia.json file. It’s a lot of work, but Aurelia takes care of it all for you – just run “au run” to do all that stuff and launch a server. If you add the --watch parameter, the process watches your source files (html, Less, TypeScript) and automatically recompiles everything and refreshes the browser automagically using browsersync. It’s as smooth as butter!

    Config Management

    Attempt 1 – Using environment.ts

    Let’s fix up the hard-coded base URL for the api class. Aurelia does have the concept of “environments” – you can see that by looking in the src\environment.ts file. You would be tempted to change the values of that file, but you’ll see that if you do, the contents get overwritten each time Aurelia compiles. Instead, open up the aurelia-project\environments folder, where you’ll see three environment files – dev, stage and prod.ts. To change environment, just enter “au run --env dev” to get the dev environment or “au run --env prod” to get the prod environment. (Unfortunately you can’t change the environment using VSCode command palette, so you have to run the run command from a console or from the VSCode terminal).

    Let’s edit the environments to put the api base URL there instead of hard-coding it:

    export default {
      apiBaseUrl: "http://localhost:64705/api",
      debug: true,
      testing: true,
    };
    

    Of course we add the apiBaseUrl property to the stage and prod files too!

    With that change, we can simply import the environment and use the value of the property in the api.ts file:

    import { autoinject } from 'aurelia-framework';
    import { HttpClient } from 'aurelia-fetch-client';
    import environment from './environment';
    
    @autoinject
    export class ApiWrapper {
        public message = 'Hello World!';
        public values: string[];
    
        constructor(public client: HttpClient) {
            client.configure(config => {
                config
                    .withBaseUrl(environment.apiBaseUrl)
                    .withDefaults({
                        headers: {
                            Accept: 'application/json',
                        },
                    });
            });
        }
    }
    

The important changes are on line 3 (reading in the environment settings) and line 13 (using the value). Now we can run for different environments. If you’re following along in the repo code, the changes are on the “Step2-EnvTsConfig” branch.

    Attempt 2 – Using a Json File

    There’s a problem with the above approach though – if we have secrets (like access tokens or keys) then we don’t want them checked into source control. Also, when we get to build/release, we want the same build to go to multiple environments – using environment.ts means we have to build once for each environment and then select the correct package for the corresponding environment – it’s nasty. Rather, we want to be able to configure the environment settings during a release. This puts secret information in the release tool instead of source control, which is much better, and allows a single build to be deployed to any number of environments.

Unfortunately, it’s not quite so simple (at first glance). The environment.ts file is bundled into app-bundle.js, so there’s no way to inject values at deploy time, unless you want to monkey with the bundle itself. It would be much better to take a leaf out of the .NET Core playbook and set up a Json config file. Fortunately, there’s an Aurelia plugin that allows you to do just that! Conveniently, it’s called aurelia-configuration.

    Run “au install aurelia-configuration” to install the module.

    Now (by convention) the config module looks for a file called “config\config.json”. So in the src folder, add a new folder called config and add a new file into the config folder called config.json:

{
  "api": {
    "baseUri": "http://localhost:12487/api"
  }
}
    

    We can then inject the AureliaConfiguration class into our classes and call the get() method to retrieve a variable value. Let’s change the api.ts file again:

    import { autoinject } from 'aurelia-framework';
    import { HttpClient } from 'aurelia-fetch-client';
    import { AureliaConfiguration } from 'aurelia-configuration';
    
    @autoinject
    export class ApiWrapper {
        public message = 'Hello World!';
        public values: string[];
    
        constructor(public client: HttpClient, private aureliaConfig: AureliaConfiguration) {
            client.configure(config => {
                config
                    .withBaseUrl(aureliaConfig.get("api.baseUri"))
                    .withDefaults({
                        headers: {
                            Accept: 'application/json',
                        },
                    });
            });
        }
    }
    

    Line 3 has us importing the type, line 10 has the constructor arg for the autoinjection and we get the value on line 13.

    We also have to tell Aurelia to use the config plugin. Open main.ts and add the plugin code (line 8 below):

    import {Aurelia} from 'aurelia-framework';
    import environment from './environment';
    
    export function configure(aurelia: Aurelia) {
      aurelia.use
        .standardConfiguration()
        .feature('resources')
        .plugin('aurelia-configuration');
      ...
    

    There’s one more piece to this puzzle: the config.json file doesn’t get handled anywhere, so running the program won’t work. We need to tell the Aurelia bundler that it needs to add in the config.json file and publish it to the wwwroot folder. To do that, we can add in a copyFiles target onto the aurelia.json settings file:

    {
      "name": "frontend",
      "type": "project:application",
      "platform": {
        ...
      },
      ...
      "build": {
        "targets": [
         ...
        ],
        "loader": {
          ...
        },
        "options": {
          ...
        },
        "bundles": [
          ...
        ],
        "copyFiles": {
          "src/config/*.json": "wwwroot/config"
        }
      }
    }
    

    At the bottom of the file, just after the build.bundles settings, we add the copyFiles target. The config.json file is now copied to the wwwroot/config folder when we build, ready to be read at run time! If you’re following along in the repo code, the changes are on the “Step3-JsonConfig” branch.

    Testing

    Authoring the Tests

    Of course the API project would require tests – but doing .NET testing is fairly simple and there’s a ton of guidance on how to do that. I was more interested in testing the frontend (Aurelia) code with coverage results.

    When I created the frontend project, Aurelia created a test stub project. If you open the test folder, there’s a simple test spec in unit\app.spec.ts:

    import {App} from '../../src/app';
    
    describe('the app', () => {
      it('says hello', () => {
        expect(new App().message).toBe('Hello World!');
      });
    });
    

    We’ve changed the App class, so this code won’t compile correctly. Now we need to pass an ApiWrapper to the App constructor. And if we want to construct an ApiWrapper, we need an AureliaConfiguration instance as well as an HttpClient instance. We’re going to want to mock the API calls that the frontend makes, so let’s stub out a mock implementation of HttpClient. I add a new class in test\unit\utils\mock-fetch.ts:

    import { HttpClient } from 'aurelia-fetch-client';
    
    export class HttpClientMock extends HttpClient {
    }
    

    We’ll flesh this class out shortly. For now, it’s enough to get an instance of HttpClient for the ApiWrapper constructor. What about the AureliaConfiguration instance? Fortunately, we can create (and even configure) one really easily:

    let aureliaConfig = new AureliaConfiguration();
    aureliaConfig.set("api.baseUri", "http://test");
    

    We add the “api.baseUri” key since that’s the value that the ApiWrapper reads from the configuration object. We can now flesh out the remainder of our test:

    import {App} from '../../src/app';
    import {ApiWrapper} from '../../src/api';
    import {HttpClientMock} from './utils/mock-fetch';
    import {AureliaConfiguration} from 'aurelia-configuration';
    
    describe('the app', () => {
      it('says hello', async done => {
        // arrange
        let aureliaConfig = new AureliaConfiguration();
        aureliaConfig.set("api.baseUri", "http://test");
    
        const client = new HttpClientMock();
        client.setup({
          data: ["testValue1", "testValue2", "testValue3"],
          headers: {
            'Content-Type': "application/json",
          },
          url: "/values",
        });
        const api = new ApiWrapper(client, aureliaConfig);
    
        // act
        let sut: App;
        try {
          sut = new App(api);
        } catch (e) {
          console.error(e);
        }
    
        // assert
        setTimeout(() => {
          expect(sut.message).toBe('Hello World!');
          expect(sut.values.length).toBe(3);
          expect(sut.values).toContain("testValue1");
          expect(sut.values).toContain("testValue2");
          expect(sut.values).toContain("testValue3");
          done();
        }, 10);
      });
    });
    

    Notes:

    • Lines 13-19: configure the mock fetch response (we’ll see the rest of the mock HttpClient class shortly)
    • Line 20: instantiate a new ApiWrapper
    • Lines 23-28: call the App constructor
    • Lines 31-38: we wrap the asserts in a timeout since the App constructor calls an async method (perhaps there’s a better way to do this?)

    Let’s finish off the test code by looking at the mock-fetch class:

    import { HttpClient } from 'aurelia-fetch-client';
    
    export interface IMethodConfig {
        url: string;
        method?: string;
        status?: number;
        statusText?: string;
        headers?: {};
        data?: {};
    };
    
    export class HttpClientMock extends HttpClient {
        private config: IMethodConfig[] = [];
    
        public setup(config: IMethodConfig) {
            this.config.push(config);
        }
    
        public async fetch(input: Request | string, init?: RequestInit) {
            let url: string;
            if (typeof input === "string") {
                url = input;
            } else {
                url = input.url;
            }
    
            // find the matching setup method
            let methodConfig: IMethodConfig;
            methodConfig = this.config.find(c => c.url === url);
            if (!methodConfig) {
                console.error(`---MockFetch: No such method setup: ${url}`);
                return Promise.reject(new Response(null,
                    {
                        status: 404,
                        statusText: `---MockFetch: No such method setup: ${url}`,
                    }));
            }
    
            // set up headers
            let responseInit: ResponseInit = {
                headers: methodConfig.headers || {},
                status: methodConfig.status || 200,
                statusText: methodConfig.statusText || "",
            };
    
            // get a unified request object
            let request: Request;
            if (Request.prototype.isPrototypeOf(input)) {
                request = (<Request> input);
            } else {
                request = new Request(input, responseInit || {});
            }
    
            // create a response object
            let response: Response;
            const data = JSON.stringify(methodConfig.data);
            response = new Response(data, responseInit);
    
            // resolve or reject accordingly
            return response.status >= 200 && response.status < 300 ?
                Promise.resolve(response) : Promise.reject(response);
        }
    }
    

    I won’t go through the whole class, but essentially you configure a mapping of routes to responses so that when the mock object is called it can return predictable data.

    With those changes in place, we can run the tests using “au test”. This launches Chrome and runs the test. The Aurelia project did the heavy lifting to configure paths for the test runner (Karma) so that the tests “just work”.

    Going Headless and Adding Reports and Coverage

    Now that we can run the tests in Chrome with results splashed to the console, we should consider how these tests would run in a build. Firstly, we want to produce a report file of some sort so that the build can save the results. We also want to add coverage. Finally, we want to run headless so that we can run this on an agent that doesn’t need access to a desktop to launch a browser!

    We’ll need to add some development-time node packages to accomplish these changes:

    yarn add karma-phantomjs-launcher karma-coverage karma-remap-istanbul karma-tfs gulp-replace --dev

    With those packages in place, we can change the karma.conf.js file to use PhantomJS (a headless browser) instead of Chrome. We’re also going to add in the test result reporter, coverage reporter and a coverage remapper. The coverage is reported against the compiled JavaScript files, but we would ideally want coverage on the TypeScript files – that’s what the coverage remapper will do for us.

    Here’s the new karma.conf.js:

    'use strict';
    const path = require('path');
    const project = require('./aurelia_project/aurelia.json');
    const tsconfig = require('./tsconfig.json');
    
    let testSrc = [
      { pattern: project.unitTestRunner.source, included: false },
      'test/aurelia-karma.js'
    ];
    
    let output = project.platform.output;
    let appSrc = project.build.bundles.map(x => path.join(output, x.name));
    let entryIndex = appSrc.indexOf(path.join(output, project.build.loader.configTarget));
    let entryBundle = appSrc.splice(entryIndex, 1)[0];
    let files = [entryBundle].concat(testSrc).concat(appSrc);
    
    module.exports = function(config) {
      config.set({
        basePath: '',
        frameworks: [project.testFramework.id],
        files: files,
        exclude: [],
        preprocessors: {
          [project.unitTestRunner.source]: [project.transpiler.id],
          'wwwroot/scripts/app-bundle.js': ['coverage']
        },
        typescriptPreprocessor: {
          typescript: require('typescript'),
          options: tsconfig.compilerOptions
        },
        reporters: ['progress', 'tfs', 'coverage', 'karma-remap-istanbul'],
        port: 9876,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch: true,
        browsers: ['PhantomJS'],
        singleRun: false,
        // client.args must be a array of string.
        // Leave 'aurelia-root', project.paths.root in this order so we can find
        // the root of the aurelia project.
        client: {
          args: ['aurelia-root', project.paths.root]
        },
    
        phantomjsLauncher: {
          // Have phantomjs exit if a ResourceError is encountered (useful if karma exits without killing phantom)
          exitOnResourceError: true
        },
    
        coverageReporter: {
          dir: 'reports',
          reporters: [
            { type: 'json', subdir: 'coverage', file: 'coverage-final.json' },
          ]
        },
    
        remapIstanbulReporter: {
          src: 'reports/coverage/coverage-final.json',
          reports: {
            cobertura: 'reports/coverage/cobertura.xml',
            html: 'reports/coverage/html'
          }
        }
      });
    };
    

    Notes:

    • Line 25: add a preprocessor to instrument the code that we’re going to execute
    • Line 31: we add reporters to produce results files (tfs), coverage and remapping
    • Lines 45-48: we configure a catch-all to close phantomjs if something fails
    • Lines 50-55: we configure the coverage to output a Json coverage file
    • Lines 57-63: we configure the remapper so that we get TypeScript coverage results

    One gotcha I had that I couldn’t find a work-around for: the html report showing which lines of code were hit is generated with incorrect relative paths, and the src folder (with the detailed coverage pages) is generated outside the html report folder. Eventually, I decided that a simple replace and file move was all I needed, so I modified the test.ts task in the aurelia_project\tasks folder:

    // hack to fix the relative paths in the generated mapped html report
    // (this lives in aurelia_project/tasks/test.ts - it needs 'path' and 'gulp-replace'
    // alongside the existing test.ts imports, e.g. const replace = require('gulp-replace');)
    let fixPaths = done => {
      let repRoot = path.join(__dirname, '../../reports/');
      let repPaths = [
        path.join(repRoot, 'src/**/*.html'),
        path.join(repRoot, 'src/*.html'),
      ];
      return gulp.src(repPaths, { base: repRoot })
            .pipe(replace(/(..\/..\/..\/)(\w)/gi, '../coverage/html/$2'))
            .pipe(gulp.dest(path.join(repRoot)));
    };
    
    let unit;
    
    if (CLIOptions.hasFlag('watch')) {
      unit = gulp.series(
        build,
        gulp.parallel(
          watch(build, onChange),
          karma,
          fixPaths
        )
      );
    } else {
      unit = gulp.series(
        build,
        karma,
        fixPaths
      );
    }
    

    I add a new task called “fixPaths” that fixes up the paths for me and run it after karma. Perhaps there’s a config setting for the remapper that will render this obsolete, but this was the best I could come up with.

    Now when you run “au test” you get a result file and coverage results for the TypeScript code all in the html folder with the correct paths. If you’re following along in the repo code, these changes are on the master branch (this is the final state of the demo code).

    Automated Build and Test

    We now have all the pieces in place to do a build. The build is fairly straightforward once you work out how to invoke the Aurelia CLI. Starting with a .NET Core Web App template, here is the definition I ended up with:

    image

    Here are the task settings:

    1. .NET Core Restore – use defaults
    2. .NET Core Build
      1. Change “Arguments” to --configuration $(BuildConfiguration) --version-suffix $(Build.BuildNumber)
      2. The extra bit added is the version-suffix arg which produces binaries with the same version as the build number
    3. npm install
      1. Change “working folder” to frontend (this is the directory of the Aurelia project)
    4. Run command
      1. Set “Tool” to node
      2. Set “Arguments” to .\node_modules\aurelia-cli\bin\aurelia-cli.js test
      3. Expand “Advanced” and set “Working folder” to frontend
      4. This runs the tests and produces the test results and coverage results files
    5. Run command
      1. Set “Tool” to node
      2. Set “Arguments” to .\node_modules\aurelia-cli\bin\aurelia-cli.js build --env prod
      3. Expand “Advanced” and set “Working folder” to frontend
      4. This does transpilation, minification and bundling so that we’re ready to deploy
    6. Publish Test Results
      1. Set “Test Result Format” to VSTest
      2. Set “Test results files” to frontend/testresults/TEST*.xml
      3. Set “Test run title” to Aurelia
    7. Publish code coverage Results
      1. Set “Code Coverage Tool” to Cobertura
      2. Set “Summary File” to $(Build.SourcesDirectory)/frontend/reports/coverage/cobertura.xml
      3. Set “Report Directory” to $(System.DefaultWorkingDirectory)/frontend/reports/coverage/html
    8. .NET Core Publish
      1. Make sure “Publish Web Projects” is checked – this is why I added a dummy readme file into the wwwroot folder of the API app, otherwise it’s not published as a web project
      2. Set “Arguments” to --configuration $(BuildConfiguration) --output $(build.artifactstagingdirectory) --version-suffix $(Build.BuildNumber)
      3. Make sure “Zip Published Projects” is checked
    9. On the Options Tab
      1. Set the build number format to 1.0.0$(rev:.r) to give the build number a 1.0.0.x format
      2. Set the default agent queue to Hosted VS2017 (or you can select a private build agent with VS 2017 installed)

    Now when I run the build, I get test and coverage results in the summary:

    image

    The coverage files are there if you click the Code Coverage results tab, but there’s a problem with the css.

    image

    The <link> elements are stripped out of the html pages when the iFrame for the coverage results shows – I’m working with the product team to find a workaround for this. If you download the results from the Summary page and unzip them, you get the correct rendering.

    I can also see both web projects ready for deployment in the Artifacts tab:

    image

    We’re ready for a release!

    The Release Definition

    I won’t put the whole release to Azure here – the key point to remember is the configuration. We’ve done the work to move the configuration into the config.json file for this very reason.

    Once you’ve set up an Azure endpoint, you can add in an “Azure App Services Deploy” task. Select the subscription and app service and then change the “Package or folder” from “$(System.DefaultWorkingDirectory)/**/*.zip” to “$(System.DefaultWorkingDirectory)/drop/frontend.zip” (or API.zip) to deploy the corresponding site. To handle the configuration, you simply add “wwwroot/config/config.json” to the “JSON variable substitution”.

    image

    Now we can define an environment variable for the substitution. Just add one with the full “JSON path” for the variable. In our case, we want “api.baseUri” to be the name and then put in whatever the corresponding environment value is:

    image

    We can repeat this for other variables if we need more.
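
    For example (the URL below is just a placeholder for whatever your environment’s API endpoint really is), if the dev environment defines a variable named api.baseUri with the value https://myapp-api-dev.azurewebsites.net/api, the config.json deployed to that environment ends up looking like this:

    {
      "api": {
        "baseUri": "https://myapp-api-dev.azurewebsites.net/api"
      }
    }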

    Conclusion

    I really love the Aurelia framework – and with the solid Aurelia cli, development is a really good experience. Add to that simple build and release management to Azure using VSTS, and you can get a complete site skeleton with full CI/CD in half a day. And that means you’re delivering better software, faster – always a good thing!

    Happy Aurelia-ing!

    DevOps with Kubernetes and VSTS: Part 1


    If you've read my blog before, you'll probably know that I am a huge fan of Docker and containers. When was the last time you installed software onto bare metal? Other than your laptop, chances are you haven't for a long time. Virtualization has transformed how we think about resources in the datacenter, greatly increasing the density and utilization of resources. The next evolution in density is containers - just as VMs are to physical servers, containers are to VMs. Soon, almost no-one will work against VMs anymore - we'll all be in containers. At least, that's the potential.

    However, as cool as containers are for packaging up apps, there's still a lot of uncertainty about how to actually run containers in production. Creating a single container is a cool and satisfying experience for a developer, but how do you run a cluster and scale containers? How do you monitor your containers? How do you manage faults? This is where we enter the world of container orchestration.

    Orchestrator Wars

    There are three popular container orchestration systems - Mesos, Kubernetes and Docker Swarm. I don't want to go into a debate on which one you should go with (yet) - but they're all conceptually similar. They all work off configuration as code for spinning up lots of containers across lots of nodes. Kubernetes does have a couple of features that I think are killer for DevOps: ConfigMaps, Secrets and namespaces.

    In short, namespaces allow you to segregate logical environments in the same cluster - the canonical example is a DEV namespace where you can run small copies of your PROD environment for testing. You could also use namespaces for different security contexts or multi-tenancy. ConfigMaps (and Secrets) allow you to store configuration outside of your containers - which means you can have the same image running in various contexts without having to bake environment-specific code into the images themselves.
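
    To make this concrete (we'll use namespaces for real in Part 2), almost every kubectl command accepts a --namespace flag, so the same manifests and commands can be pointed at an isolated environment. A quick sketch (my-app.yml is just a placeholder manifest name):

    # apply a manifest into an isolated dev namespace, then inspect it
    kubectl apply -f my-app.yml --namespace dev
    kubectl get pods --namespace dev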

    Kubernetes Workflow and Pipeline

    In this post, I want to look at how you would develop with Kubernetes in mind. We'll start by looking at the developer workflow and then move on to how the DevOps pipeline looks in the next post. Fortunately, having MiniKube (a one-node Kubernetes cluster that runs in a VM) means that you can develop against a fully featured cluster on your laptop! That means you can take advantage of cluster features (like ConfigMaps) without having to be connected to a production cluster.

    So what would the developer workflow look like? Something like this:

    1. Develop code
    2. Build image from Dockerfile or docker-compose files
    3. Run service in MiniKube (which spins up containers from the images you just built)

    It turns out that Visual Studio 2017 (and/or VS Code), Docker and MiniKube make this a really smooth experience.

    Eventually you're going to move to the DevOps pipeline - starting with a build. The build will take the source files and Dockerfiles and build images and push them to a private container registry. Then you'll want to push configuration to a Kubernetes cluster to actually run/deploy the new images. It turns out that using Azure and VSTS makes this DevOps pipeline smooth as butter! That will be the subject of Part 2 - for now, we'll concentrate on the developer workflow.

    Setting up the Developer Environment

    I'm going to focus on a Windows setup, but the same setup would apply to Mac or Linux environments as well. To set up a local development environment, you need to install the following:

    1. Docker
    2. Kubectl
    3. MiniKube

    You can follow the links and run the installs. I had a bit of trouble with MiniKube on HyperV - by default, "minikube start" (the command that creates the MiniKube VM) just grabs the first HyperV virtual network it finds. I had a couple, and the one that MiniKube grabbed was an internal network, which caused MiniKube to fail. I created a new virtual network called minikube in the HyperV console and made sure it was an external network. I then used the following command to create the MiniKube VM:

    c:
    cd \
    minikube start --vm-driver hyperv --hyperv-virtual-switch minikube

    Note: I had to cd to c:\ - if I did not, MiniKube failed to create the VM.

    My external network is connected to my WiFi. That means when I join a new network, my minikube VM gets a new IP. Instead of having to update the kubeconfig each time, I just added an entry in my hosts file (c:\windows\system32\drivers\etc\hosts on Windows) using "<IP> kubernetes", where IP is the IP address of the minikube VM - obtained by running "minikube ip". To update the kubeconfig, run this command:

    kubectl config set-cluster minikube --server=https://kubernetes:8443 --certificate-authority=c:/users/<user>/.minikube/ca.crt

    where <user> is your username, so that the cert points to the ca.crt file generated into your .minikube directory.

    Now if you join a new network, you just update the IP in the hosts file and your kubectl commands will still work. The certificate is generated for a hostname "kubernetes" so you have to use that name.
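
    For reference, the hosts entry is a single line (the IP below is purely an example - use whatever "minikube ip" returns for your VM):

    # c:\windows\system32\drivers\etc\hosts
    192.168.1.42  kubernetes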

    If everything is working, then you should get a neat response to "kubectl get nodes":

    PS:\> kubectl get nodes
    NAME       STATUS    AGE       VERSION
    minikube   Ready     11m       v1.6.4

    To open the Kubernetes UI, just enter "minikube dashboard" and a browser will launch:

    image

    Finally, to "re-use" the minikube docker context, run the following command:

    & minikube docker-env | Invoke-Expression

    Now you are sharing the minikube docker socket. Running "docker ps" will return a few running containers - these are the underlying Kubernetes system containers. It also means you can create images here that the minikube cluster can run.
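
    If you later want to point your shell back at your normal Docker host, minikube can print the corresponding unset commands (assuming the same PowerShell session):

    & minikube docker-env --unset | Invoke-Expression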

    You now have a 1-node cluster, ready for development!

    Get Some Code

    I recently blogged about Aurelia development with Azure and VSTS. Since I already had a couple of .NET Core sites, I thought I would see if I could get them running in a Kubernetes cluster. Clone this repo and checkout the docker branch. I've added some files to the repo to support both building the Docker images as well as specifying Kubernetes configuration. Let's take a look.

    The docker-compose.yml file specifies a composite application made up of two images: api and frontend:

    version: '2'
    
    services:
      api:
        image: api
        build:
          context: ./API
          dockerfile: Dockerfile
    
      frontend:
        image: frontend
        build:
          context: ./frontend
          dockerfile: Dockerfile

    The Dockerfile for each service is straightforward: start from the ASP.NET Core 1.1 image, copy the application files into the container, expose port 80 and run "dotnet app.dll" (frontend.dll and api.dll for each site respectively) as the entry point for each container:

    FROM microsoft/aspnetcore:1.1
    ARG source
    WORKDIR /app
    EXPOSE 80
    COPY ${source:-obj/Docker/publish} .
    ENTRYPOINT ["dotnet", "API.dll"]

    To build the images, we first need to dotnet restore, build and publish each app; then we can build the images. Once we have images, we can configure a Kubernetes service to run the images in our minikube cluster.

    Building the Images

    The easiest way to get the images built is to use Visual Studio, set the docker-compose project as the startup project and run. That will build the images for you. But if you're not using Visual Studio, then you can build the images by running the following commands from the root of the repo:

    cd API
    dotnet restore
    dotnet build
    dotnet publish -o obj/Docker/publish
    cd ../frontend
    dotnet restore
    dotnet build
    dotnet publish -o obj/Docker/publish
    cd ..
    docker-compose -f docker-compose.yml build

    Now if you run "docker images" you'll see the minikube containers as well as images for the frontend and the api:

    image

    Declaring the Services - Configuration as Code

    We can now define the services that we want to run in the cluster. One of the things I love about Kubernetes is that it pushes you to declare the environment you want rather than running a script. This declarative model is far better than an imperative model, and we can see that with the rise of Chef, Puppet and PowerShell DSC. Kubernetes allows us to specify the services we want exposed as well as how to deploy them. We can define various Kubernetes objects using a simple yaml file. We're going to declare two services: an api service and a frontend service. Usually, the backend services won't be exposed outside the cluster, but since the demo code we're deploying is a single page app (SPA), we need to expose the api outside the cluster.

    The services are rarely going to change - they specify what services are available in the cluster. However, the underlying containers (or in Kubernetes speak, pods) that make up the service will change. They'll change as they are updated and they'll change as we scale out and then back in. To manage the containers that "make up" the service, we use a construct known as a Deployment. Since the service and deployment are fairly tightly coupled, I've placed them into the same file, so that we have a frontend service/deployment file (k8s/app-demo-frontend-minikube.yml) and an api service/deployment file (k8s/app-demo-backend-minikube.yml). The service and deployment definitions could live separately too if you want. Let's take a look at the app-demo-backend.yml file:

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-backend-service
      labels:
        app: demo
    spec:
      selector:
        app: demo
        tier: backend
      ports:
        - protocol: TCP
          port: 80
          nodePort: 30081
      type: NodePort
    ---
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: demo-backend-deployment
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: demo
            tier: backend
        spec:
          containers:
          - name: backend
            image: api
            ports:
            - containerPort: 80
            imagePullPolicy: Never

    Notes:

    • Lines 1 - 15 declare the service
    • Line 4 specifies the service name
    • Lines 8 - 10 specify the selector for this service. Any pod that has the labels app=demo and tier=backend will be load balanced for this service. As requests come into the cluster that target this service, the service will know how to route the traffic to its underlying pods. This makes adding, removing or updating pods easy since all we have to do is modify the selector. The service will get a static IP, but the underlying pods will get dynamic IPs that will change as they move through their lifecycle. However, this is transparent to us, since we just target the service and all is good.
    • Line 14 - we want this service exposed on port 30081 (mapping to port 80 on the pods, as specified in line 13)
    • Line 15 - the type NodePort specifies that we want Kubernetes to give the service a port on the same IP as the cluster. For "real" clusters (in a cloud provider like Azure) we would change this to get an IP from the cloud host.
    • Lines 17 - 34 declare the Deployment that will ensure that there are containers (pods) to do the work for the service. If a pod dies, the Deployment will automatically start a new one. This is the construct that ensures the service is up and running.
    • Line 22 specifies that we want 2 instances of the container for this service at all times
    • Lines 26 and 27 are important: they must match the selector labels from the service
    • Line 30 specifies the name of the container within the pod (in this case we only have a single container in this pod anyway, which is generally what you want to do)
    • Line 31 specifies the name of the image to run - this is the same name as we specified in the docker-compose file for the backend image
    • Line 33 exposes port 80 on this container to the cluster
    • Line 34 specifies that we never want Kubernetes to pull the image since we're going to build the images into the minikube docker context. In a production cluster, we'll want to specify other policies so that the cluster can get updated images from a container registry (we'll see that in Part 2).

    The definition for the frontend service is very similar - except there's also some "magic" for configuration. Let's take a quick look:

    spec:
      containers:
        - name: frontend
          image: frontend
          ports:
          - containerPort: 80
          env:
          - name: "ASPNETCORE_ENVIRONMENT"
            value: "Production"
          volumeMounts:
            - name: config-volume
              mountPath: /app/wwwroot/config/
          imagePullPolicy: Never
      volumes:
        - name: config-volume
          configMap:
            name: demo-app-frontend-config

    Notes:

    • Line 30: name the container in the pod
    • Line 31: specify the name of the image for this container - matching the name in the docker-compose file
    • Lines 34 - 36: an example of how to specify environment variables for a service
    • Lines 37 - 39: this is a reference to a volume mount (specified lower down) for mounting a config file, telling Kubernetes where in the container file system to mount the file. In this case, Kubernetes will mount the volume with name "config-volume" to the path /app/wwwroot/config inside the container.
    • Lines 41 - 44: this specifies a volume - in this case a configMap volume to use for the configuration (more on this just below). Here we tell Kubernetes to create a volume called config-volume (referred to by the container volumeMount) and to base the data for the volume off a configMap with the name demo-app-frontend-config

    Handling Configuration

    We now have a couple of container images and can start running them in minikube. However, before we start that, let's take a moment to think a little about configuration. If you've ever heard me speak or read my blog, you'll know that I am a huge proponent of "build once, deploy many times". This is a core principle of good DevOps. It's no different when you consider Kubernetes and containers. However, to achieve that you'll have to make sure you have a way to handle configuration outside of your compiled bits - hence mechanisms like configuration files. If you're deploying to IIS or Azure App Services, you can simply use the web.config (or for DotNet Core the appsettings.json file) and just specify different values for different environments. However, how do you do that with containers? The entire app is self-contained in the container image, so you can't have different versions of the config file - otherwise you'll need different versions of the container and you'll be violating the build once principle.

    Fortunately, we can use volume mounts (a container concept) in conjunction with secrets and/or configMaps (a Kubernetes concept). In essence, we can specify configMaps (which are essentially key-value pairs) or secrets (which are masked or hidden key-value pairs) in Kubernetes and then just mount them via volume mounts into containers. This is really powerful, since the pod definition stays the same, but if we have a different configMap we get a different configuration! We'll see how this works when we deploy to a cloud cluster and use namespaces to separate dev and production environments.

    The configMaps can also be specified using configuration as code. Here's the configuration for our configMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-app-frontend-config
      labels:
        app: demo
        tier: frontend
    data:
      config.json: |
        {
          "api": {
            "baseUri": "http://kubernetes:30081/api"
          }
        }

    Notes:

    • Line 2: we specify that this is a configMap definition
    • Line 4: the name we can refer to this map by
    • Line 9: we're specifying this map using a "file format" - the name of the file is "config.json"
    • Lines 10 - 14: the contents of the config file

    Aside: Static Files Symlink Issue

    I did have one issue when mounting the config file using configMaps: inside the container, the volume mount to /app/wwwroot/config/config.json ends up being a symlink. I got the idea of using a configMap in the container from this excellent post by Anthony Chu, in which he mounts an application.json file that the Startup.cs file can consume. Apparently he didn't have any issues with the symlink in the Startup file. However, in the case of my demo frontend app, I am using a config file that is consumed by the SPA app - and that means, since it's on the client side, the config file needs to be served from the DotNet Core app, just like the html or js files. No problem - we've already got a UseStaticFiles call in Startup, so that should just serve the file, right? Unfortunately, it doesn't. At least, it only serves the first few bytes of the file.

    I took a couple of days to figure this out - there's a conversation on Github you can read if you're interested. In short, the symlink length is not the length of the file, but the length of the path to the file. The StaticFiles middleware reads FileInfo.Length bytes when the file is requested, but since the length isn't the full length of the file, only the first few bytes were being returned. I was able to create a FileProvider that worked around the issue.
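
    To give you an idea of the shape of that workaround, here's a minimal sketch (the class names are mine and this is not the exact code from the repo): a file provider that delegates to the physical provider, but reports the real content length by opening the file rather than trusting the symlink's reported length.

    using System;
    using System.IO;
    using Microsoft.Extensions.FileProviders;
    using Microsoft.Extensions.Primitives;

    // Delegates everything to the physical provider, but wraps each IFileInfo.
    public class SymlinkAwareFileProvider : IFileProvider
    {
        private readonly PhysicalFileProvider inner;

        public SymlinkAwareFileProvider(string root)
        {
            inner = new PhysicalFileProvider(root);
        }

        public IFileInfo GetFileInfo(string subpath) =>
            new SymlinkAwareFileInfo(inner.GetFileInfo(subpath));

        public IDirectoryContents GetDirectoryContents(string subpath) =>
            inner.GetDirectoryContents(subpath);

        public IChangeToken Watch(string filter) => inner.Watch(filter);
    }

    // Wraps an IFileInfo and reports the real content length instead of the symlink length.
    public class SymlinkAwareFileInfo : IFileInfo
    {
        private readonly IFileInfo inner;

        public SymlinkAwareFileInfo(IFileInfo inner)
        {
            this.inner = inner;
        }

        public bool Exists => inner.Exists;
        public bool IsDirectory => inner.IsDirectory;
        public DateTimeOffset LastModified => inner.LastModified;
        public string Name => inner.Name;
        public string PhysicalPath => inner.PhysicalPath;
        public Stream CreateReadStream() => inner.CreateReadStream();

        // For a symlinked configMap file, the file system reports the length of the link,
        // not the content - so open the file and ask the stream for the real length.
        public long Length
        {
            get
            {
                using (var stream = inner.CreateReadStream())
                {
                    return stream.Length;
                }
            }
        }
    }

    You'd then wire it up in Startup.Configure with something like app.UseStaticFiles(new StaticFileOptions { FileProvider = new SymlinkAwareFileProvider(env.WebRootPath) }).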

    Running the Images in Kubernetes

    To run the services we just created in minikube, we can just use kubectl to apply the configurations. Here are the commands along with their output:

    PS:\> cd k8s
    PS:\> kubectl apply -f .\app-demo-frontend-config.yml
    configmap "demo-app-frontend-config" created
    
    PS:\> kubectl apply -f .\app-demo-backend-minikube.yml
    service "demo-backend-service" created
    deployment "demo-backend-deployment" created
    
    PS:\> kubectl apply -f .\app-demo-frontend-minikube.yml
    service "demo-frontend-service" created
    deployment "demo-frontend-deployment" created

    And now we have some services! You can open the minikube dashboard by running "minikube dashboard" and check that the services are green:

    image

    And you can browse to the frontend service by navigating to http://kubernetes:30080:

    image

    The list (value1 and value2) are values coming back from the API service - so the frontend is able to reach the backend service in minikube successfully!

    Updating the Containers or Config

    If you update your code, you're going to need to rebuild the container(s). If you update the config, you'll have to re-run the "kubectl apply" command to update the configMap. Then, since we don't need high availability in dev, we can just delete the running pods and let the replica set restart them - this time with updated config and/or code. Of course in production we won't do this - I'll show you how to do rolling updates in the next post when we do CI/CD to a Kubernetes cluster.

    For dev though, I get the pods, delete them all and then watch Kubernetes magically re-start the containers again (with new IDs) and voila - updated containers.

    PS:> kubectl get pods
    NAME                                       READY     STATUS    RESTARTS   AGE
    demo-backend-deployment-951716883-fhf90    1/1       Running   0          28m
    demo-backend-deployment-951716883-pw1r2    1/1       Running   0          28m
    demo-frontend-deployment-477968527-bfzhv   1/1       Running   0          14s
    demo-frontend-deployment-477968527-q4f9l   1/1       Running   0          24s
    
    PS:> kubectl delete pods demo-backend-deployment-951716883-fhf90 demo-backend-deployment-951716883-pw1r2 demo-frontend-deployment-477968527-bfzhv demo-frontend-deployment-477968527-q4f9l
    pod "demo-backend-deployment-951716883-fhf90" deleted
    pod "demo-backend-deployment-951716883-pw1r2" deleted
    pod "demo-frontend-deployment-477968527-bfzhv" deleted
    pod "demo-frontend-deployment-477968527-q4f9l" deleted
    
    PS:> kubectl get pods
    NAME                                       READY     STATUS    RESTARTS   AGE
    demo-backend-deployment-951716883-4dsl4    1/1       Running   0          3m
    demo-backend-deployment-951716883-n6z4f    1/1       Running   0          3m
    demo-frontend-deployment-477968527-j2scj   1/1       Running   0          3m
    demo-frontend-deployment-477968527-wh8x0   1/1       Running   0          3m

    Note how the pods get updated IDs - since they're not the same pods! If we go to the frontend now, we'll see updated code.

    Conclusion

    I am really impressed with Kubernetes and how it encourages infrastructure as code. It's fairly easy to get a cluster running locally on your laptop using minikube, which means you can develop against a like-for-like environment that matches prod - which is always a good idea. You get to take advantage of secrets and configMaps, just like production containers will use. All in all this is a great way to do development, putting good practices into place right from the start of the development process.

    Happy sailing! (Get it? Kubernetes = helmsman)

    DevOps with Kubernetes and VSTS: Part 2


    In Part 1 I looked at how to develop multi-container apps using Kubernetes (k8s) - and more specifically, minikube, which is a full k8s environment that runs a single node on a VM on your laptop. In that post I walk through cloning this repo (be sure to look at the docker branch) which contains two containers: a DotNet Core API container and a frontend SPA (Aurelia) container (also hosted as static files in a DotNet Core app). I show how to build the containers locally and get them running in minikube, taking advantage of ConfigMaps to handle configuration.

    In this post, I will show you how to take the local development into CI/CD and walk through creating an automated build/release pipeline using VSTS. We'll create an Azure Container Registry and Azure Container Services using k8s as the orchestration mechanism.

    I do recommend watching Nigel Poulton's excellent Getting Started with Kubernetes PluralSight course and reading this post by Atul Malaviya from Microsoft. Nigel's course was an excellent primer on Kubernetes and Atul's post was helpful for seeing how VSTS and k8s interact - but neither quite covered a whole pipeline. In particular, how you update your images in a CI/CD pipeline was a question not answered to my satisfaction. So after some experimentation, I am writing this post!

    Creating a k8s Environment using Azure Container Services

    You can run k8s on-premises, or in AWS or Google Cloud, but I think Azure Container Services makes spinning up a k8s cluster really straightforward. That said, the pipeline I walk through in this post is cloud-host agnostic - it will work against any k8s cluster. We'll also set up a private Container Registry in Azure, though once again, you can use any container registry you choose.

    To spin up a k8s cluster you can use the portal, but the Azure CLI makes it a snap and you get to save the keys you'll need to connect, so I'll use that mechanism. I'll also use Bash for Windows with kubectl, but any platform running kubectl and the Azure CLI will do.

    Here are the commands:

    # set some variables
    export RG="cd-k8s"
    export ClusterName="cdk8s"
    export location="westus"
    # create a folder for the cluster ssh-keys
    mkdir cdk8s
    
    # login and create a resource group
    az login
    az group create --location $location --name $RG
    
    # create an ACS k8s cluster
    az acs create --orchestrator-type=kubernetes --resource-group $RG --name=$ClusterName --dns-prefix=$ClusterName --generate-ssh-keys --ssh-key-value ~/cdk8s/id_rsa.pub --location $location --agent-vm-size Standard_DS1_v2 --agent-count 2
    
    # create an Azure Container Registry
    az acr create --resource-group $RG --name $ClusterName --location $location --sku Basic --admin-enabled
    
    # configure kubectl
    az acs kubernetes get-credentials --name $ClusterName --resource-group $RG --file ~/cdk8s/kubeconfig --ssh-key-file ~/cdk8s/id_rsa
    export KUBECONFIG="~/cdk8s/kubeconfig"
    
    # test connection
    kubectl get nodes
    NAME                    STATUS                     AGE       VERSION
    k8s-agent-96607ff6-0    Ready                      17m       v1.6.6
    k8s-agent-96607ff6-1    Ready                      17m       v1.6.6
    k8s-master-96607ff6-0   Ready,SchedulingDisabled   17m       v1.6.6

    Notes:

    • Lines 2-4: create some variables
    • Line 6: create a folder for the ssh-keys and kubeconfig
    • Line 9: login to Azure (this prompts you to open a browser with the device login - if you don't have an Azure subscription create a free one now!)
    • Line 10: create a resource group to house all the resources we're going to create
    • Line 13: create a k8s cluster using the resource group we just created and the name we pass in; generate ssh-keys and place them in the specified folder; we want 2 agents (nodes) with the specified VM size
    • Line 16: create an Azure Container registry in the same resource group with admin access enabled
    • Line 19: get the credentials necessary to connect to the cluster using kubectl; use the supplied ssh-key and save the creds to the specified kubeconfig file
    • Line 20: tell kubectl to use this config rather than the default config (which may have other k8s clusters or minikube config)
    • Line 23: test that we can connect to the cluster
    • Lines 24-27: we are indeed connecting successfully!

    If you open a browser and navigate to the Azure portal and then open your resource group, you'll see how much stuff got created by the few preceding simple commands:

    image

    Don't worry - you'll not need to manage these resources yourself. Azure and the k8s cluster manage them for you!

    Namespaces

    Before we actually create the build and release for our container apps, let's consider the promotion model. Typically there's Dev->UAT->Prod or something similar. In the case of k8s, minikube is the local dev environment - and that's great since this is a full k8s cluster on your laptop - so you get to run your code locally including using k8s "meta-constructs" such as configMaps. So what about UAT and Prod? You could spin up separate clusters, but that could end up being expensive. You can also share the prod cluster resources by leveraging namespaces. Namespaces in k8s can be security boundaries, but they can also be isolation boundaries. I can deploy new versions of my app to a dev namespace - and even though that namespace shares the resources of the prod namespace, it's completely invisible, including its own IPs etc. Of course I shouldn't load test in this configuration since loading the dev namespace is going to potentially steal resources from prod apps. This is conceptually similar to deployment slots in Azure App Services - they can be used to test apps lightly before promoting to prod.

    When you spin up a k8s cluster, besides the kube-system and kube-public namespaces (which house the k8s pods) there is a "default" namespace. If you don't specify otherwise, any services, deployments or pods you create will go to this namespace. However, let's create two additional namespaces: dev and prod. Here's the yml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: dev
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: prod

    This file contains the definitions for both namespaces. Run the apply command to create the namespaces. Once completed, you can list all the namespaces in the cluster:

    kubectl apply -f namespaces.yml
    namespace "dev" created
    namespace "prod" created
    
    kubectl get namespaces
    NAME          STATUS    AGE
    default       Active    27m
    dev           Active    20s
    kube-public   Active    27m
    kube-system   Active    27m
    prod          Active    20s

    Configuring the Container Registry Secret

    One more piece of setup before we get to the code: when the k8s cluster is pulling images to run, we're going to want it to pull from the Container Registry we just created. Access to this registry is secured since this is a private registry. So we need to configure a registry secret that we can just reference in our deployment yml files. Here are the commands:

    az acr credential show --name $ClusterName --output table
    USERNAME    PASSWORD                          PASSWORD2
    ----------  --------------------------------  --------------------------------
    cdk8s       some-long-key-1                   some-long-key-2
    
    kubectl create secret docker-registry regsecret --docker-server=$ClusterName.azurecr.io --docker-username=$ClusterName --docker-password=<some-long-key-1> --docker-email=admin@azurecr.io
    secret "regsecret" created

    The first command uses az to get the keys for the admin user (the admin username is the same as the name of the Container registry: so I created cdk8s.azurecr.io and so the admin username is cdk8s). Pass in one of the keys (it doesn't really matter which one) as the password. The email address is not used, so this can be anything. We now have a registry secret called "regsecret" that we can refer to when deploying to the k8s cluster. K8s will use this secret to authenticate to the registry.
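
    For reference, the deployment specs later in this post reference the secret like this (just the relevant fragment of the pod template is shown here):

    spec:
      containers:
        - name: frontend
          image: cdk8s.azurecr.io/frontend
          imagePullPolicy: Always
      imagePullSecrets:
        - name: regsecret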

    Configure VSTS Endpoints

    We now have the k8s cluster and container registry configured. Let's add these endpoints to VSTS so that we can push containers to the registry during a build and perform commands against the k8s cluster during a release. The endpoints allow us to abstract away authentication so that we don't need to store credentials in our release definitions directly. You can also restrict who can view/consume the endpoints using endpoint roles.

    Open VSTS and navigate to a Team Project (or just create a new one). Go to the team project and click the gear icon to navigate to the settings hub for that Team Project. Then click Services. Click "+ New Services" and create a new Docker Registry endpoint. Use the same credentials you used to create the registry secret in k8s using kubectl:

    image

    Next create a k8s endpoint. For the url, it will be https://$ClusterName.$location.cloudapp.azure.com (where ClusterName and location are the variables we used earlier to create the cluster). You'll need to copy the entire contents of the ~/cdk8s/kubeconfig file (or whatever you called it) that was output when you ran the az acs kubernetes get-credentials command into the credentials textbox:

    image

    We now have two endpoints that we can use in the build/release definitions:

    image

    The Build

    We can now create a build that compiles/tests our code, creates docker images and pushes the images to the Container Registry, tagging them appropriately. Click on Build & Release and then click on Builds to open the build hub. Create a new build definition. Select the ASP.NET Core template and click apply. Here are the settings we'll need:

    • Tasks->Process: Set the name to something like k8s-demo-CI and select the "Hosted Linux Preview" queue
    • Options: change the build number format to "1.0.0$(rev:.r)" so that the builds have a 1.0.0.x format
    • Tasks->Get Sources: Select the Github repo, authorizing via OAuth or PAT. Select the AzureAureliaDemo and set the default branch to docker. You may have to fork the repo (or just import it into VSTS) if you're following along.
    • Tasks->DotNet Restore - leave as-is
    • Tasks->DotNet Build - add "--version-suffix $(Build.BuildNumber)" to the build arguments to match the assembly version to the build number
    • Tasks->DotNet Test - disable this task since there are no DotNet tests in this solution (you can of course re-enable this task when you have tests)
    • Tasks->Add an "npm" task. Set the working folder to "frontend" and make sure the command is "install"
    • Tasks->Add a "Command line" task. Set the tool to "node", the arguments to "node_modules/aurelia-cli/bin/aurelia-cli.js test" and the working folder to "frontend". This will run Aurelia tests.
    • Tasks->Add a "Publish test results" task. Set "Test Results files" to "test*.xml" and "Search Folder" to "$(Build.SourcesDirectory)/frontend/testresults". This publishes the Aurelia test results.
    • Tasks->Add a "Publish code coverage" task. Set "Coverage Tool" to "Cobertura", "Summary File" to "$(Build.SourcesDirectory)/frontend/reports/coverage/cobertura.xml" and "Report Directory" to "$(Build.SourcesDirectory)/frontend/reports/coverage/html". This publishes the Aurelia test coverage results.
    • Tasks->Add a "Command line" task. Set the tool to "node", the arguments to "node_modules/aurelia-cli/bin/aurelia-cli.js build --env prod" and the working folder to "frontend". This transpiles, processes and packs the Aurelia SPA app.
    • Tasks->DotNet Publish. Change the Arguments to "-c $(BuildConfiguration) -o publish" and uncheck "Zip Published Projects"
    • Tasks->Add a "Docker Compose" task. Set the "Container Registry Type" to "Azure Container Registry" and set your Azure subscription and container registry to the registry we created an endpoint for earlier. Set "Additional Docker Compose Files" to "docker-compose.vsts.yml", the action to "Build service images" and "Additional Image Tags" to "$(Build.BuildNumber)" so that the build number is used as the tag for the images.
    • Clone the "Docker Compose" task. Rename it to "Push service images" and change the action to "Push service images". Check the "Include Latest Tag" checkbox.
    • Tasks->Publish Artifact. Set both "Path to Publish" and "Artifact Name" to k8s. This publishes the k8s yml files so that they are available in the release.

    The final list of tasks looks something like this:

    image

    You can now Save and Queue the build. When the build is complete, you'll see the test/coverage information in the summary.

    image

    You can also take a look at your container registry to see the newly pushed service images, tagged with the build number.

    image

    The Release

    We can now configure a release that will create/update the services we need. For that we're going to need to manage configuration. Now we could just hard-code the configuration, but that could mean sensitive data (like passwords) would end up in source control. I prefer to tokenize any configuration and have Release Management keep the sensitive data outside of source control. VSTS Release Management allows you to create secrets for individual environments or releases or you can create them in reusable Variable Groups. You can also now easily integrate with Azure Key Vault.

    To replace the tokens with environment-specific values, we're going to need a task that can do token substitution. Fortunately, I've got a (cross-platform) ReplaceTokens task in Colin's ALM Corner Build & Release Tasks extension on the VSTS Marketplace. Click on the link to navigate to the page and click install to install the extension onto your account.

    From the build summary page, scroll down on the right hand side to the "Deployments" section and click the "Create release" link. You can also click on Releases and create a new definition from there. Start from an Empty template and select your team project and the build that you just completed as the source build. Check the "Continuous Deployment" checkbox to automatically trigger a release for every good build.

    Rename the definition to "k8s" or something descriptive. On the "General" tab change the release number format to "$(Build.BuildNumber)-$(rev:r)" so that you can easily see the build number in the name of the release. Back on Environments, rename Environment 1 to "dev". Click on the "Run on Agent" link and make sure the Deployment queue is "Hosted Linux Preview". Add the following tasks:

    • Replace Tokens
      • Source Path: browse to the k8s folder
      • Target File Pattern: "*-release.yml". This performs token replacement on any yml file with a name that ends in "-release.yml". There are three: the back- and frontend service/deployment files and the frontend config file. This task finds the tokens in the file (with pre- and postfix __) and looks for variables with the same name. Each variable is replaced with its corresponding value. We'll create the variables shortly.
    • Kubernetes Task 1 (apply frontend config)
      • Set the k8s connection to the endpoint you created earlier. Also set the connection details for the Azure Container Registry. This applies to all the Kubernetes tasks. Set the Command to "apply", check the "Use Configuration Files" option and set the file to the k8s/app-demo-frontend-config-release.yml file using the file picker. Add "--namespace $(namespace)" to the arguments textbox.
      • image
    • Kubernetes Task 2 (apply backend service/deployment definition)
      • Set the same connection details for the k8s service and Azure Container Registry. This time, set "Secret Name" to "regsecret" (this is the name of the secret we created when setting up the k8s cluster, and is also the name we refer to for the imagePullSecret in the Deployment definitions). Check the "Force update secret" setting. This ensures that the secret value in k8s matches the key from Azure. You could also skip this option since we created the key manually.
      • Set the Command to "apply", check the "Use Configuration Files" option and set the file to the k8s/app-demo-backend-release.yml file using the file picker. Add "--namespace $(namespace)" to the arguments textbox.
      • image
    • Kubernetes Task 3 (apply frontend service/deployment definition)
      • This is the same as the previous task except that the filename is k8s/app-demo-frontend-release.yml.
    • Kubernetes Task 4 (update backend image)
      • Set the same connection details for the k8s service and Azure Container Registry. No secret required here. Set the Command to "set" and specify Arguments as "image deployment/demo-backend-deployment backend=$(ContainerRegistry)/api:$(Build.BuildNumber) --record --namespace=$(namespace)".
      • This updates the version (tag) of the container image to use. K8s will do a rolling update that brings new containers online and takes the old containers offline in such a manner that the service is still up throughout the bleed over.
    • Kubernetes Task 5 (update the frontend image)
      • Same as the previous task except the Arguments are "image deployment/demo-frontend-deployment frontend=$(ContainerRegistry)/frontend:$(Build.BuildNumber) --record --namespace=$(namespace)"
    • Click on the "…" button on the "dev" card and click Configure Variables. Set the following values:
      • BackendServicePort: 30081
      • FrontendServicePort: 30080
      • ContainerRegistry: <your container reg>.azurecr.io
      • namespace: $(Release.EnvironmentName)
      • AspNetCoreEnvironment: development
      • baseUri: http://$(BackendServiceIP)/api
      • BackendServiceIP: 10.0.0.1

    This sets environment-specific values for all the tokens in the yml files; the Replace Tokens task takes care of injecting those values into the files for us. Let's take a quick look at one of the tokenized files (note the __token__ placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-frontend-service
      labels:
        app: demo
    spec:
      selector:
        app: demo
        tier: frontend
      ports:
        - protocol: TCP
          port: 80
          nodePort: __FrontendServicePort__
      type: LoadBalancer
    ---
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: demo-frontend-deployment
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: demo
            tier: frontend
        spec:
          containers:
            - name: frontend
              image: __ContainerRegistry__/frontend
              ports:
              - containerPort: 80
              env:
              - name: "ASPNETCORE_ENVIRONMENT"
                value: "__AspNetCoreEnvironment__"
              volumeMounts:
                - name: config-volume
                  mountPath: /app/wwwroot/config/
              imagePullPolicy: Always
          volumes:
            - name: config-volume
              configMap:
                name: demo-app-frontend-config
          imagePullSecrets:
            - name: regsecret

    A note on the value for BackendServiceIP: we use 10.0.0.1 as a temporary placeholder, since Azure will create an IP for this service when k8s spins up the backend service (you'll see a public IP address in the resource group in the Azure portal). We'll have to run the release once to create the services and then update this variable to the real IP address so that the frontend service works correctly. We also use $(Release.EnvironmentName) as the value for namespace - so "dev" (and later "prod") need to match the namespaces we created, including casing.

    If the service/deployment and config don't change, then the first 3 k8s tasks are essentially no-ops - only the "set" commands actually do anything. That's exactly what we want: the service/deployment and config files can be applied idempotently. They change the cluster when they have to and leave it alone when they don't - perfect for repeatable releases!
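
    Under the hood these tasks are just wrapping kubectl, so if you ever want to sanity-check the pipeline from your own shell, the whole sequence boils down to something like the sketch below. Treat it as a rough approximation: the registry name and tag are placeholders (the build supplies the real tag), and the sed line only substitutes a couple of the tokens - the Replace Tokens task handles all of them.

    # substitute a couple of the __tokens__ by hand (the Replace Tokens task does this for every variable)
    sed -i 's/__FrontendServicePort__/30080/g; s/__AspNetCoreEnvironment__/development/g' k8s/app-demo-frontend-release.yml

    # apply the config and the service/deployment definitions - safe to repeat, since apply is idempotent
    kubectl apply -f k8s/app-demo-frontend-config-release.yml --namespace dev
    kubectl apply -f k8s/app-demo-backend-release.yml --namespace dev
    kubectl apply -f k8s/app-demo-frontend-release.yml --namespace dev

    # roll the deployments over to the images from the build (placeholder registry and tag shown here)
    kubectl set image deployment/demo-backend-deployment backend=myregistry.azurecr.io/api:1.0.0.1 --record --namespace=dev
    kubectl set image deployment/demo-frontend-deployment frontend=myregistry.azurecr.io/frontend:1.0.0.1 --record --namespace=dev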

    Save the definition. Click "+ Release" to create a new release. Click on the release number (it will be something like 1.0.0.1-1) to open the release. Click on Logs to watch the deployment run.


    Once the release has completed, you can see the deployment in the Kubernetes dashboard. To open the dashboard, execute the following command:

    az acs kubernetes browse -n $ClusterName -g $RG --ssh-key-file ~/cdk8s/id_rsa
    
    Proxy running on 127.0.0.1:8001/ui
    Press CTRL+C to close the tunnel...
    Starting to serve on 127.0.0.1:8001

    The last argument is the path to the SSH key file that got generated when we created the cluster - adjust your path accordingly. You can now open a browser to http://localhost:8001/ui. Change the namespace dropdown to "dev" and click on Deployments. You should see 2 successful deployments - each showing 2 healthy pods. You can also see the images that are running in the deployments - note the build number as the tag!
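
    If you prefer the command line to the dashboard, kubectl will show you the same information (this assumes your local kubectl is already configured against the cluster - az acs kubernetes get-credentials can do that for you):

    # deployments in the dev namespace - you should see demo-frontend-deployment and demo-backend-deployment
    kubectl get deployments --namespace dev

    # describe one of them to see the image (and build number tag) it is currently running
    kubectl describe deployment demo-frontend-deployment --namespace dev

    # the pods behind the deployments - you should see 2 healthy pods per deployment
    kubectl get pods --namespace dev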


    To see the services, click on Services.


    Now we have the IP address of the backend service, so we can update the BackendServiceIP variable in the release. We then queue a new release - this time the frontend configuration is updated with the correct IP address for the backend service (in this case 23.99.58.48). Browse to the frontend service IP address and you'll see that the service is now running!
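
    You can also skip the dashboard and grab the external IP with kubectl. The service name below is an assumption based on the naming convention in the yml files - check the metadata in your backend definition and substitute the real name:

    # list the services in the dev namespace - the EXTERNAL-IP column shows the Azure-assigned public IP
    kubectl get services --namespace dev

    # or extract just the IP for one service (demo-backend-service is assumed - use the name from your backend yml)
    kubectl get service demo-backend-service --namespace dev -o jsonpath='{.status.loadBalancer.ingress[0].ip}'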


    Creating Prod

    Now that we are sure that the dev environment is working, we can go back to the release and clone "dev" to "prod". Make sure you specify a post-approval on dev (or a pre-approval on prod) so that there's a checkpoint between the two environments.


    For prod we then just change the node ports, AspNetCoreEnvironment and BackendServiceIP variables and we're good to go! As with dev, we need to deploy once to the prod namespace before Azure assigns an IP address to the prod backend service, and then re-run the release so the config picks up the real value.


    We could also remove the nodePort from the definitions altogether and let k8s decide on a node port - but if it's explicit then we know what port the service is going to run on within the cluster (not externally).

    I did get irritated having to specify "--namespace" for each command - so irritated, in fact, that I've created a Pull Request in the vsts-tasks Github repo to expose namespace as an optional UI element!
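
    As an aside, if the repetition bothers you when you're poking around the cluster from your own shell, you can set a default namespace on your current kubectl context so you don't have to pass --namespace to every command (this only affects your local kubeconfig, not the release tasks):

    # make "dev" the default namespace for the current context
    kubectl config set-context $(kubectl config current-context) --namespace=dev

    # from now on this queries the dev namespace without the --namespace flag
    kubectl get pods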

    End to End

    Now that we have the dev and prod environments set up in a CI/CD pipeline, we can make a change to the code. I'll change the text below the version to "K8s demo" and commit the change. This triggers the build, creating a newer container image and running tests, which in turn triggers the release to dev. Now I can see the change in dev (which is on 1.0.0.3 or some newer version than 1.0.0.1), while prod is still on version 1.0.0.1.


    Approve dev in Release Management and prod kicks off - and a few seconds later prod is now also on 1.0.0.3.
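
    Because the set image commands in the release pass --record, kubectl keeps a change history for each deployment, which makes it easy to confirm which build an environment is actually running after an approval:

    # wait for (or confirm) the rolling update in prod to finish
    kubectl rollout status deployment/demo-frontend-deployment --namespace prod

    # the recorded history shows the set image commands - and therefore the build number tags - that have been rolled out
    kubectl rollout history deployment/demo-frontend-deployment --namespace prod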

    I've exported the json definitions for both the build and the release into this folder - you can attempt to import them (I'm not sure if that will work) but you can refer to them in any case.

    Conclusion

    k8s shows great promise as a solid container orchestration mechanism. The yml infrastructure-as-code is great to work with and easy to version control. The deployment mechanism means you can have very minimal (if any) downtime when deploying, and having access to configMaps and secrets keeps configuration and credentials secure. Using the Azure CLI you can create a k8s cluster and an Azure Container Registry with a couple of simple commands. The VSTS integration through the k8s tasks makes setting up CI/CD relatively easy. Throw in minikube as I described in Part 1 of this series, which gives you a full k8s cluster for local development on your laptop, and you have a great dev/CI/CD workflow.

    Of course a CI/CD pipeline doesn't battle-test the actual applications in production! I would love to hear about your experiences running k8s in production - sound off in the comments if you've run apps in a k8s cluster in prod!

    Happy k8sing!
