Colin’s ALM Corner

Imaginet Timesheet: Time Tracking for TFS and Visual Studio Online


We’ve been working on a rewrite of our Timetracking tool (formerly Notion Timesheet) and it’s going live today – Imaginet Timesheet! Timesheet lets you log time against TFS work items using a web interface. The web site can be installed on any IIS server (if you want to host it on-premises) or even onto Windows Azure Web Sites (WAWS) if you have a public-facing TFS or are using Visual Studio Online. Once you’ve installed it, just log in, select a date-range (week) and a query and start logging time.

It’s free for up to 5 users so you can try it out to see if it works for you and your organization. There are some report samples out-the-box and you can also easily create your own reports using PowerPivot.

We used Entity Framework, MVC, Bootstrap and Knockout to make the site. Most of our JavaScript is typed using TypeScript. Of course we have unit tests (.NET and JavaScript) and a build that builds the installer (Wix) package. It was a fun project to work on and I think we’ve turned out a great product. Download your copy today!

Here’s the overview video:

 

Happy timetracking!


PowerShell DSC: Remotely Configuring a Node to “RebootNodeIfNeeded”


I’ve started to experiment a bit with some PowerShell DSC – mostly because it’s now supported in Release Management (in Update 3 CTP at least).

Sometimes when you apply a configuration to a node (machine), the node requires a reboot (for example adding .NET4.5 requires the node to reboot). You can configure the node to reboot immediately (instead of just telling you “a reboot is required”) by changing a setting in the node’s LocalConfigurationManager. Of course, since this is configuration, it’s tempting to try to do this in a DSC script – for example:

Configuration SomeConfig
{
   Node someMachine
   {
      LocalConfigurationManager
      {
         RebootNodeIfNeeded = $true
      }
   }
}

This configuration “compiles” to a mof file and you can apply it successfully. However, it doesn’t actually do anything.

Set-DscLocalConfigurationManager on a Remote Node

Fortunately, there is a way to change the settings on the LocalConfigurationManager remotely – you use the cmdlet Set-DscLocalConfigurationManager with a CimSession object (i.e. you invoke it remotely). I stumbled across this when looking at the documentation for DSC Local Configuration Manager where the very last sentence says “To see the current Local Configuration Manager settings, you can use the Get-DscLocalConfigurationManager cmdlet. If you invoke this cmdlet with no parameters, by default it will get the Local Configuration Manager settings for the node on which you run it. To specify another node, use the CimSession parameter with this cmdlet.”

Here’s a script that you can modify to set “RebootNodeIfNeeded” on any node:

Configuration ConfigureRebootOnNode
{
    param (
        [Parameter(Mandatory=$true)]
        [ValidateNotNullOrEmpty()]
        [String]
        $NodeName
    )

    Node $NodeName
    {
        LocalConfigurationManager
        {
            RebootNodeIfNeeded = $true
        }
    }
}

Write-Host "Creating mofs"
ConfigureRebootOnNode -NodeName fabfiberserver -OutputPath .\rebootMofs

Write-Host "Starting CimSession"
$pass = ConvertTo-SecureString "P2ssw0rd" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ("administrator", $pass)
$cim = New-CimSession -ComputerName fabfiberserver -Credential $cred

Write-Host "Writing config"
Set-DscLocalConfigurationManager -CimSession $cim -Path .\rebootMofs -Verbose

# read the config settings back to confirm
Get-DscLocalConfigurationManager -CimSession $cim

Just replace “fabfiberserver” with your node name and run the script. The last line of the script reads back the LocalConfigurationManager settings on the remote node, so you should see that the RebootNodeIfNeeded setting is true.

Happy configuring!

Using PowerShell DSC in Release Management: The Hidden Manual


Just in case you missed it, Release Management Update 3 CTP now supports deploying using PowerShell DSC. I think this is a great feature and adds to the impressive toolset that Microsoft is putting out into the DevOps area. So I decided to take this feature for a spin!

Bleeding Edge

<rant>I had a boss once who hated being on the bleeding edge – he preferred being at “latest version – 1” of any OS, SQL Server or VS version (with a few notable exceptions). Being bleeding edge can mean risk and churn, but I prefer being there all the same. Anyway, in the case of the Release Management (RM) CTP, it was a little painful – mostly because the documentation is poor. Hopefully this is something the Release Management team will improve on. I know the release is only CTP, but how can the community provide feedback if they can’t even figure out how to use the tool?</rant>

On top of the Release Management struggles, PowerShell DSC itself isn’t very well documented (yet) since it’s still pretty new technology. This is bleeding BLEEDING edge stuff.

Anyway, after struggling on my own for a few days I mailed the product group and got “the hidden manual” as a reply (more on this later). At least the team responds fairly quickly when MVPs contact them!

Issues

So here’s a summary of the issues I faced:

  • The DSC feature only works on domain joined machines. I normally don’t use domains on my experimental VMs, so I had to make one, but most organizations nowadays use domains anyway, so this isn’t such a big issue.
  • Following the RM DSC manual, I wanted to enable CredSSP. I ran the Enable-WSManCredSSP command from the manual, but got some credential issues later on.
  • The current public documentation on DSC in RM is poor – in fact, without mailing the product group I would never have gotten my Proof-of-Concept release to work at all (fortunately you now have this post to help you!)
  • You have to change your DSC scripts to use them in Release Management (you can’t have the exact same script run in RM and in a console – the mof compilation is invoked differently, especially with config data)

Proof of Concept – A “Start of Release” Walkthrough

I want to eventually build up to a set of scripts that will allow me to deploy a complete application (SQL database and ASP.NET website) onto a set of “fresh” servers using only DSC. This will enable me to create some new and unconfigured servers and target them in the Release – the DSC will ensure that SQL gets installed and configured correctly, that IIS, ASP.NET, MVC and any other prerequisites get set up correctly on the IIS server and finally that the database and website are deployed correctly. All without having to install or configure anything manually. That’s the dream. The first step was to create a few DSC scripts and then to get Release Management to execute them as part of the deployment workflow.

I had to create a custom DSC resource (I may change this later) – but that’s a post for another day. Assume that I have the resource files ready for deployment to a node (a machine). Here’s the script to copy an arbitrary resource to the modules folder of a target node so that subsequent DSC scripts can utilize the custom resource:

Configuration CopyDSCResource {
    param (
        [Parameter(Mandatory=$false)]
        [ValidateNotNullOrEmpty()]
        [String]
        $ModulePath = "$env:ProgramFiles\WindowsPowershell\Modules"
    )

    Node $AllNodes.NodeName
    {
        #
        # Copy the custom DSC Resource to the target server
        #
        File DeployWebDeployResource
        {
            Ensure = "Present"
            SourcePath = "$($Node.SourcePath)\$($Node.ModuleName)"
            DestinationPath = "$ModulePath\$($Node.ModuleName)"
            Recurse = $true
            Force = $true
            Type = "Directory"
        }
    }
}

CopyDSCResource -ConfigurationData $configData -Verbose

The last line of the script “compiles” the DSC script into a mof file that is then used to push this configuration to the target node. I wanted to parameterize the script, so I tried to use the RM parameter notation, which wraps parameter names in double underscores (such as __ModuleName__). No such luck. I have to hardcode configuration data in the configuration data file.

To accomplish that I’m using configuration data for executing this script. This is standard DSC practice – however, there’s one trick. For RM, you need to put the configuration data into a variable. Here’s what an “ordinary” config data script looks like:

@{
    AllNodes = @(
        @{
            NodeName = "*"
            SourcePath = "\\rmserver\Assets\Resources"
            ModuleName = "DSC_ColinsALMCorner.com"
         },

        @{
            NodeName = "fabfiberserver"
            Role = "WebServer"
         }
    );
}

To get this to work with RM, you need to change the 1st line to this:

$configData = @{

This puts the configuration hashtable into a variable called “$configData”. This is the variable that I’m using in the CopyDSCResource DSC script to specify configuration data (see the last line of the previous script).

Meanwhile, in RM, I’ve set up an environment (using “New Standard Environment”) and added my target server (defaulting to port 5985 for PSRemoting). I’ve configured a Release Path and now I want to configure the Component that is going to execute the script for me.

I click on “Configure Apps” –> Components and add a new component. I give it a name and specify the package path:

You can access the package path in your scripts using “$applicationPath”.

Now I click on the “Deployment” tab and configure the tool – I select the “Run PowerShell on Standard Environment” tool (which introduces some parameters) and leave everything as default.

Now let’s configure the Release Template. Click on “Configure Apps” –> “Release Templates” and add a new Template. Give it a name and select a Release Path. In the toolbox, right-click on the Components node and add in the DSC script component we just created. Now drag the server onto the designer, and drag the DSC component into the server activity. We’ll then enter the credentials and the paths to the scripts:

Since I’m accessing a network share, I set “UseCredSSP” to true. Both ScriptPath and ConfigurationFilePath are relative to the package path (configured in the Source tab of the component). I specify the DSC script for the ScriptPath and the config data file for the ConfigurationFilePath. Finally, I supply a username and password for executing the command. We can now run the deployment!

Create a new Release and select the newly created template. Specify the build number (either a TFS build or external folder, depending on how you configured your components) and Start it.

Hopefully you get a successful deployment!


Issues You May Face

Of course, not everything will run smoothly. Here are some errors I faced and what I did to rectify them.

Credential Delegation Failure

Symptom: You get the following error message in the deployment log:

System.Management.Automation.Remoting.PSRemotingTransportException: Connecting to remote server fabfiberserver.fab.com failed with the following error message : The WinRM client cannot process the request. A computer policy…

Fix: In the RM DSC manual, they tell you to run an Enable-WSManCredSSP command to allow credential delegation. I have VMs that have checkpoints, so I’ve run this PoC several times, and each time I get stuck I just start again at the “clean” checkpoint. Even though this command always works in PowerShell, I found that sometimes I would get this error. The fix is to edit a group policy on the RM server machine. Type gpedit.msc to open up the console and browse to “Computer Configuration\Administrative Templates\System\Credentials Delegation”. Then click on the “Allow delegating fresh credentials with NTLM-only server authentication”. Enable this rule and then add in your target servers (click the “Show…” button). You can use wildcards if you want to delegate any machine on a domain. Interestingly, the Enable-WSManCredSSP command seems to “edit” the “Allow delegating fresh credentials” setting, not the NTLM-only one. Perhaps there’s a PowerShell command or extra argument that will edit the NTLM-only setting?
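For completeness, the kind of CredSSP setup the manual describes looks something like this (a sketch only – the target name comes from this walkthrough, and the exact commands in the RM DSC manual may differ slightly):

# on the RM server: allow delegating fresh credentials to the target node(s)
Enable-WSManCredSSP -Role Client -DelegateComputer "fabfiberserver.fab.com" -Force

# on the target node: accept delegated credentials
Enable-WSManCredSSP -Role Server -Force

If you still hit the error after running these, apply the group policy change described above, since the cmdlet doesn’t appear to touch the NTLM-only setting.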


Configuration Data Errors

Symptom: You get the following error message in the deployment log:

System.Management.Automation.RemoteException: Errors occurred while processing configuration 'SomeConfig'.

Fix: I found that this message occurs for 2 main reasons: either you forgot to put your config data hashtable into a variable (make sure the first line is $configData = @{) or you have an error in your hashtable (like a forgotten comma or extra curly brace). If you get this error, check your configuration data file.

Cannot Find Mof File

Symptom: You get the following error message in the deployment log:

System.Management.Automation.RemoteException: Unable to find the mof file.

Fix: This could mean that you’ve got an “-OutputPath” specified when you invoke your config (the last line of the config script) so that the mof file ends up in some other directory. Or you have the name of your node incorrect. I found that specifying “fabfiberserver.fab.com” caused this error in my scenario – but when I changed the name to “fabfiber” I didn’t get this error. You’ll have to try the machine name or the FQDN to see which one RM is happy with.
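In other words, make sure the compile call at the bottom of your config script drops the mof into the current directory – a minimal sketch using the CopyDSCResource config from earlier in this post:

# RM looks for the mof in the current directory
CopyDSCResource -ConfigurationData $configData -Verbose

# this variant writes the mof to .\rebootMofs instead, which RM won't find
# CopyDSCResource -ConfigurationData $configData -OutputPath .\rebootMofs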

Challenges

The ability to run DSC during Releases is a promising tool – but there are some challenges. Here is my list of pros and cons with this feature:

Pros of DSC in Release Management

  • You don’t have to install a deployer agent on the target nodes
  • You can use existing DSC PowerShell scripts (with some small RM specific tweaks) in your deployment workflows

Cons of DSC in Release Management

  • Only works on domain machines at present
  • Poor documentation makes figuring out how to structure scripts and assets to RM’s liking a challenge
  • You have to change your “normal” DSC script structure to fit the way RM likes to invoke DSC
  • You can’t parameterize the scripts (so that you can reuse scripts in different workflows)

Conclusion

The ability to run DSC in Release Management workflows is great – not having to install and configure the deployer agent is a bonus and being able to treat “config as code” in a declarative manner is a fantastic feature. However, since DSC is so new (and poorly documented) there’s a steep learning curve. The good news is that if you’ve already invested in DSC, the latest Release Management allows you to leverage that investment during deployments. This is overall a very exciting feature and I look forward to seeing it grow and mature.

I’ll be posting more in this series as I get further along with my experimentation!

Happy releasing!

More DSC Release Management Goodness: Readying a Webserver for Deployment


In my previous couple of posts (PowerShell DSC: Configuring a Remote Node to “RebootIfNeeded” and Using PowerShell DSC in Release Management: The Hidden Manual) I started to experiment with Release Management’s new PowerShell DSC capabilities. I’ve been getting some great help from Bishal Prasad, one of the developers on Release Management – without his help I’d never have gotten this far!

Meta Mofs

To configure a node (the DSC parlance for a machine) you need to create a DSC script that configures the LocalConfigurationManager. When I first saw this, I thought this was a great feature – unfortunately, when you invoke the config script, it doesn’t produce a mof file (like “normal” DSC scripts that use resources like File and WindowsFeature) so you can’t use Start-DscConfiguration to push it to remote servers. You need to invoke Set-DscLocalConfigurationManager. The reason is that a config that targets LocalConfigurationManager produces a meta.mof instead of a mof file.

If you try to run a PowerShell script in Release Management that produces a meta.mof, you’ll see a failure like this:

Unable to find the mof file. Make sure you are generating the MOF in the DSC script in the current directory.

Of course this is because Release Management expects a mof file, and if you’re just producing a meta.mof file, the invocation will fail.

We may see support for meta.mofs in future versions of Release Management (hopefully sooner than later) but until then the workaround is to make sure that you include the LocalConfigurationManager settings inside a config that produces a mof file. Then you include two commands at the bottom of the script: first the command to “compile” the configuration – this produces a mof file as well as a meta.mof file. Then you call Set-DscLocalConfigurationManager explicitly to push the meta.mof and let Release Management handle the mof file. Here’s an example that configures a node to reboot if needed and ensures that the Webserver role is present:

Configuration WebServerPreReqs
{
    Node $AllNodes.where{ $_.Role -eq "WebServer" }.NodeName
    {
        # tell the node to reboot if necessary
        LocalConfigurationManager
        {
            RebootNodeIfNeeded = $true
        }

        WindowsFeature WebServerRole
        {
            Name = "Web-Server"
            Ensure = "Present"
        }
    }
}

WebServerPreReqs -ConfigurationData $configData

# invoke Set-DscLocalConfigurationManager directly since RM doesn't yet support this
Set-DscLocalConfigurationManager -Path .\WebServerPreReqs -Verbose

You can see that there is a LocalConfigurationManager setting (line 6). Line 19 “compiles” the config – given a list of nodes in $configData that includes just a single node (say fabfiberserver) you’ll see fabfiberserver.mof and fabfiberserver.meta.mof files in the current directory after calling the script. Since RM itself takes care of pushing the mof file, we need to explicitly call Set-DscLocalConfigurationManager in order to push the meta.mof file (line 22).

Now you can use this script just like you would any other DSC script in RM.

Setting up the Release

Utilizing this script in a Release is easy – create a component that has “Source” set to your build output folder (or a network share for deploying bins that are not built in TFS build) and set the deployment tool to “Run PowerShell on Standard Environment”. I’ve called my component “Run DSC Script”.


Now on the Release Template, right-click the Components node in the toolbox and add in the script component, then drag it onto the designer inside your server block (which you’ll also need to drag on from your list of servers). Then just set the paths and username and password correctly and you’re good to go. I’ve saved this script as “WebServerPreReqs.ps1” in the Scripts folder of my build output folder – you can see the path there in the ScriptPath parameter. My configData script is also in the scripts folder (remember the ScriptPath and ConfigurationFilePath are relative to the source path that you configure in the component). Now you can start a release!

Inspecting the Logs

Once the release has completed, you can open the tool logs for the “Run DSC Script” component and you’ll see two “sets” of entries. Both sets are prefixed with [SERVERNAME], indicating which node the logs pertain to. Here we can see a snippet of the Set-DscLocalConfigurationManager invocation logs (explicitly deploying the meta.mof):

[FABFIBERSERVER]: LCM:  [ Start  Set      ]
[FABFIBERSERVER]: LCM:  [ Start  Resource ]  [MSFT_DSCMetaConfiguration]
[FABFIBERSERVER]: LCM:  [ Start  Set      ]  [MSFT_DSCMetaConfiguration]
[FABFIBERSERVER]: LCM:  [ End    Set      ]  [MSFT_DSCMetaConfiguration]  in 0.0620 seconds.
[FABFIBERSERVER]: LCM:  [ End    Resource ]  [MSFT_DSCMetaConfiguration]
[FABFIBERSERVER]: LCM:  [ End    Set      ]
Operation 'Invoke CimMethod' complete.
Set-DscLocalConfigurationManager finished in 0.207 seconds.

Just after these entries, you’ll see a second set of entries – this time for the remainder of the DSC invocation that RM initiates (which deploys the mof):

An LCM method call arrived from computer FABFIBERSERVER with user sid S-1-5-21-3349151495-1443539541-1735948571-1106.
[FABFIBERSERVER]: LCM:  [ Start  Resource ]  [[WindowsFeature]WebServerRole]
[FABFIBERSERVER]: LCM:  [ Start  Test     ]  [[WindowsFeature]WebServerRole]
[FABFIBERSERVER]:                            [[WindowsFeature]WebServerRole] The operation 'Get-WindowsFeature' started: Web-Server
[FABFIBERSERVER]:                            [[WindowsFeature]WebServerRole] The operation 'Get-WindowsFeature' succeeded: Web-Server
[FABFIBERSERVER]: LCM:  [ End    Test     ]  [[WindowsFeature]WebServerRole]  in 0.8910 seconds.

In the next post I’ll look at using DSC to configure the rest of my webserver features as well as create a script for installing and configuring SQL Server. Then we’ll be in a good position to configure deployment of a web application (and its database) onto an environment that we know has everything it needs to run the application.

In the meantime – happy releasing!

Install and Configure SQL Server using PowerShell DSC


I’m well into my journey of discovering the capabilities of PowerShell DSC and Release Management’s DSC feature (See my previous posts: PowerShell DSC: Configuring a Remote Node to “Reboot If Needed”, Using PowerShell DSC in Release Management: The Hidden Manual and More DSC Release Management Goodness: Readying a Webserver for Deployment). I’ve managed to work out how to use Release Management to run DSC scripts on nodes. Now I am trying to construct a couple of scripts that I can use to deploy applications to servers – including, of course, configuring the servers – using DSC. (All scripts for this post are available for download here).

SQL Server Installation

To install SQL Server via a script, there are two prerequisites: the SQL install sources and a silent (or unattended) installation command.

Fortunately the SQL Server installer takes care of the install command for you – run the install wizard manually, specifying your installation options as you go. On the last page, just before clicking “Install”, you’ll see a path to the ini configuration file. I saved the configuration file and cancelled the install. Then I opened the config file and tweaked it slightly (see this post and this post for some tweaking ideas) – until I could run the installer from the command line (using the /ConfigurationFile switch). That takes care of the install command itself.
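The resulting unattended install command looks something like the one below (a sketch – the switches depend on the options captured in your ini file, the passwords are placeholders and D: is assumed to be the mounted install media). The same command appears later inside the DSC Script resource:

D:\Setup.exe /ConfigurationFile=c:\temp\ConfigurationFile.ini /SQLSVCPASSWORD=P2ssw0rd /AGTSVCPASSWORD=P2ssw0rd /SAPWD=P2ssw0rd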


There are many ways to make the SQL installation sources available to the target node. I chose to copy the ISO to the node (using the File DSC resource) from a network share, and then use a Script resource to mount the iso. Once it’s mounted, I can run the setup command using the ini file.

SQL Server requires .NET 3.5 to be installed on the target node, so I’ve added that into the script using the WindowsFeature resource. Here’s the final script:

Configuration SQLInstall
{
    param (
        [Parameter(Mandatory=$true)]
        [ValidateNotNullOrEmpty()]
        [String]
        $PackagePath,

        [Parameter(Mandatory=$true)]
        [ValidateNotNullOrEmpty()]
        [String]
        $WinSources
    )

    Node $AllNodes.where{ $_.Role.Contains("SqlServer") }.NodeName
    {
        Log ParamLog
        {
            Message = "Running SQLInstall. PackagePath = $PackagePath"
        }

        WindowsFeature NetFramework35Core
        {
            Name = "NET-Framework-Core"
            Ensure = "Present"
            Source = $WinSources
        }

        WindowsFeature NetFramework45Core
        {
            Name = "NET-Framework-45-Core"
            Ensure = "Present"
            Source = $WinSources
        }

        # copy the sqlserver iso
        File SQLServerIso
        {
            SourcePath = "$PackagePath\en_sql_server_2012_developer_edition_x86_x64_dvd_813280.iso"
            DestinationPath = "c:\temp\SQLServer.iso"
            Type = "File"
            Ensure = "Present"
        }

        # copy the ini file to the temp folder
        File SQLServerIniFile
        {
            SourcePath = "$PackagePath\ConfigurationFile.ini"
            DestinationPath = "c:\temp"
            Type = "File"
            Ensure = "Present"
            DependsOn = "[File]SQLServerIso"
        }

        #
        # Install SqlServer using ini file
        #
        Script InstallSQLServer
        {
            GetScript = 
            {
                $sqlInstances = gwmi win32_service -computerName localhost | ? { $_.Name -match "mssql*" -and $_.PathName -match "sqlservr.exe" } | % { $_.Caption }
                $res = $sqlInstances -ne $null -and $sqlInstances -gt 0
                $vals = @{ 
                    Installed = $res; 
                    InstanceCount = $sqlInstances.count 
                }
                $vals
            }
            SetScript = 
            {
                # mount the iso
                $setupDriveLetter = (Mount-DiskImage -ImagePath c:\temp\SQLServer.iso -PassThru | Get-Volume).DriveLetter + ":"
                if ($setupDriveLetter -eq $null) {
                    throw "Could not mount SQL install iso"
                }
                Write-Verbose "Drive letter for iso is: $setupDriveLetter"
                
                # run the installer using the ini file
                $cmd = "$setupDriveLetter\Setup.exe /ConfigurationFile=c:\temp\ConfigurationFile.ini /SQLSVCPASSWORD=P2ssw0rd /AGTSVCPASSWORD=P2ssw0rd /SAPWD=P2ssw0rd"
                Write-Verbose "Running SQL Install - check %programfiles%\Microsoft SQL Server\120\Setup Bootstrap\Log\ for logs..."
                Invoke-Expression $cmd | Write-Verbose
            }
            TestScript =
            {
                $sqlInstances = gwmi win32_service -computerName localhost | ? { $_.Name -match "mssql*" -and $_.PathName -match "sqlservr.exe" } | % { $_.Caption }
                $res = $sqlInstances -ne $null -and $sqlInstances -gt 0
                if ($res) {
                    Write-Verbose "SQL Server is already installed"
                } else {
                    Write-Verbose "SQL Server is not installed"
                }
                $res
            }
        }
    }
}

# command for RM
#SQLInstall -ConfigurationData $configData -PackagePath "\\rmserver\Assets" -WinSources "d:\sources\sxs"

# test from command line
SQLInstall -ConfigurationData configData.psd1 -PackagePath "\\rmserver\Assets" -WinSources "d:\sources\sxs"
Start-DscConfiguration -Path .\SQLInstall -Verbose -Wait -Force

Here’s some analysis:

  • (Line 7 / 12) The config takes in 2 parameters: $PackagePath (location of SQL ISO and config ini file) and $WinSources (Path to windows sources).
  • (Line 15) I changed my config data so that I can specify a comma-separated list of roles (since a node might be a SQLServer and a WebServer) so I’ve made the comparer a “contains” rather than an equals (as I’ve had in my previous scripts) – see the config script below.
  • (Line 22 / 29) Configure .NET 3.5 and .NET 4.5 Windows features, using the $WinSources path if the sources are required
  • (Line 37) Copy the SQL iso to the target node from the $PackagePath folder
  • (Line 46) Copy the ini file to the target node from the $PackagePath folder
  • (Line 58) Begins the Script to install SQL server
  • The Get-Script does a check to see if there is a SQL server service running. If there is, it returns the SQL instance count for the machine.
  • The Set-Script mounts the iso, saving the drive letter to a variable. Then I invoke the setup script (passing in the config file and required passwords) writing the output to Write-Verbose, which will appear on the DSC invoking machine as the script executes.
  • The Test-Script does the same basic “is there a SQL server service running” check. If there is, skip the install – else run the install. Of course this could be refined to ensure each and every component is installed, but I didn’t want to get that granular.
  • The last couple of lines of the script show the command for Release Management (commented out) as well as the command to run the script manually from a PowerShell prompt.

Here’s my DSC config script:

#$configData = @{
@{
    AllNodes = @(
        @{
            NodeName = "*"
            PSDscAllowPlainTextPassword = $true
         },

        @{
            NodeName = "fabfiberserver"
            Role = "WebServer,SqlServer"
         }
    );
}

# Note: different 1st line for RM or command line invocation
# use $configData = @{ for RM
# use @{ for running from command line

You can download the above scripts (and my SQL configuration ini file for reference) here.

What’s Next

After running this script, I have a server with SQL Server installed and configured according to my preferences (which are contained in the ini file). From here, I can run restores or dacpac deployments and so on. Of course this is going to be executed from within Release Management as part of the release pipeline.

Next up will be the full WebServer DSC script – and then we’ll be ready to tackle the actual application deployment, since we’ll have servers ready to host our applications.

Until then, happy releasing!

Bulk Migrate Work Item Comments, Links and Attachments


I was working at a customer that had set up a test TFS environment. When we set up their “real” TFS, they did a get-latest of their code and imported it – easy enough. They did have about 100 active work items that they also wanted to migrate. Not being a big fan of TFS Integration Platform, I usually recommend using Excel to port work items en masse.

There are a couple of “problems” with the Excel approach:

  1. When you create work items in the new Team Project, they have to go into the “New” state (or the first state for the work item)
  2. You can’t migrate test cases (since the steps don’t play nicely in Excel) – and you can’t migrate test results either.
  3. You can’t migrate comments, hyperlinks or attachments in Excel (other than opening each work item one by one)

You can mitigate the “new state” limitation by creating several sheets – one for “New” items, one for “Active” items, one for “Resolved” items and so on. The “New” items are easy – just import “as-is”. For the other states, import them into the “New” state and then bulk update the state to the “target” state. Keeping the sheets separated by state makes this easier to manage. Another tip I advise is to add a custom field to the new Team Project (you don’t have to expose it on the forms if you don’t want to) called “OldID” that you set to the id of the old work item – that way you’ve always got a link back to the original work item if you need it.
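If you’d rather add the OldID field from the command line than through a process template editor, the usual pattern is to export the work item type definition, add the field and import it again – a rough sketch (the collection URL, project name, type name and file name are all placeholders):

# export the work item type definition
witadmin exportwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /n:"Product Backlog Item" /f:pbi.xml

# edit pbi.xml and add a FIELD element for OldID (e.g. refname Custom.OldID, type Integer)

# import the modified definition back into the team project
witadmin importwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /f:pbi.xml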

For test cases, you have to go to the API to migrate them over to the new team project – I won’t cover that topic in this post.

For comments, hyperlinks and attachments I quickly wrote a PowerShell script that does exactly that! I’ve uploaded it to OneDrive so you can download it here.

Here’s the script itself:

$oldTpcUrl = "http://localhost:8080/tfs/oldCollection"
$newTpcUrl = "http://localhost:8080/tfs/newCollection"

$csvFile = ".\map.csv" #format: oldId, newId
$user = "domain\user"
$pass = "password"

[Reflection.Assembly]::LoadWithPartialName('Microsoft.TeamFoundation.Common')
[Reflection.Assembly]::LoadWithPartialName('Microsoft.TeamFoundation.Client')
[Reflection.Assembly]::LoadWithPartialName('Microsoft.TeamFoundation.WorkItemTracking.Client')

$oldTpc = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($oldTpcUrl)
$newTpc = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($newTpcUrl)

$oldWorkItemStore = $oldTpc.GetService([Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore])
$newWorkItemStore = $newTpc.GetService([Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore])

$list = Import-Csv $csvFile
$cred = new-object System.Net.NetworkCredential($user, $pass)

foreach($map in $list) {
    $oldItem = $oldWorkItemStore.GetWorkItem($map.oldId)
    $newItem = $newWorkItemStore.GetWorkItem($map.newId)

    Write-Host "Processing $($map.oldId) -> $($map.newId)" -ForegroundColor Cyan
    
    foreach($oldLink in $oldItem.Links | ? { $_.BaseType -eq "HyperLink" }) {
        Write-Host "   processing link $($oldLink.Location)" -ForegroundColor Yellow

        if (($newItem.Links | ? { $_.Location -eq $oldLink.Location }).count -gt 0) {
            Write-Host "      ...link already exists on new work item"
        } else {
            $newLink = New-Object Microsoft.TeamFoundation.WorkItemTracking.Client.Hyperlink -ArgumentList $oldLink.Location
            $newLink.Comment = $oldLink.Comment
            $newItem.Links.Add($newLink)
        }
    }

    if ($oldItem.Attachments.Count -gt 0) {
        foreach($oldAttachment in $oldItem.Attachments) {
            mkdir $oldItem.Id | Out-Null
            Write-Host "   processing attachment $($oldAttachment.Name)" -ForegroundColor Magenta

            if (($newItem.Attachments | ? { $_.Name.Contains($oldAttachment.Name) }).count -gt 0) {
                Write-Host "      ...attachment already exists on new work item"
            } else {
                $wc = New-Object System.Net.WebClient
                $file = "$pwd\$($oldItem.Id)\$($oldAttachment.Name)"

                $wc.Credentials = $cred
                $wc.DownloadFile($oldAttachment.Uri, $file)

                $newAttachment = New-Object Microsoft.TeamFoundation.WorkItemTracking.Client.Attachment -ArgumentList $file, $oldAttachment.Comment
                $newItem.Attachments.Add($newAttachment)
            }
        }
    
        try {
            $newItem.Save();
            Write-Host "   Attachments and links saved" -ForegroundColor DarkGreen
        }
        catch {
            Write-Error "Could not save work item $newId"
            Write-Error $_
        }
    }

    $comments = $oldItem.GetActionsHistory() | ? { $_.Description.length -gt 0 } | % { $_.Description }
    if ($comments.Count -gt 0){
        Write-Host "   Porting $($comments.Count) comments..." -ForegroundColor Yellow
        foreach($comment in $comments) {
            Write-Host "      ...adding comment [$comment]"
            $newItem.History = $comment
            $newItem.Save()
        }
    }
    
    Write-Host "Done!" -ForegroundColor Green
}

Write-Host
Write-Host "Migration complete"

When you run this, open the script and fix the top 5 lines (the variables for this script). Enter the Team Project Collection URLs (these can be the same if you’re migrating links from one Team Project to another in the same Collection). The person running the script needs to have read permissions on the old server and contributor permissions on the new server. You then need to make a csv file with 2 columns: oldId and newId. Populate this with the mapping from the old work item Id to the new work item Id. Finally, enter a username and password (this is simply for fetching the attachments) and you can run the script.
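For reference, the csv file is just two columns with a header row matching the property names the script uses – for example (the ids below are made up):

oldId,newId
101,1
102,2
103,3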

Happy migrating!

Branch Is Not Equal to Environment: CODE-PROD Branching Strategy


Over the last couple of months I’ve done several implementations and upgrades of TFS 2013. Most organizations I work with are not developing boxed software – they’re developing websites or apps for business. The major difference is that boxed software often has more than one version of a product “in production” – some customers will be on version 1.0 while others will be on version 2.0 and so on. In this model, branches for each major version – with hot-fix branches where necessary – are a good way to keep these code bases separate while still being able to merge bug fixes across versions. However, I generally find that this is overkill for a “product” that only ever has one version in production at any one time – like internal applications or websites.

In this case, a well-established branching model is Dev-Main-Live.

Dev-Main-Live


Dev-Main-Live (or sometimes Dev-Integration-Prod or other variants) is a fairly common branching model – new development is performed on the Dev branch (with multiple developers coding simultaneously). When changes are to be tested, they are merged to Main. There the code is tested in a test or UAT environment, and when testing is complete the changes are merged to Live before being deployed to production. This means that if there are production issues (what? we have bugs?!?) those can be fixed on the Live branch – thus they can be tested and deployed independently from the Dev code, which may not be production ready.

There are some issues with this approach:

  1. You shouldn’t be taking so long to test that you need a separate Main branch. I only advise this for extensive test cycles – but you should be aiming to shorten your test cycles anyway. This makes the Main branch fairly obsolete – I’ve seen teams who always “merge through” Main to get changes from Dev to Live – so I’ve started advising getting rid of the Main branch altogether.
  2. If you build code from Main, deploy it to Test and sign-off, you have to merge to Live before doing a build from the Live branch. This means that what you’re deploying isn’t what you tested (since you tested pre-merge). I’ve seen some teams deploy from the Main branch build, wait for several days, and then merge to the Live branch. Also a big no-no!
  3. Usually bug fixes that are checked in on the Live branch don’t make it back to the Dev branch since you have to merge through Main – so the merge of new dev and bug fixes on the Live branch get done when Dev gets merged onto Live (through Main). This is too late in the cycle and can introduce merge bugs or rework.

This model seems to work nicely since the branches “represent” the environments – what I have in Dev is in my dev environment, what’s on Main is in my Test environment and what’s in Live is in production, right? This “branch equals environment” mindset is actually hard to manage, so I’m starting to recommend a new approach.

The Solution: Code-Prod with Builds


So how should you manage code separation as well as know what code is in which environment at any time? The answer is to simplify the branching model and make use of builds.

In this scenario new development is done on the CODE branch (the name is to consciously separate the idea of the branch from the environment). When you’re ready to go to production, merge into PROD and do a build. The TFS build will (by default) label the code that is used to build the binaries, and if you use my versioning script you’ll be able to tie the binary version to the build label, so you can always match binaries to builds. That means you’ll be able to recreate a build, even if you lose the binaries somehow.

So now you have built “the bits” – notice how there is no mention of environment yet. You should be thinking of build and deploy as separate activities. Why? Because then you’ll be able to build a single package that can be deployed (and tested) in a number of environments. Of course you’re going to have to somehow manage configuration files for your different environments – for web projects you can refer to my post about how to parameterize the web.config so that you can deploy to any environment (the post is specific to Release Management, but the principles are the same for other deployment mechanisms and for any type of application that needs different configurations for different environments).

Deployment – To Lab or To Release?

Let’s start off considering the “happy path” – you’ve done some coding in CODE, merged to PROD and produced a “production build”. It needs to be tested (of course you’ve already unit tested as part of your build). Now you have two choices – Lab Management or Release Management. I like using a combination of Lab and Release, since each has some good benefits. You can release to test using Lab Management (including automated deploy and test) so that your testers have an environment to test against – Lab Management allows rich data diagnostic collection during both automated and manual testing. You then use Release Management to get the bits into the release pipeline for deployment to UAT and Production environments, including automated deployment workflows and sign-offs. This way you only get builds into the release pipeline that have passed several quality gates (unit testing, automated UI testing and even manual testing) before getting into UAT. Irrespective of what approach you take, make sure you can take one build output and deploy it to multiple environments.

But What About Bugs in Production?

If you get bugs in production before you do the merge, the solution is simple – fix the bug on the PROD branch, then build, test and release back to production. No messy untested dev CODE anywhere.

But what do you do if you have bugs after your merge, but before you’ve actually deployed to production? Hopefully you’re moving towards shorter release / test cycles, so this window should be short (and rare). But even if you do hit this scenario, there is a way to do the bug fix and keep untested code out. It’s a bit complicated (so you should be trying to avoid this scenario), but let me walk you through the scenario.

Let’s say we have a file in a web project called “Forecast.cs” that looks like this:

public class Forecast
{
    public int ID { get; set; }

    public DateTime Date { get; set; }
    public DayOfWeek Day 
    {
        get { return Date.DayOfWeek; } 
    }

    public int Min { get; set; }

    public int Max { get; set; }
}

We’ve got a PROD build (1.0.0.4) and the label for 1.0.0.4 shows this file to be on version 51.


We now make a change and add a property called “CODEProperty” (line 15) on the CODE branch:

public class Forecast
{
    public int ID { get; set; }

    public DateTime Date { get; set; }
    public DayOfWeek Day 
    {
        get { return Date.DayOfWeek; } 
    }

    public int Min { get; set; }

    public int Max { get; set; }

    public int CODEProperty { get; set; }
}

We then check-in, merge to PROD and do another build (1.0.0.5). This version is then deployed out for testing in our UAT environment. Forecast.cs is now on version 53 in the 1.0.0.5 label, while all other files are on 51.


Suddenly, the proverbial paw-paw hits the fan and there’s an urgent business-stopping bug in our currently deployed production version (1.0.0.4). So we go to source control, search for the 1.0.0.4 label in the PROD branch that the build created and select “Get This Version” to get the 1.0.0.4 version locally.

We fix the bug (by adding a property called “HotfixProperty” – line 15 below). Note how there is no “CODEProperty” since this version of Forecast is before the CODEProperty checkin.

public class Forecast
{
    public int ID { get; set; }

    public DateTime Date { get; set; }
    public DayOfWeek Day 
    {
        get { return Date.DayOfWeek; } 
    }

    public int Min { get; set; }

    public int Max { get; set; }

    public int HotfixProperty { get; set; }
}

Since we’re not on the latest version (we did a “Get by Label”) we won’t be able to check in. So we shelve the change (calling the shelveset “1.0.0.4 Hotfix”). We then open the build template and edit the Get Version property, telling the build to get 1.0.0.4 by specifying L followed by the label name – so the full “Get version” value is LPROD_1.0.0.4:

Next we queue the build, telling the build to apply the Shelveset too:

We won’t be able to “Check in changes after successful build” since the build won’t be building with the Latest version. We’ll have to do that ourselves later. The build completes – we now have build 1.0.0.6 which can be deployed straight to production to “handle” the business-stopping bug.

Finally we do a Get Latest of the solution in PROD, unshelve the shelveset to merge the hotfix with the development code, clear the Get version property on the build and queue the next build, which includes both the changes from CODE as well as the hotfix from PROD. This build is now 1.0.0.7. Meanwhile, testing is completed on 1.0.0.5, so we can fast-track the testing for 1.0.0.7 to release the new CODEProperty feature, including the hotfix from build 1.0.0.6.

Here’s a summary of what code is in what build:

  • 1.0.0.4 – baseline PROD code
  • 1.0.0.5 – CODEProperty change coming from a merge from CODE branch into PROD branch
  • 1.0.0.6 – baseline PROD plus the hotfix shelveset, which includes the HotfixProperty (and no CODEProperty at all)
  • 1.0.0.7 – CODEProperty merged with HotfixProperty

Here’s the 1.0.0.7 version of Forecast.cs (see lines 15 and 17):

public class Forecast
{
    public int ID { get; set; }

    public DateTime Date { get; set; }
    public DayOfWeek Day 
    {
        get { return Date.DayOfWeek; } 
    }

    public int Min { get; set; }

    public int Max { get; set; }

    public int CODEProperty { get; set; }

    public int HotfixProperty { get; set; }
}

If we turn on Annotation, you’ll see that CODEProperty is changeset 52 (in green below), and HotfixProperty is changeset 54 (in red below):


Yes, it’s a little convoluted, but it’ll work – the point is that this is possible without a 3rd branch in Source Control. Also, you should be aiming to shorten your test / release cycles so that this situation is very rare. If you hit this scenario often, you could introduce the 3rd branch (call it INTEGRATION or MAIN or something) that can be used to isolate bug-fixes in PROD from new development in CODE that isn’t ready to go out to production.

Here’s a summary of the steps if there is a bug in current production when you haven’t deployed the PROD code (after a merge from CODE) to production yet:

  1. PROD code is built (1.0.0.4) and released to production.
  2. CODE is merged to PROD and build 1.0.0.5 is created, but not deployed to production yet
  3. Get by Label – the current PROD label (1.0.0.4)
  4. Fix the bug and shelve your changes
  5. Edit the build to change the Get version to the current PROD label (1.0.0.4)
  6. Queue the build with your hotfix shelveset (this will be build 1.0.0.6)
  7. Test and deploy the hotfix version (1.0.0.6) to production
  8. Get Latest and unshelve to merge the CODE code and the hotfix
  9. Clear the Get version field of the build and queue the new build (1.0.0.7)
  10. Test and deploy to production
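For reference, steps 3 and 4 can also be done from a PowerShell prompt with tf.exe – a rough sketch, assuming an existing workspace mapping and using the label name from the walkthrough above (the server path is a placeholder):

# step 3: get the sources as they were labelled by the 1.0.0.4 build
tf get '$/FabFiber/PROD' /version:LPROD_1.0.0.4 /recursive

# step 4: after fixing the bug, shelve the pending changes instead of checking in
tf shelve '1.0.0.4 Hotfix'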

Conclusion

The key to good separation of work streams is to not mistake the branch for the environment, nor confuse build with deploy. Using the CODE-PROD branching scenario, builds with versioning and labels, parameterized configs and Lab/Release management you can:

  • Isolate development code from production code, so that you can do new features while still fixing bugs in production and not have untested development pollute the hotfixes
  • Track which code is deployed where (using binary versions and labels)
  • Recreate builds from labels
  • Deploy a single build to multiple environments, so that what you test in UAT is what you deploy to production

Happy building and deploying!

A Day of DevOps, Release Management, Software Quality and Agile Project Requirements Management


Unfortunately there is no TechEd Africa this year – Microsoft have opted to go for smaller, more focused sessions in multiple cities (or at least that’s what I gather). I think it’s a shame, since the TechEd Africa event was always fantastic – and who doesn’t like getting out the office for a couple of days?

Anyway, the good news is that Microsoft are hosting “A Day of DevOps, Release Management, Software Quality and Agile Project Requirements Management” in Cape Town (Wed 10th at Crystal Towers Hotel) and in Johannesburg (on Mon 15th at the Microsoft offices on William Nicol). I’ll be presenting at both events, so make sure you get along.

The event has 2 themes – DevOps in the morning and Agile Project Management in the afternoon. You can attend one or the other or both events.

I’m particularly excited about the DevOps session – here’s some of the content that I’ll be covering:

  • Release management and automation, release pipelines and approvals to accelerate deployment to operations, including using DSC, Chef and Puppet
  • Treating Configuration as Code
  • Application Insights 
  • Cloud Based Load Testing
  • Production Debugging and Monitoring
  • Leveraging Azure for DevOps and Dev/Test Environments
  • System Centre and TFS Integration

For more details, go to the Microsoft SA Developer blog post here.

Hope to see you there!


Test Result Traceability Matrix Tool


I am often asked if there is a way to see a “traceability matrix” in TFS. Different people define a “traceability matrix” in different ways. If you want to see how many tests there are for a set of requirements, then you can use SmartExcel4TFS. However, this doesn’t tell you what state the current tests are in – so you can’t see how many tests are passing / failing etc.

Test Points

Of course this is because there is a difference between a test case and a test point in TFS. A test point is the combination of Test Case, Test Suite and Test Configuration. So let’s say you have Test ABC in Suite 1 and Suite 2 and have it for 2 configurations (Win7 and Win8, for example). Then you’ll really have 1 test case and 4 test points (2 suites x 2 configurations). So if you want to know “is this test case passing?” you really have to ask, “Is this test case passing in this suite and for this configuration?”.

However, you can do a bit of a “cheat” by making an assumption: if the most recent result is Pass/Fail/Not Run/Blocked, then assume the “result of the test” is Pass/Fail/Not Run/Blocked. Of course if the “last result” is failed, you’d have to find exactly which suite/configuration the failure relates to in order to get any detail. Anyway, for most situations this assumption isn’t too bad.

Test Result Traceability Matrix Tool

Given the assumption that the most recent test point result is the “result” of the Test Case, it’s possible to create a “test result traceability matrix”. If you plot Requirement vs Test Case in a grid, and then color the intersecting cells with the appropriate “result”, you can get a good idea of what state tests are in in relation to your requirements. So I’ve written a utility that will generate this matrix for you (see the bottom of this post for the link).

Here’s the output of a run:

The first 3 columns are:

  • Requirement ID
  • Requirement Title
  • Requirement State

Then I sum the total of the test case results per category for that requirement – you can see that Requirement 30 has 2 Passed Tests and 1 Failed test (also 0 blocked and 0 not run). If you move along the same row, you’ll see the green and red blocks where the test cases intersect with their requirements. The colors are as follows:

  • Green = Passed
  • Red = Failed
  • Orange = Blocked
  • Blue = Not Run

You can see I’ve turned on conditional formatting for the 4 totals columns. I’ve also added filtering to the header, so you can sort / filter the requirements on id, title or state.

Some Notes

This tool requires the following arguments:

  1. TpcUrl – the URL to the team project collection
  2. ProjectName – the name of the Team Project you’re creating the matrix for
  3. (Optional) RequirementQueryName – if you don’t specify this, you’ll get the matrix for all requirements in the team project. Alternatively, you can create a flat-list query to return only requirements you want to see (for example all Stories in a particular area path) and the matrix will only show those requirements.

I speak of “requirements” – the tool essentially gets all the work items in the “requirements category” as a top-level query and then fetches all work items in the “test case category” that are linked to the top-level items. So this will work as long as your process template has a Requirements / Test Case category.

The tool isn’t particularly efficient – so if you have large numbers of requirements, test cases and test plans the tool could take a while to run. Also, the tool selects the first “requirementsQuery” that matches the name you pass in – so make sure the name of your requirements query is unique. The tool doesn’t support one-hop or tree queries for this query either.

Let me know what you think!

Download

Here’s a link to the executable: you’ll need Team Explorer 2013 and Excel to be installed on the machine you run this tool from. To run it, download and extract the zip, then open up a console and run TestResultMatrix.exe.

Happy matrix generating!

Source Control Operations During Deployments in Release Management


Before we start: Don’t ever do this.

But if you really have to, then it can be done. There are actually legitimate cases for doing source control operations during a deployment. For example, you don’t have source control and you get “files” from a vendor that need to be deployed to servers. Or you have a vendor application that has “extensions” that are just some sort of script file that is deployed onto the server – so you don’t compile anything for customizations. Typically these sorts of applications are legacy applications.

Simple Solution: Install Team Explorer on the Target Servers

The simplest way to do source control operations is just to install Team Explorer on your target server. Then you can use the “Run Command” tool from Release Management and invoke tf.exe directly, or create a script that does a number of tf operations.
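For example, with Team Explorer 2013 installed on the target server, such a script could call tf.exe directly – a minimal sketch (the tf.exe path is the default Team Explorer 2013 install location and the server path is a placeholder):

$tf = "${env:ProgramFiles(x86)}\Microsoft Visual Studio 12.0\Common7\IDE\TF.exe"

# get the latest version of the vendor "extension" files from source control
& $tf get '$/VendorApp/Extensions' /recursive /version:T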

However, I was working at a customer where they have hundreds of servers, so they don’t want to have to manually maintain Team Explorer on all their servers.

Creating a TF.exe Tool

Playing around a bit, I realized that you can actually invoke tf.exe on a machine that doesn’t have Team Explorer. You copy tf.exe to the target machine – as well as all its dependencies – and you’re good to go. Fortunately it’s not a huge list of files – around 20 altogether.

That covers the exe itself – however, a lot of TF commands are “location dependent” – they use the directory you’re in to give context to the command. For example, running “tf get” will get files for the current directory (assuming there is a mapping in the workspace). When RM deploys a tool to the target server, it copies the tool files to a temporary directory and executes them from there. This means that we need a script that can “remember” the path where the tool (tf.exe) is but execute from a target folder on the target server.

PowerShell is my scripting language of choice – so here’s the PowerShell script to wrap the tf.exe call:

param(
    [string]$targetPath,
    [string]$tfArgs
)

try {
    $tf = "$pwd\tf.exe"
    Push-Location

    if (-not(Test-Path $targetPath)) {
        mkdir $targetPath
    }

    cd $targetPath
    &$tf $tfArgs.Split(" ")

    if (-not($?)) {
        throw "TF.exe failed"
    }
}
finally {
    Pop-Location
}

 

Notes:

  • Line 2: We pass in the $targetPath – this is the path on the target server we want to perform tf commands from
  • Line 3: We pass in $tfArgs – these are the arguments to pass to tf.exe
  • Line 7-8: get the path to tf.exe and store it
  • Line 10-12: if the $targetPath does not exist, create it
  • Line 14: change directory to the $targetPath
  • Line 15: Invoke tf.exe passing the $tfArgs we passed in as parameters
  • Line 17-19: Since this script invokes tf.exe, you could get a failure from the invocation, but have the script still “succeed”. In order to make sure the deployment fails if tf.exe fails, we need to check if the tf.exe invocation succeeded or not – that’s what these lines are doing
  • Line 22: Change directory back to the original directory we were in – not strictly necessary, but “clean”

Here’s the list of dependencies for tf.exe:

  • Microsoft.TeamFoundation.Build.Client.dll
  • Microsoft.TeamFoundation.Build.Common.dll
  • Microsoft.TeamFoundation.Client.dll
  • Microsoft.TeamFoundation.Common.dll
  • Microsoft.TeamFoundation.TestManagement.Client.dll
  • Microsoft.TeamFoundation.VersionControl.Client.dll
  • Microsoft.TeamFoundation.VersionControl.Common.dll
  • Microsoft.TeamFoundation.VersionControl.Common.Integration.dll
  • Microsoft.TeamFoundation.VersionControl.Common.xml
  • Microsoft.TeamFoundation.VersionControl.Controls.dll
  • Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
  • Microsoft.TeamFoundation.WorkItemTracking.Client.dll
  • Microsoft.TeamFoundation.WorkItemTracking.Client.QueryLanguage.dll
  • Microsoft.TeamFoundation.WorkItemTracking.Common.dll
  • Microsoft.TeamFoundation.WorkItemTracking.Proxy.dll
  • Microsoft.VisualStudio.Services.Client.dll
  • Microsoft.VisualStudio.Services.Common.dll
  • TF.exe
  • TF.exe.config

Open up the Release Management client and navigate to Inventory->Tools. Click New to create a new tool, and specify a good name and description. For the command, specify “powershell” and for arguments type the following:

-command ./tf.ps1 -targetPath '__TargetPath__' -tfArgs '__TFArgs__'

Note that the quotes around the parameters __TargetPath__ and __TFArgs__ should be single-quotes.

Finally, click “Add” on the Resources section and add all the tf files – don’t forget the tf.ps1 file!

image

Creating TF Actions

Once you have the tf.exe tool, you can then create TF.exe actions – like “Create Workspace” and “Get Files”. Let’s do “Create Workspace”:

Navigate to Inventory->Actions and click “New”. Enter an appropriate name and description. I created a new Category called “TFS Source Control” for these actions, but this is up to you. For “Tool used” specify the TF.exe tool you just created. When you select this tool, it will bring in the arguments for the tool – we’re going to edit those to be more specific for this particular Action. I set my arguments to:

-command ./tf.ps1 -targetPath '__TargetPath__' -tfArgs 'workspace /new /noprompt /collection:http://rmserver:8080/tfs/__TPC__ "__WorkspaceName__"'

(Note where the single and double quotes are).

The parameters are as follows:

  • __TargetPath__: the path we want to create the workspace in
  • __TPC__: the name of the Team Project Collection in the rmserver TFS – this can be totally hardcoded (if you only have one TFS server) or totally dynamic (if you have multiple TFS servers). In this case, we have a single server but can run deployments for several collections, so that’s why this parameter is “partly hardcoded” and “partly dynamic”
  • __WorkspaceName__: the name we want to give to the workspace
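
The other actions follow the same pattern – only the tfArgs change. As a sketch, a "TF Get" action (which I use later in the template) could use arguments like these; the __SourcePath__ parameter and the /version value are my own illustrative choices:

-command ./tf.ps1 -targetPath '__TargetPath__' -tfArgs 'get "__SourcePath__" /version:T /recursive'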

Using Create Workspace Action in a Release Template

Now that you have the action, you can use it in a release template:

image

Here you can see that I've created some other actions (Delete Workspace and TF Get) to perform other TF.exe commands. This workflow deletes the workspace called "Test", then creates a new workspace in the "c:\files" folder, and then gets a folder from source control. From there, I can copy or run or do whatever I need to with the files I got from TFS.

Happy releasing from Source Control (though you can’t really be happy about this – it’s definitely a last-resort).

New vNext Config Variable Options in RM Update 4 RC


Update 4 RC for Release Management was released a few days ago. There are some good improvements – some are minor, like the new "Agent-based" label, which makes it easier to distinguish agent-based from non-agent-based templates and components. Others are quite significant – like being able to use the Manual Intervention activity and tags in vNext templates, being able to use server drops as a release source, and others. By far my favorite new feature of the update is the new variable capabilities.

Variables: System, Global, Server, Component and Action

Be aware that, unfortunately, these capabilities are only for vNext components (so they won’t work with regular agent-based components or workflows). It’s also unlikely that agent-based components will ever get these capabilities. I’ve mentioned before that I think PowerShell DSC is the deployment mechanism of the future, so you should be investing in it now already. If you’re currently using agent-based components, they do have variables that can be specified at design-time (in the deployment workflow surface) – just as they’ve always had.

The new vNext variable capabilities allow you to use variables inside your PowerShell scripts without having to pass them or hard-code them. For example, if you define a global variable called “MyGlobalVar” you can just use it by accessing $MyGlobalVar in your PowerShell script.
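
As a minimal sketch (the variable name and file path are purely illustrative), a deployment script could consume such a variable directly – RM injects it into the session, so there's no param block or hard-coded value needed:

# $MyGlobalVar is assumed to be defined as a global configuration variable in RM
Write-Verbose "MyGlobalVar is set to [$MyGlobalVar]" -Verbose
Set-Content -Path "c:\temp\deploy-settings.txt" -Value $MyGlobalVar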

Global Variables

Global variables are defined under “Administration->Settings->Configuration Variables”. Here you can define variables, giving them a name, type, default value and description.

image

Server Variables

Server variables can be defined on vNext servers under “Configure Paths->Servers”, in the same format as the global variables above.

image

Component Variables

vNext components can now have configuration variables defined on them “at design time”.

image

You can also override values and even specify additional configuration variables when you add the “DSC” component onto the design surface:

image

Another cool new feature is the fact that ComponentName and ServerName are now dropdown lists on the “Deploy using DSC/PS” and “Deploy using Chef” activities, so you don’t have to type them manually:

image

All these variables are available inside the script by simply using $variableName. You may even get to the point where you no longer need a PSConfiguration file at all!

You can also see all your variables by opening the “Resource Variables” tab:

image

System Variables

RM now exposes a number of system variables for your scripts. These are as follows:

  • Build directory
  • Build number (for component in the release)
  • Build definition (for component)
  • TFS URL (for component)
  • Team project (for component)
  • Tag (for server which is running the action)
  • Application path (destination path where component is copied)
  • Environment (for stage)
  • Stage

You can access these variables easily by simply using $name (for example: $BuildDirectory or $Stage). If you mouse over the “?” icon on the right of the Component or Server screens, the tooltip will tell you which variables you have access to.
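
As a quick sketch of what that looks like in a script – $BuildDirectory, $Stage and $applicationPath appear elsewhere in this post, while $BuildNumber is my assumption for the build number variable (confirm the exact name via that tooltip):

# Log a few of the RM system variables during deployment
Write-Verbose "Deploying build [$BuildNumber] from drop [$BuildDirectory]" -Verbose
Write-Verbose "Target stage is [$Stage]; component copied to [$applicationPath]" -Verbose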

image

Release Candidate

Finally, remember that this Release Candidate (as opposed to the CTPs) is “go-live”, so you can install it on your production TFS servers and upgrading to the RTM will be supported. There may be minor glitches with the RC, but you’ll get full support from Microsoft if you encounter any.

Happy releasing!

Matching Binary Version to Build Number Version in TFS 2013 Builds


Jim Lamb wrote a post about how to use a custom activity to match the compiled versions of your assemblies to the TFS build number. This was not a trivial exercise (since you have to edit the workflow itself) but is the best solution for this sort of operation. Interestingly the post was written in November 2009 and updated for TFS 2010 RTM in February 2010.

I finally got a chance to play with a VM that’s got TFS 2013 Preview installed. I was looking at the changes to the build engine. The Product Team have simplified the default template (they’ve collapsed a lot of granular activities into 5 or 6 larger activities). In fact, if you use the default build template, you won’t even see it (it’s not in the BuildProcessTemplates folder – you have to download it if you want to customize it).

The good news is that the team have added pre- and post-build and pre- and post-test script hooks into the default workflow. I instantly realised this could be used to solve the assembly-version-matches-build-number problem in a much easier manner.

Using the Pre-Build Script

The solution is to use a PowerShell script that can replace the version in the AssemblyInfo files before compiling with the version number in the build. Here’s the procedure:

  1. Import the UpdateVersion.ps1 script into source control (the script is below)
  2. Change the build number format of your builds to produce something that contains a version number
  3. Point the pre-build script argument to the source control path of the script in step 1

The script itself is pretty simple – find all the matching files (AssemblyInfo.* by default) in a target folder (the source folder by default). Then extract the version number from the build number using a regex pattern, and do a regex replace on all the matching files.
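
For example, a build number format along these lines (the prefix is entirely up to you) produces a four-part version that the script's default regex can extract:

$(BuildDefinitionName)_1.0.0$(Rev:.r)

This yields something like MyWebSite_1.0.0.7, from which the script picks out 1.0.0.7.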

If you’re using TFVC, the files are marked read-only when the build agent does a Get Latest, so I had to remove the read-only bit as well. The other trick was getting the source path and the build number – but you can use environment variables when executing any of the pre- or post- scripts (as detailed here).

Param(
  [string]$pathToSearch = $env:TF_BUILD_SOURCESDIRECTORY,
  [string]$buildNumber = $env:TF_BUILD_BUILDNUMBER,
  [string]$searchFilter = "AssemblyInfo.*",
  [regex]$pattern = "\d+\.\d+\.\d+\.\d+"
)

try
{
    if (-not ($buildNumber -match $pattern)) {
        Write-Host "Could not extract a version from [$buildNumber] using pattern [$pattern]"
        exit 1
    } else {
        $extractedBuildNumber = $Matches[0]
        Write-Host "Using version $extractedBuildNumber"

        gci -Path $pathToSearch -Filter $searchFilter -Recurse | %{
            Write-Host "  -> Changing $($_.FullName)" 
        
            # remove the read-only bit on the file
            sp $_.FullName IsReadOnly $false

            # run the regex replace
            (gc $_.FullName) | % { $_ -replace $pattern, $extractedBuildNumber } | sc $_.FullName
        }

        Write-Host "Done!"
    }
}
catch {
    Write-Host $_
    exit 1
}

Save this script as “UpdateVersion.ps1” and put it into Source Control (I use a folder called $/Project/BuildProcessTemplates/CommonScripts to house all the scripts like this one for my Team Project).


image


Then open your build and specify the source control path to the pre-build script (leave the arguments empty, since they’re all defaulted) and add a version number to your build number format. Don’t forget to add the script’s containing folder as a folder mapping in the Source Settings tab of your build.


image


image


Now you can run your build, and your assembly (and exe) versions will match the build number:


image


I’ve tested this script using TFVC as well as a TF Git repository, and both work perfectly.


Happy versioning!

Moving to Northwest Cadence


image

That’s right – I’m moving from Imaginet to Northwest Cadence (NWC) at the end of this month. That’s the high-level version – read no further if you don’t need back-story! A huge thanks to all at Imaginet for an awesome 4 years. It was a pleasure working with all of you.

Getting into ALM

I’m often asked how I got into ALM. Well, I studied Computer Science up to Masters at Rhodes University (which was an awesome 6 years!). When I left in 2002 I knew I could program – and one of my favorite courses of all time by Prof. Terry was building our own parser and compiler (using Pascal) so I knew I had enough meta-knowledge to learn any language. My first job was for a small company (Systems Fusion) that made ISP software using C++ and CORBA on Linux. Those were the bad old days – CVS for source control and builds that took about 3 hours using make files. Blegh!

During that time I got married, and my wife and I didn’t enjoy the Johannesburg lifestyle. So I called up an old study mate who happened to be working for a financial services company in East London. In Oct 2004, we moved to East London and I joined Real People (a financial services company). We had around 20 developers in the MS stack – SQL server, webservices and ASP.NET websites. The team were very cowboy, using zip files for source control and deploying to Production to test. Even though I didn’t know what ALM was at that time, I knew this wasn’t a sustainable way to do development.

After 2 or 3 months, I got my grubby paws on Team Foundation Server 2005 beta 2. I (eventually) got it installed and configured and we adopted TFS as our primary ALM tool. Our processes were still chaotic, but at least we were starting to utilize good source control, branching and even automated builds. Over the next 5 years, I became the architect and ALM-guy for the team – when I left in Oct 2010 we had around 60 developers, 15 team projects, 250 build definitions and I’d done countless customizations to work items, templates, reports and builds. I’d also learned (intuitively) a lot about process – the good, the bad and the ugly! We did a lot of things right in those days – but we also had lots of room for improvement.

During my time as the “accidental admin” for TFS, I spent a lot of time on forums and blogs as I tried to figure out how to do stuff in TFS. Sadly, it appeared that there were very few people in South Africa that were doing any ALM using TFS. Or if they were, they certainly weren’t publishing any content!

Notion Solutions

In 2009 I attended my first TechEd Africa. There I listened to an ALM talk by legend Chris Menegay, who explained that he ran a company called Notion Solutions that did ALM consulting in the US. I loved the idea and thought that there was probably some scope for ALM consulting in SA. I saw Chris again at TechEd Africa in 2010 and this time asked if he did any work in South Africa. We hit it off and Chris offered to hire me or help me start ALM Consulting in SA. Eventually we started Notion South Africa in Oct of 2010, and I officially became an ALM consultant. At around that time, Notion Solutions became part of Imaginet.

Seattle

When I was studying my Masters, I was sponsored by a company in Seattle – a startup that was focusing on FireWire technologies. During June of 2000, the company flew me to Seattle for a 3 week working holiday. It was my first time flying (I was 22 at the time) and my first time to the US. I instantly fell in love with Seattle – I got to see some of the city, meet a few people and even go camping while I was there. I didn’t even mind the rain! In fact, I loved it so much I decided to move there.

Unfortunately, the startup company was unable to offer me a position at the end of 2001 when I graduated – they’d gone under in the dot bomb. And so my dream of moving to Seattle went cold.

MVP Award

Zoom forward to 2011 – I was awarded my MVP award in ALM. In Feb 2012, I got to attend the MVP Summit in Bellevue, Seattle. It was great being back in one of my favorite cities of all time! I attended the summit in Feb 2013, and then the summit was changed to Nov, so there was another summit in Nov 2013. Each time I visited Seattle, I was more and more keen to live and work there. Also, one of my best mates from Rhodes moved there in 2006 or so to work for Adobe – even more incentive to move there!

At MVP Summit in Feb, 2012, I shared a room with Chris Menegay – an experience in itself! When we were chatting, I mentioned that I’d love to live in Seattle. Chris said I should chat to Steven Borg, cofounder and strategist of Northwest Cadence – an ALM company based in Seattle. I didn’t approach Steve then – I wasn’t ready for a move yet. However, at Summit in Feb 2013, I made a tongue-in-cheek comment to Martin Hinshelwood: “If I moved to Seattle, would NWC hire me?”. He immediately suggested that I chat to Steve (I didn’t know that Martin was in the process of moving back to Scotland), which I did. Steve and I hit it off from the first conversation we had, and over the next couple of months we got to know each other and we decided I’d be a fit for NWC (and NWC would be a fit for me!). We’re now processing legal paperwork to get me over to start working – and while we wait, I’m going to be working remotely for NWC from Cape Town (I moved from East London to Cape Town earlier this month).

Conclusion

This is going to be an exciting transition for me and my family – but I must mention that I learned a lot during my time with Imaginet. My colleagues were an amazing bunch to work with – a lot of the “old guard” (the original Notion Solutions crowd that I met when I joined) have moved on – most of them to Microsoft – but I still have frequent contact with most of them. Big ups to Steve St. Jean, Ed Blankenship, Donovan Brown, Abel Wang, Dave McKinstry and others! I’ll never forget how confident I was on my first ALM gig – because I knew that if I got stuck I could always reach out to some of the most knowledgeable ALM Consultants on the planet simply by mailing the internal distribution list we had called “NotionTech”. Later it changed to ALMTech, but it was still the same level of awesomeness!

I’ll still be involved in ALM (though I won’t be working much in SA anymore) and I’ll continue to blog here, so you’ll still see plenty of content from me.

Blessings!

Using WebDeploy in vNext Releases


A few months ago Release Management (RM) Update 3 preview was released. One of the big features in that release was the ability to deploy without agents using PowerShell DSC. Once I saw this feature, I started a journey to see how far I could take deployments using this amazing technology. I had to learn how DSC worked, and from there I had to figure out how to use DSC with RM! The ride was a bit rocky at first, but I feel comfortable with what I am able to do using RM with PowerShell DSC.

Readying Environments for Deployment

In my mind there were two distinct steps that I wanted to be able to manage using RM/DSC:

  • Configure an environment (set of machines) to make them ready to run my application
  • Deploy my application to these servers

The RM/DSC posts I’ve blogged so far deal with readying the environment:

So we’re now at a point where we can ensure that the machines that we want to deploy our application to are ready for our application – in the case of a SQL server, SQL is installed and configured correctly. In the case of a webserver, IIS is installed and configured, additional runtimes are present (like MVC) and Webdeploy is installed and ports are opened so that I can deploy using Webdeploy. So how then do I deploy my application?

Good Packages

Good deployment always begins with good packages. To get a good package, you’ll need an automated build that ties into source control (and hopefully work items) and performs automated unit testing with coverage. This gives you some metrics as to the quality of your builds. The next critical piece that you’ll need is to make sure that you can manage multiple configurations – after all, you’ll be wanting to deploy the same package to Production that you deployed and tested in UAT, so the package shouldn’t have configuration hard-coded in. In my agent-based Webdeploy/RM post, I show how you can create a team build that puts placeholders into the SetParameters.xml file, so that you can put in environment-specific values when you deploy. The package I created for that deployment process can be used for deployment via DSC as well – just showing that if you create a good package during build, you have more release options available to you.

Besides the package, you’ll want to source control your DSC scripts. This way you can track changes that you make to your scripts over time. Also, having the scripts “travel” with your binaries means you only have to look in one location to find both deployment packages (or binaries) and the scripts you need to deploy them. Here’s how I organized my website and scripts in TF Version Control:

image

The actual solution (with my websites, libraries and database schema project) is in the FabrikamFiber.CallCenter folder. I have some 3rd party libraries that are checked into the lib folder. The build folder has some utilities for running the build (like the xunit test adapter). And you can also see the DscScripts folder where I keep the scripts for deploying this application.

By default on a team build, only compiled output is placed into the drop folder – you don’t typically get any source code. I haven’t included the scripts in my solution or projects, so I used a post-build script to copy the scripts from the source folder to the bin folder during the build – the build then copies everything in the bin folder to the drop folder. You could use this technique if you wanted to share scripts with multiple solutions – in that case you’d have the scripts in a higher level folder in SC. Here’s the script:

Param(
  [string]$srcPath = $env:TF_BUILD_SOURCESDIRECTORY,
  [string]$binPath = $env:TF_BUILD_BINARIESDIRECTORY,
  [string]$pathToCopy
)

try
{
    $sourcePath = "$srcPath\$pathToCopy"
    $targetPath = "$binPath\$pathToCopy"

    if (-not(Test-Path($targetPath))) {
        mkdir $targetPath
    }

    xcopy /y /e $sourcePath $targetPath

    Write-Host "Done!"
}
catch {
    Write-Host $_
    exit 1
}
  • Lines 2-3: you can use the $env parameters that get set when team build executes a custom script. Here I am using the sources and binaries directory settings.
  • Line 4: the subfolder to copy from the $srcPath to the $binPath.
  • Line 12-14: ensure that the target path exists.
  • Line 16: xcopy the files to the target folder.

Calling the script with $pathToCopy set to DscScripts will result in my DSC scripts being copied to the drop folder along with my build binaries. Using the TFVC 2013 default template, here’s what my advanced build parameters look like:

image 

  • The MSBuild arguments build a Webdeploy package for me (see the sketch after this list for an example). The profile (specified when you right-click the project and select “Publish”) also inserts RM placeholders into environment-specific settings (like connection strings, for example). I don’t hard-code the values since this same package can be deployed to multiple environments. Later we’ll see how the actual values replace the tokens at deploy time.
  • The post-build script is the script above, and I pass “-pathToCopy DscScripts” to the script in order to copy the scripts to the bin (and ultimately the drop) folder.
  • I also use a pre-build script to version my assemblies so that I can match the binary file versions with the build.
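
To give an idea of the MSBuild arguments I mean, here's a sketch – the "Release" profile name is illustrative, so substitute whatever publish profile you created for the package:

/p:DeployOnBuild=true /p:PublishProfile=Release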

Here’s what my build output folders look like:

image

There are 3 “bits” that I really care about here:

  • The DscScripts folder has all the scripts I need to deploy this application.
  • The FabrikamFiber.Schema.dacpac is the binary of my database schema project.
  • The _PublishedWebsites folder contains 2 folders: the “xcopyable” site (which I ignore) and the FabrikamFiber.Web_package folder which is shown on the right in the figure above, containing the cmd file to execute WebDeploy, the SetParameters.xml file for configuration and the zip file containing the compiled site.

Here’s what my SetParameters file looks like:

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="IIS Web Application Name" value="__SiteName__" />
  <setParameter name="FabrikamFiber-Express-Web.config Connection String" value="__FabFiberExpressConStr__" />
</parameters>

Note the “__” (double underscore) pre- and post-fix, making SiteName and FabFiberExpressConStr parameters that I can use in both agent-based and agent-less deployments.

Now that all the binaries and scripts are together, we can look at how to do the deployment.

Deploying a DacPac

To deploy the database component of my application, I want to use the DacPac (the compiled output of my SSDT project). The DacPac is a “compiled model” of how I want the database to look. To deploy a DacPac, you invoke sqlpackage.exe (installed with SQL Server Tools when you install and configure SQL Server). SqlPackage then reverse engineers the target database (the database you’re deploying the model to) into another model, does a compare and produces a diff script. You can also make SqlPackage run the script (which will make the target database look exactly like the DacPac model you compiled your project into).
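
To make that concrete, here's roughly what the call looks like outside of DSC – the paths and connection string are from my lab environment, so adjust them for yours:

& "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /sf:"C:\temp\dbFiles\FabrikamFiber.Schema.dacpac" /tcs:"server=localhost; initial catalog=FabrikamFiber-Express"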

To do this inside a DSC script, I implement a “Script” resource. The Script resource has 3 parts: a Get-Script, a Set-Script and a Test-Script. The Get-Script is executed when you run DSC in interrogative mode – it won’t change the state of the target node at all. The Test-Script is used to determine if any action must be taken – if it returns $true, then no action is taken (the target is already in the desired state). If the Test-Script returns $false, then the target node is not in the desired state and the Set-Script is invoked. The Set-Script is executed in order to bring the target node into the desired state.

A Note on Script Resource Parameters

A caveat here though: the Script resource can be a bit confusing in terms of parameters. The DSC script actually has 2 “phases” – first, the PowerShell script is “compiled” into a mof file. This file is then pushed to the target server and executed during the “deploy” phase. The parameters that you use in the configuration script are available on the RM server at “compile” time, while parameters in the Script resources are only available on the target node during “deploy” time. That means that you can’t pass a parameter from the config file “into” the Script resource – all parameters in the Script resource need to be hard-coded or calculated on the target node at execution time.

For example, let’s look at this example script:

Configuration Test
{
    param (
        [string]$logLocation
    )

    Node myNode
    {
        Log LogLocation
        {
            Message = "The log location is [$logLocation]"
        }

        Script DoSomething
        {
            GetScript = { @{ "DoSomething" = "Yes" } }
            TestScript = { $false }
            SetScript =
            {
                Write-Host "Log location is [$logLocation]"
                $localParam = "Hello there"
                Write-Host "LocalParam is [$localParam]"
            }
        }
    }
}

Here the intent is to have a parameter called $logLocation that we pass into the config script. When you see this script, it seems to make perfect sense – however, while the log will show the message “The log location is [c:\temp]”, for example (line 11), when the Set-Script of the Script resource runs on the target node, you’ll see the message “Log location is []” (Line 20). Why? Because the $logLocation parameter does not exist when this script is run at deploy time on the target node. The parameter is available to the Log resource (or other resources like File) but won’t be to the Script resource. You will be able to create other parameters “at deploy time” (like $localParam on Line 21). This is frustrating, but kind of understandable. The Script resource script blocks are not evaluated for parameters. I found a string manipulation hack that allows you to fudge config parameters into the script blocks, but decided against using it.

ConfigData

Before we look at the DSC script used to deploy the database, I need to show you my configData script:

#@{
$configData = @{
    AllNodes = @(
        @{
            NodeName = "*"
            PSDscAllowPlainTextPassword = $true
         },

        @{
            NodeName = "fabfiberserver"
            Role = "WebServer"
         },

        @{
            NodeName = "fabfiberdb"
            Role = "SqlServer"
         }
    );
}

# Note: different 1st line for RM or command line
# use $configData = @{ for RM
# use @{ for running from command line
  • Line 1: When running from the command line, you just specify a hash-table. RM requires this hash-table to be put into a variable. I have both in the script (though I default to the format RM requires) just so that I can test the script outside of RM.
  • Line 3: AllNodes is a hash-table of all the nodes I want to affect with my configuration scripts.
  • Lines 5/6 – common properties for all nodes (the name is “*” so DSC applies these properties to all nodes).
  • Line 10/11 and 15/16: I specify the nodes I have as well as a Role property. This is so that I can deploy the same configuration to multiple servers that have the same role (like a web farm for example).
  • You can specify other parameters, each with another value for each server.

Here’s the DSC script I use to deploy a DacPac to a target server:

Configuration FabFibWeb_Db
{
    param (
        [Parameter(Mandatory=$true)]
        [ValidateNotNullOrEmpty()]
        [String]
        $PackagePath
    )

    Node fabfiberdb #$AllNodes.where{ $_.NodeName -ne "*" -and $_.Role.Contains("SqlServer") }.NodeName
    {
        Log DeployAppLog
        {
            Message = "Starting SqlServer node configuration. PackagePath = $PackagePath"
        }

        #
        # Update the application database
        #
        File CopyDBSchema
        {
            Ensure = "Present"
            SourcePath = "$PackagePath\FabrikamFiber.Schema.dacpac"
            DestinationPath = "c:\temp\dbFiles\FabrikamFiber.Schema.dacpac"
            Type = "File"
        }

        Script DeployDacPac
        {
            GetScript = { @{ Name = "DeployDacPac" } }
            TestScript = { $false }
            SetScript =
            {
                $cmd = "& 'C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe' /a:Publish /sf:c:\temp\dbFiles\FabrikamFiber.Schema.dacpac /tcs:'server=localhost; initial catalog=FabrikamFiber-Express'"
                Invoke-Expression $cmd | Write-Verbose
            }
            DependsOn = "[File]CopyDBSchema"
        }

        Script CreateLabUser
        {
            GetScript = { @{ Name = "CreateLabUser" } }
            TestScript = { $false }
            SetScript = 
            {
                $sql = @"
                    USE [master]
                    GO

                    IF (NOT EXISTS(SELECT name from master..syslogins WHERE name = 'Lab'))
                    BEGIN
                        CREATE LOGIN [lab] WITH PASSWORD=N'P2ssw0rd', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
    
                        BEGIN
                            USE [FabrikamFiberExpress]
                        END

                        CREATE USER [lab] FOR LOGIN [lab]
                        ALTER ROLE [db_owner] ADD MEMBER [lab]
                    END
"@
                
                $cmdPath = "c:\temp\dbFiles\createLogin.sql"
                sc -Path $cmdPath -Value ($sql -replace '\n', "`r`n")
                
                & "C:\Program Files\Microsoft SQL Server\110\Tools\Binn\sqlcmd.exe" -S localhost -U sa -P P2ssw0rd -i $cmdPath
            }
        }
    }
}

# command for RM
FabFibWeb_Db -ConfigurationData $configData -PackagePath $applicationPath

# test from command line
#FabFibWeb_Db -ConfigurationData configData.psd1 -PackagePath "\\rmserver\builddrops\__ReleaseSite\__ReleaseSite_1.0.0.3"
#Start-DscConfiguration -Path .\FabFibWeb_Db -Verbose -Wait

Let’s take a look at what is going on:

  • Line 7: I need a parameter to tell me where the DacPac is – this will be my build drops folder.
  • Line 10: I specify the node I want to bring into the desired state. I wanted to apply this config to all nodes that have the role “SqlServer” and this worked from the command line – for some reason I couldn’t get it to work with RM, so I hardcode the node-name here. I think this is particular to my environment, since this should work.
  • Lines 12-15: Log a message.
  • Lines 20-26: Use the File resource to copy the DacPac from a subfolder in the $PackagePath to a known folder on the local machine. I did this because I couldn’t pass the drop-folder path in to the Script resource – so I copied using the File Resource to a known location and can just “hard code” that location in my Script resources.
  • Line 28: This is the start of the script Resource for invoking sqlpackage.exe.
  • Line 30: Just return the name of the resource.
  • Line 31: Always return false – meaning that the Set-Script will always be run. You could have some check here if you didn’t want the script to execute for some specific condition.
  • Lines 32-36: This is the script that actually does the work – I create the command and then Invoke it, piping output to the verbose log for logging. I use “/a:Publish” to tell SqlPackage to execute the incremental changes on the database, using the DacPac as the source file (/sf) and targeting the database specified in the target connection string (/tcs).
  • Line 37: Invoking the DacPac is dependent on the DacPac being present, so I express the dependency.
  • The final resource in this script is also a Script resource – the Get- and Test-Scripts are self-explanatory. The Set-Script takes the SQL string I have in the script, writes it to a file (using sc – Set-Content) and then executes the file using sqlcmd.exe. This is specific to my environment, but shows that you can execute arbitrary SQL against a server fairly easily using the Script resource.
  • Line 73: When using DSC with RM, you need to compile the configuration (do this by invoking the Configuration) into mof files. Don’t call Start-DscConfiguration (which pushes the mof files to the target nodes for running the configuration) since RM will do this step. You can see how I use $applicationPath – this is the path that you specify when you create the vNext component (relative to a drop folder) – we’ll see later how to set this up. RM sets this parameter before it calls the script. Also, you need to specify the parameter that contains the configuration hash-table. In my case this is $configData, which you’ll see at the top of the configData script above. RM “executes” this script so the parameter is in memory by the time the DSC script is executed.

When working with DSC, you have to think about idempotency. In other words, the script must produce the same result every time you run it – no matter what the starting state is. Since deploying a DacPac to a database is already idempotent, I don’t have too much to worry about in this case, so that’s why the Test-Script for the DeployDacPac Script resource always returns false.
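
If you do have a step that isn't naturally idempotent, you can make the Test-Script do real work instead of always returning $false. Here's a sketch (reusing the sqlcmd path and lab credentials from the script above – treat the query and the parsing as illustrative) that would skip a one-time setup step when the target database already exists:

TestScript = {
    # Sketch only: return $true (skip the Set-Script) if the database already exists
    $query = "SET NOCOUNT ON; SELECT COUNT(*) FROM sys.databases WHERE name = 'FabrikamFiber-Express'"
    $count = & "C:\Program Files\Microsoft SQL Server\110\Tools\Binn\sqlcmd.exe" -S localhost -U sa -P P2ssw0rd -h -1 -Q $query
    return ([int]("$count".Trim()) -gt 0)
}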

Deploying a Website using WebDeploy

You could be publishing your website out of Visual Studio. But don’t – seriously, don’t EVER do this. So you’re smart: you’ve got an automated build to compile your website. Well done! Now you could be deploying this site using xcopy. Don’t – primarily because managing configuration is hard to do using this method, and you usually end up deploying all sorts of files that you don’t actually require (like web.debug.config etc.). You should be using WebDeploy!

I’ve got a post about how to use WebDeploy with agent-based templates. What follows is how to deploy sites using WebDeploy in vNext templates (using PowerShell DSC). In a previous post I show how you can use DSC to ready a webserver for your application. Now we can look at what we need to do to actually deploy a site using WebDeploy. Here’s the script I use:

Configuration FabFibWeb_Site
{
    param (
        [Parameter(Mandatory=$true)]
        [ValidateNotNullOrEmpty()]
        [String]
        $PackagePath
    )

    Node fabfiberserver #$AllNodes.where{ $_.NodeName -ne "*" -and $_.Role.Contains("WebServer") }.NodeName
    {
        Log WebServerLog
        {
            Message = "Starting WebServer node configuration. PackagePath = $PackagePath"
        }

        #
        # Deploy a website using WebDeploy
        #
        File CopyWebDeployFiles
        {
            Ensure = "Present"         
            SourcePath = "$PackagePath\_PublishedWebsites\FabrikamFiber.Web_Package"
            DestinationPath = "c:\temp\Site"
            Recurse = $true
            Force = $true
            Type = "Directory"
        }

        Script SetConStringDeployParam
        {
            GetScript = { @{ Name = "SetDeployParams" } }
            TestScript = { $false }
            SetScript = {
                $paramFilePath = "c:\temp\Site\FabrikamFiber.Web.SetParameters.xml"

                $paramsToReplace = @{
                    "__FabFiberExpressConStr__" = "data source=fabfiberdb;database=FabrikamFiber-Express;User Id=lab;Password=P2ssw0rd"
                    "__SiteName__" = "Default Web Site\FabrikamFiber"
                }

                $content = gc $paramFilePath
                $paramsToReplace.GetEnumerator() | % {
                    $content = $content.Replace($_.Key, $_.Value)
                }
                sc -Path $paramFilePath -Value $content
            }
            DependsOn = "[File]CopyWebDeployFiles"
        }
        
        Script DeploySite
        {
            GetScript = { @{ Name = "DeploySite" } }
            TestScript = { $false }
            SetScript = {
                & "c:\temp\Site\FabrikamFiber.Web.deploy.cmd" /Y
            }
            DependsOn = "[Script]SetConStringDeployParam"
        }

        #
        # Ensure App Insights cloud monitoring for the site is enabled
        #
        Script AppInsightsCloudMonitoring
        {
            DependsOn = "[Script]DeploySite"
            GetScript = 
            {
                @{
                    WebApplication = 'Default Web Site/FabrikamFiber';
                }
            }
            TestScript =
            {
                $false
            }
            SetScript =
            {
                # import module - requires change to PSModulePath for this session
                $mod = Get-Module -Name Microsoft.MonitoringAgent.PowerShell
                if ($mod -eq $null)
                {
                    $env:PSModulePath = $env:PSModulePath + ";C:\Program Files\Microsoft Monitoring Agent\Agent\PowerShell\"
                    Import-Module Microsoft.MonitoringAgent.PowerShell -DisableNameChecking
                }
        
                Write-Verbose "Starting cloud monitoring on FabFiber site"
                Start-WebApplicationMonitoring -Cloud -Name 'Default Web Site/FabrikamFiber'
            }
        }
    }
}

# command for RM
FabFibWeb_Site -ConfigurationData $configData -PackagePath $applicationPath

# test from command line
#FabFibWeb_Site -ConfigurationData configData.psd1 -PackagePath "\\rmserver\builddrops\__ReleaseSite\__ReleaseSite_1.0.0.3"
#Start-DscConfiguration -Path .\FabFibWeb_Site -Verbose -Wait

You’ll see some similarities to the database DSC script – getting nodes by role (“WebServer” this time instead of “SqlServer”), Log resources to log messages and the “compilation” command which passes in the $configData and $applicationPath.

  • Lines 20-28: I copy the entire FabrikamFiber.Web_package folder (containing the cmd, SetParameters and zip file) to a temp folder on the node.
  • Line 30: I use a Script Resource to do config replacement.
  • Lines 32-33: Always execute the Set-Script, and return the name of the resource when interrogating the target system.
  • Lines 34-47: The “guts” of this script – replacing the tokens in the SetParameters file with real values and then invoking WebDeploy.
  • Line 35: Set a parameter to the known local location of the SetParameters file.
  • Lines 37-40: Create a hash-table of key/value pairs that will be replaced in the SetParameters file. I have 2: the site name and the database connection string. You can see the familiar __ pre- and post-fix for the placeholder names – I can use this same package in agent-based deployments if I want to.
  • Line 42: read in the contents of the SetParameters file.
  • Lines 43-45: Replace the token placeholders with the actual values from the hash-table.
  • Line 46: overwrite the SetParameters file – it now has actual values instead of just placeholder values.
  • Lines 51-59: I use another Script resource to execute the cmd file (invoking WebDeploy).
  • Lines 64-90: This is optional – I include it here as a reference for how to ensure that the site is being monitored using Application Insights once it’s deployed.

The Release

In order to run vNext (a.k.a. agent-less a.k.a DSC) deployments, you need to import your target nodes. Since vNext servers are agent-less, you don’t need to install anything on the target node. You just need to make sure you can run remote PowerShell commands against the node and have the username/password for doing so. When adding a new server, just type in the name of the machine and specify the remote port, which is 5985 by default. This adds the server into RM as a “Standard” server. These servers always show their status as “Ready”, but this can be misleading since there is no agent. You can then compose your servers into “Standard Environments”. Next you’ll want to create a vNext Release Path (which specifies the environments you’re deploying to as well as who is responsible for approvals).
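
Before adding a server it's worth doing a quick sanity check that remoting actually works from the RM server to the target node. Something like this would do – the machine name is the lab server used throughout this post, and the credential prompt stands in for whatever account RM will use:

# On the target node (once, from an elevated prompt) - make sure WinRM/remoting is enabled
Enable-PSRemoting -Force

# From the RM server - check that WinRM is listening on the default port (5985)
Test-WSMan -ComputerName fabfiberserver -Port 5985

# Confirm the deployment credentials can open (and close) a remote session
$cred = Get-Credential
New-PSSession -ComputerName fabfiberserver -Credential $cred | Remove-PSSession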

image

image

You can specify other configuration variables and defaults in RM Update 4 RC.

vNext Components

In order to use the binaries and scripts we’ve created, we need to specify a vNext component in RM. Here’s how I specify the component:

image

All this is really doing is setting the value of the $packagePath (which I set to the root of the drop folder here). Also note how I only need a single component even though I have several scripts to invoke (as we’ll see next).

The vNext Template

I create a new vNext template. I select a vNext release path. I right-click the “Components” node in the toolbox and add in the vNext component I just created. Since I am deploying to (at least) 2 machines, I drag a “Parallel” activity onto the design surface. On the left of the parallel, I want scripts for my SQL servers. On the right, I want scripts for my webservers. Since I’ve already installed SQL on my SQL server, I am not going to use that script – I’ll just deploy my database model. On the webserver, I want to run the prerequisites script (to make sure IIS, Webdeploy, the MVC runtime and the MMA agent are all installed and correctly configured). Then I want to deploy my website using Webdeploy. So I drag on 3 “Deploy using PS/DSC” activities. I select the appropriate server and component from the “Server” and “Component” drop-downs respectively. I set the username/password for the identity that RM will use to remote onto the target nodes. Then I set the path to the scripts (relative to the root of the drop folder, which is the “Path to Package” in the component I specified, and which becomes $applicationPath inside the DSC script). I also set the PsConfigurationPath to my configData.psd1 script. Finally I set UseCredSSP and UseHTTPS both to false and SkipCaCheck to true (you can vary these according to your environment).

image

Now I can trigger the release (either through the build or manually). Here’s what a successful run looks like and a snippet of one of the logs:

image

To Agent or Not To Agent?

Looking at the features and improvements to Release Management Update 3 and Update 4, it seems that the TFS product team are not really investing in agent-based deployments and templates any more. If you’re using agent-based deployments, it’s a good idea to start investing in DSC (or at the very least just plain ol’ PowerShell) so that you can use agent-less (vNext) deployments. As soon as I saw DSC capabilities in Update 3, I guessed this was the direction the product team would pursue, and Update 4 seems to confirm that guess. While there is a bit of a learning curve, this technology is very powerful and will ultimately lead to better deployments – which means better quality for your business and customers.

Happy deploying!

Real Config Handling for DSC in RM


In my previous post I showed you how to use PowerShell DSC and Release Management to configure machines and deploy an application. There was one part of the solution that I wasn’t satisfied with, and in the comments section you’ll see that @BigFan picks it up: the configuration is hard-coded.

cScriptWithParams Resource

The primary reason I’ve had to hard-code the configuration is that I use the Script resource heavily. Unfortunately the Script resource cannot utilize configuration (or parameters)! I do explain this in my previous post (see the section headed “A Note on Script Resource Parameters”). For a while I tried to write my own custom resource, but eventually abandoned that project. However, after completing my previous post, I decided to have another stab at the problem. And voila! I created a custom Script resource that (elegantly, I think) can be parameterized. You can get it from GitHub.

Let’s first look at how to utilize the new resource – I’ll discuss how I created the resource after that.

Parameterized Scripts

In my previous solution, the Script resource that executed the Webdeploy command (which deploys my web application) looks like this:

Script SetConStringDeployParam
{
    GetScript = { @{ Name = "SetDeployParams" } }
    TestScript = { $false }
    SetScript = {
        $paramFilePath = "c:\temp\Site\FabrikamFiber.Web.SetParameters.xml"

        $paramsToReplace = @{
            "__FabFiberExpressConStr__" = "data source=fabfiberdb;database=FabrikamFiber-Express;User Id=lab;Password=P2ssw0rd"
            "__SiteName__" = "Default Web Site\FabrikamFiber"
        }

        $content = gc $paramFilePath
        $paramsToReplace.GetEnumerator() | % {
            $content = $content.Replace($_.Key, $_.Value)
        }
        sc -Path $paramFilePath -Value $content
    }
    DependsOn = "[File]CopyWebDeployFiles"
}

You can see how lines 9 and 10 are hard-coded. Ideally these values should be read from a configuration somewhere.

Here’s what the script looks like when you use the cScriptWithParams resource:

cScriptWithParams SetConStringDeployParam
{
    GetScript = { @{ Name = "SetDeployParams" } }
    TestScript = { $false }
    SetScript = {
        $paramFilePath = "c:\temp\Site\FabrikamFiber.Web.SetParameters.xml"

        $paramsToReplace = @{
            "__FabFiberExpressConStr__" = $conStr
            "__SiteName__" = $siteName
        }

        $content = gc $paramFilePath
        $paramsToReplace.GetEnumerator() | % {
            $content = $content.Replace($_.Key, $_.Value)
        }
        sc -Path $paramFilePath -Value $content
    }
    cParams =
    @{
        conStr = $conStr;
        siteName = $siteName;
    }
    DependsOn = "[File]CopyWebDeployFiles"
}

Some notes:

  • Line 1: The name of the resource is “cScriptWithParams” – the custom Script resource I created. In order to use this custom resource, you need the line “Import-DscResource -Name cScriptWithParams” at the top of your Configuration script (above the first Node element).
  • Lines 9/10: The values for the connection string and site name are now variables instead of hard-coded
  • Lines 19-23: This is the property that allows you to “pass in” values for the variables. It’s a hash-table of string key-value pairs, where the key is the name of the variable used in any of the Get, Set or Test scripts and the value is the value you want to set the variable to. We could get the values from anywhere – a DSC config file (where we would have $Node.ConStr for example) – in this case it’s from 2 global variables called $conStr and $siteName (we’ll see later where these get specified).

Removing Config Files Altogether

Now that we can (neatly) parameterize the custom scripts we want to run, we can use the new config variable options in RM to completely remove the need for a config file. Of course you could still use the config file if you wanted to. Here’s the final script for deploying my web application:

Configuration FabFibWeb_Site
{
    Import-DscResource -Name cScriptWithParams

    Node $ServerName
    {
        Log WebServerLog
        {
            Message = "Starting Site Deployment. AppPath = $applicationPath."
        }

        #
        # Deploy a website using WebDeploy
        #
        File CopyWebDeployFiles
        {
            Ensure = "Present"         
            SourcePath = "$applicationPath\_PublishedWebsites\FabrikamFiber.Web_Package"
            DestinationPath = "c:\temp\Site"
            Recurse = $true
            Force = $true
            Type = "Directory"
        }

        cScriptWithParams SetConStringDeployParam
        {
            GetScript = { @{ Name = "SetDeployParams" } }
            TestScript = { $false }
            SetScript = {
                $paramFilePath = "c:\temp\Site\FabrikamFiber.Web.SetParameters.xml"

                $paramsToReplace = @{
                    "__FabFiberExpressConStr__" = $conStr
                    "__SiteName__" = $siteName
                }

                $content = gc $paramFilePath
                $paramsToReplace.GetEnumerator() | % {
                    $content = $content.Replace($_.Key, $_.Value)
                }
                sc -Path $paramFilePath -Value $content
            }
            cParams =
            @{
                conStr = $ConStr;
                siteName = $SiteName;
            }
            DependsOn = "[File]CopyWebDeployFiles"
        }
        
        Script DeploySite
        {
            GetScript = { @{ Name = "DeploySite" } }
            TestScript = { $false }
            SetScript = {
                & "c:\temp\Site\FabrikamFiber.Web.deploy.cmd" /Y
            }
            DependsOn = "[cScriptWithParams]SetConStringDeployParam"
        }
    }
}

# command for RM
FabFibWeb_Site

<# 
#test from command line
$ServerName = "fabfiberserver"
$applicationPath = "\\rmserver\builddrops\__ReleaseSite\__ReleaseSite_1.0.0.3"
$conStr = "testing"
$siteName = "site Test"
FabFibWeb_Site
Start-DscConfiguration -Path .\FabFibWeb_Site -Verbose -Wait
#>

Notes:

  • Line 3: Importing the custom resource (presumes the custom resource is “installed” locally – see next section for how to do this)
  • Line 5: I leverage the $ServerName variable that RM sets – I don’t have to hard-code the node name
  • Lines 15-23: Copy the Webdeploy files from the build drop location to a local folder (again I use an RM parameter, $applicationPath, which is the drop folder)
  • Lines 25-49: Almost the same Script resource we had before, but subtly changed to handle variables by changing it to a cScriptWithParams resource.
  • Lines 33/34: The hard-coded values have been replaced with variables.
  • Lines 43-47: We need to supply a hash-table of key/value pairs for our parameterized scripts. In this case, we need to supply conStr and siteName. For the values, we pass in $conStr and $siteName, which RM will feed in for us (we’ll specify these on the Release Template itself)
  • Line 64: “Compile” the configuration (into a .mof file) for RM to push to the target server
  • Lines 66-74: If you test this script from the command line, you just create the variables required and execute it. This is exactly what RM does under the hood when executing this script.

Using the Script in a Release Template

Now that we have the script, let’s see how we consume it. (Of course it’s checked into source control, along with the Custom Resource, and part of a build so that it ends up in the build drop folder with our application. Of course – goes without saying!)

We define the vNext Component the same way we did last time:

image

Nothing magical here – this really just defines the root folder of the build drop for use in the deployment.

Next we create the vNext template using our desired vNext release path. On the designer, you’ll see the major difference: we’re defining the variables on the surface itself:

image

Our script uses $ServerName (which you can see is set to fabfiberserver). It also uses ConStr and SiteName (these are the parameter values we specified in lines 44/45 of the above script – $ConStr and $SiteName). Of course if we deploy to another server (say in our production environment) we would simply specify other values for that server.

Deploying a Custom Resource

The final trick is how you deploy the custom resource. To import it using Import-DSCResource, you need to have it in ProgramFiles\WindowsPowerShell\Modules. If you’re testing the script from your workstation, you’ll need to copy it to this path on your workstation. You’ll also need to copy it to that folder on the target server. Sounds like a job for a DSC script with a File resource! Unfortunately it can’t be part of the web application script we created above since it needs to be on the server before you run the Import-DscResource command. No problem – we’ll run 2 scripts on the template. Here’s the script to deploy the custom resource:

Configuration CopyCustomResource
{
    Node $ServerName
    {
        File CopyCustomResource
        {
            Ensure = "Present"
            SourcePath = "$applicationPath\$modSubFolder\$modName"
            DestinationPath = "$env:ProgramFiles\WindowsPowershell\Modules\$modName"
            Recurse = $true
            Force = $true
            Type = "Directory"
        }
    }
}

<#
# test from command line
$ServerName = "fabfiberserver"
$applicationPath = "\\rmserver\builddrops\__ReleaseSite\__ReleaseSite_1.0.0.3"
$modSubfolder = "CustomResources"
$modName = "DSC_ColinsALMCorner.com"
#>

# copy the resource locally
#cp "$applicationPath\$modSubFolder\$modName" $env:ProgramFiles\WindowsPowerShell\Modules -Force -Recurse

# command for RM
CopyCustomResource

<#
# test from command line
CopyCustomResource
Start-DscConfiguration -Path .\CopyCustomResource -Verbose -Wait
#>

This is very straight-forward:

  • Line 3: Again we’re using RM’s variable so that don’t have to hard-code the node name
  • Lines 5-13: Copy the resource files to the PowerShell modules folder
  • Line 26: Use this to copy the resource locally to your workstation for testing it
  • Line 29: This “compiles” this config file before RM deploys it (and executes it) on the target server
  • Lines 17-23 and 31-35: Uncomment these to run this from the command line for testing

Here’s how to use the script in a release template:

image

By now you should be able to see how this designer is feeding values to the script!

cScriptWithParams: A Look Inside

To make the cScriptWithParams custom resource, I copied the out-of-the-box script and added the cParams hash-table parameter to the Get/Set/Test TargetResource functions. I had some issues with type conversions, so I eventually changed the HashTable to an array of Microsoft.Management.Infrastructure.CimInstance. I then make sure this gets passed to the common function that actually invokes the script (ScriptExecutionHelper). Here’s a snippet from the Get-TargetResource function:

function Get-TargetResource 
{
    [CmdletBinding()]
     param 
     (         
       [parameter(Mandatory = $true)]
       [ValidateNotNullOrEmpty()]
       [string]
       $GetScript,
  
       [parameter(Mandatory = $true)]
       [ValidateNotNullOrEmpty()]
       [string]$SetScript,

       [parameter(Mandatory = $true)]
       [ValidateNotNullOrEmpty()]
       [string]
       $TestScript,

       [Parameter(Mandatory=$false)]
       [System.Management.Automation.PSCredential] 
       $Credential,

       [Parameter(Mandatory=$false)]
       [Microsoft.Management.Infrastructure.CimInstance[]]
       $cParams
     )

    $getTargetResourceResult = $null;

    Write-Debug -Message "Begin executing Get Script."
 
    $script = [ScriptBlock]::Create($GetScript);
    $parameters = $psboundparameters.Remove("GetScript");
    $psboundparameters.Add("ScriptBlock", $script);
    $psboundparameters.Add("customParams", $cParams);

    $parameters = $psboundparameters.Remove("SetScript");
    $parameters = $psboundparameters.Remove("TestScript");

    $scriptResult = ScriptExecutionHelper @psboundparameters;

Notes:

  • Lines 24-26: the extra parameter I added
  • Line 36: I add the $cParams to the $psboundparameters that will be passed to the ScriptExecutionHelper function
  • Line 41: this is the original call to the ScriptExecutionHelper function

Finally, I customized the ScriptExecutionHelper function to utilize the parameters:

function ScriptExecutionHelper 
{
    param 
    (
        [ScriptBlock] 
        $ScriptBlock,
    
        [System.Management.Automation.PSCredential] 
        $Credential,

        [Microsoft.Management.Infrastructure.CimInstance[]]
        $customParams
    )

    $scriptExecutionResult = $null;

    try
    {
        $executingScriptMessage = "Executing script: {0}" -f ${ScriptBlock} ;
        Write-Debug -Message $executingScriptMessage;

        $executingScriptArgsMessage = "Script params: {0}" -f $customParams ;
        Write-Debug -Message $executingScriptArgsMessage;

        # bring the cParams into memory
        foreach($cVar in $customParams.GetEnumerator())
        {
            Write-Debug -Message "Creating value $($cVar.Key) with value $($cVar.Value)"
            New-Variable -Name $cVar.Key -Value $cVar.Value
        }

        if($null -ne $Credential)
        {
           $scriptExecutionResult = Invoke-Command -ScriptBlock $ScriptBlock -ComputerName . -Credential $Credential
        }
        else
        {
           $scriptExecutionResult = &$ScriptBlock;
        }
        Write-Debug -Message "Completed script execution"
        $scriptExecutionResult;
    }
    catch
    {
        # Surfacing the error thrown by the execution of Get/Set/Test script.
        $_;
    }
}

Notes:

  • Lines 11/12: The new “hashtable” of variables
  • Lines 25-30: I use New-Variable to create a variable for each key/value pair in $customParams
  • The remainder of the script is unmodified

The only limitation I hit was that the values must be strings – I am sure this has to do with the way the values are serialized when a DSC configuration script is “compiled” into a .mof file.
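
To make that concrete, here’s a minimal sketch (not from the original release) of how the resource might be declared in a configuration – the resource and property names come from this post, while the node, path and site values are made up. Note that every value in cParams is a string:

Configuration DeployApp
{
    Node webserver
    {
        cScriptWithParams DeployWebsite
        {
            GetScript  = { @{ Result = "DeployWebsite" } }
            TestScript = { $false }
            SetScript  = { & "$packagePath\Deploy.cmd" /Y }
            # string values only - other types don't survive the mof serialization
            cParams    = @{ packagePath = "\\buildserver\drops\MyApp\latest"; siteName = "MyWebsite" }
        }
    }
}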

As usual, happy deploying!


Azure Outage – I was a victim too, dear Reader

This morning I went to check on my blog – the very blog you’re busy reading – and I was greeted with a dreaded YSOD (Yellow Screen of Death). What? That can’t be! I haven’t deployed anything in about 10 days, so I know it wasn’t my code! What gives?

It turns out that I had some garbled post files. My blog is built on MiniBlog, which stores all posts as xml files. One of the changes I made to my engine is to increment a view counter on each post so that I can track which posts are being hit. I suppose there is some risk in doing this, since there is a lot of writing to the files. About 6 files in my posts directory were either empty or partially written – I suspect that IIS was writing them when the outage happened a couple of days ago. It also turns out that MiniBlog isn’t that resilient when the xml files it’s reading are not well-formed! That just goes to show you that even though your code has been stable for months, stuff can still go wrong!

So I applied a fix (which moves the broken file out of the way and sends me an email with the exception information) and it looks like the site is up again. Although at the time of writing, the site is dog slow and I can’t seem to WebDeploy – I’ve had to use FTP to fix the posts and update my code, and I can’t see the files using the Azure SDK File Explorer (though I am able to see the logs, fortunately). I suspect there are still Azure infrastructure problems.

So other than having my counts go wonky, I may have lost a couple of comments. If one of your comments disappeared, Dear Reader, I humbly apologize.

Also, this is my first post from my Surface Pro 3! Woot!

Happy reading-as-long-as-Azure-stays-up!

Don’t Just Fix It, Red-Green Refactor It!

I’m back to doing some dev again – for a real-life, going-to-charge-for application! It’s great to be based at home again and to be working on some very cutting-edge dev.

I’m very comfortable with ASP.NET MVC, but this project is the first Nancy project I’ve worked on. We’re also using Git on VSO for source control and backlogs, MyGet to host internal NuGet packages, Octopus Deploy for deployment, Python (with various libs, of course!) for number crunching and Azure to host VMs and websites (which are monitored with AppInsights). All in all it’s starting to shape up into a very cool application – details to follow as we approach go-live (play mysterious music here)…

Ho Hum Dev

Ever get into a groove that’s almost too automatic? Ever been driving home, arrived, and thought, “Wait a minute – how did I get here?” You were thinking so intently about something else that you just drove “on automatic” without really paying attention to what you were doing.

Dev can sometimes get into this kind of groove. I was doing some coding a few days ago and almost missed a good quality improvement opportunity – fortunately, I was able to look up long enough to see a better way to do things, and hopefully save myself some pain down the line.

I was debugging some code, and something wasn’t working the way I expected. Here’s a code snippet showing two properties I was working with:

protected string _name;
public string Name
{
    get 
    {
        if (string.IsNullOrEmpty(_name))
        {
            SplitKey();
        }
        return _name;
    }
    set 
    {
        _name = value;
        CombineKey();
    }
}

protected ComponentType _type;
public string Type
{
    get
    {
        return _type.ToString();
    }
    set
    {
        _type = ParseTypeEnum(value);
        CombineKey();
    }
}

See how the getter for the Type property doesn’t match the code for the getter for Name? Even though I have unit tests for this getter, the tests are all passing!

The simple thing to do would have been to just add the missing call to SplitKey() and carry on – but I wanted to know why the tests weren’t failing. I knew there were issues with the code (I had hit them while debugging), so I decided to take a step back and try some good practices: namely red-green refactor.

Working with Quality in Mind

When you’re coding you should be working with quality in mind – that’s why I love unit testing so much. If you’re doing dev without unit testing, you’re only setting yourself up for long hours of painful in-production debugging. Not fun. Build with quality up front – while it may feel like it’s taking longer to deliver, you’ll save time in the long run since you’ll be adding new features instead of debugging poor quality code.

Here’s what you *should* be doing when you come across “hanky” code:

  1. Do some coding
  2. While running / debugging, find some bug
  3. BEFORE FIXING THE BUG, write a FAILING unit test that exposes the bug
  4. Refactor/fix till the test passes

So I opened up the tests for this entity and found the issue: I was only testing one scenario. This highlights that while code coverage is important, it can give you a false sense of security!

Here’s the original test:

[TestMethod]
public void SetsPropertiesCorrectlyFromKeys()
{
    var component = new Component()
    {
        Key = "Logger_Log1"
    };

    Assert.AreEqual("Logger", component.Type);
    Assert.AreEqual("Log1", component.Name);
}

ComponentType comes from an enumeration – and since Logger is the 1st value in the enum, it defaults to Logger if you don’t explicitly set the value. So while I had a test that was covering the entire method, it wasn’t testing all the combinations!

So I added a new test:

[TestMethod]
public void SetsPropertiesCorrectlyFromKeys2()
{
    var component = new Component()
    {
        Key = "Service_S0"
    };

    Assert.AreEqual("Service", component.Type);
    Assert.AreEqual("S0", component.Name);
}

When I ran the tests, the 2nd test failed. Excellent! Now I’ve got a further test that catches this piece of bad code.

To fix the bug, I had to add another enum value and of course, add in the missing SplitKey() call in the Type property getter:

public enum ComponentType
{
    Unknown,
    Logger,
    Service
}

...

protected ComponentType _type;
public string Type
{
    get
    {
        if (_type == ComponentType.Unknown)
        {
            SplitKey();
        }
        return _type.ToString();
    }
    set
    {
        _type = ParseTypeEnum(value);
        CombineKey();
    }
}

Now both tests are passing. Hooray!

Conclusion

I realize that red-green refactoring isn’t a new concept – but I wanted to show a real-life example of how you should be thinking about your dev and debugging. Even though the code itself had 100% code coverage, there were still bugs. Debugging with quality in mind means you can add tests that cover specific scenarios – which will reduce the amount of buggy code going into production.

Happy dev’ing!

Gulp – Workaround for Handling VS Solution Configuration

We’ve got some TypeScript models for our web frontend. If you’re doing any enterprise JavaScript development, then TypeScript is a must. It’s much more maintainable and even gives you some compile-time checking.

Even though TypeScript is a superset of JavaScript, you still need to “compile” the TypeScript into regular JavaScript for your files to be used in a web application. Visual Studio 2013 does this out of the box (you may need to install TypeScript if you’ve never done so). However, what if you want to concat the compiled scripts to reduce requests from the site? Or minify them?

WebEssentials

WebEssentials provides functionality like bundling and minification within Visual Studio. The MVC bundling features let you bundle your client side scripts “server side” – so the server takes a bundle file containing a list of files to bundle as well as some settings (like minification) and produces the script file “on-the-fly”. However, if you’re using some other framework, such as Nancy – you’re out of luck. Well, not entirely.

Gulp

What if you could have a pipeline (think: build engine) that could do some tasks such as concatenation and minification (and other tasks) on client-side scripts? Turns out there is a tool for that – it’s called Gulp (there are several other tools in this space too). I’m not going to cover Gulp in much detail in this post – you can look to Scott Hanselman’s excellent intro post. There are also some excellent tools for Visual Studio that support Gulp – notably Task Runner Explorer Extension (soon to be baked into VS 2015).

Configurations for Gulp

After a bit of a learning curve, we finally got our Gulp file into a good place – we were able to compile TypeScript to JavaScript, concat the files preserving ordering for dependencies, minify, include source maps and output to the correct directories. We even got this process kicked off as part of our TFS Team Build for our web application.

However, I did run into a hitch – configurations. It’s easy enough to specify a configuration (or environment) for Gulp using the NODE_ENV setting from the command line. Just set the value in the CLI you’re using (so “set NODE_ENV=Release” for the command prompt, and “$env:NODE_ENV = ‘Release’” for PowerShell) and invoke gulp. However, it seems that configurations are not yet supported within Visual Studio. I wanted to minify only for Release configurations – and I found there was no obvious way to do this.

I even managed to find a reply to a question on the Task Runner Explorer page on the VS Gallery where Mads Kristensen states there is no configuration support for the extension yet – he says it’s coming though (see here – look for the question titled “Passing build configuration into Gulp”).

The good news is I managed to find a passable workaround.

The Workaround

In my gulp file I have the following lines:

// set a variable telling us if we're building in release
var isRelease = true;
if (process.env.NODE_ENV && process.env.NODE_ENV !== 'Release') {
    isRelease = false;
}

This is supposed to grab the value of the NODE_ENV from the environment for me. However, running within the Task Runner Explorer quickly showed me that it was not able to read this value from anywhere.

At the top of the Gulp file, there is a /// comment that allows you to bind VS events to the Gulp file – so if you want a task executed before a build, you can set BeforeBuild=’default’ inside the ///. At first I tried to set the environment using “set NODE_ENV=$(Configuration)” in the pre-build event for the project, but no dice.

Here’s the workaround:

  • Remove the BeforeBuild binding from the Gulp file (i.e. when you build your solution, Gulp is not triggered). You can see your bindings in the Task Runner Explorer – you want to make sure “Before Build” and “After Build” are both showing 0:

image

  • Add the following into the Pre-Build event on your Web project Build Events tab:
    • set NODE_ENV=$(ConfigurationName)
    • gulp

image

That’s it! Now you can just change your configuration from “Debug” to “Release” in the configuration dropdown in VS and when you build, Gulp will find the correct environment setting. Here you can see I set the config to Debug, the build executes in Debug and Gulp is correctly reading the configuration setting:

image

Caveats

There are always some – in this case only two that I can think of:

  1. You won’t see the Gulp output in the Task Runner Explorer Window on builds – however, if you use the Runner to invoke tasks yourself, you’ll still see the output. The Gulp output will now appear in the Build output console when you build.
  2. If you’ve set a Watch task (to trigger Gulp when you change a TypeScript file, for example) it won’t read the environment setting. For me it’s not a big deal since build is invoked prior to debugging from VS anyway. Also, for our build process I default the value to “release” just in case.

Happy gulping!

Reflections on DSC for Release Management

A couple of months ago I did a series of posts (this one has the summary of all my RM/DSC posts) about using PowerShell DSC in Release Management. I set out to see if I could create a DSC script that RM could invoke that would prep the environment and install the application. I managed to get it going, but never felt particularly good about the final solution – it always felt a little bit hacky. Not the entire solution per se – really just the application bit.

The main reason for this was that I needed to hack the Script resource in order to run commands on the target node with parameters. Initially I thought that the inability to do this natively in DSC was a short-sighted aspect of DSC’s architecture – but the more I thought about it, the more I realized that I was trying to shoehorn application installation into DSC.

DSC scripts should be declarative – my scripts were mostly declarative, but the application-specific parts of the script were very much imperative – and that started to smell.

Idempotency

I wrote about what I consider to be the most important mental shift when working with PowerShell DSC – idempotency. The scripts you create need to be idempotent – that is they need to end up in the same end state no matter what the starting state is. This works really well for the environment that an application needs to run in – but it doesn’t really work so well for the application itself.
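
As a trivial illustration (my own example, not from the original scripts), an idempotent snippet converges on the same end state no matter how many times you run it:

# hypothetical path - the point is the convergence, not the folder
$logDir = "C:\inetpub\logs\MyApp"

if (-not (Test-Path $logDir))
{
    New-Item -Path $logDir -ItemType Directory | Out-Null
}
# running this once or fifty times leaves the node in exactly the same state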

My conclusion is simple: use DSC to specify the environment, and use plain ol’ PowerShell to install your application.

PowerShell DSC resources are split into 3 actions – Get, Test and Set. The Get method gets the state of the resource on the node. The Set method “makes it so” – it enforces the state the script specifies. The Test method checks to see if the target node’s state matches the state the script specifies. Let’s consider an example: the WindowsFeature resource. Consider the following excerpt:

WindowsFeature WebServerRole
{
    Name = "Web-Server"
    Ensure = "Present"
}

When executing, this resource will check the corresponding WindowsFeature (IIS) on the target node using the Test method. If IIS is present, no action is taken (the node state matches the desired state specified in the script). If it’s not installed, the Set method is invoked to install/enable IIS. Of course, if we simply wanted to query the state of the WindowsFeature, the Get method would tell us the state (installed or not) of IIS.

This Get-Test-Set paradigm works well for environments – however, it starts to break down when you try to apply it to an application. Consider a Web Application with a SQL Database backend. How do you test if the application is in a particular state? You could check the schema of the database as an indication of the database state; you could check if the site exists as an indication of the web site state. Of course this may not be sufficient for checking the state of your application.
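
To illustrate, a Test-style check for a web application might look something like this rough sketch (the site, server and table names are hypothetical, and Invoke-Sqlcmd assumes the SQLPS/SqlServer module is available) – and even then it only approximates the real state of the application:

Import-Module WebAdministration

function Test-MyAppState
{
    # is the website there at all?
    $siteExists = (Get-Website | Where-Object { $_.Name -eq "FabrikamFiber" }) -ne $null

    # does the database schema look like the version we expect?
    $schemaVersion = Invoke-Sqlcmd -ServerInstance "dbserver" -Database "FabrikamFiber" `
        -Query "SELECT TOP 1 Version FROM SchemaVersions ORDER BY AppliedOn DESC"

    return ($siteExists -and $schemaVersion.Version -eq "1.0.0")
}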

(On a side note, if you’re using WebDeploy to deploy your website and you’re using Database Projects, you don’t need to worry, since these mechanisms are idempotent).

The point is, you may be deploying an application that doesn’t use an idempotent mechanism. Either way, you’re better off not trying to shoehorn application installation into DSC. Also, Release Management lets you execute both DSC and “plain” PowerShell against target nodes – so use them both.

WebServerPreReqs Script

I also realized that I never published my “WebServerPreReqs” script. I use this script to prep a Web Server for my web application. There are four major sections to the script: Windows Features, runtimes, WebDeploy and MMA (the Microsoft Monitoring Agent).

First, I ensure that Windows is in the state I need it to be – particularly IIS. I ensure that IIS is installed, as well as some other options like Windows authentication. Also, I ensure that the firewall allows WMI.

Script AllowWMI 
{
    GetScript = { @{ Name = "AllowWMI" } }
    TestScript = { $false }
    SetScript = 
    {
        Set-NetFirewallRule -DisplayGroup "Windows Management Instrumentation (WMI)" -Enabled True
    }
}

WindowsFeature WebServerRole
{
    Name = "Web-Server"
    Ensure = "Present"
}

WindowsFeature WebMgmtConsole
{
    Name = "Web-Mgmt-Console"
    Ensure = "Present"
    DependsOn = "[WindowsFeature]WebServerRole"
}

WindowsFeature WebAspNet
{
    Name = "Web-Asp-Net"
    Ensure = "Present"
    DependsOn = "[WindowsFeature]WebServerRole"
}

WindowsFeature WebNetExt
{
    Name = "Web-Net-Ext"
    Ensure = "Present"
    DependsOn = "[WindowsFeature]WebServerRole"
}

WindowsFeature WebAspNet45
{
    Name = "Web-Asp-Net45"
    Ensure = "Present"
    DependsOn = "[WindowsFeature]WebServerRole"
}

WindowsFeature WebNetExt45
{
    Name = "Web-Net-Ext45"
    Ensure = "Present"
    DependsOn = "[WindowsFeature]WebServerRole"
}

WindowsFeature WebHttpRedirect
{
    Name = "Web-Http-Redirect"
    Ensure = "Present"
    DependsOn = "[WindowsFeature]WebServerRole"
}

WindowsFeature WebWinAuth
{
    Name = "Web-Windows-Auth"
    Ensure = "Present"
    DependsOn = "[WindowsFeature]WebServerRole"
}

WindowsFeature WebScriptingTools
{
    Name = "Web-Scripting-Tools"
    Ensure = "Present"
    DependsOn = "[WindowsFeature]WebServerRole"
}

Next I install any runtimes my website requires – in this case, the MVC framework. You need to supply a network share somewhere for the installer – of course you could use a File resource as well, but you’d still need to have a source somewhere.

#
# Install MVC4
#
Package MVC4
{
    Name = "Microsoft ASP.NET MVC 4 Runtime"
    Path = "$AssetPath\AspNetMVC4Setup.exe"
    Arguments = "/q"
    ProductId = ""
    Ensure = "Present"
    DependsOn = "[WindowsFeature]WebServerRole"
}

I can’t advocate WebDeploy as a web deployment mechanism enough – if you’re not using it, you should be! However, in order to deploy an application remotely using WebDeploy, the WebDeploy agent needs to be running on the target node and the firewall port needs to be opened. No problem – easy to specify declaratively using DSC. I add the required arguments to get the installer to deploy and start the WebDeploy agent (see the Arguments setting in the Package WebDeploy resource). I also use a Script resource to Get-Test-Set the firewall rule for WebDeploy:

#
# Install webdeploy
#
Package WebDeploy
{
    Name = "Microsoft Web Deploy 3.5"
    Path = "$AssetPath\WebDeploy_amd64_en-US.msi"
    Arguments = "ADDLOCAL=MSDeployFeature,MSDeployAgentFeature"
    ProductId = ""
    Ensure = "Present"
    Credential = $Credential
    DependsOn = "[WindowsFeature]WebServerRole"
}

#
# Enable webdeploy in the firewall
#
Script WebDeployFwRule
{
    GetScript = 
    {
        write-verbose "Checking WebDeploy Firewall exception status"
        $Rule = Get-NetFirewallRule -DisplayName "WebDeploy_TCP_8172"
        Return @{
            Result = "DisplayName = $($Rule.DisplayName); Enabled = $($Rule.Enabled)"
        }
    }
    SetScript =
    {
        write-verbose "Creating Firewall exception for WebDeploy"
        New-NetFirewallRule -DisplayName "WebDeploy_TCP_8172" -Direction Inbound -Action Allow -Protocol TCP -LocalPort 8172
    }
    TestScript =
    {
        if (Get-NetFirewallRule -DisplayName "WebDeploy_TCP_8172" -ErrorAction SilentlyContinue) 
        {
            write-verbose "WebDeploy Firewall exception already exists"
            $true
        } 
        else 
        {
            write-verbose "WebDeploy Firewall exception does not exist"
            $false
        }
    }
    DependsOn = "[Package]WebDeploy"
}

Finally, I wanted to make sure that MMA (the Microsoft Monitoring Agent) is installed so that I can monitor my application using Application Insights. This one was a little tricky since there isn’t an easy way to install the agent quietly – I have to unzip the installer and then invoke the MSI within. However, it’s still not that hard.

#
# MMA
# Since this comes in an exe that can't be run silently, first copy the exe to the node,
# then unpack it. Then use the Package Resource with custom args to install it from the
# unpacked msi.
#
File CopyMMAExe
{
    SourcePath = "$AssetPath\MMASetup-AMD64.exe"
    DestinationPath = "c:\temp\MMASetup-AMD64.exe"
    Force = $true
    Type = "File"
    Ensure = "Present"
}

Script UnpackMMAExe
{
    DependsOn ="[File]CopyMMAExe"
    TestScript = { $false }
    GetScript = {
        @{
            Result = "UnpackMMAExe"
        }
    }
    SetScript = {
        Write-Verbose "Unpacking MMA.exe"
        $job = Start-Job { & "c:\temp\MMASetup-AMD64.exe" /t:c:\temp\MMA /c }
        Wait-Job $job
        Receive-Job $job
    }
}

Package MMA
{
    Name = "Microsoft Monitoring Agent"
    Path = "c:\temp\MMA\MOMAgent.msi"
    Arguments = "ACTION=INSTALL ADDLOCAL=MOMAgent,ACSAgent,APMAgent,AdvisorAgent AcceptEndUserLicenseAgreement=1 /qn /l*v c:\temp\MMA\mmaInstall.log"
    ProductId = ""
    Ensure = "Present"
    Dependson = "[Script]UnpackMMAExe"
}

After running this script against a Windows Server in any state, I can be sure that the server will run my application – no need to guess or hope.
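
For testing outside of Release Management, applying the configuration by hand might look something like this – I’m assuming the configuration is called WebServerPreReqs and takes the node name and asset share as parameters, so adjust to match the downloaded script:

# "compile" the configuration to a mof and push it to the target node
. .\WebServerPreReqs.ps1
WebServerPreReqs -NodeName "fabfiberserver" -AssetPath "\\buildserver\assets" -OutputPath .\mofs
Start-DscConfiguration -Path .\mofs -Wait -Verbose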

You can download the entire script from here.

Release Management

Now releasing my application is fairly easy in Release Management – execute two vNext script tasks: the first runs WebServerPreReqs DSC against the target node; the second runs a plain PowerShell script that invokes WebDeploy for my application using the drop folder of my build as the source.
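
The second script isn’t anything fancy – a hedged sketch of it might look like the following (the package name, server, port and auth details are placeholders rather than the exact script from my release):

param(
    [string]$PackagePath  = "\\buildserver\drops\FabrikamFiber\latest",
    [string]$TargetServer = "fabfiberserver"
)

# invoke the WebDeploy command file produced by the build against the target node
# (you may also need auth arguments depending on how the WebDeploy agent is configured)
& "$PackagePath\FabrikamFiber.Web.deploy.cmd" /Y "/M:https://$($TargetServer):8172/msdeploy.axd" -allowUntrusted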

Conclusion

PowerShell DSC is meant to be declarative – any time you’re doing any imperative scripting, rip it out and put it into plain PowerShell. Typically this split is going to be along the line of environment vs application. Use DSC for environment and plain PowerShell scripts for application deployment.

Happy releasing!

JSPM, NPM, Gulp and WebDeploy in a TeamBuild

I’ve been coding a web project using Aurelia for the last couple of weeks (more posts about what I’m actually doing to follow soon!). Aurelia is an amazing SPA framework invented by Rob Eisenberg (@EisenbergEffect).

JSPM

Aurelia utilizes npm (Node Package Manager) as well as the relatively new jspm – which is like npm for “browser package management”. In fact Rob and his Aurelia team are working very closely with the jspm team in order to add functionality that will improve how Aurelia is bundled and packaged – but I digress.

To utilize npm and jspm, you need to specify the dependencies that you have on any npm/jspm packages in a package.json file. Then you can run “npm install” and “jspm install” and the package managers spring into action, pulling down all your dependencies. This works great while you’re developing – but can be a bit strange when you’re deploying with WebDeploy (and you should be!)

WebDeploy (out of the box) only packages files that are included in your project. This is what you want for any of your source (or content) files. But you really don’t want to include dependencies in your project (or in source control for that matter) since the package managers are going to refresh the dependencies during the build anyway. That’s the whole point of using Package Managers in the first place! The problem is that when you package your website, none of the dependencies will be included in the package (since they’re not included in the VS project).

There are a couple of solutions to this problem:

  1. You could execute the package manager install commands after you’ve deployed your site via WebDeploy. However, if you’re deploying to WAWS (or don’t have access to run scripts on the server where your site is hosted) you won’t be able to – and you are going to end up with missing dependencies.
  2. You could include the packages folder in your project. The problem with this is that if you upgrade a package, you’ll end up having to exclude the old package (and its dependencies) and include the new package (and any of its dependencies). You lose the value of using the Package Manager in the first place.
  3. Customize WebDeploy to include the packages folder when creating the deployment package. Now we’re talking!

Including Package Folders in WebDeploy

Of course as I considered this problem I was not happy with either running the Package Manager commands on my hosting servers (in the case of WAWS this isn’t even possible) or including the package files in my project. I then searched out Sayed Ibrahim Hashimi’s site to see what guidance he could offer (he’s a build guru!). I found an old post that explained how to include “extra folders” in web deployment – however, that didn’t quite work for me. I had to apply the slightly more up-to-date property group specified in this post. Sayed had a property group for <CopyAllFilesToSingleFolderForPackageDependsOn> but you need the same property group for <CopyAllFilesToSingleFolderForMsdeployDependsOn>.

My final customized target to include the jspm package folder in WebDeploy actions is as follows (you can add this to the very bottom of your web project file, just before the closing </Project> tag):

<!-- Include the jspm_packages folder when packaging in webdeploy since they are not included in the project -->
<PropertyGroup>
  <CopyAllFilesToSingleFolderForPackageDependsOn>
    CustomCollectFiles;
    $(CopyAllFilesToSingleFolderForPackageDependsOn);
  </CopyAllFilesToSingleFolderForPackageDependsOn>

  <CopyAllFilesToSingleFolderForMsdeployDependsOn>
    CustomCollectFiles;
    $(CopyAllFilesToSingleFolderForMsdeployDependsOn);
  </CopyAllFilesToSingleFolderForMsdeployDependsOn>
</PropertyGroup>

<Target Name="CustomCollectFiles">
  <ItemGroup>
    <_CustomFiles Include=".\jspm_packages\**\*">
      <DestinationRelativePath>%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
    </_CustomFiles>
    <FilesForPackagingFromProject Include="%(_CustomFiles.Identity)">
      <DestinationRelativePath>jspm_packages\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
    </FilesForPackagingFromProject>
  </ItemGroup>
</Target>

Now when I package my site, I get all the jspm packages included.
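
If you want to verify this locally, you can produce the package from the command line with something like the following (the paths are assumptions and msbuild needs to be on your path) – the resulting zip should now contain the jspm_packages folder:

& msbuild .\src\MyWebProject\MyWebProject.csproj `
    /p:Configuration=Release `
    /p:DeployOnBuild=true `
    /p:WebPublishMethod=Package `
    /p:PackageLocation=..\..\artifacts\MyWebProject.zip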

TeamBuild with Gulp, NPM, JSPM and WebDeploy

The next challenge is getting this all to work on a TeamBuild. Let’s quickly look at what you need to do manually to get a project like this to compile:

  1. Pull the sources from source control
  2. Run “npm install” to install the node packages
  3. Run “jspm install –y” to install the jspm packages
  4. (Optionally) Run gulp – in our case this is required since we’re using TypeScript. We’ve got gulp set up to transpile our TypeScript source into js, do minification etc.
  5. Build in VS – for our WebAPI backend
  6. Publish using WebDeploy (could just be targeting a deployment package rather than pushing to a server)

Fortunately, once you’ve installed npm, jspm and gulp globally (using -g) you can create a simple PowerShell script to do steps 2 – 4. The out-of-the-box build template does the rest for you. Here’s my Gulp.ps1 script, which I specify in the “Pre-build script path” property of my TeamBuild Process:

param(
    [string]$sourcesDirectory = $env:TF_BUILD_SOURCESDIRECTORY
)

$webDirectory = $sourcesDirectory + "\src\MyWebProject"
Push-Location

# Set location to MyWebProject folder
Set-Location $webDirectory

# refresh the packages required by gulp (listed in the package.json file)
$res = npm install 2>&1
$errs = ($res | ? { $_.gettype().Name -eq "ErrorRecord" -and $_.Exception.Message.ToLower().Contains("err") })
if ($errs.Count -gt 0) {
    $errs | % { Write-Error $_ }
    exit 1
} else {
    Write-Host "Successfully ran 'npm install'"
}

# refresh the packages required by jspm (listed in the jspm section of package.json file)
$res = jspm install -y 2>&1
$errs = ($res | ? { $_.gettype().Name -eq "ErrorRecord" -and $_.Exception.Message.ToLower().Contains("err") })
if ($errs.Count -gt 0) {
    $errs | % { Write-Error $_ }
    exit 1
} else {
    Write-Host "Successfully ran 'jspm install -y'"
}

# explicitly set the configuration and invoke gulp
$env:NODE_ENV = 'Release'
node_modules\.bin\gulp.cmd build

Pop-Location

One last challenge – one of the folders (a lodash folder) ends up having a path > 260 characters. TeamBuild can’t remove this folder before doing a pull of the sources, so I had to modify the build template in order to execute a “CleanNodeDirs” command (I implemented this as an optional “pre-pull” script). However, this is a chicken-and-egg problem – if the pull fails because of old folders, then you can’t get the script to execute to clean the folders before the pull… So I wrap the “pre-pull” invocation in an If activity that first checks if the “pre-pull” script exists. If it does, execute it; otherwise carry on.
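
Expressed as PowerShell (the real implementation is an If activity in the build workflow, and the script name and location here are assumptions), the pre-pull check boils down to this:

# assumed name and location of the optional pre-pull script
$prePullScript = Join-Path $env:TF_BUILD_SOURCESDIRECTORY "CleanNodeDirs.ps1"

if (Test-Path $prePullScript)
{
    Write-Host "Pre-pull script found - cleaning node module folders"
    & $prePullScript
}
else
{
    Write-Host "No pre-pull script yet - continuing with the pull"
}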

The logic for this is as follows:

  1. On a clean build (say a first build) the pre-pull script does not exist
  2. When the build checks for the pre-pull script, it’s not there – the build continues
  3. The build executes jspm, and the offending lodash folder is created
  4. The next build initializes, and detects that the pre-pull script exists
  5. The pre-pull script removes the offending folders
  6. The pull and the remainder of the build can now continue

Unfortunately straight PowerShell couldn’t delete the folder (since the path is > 260 chars). I resorted to invoking cmd. I repeat it twice since the first time it complains that the folder isn’t empty – running the 2nd time completes the delete. Here’s the script:

Param(
  [string]$srcDir = $env:TF_BUILD_SOURCESDIRECTORY
)

# forcefully remove left over node module folders
# necessary because the folder depth means paths end up being > 260 chars
# run it twice since it sometimes complains about the dir not being empty
# suppress errors
$x = cmd /c "rd $srcDir\src\MyWebProject\node_modules /s /q" 2>&1
$x = cmd /c "rd $srcDir\src\MyWebProject\node_modules /s /q" 2>&1

Conclusion

Getting NPM, JSPM, Gulp, WebDeploy and TeamBuild to play nicely is not a trivial exercise. Perhaps vNext builds will make this all easier – I’ve yet to play with it. For now, we’re happy with our current process.

Any build/deploy automation can be tricky to set up initially – especially if you’ve got as many moving parts as we have in our solution. However, the effort pays off, since you’ll be executing the build/deploy cycle many hundreds of times over the lifetime of an agile project – each time you can deploy from a single button-press is a win!

Happy packaging!
