Building Git repositories containing submodules with Continua CI

Today I wanted to add a repository that contains submodules to the build server. The submodules are required because I prefer to share code via shared projects, so it is always compiled for the right platform, as you can read here.

When you add a repository to a project or configuration in Continua CI, you can add authentication. However, the submodules won't use the same authentication, which means they can never be retrieved. To solve this issue, use the steps below.

Create separate build server account (optional)

I prefer to create a read-only account for build servers so they can never screw anything up. For private projects I use Bitbucket, so I created a new build server account and added it to the team with read-only access.

Create _netrc file on the server

The _netrc file can be used to provide default credentials for a host when none are configured on the repository itself. I prefer to keep everything together, so I followed these steps:

  1. Create directory C:\Continua\Auth
  2. Create a %HOME% system variable pointing to this path (I prefer using a shared path for %HOME% so it is accessible to all users)
  3. Create the _netrc file (note that there is no extension) with the following content:

machine [host]
login [buildserveraccount]
password [password]

for example:

machine bitbucket.org
login mybuildserveraccount
password mypassword
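The three steps above can be sketched as a shell script. This is only an illustration using a POSIX shell and a local stand-in directory; on the actual Windows build server you would create C:\Continua\Auth and set the %HOME% system variable (for example with `setx HOME C:\Continua\Auth /M`):

```shell
# Illustrative sketch of the _netrc setup; the path is a local stand-in
# for C:\Continua\Auth on the Windows build server.
AUTH_DIR="$PWD/Continua/Auth"

# Step 1: create the auth directory
mkdir -p "$AUTH_DIR"

# Step 2: point HOME at it (Windows equivalent: setx HOME C:\Continua\Auth /M)
export HOME="$AUTH_DIR"

# Step 3: create the _netrc file (note: no extension) with the credentials
cat > "$AUTH_DIR/_netrc" <<'EOF'
machine bitbucket.org
login mybuildserveraccount
password mypassword
EOF
```

With %HOME% pointing at that directory, any fetch the build agent performs against bitbucket.org without explicit authentication, including the submodule fetches, falls back to these credentials.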

Now Continua CI will be able to pull the submodules as well.

The unknown beauty of shared projects in .NET

Code sharing is something we all want to achieve in the easiest and most elegant way possible. Whether you are a single developer or a company, you will eventually end up with a “core library” that contains a lot of code used in all your projects. And with this comes the problem of sharing, distributing, versioning, and so on.

Note: in this article, “PCL” stands for Portable Class Libraries and “SP” stands for Shared Projects.

Best practices in the .NET world

A lot of smart people have tried to solve this problem, and recently the combination of NuGet and PCL seems to have become the standard solution.

Deploying via NuGet is a great way to distribute binaries globally. People can easily reference your code where you previously had to distribute binaries manually. For internal use, though, it can be overkill in some situations.

Below is a visual representation of the NuGet workflow:

[image: the NuGet workflow]

Downsides of using NuGet for code distribution

One of the downsides of distribution via NuGet is that I have to redistribute the package for every small improvement or fix. I have to go through the whole process again, and updating the package in Visual Studio is the worst part of the workflow. In most cases I simply forget to update all projects (I can't open them all in Visual Studio because I am working on a single project at that time and want to stay focused).

Introducing Shared Projects

SP are a new feature of Visual Studio 2013 Update 2. They were initially created to support universal apps (apps for both Windows Phone and Windows RT), and that's what most people know them for.

However, there is also a genius Visual Studio extension that enables SP for any .NET project. It means that you can create a project (.shproj) that contains a list of C# files. This project can be referenced by any other project, and its files will be included at compile time.
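To give an idea of what this looks like on disk (all file names here are hypothetical), a shared project is essentially an MSBuild file with a list of Compile items, and referencing it boils down to importing that file into the consuming project:

```xml
<!-- MyLib.Shared.projitems (simplified; names are hypothetical) -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <HasSharedItems>true</HasSharedItems>
  </PropertyGroup>
  <ItemGroup>
    <!-- These C# files are compiled into every project that imports this file -->
    <Compile Include="$(MSBuildThisFileDirectory)StringHelper.cs" />
  </ItemGroup>
</Project>

<!-- In the consuming .csproj, the reference is an import like this: -->
<Import Project="..\MyLib.Shared\MyLib.Shared.projitems" Label="Shared" />
```

Because the files are compiled as part of the referencing project, they always target that project's platform and framework.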

Advantages of Shared Projects over Portable Class Libraries

There are quite a few debates on the internet about whether one should use Shared Projects (eventually creating platform-specific assemblies) or PCL. If you can work within the borders of PCL, it is a really good solution. However, I am a bit (over?)demanding, and PCL just doesn't work for me. It works great for simple projects that only contain C# POCO code. As soon as you go into platform-specific details, you are basically screwed. SP enable you to use all the features available on the platform you are actually compiling for.

The new workflow with Shared Projects

When using shared projects, I create a directory structure (but you can use git submodules as well). Then I can simply reference the projects on disk, which creates a renewed workflow:
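For illustration, such a directory structure could look like this (all names are hypothetical), with the shared library living next to the products that consume it:

```
source/
├── MyLib/                  (shared library repository)
│   ├── MyLib.Shared/       (.shproj, .projitems and the C# files)
│   └── MyLib.Tests/        (unit test project referencing the SP)
├── ProductA/               (end-product repository)
│   └── ProductA.Wpf/       (references ..\..\MyLib\MyLib.Shared)
└── ProductB/
    └── ProductB.Console/
```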

[image: the Shared Projects workflow]

As you can see, it is much easier to update core libraries with this new workflow. Below are a few advantages of this workflow that I really like.

Less work

The less time I have to spend doing cumbersome things (such as waiting for NuGet to update my packages in Visual Studio), the better. This approach cuts out the middle man called NuGet, and you no longer have to wait for the build server to test a fix. You write code, recompile, and you are done. Does the fix work? Then you only have to commit the SP code.

Debugging through all source code

When referencing NuGet packages, you lose the ability to step through the code if the PDB files are not included in the NuGet packages. And when the NuGet package is created on a build server, the PDB files will not point to your local source files but to the ones on the build server (which won't match your local file system). With SP you are always able to debug through any referenced code. This makes it very easy to find and fix issues or test new features.

You can never forget to apply a fix to other projects

When using NuGet, whenever a bug is fixed inside a library, you must update the projects using that library. As I stated earlier, I have a habit of forgetting some of them and accidentally releasing a version without updating the library first. With SP this is no longer possible, because the library code is compiled every time you compile your end product. This means that if you fix the code in one place, the fix is automatically applied to all other projects without doing anything explicitly. The only thing required is to recompile the end product so it contains the fix (but we have to do that anyway).

A critical look at this approach

When this idea first popped into my head, I was a bit skeptical because I thought it sounded too good to be true. Below are a few questions that I asked myself before writing this article.

Can I use NuGet packages in shared projects?

Absolutely. SP have no context of their own. You can simply reference all the SP you need (including ones that depend on each other) and the compilation will succeed, because the context is determined at compile time. Note that an SP never produces a binary; it's the compilation of the final project that does.

How about referencing other shared projects from a shared project?

You cannot directly reference another SP from an SP, but it works just as it does with NuGet packages: simply reference all the required SP from the final project and it will compile just fine.

But you cannot unit test shared projects, how to solve that problem?

You are right. SP are simply MSBuild files and have no context when not being referenced. But just as with normal projects, you create a unit test project and reference the SP there. You can then unit test as you normally would, which means you can actually create a CI build configuration to verify the code when it is checked in.

What are the downsides of this approach?

So far I have found the following downsides of this approach:

  • You need to install the Visual Studio extension and use at least Visual Studio 2013 Update 2
  • You must include the shared projects in your solution in order to reference them using the extension
  • You need to clone both the libraries repository and the end-product repository (but you need that anyway if you want to make fixes)

Example solution / directory structure

To test this behavior, I have created a simple directory structure that mimics an in-house library used by several products. It is of course a very simplistic project. The example demonstrates the following usages:


Using shared projects in a separate directory (as in separate repository)

The example uses separate directories to mimic the use of separate repositories.

[image: the directory structure of the example]

Unit test for the SP

I have created a unit test project in the SP solution. It shows that you can unit test the SP code and create a CI server build configuration for it.

[image: the unit test project]

Several target platforms

The example contains both WPF and Console as target platforms. I didn't want to include Windows Phone or WinRT so as not to force anyone to install those on their dev machine.

As you can see in the UI, I also included a free bug in the example. You can fix it in one of the three solutions and see that it is automatically fixed in the others as well.

[image: the example application showing the bug]

Introducing ContinuaInit – initialize your build server variables

I am using Continua CI as a continuous integration server. One of the things I always try to do is standardize all configurations so I know what is happening for each product. However, I always found myself writing a lot of if/else logic to determine the state of a build:

[image: the if/else tree used to initialize build variables]

The advantage of these variables is that I can implement logic inside a configuration based on whether the build is a CI build and whether it is an official build. My goal was to replace this whole tree with a single call to an executable that contains the rules to determine the variables and initialize them all.

The result is ContinuaInit. You can replace the whole tree with an Execute Program action that calls ContinuaInit.exe, which results in a much cleaner initialization:

[image: the single ContinuaInit action]

What initialization is supported?

For now, it supports the following rules:

PublishType

If the branch is master, the PublishType will be set to Official. Otherwise the PublishType will be set to Nightly.

IsOfficialBuild

true if the branch is master, otherwise false

IsCiBuild

true if the branch does not equal master, otherwise false

DisplayVersion

Will be set to the version provided on the command line. Then one of the following suffixes is applied:

  • nightly => when it is a nightly build
  • ci => when it is a CI build
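The rules above can be summarized with a bit of shell pseudologic. This is only an illustration of the rules as described, not ContinuaInit's actual implementation, and the exact suffix format of DisplayVersion is an assumption:

```shell
# Illustration of the ContinuaInit rules described above (not the tool's code).
# BRANCH and VERSION stand in for the values Continua CI passes in.
BRANCH="develop"
VERSION="1.2.0"

if [ "$BRANCH" = "master" ]; then
  PublishType="Official"
  IsOfficialBuild="true"
  IsCiBuild="false"
  DisplayVersion="$VERSION"
else
  PublishType="Nightly"
  IsOfficialBuild="false"
  IsCiBuild="true"
  DisplayVersion="$VERSION ci"   # "nightly" or "ci" suffix, per the rules above
fi
```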

How can you get it?

ContinuaInit is available on both NuGet and Chocolatey.