The unknown beauty of shared projects in .NET

Code sharing is something we all want to achieve in the easiest and most elegant way possible. Whether you are a single developer or a company, you will eventually end up with a “core library” that contains code used in all your projects. And with that comes the problem of sharing, distributing, versioning, etc.

Note: when this article states “PCL”, it stands for Portable Class Libraries. When it states “SP”, it stands for Shared Projects.

Best practices in the .NET world

A lot of smart people have tried to solve this problem, and these days the combination of NuGet and PCL seems to be the standard solution.

Deploying via NuGet is a great way to distribute binaries globally. People can easily reference your code where in the past you had to distribute binaries manually. For internal use, though, it can be overkill in some situations.

Below is a visual representation of the NuGet workflow:

[Image: the NuGet distribution workflow]

Downsides of using NuGet for code distribution

One of the downsides of distributing via NuGet is that I have to redistribute the package for every small improvement or fix that I make. I have to go through the whole process again, and updating the package in Visual Studio is the worst part of the workflow. In most cases I simply forget to update all projects (I cannot open them all in Visual Studio, because at that moment I am working on a single project and want to stay focused).

Introducing Shared Projects

SP are a new feature of Visual Studio 2013 Update 2. They were initially created to support universal apps (apps for both Windows Phone (RT) and Windows RT), and that’s what most people know them for.

However, there is also a genius Visual Studio extension that enables SP for any .NET project. It means that you can create a project (.shproj) that contains a list of C# files. This project can be referenced by any other project, and its files will be included at compile time.

Advantages of Shared Projects over Portable Class Libraries

There are quite a few debates on the internet about whether one should use Shared Projects (and eventually create platform-specific assemblies) or use PCL. If you can work within the borders of PCL, it is a really good solution. However, I am a bit (over?) demanding, and PCL just doesn’t work for me. It works great for simple projects that only contain C# POCO stuff, but as soon as you go into platform-specific details, you are basically screwed. SP enable you to really use all the features available on the platform you are actually compiling to.
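To illustrate: because a SP is compiled as part of each target project, a shared file can branch on the conditional compilation symbols those projects define. A minimal sketch (the class is made up for this example; NETFX_CORE and WINDOWS_PHONE are the symbols the respective project templates usually define):

    namespace MyCompany.Core
    {
        public static class PlatformInfo
        {
            // Each target project compiles this file with its own symbols,
            // so the right branch is selected at compile time.
            public static string GetPlatformName()
            {
    #if NETFX_CORE
                return "Windows RT";
    #elif WINDOWS_PHONE
                return "Windows Phone";
    #else
                return "Desktop .NET";
    #endif
            }
        }
    }

This is exactly what a PCL cannot give you: a PCL is compiled once against the intersection of all target platforms, while SP code is compiled once per platform.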

The new workflow with Shared Projects

When using shared projects, I create a directory structure (but you can use git submodules as well). Then I can simply reference the projects on disk, which creates a renewed workflow:

[Image: the shared-projects workflow]

As you can see, it is much easier to update core libraries with this new workflow. Below are a few advantages of this workflow that I really like.

Less work

The less time I have to spend on cumbersome things (such as waiting for NuGet to update my packages in Visual Studio), the better. This approach cuts out the “middle man called NuGet”, and you no longer have to wait for the build server to build a package just to test a fix. You write code, recompile, and you are done. Does the fix work? Then you only have to commit the SP code.

Debugging through all source code

When referencing NuGet packages, you lose the ability to step through the code if the PDB files are not included in the NuGet packages. And when the NuGet package is created on a build server, the PDB files will point not to your local source files but to the ones on the build server (which won’t match your local file system). With SP you are always able to debug through any referenced code. This makes it very easy to find and fix issues or to test new features.

You can never forget to apply a fix to other projects

When using NuGet, whenever a bug is fixed inside a library, you must update every project using that library. As I stated earlier, I have a habit of forgetting this and accidentally releasing a version without updating the library first. With SP this is no longer possible, because the library code is compiled every time you compile your end product. This means that if you fix a bug in one place, the fix is automatically applied to all other projects without you doing anything explicitly. The only thing required is to recompile the end product so it contains the fix (but you have to do that anyway).

A critical look at this approach

When this idea first popped into my head, I was a bit skeptical because it sounded too good to be true. Below are a few questions that I asked myself before writing this article.

Can I use NuGet packages in shared projects?

Absolutely. SP don’t have a context of their own. You can simply reference everything you need (including SP that depend on each other) and the compilation will succeed, because the context is determined at compile time. Note that a SP never produces a binary; it’s the compilation of the final project that does that.
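As a sketch (Newtonsoft.Json stands in here for any NuGet package): a file in a SP can freely use types from a NuGet package, as long as every final project that includes the SP also references that package, because that is where the compilation context comes from.

    using Newtonsoft.Json;

    namespace MyCompany.Core
    {
        // This class lives in a SP. It compiles because the referencing
        // project (not the SP itself) supplies the Newtonsoft.Json reference.
        public static class SettingsSerializer
        {
            public static string Serialize(object settings)
            {
                return JsonConvert.SerializeObject(settings);
            }
        }
    }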

How about referencing other shared projects from a shared project?

You cannot directly reference another SP from a SP, but this works just like it does with NuGet packages: simply reference all the required SP from the final project and it will compile just fine.
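A sketch with made-up names: a type defined in one SP can be used from another SP, and everything resolves once the final project references both.

    // File in shared project "Shared.Core" (hypothetical name)
    namespace MyCompany.Core
    {
        public class Logger
        {
            public void Log(string message)
            {
                System.Console.WriteLine(message);
            }
        }
    }

    // File in shared project "Shared.Data" (hypothetical name). It uses
    // Logger from Shared.Core; this only compiles because the final
    // project references BOTH shared projects.
    namespace MyCompany.Data
    {
        public class Repository
        {
            private readonly MyCompany.Core.Logger _logger = new MyCompany.Core.Logger();

            public void Save()
            {
                _logger.Log("Saving...");
            }
        }
    }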

But you cannot unit test shared projects, so how do you solve that problem?

You are right. SP are plain MSBuild files and have no context when not referenced. But just as with normal projects, you create a unit test project and reference the SP there. You can unit test as you normally would, which means you can actually create a CI build configuration that verifies the code when it is checked in.
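A sketch of what such a test could look like (MSTest here; Calculator is a made-up stand-in for whatever class lives in your SP):

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using MyCompany.Core; // namespace of the (hypothetical) shared Calculator

    namespace MyCompany.Core.Tests
    {
        // The test project references the SP, so Calculator (a file in the
        // SP) is compiled directly into the test assembly.
        [TestClass]
        public class CalculatorFacts
        {
            [TestMethod]
            public void Add_ReturnsSumOfOperands()
            {
                var calculator = new Calculator();

                Assert.AreEqual(5, calculator.Add(2, 3));
            }
        }
    }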

What are the downsides of this approach?

So far I have found the following downsides of this approach:

  • You need to install the Visual Studio extension and use at least Visual Studio 2013 Update 2
  • You must include the shared projects in your solution in order to reference them using the extension
  • You need to clone both the libraries repository and the end-product repository (but you need that anyway if you want to make fixes)

Example solution / directory structure

To test this behavior, I have created a simple directory structure that mimics an in-house library used by several products. It is of course a very simplistic project. The example demonstrates the following usages.

Using shared projects from a separate directory (as if in a separate repository)

The example uses separate directories to mimic the use of separate repositories.

[Image: the example directory structure]
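The layout boils down to something like this (the names here are illustrative, not necessarily the exact ones in the example):

    SharedLibraries\                 (mimics the library repository)
        SharedLibraries.sln          (contains the SP and its unit test project)
    WpfProduct\                      (mimics a product repository)
        WpfProduct.sln               (WPF app referencing the SP on disk)
    ConsoleProduct\                  (mimics another product repository)
        ConsoleProduct.sln           (console app referencing the SP on disk)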

Unit test for the SP

I have created a unit test project in the SP solution. It shows that you can unit test the SP code and create a CI server build configuration for it.

[Image: the unit test project for the shared project]

Several target platforms

The example contains both WPF and Console as target platforms. I didn’t include Windows Phone or WinRT so no one is forced to install those on their dev machine.

As you can see in the UI, I also included a free bug in the example. You can fix it in one of the three solutions and see that it is automatically fixed in the others as well.

[Image: the example application showing the deliberate bug]

Introducing ContinuaInit – initialize your build server variables

I am using Continua CI as my continuous integration server. One of the things I always try to do is streamline all the configurations so I know what is happening for each product. However, I always found myself writing a lot of if/else logic to determine the state of a build:

[Image: the if/else tree in a Continua CI configuration]

The advantage of these variables is that I can implement logic inside a configuration based on whether the build is a CI build and whether it is an official build. My goal was to replace this whole tree with a single call to an executable that contains the rules to determine the variables and initialize them all.

The result is ContinuaInit. You can replace the whole tree with an Execute Program action that calls ContinuaInit.exe. That results in a much cleaner initialization:

[Image: the simplified initialization using ContinuaInit]

What initialization is supported?

For now, it supports the following rules:

PublishType

If the branch is master, the PublishType will be set to Official. Otherwise the PublishType will be set to Nightly.

IsOfficialBuild

true if the branch is master, otherwise false

IsCiBuild

true if the branch does not equal master, otherwise false

DisplayVersion

Will be set to the version provided on the command line. Then one of the following values will be applied:

  • nightly => when it is a nightly build
  • ci => when it is a CI build
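These rules are simple enough to capture in a few lines of code. A minimal sketch of the decision logic (the class, method, and marker format are mine, not ContinuaInit’s actual internals):

    public static class BuildVariableRules
    {
        // Derives the variables described above from the branch name and
        // the version passed on the command line.
        public static void Apply(string branchName, string version)
        {
            bool isOfficialBuild = branchName == "master";
            bool isCiBuild = !isOfficialBuild;
            string publishType = isOfficialBuild ? "Official" : "Nightly";

            // For non-official builds a nightly/ci marker is applied; the
            // exact marker format is an assumption in this sketch.
            string displayVersion = isOfficialBuild ? version : version + " nightly";

            System.Console.WriteLine("PublishType = " + publishType);
            System.Console.WriteLine("IsOfficialBuild = " + isOfficialBuild);
            System.Console.WriteLine("IsCiBuild = " + isCiBuild);
            System.Console.WriteLine("DisplayVersion = " + displayVersion);
        }
    }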

How can you get it?

ContinuaInit is available on both NuGet and Chocolatey.

Using NDepend v5 on Catel

It’s been quite a while since I last gave NDepend a spin on Catel. Catel is a good project to test-drive such a product because it has a huge code base and supports various platforms (WPF, Silverlight, Windows Phone, WinRT, Xamarin.Android and Xamarin.iOS). Today I will give the latest version of NDepend a spin.

Upgrading from v4 to v5

Though moving from v4 to v5 would normally be allowed to introduce breaking changes and maybe even a new project structure, the update went very well. NDepend recognized my “old” v4 project file without me having to run some sort of transformation wizard. The first thing I noticed was the new look and feel of the software. They skipped the “black & white metro look” of the Visual Studio 2012 theme and went straight for the much better “colorized metro look” of Visual Studio 2013:

[Image: the new NDepend v5 look and feel]

First things first

Normal people would read the manual and carefully feed the product the right input. I decided not to follow the normal way because I like to do things differently (whether that is good or bad is up to you). I had made so many changes for the upcoming Catel 4.0 release that I decided to simply plug in all the .NET 4.5 assemblies of the core and all available extensions (up to 15 assemblies at the time of writing). Then I simply ran the analysis and it came up with this dashboard:

[Image: the NDepend dashboard]

Luckily I didn’t violate any critical rules, so that is a good thing.

In-depth verification of rules

One of the rules warns that code might be poorly documented. It determines this by checking the lines of code (LoC) against the length of the documentation. Though this method works in most cases, some methods are simply allowed to be long because splitting them up would make no sense.

Another great feature that NDepend provides is dead code detection. It checks all types and type members and informs you when it suspects dead code. As you probably know, I am a big fan of Fody, a great way to weave code into your assemblies. One of the artifacts of Fody is an interface called ProcessedByFody, which is injected to ensure that an assembly won’t be handled twice by Fody. However, NDepend sees it as dead code. I could simply modify the LINQ query at the top left by adding an additional check:

[Image: editing the CQLinq query]
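I won’t reproduce the full dead-types rule here; the fragment below only shows the kind of extra check one could add (CQLinq is essentially LINQ over NDepend’s code model, and the rule’s own conditions are elided):

    // Illustrative fragment only - the point is the added exclusion
    from t in JustMyCode.Types
    where // ...the rule's existing dead-code conditions go here...
          !t.FullName.EndsWith("ProcessedByFody")   // ignore Fody's marker interface
    select t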

Conclusion

NDepend v5 is a solid improvement of an already great product, and it makes the software easier to work with. Even though the tooling is nice, I want to spend as little time as possible inside it. It is a great way to automate code reviews, something nearly all developers I know tend to forget. It requires some setup and configuration at first, but after that you can easily verify the code.

It is always good to have your code reviewed, even when you think you are a very skilled developer. Whether it is automated by a tool (such as NDepend) or done by a co-worker is up to you. I prefer a tool because it saves my co-workers time, and my co-workers are human beings who can make the same mistakes I make. Tools normally aren’t influenced by a bad day.

What’s next?

In the future I hope to add the NDepend analysis to my automated builds with Continua CI to see what comes out of it. The reports can be added to the build artifacts in Continua CI. Then I can break the build when there are warnings, which leaves me with two options:

1) Change the configuration if I think the code is correct
2) Change the code if I think the code is incorrect

But that is another blog post, another time!