GitFlow and Continuous Build

In a recent post I described how to ensure that your feature-to-develop GitFlow merge commits are reviewed before being introduced into the develop branch. This preserves the quality of develop, ensuring that it is truly deployable. How should one build the repository and provide artifacts? Which commits and which branches should be built? These questions are answered below.

Let’s start with the following observation. If a proper, modern build approach is used, such as a PSake build script stored in the repository, the result of the build does not depend on which branch points at a given commit. The repository contains all the scripts needed to run the build, so the output is the same no matter which branch is selected as the source of the build (if two or more branches point at the same commit). After all, the same commit is the same tree, which results in the same build. This gives us a very powerful tool for ensuring even better quality of develop. One can easily set up TeamCity with a branch specification to run the same build for all features:

+:refs/heads/feature/*

The build script creates artifacts, in my case NuGet packages, versioned as [major].[minor].[build_number]. The first two numbers are stored in the repository. This requires that features are not long-running (you don’t want a feature that starts at 1.1.1 and only lands somewhere after 2.1.2). The build number is shared across all the features. For now, I’m not considering whether the artifacts should be published to a gallery or not.
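Under the hood this can be a few lines in the PSake script. Below is a minimal sketch, assuming the major and minor numbers sit in a version.txt file in the repository root and that TeamCity passes its build counter in as the $buildNumber property (both names are my assumptions for the example, not taken from any particular build script):

properties {
    $buildNumber = "0"                                    # overridden by TeamCity's build counter
}

task Version {
    $majorMinor = (Get-Content .\version.txt).Trim()      # e.g. "1.2", stored in the repository
    $script:packageVersion = "$majorMinor.$buildNumber"   # e.g. "1.2.345"
}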

The question is: should we build the develop commits? For now, considering only the feature branches, the answer is no! All the commits in develop are merge commits which have already been reviewed and built in the feature branches. That’s why creating the merge commit in the feature branch is so powerful: you get the code reviewed, built and tested before it goes to develop. Again, the idea of postponing the finish of a feature until its value is acknowledged pays off in the quality of the develop branch.

 

Processes and The Cone of Uncertainty

The Cone of Uncertainty is a management term describing the fact that we cannot foresee the future of a project with constant accuracy. The longer the period we want to plan for, the less exact the plan is going to be. That’s the reasoning behind sprints in Agile: keep them short enough to stay in the narrow part of the cone. But it isn’t only about planning! The shape of the cone can be modified, by which I mean greatly narrowed, by good practices and by reducing manual labor.
There are plenty of processes and elements that can be introduced:

  • continuous builds
  • proper database deployments
  • tests
  • continuous deployments
  • promotion of application versions between environments

Each of them improves some part of the development process, making it less manual and more repeatable. Additionally, as you introduce tooling for the majority of these cases, the tools run at a similar speed every time, so you can greatly lower the uncertainty of some aspects of development. Then again, if some aspects become constant, only the real development work will affect the cone, and your team together with its manager get what you wanted: a more predictable process and a smaller cone of uncertainty.

Bounded context in deployment tools

Recently I’ve been circling around the topic of deployment. Imagine a situation in which you’re given a set of scripts, or script-like objects, used to deploy a set of applications. These scripts range from the very basic, like create-directory, to complex ones rooted in the organization’s infrastructure and tooling. Additionally, some of them are defined as groups of other scripts. For example, installing an application service starts with the creation of a directory, then the binaries are copied and finally the service is registered.
The scripts are not covered with tests, but they are hardened by years of successful usage. One could consider rewriting them completely and providing a full-blown set of tests. This may be hard, as you would throw away all the knowledge hidden in the scripts. Remember that there were big companies that went down the full-rewrite path and are no longer here; take Netscape as an example.

I’ve spent quite a while considering Chef, PowerShell, Puppet, even MSBuild with its tasks. What helped me make up my mind was the famous Blue Book. Why not consider the set of scripts a bounded context? Just take a look at the picture provided by Martin Fowler here. Wrap all the older scripts in a context bubble that provides a mapping, mostly intellectual, to all the terms that need to be known outside. It’s more than wrapping all the old scripts with an interface. There is a need for a real mapping, with a glossary, to let people who do not want to leave the bubble just yet exist in it for a while. Which tool will be used for the new bounded context communicating with the old one? That’s an open question. I’ll try to choose the best tool with good enough test support. The only real requirement is the ability to provide the mapping to the old deployment tools’ context.
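To make the bubble a bit more tangible, here is a minimal PowerShell sketch, assuming the legacy scripts are called Create-Directory.ps1, Copy-Binaries.ps1 and Register-Service.ps1 (all three names are hypothetical). The new context speaks only in terms of installing an application service; the mapping to the old vocabulary lives inside the function:

# the new context's term: "install an application service"
function Install-AppService {
    param(
        [string]$Name,        # service name in the new context's glossary
        [string]$SourcePath,  # where the built binaries come from
        [string]$TargetPath   # where the service should live
    )

    & .\legacy\Create-Directory.ps1 -Path $TargetPath                 # old term: create-directory
    & .\legacy\Copy-Binaries.ps1 -From $SourcePath -To $TargetPath    # old term: copy binaries
    & .\legacy\Register-Service.ps1 -Name $Name -Path $TargetPath     # old term: register service
}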

If you want to learn more, just take a look at the great Eric Evans videos under this link: http://dddcommunity.org/?s=four+strategies

Continuous delivery of an open source library

In a recent post I dealt with a basic git branching setup protecting an open source library author from the headache of a not-so-production-ready master branch. Having two main branches set up (master & dev), one can think about introducing continuous delivery. How about automatically publishing any set of commits that gets into the production branch? TeamCity with its git support makes it trivial.

The very first build configuration was made for the dev branch. It consists of only two steps (sketched as command lines after the list):

  1. Build – done with MSBuild
  2. NUnit – running all the tests
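Outside of TeamCity, these two steps boil down to roughly the following commands (the solution and test assembly names are assumptions made for the example):

& msbuild .\MyLibrary.sln /p:Configuration=Release
& nunit-console .\src\MyLibrary.Tests\bin\Release\MyLibrary.Tests.dll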

The second one was a bit longer, based on the master branch:

  1. Build – done with MSBuild
  2. NUnit – running all the tests
  3. NuGet pack – preparing a NuGet package from the mixed NuSpec + csproj of the main library
  4. NuGet push – pushing the prepared package to the NuGet gallery.

As the master branch is considered production ready, in my opinion there’s nothing wrong with creating and uploading a NuGet package for each set of commits that goes through the tests. The very last concern is versioning. For now (KISS), it’s based on the master branch build number, prepended and appended with 0. This generates 0.1.0, 0.2.0, 0.3.0, etc.
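Steps 3 and 4 with this 0.[build].0 scheme come down to something like the sketch below, assuming the project name, a $buildCounter value passed in by TeamCity and an API key variable (all of them are assumptions for the example):

$version = "0.$buildCounter.0"   # e.g. 0.1.0, 0.2.0, 0.3.0, ...
& nuget pack .\src\MyLibrary\MyLibrary.csproj -Version $version -Properties Configuration=Release
& nuget push ".\MyLibrary.$version.nupkg" -ApiKey $nugetApiKey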

Evolving your branching strategy

Recently I’ve been involved in two OSS projects: the first is ripple, a part of the Fubu family; the second is my own extension to Protobuf-net. Both are hosted on GitHub, which brings a great opportunity to polish Git skills.

I started work on protobuf-linq on the master branch. It was the simplest and fastest way of getting things done. Throwing in a continuous build with TeamCity was simple: two steps, building and running tests. Then I thought about the stability of the master branch. Will it always be production ready? After a few conversations, rereading the great git book and going through nvie’s post, the idea was clear: make the master branch always production ready and add another dev branch. This separates a stable branch, whose latest version is always good to build and publish, from the development branch. The very same scheme is used, for instance, by the Event Store public repository and many others.
This simple change, evolutionary rather than revolutionary, separates the production ready branch from dev branch malfunctions (history rewrites, errors and so on). Of course you can throw in tags, feature branches and hotfix branches, but they aren’t needed from the beginning for an open source library. They can always be added as the ecosystem grows.
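The evolutionary change itself is just a couple of git commands, run once against the existing repository:

git checkout -b dev master   # create dev, starting from the current master
git push -u origin dev       # publish it and track the remote branch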

Ripple on your NuGet

Recently I’ve been involved in a project which consists of many teams collaborating to deliver one big product. The application architecture gives these teams the opportunity to work with their code at a module scope. There’s no one big Visual Studio solution; all modules are scoped in their own slns. Living in a world like this means that your CI, package management and local deployment must be fast, effective and easy to use.
Ripple is a project from the FubuMVC family. It’s an alternative NuGet client, so all the changes in behavior are made on the client side. That makes it a perfect fit for a scenario where one mixes local NuGet feeds for internal packages with the official ones. What about its advantages?
The very first difference is versioning. Ripple extracts packages into non-versioned folders. It means that your project files are no longer changed when a new version of a package arrives. It helps a lot, especially in environments where packages are published frequently.
Ripple makes a distinction between referenced packages: there are float and fixed packages. Float packages are the ones changing rapidly; all your own packages should be floating during development. Fixed packages are rock-solid versions of libraries used by your system. They will not be updated unless the force option is used.
Ripple is still being developed; the most important thing is that it is active and thriving. If you develop .NET applications with plenty of solutions which are meant to publish packages, then give Ripple a try. It’s worth it.