In a recent post I described how to ensure that your feature-to-develop GitFlow merge commits are reviewed before they are introduced into the develop branch. This preserves the quality of develop, keeping it truly deployable. How should one build the repository and provide artifacts? Which commits and which branches should be built? These questions are answered below.
Let’s start with the following observation: whichever branch points at a given commit, if a proper modern build approach is used, such as a PSake build script, the result is the same. The repository contains all the scripts needed to run the build, so the output is identical no matter which branch is selected as the source of the build (if two or more point at the same commit). After all, the same commit is the same tree, which results in the same build. This gives us a very powerful tool for ensuring even better quality of develop. One can easily set up TeamCity with a branch selector to run the same build for all features:
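A hypothetical branch specification for the VCS root might look like the following (the exact syntax depends on your TeamCity version):

```
+:refs/heads/develop
+:refs/heads/feature/*
```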
The build script creates artifacts, in my case NuGet packages, using the following versioning scheme: [major].[minor].[build_number]. The first two numbers are stored in the repository. This requires that features are not long-running (you don’t want a feature started at 1.1.1 to land only after 2.1.2). The build number is the same for all the features. For now, I’m not considering whether the artifacts should be published to some gallery or not.
The question is, should we build the develop commits? For now, considering only the feature branches, the answer is no! All the commits in develop are merge commits which have already been reviewed and built in the feature branches! That’s why creating the merge commit in the feature branch is so powerful. You get the code reviewed, built and tested before it goes to develop! Again, postponing the finishing of a feature until its value has been acknowledged benefits the quality of the develop branch.
Many companies that use Git as their code repository use git-flow as the branching model for their projects. A question that comes up is where and when the code review should take place. Should it be before:
git flow feature finish MYFEATURE
If yes, then the reviewer looks through some version of the code, but the person responsible for closing the feature still creates a new merge commit afterwards, which can change a lot. On the other hand, if the reviewer creates the merge commit, he/she may not know all the aspects needed for a successful merge. There are a few pages trying to answer this, but I haven’t found anything satisfying. Read on for my proposal of a clean and proper code review process with git-flow.
The problem with git-flow is that finishing a feature is an atomic action. In one action the author does plenty of things:
- Author: pushes the commit to the remote
- Reviewer: reviews the code
- A: creates a merge commit (new commit created!)
- A: moves the develop cursor to the new commit
- A: deletes the feature branch
- A: pushes to the remote
As always, splitting one action into multiple steps and then grouping them properly might help. Consider the following flow involving the author and the reviewer (a command-level sketch follows the list).
- Author: creates a merge commit
- A: moves the feature/a to the merge commit
- A: pushes the merge commit to the remote
- Reviewer: reviews the merge commit
- R: if the review is successful, fast-forwards develop to the merge commit (some automation can be introduced).
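A minimal command-level sketch of this flow, assuming a feature branch named feature/a and an origin remote, could look like this:

```
# Author: merge develop into feature/a, creating the merge commit that will be reviewed
git checkout feature/a
git merge develop          # creates the merge commit (assuming develop has moved on)
git push origin feature/a

# Reviewer: after a successful review, fast-forward develop to the reviewed merge commit
git checkout develop
git merge --ff-only feature/a
git push origin develop
```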
With this flow, the reviewer always reviews the exact commit which will land in develop. Additionally, the author of the feature branch becomes aware of merge problems before he/she pushes the work out for review. Effectively, a feature can be completed and merged and simply wait for acceptance, which then becomes a simple “go/no go” decision without merge conflicts to consider later on.
This isn’t git-flow anymore, but it is still a solid flow that lowers the author’s context switching. Once he/she finishes, it’s a real end, not the start of an incoming merge.
The 2015 edition of CraftConf ended a few days ago. One of the talks I was eager to listen to was Mary Poppendieck’s (Poppendieck.LLC) “The New New Software Development Game: Containers, …”. If you haven’t seen it yet, please do. It’s an hour well spent.
One of the topics Mary mentions is the pact mock. Pact is an intelligent recorder and player which lets you turn your integration tests into tests that look like integration tests but use prerecorded request-response pairs. The funny thing is that this touches the very same topic I was presenting and discussing in my current project.
I’m dealing with a legacy project right now. It includes VB.NET as well, so yes, this is real legacy ;) There are some web services on board. Yes, I mean good old-fashioned SOAP services, not the brand new, fancy REST or HATEOAS. How would you test this interaction? Would you put in a layer or mock the services? My proposal was different from mocking the services. I thought that if I set up a local server, just for the sake of running tests, and recorded responses upfront, I could have pretty high-level, yet still not integration-level, tests which could help me seal functionality during the transition to a new architecture. It’s a bit similar to the pact mock, but still, do I really need a library for something like this? A standard Fiddler + HttpListener combination can work just fine. Yes, validating the dependency against the recorded dialogue is hard to impossible, but what one gets is an easy way of testing the app without placing mocks all around the tests.
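Below is a minimal sketch of that idea: a local HttpListener replaying prerecorded responses keyed by request path. All the names are illustrative, and in a real setup the responses would be loaded from files recorded earlier (e.g. with Fiddler):

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Text;

// A local stub server replaying prerecorded responses; names and layout are illustrative.
class RecordedServiceStub
{
    private readonly HttpListener _listener = new HttpListener();
    private readonly Dictionary<string, string> _responses;

    public RecordedServiceStub(string prefix, Dictionary<string, string> responses)
    {
        _responses = responses;
        _listener.Prefixes.Add(prefix);   // e.g. "http://localhost:8123/"
    }

    public void Start()
    {
        _listener.Start();
        _listener.BeginGetContext(OnRequest, null);
    }

    public void Stop()
    {
        _listener.Stop();
    }

    private void OnRequest(IAsyncResult result)
    {
        if (!_listener.IsListening)
            return;

        var context = _listener.EndGetContext(result);
        _listener.BeginGetContext(OnRequest, null);   // keep listening for the next request

        string body;
        if (_responses.TryGetValue(context.Request.Url.AbsolutePath, out body))
        {
            var bytes = Encoding.UTF8.GetBytes(body);
            context.Response.ContentType = "text/xml";   // SOAP responses are XML
            context.Response.OutputStream.Write(bytes, 0, bytes.Length);
        }
        else
        {
            context.Response.StatusCode = 404;   // no recording for this path
        }
        context.Response.Close();
    }
}
```

The code under test then only needs its service URL pointed at the stub’s prefix; no mocks are injected into the tests themselves.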
Even if you follow the pact mock path to the end, it’s worth considering that instead of dealing with mock setup one can easily record the conversation (just run your app) and reuse it later on. It may not be a unit test, but it can be the best test one can write given the constraint of communicating with other services.
What’s the size of a nullable structure in C#? Does it depend on anything? How are the fields laid out in memory? Let’s consider the following cases: bool?, int?, long?, Guid?. One way to get a structure’s size is to use Marshal.SizeOf(Type). Unfortunately this method checks whether the passed type is generic. Every nullable is generic, hence we cannot use it. If you take a look at the implementation of this method, there is a call to a private method of the Marshal class named SizeOfHelper. This method does not perform the check and can easily be used to calculate the size of the struct.
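A sketch of that trick, calling SizeOfHelper via reflection; the (Type, bool) parameter list is an assumption based on the .NET Framework sources and may differ between framework versions:

```csharp
using System;
using System.Reflection;
using System.Runtime.InteropServices;

class NullableSizes
{
    // Calls the internal Marshal.SizeOfHelper via reflection, bypassing the
    // generic-type check performed by the public Marshal.SizeOf(Type).
    // The (Type, bool) parameter list is assumed from the .NET Framework sources.
    static int SizeOf(Type type)
    {
        var sizeOfHelper = typeof(Marshal).GetMethod(
            "SizeOfHelper", BindingFlags.NonPublic | BindingFlags.Static);

        return (int)sizeOfHelper.Invoke(null, new object[] { type, true });
    }

    static void Main()
    {
        Console.WriteLine("bool?: {0}", SizeOf(typeof(bool?)));
        Console.WriteLine("int?:  {0}", SizeOf(typeof(int?)));
        Console.WriteLine("long?: {0}", SizeOf(typeof(long?)));
        Console.WriteLine("Guid?: {0}", SizeOf(typeof(Guid?)));
    }
}
```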
Nullable<T> consists of two fields. The first is hasValue, which answers the question whether the nullable has a value assigned. The other is value. If a nullable has no value assigned, the value field holds default(T). How are these members aligned in memory, and does it depend on anything? What is the offset of each of these fields from the start of the structure?
To answer the two questions above (size and alignment), let’s go through the results for each of these types.
The first two, bool? and int?, are easy to work out. A marshaled bool is equal in size to an int, taking 4 bytes to store, so the offset of the value field is 4.
What about long? Why does it take 16 bytes, not 12? Why does the value start at offset 8, not 4? That’s because of struct alignment, which the CLR performs to pack the given struct nicely. In other words, the struct is aligned to the length of the 64-bit CPU registers.
The final example, Guid?, seems to break the rule established by long?. Or maybe not? The struct size is a multiple of 8 bytes, so it’s totally fine for the CLR to use 24 bytes, as the struct is already aligned.
If you want to do some checks on your own, you may use the gist I created.
Currently, I’m working with some pieces of legacy code. There are good old-fashioned DAL and BLL layers which reside in separate projects. Additionally, there is a common project with all the interfaces one could need elsewhere. The whole solution is deployed as one solid piece, with none of the projects used anywhere else. What is your opinion about this structure?
To my mind, splitting one solid piece into non-functional projects is not the best option available. Another approach which fits this scenario is feature orientation, with one project in the solution to rule them all. The old rule, “the deeper you get in the namespace, the more internal you become”, is the way to approach feature cross-referencing. So how could one design such a project (a sketch follows the listing below):
- Admin.cs (entity)
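A hypothetical sketch of such a layout, with namespace depth standing in for visibility (all names below are illustrative):

```csharp
// A single-project, feature-oriented layout:
//
//   MyApp/
//     Admin/
//       Admin.cs           (entity)
//       AdminService.cs    (the feature's public entry point)
//       Details/
//         AdminRepository.cs
//
// The convention: the deeper the namespace, the more internal the type.

namespace MyApp.Admin
{
    public class Admin { }            // the entity other features may reference

    public class AdminService { }     // referenced by other features when needed
}

namespace MyApp.Admin.Details
{
    internal class AdminRepository { }   // implementation detail, kept deep and internal
}
```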
I see the following advantages:
- If any of the features requires a reference to another, it’s easy to add one.
- There’s no need to think about where to put an interface if it is going to be used in another project of this solution.
- You don’t onionate all the things. Instead, there are top-to-bottom pillars which one could later transform into services if needed.
To sum up, you can deal with features oriented toward the business or with layers oriented toward programming concerns. What would you choose?
It’s been a few days since the last Warsaw .NET User Group meeting. The main presentation was given by me & Tomasz Frydrychewicz. The title was “Event Driven Architecture in practice”. Given the high number of answers to the poll and the very positive overall response, I may call it one of my best presentations ever. Anyway, I’ve been asked many questions over these days, the main one being what/who should I read/watch to immerse myself in this event-based approach. The list below tries to answer it, grouped by author:
- Martin Fowler
- http://martinfowler.com/eaaDev/EventSourcing.html – the top Google search result. Martin provides a good intro, mixing in a bit of the concept of storing commands and events. Anyway, this is a must-read if you’re starting with this topic
- http://martinfowler.com/eaaDev/RetroactiveEvent.html – an article to become familiar with after spending some time with event modelling. Some domains are less prone to special cases requiring this kind of event handling; others may be very fragile and one should start with this
- Lokad, CQRS, Rinat Abdullin
- http://lokad.github.io/lokad-cqrs/ – a must-read if you want to go the event way. Plenty of materials and tooling. To me some parts are a bit frameworkish, but still, it’s one of the best implementations I’ve seen. Understanding it might be a game changer for you. Additionally, it provides an Azure storage implementation.
- Rinat Abdullin & Kerry Street
- Being the worst – how do you become a master? Immerse yourself in a new field as the worst. That’s how winning is done! An amazing journey through learning about DDD, Event Sourcing and many other paradigms.
- Microsoft Patterns and Practices:
- CQRS Journey – a free book about a group of developers using an event driven approach with DDD in mind to build a new system. I love the personas they use to drive dialogues between different opinions/minds/approaches. It’s not a guide; I’d rather consider it a diary of all the different cases you can meet when implementing solutions using these approaches.
- Event Store
- The whole Event Store database is an actual event store for storing events from event sourced systems. I encourage you to spend a week or more reading its code. It’s a good codebase.
- The event sourcing documentation is a short introduction to the ES world. After all these years, it still uses Word-generated pictures :) but this doesn’t diminish its value.
- NEventStore is an open source library for storing and querying your events. It’s opinionated; for instance, it stores all the events as one commit object. I’ve read it carefully, although I still don’t like its approach. One should read it though; it’s always worth knowing what’s already provided.
It’s a bit of a long list, but nobody said you can learn a new paradigm in one weekend. So read, learn and apply it successfully :)
It’s quite common to use GUIDs as unique identifiers across systems, as they are… unique :) The GUID generation algorithm has a very important property: it’s local. Your code doesn’t have to contact any service or database. It just works. There’s one problem with GUIDs: they are pretty lengthy, taking 16 bytes to store.
Consider the following example: you want to provide a descriptor for a finite set of distinguishable items, like events in event sourcing. You can use:
- an event type – a string description of it. That’s the idea behind event storage within EventStore.
- GUIDs, generated on developers’ machines. They will be unique, but still lengthy when it comes to storing them
- integers, but you will need to divide the integer ranges between modules and be very careful not to step into another module’s range
There’s one additional thing you can do: you can easily combine the second and third options. Use GUIDs in the contracts, providing uniqueness, but when it comes to storing, provide a persistent translation table, whose existence is ensured during start up, mapping GUIDs to ints (4 bytes) or even shorts (2 bytes). You can easily create this kind of table locally, one for each module/service, covering all the contracts used by that module. This lowers the storage cost and still lets you use the nice properties of GUIDs.
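A minimal sketch of such a translation table, with illustrative names and an in-memory map standing in for the persistent store:

```csharp
using System;
using System.Collections.Generic;

// A per-module translation table mapping contract GUIDs to short storage ids.
// Names are illustrative; a real implementation would persist the map and
// verify its existence and consistency during start up.
public class ContractIdRegistry
{
    private readonly Dictionary<Guid, ushort> _idsByGuid = new Dictionary<Guid, ushort>();
    private readonly Dictionary<ushort, Guid> _guidsById = new Dictionary<ushort, Guid>();

    // Registers a contract GUID, assigning the next free 2-byte id.
    public ushort Register(Guid contractId)
    {
        ushort id;
        if (_idsByGuid.TryGetValue(contractId, out id))
            return id;

        id = (ushort)(_idsByGuid.Count + 1);
        _idsByGuid.Add(contractId, id);
        _guidsById.Add(id, contractId);
        return id;
    }

    // Used when writing: store 2 bytes instead of 16.
    public ushort GetStorageId(Guid contractId)
    {
        return _idsByGuid[contractId];
    }

    // Used when reading: translate the stored id back to the contract GUID.
    public Guid GetContractId(ushort storageId)
    {
        return _guidsById[storageId];
    }
}
```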
Simple, easy and powerful.