Today I read about the next project from Ayende, whose manifesto you can find here. The project is meant to deliver managed, memory-mapped-file-based storage for RavenDB. What caught my attention was a sentence asking for contributors.
I thought that it would be a great idea to write a storage engine, or to be part of a team creating one, so I asked about licensing. The answer I was given was that it will have a RavenDB-compatible license, which means nothing less than that the product will be dual-licensed: open for OSS projects and closed for commercial ones. There’s an exception of course: Raven itself.
As Ayende stated in the comments: “Scooletz, My code, my rules, pretty much. You are free to do the same and publish it under any license you want.” It’s true, but just as he has the right to make that kind of choice, I’m allowed to dislike it. The most interesting part is asking for contributors to a project which will be non-free for non-OSS solutions.
Looking through various OSS projects, Event Store looks much better. It’s BSD. One can contribute to it, or take it and turn it into anything they can think of. I do prefer that style.
It’s common that some of your NUnit tests should log their execution time. One way of providing such behavior is custom SetUp and TearDown methods, either in your fixture or in a base test fixture. I find that disturbing, as a simple SetUp can get bloated with plenty of concerns.
Another way is to use the not-so-well-known ITestAction interface. It lets you execute code before and after a test in an AOP way. Of course one can argue that a simple method accepting an action whose execution will be measured is a better option, but I prefer coding in a declarative way, and a simple attribute visible in the signature of your method seems much better suited for this kind of behavior.
Take a look at the gist below and use it in your tests!
The implementation of the HTTP cookie is leaky. Better get used to it. You can read the RFCs about it, but it’s better to read one more meaningful question posted on the Security StackExchange. If your site is hosted as a subdomain alongside other apps, and a malicious user can access any other of them, a cookie scoped to the top domain can be set. This means that the cookie will be sent with every request to the top domain as well as to yours (the domain-match rule in the RFCs). This can bring a lot of trouble when an attacker sets a cookie with a name important for your app, like the session cookie. According to the specification, both values will be sent under the same name, with no additional information about the basis on which a given value was sent.
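To make the ambiguity concrete, here is a sketch of what a server receives in that situation. `parseCookieHeader` is a hypothetical helper written for this post, and the cookie values are made up:

```javascript
// Sketch: what a server sees when an attacker on a sibling subdomain
// has planted a cookie with the same name at the top domain.
function parseCookieHeader(header) {
  // Split the Cookie request header into name/value pairs,
  // preserving duplicates in the order the browser sent them.
  return header.split(';').map(function (pair) {
    var idx = pair.indexOf('=');
    return {
      name: pair.slice(0, idx).trim(),
      value: pair.slice(idx + 1).trim()
    };
  });
}

// The browser domain-matched both cookies, so both arrive under the
// same name, and nothing says which domain set which value.
var pairs = parseCookieHeader('session=legit-value; session=attacker-value');
```

The server gets two `session` entries and has no standard way to tell the legitimate one from the planted one.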
HTML5 to the rescue
If you design a new Single Page Application, you can be saved. Imagine that the POST sending the login data (user & password) returns, in its JSON result, the value previously stored in a cookie. One can easily save it in localStorage and later add it to the headers of requests needing authentication. This simple change brings another advantage. Requests not needing authentication, like GETs (as no one sends fragile data with a verb that is vulnerable to JSON hijacking), can be sent to the same domain with no header overhead. A standard solution to stop sending cookies with GETs is shipping all your static files to another domain. That isn’t needed anymore.
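A minimal sketch of that flow, under a few assumptions: the login response is shaped like `{ token: "..." }`, and the `X-Auth-Token` header name is made up; use whatever your server actually returns and expects. A tiny stub stands in for `window.localStorage` so the sketch runs outside a browser too:

```javascript
// Fall back to an in-memory stub when localStorage is unavailable.
var storage = (typeof localStorage !== 'undefined') ? localStorage : (function () {
  var data = {};
  return {
    getItem: function (key) { return (key in data) ? data[key] : null; },
    setItem: function (key, value) { data[key] = String(value); }
  };
}());

// On a successful login POST, keep the value that used to live in a cookie.
function onLoginResponse(json) {
  storage.setItem('authToken', json.token);
}

// Attach the token only to requests that actually need authentication;
// plain GETs go out with no extra headers at all.
function authHeaders() {
  var token = storage.getItem('authToken');
  return token ? { 'X-Auth-Token': token } : {};
}
```

Unlike a cookie, the token is never attached implicitly, so a value planted by a sibling subdomain can’t shadow it.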
Recently I’ve been doing a few things with HTTP cookies. I went through the specification, and now I know path-match and domain-match, hell yeah! One of the results of this trip was a nice JS module pattern found in a jQuery plugin for cookies. Let’s take a look!
In lines one to nine, a module wrapper is created. If AMD is found, the passed factory function is used in define. Otherwise the standard module pattern is applied, calling the function with the dependency passed in directly. In lines 9-11 the real factory function is provided (on its own, that would be a whole standard module). The function defined there is passed as a factory to the wrapper defined above.
You can ship this code with AMD or without, and it will work in both scenarios. What a pleasant way of providing a library aware of its dependency-resolving environment!
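A minimal reconstruction of the pattern, assuming nothing beyond what the post describes. The stub object and the `cookieModule` global are stand-ins for jQuery and the plugin’s real export, so the sketch is self-contained:

```javascript
// The wrapper: pick AMD's define when available, otherwise fall
// back to the classic module pattern with manual dependency passing.
(function (factory) {
  if (typeof define === 'function' && define.amd) {
    // AMD environment: let the loader resolve the dependency.
    define(['jquery'], factory);
  } else {
    // No AMD: call the factory directly, wiring the dependency by hand.
    // A stub object stands in for jQuery to keep the sketch runnable.
    var dep = (typeof jQuery !== 'undefined') ? jQuery : {};
    globalThis.cookieModule = factory(dep);
  }
}(function ($) {
  // The real factory function: the whole module body lives here
  // and receives its dependency as a parameter.
  return function cookie(name, value) {
    return name + '=' + encodeURIComponent(value);
  };
}));
```

The factory stays identical in both branches; only the way it gets invoked and handed its dependency differs.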
In my recent post I dealt with a basic git branching setup, protecting an open source library author from the headache of a not-so-production-ready master branch. Having the two main branches set up (master & dev), one can think about introducing continuous delivery. How about automatically publishing any set of commits that gets into the production branch? TeamCity with its git support makes it trivial.
The very first build configuration was made for the dev branch. It consists of only two steps:
- Build – done with MSBuild
- NUnit – running all the tests
The second was a bit longer, based on the master branch:
- Build – done with MSBuild
- NUnit – running all the tests
- NuGet pack – preparing a NuGet package from the mixed NuSpec + csproj of the main library
- NuGet publish – pushing the prepared package to the NuGet gallery.
As the master branch is considered production ready, in my opinion there’s nothing wrong with creating and uploading a NuGet package for each set of commits which passes the tests. The very last concern is versioning. For now (KISS), it’s based on the master branch build number, prepended and appended with 0. This generates 0.1.0, 0.2.0, 0.3.0, etc.
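Spelled out, the scheme is a one-liner (the function name is made up for illustration):

```javascript
// KISS versioning: wrap the TeamCity build counter with a
// leading "0." and a trailing ".0".
function packageVersion(buildCounter) {
  return '0.' + buildCounter + '.0';
}
```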
Recently I’ve been involved in two OSS projects. The first is ripple, a part of the Fubu family; the second is my own extension to protobuf-net. Both are hosted on GitHub, which brings a great opportunity to polish Git skills.
I started work on protobuf-linq on the master branch. It was the simplest and fastest way of getting things done. Throwing in a continuous build with TeamCity was simple: two steps, building and running the tests. Then I thought about the stability of the master branch. Will it always be production ready? After a few conversations, rereading the great git book and going through nvie’s post, the idea was clear. Make the master branch always production ready and add another branch, dev. This separates a stable branch, whose latest version is always good to build and publish, from the development branch. The very same scheme is used, for instance, by the Event Store public repository and many others.
This simple change, evolutionary rather than revolutionary, separates the production-ready branch from dev branch malfunctions (history rewrites, errors and so on). Of course you can throw in tags, feature branches and hot-fix branches, but none of that is needed from the beginning for an open source library. It can always be added as the ecosystem grows.
Your software problems are not unique.
You don’t have time and money to maintain the whole toolkit you need.
You may not be the smartest in the domain of the problem you’re trying to solve.
These are simple facts and you should accept them. What can you do about it? You can open source some parts of the solutions you deliver! Don’t think about a bank’s transactional system or some high-security fragments. Consider closed pieces, parts which deliver one thing (libraries). Put them on GitHub, or CodePlex, or wherever. What you get is a chance of having your issues fixed by other people. Another advantage of open sourcing some parts is a paradigm shift. Now you have to combine and compose parts rather than getting your hands dirty in a ball of mud. What about your business, what about the competition? At most they get a few of your tools, but the whole knowledge of how to assemble them stays in your company, and that, in my opinion, is the future of software development.