Pact mock

The 2015 edition of CraftConf ended a few days ago. One of the talks I was eager to listen to was Mary Poppendieck (Poppendieck.LLC) – The New New Software Development Game: Containers, … If you haven’t seen it yet, please do. It’s an hour well spent.

One of the topics Mary mentions is the pact mock. The pact is an intelligent recorder and player which lets you turn your integration tests into tests that look like integration tests but run against prerecorded request-response pairs. The funny thing is that this touches the very same topic I’ve been presenting and discussing in my current project.

I’m dealing with a legacy project right now. It includes VB.NET as well, so yes, this is a real legacy ;) There are some web services on board. Yes, I mean good old-fashioned SOAP services, not the brand new, fancy REST or HATEOAS. How would you test this interaction? Would you add a layer or mock the services? My proposal was different from mocking the service. I thought that if I set up a local server, just for the sake of running tests, and recorded the responses upfront, I could have pretty high-level, but still not integration-level, tests which could help me seal functionality during the transition to a new architecture. It’s a bit similar to the pact mock, but still, do I really need a library for something like this? A standard Fiddler + HttpListener setup can work just fine as well. Yes, validating the dependency against the recorded dialogue is hard to impossible, but still, what one gets is an easy way of testing the app without placing mocks all around the tests.
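
As an illustration, below is a minimal sketch of such a replay server built on nothing more than HttpListener. The class name, port, paths and response dictionary are placeholders for whatever gets recorded with Fiddler; this is a sketch of the idea, not the actual setup from the project.

using System;
using System.Collections.Generic;
using System.Net;
using System.Text;

// Replays canned SOAP responses keyed by request path; everything here
// (port, paths, payloads) stands in for whatever was recorded with Fiddler.
public sealed class RecordedResponseServer : IDisposable
{
    private readonly HttpListener _listener = new HttpListener();
    private readonly IDictionary<string, string> _responses;

    public RecordedResponseServer(string prefix, IDictionary<string, string> responses)
    {
        _responses = responses;
        _listener.Prefixes.Add(prefix); // e.g. "http://localhost:8123/"
    }

    public void Start()
    {
        _listener.Start();
        _listener.BeginGetContext(OnRequest, null);
    }

    private void OnRequest(IAsyncResult ar)
    {
        if (!_listener.IsListening)
            return;

        var context = _listener.EndGetContext(ar);
        _listener.BeginGetContext(OnRequest, null); // keep accepting further requests

        string body;
        if (_responses.TryGetValue(context.Request.Url.AbsolutePath, out body))
        {
            var bytes = Encoding.UTF8.GetBytes(body);
            context.Response.ContentType = "text/xml; charset=utf-8";
            context.Response.OutputStream.Write(bytes, 0, bytes.Length);
        }
        else
        {
            context.Response.StatusCode = 404; // nothing recorded for this path
        }
        context.Response.Close();
    }

    public void Dispose()
    {
        _listener.Close();
    }
}

In a test, you would load the prerecorded XML from disk, start the server, and point the service client’s endpoint at http://localhost:8123/ instead of the real system.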

Even if you follow the pact mock path to the end, it’s worth considering that instead of dealing with mock setup, one can easily get the conversation recorded (just run your app) and reuse it later on. It may not be a unit test, but it can be the best test one can write given the constraint of communicating with other services.

Size of Nullable

What’s the size of a nullable structure in C#? Does it depend on anything? How are its fields laid out in memory? Let’s consider the following cases: bool?, int?, long?, Guid?. One way to get a structure’s size is to use Marshal.SizeOf(Type). Unfortunately this method checks whether the passed type is generic, and every nullable is generic, hence we cannot use it. If you take a look at the implementation of this method, there is a call to a private method of the Marshal class named SizeOfHelper. This method does not perform the check and can easily be used to calculate the size of the struct.
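
A minimal sketch of calling it via reflection could look like this. It assumes the SizeOfHelper(Type, bool) signature found in the .NET Framework reference source; other runtimes or versions may differ.

using System;
using System.Reflection;
using System.Runtime.InteropServices;

static class NullableSizes
{
    // Looks up the internal Marshal.SizeOfHelper(Type, bool) and invokes it.
    static int SizeOf(Type type)
    {
        var sizeOfHelper = typeof(Marshal).GetMethod(
            "SizeOfHelper",
            BindingFlags.NonPublic | BindingFlags.Static,
            null,
            new[] { typeof(Type), typeof(bool) },
            null);

        return (int)sizeOfHelper.Invoke(null, new object[] { type, false });
    }

    static void Main()
    {
        Console.WriteLine(SizeOf(typeof(bool?)));
        Console.WriteLine(SizeOf(typeof(int?)));
        Console.WriteLine(SizeOf(typeof(long?)));
        Console.WriteLine(SizeOf(typeof(Guid?)));
    }
}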

A nullable consists of two fields. The first is hasValue, which answers the question whether the nullable has a value assigned. The other is the value itself. If a nullable has no value assigned, the value field holds default(T). How are these members aligned in memory, and does it depend on anything? What is the offset of each of these fields from the start of the structure?
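
For reference, the field layout of Nullable<T>, simplified from the .NET reference source, looks roughly like this:

// Simplified: only the fields relevant to the layout discussion are shown.
public struct Nullable<T> where T : struct
{
    private bool hasValue;  // false until a value is assigned
    internal T value;       // default(T) when hasValue is false
}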

To answer the two questions above (size and alignment) please take a look at the following table:

Type     Size    hasValue offset    value offset
bool?    8       0                  4
int?     8       0                  4
long?    16      0                  8
Guid?    20      0                  4

The first two, bool? and int?, are easy to explain. Here a bool takes 4 bytes to store, the same as an int, so the offset of the value is 4.

What about long? Why does it take 16 bytes, not 12? Why does the value start at offset 8, not 4? That’s because of struct alignment, which the CLR performs to pack the struct nicely. In other words, the struct is aligned to the width of the 64-bit CPU registers.

The final example, with Guid, seems to break the rule set by long. Or maybe not? Guid is itself composed of smaller fields (an int, shorts and bytes), so 4-byte alignment is enough: the value can start at offset 4 and the whole struct takes 20 bytes. Rounded up, that is still a multiple of 8 bytes, so it’s totally OK for the CLR to use 24 bytes wherever it needs the struct aligned.

If you want to do some checks on your own, you may use the gist I created.

One deployment, one assembly, one project

Currently, I’m working with some pieces of legacy code. There are good old-fashioned DAL and BLL layers which reside in separate projects. Additionally, there is a common project with all the interfaces one could need elsewhere. The whole solution is deployed as one solid piece, with none of the projects used anywhere else. What is your opinion about this structure?

To my mind, splitting one solid piece into non-functional projects is not the best option you can get. Another approach which fits this scenario is feature orientation, with one project in the solution to rule them all. The old rule, the deeper you get in the namespace, the more internal you become, is the way to approach feature cross-referencing. So how could one design such a project? Here’s a possible layout (a small code sketch of the visibility convention follows it):

  • /Project
    • /Admin
      • /Impl
        • PermissionService.cs
        • InternalUtils.cs
      • Admin.cs (entity)
      • IPermissionService.cs
    • /Notifications
      • /Email
        • EmailPublisher.cs
      • /Sms
        • SmsPublisher.cs
      • IPublisher.cs
    • /Registration
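
In code, the convention could look roughly like this. The types come from the layout above, while the HasPermission method is purely illustrative: the public contract sits at the feature root and the implementation one namespace deeper, under Impl, marked internal.

namespace Project.Admin
{
    // The public contract of the Admin feature, visible to other features.
    public interface IPermissionService
    {
        bool HasPermission(string userName, string permission);
    }
}

namespace Project.Admin.Impl
{
    // The implementation lives one namespace deeper and is kept internal;
    // within a single assembly this is mostly a convention, but it keeps the
    // feature's public surface down to the interface and the entity.
    internal class PermissionService : IPermissionService
    {
        public bool HasPermission(string userName, string permission)
        {
            // actual permission logic would go here
            return false;
        }
    }
}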

I see the following advantages:

  • If any of the features requires a reference to another, it’s easy to add one.
  • There’s no need to think about where to put an interface if it is going to be used elsewhere in the solution.
  • You don’t ‘onionate’ all the things. Instead, there are top-to-bottom pillars which one could later transform into services if needed.

To sum up, you could orient your code toward business features or toward programming layers. What would you choose?