Protobuf-net: inheritance of messages

The last post was an introduction to a simple project called Protopedia, located here. The project aims to provide, in a simple manner (probably one test per case), solutions for complex scenarios like versioning, derivation of messages, etc. As versioning was described in the previous entry, it's the right time to deal with derivation.

It's a well-known fact that one should favor composition over inheritance. Dealing with derivation trees with plenty of nodes can bring any programmer to his/her knees. How about messaging? Does this rule apply in that area too? It's common for messages to share a common denominator: a base type containing fields common to all messages (headers, correlation identifiers and so on), especially if they're meant to be sent/saved as a stream of messages of the base type (example: Event Sourcing with events of a given aggregate). Using a set of messages with a distilled root greatly simplifies the concerns mentioned earlier. Consider the following scenario: serialization of a collection of A messages (or its derivatives) with the following structure:

Message inheritance tree for example

How would Protobuf-net serialize such a collection? First, take a look at the folder from Protopedia. You can notice that all the classes, A, B and C, have been mapped as distinct Protobuf contracts. It's worth noticing the ProtoInclude attributes, whose tag values point at the types located one level deeper in the derivation tree. The second important thing is that the tags of the derived types do not collide with the tags of the class fields. In the example, a constant value of 10 is used to leave room for future versions of the root class A. As you can see in the derivation test, the child classes of a given class are serialized as fields, with tags equal to the tag passed in the ProtoInclude attribute. To see how Protobuf-net composes the fields of inherited messages, take a look at the following message contracts. There's no magic and the whole idea is rather straightforward: serialize derivatives as fields, turning inheritance into composition. This approach of Protobuf-net will be sufficient and effective in all of your message-inheritance serialization efforts. Nice serializing!
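A sketch of contracts in the spirit described above (the class members and field tags here are illustrative assumptions; the Protopedia sources are the authoritative version):

```csharp
using ProtoBuf;

[ProtoContract]
// Tag 10 for the derived type leaves tags 2..9 free for
// future fields of the root class A.
[ProtoInclude(10, typeof(B))]
public class A
{
    [ProtoMember(1)]
    public int Id { get; set; }
}

[ProtoContract]
[ProtoInclude(10, typeof(C))]
public class B : A
{
    [ProtoMember(1)]
    public string Name { get; set; }
}

[ProtoContract]
public class C : B
{
    [ProtoMember(1)]
    public double Value { get; set; }
}
```

On the wire, a C instance is encoded as an A message whose field 10 holds a B submessage, whose field 10 in turn holds the C fields: inheritance flattened into composition.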

Protobuf-net: message versioning

Welcome again. Recently I've been involved in a project where the Protobuf-net library is used for message serialization. Protobuf-net was developed by Marc Gravell of Stack Overflow. Besides the standard Google Protocol Buffers concepts, there are plenty of .NET-specific options, like handling derivation, using standard Data Contracts, etc. This is the first post of a few which will deal with aspects of Protobuf-net that might be nontrivial for a beginner. Alongside the posts, a project called Protopedia was created to show cases of Protobuf usage. All the examples below are stored in this repository. All the posts require the reader to be accustomed to the official Google Protocol Buffers documentation.

The versioning of interfaces is a standard computer-science problem: how do you deal with something that was publicly released in one shape and, after internal changes to the system, must change its public shape as well?
Consider the following scenario of having two versions of one message, located here. As you can see, there are a few changes, like the change of a name and the addition and removal of some fields. Imagine that a service A runs on the old version of the message, while a service B uses the new version. Is it possible to send this message from A to B, and then back to A, and keep all the data stored in the message, without losing anything appended by the B service? With Protobuf-net's Extensible class used as a base class, it is. The only things to remember are never to reuse ProtoMemberAttribute tag values of fields removed in the past, and always to add new fields with new tag values.
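As a sketch of what the two versions of such a contract might look like (the Order name, fields and tags are assumptions for illustration, not copied from the repository):

```csharp
using ProtoBuf;

// Version 1 of a hypothetical message.
[ProtoContract]
public class OrderV1 : Extensible
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string CustomerName { get; set; }
}

// Version 2: the field with tag 2 was removed, so tag 2 must
// never be reused; the new field gets a fresh tag instead.
[ProtoContract]
public class OrderV2 : Extensible
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(3)] public string DeliveryAddress { get; set; }
}
```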

How does it work?
When Protobuf-net deserializes a message, all the data with tags found in the message contract is deserialized into the specific fields; the rest of the data is held in a private 'storage' of the Extensible class. During serialization, these additional fields are appended to the binary form, allowing another message consumer to retrieve the fields according to its own version of the message.
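The round trip between the two services can be sketched like this (a self-contained example with hypothetical contracts; Serializer.ChangeType serializes one contract and deserializes the bytes as another, which simulates sending the message across the wire):

```csharp
using ProtoBuf;

[ProtoContract]
public class OrderV1 : Extensible
{
    [ProtoMember(1)] public int Id { get; set; }
}

[ProtoContract]
public class OrderV2 : Extensible
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(3)] public string DeliveryAddress { get; set; }
}

public static class RoundTrip
{
    public static void Main()
    {
        // Service B (new version) fills in a field unknown to A.
        var atB = new OrderV2 { Id = 1, DeliveryAddress = "Somewhere 1" };

        // A deserializes the message with its old contract; tag 3 is
        // unknown to OrderV1, so it lands in Extensible's storage.
        OrderV1 atA = Serializer.ChangeType<OrderV2, OrderV1>(atB);

        // When A sends the message back, the stored field is written
        // out again, so B sees its DeliveryAddress intact.
        OrderV2 backAtB = Serializer.ChangeType<OrderV1, OrderV2>(atA);
    }
}
```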

Overvalidated design

Imagine that you're asked to allow adding additional validation to properties of some models in an ASP.NET MVC application. The set of property types is given and contains the following: string, int, DateTime, and a reference to a glossary entity. The business (why?) scenario is irrelevant and will not be covered here.
There are multiple ways to make this play nicely with MVC, among them:

  • a custom ModelValidatorProvider implementation can be provided, which takes the validation information saved in some storage and applies it to the models
  • an extension to the TypeDescriptor facilities, to add validation attributes to the specified properties

But how should this information be stored, given a meta description (types and their properties already mapped to entities in a database)? The first take was to provide an abstract class, Validator, and try to subclass it with a fully descriptive design allowing any type of parameter, with any kind of operator, to be saved. You can imagine how many enums and loosely typed object properties (not a strongly typed API) it created.

The "what for?" question arose. Given a fixed set of types and the ways they can be validated, why not provide validators tightly coupled to those types? If the set of validated types is frozen, the switches can easily be replaced with visitors (a very stable hierarchy), which can easily transform the given data into something that MVC can use.
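A minimal sketch of that visitor idea, assuming a frozen set of validator types tied to the validated property types (all names here are hypothetical):

```csharp
using System;

// One validator type per validated property type; the hierarchy
// is stable, so the visitor interface rarely changes.
public interface IValidatorVisitor
{
    void Visit(StringLengthValidator validator);
    void Visit(IntRangeValidator validator);
    void Visit(DateRangeValidator validator);
}

public abstract class Validator
{
    public abstract void Accept(IValidatorVisitor visitor);
}

public class StringLengthValidator : Validator
{
    public int MaxLength { get; set; }
    public override void Accept(IValidatorVisitor visitor) => visitor.Visit(this);
}

public class IntRangeValidator : Validator
{
    public int Min { get; set; }
    public int Max { get; set; }
    public override void Accept(IValidatorVisitor visitor) => visitor.Visit(this);
}

public class DateRangeValidator : Validator
{
    public DateTime From { get; set; }
    public DateTime To { get; set; }
    public override void Accept(IValidatorVisitor visitor) => visitor.Visit(this);
}
```

One concrete visitor could transform these validators into data-annotation attributes for the TypeDescriptor; another could produce ModelValidator instances, without a single switch statement over validator kinds.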

Validators data new design

Having the information about validators correlated with types that are unlikely to change allows for easier editing and storage (no more operators, no more object-typed properties). The transformed values can easily be applied via a ModelValidator or added as attributes to the TypeDescriptor of the given model. This approach creates a simple pipeline, with the possibility of taking the processing result at any moment and injecting it into the framework (ASP.NET MVC) in whatever way is preferable.

How leader should answer a (technical) question

Being the leader of a team means questions. Sometimes too many to answer. Questions mean interruptions in your work, hence one may think about minimizing the time spent on answering. You can think of an easy solution: let's put in place a daily time limit which, once exhausted, disallows any further questions. Yep, it will limit the time spent on answering, but will it help your team as a whole? So many unanswered questions which should have been answered yesterday… It's wrong.
One can also come up with the idea of limiting the answer time: for instance, no more than 5 minutes for a chat about your question. But what if the meeting ends just before the moment of enlightenment? It's wrong too.
My preferred way of answering a question is to bring even more insight than needed: to not simply answer how to do it, but to explain why it should be done this way. It's profitable to spend even ten minutes more, allowing the asker to think about the reasoning and follow the same path. After all, when the general knowledge about how to solve the problem is passed on, you will have two people in your team who can answer the same question in the future. It's a lot better to say 'he knows how it works and can give you some information; you can analyze this together' than 'this was done before, ask this copier-of-ideas, he will show you how to paste a snippet of code'. This will reduce the chance of you being the bottleneck in your team and increase the chances of self-organization.