Overvalidated design

Imagine that you’re asked to allow adding additional validation to the properties of some models in an ASP.NET MVC application. The set of property types is given and contains the following: string, int, DateTime, and a reference to a glossary entity. The business scenario (the why) is irrelevant and won’t be covered here.
To make this play nicely with MVC there are multiple ways to resolve the case. The following can be used:

  • a custom ModelValidatorProvider implementation can be provided, which takes the validation information saved in some storage and applies it to the validated models (see the sketch after this list)
  • an extension to the TypeDescriptor facilities, adding attributes to the specified properties
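
A minimal sketch of the first option could look like the code below. IValidationRuleStore, ValidationRule and both classes are assumed names for this post, not an existing API; only ModelValidatorProvider, ModelValidator and ModelValidationResult come from System.Web.Mvc.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

// Assumed abstraction over whatever storage holds the validation metadata.
public interface IValidationRuleStore
{
    IEnumerable<ValidationRule> GetRules(Type containerType, string propertyName);
}

public abstract class ValidationRule
{
    public string ErrorMessage { get; set; }
    public abstract bool IsSatisfiedBy(object value);
}

public class StoredRulesValidatorProvider : ModelValidatorProvider
{
    private readonly IValidationRuleStore _store;

    public StoredRulesValidatorProvider(IValidationRuleStore store)
    {
        _store = store;
    }

    public override IEnumerable<ModelValidator> GetValidators(
        ModelMetadata metadata, ControllerContext context)
    {
        // Wrap every rule stored for this model property in an MVC validator.
        return _store
            .GetRules(metadata.ContainerType, metadata.PropertyName)
            .Select(rule => new StoredRuleValidator(metadata, context, rule));
    }
}

public class StoredRuleValidator : ModelValidator
{
    private readonly ValidationRule _rule;

    public StoredRuleValidator(ModelMetadata metadata, ControllerContext context,
        ValidationRule rule)
        : base(metadata, context)
    {
        _rule = rule;
    }

    public override IEnumerable<ModelValidationResult> Validate(object container)
    {
        if (!_rule.IsSatisfiedBy(Metadata.Model))
            yield return new ModelValidationResult { Message = _rule.ErrorMessage };
    }
}
```

The provider is then registered at startup with ModelValidatorProviders.Providers.Add(new StoredRulesValidatorProvider(store)).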

But how should this information be stored, given a meta description (types and their properties already mapped to entities in a database)? The first take was to provide an abstract class Validator and subclass it with a fully descriptive design, allowing any type of parameter to be saved with any kind of operator. You can imagine how many enums and object-typed properties (not a strongly typed API at all) it created.
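
Just to illustrate the problem (all names below are made up for this post), the fully descriptive take degenerated into something like:

```csharp
// One operator enum trying to cover every imaginable check...
public enum Operator
{
    Equal, NotEqual, Less, Greater, Between, Matches // and so on
}

// ...and one weakly typed class trying to describe every validator.
public abstract class Validator
{
    public Operator Operator { get; set; }
    public object Operand { get; set; }       // string? int? DateTime? entity id?
    public object SecondOperand { get; set; } // meaningful only for Between
}
```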

The “what for?” question arose. Given a set of types and the possibilities for their validation, why not provide validators tightly coupled to those types? If the set of validated types is frozen, the switches can easily be replaced with visitors (a very stable hierarchy), which can transform the given data into something usable by MVC.
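
A minimal sketch of such a type-coupled hierarchy (class and member names are my assumptions, not the actual code) could be:

```csharp
using System;

// One visitor interface enumerating the frozen set of validated types.
public interface IValidatorVisitor
{
    void Visit(StringValidator validator);
    void Visit(IntValidator validator);
    void Visit(DateTimeValidator validator);
    void Visit(GlossaryEntityValidator validator);
}

public abstract class Validator
{
    public abstract void Accept(IValidatorVisitor visitor);
}

// Each validator is strongly typed and stores only the settings
// meaningful for its own type.
public class StringValidator : Validator
{
    public int? MaxLength { get; set; }
    public string Pattern { get; set; }

    public override void Accept(IValidatorVisitor visitor) { visitor.Visit(this); }
}

public class IntValidator : Validator
{
    public int? Min { get; set; }
    public int? Max { get; set; }

    public override void Accept(IValidatorVisitor visitor) { visitor.Visit(this); }
}

public class DateTimeValidator : Validator
{
    public DateTime? NotBefore { get; set; }

    public override void Accept(IValidatorVisitor visitor) { visitor.Visit(this); }
}

public class GlossaryEntityValidator : Validator
{
    public bool MustExist { get; set; }

    public override void Accept(IValidatorVisitor visitor) { visitor.Visit(this); }
}
```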

The new design of the validator data

Having your validator information correlated with types which should not change for a while allows for easier editing and storing (no more operators or object-typed properties). The transformed values can be easily applied via a ModelValidator or added as attributes to the TypeDescriptor of the given model. This approach creates a simple pipeline with the possibility of getting the processing result at any moment and injecting it into the framework (ASP.NET MVC) in whatever way you prefer.
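
One possible endpoint of that pipeline, building on the assumed types from the previous sketch, is a visitor translating the validators into DataAnnotations attributes, which MVC already understands:

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class AttributeEmittingVisitor : IValidatorVisitor
{
    private readonly List<ValidationAttribute> _attributes =
        new List<ValidationAttribute>();

    public IEnumerable<ValidationAttribute> Attributes
    {
        get { return _attributes; }
    }

    public void Visit(StringValidator validator)
    {
        if (validator.MaxLength.HasValue)
            _attributes.Add(new StringLengthAttribute(validator.MaxLength.Value));
        if (validator.Pattern != null)
            _attributes.Add(new RegularExpressionAttribute(validator.Pattern));
    }

    public void Visit(IntValidator validator)
    {
        if (validator.Min.HasValue && validator.Max.HasValue)
            _attributes.Add(new RangeAttribute(validator.Min.Value, validator.Max.Value));
    }

    public void Visit(DateTimeValidator validator) { /* ... */ }

    public void Visit(GlossaryEntityValidator validator) { /* ... */ }
}
```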

What am I missing here?

If you’re in a startup and have a full-time job at the same time, as I do, this post is for you.

The initial startup pressure and tempo are huge. Focused on features, you bring more and more of them to life. How often do you load your project, collapse its whole structure and ask questions like:

  • What am I doing now?
  • How does it influence the rest of the system?
  • Is everything I need expressible in the current infrastructure and/or design?
  • Is something which I know from other projects missing here?

It might seem that those open questions are unneeded, too silly to ask, but since the last time I asked them they have become a weekly routine. To show you what I mean, I’ll give you an example.

I write tests. As you already know, not always unit tests, but still. During one of my write-test/run/fix-error cycles I noticed that it was quite hard to get all the information I needed. An assert was failing and without debugging, only by viewing the logs, I had no idea what might have gone wrong. I reopened the project and asked: ‘what am I missing here?’ After a global review of the whole solution I did find a thing. During all the feature-based design I had made a silly mistake: I hadn’t provided any logging in the application. You know, the _if_log_isDebugEnabled_ stuff (take a look at the NHibernate code). It took me no more than 10 minutes to spike it with a console appender, and I reran my tests. Ha! Some components did not log one or two operations and that was it.
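
For completeness, this is the pattern I mean, shown here in its log4net flavour (OrderHandler and Order are just made-up example classes):

```csharp
using log4net;

public class Order
{
    public int Id { get; set; }
}

public class OrderHandler
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderHandler));

    public void Handle(Order order)
    {
        // The NHibernate-style guard: the message is not even built
        // when DEBUG logging is off.
        if (Log.IsDebugEnabled)
            Log.DebugFormat("Handling order {0}", order.Id);

        // ...
    }
}
```

The ten-minute spike boiled down to calling log4net’s BasicConfigurator.Configure() at startup, which wires up a console appender for the root logger.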

It’s worth not losing the (overused phrase) big picture and, from time to time, stopping the feature grind to ask these silly, ordinary questions.

Features and unit tests

Currently I’m involved in a greenfield project which was lucky enough to start as an event-sourced DDD application. The paradigm fits the domain well, but there is a cost of iteration zero which is being paid now: a small piece of infrastructure has to be provided to wrap the domain. It’s really small (well, maybe not that small), but it has to be tested. To make it simple/complex, I’d add that it consists of a few functionalities/modules, each providing and consuming some contracts. So what about testing?

Recently I re-read Kozmic’s blog entry covering the very interesting problem of unit tests vs ‘real life scenarios’, as well as ‘Concepts and features’ written by Ayende. They made me think a lot about the new application and the way I had tried to test it. I took the first module, which had a small test base written in a not-as-good-as-it-should-be manner, deleted the tests and started implementing something new.

I know that some people hold the ‘no container in your tests’ opinion, but in tests of the infrastructure? Come on, this is all about matching all the pieces of code you wrote! Nevertheless, having the WindsorInstaller of a specific part of the application, I initialized the container, added one stub for the external dependencies and had this part of the application fully configured. All the tests used the endpoints provided by the whole functionality, and it worked like a charm. It seems that the number of test fixtures will be equal to the number of functionalities/modules the infrastructure provides. I find it a quick, simple and powerful way of getting it tested. What about unit tests? If I ever have an infrastructure class longer than 100 lines with plenty of methods in it, I will go for unit tests; but for now, this approach works like a charm.
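
A sketch of one such fixture (the module’s contracts, stub and installer carry placeholder names invented for this post) looks roughly like this, using NUnit:

```csharp
using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;
using NUnit.Framework;

// Placeholder contracts standing in for one module of the real app.
public interface IExternalGateway { /* the stubbed external dependency */ }
public class ExternalGatewayStub : IExternalGateway { }

public interface IMessagingEndpoint { /* the module's public contract */ }
public class MessagingEndpoint : IMessagingEndpoint
{
    public MessagingEndpoint(IExternalGateway gateway) { }
}

// The very same installer the application uses at runtime.
public class MessagingModuleInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(
            Component.For<IMessagingEndpoint>().ImplementedBy<MessagingEndpoint>());
    }
}

[TestFixture]
public class MessagingModuleTests
{
    private WindsorContainer _container;

    [SetUp]
    public void SetUp()
    {
        _container = new WindsorContainer();

        // Windsor serves the first registration of a service, so the stub
        // goes in before the installer runs.
        _container.Register(
            Component.For<IExternalGateway>().Instance(new ExternalGatewayStub()));

        _container.Install(new MessagingModuleInstaller());
    }

    [TearDown]
    public void TearDown()
    {
        _container.Dispose();
    }

    [Test]
    public void Endpoint_is_fully_wired_by_the_installer()
    {
        var endpoint = _container.Resolve<IMessagingEndpoint>();
        Assert.IsNotNull(endpoint);
        // in reality: drive the module through its public contract only
    }
}
```

The registration order is the design choice here: since Windsor resolves the first component registered for a service, the stub wins over whatever real implementation the installer would register.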