This post has been imported from my previous blog. I did my best to parse the XML properly, but it may still contain some errors.
If you find one, send a Pull Request.
I read and watch a lot of the material published on the InfoQ site. In the majority of cases I enjoy it and find it valid. Recently I read an article about Web APIs and the Select N+1 problem, and it lacks the very basic information one should provide when writing about the web and HTTP performance. The post discusses structuring your Web API so that it provides links/identifiers to other resources the client should query to get the full information. It's easy to imagine that returning a collection of identifiers, for example the ids of the books belonging to a given category, can bring many more requests to your server. A client iterating over the books will hit your app one id at a time, turning into anything from an unplanned load test to a fully developed DoS.
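To make the problem concrete, here is a minimal sketch of such a client in Go. The api.example.com endpoints and the response shape are hypothetical, not taken from the InfoQ article:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Category is a hypothetical API response that returns only the ids
// of the books it contains, not the books themselves.
type Category struct {
	Name    string `json:"name"`
	BookIDs []int  `json:"bookIds"`
}

func main() {
	// One request for the category...
	resp, err := http.Get("https://api.example.com/categories/42")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var c Category
	if err := json.NewDecoder(resp.Body).Decode(&c); err != nil {
		panic(err)
	}

	// ...plus N more, one per book id: the Select N+1 problem
	// moved onto the wire, repeated by every client.
	for _, id := range c.BookIDs {
		r, err := http.Get(fmt.Sprintf("https://api.example.com/books/%d", id))
		if err != nil {
			panic(err)
		}
		r.Body.Close()
	}
}
```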
The answer to this problem lies in the most basic HTTP machinery provided by the specification: cache headers and ETags. The article makes no mention of properly tagging your responses so that you can return 304 Not Modified when a client asks for data that hasn't changed. HTTP caching and its expiration aren't mentioned either. Recently Greg Young posted a great article about leveraging HTTP caching. The quote that best sums up his take would be:
> This is often a hard lesson to learn for developers. More often than not you should not try to scale your own software but instead prefer to scale commoditized things. Building performant and scalable things is hard, the smaller the surface area the better. Which is a more complex problem a basic reverse proxy or your business domain?
Before reaching for fancy caching systems, understand your responses: cache forever what never changes, and use ETags to version the things that may change. Only when you actually hit a performance problem should you turn to more complex solutions.
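As a minimal sketch of that advice, here is what it could look like in a plain Go HTTP server. The resources, the Cache-Control values and the version-based ETag are assumptions for illustration, not code from the article:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Immutable resource: published once, never changes.
	// Tell every cache between you and the client to keep it forever.
	http.HandleFunc("/books/42", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Cache-Control", "public, max-age=31536000, immutable")
		fmt.Fprint(w, `{"id":42,"title":"Some Book"}`)
	})

	// Mutable resource: version it with an ETag so a client that already
	// holds the current version gets a cheap 304 instead of the full body.
	http.HandleFunc("/categories/42", func(w http.ResponseWriter, r *http.Request) {
		etag := `"v7"` // derived from the resource's version or a content hash
		w.Header().Set("ETag", etag)
		w.Header().Set("Cache-Control", "public, no-cache") // revalidate on each use
		if r.Header.Get("If-None-Match") == etag {
			w.WriteHeader(http.StatusNotModified) // 304, empty body
			return
		}
		fmt.Fprint(w, `{"name":"databases","bookIds":[42,43,44]}`)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

With headers like these, a commodity reverse proxy or the browser cache absorbs the repeated GETs, which is exactly the "scale commoditized things" point from Greg's quote.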
For the sake of reference, the author of the InfoQ post responded to my tweet here.