
Tracking changes to complex viewmodels with Knockout.JS

As part of a project I’ve been working on for a client, we’ve decided to implement HTTP PATCH in our API for making changes. The main client consuming the API is a web application driven by Knockout.JS, so this meant we had to find a way to figure out what had changed on our view model, and then send just those values over the wire.

There is nothing new or exciting about this requirement in itself. The question has been posed before, and it was the subject of a blog post way back in 2011 by Ryan Niemeyer. What was quite exciting, however, was that our solution ended up doing much more than just detect changes to viewmodels. We needed to keep tabs on individual property changes, changes to arrays (adds/deletes/modifications), changes to child objects and even changes to child objects nested within arrays. The result was a complete change tracking implementation for Knockout that can process not just one object but a complete object graph.

In this two part post I’ll attempt to share the code, the research and the story of how we arrived at the final implementation.

Identifying that a change has occurred

The first step was to get basic change tracking working given a view model with observable properties containing values – no complex objects.

Initial googling turned up the following approaches as starting points:

These methods all involved some variation on adding an isDirty computed observable to your view model. Ryan’s example stores the initial state of the object when it is defined which can then be used as a point of comparison to figure out if a change has occurred.

Suprotim’s approach is based on Ryan’s method but instead of storing a json snapshot of the initial object (which could potentially be very large for complex view models), it merely subscribes to all the observable properties of the view model and sets the isDirty flag accordingly.

Both of these are very lightweight and efficient ways of detecting that a change has occurred, but as detailed in this thread they can’t pinpoint exactly which observable caused the change. Something more was needed.

Tracking changes to simple values

After a bit more digging, a clever solution to the problem of tracking changes to individual properties emerged as described by Stack Overflow one hit wonder, Brett Green in the answer to this question and also in slightly more detail on his blog.

This made use of knockout extenders to add properties to the observables themselves; an overall isDirty() method for the view model as a whole could then be provided by a computed observable. This post almost entirely formed the basis for the first version. After a bit of restructuring, we soon had an implementation that allowed us to track changes to a flat view model:

An example of utilising this change tracking is as follows:
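As a rough sketch of the idea (a minimal hand-rolled observable stands in for Knockout's here so the example is self-contained; in real code this would be registered via `ko.extenders`):

```javascript
// Minimal stand-in for ko.observable so the sketch runs on its own.
function observable(initial) {
  var value = initial, subscribers = [];
  function obs(v) {
    if (arguments.length === 0) return value;
    value = v;
    subscribers.forEach(function (fn) { fn(v); });
  }
  obs.subscribe = function (fn) { subscribers.push(fn); };
  return obs;
}

// The extender: remember the initial value, flip isDirty on any write.
function trackChange(target) {
  var initialValue = target();
  target.isDirty = observable(false);
  target.subscribe(function (newValue) {
    target.isDirty(newValue !== initialValue);
  });
  return target;
}

// A flat view model with per-property change tracking.
var viewModel = {
  firstName: trackChange(observable("Pete")),
  lastName: trackChange(observable("Smith"))
};

viewModel.firstName("Brian");
console.log(viewModel.firstName.isDirty()); // true
console.log(viewModel.lastName.isDirty());  // false
```

Note that because the flag compares against the initial value, reverting a property back to its original value clears it again.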

Detecting changes to complex objects

The next task was to ensure we could work with properties containing complex objects and nested observables. The issue here is that the isDirty property of an observable is only set when its contents are replaced. Modifying a child property of an object within an observable will not trigger the change tracking.

This thread on google groups seemed to be going in the right direction and even had links to two libraries already built:

  • Knockout-Rest seemed promising, but although this was able to detect changes in complex properties and even roll them back, it still could not pinpoint the individual properties that triggered the change.
  • EntitySpaces.js seemed to contain all the required elements, but it relied on generated classes, and the change tracking features were too tightly coupled to its main use as a data access framework. At the time of writing it had not been updated for two years.

In the end we came up with a solution ourselves. In order to detect that a change had occurred further down the graph, we modified the existing isDirty extension member so that in the event that the value of our observable property was a complex object, it should also take into account the isDirty value of any properties of that child object:
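A condensed sketch of that recursion (again with a hand-rolled observable standing in for Knockout's; the flag names mirror the ones used in the post):

```javascript
// Hand-rolled observable standing in for ko.observable.
function observable(initial) {
  var value = initial, subscribers = [];
  function obs(v) {
    if (arguments.length === 0) return value;
    value = v;
    subscribers.forEach(function (fn) { fn(v); });
  }
  obs.subscribe = function (fn) { subscribers.push(fn); };
  return obs;
}

function trackChange(target) {
  var initialValue = target();
  target.hasValueChanged = observable(false);
  target.hasDirtyProperties = observable(false);
  target.isDirty = function () {
    return target.hasValueChanged() || target.hasDirtyProperties();
  };
  target.subscribe(function () { target.hasValueChanged(true); });

  // If the initial value is a complex object, recursively track each of its
  // observable properties and bubble their dirty state up to the parent.
  if (initialValue !== null && typeof initialValue === "object") {
    Object.keys(initialValue).forEach(function (key) {
      var child = initialValue[key];
      if (typeof child === "function" && child.subscribe) {
        trackChange(child);
        child.subscribe(function () { target.hasDirtyProperties(true); });
        child.hasDirtyProperties.subscribe(function () {
          target.hasDirtyProperties(true);
        });
      }
    });
  }
  return target;
}

var person = {
  skills: trackChange(observable({
    technical: observable("C#"),
    interpersonal: observable("mediocre")
  }))
};

person.skills().technical("JavaScript");
console.log(person.skills.isDirty());         // true
console.log(person.skills.hasValueChanged()); // false
```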

Now when extending an observable to apply change tracking, if we find that the initial value is a complex object we also iterate over any properties of our child object and recursively apply change tracking to those observables as well. We also set up subscriptions to the resulting isDirty flags of the child properties to ensure we set the hasDirtyProperties flag on the target.

Tracking individual changes within complex objects

After the previous modifications, our change tracking now behaves like this:

Obviously there’s something missing here… we know that the Skills object has been modified and we also technically know which property of the object was modified but that information isn’t being respected by getChangesFromModel.

Previously it was sufficient to pull out changes by simply returning the value of each observable. That’s no longer the case so we have to add a getChanges method to our observables at the same level as isDirty, and then use this instead of the raw value when building our change log:
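A sketch of that shape (the `tracked` helper below just fabricates observables carrying the relevant flags, so the recursion itself is the focus):

```javascript
// Fabricate a tracked observable with hasValueChanged / hasDirtyProperties
// flags, so getChanges can be shown in isolation.
function tracked(value, opts) {
  var obs = function () { return value; };
  obs.hasValueChanged = function () { return !!(opts && opts.replaced); };
  obs.hasDirtyProperties = function () { return !!(opts && opts.dirtyChildren); };
  obs.isDirty = function () {
    return obs.hasValueChanged() || obs.hasDirtyProperties();
  };
  return obs;
}

// Unwrap nested observables into plain values.
function unwrap(value) {
  if (typeof value !== "object" || value === null) return value;
  var out = {};
  Object.keys(value).forEach(function (key) {
    var v = value[key];
    out[key] = typeof v === "function" ? unwrap(v()) : v;
  });
  return out;
}

// getChanges: if the contents were replaced wholesale, everything counts;
// otherwise recurse and collect only the dirty child properties.
function getChanges(target) {
  var value = target();
  if (target.hasValueChanged() || typeof value !== "object" || value === null) {
    return unwrap(value);
  }
  var changes = {};
  Object.keys(value).forEach(function (key) {
    if (value[key].isDirty()) changes[key] = getChanges(value[key]);
  });
  return changes;
}

var model = tracked({
  name: tracked("Pete", { replaced: true }),
  skills: tracked({
    technical: tracked("JavaScript", { replaced: true }),
    interpersonal: tracked("mediocre")
  }, { dirtyChildren: true })
}, { dirtyChildren: true });

console.log(JSON.stringify(getChanges(model)));
// {"name":"Pete","skills":{"technical":"JavaScript"}}
```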

Now our getChangesFromModel will operate recursively and produce the results we’d expect. I’d like to draw your attention to this section of the above code in particular:

There’s a reason we’ve been using separate observables to track hasValueChanged and hasDirtyProperties; in the event that we have replaced the contents of the observable wholesale, we must pull out all the values.

Here’s the change tracking complete with complex objects in action:


In this post we’ve seen how we can use a knockout extender and an isDirty observable to detect changes to individual properties within a view model. We’ve also seen some of the potential pitfalls you may encounter when dealing with nested complex objects and how we can overcome these to provide a robust change tracking system.

In the second part of this post, we’ll look at the real killer feature… tracking changes to complex objects within arrays.

You can view the full code for the finished example here: or play around with the jsFiddle!


Edit: As part of the research for this post, I did come across which takes a very similar approach and even handles arrays. It’s a shame I had not seen this when writing the code as it would have been quite useful.

TDD, continuous deployment and the golden number

You’re a strong supporter of the benefits of continuous integration… whenever anyone commits to your source repo, all your tests are run. You have close to 100% coverage (or as close as you desire to have), so if anything breaks you’re going to know about it. Once stuff is pushed and all the tests pass then you’re good to ship, right?


Do you know how many tests you have in your solution? Does everyone in your team know? If they don’t know, do they have ready access to this figure both before and after the push? This figure is important; it’s your TDD Golden Number.

All is not what it seems

Someone you work with is trying out a new Github client. Sure, it’s not new in the grand scheme of things, but it’s new to YOUR TEAM. It should work fine, but just like a screw from a disassembled piece of furniture, just because it fits the hole it was removed from doesn’t mean it will fit another.

Something goes wrong with a rebase or a merge and you don’t notice. This shit should be easy and happen as a matter of course, but this time it doesn’t and changesets get lost. Not only that, but those changesets span a complete Red-Green-Refactor cycle, so you’ve lost both code and tests.

The build runs… green light. But…

UNLESS YOU KNOW YOUR GOLDEN NUMBER and have a way of verifying it against the number of tests that ran, you have no idea if this green light is a true indicator of success. All you know is that all the tests in source control pass.

Risk management

Granted, the chances of a scenario where commits get lost during a merge are slim, but if any loss is possible then the chances are that you’ll lose tests as well as code because they’re probably in the same commit. This leads us to:

POINT THE FIRST: Make sure you commit your tests separately from the code that makes them pass.

Some may argue that you can take this one step further and commit each step of Red-Green-Refactor separately, and this works too so long as you don’t push until you have a green light. This is a good starting point for minimizing the chance of loss.

POINT THE SECOND: Test your golden number

You’ll need to be careful how you deal with this one from a process and team management point of view, but why not VERIFY YOUR GOLDEN NUMBER. Write a test that checks that the number of tests matches what it was at the time of the commit.
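A sketch of what such a meta-test might look like (the counting here is deliberately naive, scanning sources for `it(` calls; a real test runner reports the count itself):

```javascript
// The golden number lives in source control and is updated in its own commit
// whenever tests are added or removed.
var GOLDEN_NUMBER = 3;

// Stand-in for test sources read from disk.
var testSources = [
  "it('adds two numbers', ...); it('subtracts', ...);",
  "it('multiplies', ...);"
];

// Naive count: scan for it( calls across all test files.
function countTests(sources) {
  return sources.reduce(function (total, src) {
    return total + (src.match(/\bit\(/g) || []).length;
  }, 0);
}

// The meta-test: fail the build if the suite isn't exactly the expected size.
console.log(countTests(testSources) === GOLDEN_NUMBER); // true
```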

Wait a minute…

There’s a good reason why you might not think to do this; what if the number of tests lost is equal to the number of tests added anew by the commit?


The chances of our worst case scenario playing out now, with so many distinct steps, are orders of magnitude lower than in our previous case. For disaster to go un-noticed, you have to lose the tests AND the accompanying golden number test AND the code that was in a separate commit to a separate repository.

All in all, a good way to think about the golden number is like this:

Seriously though

Even with the best intentions and code coverage, there’s always a chance that something may go wrong and you won’t know about it. When employed together, these three points will help you efficiently mitigate this risk.


For f**ks sake, pick a god damn signature already!!!

I was fortunate to speak to a lot of people about OWIN, both in the run up to and during the recent NDC London conference. And I tell you what, I’m at a complete loss as to what the hell has happened. OWIN is a great standard, with some great support from many sources and with some great minds putting in their time and effort. Despite this though, things are in dire straits.

One particular issue has cropped up which has got everyone faffing about. There has been a lot of ‘discussion’ on this recently both on Github and on Twitter. And quite frankly I’m rather frustrated by it all.

For those that don’t know, can’t be bothered to read the thread or simply can’t fathom it out from all the tangents and confusion, there’s currently no ‘standard’ way of writing middleware that will ensure that any given provider will be able to wire it into the pipeline.

Microsoft came up with a wire up scenario in the form of an IAppBuilder implementation when they implemented Katana, which utilises a class with a constructor and an invoke function, but this is not part of the OWIN specification. There’s another wire up solution in the form of Mark Rendle’s “fix” that just needs a lambda function, but this isn’t part of OWIN either. There just isn’t a standard at all.

So basically at the moment, it’s nigh on impossible to implement middleware that may choose routes through the pipeline, or otherwise wrap parts of the pipeline and work with it in a generic manner. And for all the discussion going on there’s no solution or agreement in sight. You just have to hope that the wire up scenario that’s available supports your chosen signature.

How the hell has this happened??


Forgive me, but I thought this was a core tenet of OWIN – Interoperability – between middleware and OWIN compatible hosting environments and in general? Yet the end result of all this is that we’ve got a major interoperability problem… the exact opposite of the OWIN philosophy.

It’s not hard, it’s not rocket science and it doesn’t need protracted discussion. JUST PICK A F***ING SIGNATURE ALREADY.

Func<AppFunc, AppFunc> – There, I did it.
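For anyone more at home outside C#, the shape of that signature can be sketched in JavaScript too: an application function takes the OWIN environment dictionary and returns a promise, and middleware is a function that takes the next application function and returns a new one wrapping it. (The environment key below is real OWIN; the logging is purely illustrative.)

```javascript
// AppFunc: environment dictionary in, promise out.
function app(env) {
  env["owin.ResponseStatusCode"] = 200;
  return Promise.resolve();
}

// Func<AppFunc, AppFunc>: take the next app, return a wrapping app.
function loggingMiddleware(next) {
  return function (env) {
    env.log = (env.log || []).concat("before");
    return next(env).then(function () {
      env.log.push("after");
    });
  };
}

// Composing the pipeline is just function application.
var pipeline = loggingMiddleware(app);

var env = {};
pipeline(env).then(function () {
  console.log(env["owin.ResponseStatusCode"], env.log.join(","));
});
```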

Why you shouldn’t use a web framework to build your next API

A few days ago I blogged about Nancy style Modules in Asp.Net Web API with Superscribe. Now I’m going to take this one step further and show how you can build a basic Web API without any web framework at all, using OWIN and Superscribe, a graph based routing framework for Asp.Net.


Technically this claim is open to interpretation… just what constitutes a ‘web framework’ is of course debatable. But in the context of this post, I am referring to those such as Nancy Fx, Service Stack, Web API, etc. All these frameworks are great, and simplify the development of complex apps in a myriad of ways, but we rarely stop to think about exactly what we need to achieve our goals.

Most frameworks cater for static content, rendering views, json/xml data and various other ways of providing content and can end up being quite bulky… some more so than others. But it’s actually more than possible to build a functional Web API with just a few basic elements, and the chances are it’ll be more performant too. All we need to do this are the following components:

  • Hosting
  • Routing & Route Handlers
  • Content negotiation
  • Serialisation/Deserialisation

With Asp.Net vNext and OWIN, hosting our app is a piece of cake. We can use any number of web servers seamlessly, even run our app out of the console. Json.Net is easily leveraged to provide serialisation & deserialisation… and there are other libraries for pretty much any other media type you may wish for.

We’re going to use Superscribe modules for handling our routes, so that just leaves us with content negotiation. In its simplest terms, conneg is just a way of mapping between media types and the correct serialiser/deserialiser. Superscribe.Owin has this feature built in, so we’ve already got everything that we need!
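That mapping can be sketched in a few lines of JavaScript (real negotiation also honours wildcards and q-value weighting; this only does exact matches with a text/html fallback):

```javascript
// Map media types to serialise/deserialise functions.
var mediaTypeHandlers = {
  "application/json": {
    write: function (obj) { return JSON.stringify(obj); },
    read: function (text) { return JSON.parse(text); }
  },
  "text/html": {
    write: function (obj) { return String(obj); }
  }
};

// Pick a handler from the Accept header: first exact match wins,
// falling back to text/html.
function negotiate(acceptHeader) {
  var types = acceptHeader.split(",").map(function (t) {
    return t.trim().split(";")[0];
  });
  for (var i = 0; i < types.length; i++) {
    if (mediaTypeHandlers[types[i]]) return mediaTypeHandlers[types[i]];
  }
  return mediaTypeHandlers["text/html"];
}

console.log(negotiate("application/json").write({ hello: "world" }));
// {"hello":"world"}
```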

I’m sure there will be a few who will argue that Superscribe is technically a framework… but it’s specialist and not on the scale of the others I mentioned. And we could still roll our own routing if we so wished!

Setting up

Let’s put this into practice. Open up Visual Studio and create a new empty web project:

Create project

First of all, use the package manager UI or console to install the following packages. This will install the relevant Owin bits and pieces and allow us to host our app using IIS Express.

Package Manager

Finally, install the Superscribe.Owin package along with the Superscribe core library.

Superscribe nuget

Enable Owin

We now need to make a quick modification to our Web.config file to ensure that OWIN will handle our requests, so open it up and add the appSettings section as found below:

OWIN expects a file called Startup.cs in the root of your project, which it uses for configuration of Middleware components. In basic terms you can think of it as similar to global.asax in traditional Asp.Net. We’ll need to add one to tell OWIN that we want to use Superscribe to handle our requests:

Here we’ve also configured Superscribe to respond to requests that accept the text/html media type. Because we aren’t using a web framework we’re dealing with OWIN at a very low level, so we need to be very prescriptive with our content negotiation. If it’s not in the list, it won’t be handled… more on this later.

Finally let’s add our module. This step is identical to the one in the Superscribe Web API hello world sample, except this time we need to inherit from Superscribe.Owin.SuperscribeOwinModule. Add the hello world handler and you should have a module that looks like this:

Now we’re ready to go! If you’ve done everything right, you should see the Hello World message

Hello owin

Adding support for Json

So far our API can only handle requests that accept text/html as a response, which is obviously a bit naff. As I mentioned previously, because we have no framework to help us out we have to add support for media types manually on a case by case basis. We’re going to need something to serialise/deserialise our Json for us, so let’s go ahead and install ol’ faithful Json.Net: Nuget

Now let’s go back to Startup.cs and take a closer look at our Superscribe configuration. Adding MediaTypeHandlers is quite straightforward… in the case of text/html we’ve specified a Write function which simply calls ToString() on whatever we returned from our GET handler and adds the result to the response. If we want to be able to do model binding, we’ll also need to specify a Read function:

It’s really not a huge amount of code, as Json.Net is doing most of the hard work for us. I’ve also extended our module with an extra handler to give us some interesting Json to look at. I’ve used a simple array, but feel free to be more creative:

Now we can fire up our app again and see the fruits of our labour. Just one thing of note: we’ll need to issue a request with an appropriate Accept: application/json header. You can use Curl for this, or alternatively there are extensions available for most browsers to facilitate sending bespoke http requests. I’m using Dev Http Client for Chrome:

Dev http client

Not bad for a few minutes work!

Model binding

Just like with our Web API modules, we can bind content from the body of the request to a strongly typed model using the Bind function. This will invoke the Read function of our Media Type handler, but this time it will use the incoming content-type header to figure out which one to use:

In this case, our endpoint is expecting a product and responds accordingly to acknowledge receipt. We can also set the status code of the response as seen above (note that this is slightly different syntax from the Web API module).

Our product is pretty basic but suitable for this demonstration. Fire it up once again, and here’s the result… no hefty frameworks required!

Product Post


The future of Asp.Net is upon us, so pretty soon we’re all going to be getting to grips with OWIN in some shape or form. While its primary advantage is decoupling an application from its hosting environment & middleware, OWIN’s low level, close to the bone nature makes it very powerful and enables us to do things such as the sample in this post with very little effort.

This also raises an interesting and important point about framework usage. Just as some are reconsidering whether they really need to include all of jQuery just to do a few select bits of DOM manipulation, so we should be asking ourselves: do we really need a whole web framework? Trying to do something manually is often a great way to better understand it.

As always, you can find the source for this post on github here:



Nancy style Modules in Asp.Net Web API with Superscribe

After months of work I can finally announce that the first graph based routing framework for Asp.Net is pretty much complete. I’ll be writing another ‘Introducing’ post for this project shortly, but in the meantime I hope you’ll find this little taster intriguing.

UPDATE: You can now use Superscribe Modules with OWIN too.

Inspired by NancyFX

If you don’t know much about Nancy (shame on you) then take a quick look at their site and you’ll quickly get the lay of the land. Nancy is built on the ‘Super Duper Happy Path’ principle which put simply is a fourfold philosophy – “It just works”, “Easily customisable”, “Low ceremony” & “Low friction”.

Hello world in Nancy looks like this:

Couldn’t be much simpler if it tried… particularly when it comes to defining routes and the code that handles them. I can’t stress how much I would like to thank the Nancy team for inspiring the approach covered in this post so if you are reading this, keep up the good work guys and I hope you understand that imitation is the greatest form of flattery!

Ask anyone who uses Web API on a daily basis and they’ll generally tell you that the default MVC style routing is a bag of balls. Wouldn’t it be nice if we could use this module style within Web API?

Leading question much?

With the release of Superscribe.WebAPI 0.2.2 we can now do just that, and here’s how.

Fire up Visual Studio and create a new MVC 4 project:

Create Project

Choose to create an empty web application when prompted:

Empty web application

Once your solution loads, go ahead and delete the App_Data, Controllers, Models and Views folders as we won’t be needing them.

Delete crap
You can also remove the RouteConfig.cs file in App_Start. Don’t forget to remove the reference to this in Global.asax too.

Hello, Superscribe

Next, use the package manager to install the Superscribe.WebApi package. This will also install the Superscribe core library.

Install Superscribe

Once that’s complete, we can add our module. Create a new class called HelloWorldModule, and make it implement Superscribe.WebApi.Modules.SuperscribeModule.

Just like in the Nancy sample, all we need to do is add our GET handler to the module. When finished we should have something like this:

Finally, we need to tell Asp.Net that we want to route using Superscribe. We can do this in WebApiConfig.cs by removing the default routing logic and replacing it with the following. I’ve also removed the xml formatter for good measure.

That’s it we’re done, go ahead and hit start. Super Duper Happy Path, have you met Asp.Net Web API?

Hello world

Parameter Capture

At this point I should point out that Superscribe routes are defined fundamentally differently to routes in Nancy, or indeed Web Api’s attribute routing. Superscribe is a graph based routing framework, so route definitions consist of strongly typed segments.

There’s plenty of syntactic sugar to help us out along the way of course. The best way of demonstrating how this affects us is by extending our example to include a parameter. The following is equivalent to a route /Hello/{Name} where name is a string parameter:

As the documentation matures I’ll be filling in the gaps, but in brief Superscribe uses a DSL for defining graph nodes and the edges between them. In this example, the Get assignment attaches a very simple two node graph to the base node where all routes originate. Here’s the result:

Hello Roysvork

Funky interlude

Using the DSL and strongly typed segment definitions, we can harness the full power of graph based routing. As I mentioned in my previous post, all route nodes can have an Activation function, an Action function, and a Final function defined. Superscribe is doing a whole bunch of stuff for us setting these up:

  • The “Hello” node is created with an Activation function of segment == “Hello”, and there is no Action function.
  • In the case of the (ʃString)”Name” node, the Activation function is set as a Regex matching any string, and the Action function captures the segment value as a named parameter.
  • In this mode of usage (there are others) the final function is defined by the assignment to Get[…]

In this module mode, Superscribe also allows us to specify activation functions through the DSL. Remember, an activation function dictates whether or not the current segment is a match for this node. For example, we can do the following:

So now the behavior is the same for the first 45 seconds in every minute. For the last 15 seconds, you get insulted. The order of the routes is of course important here, otherwise the regular route will match all the time and the time dependent one won’t get a look in. It’s a pretty useless example but a nice demonstration nonetheless!

One more thing worthy of note here… the DSL relies on operator overloads and implicit casts to do its thing. Sometimes it needs a helping hand though… if it doesn’t have a node of the correct type to ‘anchor’ itself on as in the example, we need to add a leading ʅ / to get our route to compile.

Model binding and other Methods

Back to serious mode now, and on to some more mundane but nonetheless important features. Our modules would be pretty useless without support for DELETE, PUT, POST and their stingy friend PATCH, and without some way of making sense of the Http content we’ve received as part of the body of the request.

As you may expect, you can use indexers to handle other methods just like with Get. The model binding again borrows from Nancy, so a POST implementation works using the Bind method like so:

Returning just a status code is not very RESTful of course, but given that this is just Web API underneath, you can still use HttpResponseMessage to do whatever you like:

Dependency Injection

Finally for this post, we’ll have a look at how dependency injection works in a Superscribe module. Unlike a Web API controller or a Nancy module, Superscribe modules are static for all intents and purposes, being instantiated once at application startup for the purposes of registering routes with the framework.

As a consequence, we can’t inject dependencies straight into our module. We can however call the Require function which leverages the dependency resolver in MVC 4. Here I’m using Ninject, but of course you could use any framework of your choice.


I hope you enjoy having a play around with Modules in Web API… this is just one way you can use Superscribe. One of the design goals of the project was to support different approaches to routing, and I will cover more of these in future posts.

Superscribe has many features already implemented that aren’t discussed in this post… such as regex matching and optional nodes, which I’ll also cover in the near future. If you’re thinking of trying stuff out, just bear in mind that the following things won’t or might not work so well just yet, but are on my list of things to do:

  • HttpMessageHandlers / Global Filters
  • Co-existing with normal controllers
  • Co-existing with attribute routing
  • Co-existing with just about anything

For now, please take this framework for what it is… it’s under development, quite possibly buggy, and changing all the time. Probably don’t use it in a production app just yet, but *do* let me know if you try to do things and they don’t work, via the Github project. Things will become stable real soon!

You can find all the source from the examples in this post here:

Once again, a big shout out to the folks involved with Nancy Fx who inspired this approach!



Graph Based Routing


I’d like to share with you a concept that I’ve been working on for some time. I’ve yet to decide on a good name for it, but at the moment I call it ‘Graph based routing’.

Routing is important… no, vital to most web applications. Whether it’s client side or server side, C# or Node.js, nothing (sensible) can happen without some kind of routing. Yet despite this, it’s usually treated as a secondary aspect of the development process… a mere peasant in your chosen web stack. Routing is seen as a dull, unintelligent series of pattern matches and parameter capture.

This is an outdated view. Routes in a contemporary web app are complex, often hierarchical beasts. Routes such as these can be prohibitively costly both to maintain and to execute, and as such we are prevented from making the most of them. Enter Graph based Routing.

Aims and Overview

Graph based routing is designed to improve upon traditional routing strategies in terms of route definition, performance and flexibility. As the name suggests, defined routes are stored in a graph structure… each node representing a URI segment. Edges in the graph then represent links to the segments that follow e.g:

Definition for routes:


It becomes clear why this is a good way of representing routes when you introduce slightly more complexity:

Definition for routes:

graph complex

Instead of having to store each route separately and in its entirety, we create links between segments that have a common predecessor.

The route matching algorithm no longer needs to scan (potentially) the whole route table for matches… just consider the next segment and then choose only between those possibilities represented by the graph edges.

Route definition

The benefits extend to route definition and maintenance too. Consider a theoretical DSL with special ‘/’ and ‘|’ (or) operators to define the above routes as a graph:

   routes = "api" / (
         "products" / ( "bestsellers" | "id" )
       | "categories" / "id" )

Even with such a small number of routes, it’s possible to see an improvement in readability and a reduction in redundant code. Because routes are broken down into objects, we can re-use parts of them and even define complex routes programmatically or by convention.

These routes are all in one place… but that’s not to say this couldn’t be extended to cater for a definition that involves placing routes near to where they are handled. In the case of Asp.Net/C# we could assign parts of routes to static variables and (thread safely) attach more routes to them from anywhere else in our code, use lambdas in controllers, or a myriad of other techniques.

Parsing Strategy

Implementing a graph based routing engine dictates that we parse URIs by traversing the graph of route definitions in a particular way. In traditional routing we can make use of pattern matching and constraints in order to decide how to interpret segments, but we cannot easily use this information to make choices.

With graph based routing this becomes easy by defining an Activation function for each graph edge. Rather than being limited to pattern matching, an activation function contains arbitrary code that must return true or false (matched or not matched) based on the value of the current URI segment.

If an edge is matched, we transition to the succeeding graph node which can then execute an Action function. If an edge is not matched, we move on to the next edge in the sequence until we either have a match or run out of options.

Once route parsing is complete, the Final function of the last node is executed. If the last node does not provide one, the engine must execute the final function of the last travelled node that did.

State machine

In effect, a graph based routing engine is a finite state machine with the URI as input. In order to understand the purpose of the Activation and Action functions, let’s look at a real world example:

Default Web API routing case:
    /api/{controller}/{id} - (optional)


We can see how our graph has produced a very simple state machine, with only one valid transition at each state apart from the optional nature of the id parameter. Pseudo code for each activation function can be seen above each transition line, and a summary of the action function inside each circle.

See also that the optional nature of our Id parameter has created a transition that ‘jumps’ over the Set Id node if a value is not present. Two other things to note here:

  • The final state of the machine is not directly mapped to a graph node, instead this is a result of executing the Final Function.
  • If at any point we fail to find a match for the next URI segment, we transition to an error state and execute an Error Function.
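The walk described above can be sketched in JavaScript (the node and edge shapes are made up for illustration), using the default Web API route from the example. For simplicity the error state is represented by returning an error object rather than a separate Error function:

```javascript
// Walk the graph segment by segment: each edge has an activation function;
// matched nodes may run an action; the last node to supply a final
// function provides the result.
function parseRoute(baseNode, uri, state) {
  var segments = uri.split("/").filter(Boolean);
  var current = baseNode;
  var lastFinal = baseNode.final;

  for (var i = 0; i < segments.length; i++) {
    var segment = segments[i], next = null;
    for (var j = 0; j < current.edges.length; j++) {
      if (current.edges[j].activation(segment)) {
        next = current.edges[j].target;
        break;
      }
    }
    if (!next) return { error: "no match for segment: " + segment };
    if (next.action) next.action(segment, state);
    if (next.final) lastFinal = next.final;
    current = next;
  }
  return lastFinal(state);
}

// Default Web API style: /api/{controller}/{id} - (optional)
var idNode = {
  edges: [],
  action: function (s, st) { st.id = s; },
  final: function (st) { return st; }
};
var controllerNode = {
  edges: [{ activation: function (s) { return /^\w+$/.test(s); }, target: idNode }],
  action: function (s, st) { st.controller = s; },
  final: function (st) { return st; }
};
var apiNode = {
  edges: [{ activation: function (s) { return /^\w+$/.test(s); }, target: controllerNode }]
};
var baseNode = {
  edges: [{ activation: function (s) { return s === "api"; }, target: apiNode }],
  final: function () { return { error: "not found" }; }
};

console.log(parseRoute(baseNode, "/api/products/5", {}));
console.log(parseRoute(baseNode, "/api/products", {}));
```

Note how the optional id falls out naturally: `/api/products` stops at the controller node, and its final function runs because the id node was never reached.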


Through this use of Activation, Action & Final functions, we can reproduce all the functionality of traditional routing mechanisms, while at the same time providing developers with the ability to execute whatever code they like at each stage of the process. You could view each graph node as a miniature piece of middleware.

The routing engine must provide a mechanism for storing ‘state’ while the FSM is running, in order to store things like target controller/action names, parameter values or anything else that the developer needs to implement their node functions. It must also allow nodes to access information about the current Http request/response.

At a basic level, we could easily implement custom model binding, deserialization, or logging as part of our custom action functions. If we take this further we can do some very cool things… consider an implementation for a Javascript app in which certain actions cause nested elements to open. For certain routes we could show, hide or create DOM elements per segment in order to reconstruct the UI state after a full page load.

If we so desired, we could code our activation functions so that our API routing would make different choices based on the user that was currently logged in, or even the time of day. Routing also no longer needs to be linear… by storing state we can even defer making a decision about what to do with our route segments until we execute the final function so we can use all the information available to us.


With graph based routing, you can make routing a first class citizen in your web application. Although we talked a lot about analogues with ASP.NET / Web API routing, the concept is totally platform independent.

What’s really cool when applying this server side is that you don’t even really need a web framework, so long as your action/final functions are able to send HTTP responses. Alternatively, you could even choose which framework you wish to have service your request, which brings me onto another point… now that we are moving towards the OWIN based future, routing should become a middleware concern! But that’s a topic for another post.

All this may have been theoretical, but I am working on implementations for Javascript, Web API and OWIN which are all at various stages of development. I hope to have some more news on this in the next few weeks, so stay tuned and please let me have your thoughts and feedback. This is a brand new concept and any contributions will be gladly received.


Appendix – Glossary

  • Route – A potentially valid URL route made up of nodes, transitions & actions.
  • Segment – Part of a URL separated by ‘/’.
  • Graph – The representation of all routes for an application.
  • Node – A node in the route graph, typically representing a potential route segment match.
  • Base Node – The common root node of every route, and the starting point for route parsing.
  • Error Node – A node that is reached when an error occurs.
  • Transition – A link between two nodes by which we can move from one to the other.
  • Activation function – Determines whether or not a transition can be made.
  • Action function – Executed after a transition has successfully occurred.
  • Final function – Executed by a node *only* if it is the last node in the route.

Is using OData\IQueryable in your Web API an inherently bad thing?

I recently came across this article from a year or so ago, along with some comments and analysis on the matter. The writer and many commentators have a very strong opinion that IQueryable is a leaky abstraction, that this is bad, and that basing APIs around it is also bad.

Now I know these posts are old, but as I’ve recently built an API which exposes an IQueryable, I thought I’d weigh in. Partly I’m curious… I’m interested to know if people still think this way given the recent shift towards REST and HTTP centric APIs in general. But it’s also connected to other debates that are more pertinent, such as this one regarding the Repository Pattern, and also something that goes hand in hand with IQueryable… OData.

The OData Issue

I’m going to start by setting out my position from the outset. I think that OData is misunderstood and has some very useful components… but at the same time it is bulky and tries to do too much. The concept of metadata reeks of WSDL, the current implementation is tightly coupled to Entity Framework, many of the mechanisms\outputs are noisy, and there’s an overall, very particular approach reminiscent of learning to work with WebForms.

Where OData has got it very, very right however is with the URI Query syntax. This provides a relatively clean, pragmatic syntax allowing the client to request filters, paging\ordering and projections on an endpoint that the server will then honour when providing data… all via the querystring. And to top it all off, this is standardised and pretty well documented.
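To make that concrete: the system query options ($filter, $orderby, $top, $skip and $select) from the OData URI conventions compose into a plain querystring. The small TypeScript builder below is an illustrative sketch of my own, not part of any OData library:

```typescript
// Illustrative helper composing OData system query options into a querystring.
// The option names come from the OData URI conventions; the builder itself
// is a hypothetical sketch.

interface ODataQuery {
  filter?: string;   // e.g. "Price gt 10 and Category eq 'Books'"
  orderby?: string;  // e.g. "Name desc"
  top?: number;      // page size
  skip?: number;     // offset for paging
  select?: string[]; // projection: only these fields come back
}

function buildODataQuery(q: ODataQuery): string {
  const parts: string[] = [];
  if (q.filter) parts.push(`$filter=${encodeURIComponent(q.filter)}`);
  if (q.orderby) parts.push(`$orderby=${encodeURIComponent(q.orderby)}`);
  if (q.top !== undefined) parts.push(`$top=${q.top}`);
  if (q.skip !== undefined) parts.push(`$skip=${q.skip}`);
  if (q.select && q.select.length) parts.push(`$select=${q.select.join(",")}`);
  return parts.length ? "?" + parts.join("&") : "";
}

// The third page of products over 10, projected down to two fields:
buildODataQuery({ filter: "Price gt 10", orderby: "Name", top: 20, skip: 40, select: ["Name", "Price"] });
// → "?$filter=Price%20gt%2010&$orderby=Name&$top=20&$skip=40&$select=Name,Price"
```

The server is then free to honour those options however it likes, which is exactly what makes the syntax useful independently of the rest of the standard.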

I’m a great believer in standards as a way of encouraging useful, interoperable frameworks\libraries (promises, anyone?), but Microsoft currently dominates both the standard and the implementation, and this seems to have a negative network effect that prevents the true usefulness of this syntax from gaining ground. And that is something that I would like to, and am currently trying to, address.

I feel I should also add a disclaimer here before we get in too deep. I do not advocate OData query syntax for use with complex domains or DDD. Where this standardised syntax is useful is to augment API methods that provide data that may be displayed in a form of list, tabulated or used in charts and graphs.

OData – the good parts

Now that I’ve hopefully made my position clear, the rest of this post is going to focus entirely on the query aspect of the OData standard that I have mentioned. This feature has ended up tarred with the same brush as the rest of OData, and has had the same vehement criticism directed at it.

To dismiss such a powerful, useful tool for developing APIs for this reason is absolute madness. I’m actually pretty shocked at the number of people in the community that I respect highly for their skill and pragmatism that won’t even give OData query syntax the time of day. Yet when I have conversations about it, it turns out that a lot of the major problems don’t really exist at all.

I’ve made it very clear that I’m not currently interested in the rest of what OData has to offer… this isn’t to say that it’s inherently bad though. For experienced OData scholars designing CRUD based systems, or for use in Sharepoint style services, I’m sure it has a lot of merit. I just tend to design my APIs so that they draw more from RESTful principles, and wider OData features aren’t generally flexible enough or as widely adopted.

So what’s the beef?

Common criticisms of OData that people have include:

  • It’s not RESTful
  • It encourages anaemic models and so doesn’t sit well with DDD
  • IQueryable is a leaky abstraction that is rarely (or never) fully implemented
  • It forces you to expose elements of your DAL through your API
  • It’s intertwined with MS tech
  • It doesn’t work with loosely typed data

Now, I’ve already dealt with two of these criticisms, as they don’t really apply to the query syntax. If we’re being real sticklers, it is kinda hard to take an endpoint that accepts OData query syntax and make it discoverable as would be required for REST, but it’s by no means impossible. The second is the weakest and daftest criticism of the lot: OData query syntax just isn’t designed for complex domain models, but for exactly the opposite, anaemic CRUD stuff.

I’ll come back to the remaining criticisms in just a moment, but as promised, I’ll weigh in on the IQueryable debate first. It’s pretty well understood that leaky abstractions should be avoided, and that inversion of control is something that makes code easier to write and maintain. We also all know that when it comes down to it, it’s really about how much an abstraction (particularly a header interface) brings to the table vs. how much pain it causes us, so I won’t be drawn into a debate about that.

IQueryable in my book is a complete no-brainer. It gives us a common syntax – Linq – that we can access as an expression tree and compile to whatever we want. You can use it to control exactly what comes back from the provider, reducing query times, bandwidth and throughput. These things in themselves should make it worth a fair bit of pain. Sure, it’s pretty extensive and hard to implement a provider yourself, but it’s nowhere near as hard as implementing the whole thing from scratch.
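IQueryable itself is a .NET interface, but the underlying idea, recording query operations and deferring execution until a provider can translate them, can be sketched in a few lines. The TypeScript below is a toy analogue with hypothetical names; a real provider would compile the recorded operations to SQL or an HTTP call rather than replaying them against an in-memory array:

```typescript
// Toy analogue of the IQueryable idea: operations are recorded, not executed,
// until the query is materialised. All names here are illustrative.

type Predicate<T> = (item: T) => boolean;
type Op<T> =
  | { kind: "where"; predicate: Predicate<T> }
  | { kind: "take"; count: number };

class Queryable<T> {
  constructor(private source: () => T[], private ops: Op<T>[] = []) {}

  // Composing returns a new Queryable; nothing runs yet.
  where(predicate: Predicate<T>): Queryable<T> {
    return new Queryable(this.source, [...this.ops, { kind: "where", predicate }]);
  }
  take(count: number): Queryable<T> {
    return new Queryable(this.source, [...this.ops, { kind: "take", count }]);
  }

  // "Compilation" happens here: a real provider would inspect this.ops and
  // emit a targeted query instead of pulling everything back and filtering.
  toArray(): T[] {
    let result = this.source();
    for (const op of this.ops) {
      result = op.kind === "where" ? result.filter(op.predicate) : result.slice(0, op.count);
    }
    return result;
  }
}

const numbers = new Queryable(() => [1, 2, 3, 4, 5, 6]);
numbers.where(n => n % 2 === 0).take(2).toArray(); // → [2, 4]
```

The point of the deferral is exactly the one above: because the provider sees the whole composed query before anything executes, it can fetch only what is needed.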

An interface is not a runtime contract

At this point I’d like to make an observation on leaky abstractions. If you take a bunch of classes providing slightly disparate functionality and then hide them behind the same layer of abstraction, that abstraction is leaky. But flip it around and consider designing the interface first. Now if someone implementing that interface chooses to throw a not-implemented exception, that is not the same as being leaky.

No-one said that in your implementation all the methods have to be useful. An interface is not a runtime\library level concern; it merely specifies that a class will provide a set of members. And yes, IQueryable is a header interface… but it’s a damn useful one, and if you are really above that then you can have fun reimplementing it yourself in a coffee shop on your Macbook Air while the other hipsters watch.

Of course having said that, the onus is on you not to make a flawed implementation, or no-one will use it and that would be your own fault. But if you make an IQueryable provider that is useful and gets used (as many people have), then that speaks for itself more than any argument I can present.


But I digress… let’s go back to the criticisms that we have left:

  • It forces you to expose elements of your DAL through your API
  • It’s intertwined with MS tech\Entity Framework
  • It doesn’t work with loosely typed data

Notice anything? These are all criticisms of one particular implementation rather than of the standard itself. And what do we expect when a swathe of the community has overlooked it? There’s only one way to rectify the situation and iron out the remaining niggles… contribute to the standards process or implement a better version.

How you should view OData Query Syntax

Rather than focusing entirely on the negatives, let’s end by taking a look at some of the awesome things that a standardised query syntax does give us:

  • Provide rich data features quickly and easily
  • Apply filtering to any endpoint you like quickly and easily, and store filters as raw OData queries to be retrieved later
  • Benefit from third party components that also work against the standard, such as Breeze.js or KendoUI
  • Project data inline so you don’t consume any more bandwidth than needed
  • Queries against DTO projections can filter down to your DAO without directly exposing it, ensuring your database doesn’t do any more work than needed

In Summary

You can argue that OData itself rightly receives a certain level of criticism; however, the query syntax standard definitely does not deserve this treatment. It is elegant and powerful with great performance boons, yet it knows its place and doesn’t try to be something it’s not.

If you have a rich domain model then it may not be for you, but regardless I urge you to take another look at this highly underrated aspect of OData! If you are curious to find out more, take a look around this blog or search on Google for alternatives to the Microsoft offering.


P.S – Comments and discussion are particularly welcome after this post. The opinions expressed here are my own and I’d love to hear any counter arguments and thoughts!