
Why you shouldn’t use a web framework to build your next API

A few days ago I blogged about Nancy style Modules in Asp.Net Web API with Superscribe. Now I’m going to take this one step further and show how you can build a basic Web API without any web framework at all, using OWIN and Superscribe, a graph based routing framework for Asp.Net.


Technically this claim is open to interpretation… just what constitutes a ‘web framework’ is of course debatable. But in the context of this post, I am referring to those such as Nancy Fx, Service Stack, Web API, etc. All these frameworks are great, and simplify the development of complex apps in a myriad of ways, but we rarely stop to think about exactly what we need to achieve our goals.

Most frameworks cater for static content, rendering views, json/xml data and various other ways of providing content and can end up being quite bulky… some more so than others. But it’s actually more than possible to build a functional Web API with just a few basic elements, and the chances are it’ll be more performant too. All we need to do this are the following components:

  • Hosting
  • Routing & Route Handlers
  • Content negotiation
  • Serialisation/Deserialisation

With Asp.Net vNext and OWIN, hosting our app is a piece of cake. We can use any number of web servers seamlessly, even run our app out of the console. Json.Net is easily leveraged to provide serialisation & deserialisation… and there are other libraries for pretty much any other media type you may wish for.

We’re going to use Superscribe modules for handling our routes, so that just leaves us with content negotiation. In its simplest terms, conneg is just a way of mapping between media types and the correct serialiser/deserialiser. Superscribe.Owin has this feature built in, so we’ve already got everything that we need!

I’m sure there will be a few who will argue that Superscribe is technically a framework… but it’s specialist and not on the scale of the others I mentioned. And we could still roll our own routing if we so wished!

Setting up

Let’s put this into practice. Open up Visual Studio and create a new empty web project:

Create project

First of all, use the Package Manager UI or console to install the following packages. This will install the relevant OWIN bits and pieces and allow us to host our app using IIS Express.

Package Manager

Finally, install the Superscribe.Owin package along with the Superscribe core library.

Superscribe nuget

Enable Owin

We now need to make a quick modification to our Web.config file to ensure that OWIN will handle our requests, so open it up and add the appSettings section as found below:
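Something like the following should do the trick. Note that the owin:HandleAllRequests key is the one I’m assuming here; check the Katana documentation if your version uses a different setting:

```xml
<configuration>
  <appSettings>
    <!-- Assumed key: tells the Katana SystemWeb host to route every request
         through the OWIN pipeline rather than falling back to Asp.Net handlers -->
    <add key="owin:HandleAllRequests" value="true" />
  </appSettings>
</configuration>
```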

OWIN expects a file called Startup.cs in the root of your project, which it uses for configuration of Middleware components. In basic terms you can think of it as similar to global.asax in traditional Asp.Net. We’ll need to add one to tell OWIN that we want to use Superscribe to handle our requests:
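A minimal Startup looks something like this. The SuperscribeOwinConfig, UseSuperscribeRouter and UseSuperscribeHandler names are indicative only and may vary slightly between versions of Superscribe.Owin:

```csharp
using Owin;
using Superscribe.Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new SuperscribeOwinConfig();

        // Be prescriptive: only text/html responses are handled for now
        config.MediaTypeHandlers.Add("text/html", new MediaTypeHandler
        {
            Write = (env, o) => env.WriteResponse(o.ToString())
        });

        app.UseSuperscribeRouter(config)
           .UseSuperscribeHandler(config);
    }
}
```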

Here we’ve also configured Superscribe to respond to requests that accept the text/html media type. Because we aren’t using a web framework we’re dealing with OWIN at a very low level, so we need to be very prescriptive with our content negotiation. If it’s not in the list, it won’t be handled… more on this later.

Finally let’s add our module. This step is identical to the one in the Superscribe Web API hello world sample, except this time we need to inherit from Superscribe.Owin.SuperscribeOwinModule. Add the hello world handler and you should have a module that looks like this:
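Assuming the same handler syntax as the Web API sample, the module ends up looking roughly like this:

```csharp
using Superscribe.Owin;

public class HelloWorldModule : SuperscribeOwinModule
{
    public HelloWorldModule()
    {
        // Respond to GET requests at the root of the app
        Get["/"] = o => "Hello World";
    }
}
```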

Now we’re ready to go! If you’ve done everything right, you should see the Hello World message:

Hello owin

Adding support for Json

So far our API can only handle requests that accept text/html as a response, which is obviously a bit naff. As I mentioned previously, because we have no framework to help us out, we have to add support for media types manually on a case by case basis. We’re going to need something to serialise/deserialise our Json for us, so let’s go ahead and install ol’ faithful Json.Net.

Now let’s go back to Startup.cs and take a closer look at our Superscribe configuration. Adding MediaTypeHandlers is quite straightforward… in the case of text/html we’ve specified a Write function which simply calls ToString() on whatever we returned from our GET handler and adds the result to the response. If we want to be able to do model binding, we’ll also need to specify a Read function:
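With Json.Net installed, registering a handler for application/json is just a case of wiring both functions up to the serialiser. Roughly, that looks like this (the helper names for writing the response and reading the request body are assumptions on my part):

```csharp
using Newtonsoft.Json;
using Superscribe.Owin;

// In Startup.Configuration:
config.MediaTypeHandlers.Add("application/json", new MediaTypeHandler
{
    // Serialise whatever the handler returned straight into the response
    Write = (env, o) => env.WriteResponse(JsonConvert.SerializeObject(o)),

    // Deserialise the request body into the type requested by Bind<T>()
    Read = (env, type) => JsonConvert.DeserializeObject(env.ReadRequestBody(), type)
});
```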

It’s really not a huge amount of code, as Json.Net is doing most of the hard work for us. I’ve also extended our module with an extra handler to give us some interesting Json to look at. I’ve used a simple array, but feel free to be more creative:
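The extra handler is just another entry in the module, for example:

```csharp
// Returns a list that the application/json handler will serialise for us
Get["products"] = o => new List<string> { "Superscribe", "OWIN", "Json.Net" };
```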

Now we can fire up our app again and see the fruits of our labour. Just one thing of note: we’ll need to issue a request with an appropriate Accept: “application/json” header. You can use curl for this, or alternatively there are extensions available for most browsers to facilitate sending bespoke http requests. I’m using Dev Http Client for Chrome:

Dev http client

Not bad for a few minutes work!

Model binding

Just like with our Web API modules, we can bind content from the body of the request to a strongly typed model using the Bind function. This will invoke the Read function of our Media Type handler, but this time it will use the incoming content-type header to figure out which one to use:
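In sketch form, it works like this (Bind, the Product type and the status code property are illustrative — check the Superscribe.Owin samples for the exact members):

```csharp
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// In the module constructor:
Post["products"] = o =>
{
    var product = Bind<Product>();   // uses the incoming content-type to pick a Read function
    Response.StatusCode = 201;       // Created
    return "Received product: " + product.Name;
};
```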

In this case, our endpoint is expecting a product and responds accordingly to acknowledge receipt. We can also set the status code of the response as seen above (note that this is slightly different syntax from the Web API module).

Our product is pretty basic but suitable for this demonstration. Fire it up once again, and here’s the result… no hefty frameworks required!

Product Post


The future of Asp.Net is upon us, so pretty soon we’re all going to be getting to grips with OWIN in some shape or form. While its primary advantage is decoupling an application from its hosting environment & middleware, OWIN’s low-level, close-to-the-metal nature makes it very powerful and enables us to do things such as the sample in this post with very little effort.

This also raises an interesting and important point about framework usage. Just as some are reconsidering whether they really need to include all of jQuery just to do a few select bits of DOM manipulation, so we should be asking ourselves: do we really need a whole web framework? Trying to do something manually is often a great way to better understand it.

As always, you can find the source for this post on github here:



Nancy style Modules in Asp.Net Web API with Superscribe

After months of work I can finally announce that the first graph based routing framework for Asp.Net is pretty much complete. I’ll be writing another ‘Introducing’ post for this project shortly, but in the meantime I hope you’ll find this little taster intriguing.

UPDATE: You can now use Superscribe Modules with OWIN too.

Inspired by NancyFX

If you don’t know much about Nancy (shame on you) then take a quick look at their site and you’ll quickly get the lay of the land. Nancy is built on the ‘Super Duper Happy Path’ principle which put simply is a fourfold philosophy – “It just works”, “Easily customisable”, “Low ceremony” & “Low friction”.

Hello world in Nancy looks like this:
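For those who haven’t seen it before, it’s essentially this:

```csharp
public class SampleModule : Nancy.NancyModule
{
    public SampleModule()
    {
        Get["/"] = _ => "Hello World!";
    }
}
```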

Couldn’t be much simpler if it tried… particularly when it comes to defining routes and the code that handles them. I can’t stress how much I would like to thank the Nancy team for inspiring the approach covered in this post, so if you are reading this: keep up the good work guys, and I hope you understand that imitation is the greatest form of flattery!

Ask anyone who uses Web API on a daily basis and they’ll generally tell you that the default MVC style routing is a bag of balls. Wouldn’t it be nice if we could use this module style within Web API?

Leading question much?

With the release of Superscribe.WebAPI 0.2.2 we can now do just that, and here’s how.

Fire up Visual Studio and create a new MVC 4 project:

Create Project

Choose to create an empty web application when prompted:

Empty web application

Once your solution loads, go ahead and delete the App_Data, Controllers, Models and Views folders as we won’t be needing them.

Delete crap
You can also remove the RouteConfig.cs file in App_Start. Don’t forget to remove the reference to this in Global.asax too.

Hello, Superscribe

Next, use the package manager to install the Superscribe.WebApi package. This will also install the Superscribe core library.

Install Superscribe

Once that’s complete, we can add our module. Create a new class called HelloWorldModule, and make it inherit from Superscribe.WebApi.Modules.SuperscribeModule.

Just like in the Nancy sample, all we need to do is add our GET handler to the module. When finished we should have something like this:
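In other words, something along these lines:

```csharp
using Superscribe.WebApi.Modules;

public class HelloWorldModule : SuperscribeModule
{
    public HelloWorldModule()
    {
        Get["/"] = o => "Hello World";
    }
}
```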

Finally, we need to tell Asp.Net that we want to route using Superscribe. We can do this in WebApiConfig.cs by removing the default routing logic and replacing it with the following. I’ve also removed the xml formatter for good measure.
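The gist of it is below — the SuperscribeConfig.Register call is indicative, so check the exact registration method name in your version of the package:

```csharp
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Hand all routing over to Superscribe instead of config.Routes.MapHttpRoute(...)
        SuperscribeConfig.Register(config);

        // Remove the xml formatter for good measure
        config.Formatters.Remove(config.Formatters.XmlFormatter);
    }
}
```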

That’s it, we’re done; go ahead and hit start. Super Duper Happy Path, have you met Asp.Net Web API?

Hello world

Parameter Capture

At this point I should point out that Superscribe routes are defined fundamentally differently to routes in Nancy, or indeed Web Api’s attribute routing. Superscribe is a graph based routing framework, so route definitions consist of strongly typed segments.

There’s plenty of syntactic sugar to help us out along the way of course. The best way of demonstrating how this affects us is by extending our example to include a parameter. The following is equivalent to a route /Hello/{Name} where name is a string parameter:
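Using the DSL, that looks roughly like this (the parameter access syntax is indicative):

```csharp
// Equivalent to /Hello/{Name}, where Name is captured as a string parameter
Get["Hello" / (ʃString)"Name"] = o => "Hello, " + o.Parameters.Name;
```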

As the documentation matures I’ll be filling in the gaps, but in brief Superscribe uses a DSL for defining graph nodes and the edges between them. In this example, the Get assignment attaches a very simple two node graph to the base node where all routes originate. Here’s the result:

Hello Roysvork

Funky interlude

Using the DSL and strongly typed segment definitions, we can harness the full power of graph based routing. As I mentioned in my previous post, all route nodes can have an Activation function, an Action function, and a Final function defined. Superscribe is doing a whole bunch of stuff for us setting these up:

  • The “Hello” node is created with an Activation function of segment == “Hello”, and there is no Action function.
  • In the case of the (ʃString)”Name” node, the Activation function is set as a Regex matching any string, and the Action function captures the segment value as a named parameter.
  • In this mode of usage (there are others) the Final function is defined by the assignment to Get[…]

In this module mode, Superscribe also allows us to specify activation functions through the DSL. Remember, an activation function dictates whether or not the current segment is a match for this node. For example, we can do the following:
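A route that only activates during the first 45 seconds of each minute might look something like this — very much a sketch, as I’m paraphrasing the DSL from memory:

```csharp
// Activation function: match "Hello" only during the first 45 seconds of each minute
Get[(ʃ)(segment => segment == "Hello" && DateTime.Now.Second < 45) / (ʃString)"Name"] =
    o => "Hello, " + o.Parameters.Name;

// Fallback for the last 15 seconds - order matters, or the route above would never lose
Get["Hello" / (ʃString)"Name"] = o => "Go away, " + o.Parameters.Name;
```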

So now the behavior is the same for the first 45 seconds in every minute. For the last 15 seconds, you get insulted. The order of the routes is of course important here, otherwise the regular route will match all the time and the time dependent one won’t get a look in. It’s a pretty useless example but a nice demonstration nonetheless!

One more thing worthy of note here… the DSL relies on operator overloads and implicit casts to do its thing. Sometimes it needs a helping hand though… if it doesn’t have a node of the correct type to ‘anchor’ itself on, as in the example, we need to add a leading ʅ / to get our route to compile.

Model binding and other Methods

Back to serious mode now, and on to some more mundane but nonetheless important features. Our modules would be pretty useless without support for DELETE, PUT, POST, and their stingy friend PATCH… and without some way of making sense of the Http content we’ve received as part of the body of the request.

As you may expect, you can use indexers to handle other methods just like with Get. The model binding again borrows from Nancy, so a POST implementation works using the Bind method like so:
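For example (Product is a stand-in type here):

```csharp
Post["products"] = o =>
{
    // Binds the request body to a Product using the incoming content-type
    var product = this.Bind<Product>();
    // ... store the product somewhere ...
    return HttpStatusCode.Created;
};
```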

Returning just a status code is not very RESTful of course, but given that this is just Web API underneath, you can still use HttpResponseMessage to do whatever you like:
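Along these lines:

```csharp
Post["products"] = o =>
{
    var product = this.Bind<Product>();
    return new HttpResponseMessage(HttpStatusCode.Created)
    {
        Content = new StringContent("Created product: " + product.Name)
    };
};
```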

Dependency Injection

Finally for this post, we’ll have a look at how dependency injection works in a Superscribe module. Unlike a Web API controller or a Nancy module, Superscribe modules are static for all intents and purposes, being instantiated once at application startup in order to register their routes with the framework.

As a consequence, we can’t inject dependencies straight into our module. We can however call the Require function which leverages the dependency resolver in MVC 4. Here I’m using Ninject, but of course you could use any framework of your choice.
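So a handler that needs a repository would look something like this (IProductRepository being a hypothetical service registered with your container):

```csharp
Get["products"] = o =>
{
    // Resolved through the MVC 4 dependency resolver (Ninject in my case)
    var repository = Require<IProductRepository>();
    return repository.GetAll();
};
```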


I hope you enjoy having a play around with Modules in Web API… this is just one way you can use Superscribe. One of the design goals of the project was to support different approaches to routing, and I will cover more of these in future posts.

Superscribe has many features already implemented that aren’t discussed in this post… such as regex matching and optional nodes, which I’ll also cover in the near future. If you’re thinking of trying stuff out, just bear in mind that the following things won’t or might not work so well just yet, but are on my list of things to do:

  • HttpMessageHandlers \ Global Filters
  • Co-existing with normal controllers
  • Co-existing with attribute routing
  • Co-existing with just about anything

For now, please take this framework for what it is… it’s under development, quite possibly buggy, and changing all the time. Probably don’t use it in a production app just yet, but *do* let me know via the Github project if you try to do things and they don’t work. Things will become stable real soon!

You can find all the source from the examples in this post here:

Once again, a big shout out to the folks involved with Nancy Fx who inspired this approach!



Graph Based Routing


I’d like to share with you a concept that I’ve been working on for some time. I’ve yet to decide on a good name for it, but at the moment I call it ‘Graph based routing’.

Routing is important… no, vital to most web applications. Whether it’s client side or server side, C# or Node.js, nothing (sensible) can happen without some kind of routing. Yet despite this, it’s usually treated as a secondary aspect of the development process… a mere peasant in your chosen web stack. Routing is a dull, unintelligent series of pattern matches and parameter capture.

This is an outdated view. Routes in a contemporary web app are complex, often hierarchical beasts. Routes such as these can be prohibitively costly both to maintain and to execute, and as such we are prevented from making the most of things. Enter graph based routing.

Aims and Overview

Graph based routing is designed to improve upon traditional routing strategies in terms of route definition, performance and flexibility. As the name suggests, defined routes are stored in a graph structure… each node representing a URI segment. Edges in the graph then represent links to the segments that follow, e.g.:

Definition for routes:


It becomes clear why this is a good way of representing routes when you introduce slightly more complexity:

Definition for the routes /api/products/bestsellers, /api/products/{id} and /api/categories/{id}:

graph complex

Instead of having to store each route separately and in its entirety, we create links between segments that have a common predecessor.

The route matching algorithm no longer needs to scan (potentially) the whole route table for matches… just consider the next segment and then choose only between those possibilities represented by the graph edges.

Route definition

The benefits extend to route definition and maintenance too. Consider a theoretical DSL with special ‘/’ and ‘|’ (or) operators to define the above routes as a graph:

   routes = "api" / (
         "products" / ( "bestsellers" | "id" )
       | "categories" / "id"
   )

Even with such a small number of routes, it’s possible to see an improvement in readability and a reduction in redundant code. Because routes are broken down into objects, we can re-use parts of them and even define complex routes programmatically or by convention.

These routes are all in one place… but that’s not to say this couldn’t be extended to cater for a definition that involves placing routes near to where they are handled. In the case of Asp.Net/C# we could assign parts of routes to static variables and (thread safely) attach more routes to them from anywhere else in our code, use lambdas in controllers, or a myriad of other techniques.

Parsing Strategy

Implementing a graph based routing engine dictates that we parse URIs by traversing the graph of route definitions in a particular way. In traditional routing we can make use of pattern matching and constraints in order to decide how to interpret segments, but we cannot easily use this information to make choices.

With graph based routing this becomes easy by defining an Activation function for each graph edge. Rather than being limited to pattern matching, an activation function contains arbitrary code that must return true or false (matched or not matched) based on the value of the current URI segment.

If an edge is matched, we transition to the succeeding graph node which can then execute an Action function. If an edge is not matched, we move on to the next edge in the sequence until we either have a match or run out of options.

Once route parsing is complete, the Final function of the last node is executed. If the last node does not provide one, the engine must execute the final function of the last travelled node that did.
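To make the traversal concrete, here’s a minimal sketch of the matching loop in C#. None of these types come from a real library — they simply illustrate the activation/action/final mechanics described above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Node
{
    public Func<string, bool> Activation;                       // does this segment match?
    public Action<string, Dictionary<string, object>> Action;   // e.g. capture a parameter
    public Func<Dictionary<string, object>, object> Final;      // executed at the end of the route
    public List<Node> Edges = new List<Node>();
}

public static class Router
{
    public static object Walk(Node baseNode, string path)
    {
        var state = new Dictionary<string, object>();   // parameter values etc.
        var current = baseNode;
        var lastFinal = baseNode.Final;

        foreach (var segment in path.Trim('/').Split('/'))
        {
            // Try each edge in turn until an activation function matches
            var next = current.Edges.FirstOrDefault(n => n.Activation(segment));
            if (next == null)
                throw new InvalidOperationException("No match for segment: " + segment);

            if (next.Action != null) next.Action(segment, state);  // transition succeeded
            if (next.Final != null) lastFinal = next.Final;        // remember the last final function
            current = next;
        }

        // Execute the final function of the last travelled node that defined one
        return lastFinal(state);
    }
}
```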

State machine

In effect, a graph based routing engine is a finite state machine with the URI as input. In order to understand the purpose of the Activation and Action functions, let’s look at a real world example:

Default Web API routing case:
    /api/{controller}/{id} - (optional)


We can see how our graph has produced a very simple state machine, with only one valid transition at each state apart from the optional nature of the id parameter. Pseudo code for each activation function can be seen above each transition line, and a summary of the action function inside each circle.

See also that the optional nature of our Id parameter has created a transition that ‘jumps’ over the Set Id node if a value is not present. Two other things to note here:

  • The final state of the machine is not directly mapped to a graph node, instead this is a result of executing the Final Function.
  • If at any point we fail to find a match for the next URI segment, we transition to an error state and execute an Error Function.


Through this use of Activation, Action & Final functions, we can reproduce all the functionality of traditional routing mechanisms, while at the same time providing developers with the ability to execute whatever code they like at each stage of the process. You could view each graph node as a miniature piece of middleware.

The routing engine must provide a mechanism for storing ‘state’ while the FSM is running, in order to store things like target controller/action names, parameter values or anything else that the developer needs to implement their node functions. It must also allow nodes to access information about the current Http request/response.

At a basic level, we could easily implement custom model binding, deserialization, or logging as part of our custom action functions. If we take this further we can do some very cool things… consider an implementation for a Javascript app in which certain actions cause nested elements to open. For certain routes we could show, hide or create DOM elements per segment in order to reconstruct the UI state after a full page load.

If we so desired, we could code our activation functions so that our API routing would make different choices based on the user that was currently logged in, or even the time of day. Routing also no longer needs to be linear… by storing state we can even defer making a decision about what to do with our route segments until we execute the final function so we can use all the information available to us.


With graph based routing, you can make routing a first class citizen in your web application. Although we talked a lot about analogues with Asp.Net / Web API routing, the concept is totally platform independent.

What’s really cool when applying this server side is that you don’t even really need a web framework, so long as your action/final functions are able to send http responses. Alternatively you could even choose which framework you wish to use to service your request, which brings me onto another point… now that we are moving towards the OWIN based future, routing should become a middleware concern! But that’s a topic for another post.

All this may have been theoretical, but I am working on implementations for Javascript, Web API and OWIN which are all at various stages of development. I hope to have some more news on this in the next few weeks, so stay tuned and please let me have your thoughts and feedback. This is a brand new concept and any contributions will be gladly received.


Appendix – Glossary

  • Route – A potentially valid URL route made up of nodes, transitions & actions.
  • Segment – Part of a URL separated by ‘/’
  • Graph – The representation of all routes for an application
  • Node – A node in the route graph – typically representing a potential route segment match.
  • Base Node – The common root node of every route, and the starting point for route parsing.
  • Error Node – A node that is reached when an error occurs.
  • Transition – A link between two nodes by which we can transition from one to another.
  • Activation function – Determines whether or not a transition can be made.
  • Action function – Executed after a transition has successfully occurred.
  • Final function – Executed by a node *only* if it is the last node in the route.

Is using OData\IQueryable in your Web API an inherently bad thing?

I recently came across this article from a year or so ago, along with some comments and analysis on the matter. The writer and many commentators have a very strong opinion that IQueryable is a leaky abstraction, that this is bad, and that basing APIs around it is also bad.

Now I know these posts are old, but as I’ve recently built an API which exposes an IQueryable, I thought I’d weigh in. Partly I’m curious… I’m interested to know if people still think this way given the recent shift towards REST and HTTP centric APIs in general. But it’s also connected to other debates that are more pertinent, such as this one regarding the Repository Pattern, and also something that goes hand in hand with IQueryable… OData.

The OData Issue

I’m going to start by setting out my position from the outset. I think that OData is misunderstood and has some very useful components… but at the same time it is bulky and tries to do too much. The concept of metadata reeks of WSDL, the current implementation is tightly coupled to Entity Framework, many of the mechanisms\outputs are noisy and there’s an overall, very particular approach reminiscent of learning to work with WebForms.

Where OData has got it very, very right however is with the URI Query syntax. This provides a relatively clean, pragmatic syntax allowing the client to request filters, paging\ordering and projections on an endpoint that the server will then honour when providing data… all via the querystring. And to top it all off, this is standardised and pretty well documented.

I’m a great believer in standards as a way of encouraging useful, interoperable frameworks\libraries (promises anyone?) but Microsoft currently dominates both the standard and the implementation and this seems to have a negative network effect that prevents the true usefulness of this syntax from gaining ground. And that is something that I would like to, and am currently trying to address.

I feel I should also add a disclaimer here before we get in too deep. I do not advocate OData query syntax for use with complex domains or DDD. Where this standardised syntax is useful is to augment API methods that provide data that may be displayed in a form of list, tabulated or used in charts and graphs.

OData – the good parts

Now that I’ve hopefully made my position clear, the rest of this post is going to focus entirely on the query aspect of the OData standard that I have mentioned. This feature has ended up tarred with the same brush as the rest of OData, and has had the same vehement criticism directed at it.

To dismiss such a powerful, useful tool for developing APIs for this reason is absolute madness. I’m actually pretty shocked at the number of people in the community that I respect highly for their skill and pragmatism that won’t even give OData query syntax the time of day. Yet when I have conversations about it, it turns out that a lot of the major problems don’t really exist at all.

I’ve made it very clear that I’m not currently interested in the rest of what OData has to offer… this isn’t to say that it’s inherently bad though. For experienced OData scholars designing CRUD based systems, or for use in Sharepoint style services, I’m sure it has a lot of merit. I just tend to design my APIs so that they draw more from RESTful principles, and wider OData features aren’t generally flexible enough or as widely adopted.

So what’s the beef?

Common criticisms of OData that people have include:

  • It’s not RESTful
  • It encourages anaemic models and so doesn’t sit well with DDD
  • IQueryable is a leaky abstraction that is rarely (or never) fully implemented
  • It forces you to expose elements of your DAL through your API
  • It’s intertwined with MS tech
  • It doesn’t work with loosely typed data

Now I’ve struck through two of these criticisms already, as they don’t really apply to the query syntax. If we’re being real sticklers, it is kinda hard to take an endpoint that accepts OData query syntax and make it discoverable as would be required for REST. But it’s by no means impossible. The second is the weakest and even daftest criticism of the lot. OData just isn’t designed for complex domain models, but exactly the opposite – anaemic CRUD stuff.

I’ll come back to the last two in just a moment, but as promised I said I’d weigh in on the IQueryable debate. It’s pretty well understood that leaky abstractions should be avoided. And that inversion of control is something that makes code easier to write, and maintain. We also all know that when it comes down to it, it’s really about how much an abstraction (particularly a header interface) brings to the table vs. how much pain it causes us so I won’t be drawn on a debate about that.

IQueryable in my book is a complete no-brainer. It gives us a common syntax – Linq – that we can access as an expression tree and compile to whatever we want. You can use it to control exactly what comes back from the provider, reducing query times, bandwidth and throughput. These things in themselves should make it worth a fair bit of pain. Sure it’s pretty extensive and hard to implement a provider yourself, but it’s nowhere near as hard as implementing the whole thing from scratch.

An interface is not a runtime contract

At this point I’d like to make an observation on leaky abstractions. If you take a bunch of classes providing slightly disparate functionality, and then hide them behind the same layer of abstraction then this is leaky. But flip it around and consider designing the interface first. Now if someone implementing this interface chooses to throw a not implemented exception, this is not the same as being leaky.

No-one said that in your implementation all the methods have to be useful. An interface is not a runtime\library level concern, it merely specifies that a class will provide a set of members. And yes, IQueryable is a header interface… but it’s a damn useful one, and if you are really above that then you can have fun reimplementing it yourself in a coffee shop on your Macbook Air while the other hipsters watch.

Of course having said that, the onus is on you not to make a flawed implementation, or no-one will use it and that would be your own fault. But if you make an IQueryable provider that is useful and gets used (as many people have), then that speaks for itself more than any argument I can present.


But I digress… let’s go back to the criticisms that we have left:

  • It forces you to expose elements of your DAL through your API
  • It’s intertwined with MS tech\Entity Framework
  • It doesn’t work with loosely typed data

Notice anything? These are all criticisms of one particular implementation rather than the standard itself. And what do we expect when a swathe of the community has overlooked it? There’s only one way to rectify the situation and iron out the remaining niggles… contribute to the standards process or implement a better version.

How you should view OData Query Syntax

Rather than focusing entirely on the negatives, let’s end by taking a look at some of the awesome things that a standardised query syntax does give us:

  • Provide rich data features quickly and easily
  • Apply filtering to any endpoint you like quickly and easily, and store filters as raw OData queries to be retrieved later
  • Benefit from third party components that also work against the standard, such as Breeze.js or KendoUI
  • Project data inline so you don’t consume any more bandwidth than needed
  • Queries against DTO projections can filter down to your DAO without directly exposing it, ensuring your database doesn’t do any more work than needed

In Summary

You can argue that OData itself rightly receives a certain level of criticism, however the query syntax standard definitely does not deserve this treatment. It is elegant and powerful with great performance boons, yet it knows its place and doesn’t try to be something it’s not.

If you have a rich domain model then it may not be for you, but regardless I urge you to take another look at this highly underrated aspect of OData! If you are curious to find out more, take a look around this blog or search on Google for alternatives to the Microsoft offering.


P.S – Comments and discussion are particularly welcome after this post. The opinions expressed here are my own and I’d love to hear any counter arguments and thoughts!


New Features in Linq to Querystring v0.6

Linq to Querystring v0.6 has just gone live on Nuget, and contains a whole bunch of new features. This has been the biggest update so far, and brings together some vital components & bug fixes, as well as some cool new bits.

Take a look at the summary below, and also try stuff out on the updated demo site. As usual you can also find the source on our github page, and download the latest version via NuGet!

Server side page limit

You can now specify a hard page-size limit for OData queries so clients can’t just hammer your server repeatedly. You can do this via the Web API action filter:
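Something like this — the attribute name and parameter are as I remember them, so double check against the current package:

```csharp
public class UsersController : ApiController
{
    private readonly MyContext dbContext = new MyContext();  // hypothetical EF context

    [LinqToQueryable(MaxPageSize = 1000)]
    public IQueryable<User> Get()
    {
        return dbContext.Users;
    }
}
```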


Or directly via the LinqToQuerystring extension method

dbcontext.Users.LinqToQuerystring("$skip=3000", maxPageSize: 1000);

Clients can still request a page size smaller than this; the max will only kick in if their specified page size is greater, or omitted.

We’re also planning to add more control over queries and allowed operators, similar to those provided by the WebApi OData offering.

(More) complete list of data types

In addition to the existing String/Int/Date/Complex properties, we’ve now got around to testing and ensuring that the following data types will also work as expected:

Type Example
Byte 0..255 or 0x00 to 0xFF

You can check out the OData specification for more details on the format of each data type.

Please note that specifying a byte in hex form may not be part of the v3 specification… I haven’t yet managed to find the relevant section of the v3 spec concerning data types, so if anyone can point me to it, please let me know in the comments.

Any/All on enumerable properties

Any & all are defined in the OData v3 spec, and now work with Linq to Querystring too:

$filter=Tags/any(tag: tag eq 'Important') // Find any records tagged as important
$filter=Orders/all(order: order.Size > 10000) // Find customers that have placed only large orders

As long as your Linq Provider supports the query, you can use these with loosely typed data too by marking a property as dynamic using [ ]:

$filter=[Tags]/any(tag: tag eq 'Important')
$filter=[Orders]/all(order: order.[Size] > 10000)

Numeric aggregates

With v0.6, you can now also use the following aggregate functions against Enumerable properties in your queries:

Function Example
$filter=Tags/count() gt 1
$filter=Value/sum() ge 100000
$filter=Result/average() lt 50
$filter=Grade/max() eq 'A'
$filter=Grade/min() eq 'F'

Min and Max will work with any comparable data types, subject to support by the underlying Linq Provider. All the others will only work with int/long/single/double. None of the above functions take any sub-queries or parameters at this time.

Please note that these aggregates are not in the OData specification as of v3 (although they do have Linq equivalents), and the format may change if and when they are added.

Bug fixes

We’ve also addressed some stuff that has come out of the woodwork while tinkering, particularly when using Linq to Querystring against loosely typed data in MongoDB:

  • If either side of a comparison is of type Object, such as when using the dynamic keyword, the framework will attempt to convert that property to the type of its counterpart.
  • When an operand evaluates to a boolean and its counterpart is a constant, the constant is removed from the comparison to address issues with Linq providers such as Mongo and Entity Framework.
  • Constant expressions can now feature on either side of a comparison
  • Added an extensibility point to allow conversion of certain types when creating enumerable expressions, to facilitate situations where an enumerable type is not generic.
  • Added the ability to specify an extra cast when dealing with types that a linq provider does not directly support, but can be boxed to another type such as single->double, byte->int.

We need your feedback

I hope you’ll find some of these features useful… we’ll be covering some more specifics relating to Mongo DB very soon too, so watch this space!

As always please comment or let us know if you like Linq to Querystring and are using it for your project, or if you would like to see any particular features added.


Getting started with Linq to Querystring Part 2 – Filtering Data

In this second post in my introductory series, I’m going to take a look at how we can filter the results from our API using OData\Linq to Querystring. I’m going to be building on the paging sample from the last post, which you can find here if you want to follow along:

This post is intended as a step by step guide, so if you’re just looking for a reference on what you can do with Linq to Querystring, feel free to skip the first two sections.

Making things more interesting

Because our previous sample only contained a single string value (not very interesting for filtering purposes!), I’ve extended things slightly as a starting point for this post. We’re still hardcoding the data, but there’s now a concrete class with multiple properties so we have something to play around with:

This really seemed like a good idea for a demo class until it came to populating it with sample data. I’m not a movie buff… so after a good amount of googling here’s my test data:

I’ve made sure to include a mix of different property types so we can apply a range of filters, so if you’re using your own test data make sure to do this too.

Rendering results as a table

Last time we used knockout to bind the data coming back from our api into a table, and the markup looked like this:

This was fine for retrieving a list of single values, but now we need it to display all the properties of each movie. We could just hard code the columns, but there’s a little trick we can use instead.

As objects in Javascript are just collections of properties, we can iterate over each one, read the property name for each and then add these values to an array from which we can bind our headers. Here’s the modified function that gets the data from our api:

Note that we’ve also added a new observable array to our viewmodel called headers. And to take advantage of that in our UI, we now just need to tweak our table html also:

Remember, in Javascript accessing an object’s properties directly by name is exactly equivalent to accessing them via indexer syntax, i.e:

record.Title === record["Title"]

We’re using this trick, together with nested foreach bindings, to render each column. To use the indexer syntax, we need to be able to refer to values from both foreach bindings, which we can do using aliases. If these bindings seem a little confusing, take a look at the knockout foreach documentation, and see Note 3: Using “as” to give an alias to “foreach” items
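To make the header trick concrete, here’s a minimal sketch of the idea (a hypothetical standalone helper; the sample’s actual getData function does the equivalent inline against the knockout observable array):

```javascript
// Build the list of column headers by reading the property names from the
// first record, so the table adapts to whatever shape the API returns.
function extractHeaders(records) {
  var headers = [];
  if (records.length > 0) {
    for (var prop in records[0]) {
      if (records[0].hasOwnProperty(prop)) {
        headers.push(prop);
      }
    }
  }
  return headers;
}
```

Each header can then be used both as the column title and as the indexer key when rendering cell values.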

Table with columns

Et voilà, a nice table. The labels aren’t perfect and the dates are ugly, but it was very simple to achieve.

The $filter operator

So, now that we’ve got some more meaningful results, we can get back to the task at hand. In OData, we use the $filter= query operator to tell our API that we want to apply a filter to our data. Linq to Querystring takes this query filter and then converts it to a Linq ‘Where’ expression.

A few examples (it’s worth noting that the whitespace here is important):

http://localhost/api/values?$filter=Title eq 'Avatar'
http://localhost/api/values?$filter=MetaScore ge 60
http://localhost/api/values?$filter=Recommended eq true
http://localhost/api/values?$filter=not Recommended

No prizes for guessing what the first one does of course, but some of the other operators, such as ge (greater than or equal), aren’t so obvious. These examples give us an idea of the general format for a filter expression in OData: we can reference properties on our model, specify a comparison operator, enclose strings in single quotes, and even specify unary boolean expressions.

Here’s the full list of logical operators from the OData v2 specification:

Operator Description Example
Logical Operators
Eq Equal /Suppliers?$filter=City eq ‘Redmond’
Ne Not equal /Suppliers?$filter=City ne ‘London’
Gt Greater than /Products?$filter=Price gt 20
Ge Greater than or equal /Products?$filter=Price ge 10
Lt Less than /Products?$filter=Price lt 20
Le Less than or equal /Products?$filter=Price le 100
And Logical and /Products?$filter=Price le 200 and Price gt 3.5
Or Logical or /Products?$filter=Price le 3.5 or Price gt 200
Not Logical negation /Products?$filter=not StockAvailable
Grouping Operators
() Precedence grouping /Products?$filter=Price lt 30 or (City eq ‘London’ and Price lt 50)


The full OData specification provides a whole host of functions that allow us to manipulate values within our expressions. Linq to Querystring currently supports a very basic subset of these, which will grow as development continues.

Currently only three string functions are supported, the bare minimum which allow us to do useful string searches:

Function Example
String Functions
bool substringof(string p0, string p1) /Customers?$filter=substringof(‘Alfreds’, CompanyName)
bool endswith(string p0, string p1) /Customers?$filter=endswith(CompanyName, ‘Futterkiste’)
bool startswith(string p0, string p1) /Customers?$filter=startswith(CompanyName, ‘Alfr’)

Escape characters

As with all string comparisons, we need to be able to use escape characters to represent certain values in our filters. OData is no exception, and Linq to Querystring uses the following escape sequences:

Sequence Meaning
\\ \ (backslash)
\t Tab
\b Backspace
\n Newline
\f Form feed
\r Carriage return
\' ' (single quote)
'' ' (single quote – alternate)

Please note that while these work with Linq to Querystring, they may or may not be compatible with other OData providers.
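If you’re building filter strings by hand on the client, you’ll need to apply these sequences before embedding user input in a string literal. Here’s a rough sketch (a hypothetical helper, not part of Linq to Querystring; the OData filter UI plugin used later handles this for you):

```javascript
// Escape a raw string value for use inside an OData single-quoted literal,
// using the backslash sequences from the table above.
function escapeODataString(value) {
  return value
    .replace(/\\/g, "\\\\")  // backslash first, so we don't double-escape
    .replace(/'/g, "\\'")    // single quote
    .replace(/\t/g, "\\t")   // tab
    .replace(/\n/g, "\\n")   // newline
    .replace(/\r/g, "\\r");  // carriage return
}
```

Note the backslash must be escaped first, otherwise the backslashes introduced by the later replacements would themselves get doubled.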

Creating a basic search UI

Hopefully that all makes sense, so back to the sample project. We now want to add the ability to specify an OData filter when pulling down data from our API. We could do this manually like we did with the paging, but it’s a lot more complex.

For our search UI I’m going to use a jQuery plugin called OData filter UI, which will take care of generating the filter string for us. Currently in pre-release, this plugin will be the subject of its own post in this series at a later date. You can follow progress on the github page. For now, install the plugin using nuget:

Install-Package jQuery.ODataFilterUI -Pre

Make sure you’ve added the jquery.odatafilterui-0.1.js file to your bundles or otherwise included it in the page. In order to use the plugin, we add a textbox as a base and then apply the plugin, which creates the more complex bits of the UI. Here’s the markup and the js code that invokes the plugin and tells it what our fields are:

Because the plugin needs to be flexible enough to fit into any UI, it comes with no default styling. I’ve neatened things up a bit using some css, which you can see in this gist if you like. Either way, you should see something like the following:

Initial filter ui

Have a play around with the UI to familiarise yourself… it’s fairly straightforward to add or remove filters. You’ll see that for each data type the contents of the operator drop down change accordingly, and also the input type reflects this too. Currently each filter that you add will get ‘ANDed’ together.
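Under the hood, ‘ANDing’ the filters together is just a matter of joining the individual clauses with the OData and operator. A rough sketch of what the plugin produces (a hypothetical helper for illustration; this is not the plugin’s actual code):

```javascript
// Combine individual filter clauses into one $filter expression by
// joining them with the OData 'and' operator, skipping empty clauses.
function combineFilters(clauses) {
  return clauses
    .filter(function (c) { return c && c.length > 0; })
    .join(" and ");
}
```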

All that’s left to do now is wire up the filter to our api call. Here’s the final version of the getData function including paging & now filtering:

As you can see, the OData Filter UI plugin has done the hard work of constructing the filter string for us via the getODataFilter() method.

We’ve also refactored the creation of the url to ensure we use ? and & appropriately to separate the querystring elements from the url and from each other.
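The separator rule is simple: the first querystring element follows a ?, and every subsequent one follows a &. A small sketch of that refactoring (a hypothetical helper; the sample’s getData builds the string inline):

```javascript
// Build a request url from a path and a list of querystring elements:
// '?' before the first element, '&' before each subsequent one.
function buildUrl(path, params) {
  var url = path;
  for (var i = 0; i < params.length; i++) {
    url += (i === 0 ? "?" : "&") + params[i];
  }
  return url;
}
```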

Try out a few different filters, and use your favourite debugger to inspect the url that gets generated. Here are a few examples:

Date filter

http://localhost:54972/api/values?$filter=ReleaseDate lt datetime'2000-01-01T00:00'&$top=5&$skip=0&$inlinecount=allpages

Complex filter

http://localhost:54972/api/values?$filter=Recommended eq true and MetaScore gt 55&$top=5&$skip=0&$inlinecount=allpages

Startswith filter



Starting with the code from Part 1, we’ve changed the test data to return a complex type with different properties, and updated the rendering of the results to reflect this.

We’ve looked at the OData syntax for filtering data, comparison operators, escape sequences and some of the string functions available. We’ve also seen how we can use the jQuery.ODataFilterUI plugin to provide a basic search UI.

Once again, you can check out the Linq to Querystring github page here: and if you want to download the final source for the example in the post you can find that here:

Stay tuned for the next few posts in the series, in which we’ll cover ordering of results, dealing with complex properties and collections, and how Linq to Querystring can work with Mongo DB to query loosely typed data.



Getting started with Linq to Querystring Part 1 – Paging data

It’s been a little while now since I released Linq to Querystring into the wild… we’ve since solved a few issues and it’s been put to use in some real-world applications. Thanks to everyone who’s provided feedback so far!

So let’s have a look now at some of the practical applications for Linq to Querystring (and for OData in general) from a beginner’s perspective. In this post I’ll take you through creating a sample table with paged data using Web API\Linq to Querystring from start to finish.

Getting set up

Fire up Visual Studio and start a new ASP.Net MVC 4 project:

New project

Choose a suitable name and click OK. Then on the next screen, select the Web API template:

Web api template

Leave the rest of the settings as default, and click OK again to create the project.

Once everything loads up, we just need to install LinqToQuerystring before we can get started. To do that, open the package manager console (View->Other Windows->Package Manager Console if it’s not open already), and type the following:

install-package LinqToQuerystring

If all goes well, you should see something like this:

Install package

Also make sure to add the WebAPI extension to make things even easier to use:

PM> install-package LinqToQuerystring.WebApi

Now we’re ready to get started.

Setting up the API

First we need to write some code in our API so that we can retrieve some values. Linq to Querystring can work with any type of data source or format, so long as your API method can return an IQueryable<>. Open up the ValuesController.cs file that was created for us when we started the project.

It will have the standard methods for the main HTTP verbs as usual… for this sample we’re only interested in retrieving multiple records, so we can lose everything apart from the Get method.

Change this method to return an IQueryable instead of an IEnumerable; you’ll also need to use the AsQueryable() extension method on the return statement. Finally, add some more sample strings to the array and give them more imaginative values than just ‘value1’, ‘value2’ otherwise it’s very dull.

If you like you can hook this up to a source of complex objects, from Entity Framework or your favourite document database solution. I’ve just hard coded some values for simplicity as the example works just as well.

If everything has gone to plan, you should be able to fire up your solution and browse to http://localhost:<port>/api/values and get some data back:

Xml bleurgh

Ugh! XML. This isn’t the 90s. Let’s remove the XML formatter from the Web API config so we don’t have to look at it anymore.

Open up the WebApiConfig.cs file in the App_Start folder, and add the first two lines to the Register method so it looks like below:

Fire it up again, and voilà… some nice friendly JSON:

Nice friendly json

Now we’ve got some test data, we can look at sorting out our UI.

On the client side

So what we now want to do is render our data into a table, and provide the user with some controls for paging the data. We’ll need to tell them how many records there are in total, allow them to choose how many records they want on each page, and provide a button to click that will retrieve the data.

We’ll go ahead and modify the template Index.cshtml that came with our project to include those elements we need. I’ve made mine look something like this (photoshopped for size):

gui sample

I’ve omitted it from this post for succinctness, but you can get the cshtml source here (or build it yourself if you’re not lazy!):

To make things easier, I’m going to use Knockout.JS to map the values and button click from our form controls onto a viewmodel, which will encapsulate all our functionality. If you’re not familiar with knockout, you can find out more here:

To use knockout, you’ll need to reference it in Layout.cshtml… you can do this directly or use the bundle functionality in MVC 4. Anyways once you’ve done that… here’s the viewmodel and the javascript that fetches the data and wires it all up:

It’s quite straightforward: we have a getData function that makes the ajax call to our API, which is also called when the page first loads. We have a bunch of observable properties, and then a pages computed observable that will provide a correct list of pages whenever the page size or record count changes. This provides the list of options for the page size drop down.
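The pages computed observable boils down to dividing the record count by the page size and rounding up. Here’s the underlying logic as a pure function (a hypothetical sketch; the sample wraps the equivalent in ko.computed so knockout re-evaluates it automatically):

```javascript
// Compute the list of available page numbers from the record count and the
// current page size - the logic behind the 'pages' computed observable.
function availablePages(recordCount, pageSize) {
  var pages = [];
  var total = Math.ceil(recordCount / pageSize);
  for (var i = 1; i <= total; i++) {
    pages.push(i);
  }
  return pages;
}
```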

Fire up the app again and take a look at your handiwork. You should be able to see that the list of available pages changes as you select a different page size, and the data is displayed along with the correct total. Give yourself a cookie.

So what about the paging?

So now comes the hard part… well it would be if it wasn’t for Linq to Querystring. I’ve deliberately left this till last so you can see just how easy this is. First of all, we need to modify our API method to provide OData query support like so:

// GET api/values
[LinqToQueryable]
public IQueryable<string> Get()

Now on the client side, we can inform our API that we want to page the data via the OData query operators $top and $skip. As you might expect, $top specifies that we want a restricted number of results, and $skip tells our api to jump over a specified number of records beforehand.

All we need to do is modify our url to use the values from the model:

var skip = model.pageSize() * (model.currentPage() - 1);
$.get("/api/values?$top=" + model.pageSize() + "&$skip=" + skip, [...]

Very simple indeed. If you’re really paying attention though, you’ll notice there’s one last thing we need to do. Our count is now wrong as it doesn’t bring back the total number of records, only the number in the current page.

We can solve that by adding the OData $inlinecount=allpages query operator. Remember the JSON we got back earlier? After adding the inlinecount it now looks like this:

Inline count json

So now we can use the Count and Results properties to provide data for our model. With these tweaks in place, our final getData() implementation now looks like this.
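With $inlinecount=allpages in place, the response body is an object carrying the Count and Results properties rather than a bare array, so the success callback needs to unwrap it before updating the model. A minimal sketch of that step (a hypothetical helper; the sample does this inline in getData):

```javascript
// Unwrap an $inlinecount=allpages response of the shape
// { Count: <total records>, Results: [...] } into the pieces
// the viewmodel needs: the overall total and the current page of data.
function unwrapInlineCount(response) {
  return {
    total: response.Count,
    records: response.Results
  };
}
```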

Fire up the sample app for one last time, and we now have a working data paging implementation!

Paging example 1

Paging example 2

This is just a taste of what OData\Linq to Querystring has to offer. Check out Part 2 where I extend this sample to see how we can also perform filtering on our data.

Also feel free to take a look at the github page for the current project progress and features. You can also find the full source for this sample here:



Using Linq to query loosely typed data in MongoDB

As of version 0.5.1 Linq to Querystring now supports Mongo DB out of the box, via the linq support provided by the C# driver. But what is really cool is that with a little bit of code, we can also write Linq queries and hence perform Linq to Querystring filtering on loosely typed data!

Fork of the MongoDB driver

When I talk about Linq queries against loosely typed data, I mean stuff like this:

var results = mongoCollection.AsQueryable().Where(o => o["Name"] == "Roysvork");

Unfortunately this is not supported by the Mongo C# linq stuff out of the box… the driver only knows how to handle indexers when dealing with arrays. There is a pull request pending to fix this issue, which will hopefully be resolved very soon, but for now you can use my fork of the driver, which is also available as a nuget package:

PM> Install-Package mongocsharp.linqindexers

I will try to keep this up to date with the latest source from the driver itself, but please bear in mind that although it should be sound and working, it is neither official nor supported by 10gen! Hopefully it won’t be around for very long.

Serialisation info

Now for the next step… the Mongo driver needs a source of serialisation information in order to know what to do with our queries. This most commonly works via class maps (either implicit or explicit) and a BsonClassMapSerializer. When dealing with loosely typed data, however, we don’t have this information available to us, and documentation is quite sparse on the matter.

After a bit of digging around though, there is a class in the driver that we can use… the BsonDocumentBackedClassSerializer. As the name suggests we need to use this in conjunction with a BsonDocumentBackedClass. Both of these classes are abstract, so we need to write a bit of boilerplate in order to use them.

Here’s the class:

And here’s the serializer:

Using the MongoDocument class

The MongoDocument class has an indexer, so basic usage works like this:

There are a couple of cool things at play here… the serializer takes care of generating an _id for us by implementing the IBsonIdProvider members GetDocumentId and SetDocumentId. The document class itself also has an implicit cast operator back to BsonDocument for ease of use when you need more granularity. Seems simple? It is!

Registering concrete members with the serializer

There is one more thing that I’d like to elaborate on a little bit, as you may find it useful to get a little bit more flexibility out of your loosely typed MongoDocument. If you look closely at the code above, you’ll see this property in the class:

public ObjectId Id { get; set; }

And then correspondingly in the constructor for the serializer:

this.RegisterMember("Id", "_id", ObjectIdSerializer.Instance, typeof(ObjectId), null);

This allows us to control how our document gets serialized, and also how it behaves in Linq. Given these lines, the following is perfectly valid and works fine, even though the value itself is stored in the BsonDocument backing the class:

var record = mongoCollection.AsQueryable().Where(o => o.Id == ObjectId.Parse("507f1f77bcf86cd799439011"));

You can add more of these if you like… just add a property and a corresponding register member call… the parameters should be fairly straightforward, just make sure to pick the appropriate serializer and type.

Just the start

I’m not sure why the BsonDocumentBackedClass and serializer aren’t better documented. It seems that up until now they have only really seen internal use, but we are using this code in a project that is nearing completion, and it’s stable and working really well for us.

There is much more that we can do with MongoDb and Linq by using this code, and in the next post in this series I’ll be exploring how we can work with nested objects and child collections by controlling serialization of our MongoDocument even further.

Don’t forget, you can use this in conjunction with Linq to Querystring and the [] notation to combine your loosely typed data structures with the power of OData. Why not give it a try today!



An OData Journey in ASP.NET Web API Part 2 – Introducing Linq to Querystring

A Brief Recap

First of all I’d like to apologise for the delay in getting the second part of this series out. It was originally intended to be a guide to building a simple OData query parser with ANTLR, but as I worked on the samples, it quickly turned into a full scale project.

A few months back I came across a need to use OData with a loosely typed data structure. I quickly found that OData support in Asp.NET Web API was readily available… but only when coding against a known entity model. Also, not all features of OData are available out of the box, and even fewer work without having to jump through significant hoops. I started playing around with a solution, and this is the result.

Presenting: Linq to Querystring

The aim of the Linq to Querystring project is to provide a fast, lightweight subset of the OData URI Specification, with additional flexibility to allow use of loosely typed\dynamic data structures. It also supports the $inlinecount and $select operators, and at the time of writing support for $expand is in development.

Linq to Querystring works by parsing an OData query using ANTLR, and then mapping the resulting syntax tree onto a .NET IQueryable. This means that in theory it can work with any Queryable Provider; at present it has been tested with Linq to Objects, Entity Framework and MongoDB.

To get started, first add it to your project using NuGet. Once you’ve done that, simply add the following attribute to a Web API controller action that returns an IQueryable or Task<IQueryable>:

[LinqToQueryable]
public IQueryable<Movie> Get()

And that’s all there is to it. You can now append OData query parameters to your API calls and see the results. You can also use the built in IQueryable extension methods manually if you need to.

Addressing issues with OData

One thing I should stress is that the OData specification itself is very extensive, and Linq to Querystring does not claim (or intend) to support all of it. In fact, OData itself seems to split opinion – see here for example:

In the answer above, Mythz states some concerns that proponents of REST often have about OData, which Linq to Querystring goes some way towards addressing:

  • Poor development practices – Linq to Querystring is simple, flexible and open source, so it can respond to new technologies and paradigms.
  • Promotes bad web service practices – No longer tied to your DBMS as it works with any IQueryable, so you don’t have to expose your data model through your services.
  • Only used in Microsoft technologies – The main expression parsing engine of Linq to Querystring is written in ANTLR so can be easily ported to other languages that support construction of expression trees.
  • OData is slow – Leaving out certain elements of the protocol helps to keep things fast compared to full blown OData implementations. All Linq to Querystring does is map the AST produced by ANTLR onto an IQueryable expression tree.

Of course this is probably not going to convince true REST zealots, but I definitely see a need and use for in-query filtering, regardless of whether you prefer HATEOAS or CSDS.

Flexibility and extra functionality

Additionally – due to its flexibility – the project may also include features that are not present in the standard OData query specification. Such features are carefully designed not to detract from the power of OData, always augmenting the existing functionality.

For example, in Linq to Querystring you can use the ‘[’ and ‘]’ brackets to designate that a property should be interpreted dynamically. The following filter query

$filter=[Age] gt 18

Is equivalent to:

looselyTypedList.Where(o => o["Age"] > 18);

Current features & roadmap

Please consult the Github site for currently supported features and documentation, as these are changing all the time. Some highlights include:

  • Support for other API frameworks, NancyFX\ServiceStack
  • UI plugin for constructing Linq to Querystring OData queries
  • Support for the $expand operator
  • Testing for other NoSQL Linq Providers

The project is still in development, so some things might not work exactly as intended… please let me know if they don’t by registering an issue on github, or submit a pull request. Be sure to check back on the github page regularly for updates over the next few weeks; I’ll also be writing a series of articles on how to make the most of OData, so stay tuned!

If you’re looking for better OData functionality in your API, I think Linq to Querystring could be just what you need. Don’t take my word for it though, take a test drive over at the demo site: and see for yourself!


Running Jasmine Tests Hosted in IIS Express as part of a TeamCity Build

This week I’ve been having a lot of fun setting up a CI server for our project. I went with TeamCity as it’s a great product and there’s oodles of documentation out there so setting things up is a doddle. I chose to set up our server on a Windows Azure virtual machine, there’s a guide here on how to get started if you’re interested:

I promptly set about creating a configuration that would run all my unit tests, but ran into a small problem when it came to the JavaScript side of things.

I’d designed my test project to re-use the bundle config from my web app, and then used MVC to render the test runner. I thought I had been very clever… I had the benefit of picking up new source files as and when I created them; no need to constantly add references to new scripts in the test project.

When I came to run these tests as part of my TeamCity build process however, I realised that I needed to compile and host my tests in order to run them… not something that is easily achievable as part of a normal build process. We don’t always know where our code will be checked out to, and we may need to do this in a way that will work for multiple configurations.

Not to worry though, with a bit of coding, we can make this work.


The requirements

Our chain of events needs to run as follows:

  • Build the test project
  • Start IIS Express to host the tests
  • Run the tests and capture the results
  • Shut down IIS Express

Seems simple enough. Dan Merino has a great post on how to use the jasmine team city reporter in conjunction with Phantom.JS to run our tests and process the results:

It’s also pretty easy to run IIS express from the command line (of course you’ll need to have iis express installed on your build server first):

Where it all comes unstuck however, is that we need to start IIS express after we’ve built our code, but before running our tests. Then we need to stop it again after our tests have run. There’s no built in way to do this with team city however, we need to script this in some way or write an app to help us.


Phantom Express
First we need to configure a runner in our test project that will output the results in a form that TeamCity can interpret, we can do this using the TeamCity reporter:

    <title>Jasmine Spec Runner</title>

    <link rel="shortcut icon" type="image/png" href="/Content/jasmine/jasmine_favicon.png">
    <link rel="stylesheet" type="text/css" href="/Content/jasmine/jasmine.css">


    <script type="text/javascript">
        (function () {

            var jasmineEnv = jasmine.getEnv();
            jasmineEnv.updateInterval = 1000;

            var teamCityReporter = new jasmine.TeamcityReporter();
            jasmineEnv.addReporter(teamCityReporter);

            var currentWindowOnload = window.onload;

            window.onload = function () {
                if (currentWindowOnload) {
                    currentWindowOnload();
                }
                execJasmine();
            };

            function execJasmine() {
                jasmineEnv.execute();
            }

        })();
    </script>

Secondly, we need a control file for phantom.js that will load our runner. Here’s one based on Dan’s example that will run our tests and pipe the console output:

    console.log('Loading a web page');
    var page = new WebPage();
    var url = "http://localhost:8080/tests/teamcityrunner";
    phantom.viewportSize = {width: 800, height: 600};

    //This is required because PhantomJS sandboxes the website and it does not show the console messages from that page by default
    page.onConsoleMessage = function (msg) { console.log(msg); };

    //Open the website
    page.open(url, function (status) {

        //Page is loaded!
        if (status !== 'success') {
            console.log('Unable to load the address!');
            phantom.exit(1);
        } else {
            //Using a delay to make sure the JavaScript is executed in the browser
            window.setTimeout(function () {
                phantom.exit();
            }, 1000);
        }
    });

I wrote a quick command line app that will do the rest for us. All we need to do is supply it with the location of the iisexpress executable, the test site root, port, location of phantomjs and the control js file. Just make sure that you provide an appropriate timeout in the control.js file so that your tests have time to run before phantom.js closes.

I’ve copied the code for the console app into a gist as it was too long to post here; you just need to compile it and copy it to your build server.


Finally, here’s a snapshot of the resulting configuration in Team City:


Now when we run our build, phantom express will fire up iis express, run our tests and voilà!


Now you can utilise all the benefits of MVC (or any other aspect of .net) to include files and specs for your Javascript unit test suite and render your test runner. Not bad!