
New Features in Linq to Querystring v0.6

Linq to Querystring v0.6 has just gone live on NuGet, and contains a whole bunch of new features. This has been the biggest update so far, bringing together some vital components and bug fixes, as well as some cool new bits.

Take a look at the summary below, and also try stuff out on the updated demo site. As usual you can also find the source on our github page, and download the latest version via NuGet!

Server side page limit

You can now specify a hard page-size limit for OData queries so clients can’t just hammer your server repeatedly. You can do this via the Web API action filter:

[LinqToQueryable(maxPageSize=1000)]

Or directly via the LinqToQuerystring extension method:

dbcontext.Users.LinqToQuerystring("$skip=3000", maxPageSize: 1000);

Clients can still request a page size smaller than this; the max will only kick in if their specified page size is greater, or omitted.
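
For illustration, here's roughly how the cap interacts with a client-supplied $top (a sketch; dbcontext.Users is just a placeholder IQueryable):

dbcontext.Users.LinqToQuerystring("$top=50", maxPageSize: 1000);   // client asks for 50, gets 50
dbcontext.Users.LinqToQuerystring("$top=5000", maxPageSize: 1000); // client asks for 5000, capped at 1000
dbcontext.Users.LinqToQuerystring("", maxPageSize: 1000);          // no $top specified, capped at 1000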

We’re also planning to add more control over queries and allowed operators, similar to those provided by the WebApi OData offering.

(More) complete list of data types

In addition to the existing String/Int/Date/Complex properties, we’ve now got around to testing and ensuring that the following data types will also work as expected:

Type     Example
Long     30000000000L
Single   123.456f
Double   12345678.234234
Byte     0..255 or 0x00 to 0xFF
Guid     guid'12345678-aaaa-bbbb-cccc-ddddeeeeffff'

You can check out the OData specification for more details on the format of each data type.

Please note that specifying a byte in hex form may not be part of the v3 specification… if anyone can find me the relevant section of the v3 spec concerning data types, please let me know in the comments, as I haven't been able to track it down yet.

Any/All on enumerable properties

Any & all are defined in the OData v3 spec, and now work with Linq to Querystring too:

$filter=Tags/any(tag: tag eq 'Important') // Find any records tagged as important
$filter=Orders/all(order: order.Size > 10000) // Find customers that have placed only large orders

As long as your Linq Provider supports the query, you can use these with loosely typed data too by marking a property as dynamic using [ ]:

$filter=[Tags]/any(tag: tag eq 'Important')
$filter=[Orders]/all(order: order.[Size] > 10000)

Numeric aggregates

With v0.6, you can now also use the following aggregate functions against Enumerable properties in your queries:

Function    Example
Count()     $filter=Tags/count() gt 1
Sum()       $filter=Value/sum() ge 100000
Average()   $filter=Result/average() lt 50
Max()       $filter=Grade/max() eq 'A'
Min()       $filter=Grade/min() eq 'F'

Min and Max will work with any data type that is comparable, subject to support by the underlying Linq Provider. All the others will only work with int/long/single/double. None of the above functions take sub-queries or parameters at this time.

Please note that these aggregates are not in the OData specification as of v3 (although they do have Linq equivalents), so the format may change if and when they are added.

Bug fixes

We’ve also addressed some stuff that has come out of the woodwork while tinkering, particularly when using Linq to Querystring against loosely typed data in MongoDB:

  • If either side of a comparison is of type Object, such as when using the dynamic keyword, the framework will attempt to convert that operand to the type of its opposite counterpart (see the sketch after this list).
  • When an operand evaluates to a boolean and its counterpart is a constant, the redundant constant will be removed to address issues with Linq providers such as Mongo and Entity Framework
  • Constant expressions can now feature on either side of a comparison
  • Added an extensibility point to allow conversion of certain types when creating enumerable expressions, to facilitate situations where an enumerable type is not generic.
  • Added the ability to specify an extra cast when dealing with types that a linq provider does not directly support, but can be boxed to another type such as single->double, byte->int.
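
To make the first bullet more concrete, here's a minimal sketch (not the library's actual code) of converting an object-typed operand to the type of its counterpart before building the comparison:

using System;
using System.Linq.Expressions;

class ConversionSketch
{
    static void Main()
    {
        // One side is typed as object, as it would be for a dynamic/loosely typed property...
        Expression left = Expression.Constant(42, typeof(object));
        // ...the other side is a strongly typed constant.
        Expression right = Expression.Constant(42, typeof(int));

        // Convert the object-typed side to its counterpart's type so the comparison is valid.
        if (left.Type == typeof(object) && right.Type != typeof(object))
            left = Expression.Convert(left, right.Type);

        var comparison = Expression.Equal(left, right);
        Console.WriteLine(Expression.Lambda<Func<bool>>(comparison).Compile()()); // prints True
    }
}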

We need your feedback

I hope you’ll find some of these features useful… we’ll be covering some more specifics relating to Mongo DB very soon too, so watch this space!

As always please comment or let us know if you like Linq to Querystring and are using it for your project, or if you would like to see any particular features added.

Pete


Getting started with Linq to Querystring Part 2 – Filtering Data

In this second post in my introductory series, I’m going to take a look at how we can filter the results from our API using OData\Linq to Querystring. I’m going to be building on the paging sample from the last post, which you can find here if you want to follow along: https://github.com/Roysvork/LinqToQuerystringPagingSample.

This post is intended as a step by step guide, so if you’re just looking for a reference on what you can do with Linq to Querystring, feel free to skip the first two sections.

Making things more interesting

Because our previous sample only contained a single string value (not very interesting for filtering purposes!), I’ve extended things slightly as a starting point for this post. We’re still hardcoding the data, but there’s now a concrete class with multiple properties so we have something to play around with:
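
The class from the original post isn't reproduced here, but based on the properties used in the filter examples later on (Title, MetaScore, ReleaseDate, Recommended), a stand-in looks something like this:

using System;

// Illustrative stand-in for the demo class; property names match the filter examples below.
public class Movie
{
    public string Title { get; set; }
    public DateTime ReleaseDate { get; set; }
    public int MetaScore { get; set; }
    public bool Recommended { get; set; }
}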

This really seemed like a good idea for a demo class until it came to populating it with sample data. I’m not a movie buff… so after a good amount of googling here’s my test data:
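
The original gist of test data isn't shown here either; any hard-coded list of the Movie class above will do, for example (titles and values purely illustrative, requires System, System.Linq and System.Collections.Generic):

// Illustrative hard-coded test data, not the post's original list.
var movies = new List<Movie>
{
    new Movie { Title = "Avatar", ReleaseDate = new DateTime(2009, 12, 17), MetaScore = 83, Recommended = true },
    new Movie { Title = "The Matrix", ReleaseDate = new DateTime(1999, 3, 31), MetaScore = 73, Recommended = true },
    new Movie { Title = "Gigli", ReleaseDate = new DateTime(2003, 8, 1), MetaScore = 18, Recommended = false }
}.AsQueryable();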

I’ve made sure to include a mix of different property types so we can apply a range of filters, so if you’re using your own test data make sure to do this too.

Rendering results as a table

Last time we used knockout to bind the data coming back from our api into a table, and the markup looked like this:

This was fine for retrieving a list of single values, but now we need it to display all the properties of each movie. We could just hard code the columns, but there’s a little trick we can use instead.

As objects in Javascript are just collections of properties, we can iterate over each one, read the property name for each and then add these values to an array from which we can bind our headers. Here’s the modified function that gets the data from our api:

Note that we’ve also added a new observable array to our viewmodel called headers. And to take advantage of that in our UI, we now just need to tweak our table html also:

Remember, in Javascript accessing an object’s properties directly by name is exactly equivalent to accessing them via indexer syntax, i.e.:

record.Title === record["Title"]

We’re using this trick, along with nested foreach bindings, to render each column. To use the indexer syntax we need to be able to refer to values from both foreach bindings, which we can do using aliases. If these bindings seem a little confusing, take a look at http://knockoutjs.com/documentation/foreach-binding.html and see Note 3: Using “as” to give an alias to “foreach” items

Table with columns

Et voilà, a nice table. The labels aren’t perfect and the dates are ugly, but it was very simple to achieve.

The $filter operator

So, now that we’ve got some more meaningful results, we can get back to the task at hand. In OData, we use the $filter= query operator to tell our API that we want to apply a filter to our data. Linq to Querystring takes this query filter and then converts it to a Linq ‘Where’ expression.

A few examples (it’s worth noting that the whitespace here is important):

http://localhost/api/values?$filter=Title eq 'Avatar'
http://localhost/api/values?$filter=MetaScore ge 60
http://localhost/api/values?$filter=Recommended eq true
http://localhost/api/values?$filter=not Recommended

No prizes for guessing what the first one does of course, but some of the other operators, such as ‘ge’ for greater than or equal, aren’t so obvious. These examples give us an idea of the general format for a filter expression in OData: we can reference properties on our model, specify comparison operators, enclose strings in single quotes, and even use unary boolean expressions.
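
Under the hood, Linq to Querystring turns each of these into a Where call against our IQueryable. Roughly speaking (a sketch of the mapping rather than the library's exact output, using the Movie collection from earlier):

// Approximate Linq equivalents of the four filters above
movies.Where(m => m.Title == "Avatar");
movies.Where(m => m.MetaScore >= 60);
movies.Where(m => m.Recommended == true);
movies.Where(m => !m.Recommended);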

Here’s the full list of logical operators from the OData v2 specification:

Operator  Description            Example

Logical operators:
Eq        Equal                  /Suppliers?$filter=City eq 'Redmond'
Ne        Not equal              /Suppliers?$filter=City ne 'London'
Gt        Greater than           /Products?$filter=Price gt 20
Ge        Greater than or equal  /Products?$filter=Price ge 10
Lt        Less than              /Products?$filter=Price lt 20
Le        Less than or equal     /Products?$filter=Price le 100
And       Logical and            /Products?$filter=Price le 200 and Price gt 3.5
Or        Logical or             /Products?$filter=Price le 3.5 or Price gt 200
Not       Logical negation       /Products?$filter=not StockAvailable

Grouping operators:
()        Precedence grouping    /Products?$filter=Price lt 30 or (City eq 'London' and Price lt 50)

Functions

The full OData specification provides a whole host of functions that allow us to manipulate values within our expressions. Linq to Querystring currently supports a very basic subset of these, which will grow as development continues.

Currently only three string functions are supported; these are the bare minimum that allows us to do useful string searches:

Function                                 Example
bool substringof(string p0, string p1)   /Customers?$filter=substringof('Alfreds', CompanyName)
bool endswith(string p0, string p1)      /Customers?$filter=endswith(CompanyName, 'Futterkiste')
bool startswith(string p0, string p1)    /Customers?$filter=startswith(CompanyName, 'Alfr')
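
These map loosely onto the familiar .NET string methods; roughly (again a sketch rather than the library's exact output):

movies.Where(m => m.Title.Contains("The"));       // substringof('The', Title)
movies.Where(m => m.Title.EndsWith("Returns"));   // endswith(Title, 'Returns')
movies.Where(m => m.Title.StartsWith("The"));     // startswith(Title, 'The')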

Escape characters

As with all string comparisons, we need to be able to use escape characters to represent certain values in our filters. OData is no exception, and Linq to Querystring uses the following escape sequences:

Sequence  Meaning
\\        \ (backslash)
\t        Tab
\b        Backspace
\n        Newline
\f        Form feed
\r        Carriage return
\'        ' (single quote)
''        ' (single quote, alternate)

Please note that while these work with Linq to Querystring, they may or may not be compatible with other OData providers.

Creating a basic search UI

Hopefully that all makes sense, so back to the sample project. We now want to add the ability to specify an OData filter when pulling down data from our API. We could do this manually like we did with the paging, but it’s a lot more complex.

For our search UI I’m going to use a jQuery plugin called OData filter UI, which will take care of generating the filter string for us. Currently in pre-release, this plugin will be the subject of its own post in this series at a later date. You can follow progress on the github page. For now, install the plugin using nuget:

Install-Package jQuery.ODataFilterUI -Pre

Make sure you’ve added the jquery.odatafilterui-0.1.js file to your bundles or otherwise included it in the page. To use the plugin we add a textbox as a base and then apply the plugin, which will create the more complex bits of the UI. Here’s the markup and the js code that invokes the plugin and tells it what our fields are:

Because the plugin needs to be flexible enough to fit into any UI, it comes with no default styling. I’ve neatened things up a bit using the CSS you can see in this gist if you like: https://gist.github.com/Roysvork/a4d067e9550d32dc74b8. Either way, you should see something like the following:

Initial filter ui

Have a play around with the UI to familiarise yourself… it’s fairly straightforward to add or remove filters. You’ll see that the contents of the operator drop down change according to each field’s data type, and the input type reflects this too. Currently each filter that you add will get ‘ANDed’ together.

All that’s left to do now is wire up the filter to our api call. Here’s the final version of the getData function, including paging and now filtering:

As you can see, the OData Filter UI plugin has done the hard work of constructing the filter string for us via the getODataFilter() method.

We’ve also refactored the creation of the url to ensure we use ? and & appropriately to separate the querystring from the url and the individual querystring elements from each other.

Try out a few different filters, and use your favourite debugger to inspect the url that gets generated. Here are a few examples:

Date filter

http://localhost:54972/api/values?$filter=ReleaseDate lt datetime'2000-01-01T00:00'&$top=5&$skip=0&$inlinecount=allpages

Complex filter

http://localhost:54972/api/values?$filter=Recommended eq true and MetaScore gt 55&$top=5&$skip=0&$inlinecount=allpages

Substringof filter

http://localhost:54972/api/values?$filter=substringof('(The)',Title)&$top=5&$skip=0&$inlinecount=allpages

Summary

Starting with the code from Part 1, we’ve changed the test data to return a complex type with different properties, and updated the rendering of the results to reflect this.

We’ve looked at the OData syntax for filtering data, comparison operators, escape sequences and some of the string functions available. We’ve also seen how we can use the jQuery.ODataFilterUI plugin to provide a basic search UI.

Once again, you can check out the Linq to Querystring github page here: https://github.com/Roysvork/LinqToQuerystring and if you want to download the final source for the example in the post you can find that here: https://github.com/Roysvork/LinqToQuerystringFilteringSample

Stay tuned for the next few posts in the series, in which we’ll cover ordering of results, dealing with complex properties and collections, and how Linq to Querystring can work with Mongo DB to query loosely typed data.

Pete

References

https://roysvork.wordpress.com/2013/05/12/getting-started-with-linq-to-querystring-part-1-paging-data/
https://github.com/Roysvork/LinqToQuerystringPagingSample
http://www.odata.org/documentation/odata-v2-documentation/uri-conventions/#45_Filter_System_Query_Option_filter
http://knockoutjs.com/documentation/foreach-binding.html
http://github.com/roysvork/jquery.odatafilterui
https://gist.github.com/Roysvork/a4d067e9550d32dc74b8
https://github.com/Roysvork/LinqToQuerystring
https://github.com/Roysvork/LinqToQuerystringFilteringSample

Getting started with Linq to Querystring Part 1 – Paging data

It’s been a little while now since I released Linq to Querystring into the wild… we’ve since solved a few issues and it’s been put to use in some real-world applications. Thanks to everyone who’s provided feedback so far!

So let’s have a look now at some of the practical applications for Linq to Querystring (and for OData in general) from a beginner’s perspective. In this post I’ll take you through creating a sample table with paged data using Web API\Linq to Querystring from start to finish.

Getting set up

Fire up Visual Studio and start a new ASP.Net MVC 4 project:

New project

Choose a suitable name and click OK. Then on the next screen, select the Web API template:

Web api template

Leave the rest of the settings as default, and click OK again to create the project.

Once everything loads up, we just need to install LinqToQuerystring before we can get started. To do that, open the package manager console (View->Other Windows->Package Manager Console if it’s not open already), and type the following:

install-package LinqToQuerystring

If all goes well, you should see something like this:

Install package

Also make sure to add the WebAPI extension to make things even easier to use:

PM> install-package LinqToQuerystring.WebApi

Now we’re ready to get started.

Setting up the API

First we need to write some code in our API so that we can retrieve some values. Linq to Querystring can work with any type of data source or format, so long as your API method can return an IQueryable<>. Open up the ValuesController.cs file that was created for us when we started the project.

It will have the standard methods for the main HTTP verbs as usual… for this sample we’re only interested in retrieving multiple records, so we can remove everything apart from the Get method.

Change this method to return an IQueryable instead of an IEnumerable; you’ll also need to use the AsQueryable() extension method on the return statement. Finally, add some more sample strings to the array and give them more imaginative values than just ‘value1’, ‘value2’ otherwise it’s very dull.
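
If you're following along, the modified action ends up looking roughly like this (the sample strings are just placeholders; pick your own):

// GET api/values
public IQueryable<string> Get()
{
    // Hard-coded sample data, returned as an IQueryable so Linq to Querystring can work with it
    return new[] { "Apple", "Banana", "Cherry", "Damson", "Elderberry", "Fig" }.AsQueryable();
}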

If you like you can hook this up to a source of complex objects, from Entity Framework or your favourite document database solution. I’ve just hard coded some values for simplicity as the example works just as well.

If everything has gone to plan, you should be able to fire up your solution and browse to http://localhost:<port>/api/values and get some data back:

Xml bleurgh

Ugh! XML. This isn’t the 90s. Let’s remove the XML formatter from the Web API config so we don’t have to look at it anymore.

Open up the WebApiConfig.cs file in the App_Start folder, and add the first two lines to the Register method so it looks like below:
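
The original snippet isn't reproduced here, but one common way to drop the XML formatter (a sketch; the routing below is just the stock Web API template) looks like this:

using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Remove the XML formatter so Web API responds with JSON only
        // (one way to achieve the effect described above; not necessarily the post's exact lines).
        config.Formatters.Remove(config.Formatters.XmlFormatter);

        // The rest of the template's default routing stays as-is.
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}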

Fire it up again, and voilà… some nice friendly JSON:

Nice friendly json

Now we’ve got some test data, we can look at sorting out our UI.

On the client side

So what we now want to do is render our data into a table, and provide the user with some controls for paging the data. We’ll need to tell them how many records there are in total, allow them to choose how many records they want on each page, and provide a button to click that will retrieve the data.

We’ll go ahead and modify the template Index.cshtml that came with our project to include those elements we need. I’ve made mine look something like this (photoshopped for size):

gui sample

I’ve omitted it from this post for succinctness, but you can get the cshtml source here (or build it yourself if you’re not lazy!): https://gist.github.com/Roysvork/5564031

To make things easier, I’m going to use Knockout.JS to map the values and button click from our form controls onto a viewmodel, which will encapsulate all our functionality. If you’re not familiar with knockout, you can find out more here: http://knockoutjs.com/documentation/introduction.html.

To use knockout, you’ll need to reference it in Layout.cshtml… you can do this directly or use the bundle functionality in MVC 4. Anyways once you’ve done that… here’s the viewmodel and the javascript that fetches the data and wires it all up:

It’s quite straightforward: we have a getData function that makes the ajax call to our API, and which is also called when the page first loads. We have a bunch of observable properties, and then a pages computed observable that will provide a correct list of pages whenever the page size or record count changes. This provides the list of options for the page size drop down.

Fire up the app again and take a look at your handiwork. You should be able to see that the list of available pages changes as you select a different page size, and the data is displayed along with the correct total. Give yourself a cookie.

So what about the paging?

So now comes the hard part… well it would be if it wasn’t for Linq to Querystring. I’ve deliberately left this till last so you can see just how easy this is. First of all, we need to modify our API method to provide OData query support like so:

// GET api/values
[LinqToQueryable]
public IQueryable<string> Get()

Now on the client side, we can inform our API that we want to page the data via the OData query operators $top and $skip. As you might expect, $top specifies that we want a restricted number of results, and $skip tells our api to jump over a specified number of records beforehand.

All we need to do is modify our url to use the values from the model:

var skip = model.pageSize() * (model.currentPage() - 1);
$.get("/api/values?$top=" + model.pageSize() + "&$skip=" + skip, [...]

Very simple indeed. If you’re really paying attention though, you’ll notice there’s one last thing we need to do. Our count is now wrong as it doesn’t bring back the total number of records, only the number in the current page.

We can solve that by adding the OData $inlinecount=allpages query operator. Remember the JSON we got back earlier? After adding the inlinecount it now looks like this:

Inline count json
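
The screenshot isn't reproduced here, but given the Count and Results properties mentioned below, the response takes roughly this shape (values illustrative):

{
  "Count": 24,
  "Results": [ "value1", "value2", "value3", "value4", "value5" ]
}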

So now we can use the Count and Results properties to provide data for our model. With these tweaks in place, our final getData() implementation now looks like this.

Fire up the sample app for one last time, and we now have a working data paging implementation!

Paging example 1

Paging example 2

This is just a taste of what OData\Linq to Querystring has to offer. Check out Part 2 where I extend this sample to see how we can also perform filtering on our data.

Also feel free to take a look at the github page for the current project progress and features. You can also find the full source for this sample here: https://github.com/Roysvork/LinqToQuerystringPagingSample

Pete


Using Linq to query loosely typed data in MongoDB

As of version 0.5.1 Linq to Querystring now supports Mongo DB out of the box, via the linq support provided by the C# driver. But what is really cool is that with a little bit of code, we can also write Linq queries and hence perform Linq to Querystring filtering on loosely typed data!

Fork of the MongoDb driver

When I talk about Linq queries against loosely typed data, I mean stuff like this:

var results = mongoCollection.AsQueryable().Where(o => o["Name"] == "Roysvork");

Unfortunately this is not supported by the Mongo C# linq stuff out of the box… the driver only knows how to handle indexers when dealing with arrays. There is a pull request pending to fix this issue which will hopefully be resolved very soon, but for now you can use my fork of the driver, which is also available as a nuget package:

PM> Install-Package mongocsharp.linqindexers

I will try to keep this up to date with the latest source from the driver itself, but please bear in mind that although it should be sound and working, it is neither official nor supported by 10gen! Hopefully it won’t be around for very long.

Serialisation info

Now for the next step… the Mongo driver needs a source of serialisation information in order to know what to do with our queries. The most common way this works is via class maps (either implicit or explicit) and a BsonClassMapSerializer. When dealing with loosely typed data we don’t have this information available to us however, and documentation is quite sparse on the matter.

After a bit of digging around though, there is a class in the driver that we can use… the BsonDocumentBackedClassSerializer. As the name suggests we need to use this in conjunction with a BsonDocumentBackedClass. Both of these classes are abstract, so we need to write a bit of boilerplate in order to use them.

Here’s the class:

And here’s the serializer:

Using the MongoDocument class

The MongoDocument class has an indexer, so basic usage works like this:
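
The original snippet isn't included here; a sketch of basic usage, assuming the MongoDocument class above exposes a parameterless constructor and an indexer, and that mongoCollection is a MongoCollection<MongoDocument>, would be along these lines:

var document = new MongoDocument();
document["Name"] = "Roysvork";
document["Age"] = 29;

// Insert the loosely typed document, then query it back via the indexer
mongoCollection.Insert(document);
var results = mongoCollection.AsQueryable().Where(o => o["Name"] == "Roysvork");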

There are a couple of cool things at play here… the serializer takes care of generating an _id for us by implementing the IBsonIdProvider members GetDocumentId and SetDocumentId. The document class itself also has an implicit cast operator back to BsonDocument for ease of use when you need more granularity. Seems simple? It is!

Registering concrete members with the serializer

There is one more thing that I’d like to elaborate on a little bit, as you may find it useful to get a little bit more flexibility out of your loosely typed MongoDocument. If you look closely at the code above, you’ll see this property in the class:

[BsonId]
public ObjectId Id { get; set; }

And then correspondingly in the constructor for the serializer:

this.RegisterMember("Id", "_id", ObjectIdSerializer.Instance, typeof(ObjectId), null);

This allows us to control how our document gets serialized, and also how it behaves in Linq. Given these lines, the following is perfectly valid and works fine, even though the value itself is stored in the BsonDocument backing the class:

var record = mongoCollection.AsQueryable().Where(o => o.Id == ObjectId.Parse("ABCDE1234"));

You can add more of these if you like… just add a property and a corresponding RegisterMember call. The parameters should be fairly straightforward; just make sure to pick the appropriate serializer and type.
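
For instance, a hypothetical extra Name member might look like this, following the same pattern as the Id registration above (the property, element name and serializer choice are assumptions, not code from the post):

// In the MongoDocument class: an extra concrete property.
public string Name { get; set; }

// In the serializer's constructor: register it against the backing element.
this.RegisterMember("Name", "Name", StringSerializer.Instance, typeof(string), null);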

Just the start

I’m not sure why the BsonDocumentBackedClass and serializer aren’t better documented. It seems like up until now they have only really seen internal use, but we are using this code in a project that is nearing completion, and it’s stable and working really well for us.

There is much more that we can do with MongoDb and Linq by using this code, and in the next post in this series I’ll be exploring how we can work with nested objects and child collections by controlling serialization of our MongoDocument even further.

Don’t forget, you can use this in conjunction with Linq to Querystring and the [] notation to combine your loosely typed data structures with the power of OData. Why not give it a try today!

Pete

References:

http://docs.mongodb.org/ecosystem/tutorial/use-linq-queries-with-csharp-driver/
https://github.com/Roysvork/mongo-csharp-driver
https://nuget.org/packages/mongocsharp.linqindexers/
http://docs.mongodb.org/ecosystem/tutorial/serialize-documents-with-the-csharp-driver/
http://api.mongodb.org/csharp/1.8.1/html/225cb105-7edc-9bdf-9b2d-f9232bda4623.htm
https://github.com/Roysvork/LinqToQuerystring#general

An OData Journey in ASP.NET Web API Part 2 – Introducing Linq to Querystring

A Brief Recap

First of all I’d like to apologise for the delay in getting the second part of this series out. It was originally intended to be a guide to building a simple OData query parser with ANTLR, but as I worked on the samples, it quickly turned into a full-scale project.

A few months back I came across a need to use OData with a loosely typed data structure. I quickly found that OData support in ASP.NET Web API was readily available… but only when coding against a known entity model. Also, not all features of OData are available out of the box, and even fewer work without having to jump through significant hoops. I started playing around with a solution, and this is the result.

Presenting: Linq to Querystring

The aim of the Linq to Querystring project is to provide a fast, lightweight subset of the OData URI Specification, with additional flexibility to allow use of loosely typed\dynamic data structures. It also supports the $inlinecount and $select operators, and at the time of writing support for $expand is in development.

Linq to Querystring works by parsing an OData query using ANTLR, and then mapping the resulting syntax tree onto a .NET IQueryable. This means that in theory it can work with any Queryable provider; at present it has been tested with Linq to Objects, Entity Framework and MongoDB.

To get started, first add it to your project using NuGet. Once you’ve done that, simply add the following attribute to a Web API controller action that returns an IQueryable or Task<IQueryable>:

[LinqToQueryable]
public IQueryable<Movie> Get()

And that’s all there is to it. You can now append OData query parameters to your API calls and see the results. You can also use the built in IQueryable extension methods manually if you need to.

Addressing issues with OData

One thing I should stress is that the OData specification itself is very extensive, and Linq to Querystring does not claim (or intend) to support all of it. In fact, OData itself seems to split opinion – see here for example: http://stackoverflow.com/questions/9577938/odata-with-servicestack.

In the answer above, Mythz states some concerns that proponents of REST often have about OData, which Linq to Querystring goes some way towards addressing:

  • Poor development practices – Linq to Querystring is simple, flexible and open source, so it can respond to new technologies and paradigms.
  • Promotes bad web service practices – No longer tied to your DBMS as it works with any IQueryable, so you don’t have to expose your data model through your services.
  • Only used in Microsoft technologies – The main expression parsing engine of Linq to Querystring is written in ANTLR so can be easily ported to other languages that support construction of expression trees.
  • OData is slow – Leaving out certain elements of the protocol helps to keep things fast compared to full blown OData implementations. All Linq to Querystring does is map the AST produced by ANTLR onto an IQueryable expression tree.

Of course this is probably not going to convince true REST zealots, but I definitely see a need and use for in-query filtering regardless of whether you prefer HATEOAS or CSDS.

Flexibility and extra functionality

Additionally – due to its flexibility – the project may also include features that are not present in the standard OData query specification. Such features are carefully designed not to detract from the power of OData, always augmenting the existing functionality.

For example, in Linq to Querystring you can use the ‘[‘ and ‘]’ brackets to designate that a property should be interpreted dynamically. So the following filter query:

$filter=[Age] gt 18

is equivalent to:

looselyTypedList.Where(o => o["Age"] > 18);

Current features & roadmap

Please consult the Github site https://github.com/Roysvork/LinqToQuerystring for currently supported features and documentation, as these are changing all the time. Some highlights include:

  • Support for other API frameworks, NancyFX\ServiceStack
  • UI plugin for constructing Linq to Querystring OData queries
  • Support for the $expand operator
  • Testing for other NoSQL Linq Providers

The project is still in development, so some things might not work exactly as they should; please let me know if they don’t by registering an issue on github, or submit a pull request. Be sure to check back in on the github page regularly for updates in the next few weeks, and I’ll also be writing a series of articles on how to make the most of OData, so stay tuned!

If you’re looking for better OData functionality in your API, I think Linq to Querystring could be just what you need. Don’t take my word for it though, take a test drive over at the demo site: http://linqtoquerystring.azurewebsites.net/ and see for yourself!

Pete

Running Jasmine Tests Hosted in IIS Express as part of a TeamCity Build

This week I’ve been having a lot of fun setting up a CI server for our project. I went with TeamCity as it’s a great product and there’s oodles of documentation out there, so setting things up is a doddle. I chose to set up our server on a Windows Azure virtual machine; there’s a guide here on how to get started if you’re interested:

I promptly set about creating a configuration that would run all my unit tests, but ran into a small problem when it came to the JavaScript side of things.

I’d designed my test project to re-use the bundle config from my web app, and then used MVC to render the test runner. I thought I had been very clever… I had the benefit of picking up new source files as and when I created them; no need to constantly add references to new scripts in the test project.

When I came to run these tests as part of my TeamCity build process however, I realised that I needed to compile and host my tests in order to run them… not something that is easily achievable as part of a normal build process. We don’t always know where our code will be checked out to, and we may need to do this in a way that will work for multiple configurations.

Not to worry though, with a bit of coding, we can make this work.


The requirements

Our chain of events needs to run as follows:

  • Build the test project
  • Start IIS Express to host the tests
  • Run the tests and capture the results
  • Shut down IIS Express

Seems simple enough. Dan Merino has a great post on how to use the jasmine team city reporter in conjunction with Phantom.JS to run our tests and process the results:

It’s also pretty easy to run IIS express from the command line (of course you’ll need to have iis express installed on your build server first):
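
The exact command from the original post isn't shown, but invoking IIS Express directly looks something like this (the install path, site path and port are illustrative; /path and /port are standard iisexpress.exe switches):

"C:\Program Files\IIS Express\iisexpress.exe" /path:C:\BuildAgent\work\MyProject.Tests /port:8080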

Where it all comes unstuck however, is that we need to start IIS Express after we’ve built our code, but before running our tests. Then we need to stop it again after our tests have run. There’s no built-in way to do this with TeamCity, so we need to script it in some way or write an app to help us.


Phantom Express

First we need to configure a runner in our test project that will output the results in a form that TeamCity can interpret; we can do this using the TeamCity reporter:

<html>
<head>
    <title>Jasmine Spec Runner</title>

    <link rel="shortcut icon" type="image/png" href="/Content/jasmine/jasmine_favicon.png">
    <link rel="stylesheet" type="text/css" href="/Content/jasmine/jasmine.css">

    @Html.Partial("TestIncludes");

    <script type="text/javascript">
        (function () {

            var jasmineEnv = jasmine.getEnv();
            jasmineEnv.updateInterval = 1000;

            var teamCityReporter = new jasmine.TeamcityReporter();

            jasmineEnv.addReporter(teamCityReporter);
            var currentWindowOnload = window.onload;

            window.onload = function () {
                if (currentWindowOnload) {
                    currentWindowOnload();
                }
                execJasmine();
            };

            function execJasmine() {
                jasmineEnv.execute();
            }

        })();
    </script>

</head>

<body>
</body>
</html>

Secondly, we need a control file for phantom.js that will load our runner. Here’s one based on Dan’s example that will run our tests and pipe the console output:

    console.log('Loading a web page');
    var page = new WebPage();
    var url = "http://localhost:8080/tests/teamcityrunner";
    page.viewportSize = {width: 800, height: 600};

    //This is required because PhantomJS sandboxes the website and does not show the console messages from that page by default
    page.onConsoleMessage = function (msg) { console.log(msg); };

    //Open the website
    page.open(url, function (status) {

        //Page is loaded!
        if (status !== 'success') {
            console.log('Unable to load the address!');
        } else {
            //Using a delay to make sure the JavaScript is executed in the browser
            window.setTimeout(function () {
                page.render("output.png");
                phantom.exit();
            }, 1000);
        }
    });

I wrote a quick command line app that will do the rest for us. All we need to do is supply it with the location of the iisexpress executable, the test site root, port, location of phantomjs and the control js file. Just make sure that you provide an appropriate timeout in the control.js file so that your tests have time to run before phantom.js closes.

I’ve copied the code for the console app into a gist as it was too long to post here: https://gist.github.com/Roysvork/5274142, you just need to compile this and copy it to your build server.


Finally, here’s a snapshot of the resulting configuration in Team City:

phantomexpress

Now when we run our build, phantom express will fire up iis express, run our tests and voila!

testresults

Now you can utilise all the benefits of MVC (or any other aspect of .net) to include files and specs for your Javascript unit test suite and render your test runner. Not bad!


Pete



Why I’m giving REST a rest

So there’s been quite a lot of hype around REST lately, and I have to admit I’ve been on the bandwagon too. I’ve actually found it quite fun applying the constraints and trying to do things using a standard approach that will be familiar to other developers, and make my API easier to consume.

But recently I’ve become a bit disillusioned. I’ve tried to do a couple of things of late that I just couldn’t get to play nicely with REST:

  • Modeling many-many relationships, assigning or breaking links between resources via a REST API.
  • Bulk updates\inserts, or generally manipulating multiple resources at once.

I frequently come up against the question of what makes an API RESTful. Having a dig around, everyone seems to have opinions, but everyone seems to have questions too. Here are some that I’ve asked or that have come up in conversations recently:

  • Is it ok to have a URI that only accepts POSTs?
  • Is it ok to return a different representation on GET than the one expected by a POST to the same URI?
  • Is it ok to assign a URI to the relationships between resources and allow them to be manipulated via representations?
  • Can I update multiple resources by POSTing a composite representation?

Have a read through the comments in these two stack overflow questions, and you’ll quickly see what I mean:
http://stackoverflow.com/questions/511281/patterns-for-handling-batch-operations-in-rest-web-services?lq=1
http://stackoverflow.com/questions/969585/rest-url-design-multiple-resources-in-one-http-call


So what is really RESTful?

RESTful architecture is governed by a set of key constraints… it must involve client-server interaction, be stateless and cacheable. It must be layered, i.e. a client cannot ordinarily tell whether it is connected directly to the end server, or to an intermediary along the way.

It must provide a “Uniform Interface”, which simplifies and decouples the architecture. Said interface is subject to a set of guiding principles:

  • Requests should identify an individual resource, and then a representation of that resource is returned. This representation is conceptually separate from the resource itself.
  • When a client has a representation of a resource, it should have all the information it needs to manipulate that resource provided it has permission.
  • Each request should include enough information to describe how to interpret the message, i.e. specify a media type.
  • Hypermedia as the engine of application state (aka HATEOAS) – Except for entry points to the application, the client should be able to discover actions that can be taken for a resource based on the representation, e.g via hyperlinks or location headers.

You’ll notice I’ve not made any mention of HTTP verbs yet (POST, GET, etc), or status codes. That’s because these are actually incidental to REST… although REST was designed on top of HTTP, it is not explicitly limited to that protocol. RESTful architectures can be built upon any protocol that is sufficiently expressive and has a sufficiently well defined interface. It just so happens that these constructs are very useful for this purpose.


So what’s the problem?

The key element that people seem to skimp on or miss completely is the fourth bullet above, or to paraphrase, a REST API must be hypertext driven. Here’s what Roy Fielding, the father of REST, had to say in 2008:

http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven

For the lazy amongst you (shame on you), here are the key quotes:

A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience

In other words, if the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period.

This is also precisely the element that caused me my initial problems. It’s one thing to describe the way a resource is related to another resource through hypertext, but much more difficult to describe how you should go about manipulating those relationships… one way to do this is to expose the relationships themselves as resources, but that would require some prior knowledge that the approach was being taken… it’s not implicit.

And what if you are performing more than one operation at once? You can’t use location headers… you can return a bunch of links in the body of your response detailing all the resources that were affected, but again you would need some explicit knowledge that those links were present in the body.

Some of us actually don’t see anything wrong with requiring the client to have some knowledge of how to interact with the server. An alternative to HATEOAS is Client Server Domain Separation (CSDS), which defines both client and server as bounded contexts, DDD style.


So that’s why I’ve decided not to worry…

Take ‘Agile’ development as a case in point… most companies take some aspects of agile and use them to their advantage. But the vast majority of us still work to concrete deadlines; anyone doing this is in direct violation of the principles and so cannot be said to be truly agile.

It’s the same with REST… if you don’t make efforts to make your service discoverable then you cannot be said to be making a truly RESTful API. And as it turns out, doing so can sometimes be a bit of a pain. We’d all do a lot better to ignore ‘RESTfulness’ and focus on developing a sensible, easy to use interface.

So while there are all these people debating the finer points of how you should be using verbs or status codes, those debates are really about how to properly use HTTP when implementing a REST-style architecture, rather than about what actually defines the architecture as RESTful.

Don’t get me wrong, I’m not going to stop using REST principles to guide good API design, they are sound principles. But from now on, I’ll only be using REST as a wordpress\stackoverflow tag so that people will still be able to find things I write once I stop using the term.


Pete


References:
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
http://en.wikipedia.org/wiki/Representational_state_transfer
http://stackoverflow.com/questions/511281/patterns-for-handling-batch-operations-in-rest-web-services?lq=1
http://stackoverflow.com/questions/969585/rest-url-design-multiple-resources-in-one-http-call
http://byterot.blogspot.co.uk/2012/11/client-server-domain-separation-csds-rest.html