It is with some sadness that I announce the retirement of the roysvork blog. It’s been a great run, but it’s time to move to a new domain and bigger & better things. I won’t be deleting this blog or removing anything, but you won’t see any new posts here and I won’t be migrating existing content across to the new site.
Beyond Code
While I still hold a great love for all things technical, my focus has shifted; as a consultant I code less and so have fewer code related things to write about. Learning to program is one thing, but that alone doesn’t make you a productive member of a development team. Making software is a people business rather than a tech business and it always has been… and with this comes a whole bunch of new things to write about.
With this in mind I’d like to introduce my new rebranded company and blog – Beyond Code. Code is just the beginning, and I hope you’ll join me for the rest of our journey over at the new site. My twitter handle has also changed… you can follow all the latest updates at @beyond_code.
See you all there 🙂
Pete
Before we get started I’d just like to mention that this post is part of the truly excellent F# Advent Calendar 2014, a fantastic initiative organised by Sergey Tihon. Big thanks to Sergey and the rest of the F# community, and a Merry Christmas to you all!
Introduction
Using F# to build web applications is nothing new; we have purpose-built F# frameworks like Freya popping up, and excellent posts like this one by Mark Seemann. It’s also fairly easy to pick up other .NET frameworks that weren’t designed specifically for F# and build very solid applications.
With that in mind, I’m not just going to write another post about how to build web applications with F#.
Instead, I’d like to introduce the F# community to a whole new way of thinking about web applications, one that draws inspiration from a number of functional programming concepts – primarily pipelining and function composition – to provide a solid base on to which we can build our web applications in F#. This approach is currently known as Graph Based Routing.
Some background
So first off – I should point out that I’m not actually an F# guy; in fact I’m pretty new to the language in general so this post is also somewhat of a learning exercise for me. I often find the best way to get acquainted with things is to dive right in, so please feel free to give me pointers in the comments.
Graph based routing itself has been around for a while, in the form of a library called Superscribe (written in C#). I’m not going to go into detail about its features; these are language agnostic, and covered by the website and some previous posts.
What I will say is that Superscribe is not a full-blown web framework but actually a routing library. In fact, that’s somewhat of an oversimplification… in reality this library takes care of everything between URL and handler. It turns out that routing, content negotiation and some way of invoking a handler is actually all you need to get started building web applications.
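To make that claim concrete, here is a minimal sketch of those three jobs – routing, content negotiation, and handler invocation – written in framework-free JavaScript for brevity. None of these names come from Superscribe’s API; they are purely illustrative:

```javascript
// Illustrative only - not Superscribe's API. Everything between URL and
// handler boils down to three steps: route, negotiate, invoke.
const serialisers = {
  "application/json": (obj) => JSON.stringify(obj),
  "text/html": (obj) => "<p>" + String(obj) + "</p>",
};

function handle(request, routes) {
  const handler = routes[request.path];                // 1. routing
  if (!handler) return { status: 404, body: "" };

  const accept = request.headers["Accept"] || "text/html";
  const serialise =
    serialisers[accept] || serialisers["text/html"];   // 2. content negotiation

  const result = handler(request);                     // 3. invoke the handler
  return { status: 200, body: serialise(result) };
}
```

With a route table of `{ "/hello/world": () => "Hello World" }`, a request for `/hello/world` with `Accept: application/json` comes back as a 200 with a JSON-serialised body, and an unknown path falls through to a 404.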
Simplicity rules
This simplicity is a key tenet of graph based routing – keeping things minimal helps us build web applications that respond very quickly indeed as there is simply no extra processing going on. If you’re building a very content-heavy application then it’s probably not the right choice, but for APIs it’s incredibly performant.
Let’s have a look at an example application using Superscribe in F#:
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Microsoft.Owin" version="3.0.0" targetFramework="net45" />
  <package id="Microsoft.Owin.Host.HttpListener" version="3.0.0" targetFramework="net45" />
  <package id="Microsoft.Owin.Host.SystemWeb" version="3.0.0" targetFramework="net45" />
  <package id="Microsoft.Owin.Hosting" version="3.0.0" targetFramework="net45" />
  <package id="Owin" version="1.0" targetFramework="net45" />
  <package id="Superscribe" version="0.4.4.15" targetFramework="net45" />
  <package id="Superscribe.Owin" version="0.4.3.14" targetFramework="net45" />
</packages>
namespace Server

open Owin
open Microsoft.Owin
open Superscribe.Owin
open Superscribe.Owin.Engine
open Superscribe.Owin.Extensions

type Startup() =
    member x.Configuration(app: Owin.IAppBuilder) =
        let define = OwinRouteEngineFactory.Create()
        app.UseSuperscribeRouter(define).UseSuperscribeHandler(define) |> ignore

        define.Route("/hello/world", fun _ -> "Hello World" :> obj) |> ignore
        define.Route("/hello/fsharp", fun _ -> "Hello from F#!" :> obj) |> ignore

[<assembly: OwinStartup(typeof<Startup>)>]
do ()
open System
open Microsoft.Owin

[<EntryPoint>]
let main argv =
    let baseAddress = "http://localhost:8888"
    use server = Hosting.WebApp.Start<Server.Startup>(baseAddress)
    Console.WriteLine("Server running on {0}", baseAddress)
    Console.ReadLine() |> ignore
    0
Superscribe defaults to a text/html response and will try its best to deal with whatever object you return from your handler. You can also do all the usual things like specify custom media type serialisers, return status codes and so on.
The key part to focus on here is the define.Route statement, which allows us to directly assign a handler to a particular route – in this case /hello/world and /hello/fsharp. This is kinda cool, but there’s a lot more going on here than meets the eye.
Functions and graph based routing
Graph based routing is so named because it stores route definitions in – you guessed it – a graph structure. Traditional route matching tends to focus on tables of strings and pattern matching based on the entire URL, but Superscribe is different.
In the example above the URL /hello/world gets broken down into its respective segments. Each segment is represented by a node in the graph, with the next possible matches as its children. Subsequent definitions are also broken down and intelligently added into the graph, so in this instance we end up with a graph in which a single hello node has two children: world and fsharp.
Route matching is performed by walking the graph and checking for matches – it’s essentially a state machine. This is great because we only need to check for the segments that we expect; we don’t waste time churning through a large route table.
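The idea can be sketched in a few lines of plain JavaScript – this is illustrative only, not Superscribe’s implementation: definitions are split into segments and merged into a graph, and matching walks that graph one segment at a time:

```javascript
// Illustrative sketch of graph based routing: routes are merged into a
// graph of segment nodes; matching walks the graph like a state machine.
function makeNode(segment) {
  return { segment: segment, children: {}, handler: null };
}

function addRoute(root, path, handler) {
  let node = root;
  for (const segment of path.split("/").filter(Boolean)) {
    // Shared prefixes ("hello") reuse the existing node.
    node.children[segment] = node.children[segment] || makeNode(segment);
    node = node.children[segment];
  }
  node.handler = handler;
}

function match(root, path) {
  let node = root;
  for (const segment of path.split("/").filter(Boolean)) {
    node = node.children[segment]; // only expected segments are ever checked
    if (!node) return null;
  }
  return node.handler;
}
```

Defining /hello/world and /hello/fsharp against the same root produces a single hello node with two children, and matching never scans a flat route table.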
But here’s where it gets interesting. Nodes in graph based routing are comprised of three functions:
- Activation function – returns a boolean indicating if the node is a match for the current segment
- Action function – executed when a match has been found, so we can do things like parameter capture
- Final function – executed when matching finishes on a particular node, i.e the handler
All of these functions can execute absolutely any arbitrary code that we like. With this model we can do some really interesting things such as conditional route matching based on the time of day, a debug flag or even based on live information from a load balancer. Can your pattern matcher do that!?
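As a rough illustration of how those three functions cooperate (again a hand-rolled JavaScript sketch, not Superscribe’s actual types), here is a walker that asks each candidate node’s activation function whether it matches, runs the action function on a match, and invokes the final function of the last matched node. The debug-flag node is an invented example of conditional matching:

```javascript
// Illustrative walker over nodes made of the three functions described above.
function walk(nodes, segments, state) {
  let current = { children: nodes };
  let finalFn = null;
  for (const segment of segments) {
    // Activation function: does this node match the current segment?
    const matched = current.children.find((n) => n.activation(state, segment));
    if (!matched) return null;          // no sibling matched this segment
    matched.action(state, segment);     // action function: e.g. parameter capture
    finalFn = matched.final;
    current = matched;
  }
  return finalFn ? finalFn(state) : null; // final function: the handler
}

// A hypothetical node whose activation runs arbitrary code - a debug flag.
const debugNode = {
  activation: (state, segment) => segment === "debug" && state.debugEnabled,
  action: () => {},
  final: (state) => "debug console",
  children: [],
};
```

Walking `["debug"]` against this node succeeds only when `state.debugEnabled` is set, which is exactly the kind of conditional matching a static pattern matcher can’t express.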
Efficiency, composability and extensibility
Graph based routing allows us to build complex web applications that are composed of very simple units. A good approach is to use action functions to compose a pipeline of functions which get executed synchronously once route matching is complete (is this beginning to sound familiar?), but it can also be used for processing segments on the fly, for example in parameter capture.
Here’s another example that shows this compositional nature in action. We’re going to define and use a new type of node that will match and capture certain strings. Because Superscribe relies on the C# dynamic keyword, I’ve used the ? operator provided by FSharp.Dynamic.
type NameBeginningWith(letter) as this =
    inherit GraphNode()
    do
        this.ActivationFunction <- fun data segment -> segment.StartsWith(letter)
        this.ActionFunctions.Add(
            "set_param_Name",
            fun data segment -> data.Parameters?Add("Name", segment))

type Startup() =
    member x.Configuration(app: Owin.IAppBuilder) =
        let define = OwinRouteEngineFactory.Create()
        app.UseSuperscribeRouter(define).UseSuperscribeHandler(define) |> ignore

        let hello = ConstantNode("hello")
        define.Route(
            hello / NameBeginningWith "p",
            fun o ->
                "Hello " + o?Parameters?Name + ", great first letter!" :> obj) |> ignore
        define.Route(
            hello / String "Name",
            fun o ->
                "Hello " + o?Parameters?Name :> obj) |> ignore

[<assembly: OwinStartup(typeof<Startup>)>]
do ()
In the previous example we relied on the library to build a graph for us given a string – here we’re being explicit and constructing our own using the / operator (neat eh?). Our custom node will only activate when the segment starts with the letter “p”, and if it does then it will store that parameter away in a dynamic dictionary so we can use it later.
If the engine doesn’t match on a node, it’ll continue through its siblings looking for a match there instead. In our case, anything that doesn’t start with “p” will get picked up by the second route – the String parameter node acts as a catch-all.
Pipelines and OWIN
This gets even more exciting when we bring OWIN into the mix. OWIN allows us to build web applications out of multiple pieces of middleware, distinct orthogonal units that run together in a pipeline.
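A conventional linear pipeline can be sketched in a few lines of plain JavaScript (names invented for illustration, not real OWIN middleware): each middleware receives the environment plus a next function, and decides whether to pass the request along:

```javascript
// Illustrative middleware pipeline: reduceRight nests each middleware
// around the one after it, ending at the application itself.
function compose(middleware, app) {
  return middleware.reduceRight(
    (next, mw) => (env) => mw(env, next),
    app
  );
}

// Hypothetical middleware: short-circuits non-https requests.
const requireHttps = (env, next) =>
  env.scheme === "https" ? next(env) : { status: 400 };

// Hypothetical middleware: an orthogonal concern that always passes through.
const logger = (env, next) => {
  env.log = (env.log || []).concat("request seen");
  return next(env);
};

const app = (env) => ({ status: 200 });
const pipeline = compose([logger, requireHttps], app);
```

An https request flows through both middleware and reaches the application; a plain http request is stopped at requireHttps with a 400 and the application never runs.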
Usually these are quite linear, but with graph based routing and its ability to execute arbitrary code, we can build our pipeline on the fly. In this final example, we’re using two pieces of sample middleware to control access to parts of our web application:
type RequireHttps(next: AppFunc) =
    member this.Invoke(environment: IDictionary<string, obj>) : Task =
        match environment.["owin.RequestScheme"].ToString() with
        | "https" -> next.Invoke(environment)
        | _ ->
            environment.["owin.ResponseStatusCode"] <- 400 :> obj
            environment.["owin.ResponseReasonPhrase"] <- "Connection was not secure" :> obj
            Task.FromResult<obj>(null) :> Task

type RequireAuthentication(next: AppFunc) =
    member this.Invoke(environment: IDictionary<string, obj>) : Task =
        let requestHeaders = environment.["owin.RequestHeaders"] :?> Dictionary<string, string>
        match requestHeaders.["Authentication"] with
        | "ABC123" -> next.Invoke(environment)
        | _ ->
            environment.["owin.ResponseStatusCode"] <- 403 :> obj
            environment.["owin.ResponseReasonPhrase"] <- "Authentication required" :> obj
            Task.FromResult<obj>(null) :> Task

type Startup() =
    member x.Configuration(app: Owin.IAppBuilder) =
        let define = OwinRouteEngineFactory.Create()
        app.UseSuperscribeRouter(define).UseSuperscribeHandler(define) |> ignore

        define.Route("admin/token", fun o -> "{ token: ABC123 }" :> obj) |> ignore
        define.Route("admin/users", fun o -> "List all users" :> obj) |> ignore

        let users = define.Route("users")
        define.Route(users / String "UserId", fun o -> "User details for " + o?Parameters?UserId :> obj) |> ignore

        define.Pipeline("admin").Use<RequireHttps>() |> ignore
        define.Pipeline("admin/users").Use<RequireAuthentication>() |> ignore
Superscribe has support for this kind of middleware pipelining built in via the Pipeline method. In the code above we’ve specified that anything under the admin/ route will invoke the RequireHttps middleware, and if we’re doing anything other than requesting a token then we’ll need to provide the correct auth header. Behind the syntactic sugar, Superscribe is simply doing everything using the three types of function that we looked at earlier.
This example is not going to win any awards for security practices, but it’s a pretty powerful demonstration of how these functional-inspired practices of composition and pipelining can help us build some really flexible and maintainable web applications. It turns out that there really is a lot more synergy between F# and the web than most people realise!
Summary
Some aspects still leave a little to be desired from the functional perspective – our functions aren’t exactly pure for example. But this is just the beginning of the relationship between F# and Superscribe. Most of the examples in the post have been ported straight from C# and so don’t really make any use of F# language features.
I’m really excited about what can be achieved when we start bringing things like monads and discriminated unions into the mix; it should make for some super-terse syntax. I’d love to hear some thoughts on this from the community… I’m sure we can do better than previous attempts at monadic url routing at any rate!
I hope you enjoyed today’s advent calendar… special thanks go to Scott Wlaschin for all his technical feedback. I deliberately kept the specifics light here so as not to detract from the message of the post, but you can read more about Superscribe and graph based routing on the Superscribe website.
Merry Christmas to you all!
Pete
References
http://owin.org/
http://sergeytihon.wordpress.com/2014/11/24/f-advent-calendar-in-english-2014/
http://about.me/sergey.tihon
http://superscribe.org/
http://superscribe.org/graphbasedrouting.html
https://github.com/fsprojects/FSharp.Dynamic
https://gist.github.com/unknownexception/6035260
https://github.com/koistya/fsharp-owin-sample
https://github.com/freya-fs/freya
http://blog.ploeh.dk/2013/08/23/how-to-create-a-pure-f-aspnet-web-api-project/
http://wizardsofsmart.net/samples/working-with-non-compliant-owin-middleware/
http://happstack.com/page/view-page-slug/16/comparison-of-4-approaches-to-implementing-url-routing-combinators-including-the-free-and-operational-monads
https://twitter.com/scottwlaschin
I’m seeing a common pattern lately – respected mentors with weight in the community giving people ‘advice’ in the form:
X is usually bad, therefore never do X
Giving this kind of advice IS bad, therefore don’t do it (not even a hint of irony here). If you find yourself making this kind of blanket statement, you need to ask yourself “Who is this advice aimed at?”
Juniors/Intermediates
Often we want to direct our information at people who are still learning, but are being led astray by the majority of advice they may be reading. In doing so, we hope that they can avoid making a mistake without having to first master the subject.
This is a noble motive. The flaw in this approach is that making mistakes is a really important way in which people learn.
When you have driving lessons, you’re being taught how to operate a car safely and reliably enough to pass a test. Learning to ‘drive’ is an ongoing process that takes years of practice, almost all of which will come after you pass your test and get out on your own.
What makes this tricky is that often junior developers within teams are not given this early opportunity to make the mistakes they need to… they are expected to be able to drive the car on their own, or teach themselves to do it.
Your blanket advice is bad for this audience. It will conflict with what they are learning and confuse them.
Seniors/Experts
Some people in this audience are still very keen to learn, others are very set in their ways. Depending on your role or standing, this is probably where the bulk of your followers lie. There’s a thin line between a senior who is very keen to learn and a junior as both have a capacity to misinterpret advice.
Assuming that your blanket statement is encountered by a true expert however, they may be offended and rightly so. These people assume that you know your stuff and you know your audience, after all you’ve worked hard to get where you are right?
When an expert encounters an unbalanced statement that does not take into account the true circumstances and complexity of the situation, they immediately either question it or dismiss it. Most likely it’s the latter; you’ve not helped your cause and you’ve made yourself look like a bit of a tit.
Your blanket advice is bad for this audience. It will insult them and undermine you.
Your peers
When directing content at your peers, it is much more likely to spark debate and inspire theoretical discussion which will help drive both your ideas and the community forward. You can stand to omit well understood details, be smug, sarcastic or controversial without fear of your advice being misconstrued.
In this context however, what you’re really conveying is a concept, idea or opinion which is at risk of being consumed incorrectly as advice by one of your other audiences.
This is what is known as Leaky Advice. It’s leaky in terms of its target audience, and leaky because you can’t cover a complex underlying problem with a simple statement. Just like leaky abstractions though, it’s only a problem if your audience or use case is not the correct one.
Your blanket advice is great for this audience, but it’s no longer advice and you shouldn’t frame it as such.
Solving the root problem
If you seek to provide advice, you have a duty to educate your audience correctly. The more followers you have and the more respected in the community you are, the more important this becomes.
Bad advice is propagated by people with a poor understanding – they’ve read an over-generalised post or tweet somewhere and treated it as gospel because it has come from a reliable source, without thinking about the consequences.
By tweeting blanket statements to the wrong audience you are not helping to end the bad practice that you were trying to educate people against; you’ve simply made it worse by providing more bad advice and adding to the confusion.
We owe it to our followers and readers to provide balanced arguments along with evidence. We are scientists after all.
Pete
This week on twitter we find ourselves back on the subject of OWIN, and once again the battle lines are drawn and there is much consternation.
The current debate goes thusly… should we attempt to build a centralised Dependency Injection wrapper, available to any middleware and allowing them – essentially – to share state?
If you’d like some context to this post, you can also read:
- The project that inspired the debate: http://www.tugberkugurlu.com/archive/owin-dependencies–an-ioc-container-adapter-into-owin-pipeline
- The main twitter thread – https://twitter.com/tourismgeek/status/435438467309252608
- Discussion summarising the issue – https://groups.google.com/forum/#!topic/net-http-abstractions/fjEa3Luyc5E
Is sharing state in this way considered an anti-pattern, or even a bastardisation of OWIN itself? To answer this, we need to ask ourselves some questions.
What *is* OWIN?
Paraphrasing, from the OWIN specification itself:
… OWIN, a standard interface between .NET web servers and web applications. The goal of OWIN is to decouple server and application…
The specification also defines some terms:
- Application – Possibly built on top of a Web Framework, which is run using OWIN compatible Servers.
- Web Framework – A self-contained component on top of OWIN exposing its own object model or API that applications may use to facilitate request processing. Web Frameworks may require an adapter layer that converts from OWIN semantics.
- Middleware – Pass through components that form a pipeline between a server and application to inspect, route, or modify request and response messages for a specific purpose.
This helps clear some things up… particularly about the boundaries between our concerns and the terminology that identifies them. An application is built on top of a web framework, and the framework itself should be self-contained.
What should OWIN be used for?
OWIN purists say that middleware is an extension of the HTTP pipeline as a whole… the journey from its source to your server, passing through many intermediaries capable of caching or otherwise working with the request itself. OWIN middleware are simply further such intermediaries that happen to be fully within your control.
But there is another view – that OWIN provides an opportunity to augment or compose an application from several reusable, framework-agnostic middleware components. This is clearly at odds with the specification, but that doesn’t mean it’s without merit.
Composing applications in this way takes the strain off framework developers, and allows us all to work together towards a common goal. It allows us to build composite applications involving multiple web frameworks, leveraging their relative strengths and weaknesses within a single domain.
A lot of the purists are already unhappy with the direction that Microsoft has taken with their OWIN implementation – Katana. By and large I think they were just being practical and didn’t have the time to wait around for the decision of a committee, but this has only served to further muddy the waters when defining OWIN’s identity, purpose and intended usage.
If this isn’t a use for OWIN, then what is it?
When I began to learn about OWIN, I intrinsically ended up in the composable applications camp, as did several others I know. I would love to see our disparate framework communities unite, and the availability of framework-agnostic modules could only be a good thing in this regard. But the specification is clear… this is not what OWIN is for.
On the subject of the specification and its definitions from earlier though, I think there is one quite glaring error. This error wasn’t present when the OWIN specification was drawn up, but rather came to be due to the effect that OWIN and middleware such as SignalR have had on the way we think about building applications:
Instead of building on top of a framework, the inverse is now true: we build our application out of frameworks, plural.
So what now?
What we are really after is a bootstrapper that allows us to run a pipeline of framework-agnostic components from *within* the context of our applications. If we execute this pipeline just beyond the boundary, it will have exactly the same effect as middleware in the OWIN pipeline but with the correct separation of concerns, and with access to shared state.
This bootstrapper could (probably should?) be a terminating middleware, that itself can hand off control to whatever frameworks your application is built from. Alternatively it could be a compatibility layer built into frameworks themselves… although I think that getting people to agree on a common interface is probably a ‘pipe’ dream.
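To make the proposal a little more concrete, here is a speculative sketch of such a bootstrapper in plain JavaScript. Everything here is hypothetical – no such library exists – but it shows the shape of the idea: a single terminating component runs an inner pipeline of framework-agnostic parts that share state through a container scoped to the application:

```javascript
// Hypothetical bootstrapper: a terminating middleware that runs its own
// inner pipeline. Components share state via a container that never
// leaks into the outer OWIN pipeline.
function bootstrapper(components) {
  return function terminatingMiddleware(env) {
    const container = {};                  // shared state, application-scoped
    for (const component of components) {
      const result = component(env, container);
      if (result) return result;           // a component handled the request
    }
    return { status: 404 };                // nothing claimed the request
  };
}

// Two invented framework-agnostic components sharing the container.
const authenticate = (env, container) => {
  container.user = env.token === "ABC123" ? "pete" : null;
  return container.user ? null : { status: 403 };
};

const framework = (env, container) =>
  ({ status: 200, body: "Hello " + container.user });
```

Here `bootstrapper([authenticate, framework])` behaves like middleware from the outside, yet the authentication component and the framework component share state freely on the inside.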
Our community clearly desires such a mechanism for composing applications, and allowing interoperability between frameworks. But that’s not what OWIN is for, and if we are serious about our goal, we’ll need to work together to meet the challenge.
Please leave your comments below.
Pete
In the first part of this series, I talked about the challenges of tracking changes to complex viewmodels in knockout, using isDirty() (see here and here) and getChanges() methods.
In this second part, I’ll go through how we extended this initial approach so we could track changes to array elements as well as regular observables. If you haven’t already, I suggest you have a read of part one as many of the examples build on code from the first post.
Starting Simple
For the purposes of this post we are only considering ‘Primitive’ arrays… these are arrays of values such as strings and numbers, as opposed to complex objects with properties of their own. Previously we created an extender that allows us to apply change tracking to a given observable, and we’re using the same approach here.
We won’t be re-using the existing extender, but we will use some of the same code for iterating over our model and applying it to our observables. In that vein, here’s a skeleton for our change tracked array extender… it has a similar structure to our previous one:
ko.extenders.trackArrayChange = function (target, track) {
    if (track) {
        target.isDirty = ko.observable(false);
        target.added = ko.observableArray([]);
        target.removed = ko.observableArray([]);

        var addItem = function (item) {
            // ...
        };

        var removeItem = function (item) {
            // ...
        };

        target.getChanges = function () {
            var result = {
                added: target.added,
                removed: target.removed
            };
            return result;
        };

        // ...
        // ...
    }
};
You should notice a few differences however:
- Two observable arrays are being exposed in addition to the isDirty() flag – added and removed
- The getChanges() method returns a complex object also containing adds and removes
As this functionality was developed with HTTP PATCH in mind, we’re assuming that we will need to track both the added items and the removed items, so that we can only send the changes back to the server. If you aren’t using PATCH, it can be sufficient just to know that a change has occurred and then save your data by replacing the entire array.
Two last points: we’re treating any ‘change’ to an existing element as an add plus a delete – these are just primitive values, after all. Also, the ordering of elements is not going to be tracked (although this is possible, and will be covered in the next post).
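The cancelling behaviour we’re aiming for – an add followed by a remove of the same value leaving no recorded changes, and vice versa – can be sketched independently of Knockout with plain arrays, mirroring the addItem/removeItem/getChanges shape of the extender:

```javascript
// Plain-JavaScript sketch of the add/remove cancellation semantics,
// independent of Knockout observables.
function makeTracker() {
    var added = [];
    var removed = [];

    // If the item already sits in the opposite list, the two changes
    // negate each other; otherwise record the new change.
    var cancelOrPush = function (opposite, list, item) {
        var i = opposite.indexOf(item);
        if (i >= 0) opposite.splice(i, 1);
        else list.push(item);
    };

    return {
        addItem: function (item) { cancelOrPush(removed, added, item); },
        removeItem: function (item) { cancelOrPush(added, removed, item); },
        getChanges: function () {
            return { added: added.slice(), removed: removed.slice() };
        }
    };
}
```

Pushing 4 and then popping it again leaves both lists empty, while a genuine remove of “WebForms” and add of “Change tracking” are both recorded.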
Array subscriptions
Prior to Knockout 3.0, we had to provide alternative methods to the usual push() and pop() so that we could keep track of array elements… subscribing to the observableArray itself would only notify you if the entire array was replaced. As of Knockout 3.0 though, we now have a way to subscribe to array element changes themselves!
We’re using the latest version for this example, but check the links at the bottom of the third post in the series if you are interested in the old version.
Let’s begin to flesh out the skeleton a little more:
// ...
// ...
target.getChanges = function () {
    var result = {
        added: target.added(),
        removed: target.removed()
    };
    return result;
};

target.subscribe(function (changes) {
    ko.utils.arrayForEach(changes, function (change) {
        switch (change.status) {
            case "added":
                addItem(change.value);
                break;
            case "deleted":
                removeItem(change.value);
                break;
        }
    });
}, null, "arrayChange");
Now we’ve added an arrayChange subscription, we’ll be notified whenever anyone pops, pushes or even splices our array. In the event of the latter, we’ll receive multiple changes so we have to cater for that eventuality.
We’ve deferred the actual tracking of the changes to private methods, addItem() and removeItem(). The reason for this becomes clear when you consider what you’d expect to happen after performing the following operations:
// Initialise array with some data and track changes
var trackedArray = ko.observableArray([1, 2, 3]).extend({ trackArrayChange: true });

trackedArray.getChanges();
// -> { } No changes yet

trackedArray.push(4);
trackedArray.pop();

trackedArray.getChanges();
// -> { } Changes have negated each other
In order to achieve this behaviour, we first need to check that the item in question has not already been added to one of the lists, like so:
// ...
// ...
var findItem = function (array, item) {
    return ko.utils.arrayFirst(array, function (o) {
        return o === item;
    });
};

var addItem = function (item) {
    var previouslyRemoved = findItem(target.removed(), item);
    if (previouslyRemoved) {
        target.removed.remove(previouslyRemoved);
    } else {
        target.added.push(item);
        target.isDirty(true);
    }
};

var removeItem = function (item) {
    var previouslyAdded = findItem(target.added(), item);
    if (previouslyAdded) {
        target.added.remove(previouslyAdded);
    } else {
        target.removed.push(item);
        target.isDirty(true);
    }
};
// ...
// ...
Applying this to the view model
A change tracked primitive array is unlikely to be very useful on its own, so we need to make sure that we can track changes to an observable array regardless of where it appears in our view model. Let’s revisit the code from our previous sample that traversed the view model and extended all the observables it encountered:
// ...
// ...
var applyChangeTrackingToObservable = function (observable) {
    // Only apply to basic writeable observables
    if (observable && !observable.nodeType && !observable.refresh && ko.isObservable(observable)) {
        if (!observable.isDirty) observable.extend({ trackChange: true });
    }
};

var applyChangeTracking = function (obj) {
    var properties = getObjProperties(obj);
    ko.utils.arrayForEach(properties, function (property) {
        applyChangeTrackingToObservable(property.value);
    });
};
// ...
// ...
In order to properly apply change tracking to our model, we need to detect whether a given observable is in fact an observableArray, and if so then apply the new extender instead of the old one. This is not actually as easy as it sounds… based on the status of this pull request, Knockout seems to provide no mechanism for doing this (please correct me if you know otherwise!).
Luckily, this thread had the answer… we can simply extend the observableArray “prototype” by adding the following line somewhere in global scope:
ko.observableArray.fn.isObservableArray = true;
Assuming that’s in place, our change becomes very simple:
// ...
// ...
var applyChangeTrackingToObservable = function (observable) {
    // Only apply to basic writeable observables
    if (observable && !observable.nodeType && !observable.refresh && ko.isObservable(observable)) {
        if (observable.isObservableArray) {
            observable.extend({ trackArrayChange: true });
        } else {
            if (!observable.isDirty) observable.extend({ trackChange: true });
        }
    }
};
// ...
// ...
We don’t need to change any of the rest of the wireup code from the first sample, as we are already working through our view model recursively and letting applyChangeTrackingToObservable do its thing.
That’s all the code we needed, now we can take it for a spin!
var viewModel = {
    Name: ko.observable("Pete"),
    Age: ko.observable(29),
    Skills: ko.observableArray([
        "TDD", "Knockout", "WebForms"
    ]),
    Occupation: ko.observable("Developer")
};

applyChangeTracking(viewModel);

viewModel.Occupation("Blogger");
viewModel.Skills.push("Change tracking");
viewModel.Skills.remove("WebForms");

getChangesFromModel(viewModel);

/* -> {
       "Skills": {
           added: ["Change tracking"],
           removed: ["WebForms"]
       },
       Occupation: "Blogger"
   } */
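The "netting" behaviour visible in the output above — an item that is added and then removed again produces no reported change — can be sketched as a standalone function. This is an illustration only: the name netArrayChanges and the change-record shape are my own, loosely modelled on Knockout's arrayChange notifications, not code from the original post.

```javascript
// Illustrative sketch: collapse a sequence of array change records
// ({ status: "added" | "deleted", value: ... }) into net added/removed lists.
// An add followed by a delete of the same value (or vice versa) cancels out.
function netArrayChanges(changeRecords) {
    var added = [], removed = [];
    changeRecords.forEach(function (change) {
        var target = change.status === "added" ? added : removed;
        var opposite = change.status === "added" ? removed : added;
        var index = opposite.indexOf(change.value);
        if (index !== -1) {
            opposite.splice(index, 1); // cancels an earlier opposite change
        } else {
            target.push(change.value);
        }
    });
    return { added: added, removed: removed };
}
```

So adding "F#" and then deleting it again within the same tracking session would contribute nothing to either list, while the unmatched changes survive.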
Summary
We’ve seen how we can make use of the new arraySubscriptions feature in Knockout 3.0 to get notified about changes to array elements. We made sure that we didn’t get strange results when items were added and then removed again or vice-versa, and then integrated the whole thing into a change tracked viewmodel.
In the third and final post in this series, we’ll go the whole hog and enable change tracking for complex and nested objects within arrays.
You can view the full code for this post here: https://gist.github.com/Roysvork/8743663, or play around with it in jsFiddle!
Pete
As part of a project I’ve been working on for a client, we’ve decided to implement HTTP PATCH in our API for making changes. The main client consuming the API is a web application driven by Knockout.JS, so this meant we had to find a way to figure out what had changed on our view model, and then send just those values over the wire.
There is nothing new or exciting about this requirement in itself. The question has been posed before and it was the subject of a blog post way back in 2011 by Ryan Niemeyer. What was quite exciting however was that our solution ended up doing much more than just detect changes to viewmodels. We needed to keep tabs on individual property changes, changes to arrays (adds/deletes/modifications), changes to child objects and even changes to child objects nested within arrays. The result was a complete change tracking implementation for Knockout that can process not just one object but a complete object graph.
In this two part post I’ll attempt to share the code, the research and the story of how we arrived at the final implementation.
Identifying that a change has occurred
The first step was to get basic change tracking working given a view model with observable properties containing values – no complex objects.
Initial googling turned up the following approaches as a starting point:
http://www.knockmeout.net/2011/05/creating-smart-dirty-flag-in-knockoutjs.html
http://www.johnpapa.net/spapost10/
http://www.dotnetcurry.com/showarticle.aspx?ID=876
These methods all involved some variation on adding an isDirty computed observable to your view model. Ryan’s example stores the initial state of the object when it is defined, which can then be used as a point of comparison to figure out if a change has occurred.
Suprotim’s approach is based on Ryan’s method but instead of storing a json snapshot of the initial object (which could potentially be very large for complex view models), it merely subscribes to all the observable properties of the view model and sets the isDirty flag accordingly.
Both of these are very lightweight and efficient ways of detecting that a change has occurred, but as detailed in this thread they can’t pinpoint exactly which observable caused the change. Something more was needed.
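The shape of the subscribe-based approach described above can be sketched as follows. This is a self-contained illustration rather than the exact code from the linked articles: a tiny stand-in for ko.observable is included so it runs without Knockout, and the function names are my own.

```javascript
// Minimal stand-in for ko.observable so the sketch is self-contained.
function observable(initial) {
    var value = initial;
    var subscribers = [];
    var obs = function (newValue) {
        if (arguments.length === 0) return value;
        value = newValue;
        subscribers.forEach(function (fn) { fn(newValue); });
    };
    obs.subscribe = function (fn) { subscribers.push(fn); };
    return obs;
}

// The subscribe-based approach: flip a single isDirty flag whenever any
// observable property changes. Cheap and simple, but it cannot tell you
// *which* property changed - hence "something more was needed".
function addIsDirtyFlag(viewModel) {
    var isDirty = observable(false);
    Object.keys(viewModel).forEach(function (key) {
        var prop = viewModel[key];
        if (typeof prop === "function" && prop.subscribe) {
            prop.subscribe(function () { isDirty(true); });
        }
    });
    viewModel.isDirty = isDirty; // attached last so we don't subscribe to it
}
```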
Tracking changes to simple values
After a bit more digging, a clever solution to the problem of tracking changes to individual properties emerged as described by Stack Overflow one hit wonder, Brett Green in the answer to this question and also in slightly more detail on his blog.
This made use of Knockout extenders to add properties to the observables themselves; an overall isDirty() method for the view model as a whole could then be provided by a computed observable. This post almost entirely formed the basis for the first version. After a bit of restructuring, we soon had an implementation that allowed us to track changes to a flat view model:
var getObjProperties = function (obj) {
    var objProperties = [];
    var val = ko.utils.unwrapObservable(obj);
    if (val !== null && typeof val === 'object') {
        for (var i in val) {
            if (val.hasOwnProperty(i)) objProperties.push({ "name": i, "value": val[i] });
        }
    }
    return objProperties;
};

ko.extenders.trackChange = function (target, track) {
    if (track) {
        target.isDirty = ko.observable(false);
        target.originalValue = target();
        target.subscribe(function (newValue) {
            // use != not !== so numbers will equate naturally
            target.isDirty(newValue != target.originalValue);
            target.isDirty.valueHasMutated();
        });
    }
    return target;
};

var applyChangeTrackingToObservable = function (observable) {
    // Only apply to basic writeable observables
    if (observable && !observable.nodeType && !observable.refresh && ko.isObservable(observable)) {
        if (!observable.isDirty) observable.extend({ trackChange: true });
    }
};

var applyChangeTracking = function (obj) {
    var properties = getObjProperties(obj);
    ko.utils.arrayForEach(properties, function (property) {
        applyChangeTrackingToObservable(property.value);
    });
};

var getChangesFromModel = function (obj) {
    var changes = null;
    var properties = getObjProperties(obj);
    ko.utils.arrayForEach(properties, function (property) {
        if (property.value != null && typeof property.value.isDirty != "undefined" && property.value.isDirty()) {
            changes = changes || {};
            changes[property.name] = property.value();
        }
    });
    return changes;
};
An example of utilising this change tracking is as follows:
var viewModel = {
    Name: ko.observable("Pete"),
    Age: ko.observable(29),
    Occupation: ko.observable("Developer")
};

applyChangeTracking(viewModel);

viewModel.Occupation("Unemployed");

getChangesFromModel(viewModel);
// -> { "Occupation": "Unemployed" }
Detecting changes to complex objects
The next task was to ensure we could work with properties containing complex objects and nested observables. The issue here is that the isDirty property of an observable is only set when its contents are replaced. Modifying a child property of an object within an observable will not trigger the change tracking.
This thread on google groups seemed to be going in the right direction and even had links to two libraries already built:
- Knockout-Rest seemed promising, but although this was able to detect changes in complex properties and even roll them back, it still could not pinpoint the individual properties that triggered the change.
- EntitySpaces.js seemed to contain all the required elements, but it relied on generated classes and the change tracking features were too tightly coupled to its main use as a data access framework. At the time of writing it had not been updated for two years.
In the end we came up with a solution ourselves. In order to detect that a change had occurred further down the graph, we modified the existing isDirty extension member so that in the event that the value of our observable property was a complex object, it should also take into account the isDirty value of any properties of that child object:
var traverseObservables = function (obj, action) {
    ko.utils.arrayForEach(getObjProperties(obj), function (observable) {
        if (observable && observable.value && !observable.value.nodeType && ko.isObservable(observable.value)) {
            action(observable);
        }
    });
};

ko.extenders.trackChange = function (target, track) {
    if (track) {
        target.hasValueChanged = ko.observable(false);
        target.hasDirtyProperties = ko.observable(false);
        target.isDirty = ko.computed(function () {
            return target.hasValueChanged() || target.hasDirtyProperties();
        });

        var unwrapped = target();
        if ((typeof unwrapped == "object") && (unwrapped !== null)) {
            traverseObservables(unwrapped, function (obj) {
                applyChangeTrackingToObservable(obj.value);
                obj.value.isDirty.subscribe(function (isdirty) {
                    if (isdirty) target.hasDirtyProperties(true);
                });
            });
        }

        target.originalValue = target();
        target.subscribe(function (newValue) {
            // use != not !== so numbers will equate naturally
            target.hasValueChanged(newValue != target.originalValue);
            target.hasValueChanged.valueHasMutated();
        });
    }
    return target;
};
Now when extending an observable to apply change tracking, if we find that the initial value is a complex object we also iterate over any properties of our child object and recursively apply change tracking to those observables as well. We also set up subscriptions to the resulting isDirty flags of the child properties to ensure we set the hasDirtyProperties flag on the target.
Tracking individual changes within complex objects
After the previous modifications, our change tracking now behaves like this:
var viewModel = {
    Name: ko.observable("Pete"),
    Age: ko.observable(29),
    Skills: ko.observable({
        Tdd: ko.observable(true),
        Knockout: ko.observable(true),
        ChangeTracking: ko.observable(false)
    }),
    Occupation: ko.observable("Developer")
};

applyChangeTracking(viewModel);

viewModel.Skills().ChangeTracking(true);

getChangesFromModel(viewModel);

/* -> {
       "Skills": {
           Tdd: function observable() { .... },
           Knockout: function observable() { .... },
           ChangeTracking: function observable() { .... }
       }
   } */
Obviously there’s something missing here… we know that the Skills object has been modified and we also technically know which property of the object was modified but that information isn’t being respected by getChangesFromModel.
Previously it was sufficient to pull out changes by simply returning the value of each observable. That’s no longer the case so we have to add a getChanges method to our observables at the same level as isDirty, and then use this instead of the raw value when building our change log:
ko.extenders.trackChange = function (target, track) {
    if (track) {
        // ...
        // ...
        if (!target.getChanges) {
            target.getChanges = function () {
                var obj = target();
                if ((typeof obj == "object") && (obj !== null)) {
                    if (target.hasValueChanged()) {
                        return ko.mapping.toJS(obj);
                    }
                    return getChangesFromModel(obj);
                }
                return target();
            };
        }
    }
    return target;
};

var getChangesFromModel = function (obj) {
    var changes = null;
    var properties = getObjProperties(obj);
    ko.utils.arrayForEach(properties, function (property) {
        if (property.value != null && typeof property.value.isDirty != "undefined" && property.value.isDirty()) {
            changes = changes || {};
            changes[property.name] = property.value.getChanges();
        }
    });
    return changes;
};
Now our getChangesFromModel will operate recursively and produce the results we’d expect. I’d like to draw your attention to this section of the above code in particular:
if ((typeof obj == "object") && (obj !== null)) {
    if (target.hasValueChanged()) {
        return ko.mapping.toJS(obj);
    }
    return getChangesFromModel(obj);
}
There’s a reason we’ve been using separate observables to track hasValueChanged and hasDirtyProperties; in the event that we have replaced the contents of the observable wholesale, we must pull out all the values.
Here’s the change tracking complete with complex objects in action:
var viewModel = {
    Name: ko.observable("Pete"),
    Age: ko.observable(29),
    Skills: ko.observable({
        Tdd: ko.observable(true),
        Knockout: ko.observable(true),
        ChangeTracking: ko.observable(false),
        Languages: ko.observable({
            Csharp: ko.observable(false),
            Javascript: ko.observable(false)
        })
    }),
    Occupation: ko.observable("Developer")
};

applyChangeTracking(viewModel);

viewModel.Skills().ChangeTracking(true);
viewModel.Skills().Languages({
    Csharp: ko.observable(true),
    Javascript: ko.observable(true)
});

getChangesFromModel(viewModel);

/* -> {
       "Skills": {
           ChangeTracking: true,
           Languages: {
               Csharp: true,
               Javascript: true
           }
       }
   } */
Summary
In this post we’ve seen how we can use a knockout extender and an isDirty observable to detect changes to individual properties within a view model. We’ve also seen some of the potential pitfalls you may encounter when dealing with nested complex objects and how we can overcome these to provide a robust change tracking system.
In the second part of this post, we’ll look at the real killer feature… tracking changes to complex objects within arrays.
You can view the full code for the finished example here: https://gist.github.com/Roysvork/8744757 or play around with the jsFiddle!
Pete
Edit: As part of the research for this post, I did come across https://github.com/ZiadJ/knockoutjs-reactor which takes a very similar approach and even handles arrays. It’s a shame I had not seen this when writing the code as it would have been quite useful.
You’re a strong supporter of the benefits of continuous integration… whenever anyone commits to your source repo, all your tests are run. You have close to 100% coverage (or as close as you desire to have), so if anything breaks you’re going to know about it. Once stuff is pushed and all the tests pass then you’re good to ship, right?
WRONG.
Do you know how many tests you have in your solution? Does everyone in your team know? If they don’t know, do they have ready access to this figure both before and after the push? This figure is important; it’s your TDD Golden Number.
All is not what it seems
Someone you work with is trying out a new GitHub client. Sure, it’s not new in the grand scheme of things, but it’s new to YOUR TEAM. It should work fine, but just like a screw from a disassembled piece of furniture, just because it fits the hole it was removed from doesn’t mean it will fit another.
Something goes wrong with a rebase or a merge and you don’t notice. This shit should be easy and happen as a matter of course, but this time it doesn’t and changesets get lost. Not only that, but those changesets span a complete Red-Green-Refactor cycle, so you’ve lost both code and tests.
The build runs… green light. But…
UNLESS YOU KNOW YOUR GOLDEN NUMBER and have a way of verifying it against the number of tests that ran, you have no idea if this green light is a true indicator of success. All you know is that all the tests in source control pass.
Risk management
Granted, the chances of a scenario where commits get lost during a merge are slim, but if any loss is possible then the chances are that you’ll lose tests as well as code because they’re probably in the same commit. This leads us to:
POINT THE FIRST: Make sure you commit your tests separately from the code that makes them pass.
Some may argue that you can take this one step further and commit each step of Red-Green-Refactor separately, and this works too so long as you don’t push until you have a green light. This is a good starting point for minimizing the chance of loss.
POINT THE SECOND: Test your golden number
You’ll need to be careful how you deal with this one from a process and team management point of view, but why not VERIFY YOUR GOLDEN NUMBER? Write a test that checks that the number of tests matches what it was at the time of the commit.
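A minimal sketch of what such a guard might look like — every name here is hypothetical, since the real thing depends entirely on how your test framework discovers tests:

```javascript
// Hypothetical "golden number" guard: fail the build if the number of
// discovered tests no longer matches the count recorded at commit time.
var GOLDEN_NUMBER = 3; // updated in the same commit that adds or removes tests

// Count the functions on a suite object whose names start with "test".
function countTests(suite) {
    return Object.keys(suite).filter(function (name) {
        return name.indexOf("test") === 0 && typeof suite[name] === "function";
    }).length;
}

function verifyGoldenNumber(suite) {
    return countTests(suite) === GOLDEN_NUMBER;
}

// A stand-in test suite for illustration.
var suite = {
    testAddItem: function () {},
    testRemoveItem: function () {},
    testNetChanges: function () {}
};
```

If a merge silently drops a test, verifyGoldenNumber returns false and the build goes red even though every remaining test passes.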
Wait a minute…
There’s a good reason why this alone might not be enough; what if the number of tests lost is equal to the number of tests added anew by the commit?
POINT THE MOST IMPORTANT: COMMIT YOUR TESTS TO A SEPARATE REPO
The chances of our worst case scenario playing out now, with so many distinct steps, are orders of magnitude lower than in our previous case. For disaster to go unnoticed, you have to lose the tests AND the accompanying golden number test AND the code that was in a separate commit to a separate repository.
All in all, a good way to think about the golden number is like this:
Seriously though
Even with the best intentions and code coverage, there’s always a chance that something may go wrong and you won’t know about it. When employed together, these three points will help you efficiently mitigate this risk.
Pete
I was fortunate to speak to a lot of people about OWIN, both in the run up to and during the recent NDC London conference. And I tell you what, I’m at a complete loss as to what the hell has happened. OWIN is a great standard, with some great support from many sources and with some great minds putting in their time and effort. Despite this though, things are in dire straits.
One particular issue has cropped up which has got everyone faffing about. There has been a lot of ‘discussion’ on this recently both on Github and on Twitter. And quite frankly I’m rather frustrated by it all.
For those that don’t know, can’t be bothered to read the thread or simply can’t fathom it out from all the tangents and confusion, there’s currently no ‘standard’ way of writing middleware that will ensure that any given provider will be able to wire it into the pipeline.
Microsoft came up with a wire up scenario in the form of an IAppBuilder implementation when they implemented Katana, which utilises a class with a constructor and an invoke function, but this is not part of the OWIN specification. There’s another wire up solution in the form of Mark Rendle’s “fix” that just needs a lambda function, but this isn’t part of OWIN either. There just isn’t a standard at all.
So basically at the moment, it’s nigh on impossible to implement middleware that may choose routes through the pipeline, or otherwise wrap parts of the pipeline and work with it in a generic manner. And for all the discussion going on there’s no solution or agreement in sight. You just have to hope that the wire up scenario that’s available supports your chosen signature.
How the hell has this happened??
Forgive me, but I thought this was a core tenet of OWIN – interoperability – between middleware, OWIN-compatible hosting environments and in general? Yet the end result of all this is that we’ve got a major interoperability problem… the exact opposite of the OWIN philosophy.
It’s not hard, it’s not rocket science and it doesn’t need protracted discussion. JUST PICK A F***ING SIGNATURE ALREADY.
Func<AppFunc, AppFunc> – There, I did it.
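For what it’s worth, that signature composes trivially. Here is the same shape expressed in JavaScript rather than C# — purely an illustration of why “middleware = function from app to app” works, not anything from the OWIN spec itself:

```javascript
// A middleware is a function that takes the next application function and
// returns a new application function wrapping it - the JS analogue of
// Func<AppFunc, AppFunc>. Each layer records itself in env.trace.
function logger(next) {
    return function (env) {
        env.trace = (env.trace || []).concat("logger");
        return next(env);
    };
}

function auth(next) {
    return function (env) {
        env.trace = (env.trace || []).concat("auth");
        return next(env);
    };
}

// The terminal application function at the end of the pipeline.
function terminal(env) {
    env.trace = (env.trace || []).concat("app");
    return env;
}

// Composing a pipeline is then just function application.
var app = logger(auth(terminal));
```

Because every middleware has the same shape, a host (or another middleware) can wrap, reorder or branch the pipeline generically — which is exactly the interoperability the current situation lacks.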