
Introducing… Beyond Code

 

 


It is with some sadness that I announce the retirement of the roysvork blog. It’s been a great run, but it’s time to move to a new domain and bigger & better things. I won’t be deleting this blog or removing anything, but you won’t see any new posts here and I won’t be migrating existing content across to the new site.

Beyond Code

While I still hold a great love for all things technical, my focus has shifted; as a consultant I code less and so have fewer code-related things to write about. Learning to program is one thing, but that alone doesn’t make you a productive member of a development team. Making software is a people business rather than a tech business, and it always has been… and with this comes a whole bunch of new things to write about.

With this in mind I’d like to introduce my new rebranded company and blog – Beyond Code. Code is just the beginning, and I hope you’ll join me for the rest of our journey over at the new site. My twitter handle has also changed… you can follow all the latest updates at @beyond_code.

See you all there 🙂

Pete


Functional web synergy with F# and OWIN

Before we get started I’d just like to mention that this post is part of the truly excellent F# Advent Calendar 2014 which is a fantastic initiative organised by Sergey Tihon, so big thanks to Sergey and the rest of the F# community as well as wishing you all a merry christmas!

Introduction

Using F# to build web applications is nothing new, we have purpose built F# frameworks like Freya popping up and excellent posts like this one by Mark Seemann. It’s also fairly easy to pick up other .NET frameworks that weren’t designed specifically for F# and build very solid applications.

With that in mind, I’m not just going to write another post about how to build web applications with F#.

Instead, I’d like to introduce the F# community to a whole new way of thinking about web applications, one that draws inspiration from a number of functional programming concepts – primarily pipelining and function composition – to provide a solid base on to which we can build our web applications in F#. This approach is currently known as Graph Based Routing.

Some background

So first off – I should point out that I’m not actually an F# guy; in fact I’m pretty new to the language in general so this post is also somewhat of a learning exercise for me. I often find the best way to get acquainted with things is to dive right in, so please feel free to give me pointers in the comments.

Graph based routing itself has been around for a while, in the form of a library called Superscribe (written in C#). I’m not going to go into detail about its features; these are language agnostic, and covered by the website and some previous posts.

What I will say is that Superscribe is not a full blown web framework but actually a routing library. In fact, that’s somewhat of an oversimplification… in reality this library takes care of everything between URL and handler. It turns out that routing, content negotiation and some way of invoking a handler is actually all you need to get started building web applications.

Simplicity rules

This simplicity is a key tenet of graph based routing – keeping things minimal helps us build web applications that respond very quickly indeed as there is simply no extra processing going on. If you’re building a very content-heavy application then it’s probably not the right choice, but for APIs it’s incredibly performant.

Let’s have a look at an example application using Superscribe in F#:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Microsoft.Owin" version="3.0.0" targetFramework="net45" />
  <package id="Microsoft.Owin.Host.HttpListener" version="3.0.0" targetFramework="net45" />
  <package id="Microsoft.Owin.Host.SystemWeb" version="3.0.0" targetFramework="net45" />
  <package id="Microsoft.Owin.Hosting" version="3.0.0" targetFramework="net45" />
  <package id="Owin" version="1.0" targetFramework="net45" />
  <package id="Superscribe" version="0.4.4.15" targetFramework="net45" />
  <package id="Superscribe.Owin" version="0.4.3.14" targetFramework="net45" />
</packages>

namespace Server

open Owin
open Microsoft.Owin
open Superscribe.Owin
open Superscribe.Owin.Engine
open Superscribe.Owin.Extensions

type Startup() =
    member x.Configuration(app: Owin.IAppBuilder) =
        let define = OwinRouteEngineFactory.Create()
        app.UseSuperscribeRouter(define).UseSuperscribeHandler(define) |> ignore

        define.Route("/hello/world", fun _ -> "Hello World" :> obj) |> ignore
        define.Route("/hello/fsharp", fun _ -> "Hello from F#!" :> obj) |> ignore

[<assembly: OwinStartup(typeof<Startup>)>]
do ()

open System
open Microsoft.Owin

[<EntryPoint>]
let main argv =
    let baseAddress = "http://localhost:8888"
    use a = Hosting.WebApp.Start<Server.Startup>(baseAddress)
    Console.WriteLine("Server running on {0}", baseAddress)
    Console.ReadLine() |> ignore
    0

Superscribe defaults to a text/html response and will try its best to deal with whatever object you return from your handler. You can also do all the usual things like specify custom media type serialisers, return status codes etc.

The key part to focus on here is the define.Route statement, which allows us to directly assign a handler to a particular route – in this case /hello/world and /hello/fsharp. This is kinda cool, but there’s a lot more going on here than meets the eye.

Functions and graph based routing

Graph based routing is so named because it stores route definitions in – you guessed it – a graph structure. Traditional route matching tends to focus on tables of strings and pattern matching based on the entire URL, but Superscribe is different.

In the example above the URL /hello/world gets broken down into its respective segments. Each segment is represented by a node in the graph, with the next possible matches as its children. Subsequent definitions are also broken down and intelligently added into the graph, so in this instance we end up with something like this:

[Diagram: the resulting graph – a ‘hello’ node with ‘world’ and ‘fsharp’ as child nodes]

Route matching is performed by walking the graph and checking for matches – it’s essentially a state machine. This is great because we only need to check for the segments that we expect; we don’t waste time churning through a large route table.

But here’s where it gets interesting. Nodes in graph based routing are comprised of three functions:

  • Activation function – returns a boolean indicating if the node is a match for the current segment
  • Action function – executed when a match has been found, so we can do things like parameter capture
  • Final function – executed when matching finishes on a particular node, i.e. the handler

All of these functions can execute absolutely any arbitrary code that we like. With this model we can do some really interesting things such as conditional route matching based on the time of day, a debug flag or even based on live information from a load balancer. Can your pattern matcher do that!?
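If it helps to picture the mechanism, here’s a rough conceptual sketch in C# (the language Superscribe itself is written in, though this is purely illustrative and not Superscribe’s actual API) of nodes carrying those three functions and a matcher that walks the graph one segment at a time:

using System;
using System.Collections.Generic;
using System.Linq;

class Node
{
    public Func<string, bool> Activation;        // does this node match the current segment?
    public Action<string> Action;                // runs when a match is found, e.g. parameter capture
    public Func<object> Final;                   // the handler, if matching is allowed to end here
    public List<Node> Children = new List<Node>();
}

static class Matcher
{
    public static object Match(Node root, string path)
    {
        var current = root;

        foreach (var segment in path.Trim('/').Split('/'))
        {
            // Walk the graph like a state machine: move to the first child whose
            // activation function accepts this segment. The activation function can
            // consult anything it likes - a debug flag, the time of day, a load
            // balancer - not just the segment string.
            var next = current.Children.FirstOrDefault(c => c.Activation(segment));
            if (next == null) return null;       // no route matched

            if (next.Action != null) next.Action(segment);
            current = next;
        }

        return current.Final != null ? current.Final() : null;
    }
}

Only the segments we actually expect are ever inspected, which is where the performance claim above comes from.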

Efficiency, composability and extensibility

Graph based routing allows us to build complex web applications that are composed of very simple units. A good approach is to use action functions to compose a pipeline of functions which get executed synchronously once route matching is complete (is this beginning to sound familiar?), but it can also be used for processing segments on the fly, for example when capturing parameters.

Here’s another example that shows this compositional nature in action. We’re going to define and use a new type of node that will match and capture certain strings. Because Superscribe relies on the C# dynamic keyword, I’ve used the ? operator provided by FSharp.Dynamic.


type NameBeginningWith(letter) as this =
    inherit GraphNode()
    do
        this.ActivationFunction <- fun data segment -> segment.StartsWith(letter)
        this.ActionFunctions.Add(
            "set_param_Name",
            fun data segment -> data.Parameters?Add("Name", segment))

type Startup() =
    member x.Configuration(app: Owin.IAppBuilder) =
        let define = OwinRouteEngineFactory.Create()
        app.UseSuperscribeRouter(define).UseSuperscribeHandler(define) |> ignore

        let hello = ConstantNode("hello")

        define.Route(
            hello / NameBeginningWith "p",
            fun o ->
                "Hello " + o?Parameters?Name + ", great first letter!" :> obj) |> ignore

        define.Route(
            hello / String "Name",
            fun o ->
                "Hello " + o?Parameters?Name :> obj) |> ignore

[<assembly: OwinStartup(typeof<Startup>)>]
do ()


In the previous example we relied on the library to build a graph for us given a string – here we’re being explicit and constructing our own using the / operator (neat eh?). Our custom node will only activate when the segment starts with the letter “p”, and if it does then it will store that parameter away in a dynamic dictionary so we can use it later.

If the engine doesn’t match on a node, it’ll continue through its siblings looking for a match there instead. In our case, anything that doesn’t start with “p” will get picked up by the second route – the String parameter node acts as a catch-all:

[Screenshots: the responses for /hello/fsharp and /hello/pete]

Pipelines and OWIN

This gets even more exciting when we bring OWIN into the mix. OWIN allows us to build web applications out of multiple pieces of middleware, distinct orthogonal units that run together in a pipeline.

Usually these are quite linear, but with graph based routing and its ability to execute arbitrary code, we can build our pipeline on the fly. In this final example, we’re using two pieces of sample middleware to control access to parts of our web application:


// Opens and the standard OWIN AppFunc alias assumed by this excerpt (they are not
// shown in the original gist snippet; the ? operator also needs FSharp.Dynamic opened,
// as in the previous example):
open System
open System.Collections.Generic
open System.Threading.Tasks
open Owin
open Microsoft.Owin
open Superscribe.Owin
open Superscribe.Owin.Engine
open Superscribe.Owin.Extensions

type AppFunc = Func<IDictionary<string, obj>, Task>

type RequireHttps(next: AppFunc) =
    member this.Invoke(environment: IDictionary<string, obj>) : Task =
        match environment.["owin.RequestScheme"].ToString() with
        | "https" -> next.Invoke(environment)
        | other ->
            environment.["owin.ResponseStatusCode"] <- 400 :> obj
            environment.["owin.ResponseReasonPhrase"] <- "Connection was not secure" :> obj
            Task.FromResult<obj>(null) :> Task

type RequireAuthentication(next: AppFunc) =
    member this.Invoke(environment: IDictionary<string, obj>) : Task =
        let requestHeaders = environment.["owin.RequestHeaders"] :?> Dictionary<string, string>
        match requestHeaders.["Authentication"] with
        | "ABC123" -> next.Invoke(environment)
        | other ->
            environment.["owin.ResponseStatusCode"] <- 403 :> obj
            environment.["owin.ResponseReasonPhrase"] <- "Authentication required" :> obj
            Task.FromResult<obj>(null) :> Task

type Startup() =
    member x.Configuration(app: Owin.IAppBuilder) =
        let define = OwinRouteEngineFactory.Create()
        app.UseSuperscribeRouter(define).UseSuperscribeHandler(define) |> ignore

        define.Route("admin/token", fun o -> "{ token: ABC123 }" :> obj) |> ignore
        define.Route("admin/users", fun o -> "List all users" :> obj) |> ignore

        let users = define.Route("users")
        define.Route(users / String "UserId", fun o -> "User details for " + o?Parameters?UserId :> obj) |> ignore

        define.Pipeline("admin").Use<RequireHttps>() |> ignore
        define.Pipeline("admin/users").Use<RequireAuthentication>() |> ignore


Superscribe has support for this kind of middleware pipelining built in via the Pipeline method. In the code above we’ve specified that anything under the admin/ route will invoke the RequireHttps middleware, and if we’re doing anything other than requesting a token then we’ll need to provide the correct auth header. Behind the syntactic sugar, Superscribe is simply doing everything using the three types of function that we looked at earlier.

This example is not going to win any awards for security practices, but it’s a pretty powerful demonstration of how these functional-inspired practices of composition and pipelining can help us build some really flexible and maintainable web applications. It turns out that there really is a lot more synergy between F# and the web than most people realise!

Summary

Some aspects still leave a little to be desired from the functional perspective – our functions aren’t exactly pure for example. But this is just the beginning of the relationship between F# and Superscribe. Most of the examples in the post have been ported straight from C# and so don’t really make any use of F# language features.

I’m really excited about what can be achieved when we start bringing things like monads and discriminated unions into the mix, it should make for some super-terse syntax. I’d love to hear some thoughts on this from the community… I’m sure we can do better than previous attempts at monadic url routing at any rate!

I hope you enjoyed today’s advent calendar… special thanks go to Scott Wlaschin for all his technical feedback. I deliberately kept the specifics light here so as not to detract from the message of the post, but you can read more about Superscribe and graph based routing on the Superscribe website.

Merry Christmas to you all!
Pete

References

http://owin.org/
http://sergeytihon.wordpress.com/2014/11/24/f-advent-calendar-in-english-2014/
http://about.me/sergey.tihon
http://superscribe.org/
http://superscribe.org/graphbasedrouting.html
https://github.com/fsprojects/FSharp.Dynamic
https://gist.github.com/unknownexception/6035260
https://github.com/koistya/fsharp-owin-sample
https://github.com/freya-fs/freya
http://blog.ploeh.dk/2013/08/23/how-to-create-a-pure-f-aspnet-web-api-project/
http://wizardsofsmart.net/samples/working-with-non-compliant-owin-middleware/
http://happstack.com/page/view-page-slug/16/comparison-of-4-approaches-to-implementing-url-routing-combinators-including-the-free-and-operational-monads
https://twitter.com/scottwlaschin

Developing against Service Bus for Windows 1.1

Wouldn’t it be great if we could work on applications that leverage Microsoft Service Bus locally without having to connect to and potentially pay for Microsoft Azure?

Not everyone knows this, but there’s a local counterpart to Microsoft Azure Service Bus in the form of Service Bus for Windows. Those that do know about it know that it doesn’t have a great development story and can be a pain in the arse to set up.

In this post we’ll take the sting out of the process and show you how you can get your local environment set up so you and the rest of your team can develop against Service Bus without using any Microsoft Azure services.

To use Service Bus for Windows, you’ll need an instance of SQL Server to which you have admin rights.

IMPORTANT: If you have any other services running that use the default AMQP ports 5671 and 5672 then the configuration process will hang and then fail without giving a meaningful error. Ensure that there are no port clashes before continuing.

Install Service Bus for Windows 1.1

The easiest way to install Service Bus for Windows is to grab the Web Platform Installer if you haven’t got it already, and search for Service Bus 1.1.


Click add, then install, and follow the instructions through to completion.

Generating a certificate to use with a custom hostname

One of the most annoying things about Service Bus for Windows is that by default it will install on an endpoint that is named according to your computer name, e.g.:

sb://vectron/ServiceBusDefaultNamespace;StsEndpoint=https://vectron:10355/ServiceBusDefaultNamespace;RuntimePort=10354;ManagementPort=10355

This is pretty useless if you want a configuration that’s going to be common for everyone in your team. It’s not easy, but it is possible to configure Service Bus to use a custom hostname.

To do this you’ll need to generate a self-signed certificate which you can do by using SelfSSL.exe which comes as part of the IIS 6.0 Resource Kit Tools. This comes with a bunch of other cruft you don’t need which you can deselect.

The default install location is C:\Program Files (x86)\IIS Resources\SelfSSL. Choose a hostname – in my case I’m just using ‘servicebus’. Locate the exe and run the following command as an administrator:

SelfSSL /N:CN=servicebus /V:1000 /T

Press ‘y’ when asked if you want to add it to site 1 and ignore the error – both of these things are of no consequence. This will now have added a certificate to the Trusted Root Authorities store of your local machine.

Adding a hosts file entry

In order to use our custom hostname, we need to add an entry into the hosts file that maps servicebus to localhost. This file can be found in c:\Windows\System32\drivers\etc. It’s read-only by default so you may have to change the security settings.

Open this file in a text editor and add a line that maps your chosen hostname to the loopback address:

127.0.0.1    servicebus

Configuring Service Bus…

Service Bus was installed in the first step but now we have to configure it. You’ll find a utility called Service Bus Configuration in your start menu which will guide you through the process. Run this and choose to create a new farm with custom settings.


There are a few things you need to do on this next screen. Firstly, make sure your SQL server details are correctly specified. You can leave all the database name and container settings as standard.


Specify the service account under which Service Bus will be run… this can be any user that has admin rights.


Next we will need to tell the setup process where to find the certificate we generated earlier. Under Configure Certificate, uncheck the auto-generate checkbox.


Click each browse button in turn and select the certificate – the name should match the hostname we chose earlier. If there is more than one, check the certificate properties and select the one that does not contain a warning about trust.

Make sure you repeat this for both the farm and the encryption certificate settings.


Finally, change the port settings that begin with ‘9’ to start with ’10’ instead… this will avoid some of the more common potential port conflicts which will cause the installation to fail.

Leave the AMQP ports as they are, unless you know for sure that these are in use by another service such as Rabbit MQ.

If there are any other services using the same AMQP ports, Service Bus configuration will hang for a long period of time and then eventually fail, but not give any indication of why!


You may also enter an alternative namespace name instead of the default provided.

That’s all the configuration we need, so hit the arrow to proceed. If everything has been entered correctly, you’ll see a summary of all the information. Click the tick to apply the settings!

The process may take a few minutes, however you shouldn’t see it go too long without logging progress to the window. If it does hang then see the above advice about ports, and check for other services that may be conflicting. If all is well you should see something like this:

[Screenshot: configuration completed successfully]

Changing the hostname

You’ll see from the processing completed message that our endpoint still contains the computer name, so we need to run a few powershell commands to change it. Open the Service Bus Powershell prompt from the start menu and type the following as distinct commands, replacing servicebus with your hostname if you chose an alternative.

Stop-SBFarm
Set-SBFarm -FarmDns 'servicebus'
Update-SBFarm
Start-SBFarm

The start command may take around 5 minutes to complete, but you should see something like this (edited for brevity):

[Screenshot: output of the Stop-SBFarm / Set-SBFarm / Update-SBFarm / Start-SBFarm commands]

Now if you open a web browser and navigate to https://servicebus:10355/ServiceBusDefaultNamespace (substitute your host and namespace where appropriate), lo and behold we have a working Service Bus deployment… complete with a valid SSL certificate!


Connecting to your new Service Bus deployment

You can now access your local Service Bus installation and default namespace using the following connection string (again, adjust where appropriate):

Endpoint=sb://servicebus/ServiceBusDefaultNamespace;StsEndpoint=https://servicebus:10355/ServiceBusDefaultNamespace;RuntimePort=10354;ManagementPort=10355

The low-level API exposed by Service Bus for Windows is not quite the same as the latest one offered by Microsoft Azure, and as a result you will need to use a different NuGet package, ServiceBus.v1_1:

Install-Package ServiceBus.v1_1

The good news is that Azure Service Bus is backwards compatible, so you can use the same 1.1 package for both development and production. The only downside is that you may not have access to all the very latest features of Azure Service Bus, only those that are common to both deployments.
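As a quick smoke test of the new deployment, something along these lines should work (a minimal sketch using the ServiceBus.v1_1 package; ‘test-queue’ is just an example name, and the connection string is the one shown above):

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class Program
{
    static void Main()
    {
        // Connection string from the section above - adjust host/namespace as appropriate
        const string connectionString =
            "Endpoint=sb://servicebus/ServiceBusDefaultNamespace;" +
            "StsEndpoint=https://servicebus:10355/ServiceBusDefaultNamespace;" +
            "RuntimePort=10354;ManagementPort=10355";

        // Create the queue if it doesn't exist yet
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
        if (!namespaceManager.QueueExists("test-queue"))
        {
            namespaceManager.CreateQueue("test-queue");
        }

        // Send and receive a message to prove the deployment is working
        var client = QueueClient.CreateFromConnectionString(connectionString, "test-queue");
        client.Send(new BrokeredMessage("Hello from Service Bus for Windows"));

        var received = client.Receive();
        Console.WriteLine(received.GetBody<string>());
        received.Complete();
    }
}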

Accessing Service Bus from an IIS app pool

When we set up the initial configuration, we specified that the users that could manage our namespace were those that were members of the Admin access control group.

If you want to be able to create queues/topics in your new Service Bus deployment from within an IIS hosted web application, you will need to create a new group for these users and then authorise the group with Service Bus. Please note that this is not possible out of the box with Windows 8 Home edition.

Open up the Computer Management window using the run dialog and typing compmgmt.msc:


Expand the Local Users and Groups section, right click on Groups and choose New Group. Enter a suitable name and select the relevant users. You may need to qualify the names of app pool users with ‘IISAPPPOOL\<user>’ in order to find them.


Once the group has been created, return to the Service Bus PowerShell prompt and type the following (substitute your namespace and group name if you chose different ones):

Set-SBNamespace -Name 'ServiceBusDefaultNamespace' -ManageUsers 'Administrators','ServiceBusUsers'


Once this has completed, be sure to restart IIS in order for the changes to take effect, otherwise you will continue to receive a 401 Unauthorized response back from Service Bus. You can do this through the management console or by using the run command and typing iisreset.

Summary

In this post we’ve seen how to:

  • Install Service Bus for Windows 1.1
  • Generate a certificate and add a hosts entry in order to use a custom hostname
  • Configure Service Bus using the wizard and then apply the custom hostname
  • Use the Service Bus 1.1 nuget package to work with both production and development.
  • Authorise IIS app pool users to use Service Bus from within a web application

Hopefully this has helped to demystify Service Bus for Windows and demonstrate that although it may lack wider documentation, it is a very robust and viable tool. You can (and should) use it to help develop and accurately test Service Bus applications that will eventually be deployed to Azure.

Please also check out the alpha release of my new project AzureNetQ, which provides an easy API for working with Microsoft Service Bus. It’s based on the most excellent RabbitMQ library EasyNetQ, but please take care as at the moment the documentation is in the process of being migrated.

All questions welcome in the comments!

Pete

References

http://msdn.microsoft.com/en-us/library/dn282144.aspx
http://www.microsoft.com/web/downloads/platform.aspx
http://support.microsoft.com/kb/840671
https://www.nuget.org/packages/ServiceBus.v1_1/
http://roysvork.github.io/AzureNetQ
http://easynetq.com/
http://social.msdn.microsoft.com/forums/windowsazure/ru-ru/688ada3c-bb95-488d-9ad0-aec297438e1c/problem-starting-message-broker-during-service-broker-configuration
http://stackoverflow.com/questions/22456947/service-bus-for-windows-server-the-api-version-is-not-supported/22622117#22622117
http://social.msdn.microsoft.com/Forums/windowsazure/en-US/c23a7c1f-742d-4d7f-ad4f-3bf149964762/service-bus-for-windows-server-the-api-version-is-not-supported?forum=servbus
http://www.dotnetconsult.co.uk/weblog2/PermaLink,guid,50861acd-6bd1-4283-9fdc-7a611a440829.aspx
https://www.sslshopper.com/article-how-to-create-a-self-signed-certificate-in-iis-7.html
http://msdn.microsoft.com/en-us/library/dn520958.aspx
http://social.msdn.microsoft.com/Forums/windowsazure/en-US/f5096a7a-9605-4231-b093-b7d278be7c20/cant-uninstall-service-bus

Consider your target audience when giving advice

I’m seeing a common pattern lately – respected mentors with weight in the community giving people ‘advice’ in the form:

X is usually bad, therefore never do X

Giving this kind of advice IS bad, therefore don’t do it (not even a hint of irony here). If you find yourself making these kinds of blanket statements, you need to ask yourself “Who is this advice aimed at?”

Juniors/Intermediates

Often we want to direct our information at people who are still learning, but are being led astray by the majority of advice they may be reading. In doing so, we hope that they can avoid making a mistake without having to first master the subject.

This is a noble motive. The flaw in this approach is that making mistakes is a really important way in which people learn.

When you have driving lessons, you’re being taught how to operate a car safely and reliably enough to pass a test. Learning to ‘drive’ is an ongoing process that takes years of practice, almost all of which will come after you pass your test and get out on your own.

What makes this tricky is that often junior developers within teams are not given this early opportunity to make the mistakes they need to… they are expected to be able to drive the car on their own, or teach themselves to do it.

Your blanket advice is bad for this audience. It will conflict with what they are learning and confuse them.

Seniors/Experts

Some people in this audience are still very keen to learn, others are very set in their ways. Depending on your role or standing, this is probably where the bulk of your followers lie. There’s a thin line between a senior who is very keen to learn and a junior as both have a capacity to misinterpret advice.

Assuming that your blanket statement is encountered by a true expert however, they may be offended and rightly so. These people assume that you know your stuff and you know your audience, after all you’ve worked hard to get where you are right?

When an expert encounters an unbalanced statement that does not take into account the true circumstances and complexity of the situation, they immediately either question it or dismiss it. Most likely it’s the latter; you’ve not helped your cause and you’ve made yourself look like a bit of a tit.

Your blanket advice is bad for this audience. It will insult them and undermine you.

Your peers

When directing content at your peers, it is much more likely to spark debate and inspire theoretical discussion which will help drive both your ideas and the community forward. You can stand to omit well understood details, be smug, sarcastic or controversial without fear of your advice being misconstrued.

In this context however, what you’re really conveying is a concept, idea or opinion which is at risk of being consumed incorrectly as advice by one of your other audiences.

This is what is known as Leaky Advice. It’s leaky in terms of its target audience, and leaky because you can’t cover a complex underlying problem with a simple statement. Just like leaky abstractions though, it’s only a problem if your audience or use case is not the correct one.

Your blanket advice is great for this audience, but it’s no longer advice and you shouldn’t frame it as such.

Solving the root problem

If you seek to provide advice, you have a duty to educate your audience correctly. The more followers you have and the more respected in the community you are, the more important this becomes.

Bad advice is propagated by people with a poor understanding – they’ve read an over-generalised post or tweet somewhere and treated it as gospel because it has come from a reliable source, without thinking about the consequences.

By tweeting blanket statements to the wrong audience you are not helping to end the bad practice that you were trying to educate people against; you’ve simply made it worse by providing more bad advice and adding to the confusion.

We owe it to our followers and readers to provide balanced arguments along with evidence. We are scientists after all.

 

Pete


Are we gOWINg in the right direction?

This week on twitter we find ourselves back on the subject of OWIN, and once again the battle lines are drawn and it is the source of much consternation.

The current debate goes thusly… should we attempt to build a centralised Dependency Injection wrapper, available to any middleware and allowing them – essentially – to share state?

If you’d like some context to this post, you can also read:

Is sharing state in this way considered an anti-pattern, or even a bastardisation of OWIN itself? To answer this, we need to ask ourselves some questions.

What *is* OWIN?

Paraphrasing, from the OWIN specification itself:

… OWIN, a standard interface between .NET web servers and web applications. The goal of OWIN is to decouple server and application…

The specification also defines some terms:

  • Application – Possibly built on top of a Web Framework, which is run using OWIN compatible Servers.
  • Web Framework – A self-contained component on top of OWIN exposing its own object model or API that applications may use to facilitate request processing. Web Frameworks may require an adapter layer that converts from OWIN semantics.
  • Middleware – Pass through components that form a pipeline between a server and application to inspect, route, or modify request and response messages for a specific purpose.

This helps clear some things up…. particularly about the boundaries between our concerns and the terminology that identifies them. An application is built on top of a web framework, and the framework itself should be self-contained.

What should OWIN be used for?

OWIN purists say that middleware is an extension of the HTTP pipeline as a whole… the journey from its source to your server, passing through many intermediaries capable of caching or otherwise working with the request itself. OWIN middleware are simply further such intermediaries that happen to be fully within your control.

But there is another view – that OWIN provides an opportunity to augment or compose an application from several reusable, framework-agnostic middleware components. This is clearly at odds with the specification, but is it really without merit?

Composing applications in this way takes the strain off framework developers, and allows us all to work together towards a common goal. It allows us to build composite applications involving multiple web frameworks, leveraging their relative strengths and weaknesses within a single domain.

A lot of the purists are already unhappy with the direction that Microsoft has taken with their OWIN implementation – Katana. By and large I think they were just being practical and didn’t have the time to wait around for the decision of a committee, but this has only served to further muddy the waters when defining OWIN’s identity, purpose and intended usage.

If this isn’t a use for OWIN, then what is it?

When I began to learn about OWIN, I instinctively ended up in the composable applications camp, as did several others I know. I would love to see our disparate framework communities unite, and the availability of framework-agnostic modules could only be a good thing in this regard. But the specification is clear… this is not what OWIN is for.

On the subject of the specification and its definitions from earlier though, I think there is one quite glaring error. This error wasn’t present when the OWIN specification was drawn up, but rather came to be due to the effect that OWIN and middleware such as SignalR have had on the way we think about building applications:

Instead of building on top of a framework, the inverse is now true: we build our application out of frameworks, plural.

So what now?

What we are really after is a bootstrapper that allows us to run a pipeline of framework-agnostic components from *within* the context of our applications. If we execute this pipeline just beyond the boundary, it will have exactly the same effect as middleware in the OWIN pipeline but with the correct separation of concerns, and with access to shared state.

This bootstrapper could (probably should?) be a terminating middleware, that itself can hand off control to whatever frameworks your application is built from. Alternatively it could be a compatibility layer built into frameworks themselves… although I think that getting people to agree on a common interface is probably a ‘pipe’ dream.
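To illustrate the terminating-middleware idea, a very rough sketch might look like this (purely hypothetical – none of these types exist in any OWIN library): a final component that hands the request to the first framework-agnostic handler that claims it.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// The standard OWIN application delegate
using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

// Hypothetical bootstrapper: runs inside the application boundary and dispatches
// the request to whichever registered component claims it.
public class CompositeBootstrapper
{
    private readonly List<Func<IDictionary<string, object>, bool>> predicates =
        new List<Func<IDictionary<string, object>, bool>>();
    private readonly List<AppFunc> handlers = new List<AppFunc>();

    public void Register(Func<IDictionary<string, object>, bool> canHandle, AppFunc handler)
    {
        predicates.Add(canHandle);
        handlers.Add(handler);
    }

    // Exposed as a terminating AppFunc, so it can sit at the very end of an OWIN pipeline
    public Task Invoke(IDictionary<string, object> environment)
    {
        for (var i = 0; i < handlers.Count; i++)
        {
            if (predicates[i](environment))
            {
                return handlers[i](environment);
            }
        }

        environment["owin.ResponseStatusCode"] = 404;
        return Task.FromResult<object>(null);
    }
}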

Our community clearly desires such a mechanism for composing applications, and allowing interoperability between frameworks. But that’s not what OWIN is for, and if we are serious about our goal, we’ll need to work together to meet the challenge.

Please leave your comments below.

Pete

Tracking changes to complex viewmodels with Knockout.JS Part 2 – Primitive Arrays

In the first part of this series, I talked about the challenges of tracking changes to complex viewmodels in knockout, using isDirty() (see here and here) and getChanges() methods.

In this second part, I’ll go through how we extended this initial approach so we could track changes to array elements as well as regular observables. If you haven’t already, I suggest you have a read of part one as many of the examples build on code from the first post.

Starting Simple

For the purposes of this post we are only considering ‘Primitive’ arrays… these are arrays of values such as strings and numbers, as opposed to complex objects with properties of their own. Previously we created an extender that allows us to apply change tracking to a given observable, and we’re using the same approach here.

We won’t be re-using the existing extender, but we will use some of the same code for iterating over our model and applying it to our observables. In that vein, here’s a skeleton for our change tracked array extender… it has a similar structure to our previous one:


ko.extenders.trackArrayChange = function (target, track) {
    if (track) {
        target.isDirty = ko.observable(false);
        target.added = ko.observableArray([]);
        target.removed = ko.observableArray([]);

        var addItem = function (item) {
            //…
        };

        var removeItem = function (item) {
            //…
        };

        target.getChanges = function () {
            var result = {
                added: target.added,
                removed: target.removed
            };
            return result;
        };

        //….
        //….
    }
}

You should notice a few differences however:

  • Two observable arrays are being exposed in addition to the isDirty() flag – added and removed
  • The getChanges() method returns a complex object also containing adds and removes

As this functionality was developed with HTTP PATCH in mind, we’re assuming that we will need to track both the added items and the removed items, so that we can only send the changes back to the server. If you aren’t using PATCH, it can be sufficient just to know that a change has occurred and then save your data by replacing the entire array.

Last points to make – we’re treating any ‘changes’ to existing elements as an add and then a delete… these are just primitive values after all. Also the ordering of the elements is not going to be tracked (although this is possible and will be covered in the next post).

Array subscriptions

Prior to Knockout 3.0, we had to provide alternative methods to the usual push() and pop() so that we could keep track of array elements… subscribing to the observableArray itself would only notify you if the entire array was replaced. As of Knockout 3.0 though, we now have a way to subscribe to array element changes themselves!

We’re using the latest version for this example, but check the links at the bottom of the third post in the series if you are interested in the old version.

Let’s begin to flesh out the skeleton a little more:


//….
//….

target.getChanges = function () {
    var result = {
        added: target.added(),
        removed: target.removed()
    };
    return result;
};

target.subscribe(function (changes) {
    ko.utils.arrayForEach(changes, function (change) {
        switch (change.status) {
            case "added":
                addItem(change.value);
                break;
            case "deleted":
                removeItem(change.value);
                break;
        }
    });
}, null, "arrayChange");

Now we’ve added an arrayChange subscription, we’ll be notified whenever anyone pops, pushes or even splices our array. In the event of the latter, we’ll receive multiple changes so we have to cater for that eventuality.

We’ve deferred the actual tracking of the changes to private methods, addItem() and removeItem(). The reason for this becomes clear when you consider what you’d expect to happen after performing the following operations:


// Initialise array with some data and track changes
var trackedArray = ko.observableArray([1, 2, 3]).extend({ trackArrayChange: true });

trackedArray.getChanges();
// -> { } No changes yet

trackedArray.push(4);
trackedArray.pop();

trackedArray.getChanges();
// -> { } Changes have negated each other

In order to achieve this behavior, we first need to check that the item in question has not already been added to one of the lists like so:


//….
//….

var findItem = function (array, item) {
    return ko.utils.arrayFirst(array, function (o) {
        return o === item;
    });
};

var addItem = function (item) {
    var previouslyRemoved = findItem(target.removed(), item);

    if (previouslyRemoved) {
        target.removed.remove(previouslyRemoved);
    } else {
        target.added.push(item);
        target.isDirty(true);
    }
};

var removeItem = function (item) {
    var previouslyAdded = findItem(target.added(), item);

    if (previouslyAdded) {
        target.added.remove(previouslyAdded);
    } else {
        target.removed.push(item);
        target.isDirty(true);
    }
};

//….
//….

Applying this to the view model

A change tracked primitive array is unlikely to be very useful on its own, so we need to make sure that we can track changes to an observable array regardless of where it appears in our view model. Let’s revisit the code from our previous sample that traversed the view model and extended all the observables it encountered:


//….
//….

var applyChangeTrackingToObservable = function (observable) {
    // Only apply to basic writeable observables
    if (observable && !observable.nodeType && !observable.refresh && ko.isObservable(observable)) {
        if (!observable.isDirty) observable.extend({ trackChange: true });
    }
};

var applyChangeTracking = function (obj) {
    var properties = getObjProperties(obj);

    ko.utils.arrayForEach(properties, function (property) {
        applyChangeTrackingToObservable(property.value);
    });
};

//….
//….

In order to properly apply change tracking to our model, we need to detect whether a given observable is in fact an observableArray, and if so then apply the new extender instead of the old one. This is not actually as easy as it sounds… based on the status of this pull request, Knockout seems to provide no mechanism for doing this (please correct me if you know otherwise!).

Luckily, this thread had the answer… we can simply extend the observableArray “prototype” by adding the following line somewhere in global scope:

ko.observableArray.fn.isObservableArray = true;

Assuming that’s in place, our change becomes very simple:


//….
//….

var applyChangeTrackingToObservable = function (observable) {
    // Only apply to basic writeable observables
    if (observable && !observable.nodeType && !observable.refresh && ko.isObservable(observable)) {
        if (observable.isObservableArray) {
            observable.extend({ trackArrayChange: true });
        }
        else {
            if (!observable.isDirty) observable.extend({ trackChange: true });
        }
    }
};

//….
//….

We don’t need to change any of the rest of the wireup code from the first sample, as we are already working through our view model recursively and letting applyChangeTrackingToObservable do its thing.

That’s all the code we needed, now we can take it for a spin!


var viewModel = {
    Name: ko.observable("Pete"),
    Age: ko.observable(29),
    Skills: ko.observableArray([
        "TDD", "Knockout", "WebForms"
    ]),
    Occupation: ko.observable("Developer")
};

applyChangeTracking(viewModel);

viewModel.Occupation("Blogger");
viewModel.Skills.push("Change tracking");
viewModel.Skills.remove("WebForms");

getChangesFromModel(viewModel);
/* -> {
    "Skills": {
        added: ["Change tracking"],
        removed: ["WebForms"]
    },
    Occupation: "Blogger"
} */


Summary

We’ve seen how we can make use of the new arraySubscriptions feature in Knockout 3.0 to get notified about changes to array elements. We made sure that we didn’t get strange results when items were added and then removed again or vice-versa, and then integrated the whole thing into a change tracked viewmodel.

In the third and final post in this series, we’ll go the whole hog and enable change tracking for complex and nested objects within arrays.

You can view the full code for this post here: https://gist.github.com/Roysvork/8743663, or play around with it in jsFiddle!

Pete


Increasing loop performance by iterating two intersecting lists simultaneously

Disclaimer

This brief post covers a micro-optimisation that we employed recently in an ASP.NET Web API app. If you’re looking to solve major performance problems or get a quick win on small tasks, this isn’t going to be very useful to you. However, if you’ve nailed all the big stuff and are processing a large batch (think millions) of records together then these small inefficiencies really begin to add up. If this applies to you, then you may find the following solution useful.

It’s possible that many people have thought of this problem and provided a solution before… in fact I’m very sure they have, as I’ve googled it and so many Stack Overflow posts came up that I’m not going to bother linking to any of them. However, no-one seemed to have made anything that was simple, re-usable and easy to integrate… so this is my take on it.

Finally, I’ve attempted to do some napkin maths. It’s probably wrong in some way so please correct me.

Compound Iteration

How often have you written code like this? 


public List<string> Validate(IDictionary<string, object> data, List<Field> mandatoryFields)
{
    var errors = new List<string>();

    foreach (var field in mandatoryFields)
    {
        if (!data.Keys.Contains(field.Identifier))
        {
            errors.Add(String.Format("The {0} field is required", field.Identifier));
        }
    }

    return errors;
}

This simplified sample shows how you might validate an HTTP PATCH request against some metadata. It seems innocuous right?

But say you have 1000 fields to validate, and maybe half of them are present in the body of your request. In the worst case we’ll have 500 iterations of the outer fields loop where we’ll then have to iterate through 500 dictionary keys just to find out that the field doesn’t exist in the data set.

Even in an optimal case for the remaining fields that do exist, you’ll have to iterate through 250 keys on average before we find a match, so for an ‘average’ case we could be looking at:

(500 * 500) + (500 * 250) = 375,000

As an ‘average’ case, it could potentially be a lot less than this, potentially a lot more. Either way,  imagine trying to bulk validate 100,000 records and… yikes!

Sort your data, and enter the Efficient Iterator

Provided your numbers are big enough, it’s much more efficient to sort your data first and then step through each collection simultaneously. If your field info is coming, say, from a SQL table with a clustered index where an order by is essentially free, then it’s even more likely that this will result in a significant speedup.

Basically what such an algorithm does is take the first item from each of the two lists and compare them. If Item A comes before Item B in the sort order, you advance forward one item in List A – or vice versa – until the two are found to match (or you run out of items). You can take action at each step, whether a value is a match or an orphan on either side.

Now the worst case iteration is merely the sum of the elements in the two lists. So in our average case, just 1500. That’s a 250x reduction… over two orders of magnitude!

Show me the code

Without further ado, here’s a Gist that you can use to do this right now…


public class EfficientIterator<TOne, TTwo>
{
    private Action<TOne, TTwo> always = (one, two) => { };
    private Action<TOne, TTwo> match = (one, two) => { };
    private Action<TOne> oneOnly = (one) => { };
    private Action<TTwo> twoOnly = (two) => { };

    private readonly Func<TOne, TTwo, int> compare;

    public Action<TOne, TTwo> Always { set { this.always = value; } }
    public Action<TOne, TTwo> Match { set { this.match = value; } }
    public Action<TOne> OneOnly { set { this.oneOnly = value; } }
    public Action<TTwo> TwoOnly { set { this.twoOnly = value; } }

    public EfficientIterator(Func<TOne, TTwo, int> compare)
    {
        this.compare = compare;
    }

    public void RunToEnd(IEnumerable<TOne> listOne, IEnumerable<TTwo> listTwo)
    {
        using (var listOneEnumerator = listOne.GetEnumerator())
        using (var listTwoEnumerator = listTwo.GetEnumerator())
        {
            var listOneIterating = listOneEnumerator.MoveNext();
            var listTwoIterating = listTwoEnumerator.MoveNext();

            while (listOneIterating || listTwoIterating)
            {
                int result = 0;
                if (listOneIterating && listTwoIterating)
                {
                    result = this.compare(listOneEnumerator.Current, listTwoEnumerator.Current);

                    // a and b are equal
                    if (result == 0)
                    {
                        this.always(listOneEnumerator.Current, listTwoEnumerator.Current);
                        this.match(listOneEnumerator.Current, listTwoEnumerator.Current);

                        listOneIterating = listOneEnumerator.MoveNext();
                        listTwoIterating = listTwoEnumerator.MoveNext();
                        continue;
                    }
                }

                // list one has run out, or its current item is greater than list two's
                if (!listOneIterating || result > 0)
                {
                    this.always(default(TOne), listTwoEnumerator.Current);
                    this.twoOnly(listTwoEnumerator.Current);

                    listTwoIterating = listTwoEnumerator.MoveNext();
                    continue;
                }

                // list two has run out, or its current item is greater than list one's
                if (!listTwoIterating || result < 0)
                {
                    this.always(listOneEnumerator.Current, default(TTwo));
                    this.oneOnly(listOneEnumerator.Current);

                    listOneIterating = listOneEnumerator.MoveNext();
                    continue;
                }
            }
        }
    }
}

Take a look at these MSpec tests for information on how to use it. You’ll also need to use nullable types if you want to work with non-reference types but that should be straightforward. Thanks to Tommy Carlier for his amendments to the sample to allow any type of IEnumerable and to support value types!
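For the sake of illustration, here’s roughly how the mandatory field validation from earlier might look on top of EfficientIterator (a sketch only: it assumes both inputs are sorted with the same ordinal ordering used by the comparer, that Field has a string Identifier property as in the first sample, and that System.Linq is available for the OrderBy calls):

public List<string> Validate(IDictionary<string, object> data, List<Field> mandatoryFields)
{
    var errors = new List<string>();

    // Compare a field against a data key using ordinal string comparison
    var iterator = new EfficientIterator<Field, string>(
        (field, key) => string.CompareOrdinal(field.Identifier, key));

    // A field with no matching key is a validation error
    iterator.OneOnly = field => errors.Add(string.Format("The {0} field is required", field.Identifier));

    // Both sequences must be sorted by the same ordering used in the comparer
    iterator.RunToEnd(
        mandatoryFields.OrderBy(f => f.Identifier, StringComparer.Ordinal),
        data.Keys.OrderBy(k => k, StringComparer.Ordinal));

    return errors;
}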

Questions are welcome in the comments… but please refrain from unhelpfully critiquing the ‘design’ of the simplified problem sample ; ) Enjoy iterating efficiently!

Don’t forget that you’ll have to sort both lists before passing them to the efficient iterator!

Pete

Tracking changes to complex viewmodels with Knockout.JS

As part of a project I’ve been working on for a client, we’ve decided to implement HTTP PATCH in our API for making changes. The main client consuming the API is a web application driven by Knockout.JS, so this meant we had to find a way to figure out what had changed on our view model, and then send just those values over the wire.

There is nothing new or exciting about this requirement in itself. The question has been posed before and it was the subject of a blog post way back in 2011 by Ryan Niemeyer. What was quite exciting however was that our solution ended up doing much more than just detect changes to viewmodels. We needed to keep tabs on individual property changes, changes to arrays (adds\deletes\modifications), changes to child objects and even changes to child objects nested within arrays. The result was a complete change tracking implementation for knockout that can process not just one object but a complete object graph.

In this two part post I’ll attempt to share the code, the research and the story of how we arrived at the final implementation.

Identifying that a change has occurred

The first step was to get basic change tracking working given a view model with observable properties containing values – no complex objects.

Initial googling turned up the following approach as a starting point:

http://www.knockmeout.net/2011/05/creating-smart-dirty-flag-in-knockoutjs.html
http://www.johnpapa.net/spapost10/
http://www.dotnetcurry.com/showarticle.aspx?ID=876

These methods all involved some variation on adding an isDirty computed observable to your view model. Ryan’s example stores the initial state of the object when it is defined which can then be used as a point of comparison to figure out if a change has occurred.

Suprotim’s approach is based on Ryan’s method but instead of storing a json snapshot of the initial object (which could potentially be very large for complex view models), it merely subscribes to all the observable properties of the view model and sets the isDirty flag accordingly.

Both of these are very lightweight and efficient ways of detecting that a change has occurred, but as detailed in this thread they can’t pinpoint exactly which observable caused the change. Something more was needed.

Tracking changes to simple values

After a bit more digging, a clever solution to the problem of tracking changes to individual properties emerged as described by Stack Overflow one hit wonder, Brett Green in the answer to this question and also in slightly more detail on his blog.

This made use of knockout extenders to add properties to the observables themselves; an overall isDirty() method for the view model as a whole could then be provided by a computed observable. This post almost entirely formed the basis for the first version. After a bit of restructuring, pretty soon we had an implementation that allows us to track changes to a flat view model:


var getObjProperties = function (obj) {
    var objProperties = [];
    var val = ko.utils.unwrapObservable(obj);

    if (val !== null && typeof val === 'object') {
        for (var i in val) {
            if (val.hasOwnProperty(i)) objProperties.push({ "name": i, "value": val[i] });
        }
    }

    return objProperties;
};

ko.extenders.trackChange = function (target, track) {
    if (track) {
        target.isDirty = ko.observable(false);
        target.originalValue = target();

        target.subscribe(function (newValue) {
            // use != not !== so numbers will equate naturally
            target.isDirty(newValue != target.originalValue);
            target.isDirty.valueHasMutated();
        });
    }

    return target;
};

var applyChangeTrackingToObservable = function (observable) {
    // Only apply to basic writeable observables
    if (observable && !observable.nodeType && !observable.refresh && ko.isObservable(observable)) {
        if (!observable.isDirty) observable.extend({ trackChange: true });
    }
};

var applyChangeTracking = function (obj) {
    var properties = getObjProperties(obj);

    ko.utils.arrayForEach(properties, function (property) {
        applyChangeTrackingToObservable(property.value);
    });
};

var getChangesFromModel = function (obj) {
    var changes = null;
    var properties = getObjProperties(obj);

    ko.utils.arrayForEach(properties, function (property) {
        if (property.value != null && typeof property.value.isDirty != "undefined" && property.value.isDirty()) {
            changes = changes || {};
            changes[property.name] = property.value();
        }
    });

    return changes;
};

An example of utilising this change tracking is as follows:


var viewModel = {
    Name: ko.observable("Pete"),
    Age: ko.observable(29),
    Occupation: ko.observable("Developer")
};

applyChangeTracking(viewModel);
viewModel.Occupation("Unemployed");

getChangesFromModel(viewModel);
// -> { "Occupation": "Unemployed" }

Detecting changes to complex objects

The next task was to ensure we could work with properties containing complex objects and nested observables. The issue here is that the isDirty property of an observable is only set when its contents are replaced. Modifying a child property of an object within an observable will not trigger the change tracking.

This thread on google groups seemed to be going in the right direction and even had links to two libraries already built:

  • Knockout-Rest seemed promising, but although this was able to detect changes in complex properties and even roll them back, it still could not pinpoint the individual properties that triggered the change.
  • EntitySpaces.js seemed to contain all the required elements, but it relied on generated classes and the change tracking features were too tightly coupled to its main use as a data access framework. At the time of writing it had not been updated for two years.

In the end we came up with a solution ourselves. In order to detect that a change had occurred further down the graph, we modified the existing isDirty extension member so that in the event that the value of our observable property was a complex object, it should also take into account the isDirty value of any properties of that child object:


var traverseObservables = function (obj, action) {
    ko.utils.arrayForEach(getObjProperties(obj), function (observable) {
        if (observable && observable.value && !observable.value.nodeType && ko.isObservable(observable.value)) {
            action(observable);
        }
    });
};

ko.extenders.trackChange = function (target, track) {
    if (track) {
        target.hasValueChanged = ko.observable(false);
        target.hasDirtyProperties = ko.observable(false);

        target.isDirty = ko.computed(function () {
            return target.hasValueChanged() || target.hasDirtyProperties();
        });

        var unwrapped = target();
        if ((typeof unwrapped == "object") && (unwrapped !== null)) {
            traverseObservables(unwrapped, function (obj) {
                applyChangeTrackingToObservable(obj.value);
                obj.value.isDirty.subscribe(function (isdirty) {
                    if (isdirty) target.hasDirtyProperties(true);
                });
            });
        }

        target.originalValue = target();
        target.subscribe(function (newValue) {
            // use != not !== so numbers will equate naturally
            target.hasValueChanged(newValue != target.originalValue);
            target.hasValueChanged.valueHasMutated();
        });
    }

    return target;
};

Now when extending an observable to apply change tracking, if we find that the initial value is a complex object we also iterate over any properties of our child object and recursively apply change tracking to those observables as well. We also set up subscriptions to the resulting isDirty flags of the child properties to ensure we set the hasDirtyProperties flag on the target.

Tracking individual changes within complex objects

After the previous modifications, our change tracking now behaves like this:


var viewModel = {
    Name: ko.observable("Pete"),
    Age: ko.observable(29),
    Skills: ko.observable({
        Tdd: ko.observable(true),
        Knockout: ko.observable(true),
        ChangeTracking: ko.observable(false)
    }),
    Occupation: ko.observable("Developer")
};

applyChangeTracking(viewModel);
viewModel.Skills().ChangeTracking(true);

getChangesFromModel(viewModel);
/* -> {
    "Skills": {
        Tdd: function observable() { …. },
        Knockout: function observable() { …. },
        ChangeTracking: function observable() { …. }
    }
} */

Obviously there’s something missing here… we know that the Skills object has been modified and we also technically know which property of the object was modified but that information isn’t being respected by getChangesFromModel.

Previously it was sufficient to pull out changes by simply returning the value of each observable. That’s no longer the case so we have to add a getChanges method to our observables at the same level as isDirty, and then use this instead of the raw value when building our change log:


ko.extenders.trackChange = function (target, track) {
    if (track) {
        // …
        // …

        if (!target.getChanges) {
            target.getChanges = function (newObject) {
                var obj = target();

                if ((typeof obj == "object") && (obj !== null)) {
                    if (target.hasValueChanged()) {
                        return ko.mapping.toJS(obj);
                    }

                    return getChangesFromModel(obj);
                }

                return target();
            };
        }
    }

    return target;
};

var getChangesFromModel = function (obj) {
    var changes = null;
    var properties = getObjProperties(obj);

    ko.utils.arrayForEach(properties, function (property) {
        if (property.value != null && typeof property.value.isDirty != "undefined" && property.value.isDirty()) {
            changes = changes || {};
            changes[property.name] = property.value.getChanges();
        }
    });

    return changes;
};

Now our getChangesFromModel will operate recursively and produce the results we’d expect. I’d like to draw your attention to this section of the above code in particular:


if ((typeof obj == "object") && (obj !== null)) {
    if (target.hasValueChanged()) {
        return ko.mapping.toJS(obj);
    }

    return getChangesFromModel(obj);
}


There’s a reason we’ve been using separate observables to track hasValueChanged and hasDirtyProperties; in the event that we have replaced the contents of the observable wholesale, we must pull out all the values.

Here’s the change tracking complete with complex objects in action:


var viewModel = {
    Name: ko.observable("Pete"),
    Age: ko.observable(29),
    Skills: ko.observable({
        Tdd: ko.observable(true),
        Knockout: ko.observable(true),
        ChangeTracking: ko.observable(false),
        Languages: ko.observable({
            Csharp: ko.observable(false),
            Javascript: ko.observable(false)
        })
    }),
    Occupation: ko.observable("Developer")
};

applyChangeTracking(viewModel);

viewModel.Skills().ChangeTracking(true);
viewModel.Skills().Languages({
    Csharp: ko.observable(true),
    Javascript: ko.observable(true)
});

getChangesFromModel(viewModel);
/* -> {
    "Skills": {
        ChangeTracking: true,
        Languages: {
            Csharp: true,
            Javascript: true
        }
    }
} */

Summary

In this post we’ve seen how we can use a knockout extender and an isDirty observable to detect changes to individual properties within a view model. We’ve also seen some of the potential pitfalls you may encounter when dealing with nested complex objects and how we can overcome these to provide a robust change tracking system.

In the second part of this post, we’ll look at the real killer feature… tracking changes to complex objects within arrays.

You can view the full code for the finished example here: https://gist.github.com/Roysvork/8744757 or play around with the jsFiddle!

Pete

Edit: As part of the research for this post, I did come across https://github.com/ZiadJ/knockoutjs-reactor which takes a very similar approach and even handles arrays. It’s a shame I had not seen this when writing the code as it would have been quite useful.

TDD, continuous deployment and the golden number

You’re a strong supporter of the benefits of continuous integration… whenever anyone commits to your source repo, all your tests are run. You have close to 100% coverage (or as close as you desire to have) so if anything breaks you’re going to know about it. Once stuff is pushed and all the tests pass then you’re good to ship, right?

WRONG.

Do you know how many tests you have in your solution? Does everyone in your team know? If they don’t know, do they have ready access to this figure both before and after the push? This figure is important; it’s your TDD Golden Number.

All is not what it seems

Someone you work with is trying out a new Github client. Sure it’s not new in the grand scheme of things, but it’s new to YOUR TEAM. It should work fine, but just like a screw from a disassembled piece of furniture, just because it fits the hole it was removed from doesn’t mean it will fit another.

Something goes wrong with a rebase or a merge and you don’t notice. This shit should be easy and happen as a matter of course, but this time it doesn’t and changesets get lost. Not only that, but those changesets span a complete Red-Green-Refactor cycle, so you’ve lost both code and tests.

The build runs… green light. But…

UNLESS YOU KNOW YOUR GOLDEN NUMBER and have a way of verifying it against the number of tests that ran, you have no idea if this green light is a true indicator of success. All you know is that all the tests in source control pass.

Risk management

Granted, the chances of a scenario where commits get lost during a merge are slim, but if any loss is possible then the chances are that you’ll lose tests as well as code because they’re probably in the same commit. This leads us to:

POINT THE FIRST: Make sure you commit your tests separately from the code that makes them pass.

Some may argue that you can take this one step further and commit each step of Red-Green-Refactor separately, and this works too so long as you don’t push until you have a green light. This is a good starting point for minimizing the chance of loss.

POINT THE SECOND: Test your golden number

You’ll need to be careful how you deal with this one from a process and team management point of view, but why not VERIFY YOUR GOLDEN NUMBER. Write a test that checks that the number of tests matches what it was at the time of the commit.
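As a sketch of what such a test could look like (an illustrative xUnit-style test using reflection – the golden number constant is a placeholder you would maintain alongside your commits, and the counting logic only looks for [Fact] methods):

using System;
using System.Linq;
using System.Reflection;
using Xunit;

public class GoldenNumberTests
{
    // Update this constant in the same commit that adds or removes tests
    private const int GoldenNumber = 842;

    [Fact]
    public void Total_number_of_tests_matches_the_golden_number()
    {
        var testAssembly = typeof(GoldenNumberTests).Assembly;

        // Count every method marked as a test in this assembly (including this one);
        // extend the filter for [Theory] or other attributes as needed
        var actual = testAssembly.GetTypes()
            .SelectMany(t => t.GetMethods())
            .Count(m => m.GetCustomAttributes(typeof(FactAttribute), false).Any());

        Assert.Equal(GoldenNumber, actual);
    }
}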

Wait a minute…

There’s a good reason why you might not think to do this; what if the number of tests lost is equal to the number of tests added anew by the commit?

POINT THE MOST IMPORTANT: COMMIT YOUR TESTS TO A SEPARATE REPO

The chances of our worst case scenario playing out now with so many distinct steps is orders of magnitude lower than in our previous case. For disaster to go un-noticed, you have to lose the tests AND the accompanying golden number test AND the code that was in a separate commit to a separate repository.

All in all, a good way to think about the golden number is like this:

Seriously though

Even with the best intentions and code coverage, there’s always a chance that something may go wrong and you won’t know about it. When employed together, these three points will help you efficiently mitigate this risk.

Pete

For f**ks sake, pick a god damn signature already!!!

I was fortunate to speak to a lot of people about OWIN, both in the run up to and during the recent NDC London conference. And I tell you what, I’m at a complete loss as to what the hell has happened. OWIN is a great standard, with some great support from many sources and with some great minds putting in their time and effort. Despite this though, things are in dire straits.

One particular issue has cropped up which has got everyone faffing about. There has been a lot of ‘discussion’ on this recently both on Github and on Twitter. And quite frankly I’m rather frustrated by it all.

For those that don’t know, can’t be bothered to read the thread or simply can’t fathom it out from all the tangents and confusion, there’s currently no ‘standard’ way of writing middleware that will ensure that any given provider will be able to wire it into the pipeline.

Microsoft came up with a wire up scenario in the form of an IAppBuilder implementation when they implemented Katana, which utilises a class with a constructor and an invoke function, but this is not part of the OWIN specification. There’s another wire up solution in the form of Mark Rendle’s “fix” that just needs a lambda function, but this isn’t part of OWIN either. There just isn’t a standard at all.

So basically at the moment, it’s nigh on impossible to implement middleware that may choose routes through the pipeline, or otherwise wrap parts of the pipeline and work with it in a generic manner. And for all the discussion going on there’s no solution or agreement in sight. You just have to hope that the wire up scenario that’s available supports your chosen signature.

How the hell has this happened??

 

Forgive me but I thought this was a core tenet of OWIN – interoperability – between middleware and OWIN compatible hosting environments and in general? Yet the end result of all this is that we’ve got a major interoperability problem… the exact opposite of the OWIN philosophy.

It’s not hard, it’s not rocket science and it doesn’t need protracted discussion. JUST PICK A F***ING SIGNATURE ALREADY.

Func<AppFunc, AppFunc> – There, I did it.
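For what it’s worth, here’s what a trivial piece of middleware looks like with that signature (a sketch only – the timing/logging is purely illustrative):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public static class RequestTimingMiddleware
{
    // Func<AppFunc, AppFunc>: take the next component in the pipeline, return a new one
    public static AppFunc Create(AppFunc next)
    {
        return async environment =>
        {
            var stopwatch = Stopwatch.StartNew();

            await next(environment);

            stopwatch.Stop();
            Console.WriteLine(
                "{0} {1} took {2}ms",
                environment["owin.RequestMethod"],
                environment["owin.RequestPath"],
                stopwatch.ElapsedMilliseconds);
        };
    }
}

Wiring it up is then just function application: Func<AppFunc, AppFunc> middleware = RequestTimingMiddleware.Create, applied to whatever the rest of the pipeline is.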