
Developing against Service Bus for Windows 1.1

Wouldn’t it be great if we could work on applications that leverage Microsoft Service Bus locally without having to connect to and potentially pay for Microsoft Azure?

Not everyone knows this, but there’s a local counterpart to Microsoft Azure Service Bus in the form of Service Bus for Windows. Those that do know about it know that it doesn’t have a great development story and can be a pain in the arse to set up.

In this post we’ll take the sting out of the process and show you how you can get your local environment set up so you and the rest of your team can develop against Service Bus without using any Microsoft Azure services.

You’ll need an instance of SQL Server to which you have admin rights in order to use Service Bus for Windows.

IMPORTANT: If you have any other services running that use the default AMQP ports 5671 and 5672, then the configuration process will hang and then fail without giving a meaningful error. Ensure that there are no port clashes before continuing.

Install Service Bus for Windows 1.1

The easiest way to install Service Bus for Windows is to grab the Web Platform Installer if you haven’t got it already, then search for Service Bus 1.1.

Step 1

Click add, then install, and follow the instructions through to completion.

Generating a certificate to use with a custom hostname

One of the most annoying things about Service Bus for Windows is that by default it will install on an endpoint that is named according to your computer name, e.g.:

sb://vectron/ServiceBusDefaultNamespace;StsEndpoint=https://vectron:10355/ServiceBusDefaultNamespace;RuntimePort=10354;ManagementPort=10355

This is pretty useless if you want a configuration that’s going to be common for everyone in your team. It’s not easy, but it is possible to configure Service Bus to use a custom hostname.

To do this you’ll need to generate a self-signed certificate which you can do by using SelfSSL.exe which comes as part of the IIS 6.0 Resource Kit Tools. This comes with a bunch of other cruft you don’t need which you can deselect.

The default install location is C:\Program Files (x86)\IIS Resources\SelfSSL. Choose a hostname – in my case I’m just using ‘servicebus’. Locate the exe and run the following command as an administrator:

SelfSSL /N:CN=servicebus /V:1000 /T

selfssl
Press ‘y’ when asked if you want to add it to site 1 and ignore the error – both of these things are of no consequence. This adds a certificate to the Trusted Root Certification Authorities store on your local machine.

Adding a hosts file entry

In order to use our custom hostname, we need to add an entry to the hosts file that maps servicebus to localhost. This file can be found in C:\Windows\System32\drivers\etc. It’s read-only by default, so you may have to change the security settings.

Open this file in a text editor and add a line that maps the hostname to the loopback address:

127.0.0.1    servicebus

Configuring Service Bus…

Service Bus was installed in the first step but now we have to configure it. You’ll find a utility called Service Bus Configuration in your start menu which will guide you through the process. Run this and choose to create a new farm with custom settings.

custom settings

There are a few things you need to do on this next screen. Firstly, make sure your SQL Server details are correctly specified. You can leave all the database name and container settings as standard.

sql settings

Specify the service account under which Service Bus will run… this can be any user that has admin rights.

service account

Next we will need to tell the setup process where to find the certificate we generated earlier. Under Configure Certificate, uncheck the auto-generate checkbox.

configure certificate

Click each browse button in turn and select the certificate – the name should match the hostname we chose earlier. If there is more than one, check the certificate properties and select the one that does not contain a warning about trust.

Make sure you repeat this for both the farm and the encryption certificate settings.

select certificate

Finally, change the port settings that begin with ‘9’ to start with ’10’ instead… this will avoid some of the more common potential port conflicts which will cause the installation to fail.

Leave the AMQP ports as they are, unless you know for sure that they are in use by another service such as RabbitMQ.

If there are any other services using the same AMQP ports, Service Bus configuration will hang for a long period of time and then eventually fail, but not give any indication of why!

ports
more ports

You may also enter an alternative namespace name instead of the default provided.

That’s all the configuration we need, so hit the arrow to proceed. If everything has been entered correctly, you’ll see a summary of all the information. Click the tick to apply the settings!

The process may take a few minutes, however you shouldn’t see it go too long without logging progress to the window. If it does hang then see the above advice about ports, and check for other services that may be conflicting. If all is well you should see something like this:

Success

Changing the hostname

You’ll see from the processing completed message that our endpoint still contains the computer name, so we need to run a few PowerShell commands to change it. Open the Service Bus PowerShell prompt from the start menu and run the following as separate commands, replacing servicebus with your hostname if you chose an alternative.

Stop-SBFarm
Set-SBFarm -FarmDns 'servicebus'
Update-SBFarm
Start-SBFarm

The start command may take around 5 minutes to complete, but you should see something like this (edited for brevity):

set dns

Now if you open a web browser and navigate to https://servicebus:10355/ServiceBusDefaultNamespace (substitute your host and namespace where appropriate), lo and behold we have a working Service Bus deployment… complete with a valid SSL certificate!

working in a browser

Connecting to your new Service Bus deployment

You can now access your local Service Bus installation and default namespace using the following connection string (again, adjust where appropriate):

Endpoint=sb://servicebus/ServiceBusDefaultNamespace;StsEndpoint=https://servicebus:10355/ServiceBusDefaultNamespace;RuntimePort=10354;ManagementPort=10355

The low-level API exposed by Service Bus for Windows is not quite the same as the latest one offered by Microsoft Azure, and as a result you will need to use a different NuGet package:

Install-Package ServiceBus.v1_1

The good news is that Azure Service Bus is backwards compatible, so you can use the same 1.1 package for both development and production. The only downside is that you may not have access to all the very latest features of Azure Service Bus, only those that are common to both deployments.

Accessing Service Bus from an IIS app pool

When we set up the initial configuration, we specified that the users that could manage our namespace were those that were members of the Admin access control group.

If you want to be able to create queues/topics in your new Service Bus deployment from within an IIS hosted web application, you will need to create a new group for these users and then authorise the group with Service Bus. Please note that this is not possible out of the box with Windows 8 Home edition.

Open up the Computer Management window using the run dialog and typing compmgmt.msc:

computer management run

Expand the Local Users and Groups section, right click on Groups and choose New Group. Enter a suitable name and select the relevant users. You may need to qualify the names of app pool users with ‘IIS APPPOOL\<user>’ in order to find them.

create group

Once the group has been created, return to the Service Bus PowerShell prompt and type the following (substitute your namespace name if you chose a different one):

Set-SBNamespace -Name 'ServiceBusDefaultNamespace' -ManageUsers 'Administrators','ServiceBusUsers'

set group permissions

Once this has completed, be sure to restart IIS in order for the changes to take effect, otherwise you will continue to receive a 401 Unauthorized response back from Service Bus. You can do this through the management console or by using the run command and typing iisreset.

Summary

In this post we’ve seen how to:

  • Install Service Bus for Windows 1.1
  • Generate a certificate and add a hosts entry in order to use a custom hostname
  • Configure Service Bus using the wizard and then apply the custom hostname
  • Use the Service Bus 1.1 NuGet package to work with both production and development
  • Authorise IIS app pool users to use Service Bus from within a web application

Hopefully this has helped to demystify Service Bus for Windows and demonstrate that although it may lack wider documentation, it is a very robust and viable tool. You can (and should) use it to help develop and accurately test Service Bus applications that will eventually be deployed to Azure.

Please also check out the alpha release of my new project AzureNetQ, which provides an easy API for working with Microsoft Service Bus. It’s based on the most excellent RabbitMQ library EasyNetQ, but please take care as at the moment the documentation is still in the process of being migrated.

All questions welcome in the comments!

Pete

References

http://msdn.microsoft.com/en-us/library/dn282144.aspx

http://www.microsoft.com/web/downloads/platform.aspx

http://support.microsoft.com/kb/840671

https://www.nuget.org/packages/ServiceBus.v1_1/

http://roysvork.github.io/AzureNetQ

http://easynetq.com/

http://social.msdn.microsoft.com/forums/windowsazure/ru-ru/688ada3c-bb95-488d-9ad0-aec297438e1c/problem-starting-message-broker-during-service-broker-configuration

http://stackoverflow.com/questions/22456947/service-bus-for-windows-server-the-api-version-is-not-supported/22622117#22622117

http://social.msdn.microsoft.com/Forums/windowsazure/en-US/c23a7c1f-742d-4d7f-ad4f-3bf149964762/service-bus-for-windows-server-the-api-version-is-not-supported?forum=servbus

http://www.dotnetconsult.co.uk/weblog2/PermaLink,guid,50861acd-6bd1-4283-9fdc-7a611a440829.aspx

https://www.sslshopper.com/article-how-to-create-a-self-signed-certificate-in-iis-7.html

http://msdn.microsoft.com/en-us/library/dn520958.aspx

http://social.msdn.microsoft.com/Forums/windowsazure/en-US/f5096a7a-9605-4231-b093-b7d278be7c20/cant-uninstall-service-bus

Consider your target audience when giving advice

I’m seeing a common pattern lately – respected mentors with weight in the community giving people ‘advice’ in the form:

X is usually bad, therefore never do X

Giving this kind of advice IS bad, therefore don’t do it (not even a hint of irony here). If you find yourself making these kinds of blanket statements, you need to ask yourself “Who is this advice aimed at?”

Juniors/Intermediates

Often we want to direct our information at people who are still learning, but are being led astray by the majority of advice they may be reading. In doing so, we hope that they can avoid making a mistake without having to first master the subject.

This is a noble motive. The flaw in this approach is that making mistakes is a really important way in which people learn.

When you have driving lessons, you’re being taught how to operate a car safely and reliably enough to pass a test. Learning to ‘drive’ is an ongoing process that takes years of practice, almost all of which will come after you pass your test and get out on your own.

What makes this tricky is that often junior developers within teams are not given this early opportunity to make the mistakes they need to… they are expected to be able to drive the car on their own, or teach themselves to do it.

Your blanket advice is bad for this audience. It will conflict with what they read elsewhere and confuse them.

Seniors/Experts

Some people in this audience are still very keen to learn, others are very set in their ways. Depending on your role or standing, this is probably where the bulk of your followers lie. There’s a thin line between a senior who is very keen to learn and a junior as both have a capacity to misinterpret advice.

Assuming that your blanket statement is encountered by a true expert however, they may be offended – and rightly so. These people assume that you know your stuff and you know your audience; after all, you’ve worked hard to get where you are, right?

When an expert encounters an unbalanced statement that does not take into account the true circumstances and complexity of the situation, they immediately either question it or dismiss it. Most likely it’s the latter; you’ve not helped your cause and you’ve made yourself look like a bit of a tit.

Your blanket advice is bad for this audience. It will insult them and undermine you.

Your peers

When directing content at your peers, it is much more likely to spark debate and inspire theoretical discussion which will help drive both your ideas and the community forward. You can stand to omit well understood details, be smug, sarcastic or controversial without fear of your advice being misconstrued.

In this context however, what you’re really conveying is a concept, idea or opinion which is at risk of being consumed incorrectly as advice by one of your other audiences.

This is what is known as Leaky Advice. It’s leaky in terms of its target audience, and leaky because you can’t cover a complex underlying problem with a simple statement. Just like leaky abstractions though, it’s only a problem if your audience or use case is not the correct one.

Your blanket advice is great for this audience, but it’s no longer advice and you shouldn’t frame it as such.

Solving the root problem

If you seek to provide advice, you have a duty to educate your audience correctly. The more followers you have and the more respected in the community you are, the more important this becomes.

Bad advice is propagated by people with a poor understanding – they’ve read an over-generalised post or tweet somewhere and treated it as gospel because it has come from a reliable source, without thinking about the consequences.

By tweeting blanket statements to the wrong audience you are not helping to end the bad practice that you were trying to educate people against; you’ve simply made it worse by providing more bad advice and adding to the confusion.

We owe it to our followers and readers to provide balanced arguments along with evidence. We are scientists after all.

 

Pete

Aside

Are we gOWINg in the right direction?

This week on Twitter we find ourselves back on the subject of OWIN, and once again the battle lines are drawn and there is much consternation.

The current debate goes thusly… should we attempt to build a centralised Dependency Injection wrapper, available to any middleware and allowing them – essentially – to share state?

If you’d like some context to this post, you can also read:

Is sharing state in this way considered an anti-pattern, or even a bastardisation of OWIN itself? To answer this, we need to ask ourselves some questions.

What *is* OWIN?

Paraphrasing, from the OWIN specification itself:

… OWIN, a standard interface between .NET web servers and web applications. The goal of OWIN is to decouple server and application…

The specification also defines some terms:

  • Application – Possibly built on top of a Web Framework, which is run using OWIN compatible Servers.
  • Web Framework – A self-contained component on top of OWIN exposing its own object model or API that applications may use to facilitate request processing. Web Frameworks may require an adapter layer that converts from OWIN semantics.
  • Middleware – Pass through components that form a pipeline between a server and application to inspect, route, or modify request and response messages for a specific purpose.

This helps clear some things up… particularly about the boundaries between our concerns and the terminology that identifies them. An application is built on top of a web framework, and the framework itself should be self-contained.

What should OWIN be used for?

OWIN purists say that middleware is an extension of the HTTP pipeline as a whole… the journey from its source to your server, passing through many intermediaries capable of caching or otherwise working with the request itself. OWIN middleware are simply further such intermediaries that happen to be fully within your control.

But there is another view – that OWIN provides an opportunity to augment or compose an application from several reusable, framework-agnostic middleware components. This is clearly at odds with the specification, but is it really without merit?

Composing applications in this way takes the strain off framework developers, and allows us all to work together towards a common goal. It allows us to build composite applications involving multiple web frameworks, leveraging their relative strengths within a single domain.

A lot of the purists are already unhappy with the direction that Microsoft has taken with their OWIN implementation – Katana. By and large I think they were just being practical and didn’t have the time to wait around for the decision of a committee, but this has only served to further muddy the waters when defining OWIN’s identity, purpose and intended usage.

If this isn’t a use for OWIN, then what is it?

When I began to learn about OWIN, I instinctively ended up in the composable applications camp, as did several others I know. I would love to see our disparate framework communities unite, and the availability of framework-agnostic modules could only be a good thing in this regard. But the specification is clear… this is not what OWIN is for.

On the subject of the specification and its definitions from earlier though, I think there is one quite glaring error. This error wasn’t present when the OWIN specification was drawn up, but rather came to be due to the effect that OWIN, and middleware such as SignalR, have had on the way we think about building applications:

Instead of building on top of a framework, the inverse is now true: we build our application out of frameworks, plural.

So what now?

What we are really after is a bootstrapper that allows us to run a pipeline of framework-agnostic components from *within* the context of our applications. If we execute this pipeline just beyond the boundary, it will have exactly the same effect as middleware in the OWIN pipeline but with the correct separation of concerns, and with access to shared state.

This bootstrapper could (probably should?) be a terminating middleware, that itself can hand off control to whatever frameworks your application is built from. Alternatively it could be a compatibility layer built into frameworks themselves… although I think that getting people to agree on a common interface is probably a ‘pipe’ dream.

Our community clearly desires such a mechanism for composing applications and allowing interoperability between frameworks. But that’s not what OWIN is for, and if we are serious about our goal, we’ll need to work together to meet the challenge.

Please leave your comments below.

Pete

Tracking changes to complex viewmodels with Knockout.JS Part 2 – Primitive Arrays

In the first part of this series, I talked about the challenges of tracking changes to complex viewmodels in Knockout, using isDirty() (see here and here) and getChanges() methods.

In this second part, I’ll go through how we extended this initial approach so we could track changes to array elements as well as regular observables. If you haven’t already, I suggest you have a read of part one as many of the examples build on code from the first post.

Starting Simple

For the purposes of this post we are only considering ‘Primitive’ arrays… these are arrays of values such as strings and numbers, as opposed to complex objects with properties of their own. Previously we created an extender that allows us to apply change tracking to a given observable, and we’re using the same approach here.

We won’t be re-using the existing extender, but we will use some of the same code for iterating over our model and applying it to our observables. In that vein, here’s a skeleton for our change tracked array extender… it has a similar structure to our previous one:

You should notice a few differences however:

  • Two observable arrays are being exposed in addition to the isDirty() flag – added and removed
  • The getChanges() method returns a complex object also containing adds and removes

As this functionality was developed with HTTP PATCH in mind, we’re assuming that we will need to track both the added items and the removed items, so that we can only send the changes back to the server. If you aren’t using PATCH, it can be sufficient just to know that a change has occurred and then save your data by replacing the entire array.

Last points to make – we’re treating any ‘changes’ to existing elements as an add and then a delete… these are just primitive values after all. Also the ordering of the elements is not going to be tracked (although this is possible and will be covered in the next post).

Array subscriptions

Prior to Knockout 3.0, we had to provide alternative methods to the usual push() and pop() so that we could keep track of array elements… subscribing to the observableArray itself would only notify you if the entire array was replaced. As of Knockout 3.0 though, we now have a way to subscribe to array element changes themselves!

We’re using the latest version for this example, but check the links at the bottom of the third post in the series if you are interested in the old version.

Let’s begin to flesh out the skeleton a little more:

Now we’ve added an arrayChange subscription, we’ll be notified whenever anyone pops, pushes or even splices our array. A splice can produce multiple changes at once, so we have to cater for that eventuality.

We’ve deferred the actual tracking of the changes to private methods, addItem() and removeItem(). The reason for this becomes clear when you consider what you’d expect to happen after performing the following operations:

In order to achieve this behavior, we first need to check that the item in question has not already been added to one of the lists like so:
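Knockout aside, that reconciliation can be sketched in plain JavaScript. Everything here – the factory name, the exact shape of getChanges() – is illustrative rather than the code from the Gist:

```javascript
// Tracks net additions/removals so that adding then removing the same
// primitive value (or vice versa) leaves no recorded change.
function createArrayChangeTracker() {
  var added = [];
  var removed = [];

  function addItem(item) {
    var idx = removed.indexOf(item);
    if (idx !== -1) {
      // The item was removed earlier in this session; adding it back cancels out.
      removed.splice(idx, 1);
    } else {
      added.push(item);
    }
  }

  function removeItem(item) {
    var idx = added.indexOf(item);
    if (idx !== -1) {
      // The item was added earlier in this session; the net effect is nothing.
      added.splice(idx, 1);
    } else {
      removed.push(item);
    }
  }

  return {
    addItem: addItem,
    removeItem: removeItem,
    isDirty: function () { return added.length > 0 || removed.length > 0; },
    getChanges: function () { return { added: added.slice(), removed: removed.slice() }; }
  };
}
```

With this in place, pushing ‘foo’ and then popping it again reports no changes at all – exactly what you want before building a PATCH request.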

Applying this to the view model

A change tracked primitive array is unlikely to be very useful on its own, so we need to make sure that we can track changes to an observable array regardless of where it appears in our view model. Let’s revisit the code from our previous sample that traversed the view model and extended all the observables it encountered:

In order to properly apply change tracking to our model, we need to detect whether a given observable is in fact an observableArray, and if so then apply the new extender instead of the old one. This is not actually as easy as it sounds… based on the status of this pull request, Knockout seems to provide no mechanism for doing this (please correct me if you know otherwise!).

Luckily, this thread had the answer… we can simply extend the observableArray “prototype” by adding the following line somewhere in global scope:

ko.observableArray.fn.isObservableArray = true; 

Assuming that’s in place, our change becomes very simple:
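The branch itself might look something like this – a sketch, with trackChange and trackArrayChange standing in for whatever you’ve named the two extenders:

```javascript
// Route each observable to the appropriate extender: observable arrays
// get the array change tracker, plain observables get the original one.
function applyChangeTrackingToObservable(observable) {
  if (observable.isObservableArray) {
    observable.extend({ trackArrayChange: true });
  } else {
    observable.extend({ trackChange: true });
  }
}
```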

We don’t need to change any of the rest of the wireup code from the first sample, as we are already working through our view model recursively and letting applyChangeTrackingToObservable do its thing.

That’s all the code we needed, now we can take it for a spin!

Summary

We’ve seen how we can make use of the new arraySubscriptions feature in Knockout 3.0 to get notified about changes to array elements. We made sure that we didn’t get strange results when items were added and then removed again or vice-versa, and then integrated the whole thing into a change tracked viewmodel.

In the third and final post in this series, we’ll go the whole hog and enable change tracking for complex and nested objects within arrays.

You can view the full code for this post here: https://gist.github.com/Roysvork/8743663, or play around with it in jsFiddle!

Pete

Aside

Increasing loop performance by iterating two intersecting lists simultaneously

Disclaimer

This brief post covers a micro-optimisation that we employed recently in an ASP.NET Web API app. If you’re looking to solve major performance problems or get a quick win on small tasks, this isn’t going to be very useful to you. However, if you’ve nailed all the big stuff and are processing a large batch (think millions) of records together, then these small inefficiencies really begin to add up. If this applies to you, then you may find the following solution useful.

It’s possible that many people have thought of this problem and provided a solution before… in fact I’m very sure they have, as I’ve googled it and so many Stack Overflow posts came up that I’m not going to bother linking to any of them. However, no-one seemed to have made anything that was simple, re-usable and easy to integrate… so this is my take on it.

Finally, I’ve attempted to do some napkin maths. It’s probably wrong in some way so please correct me.

Compound Iteration

How often have you written code like this? 

This simplified sample shows how you might validate an HTTP PATCH request against some metadata. It seems innocuous right?
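For illustration, the shape of such a compound loop rendered in JavaScript looks like this (the field metadata and names are invented for the sketch):

```javascript
// Naive compound iteration: for every field, scan every key of the
// request body to see whether the field is present.
function validatePatch(fields, requestBody) {
  var errors = [];
  fields.forEach(function (field) {
    var present = false;
    Object.keys(requestBody).forEach(function (key) {
      if (key === field.name) present = true; // inner scan runs per field
    });
    if (present && field.readOnly) {
      errors.push(field.name + ' is read only');
    }
  });
  return errors;
}
```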

But say you have 1000 fields to validate, and maybe half of them are present in the body of your request. In the worst case we’ll have 500 iterations of the outer fields loop where we’ll then have to iterate through 500 dictionary keys just to find out that the field doesn’t exist in the data set.

Even in an optimal case for the remaining fields that do exist, you’ll have to iterate through 250 keys on average before we find a match, so for an ‘average’ case we could be looking at:

(500 * 500) + (500 * 250) = 375,000

As an ‘average’ case, it could potentially be a lot less than this, or a lot more. Either way, imagine trying to bulk validate 100,000 records and… yikes!

Sort your data, and enter the Efficient Iterator

Provided your numbers are big enough, it’s much more efficient to sort your data first and then step through each collection simultaneously. If your field info is coming, say, from a SQL table with a clustered index and an ORDER BY is essentially free, then it’s even more likely that this will result in a significant speedup.

Basically, what such an algorithm does is take the first item from each of the two lists and compare them. If item A comes before item B in the sort order, you advance forward one item in list A – or vice versa – until the two are found to match (or you run out of items). You can take action at each step, whether the value is a match or an orphan on either side.

Now the worst case iteration count is merely the sum of the elements in the two lists. So in our average case, just 1,500. That’s a 250x reduction… over two orders of magnitude!
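Sketched in plain JavaScript (my own illustrative rendering, not the reusable Gist), the walk looks like this:

```javascript
// Merge-style walk over two sorted arrays: O(a.length + b.length)
// instead of the O(a.length * b.length) nested-loop approach.
function iterateSorted(a, b, onMatch, onOrphanA, onOrphanB) {
  var i = 0, j = 0;
  while (i < a.length && j < b.length) {
    if (a[i] === b[j]) { onMatch(a[i]); i++; j++; }
    else if (a[i] < b[j]) { onOrphanA(a[i]); i++; } // a is behind, advance it
    else { onOrphanB(b[j]); j++; }                  // b is behind, advance it
  }
  // Whatever remains on either side has no counterpart.
  while (i < a.length) onOrphanA(a[i++]);
  while (j < b.length) onOrphanB(b[j++]);
}
```

In the validation scenario, onMatch would validate the field against the request value, while the orphan callbacks flag missing or unexpected fields.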

Show me the code

Without further ado, here’s a Gist that you can use to do this right now…

Take a look at these MSpec tests for information on how to use it. You’ll also need to use nullable types if you want to work with non-reference types but that should be straightforward. Thanks to Tommy Carlier for his amendments to the sample to allow any type of IEnumerable and to support value types!

Questions are welcome in the comments… but please refrain from unhelpful critiquing the ‘design’ of the simplified problem sample ; ) Enjoy iterating efficiently!

Don’t forget that you’ll have to sort both lists before passing them to the efficient iterator!

Pete

Tracking changes to complex viewmodels with Knockout.JS

As part of a project I’ve been working on for a client, we’ve decided to implement HTTP PATCH in our API for making changes. The main client consuming the API is a web application driven by Knockout.JS, so this meant we had to find a way to figure out what had changed on our view model, and then send just those values over the wire.

There is nothing new or exciting about this requirement in itself. The question has been posed before, and it was the subject of a blog post way back in 2011 by Ryan Niemeyer. What was quite exciting however was that our solution ended up doing much more than just detect changes to viewmodels. We needed to keep tabs on individual property changes, changes to arrays (adds\deletes\modifications), changes to child objects and even changes to child objects nested within arrays. The result was a complete change tracking implementation for Knockout that can process not just one object but a complete object graph.

In this two part post I’ll attempt to share the code, the research and the story of how we arrived at the final implementation.

Identifying that a change has occurred

The first step was to get basic change tracking working given a view model with observable properties containing values – no complex objects.

Initial googling turned up the following approach as a starting point:

http://www.knockmeout.net/2011/05/creating-smart-dirty-flag-in-knockoutjs.html
http://www.johnpapa.net/spapost10/
http://www.dotnetcurry.com/showarticle.aspx?ID=876

These methods all involved some variation on adding an isDirty computed observable to your view model. Ryan’s example stores the initial state of the object when it is defined which can then be used as a point of comparison to figure out if a change has occurred.

Suprotim’s approach is based on Ryan’s method but instead of storing a json snapshot of the initial object (which could potentially be very large for complex view models), it merely subscribes to all the observable properties of the view model and sets the isDirty flag accordingly.

Both of these are very lightweight and efficient ways of detecting that a change has occurred, but as detailed in this thread they can’t pinpoint exactly which observable caused the change. Something more was needed.

Tracking changes to simple values

After a bit more digging, a clever solution to the problem of tracking changes to individual properties emerged as described by Stack Overflow one hit wonder, Brett Green in the answer to this question and also in slightly more detail on his blog.

This made use of Knockout extenders to add properties to the observables themselves; an overall isDirty() method for the view model as a whole could then be provided by a computed observable. This post almost entirely formed the basis for the first version. After a bit of restructuring, pretty soon we had an implementation that allows us to track changes to a flat view model:
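Stripped of Knockout specifics, the pattern boils down to: capture the initial value, subscribe, and flip a flag when the value changes. Here’s a framework-free miniature – every name in it is illustrative, and the tiny observable stand-in is just enough to show the idea:

```javascript
// A tiny stand-in for ko.observable, just enough to demonstrate the pattern.
function observable(initial) {
  var value = initial, subscribers = [];
  function obs(v) {
    if (arguments.length === 0) return value;
    value = v;
    subscribers.forEach(function (fn) { fn(v); });
  }
  obs.subscribe = function (fn) { subscribers.push(fn); };
  return obs;
}

// Remember the initial value and flip isDirty when it changes.
function trackChanges(obs) {
  var initial = obs();
  var dirty = false;
  obs.subscribe(function (newValue) {
    dirty = (newValue !== initial);
  });
  obs.isDirty = function () { return dirty; };
  obs.getChanges = function () { return dirty ? obs() : undefined; };
  return obs;
}

var name = trackChanges(observable('Pete'));
name('Peter');
// name.isDirty() → true, name.getChanges() → 'Peter'
```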

An example of utilising this change tracking is as follows:

Detecting changes to complex objects

The next task was to ensure we could work with properties containing complex objects and nested observables. The issue here is that the isDirty property of an observable is only set when its contents are replaced. Modifying a child property of an object within an observable will not trigger the change tracking.

This thread on google groups seemed to be going in the right direction and even had links to two libraries already built:

  • Knockout-Rest seemed promising, but although this was able to detect changes in complex properties and even roll them back, it still could not pinpoint the individual properties that triggered the change.
  • EntitySpaces.js seemed to contain all the required elements, but it relied on generated classes, and the change tracking features were too tightly coupled to its main use as a data access framework. At the time of writing it had not been updated for two years.

In the end we came up with a solution ourselves. In order to detect that a change had occurred further down the graph, we modified the existing isDirty extension member so that in the event that the value of our observable property was a complex object, it should also take into account the isDirty value of any properties of that child object:

Now when extending an observable to apply change tracking, if we find that the initial value is a complex object we also iterate over any properties of our child object and recursively apply change tracking to those observables as well. We also set up subscriptions to the resulting isDirty flags of the child properties to ensure we set the hasDirtyProperties flag on the target.

Tracking individual changes within complex objects

After the previous modifications, our change tracking now behaves like this:
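The embedded demo doesn't survive here, so sketched as code — the recursive extender from above, but with getChangesFromModel still in its naive flat form (stand-in observable as before, helper names illustrative):

```javascript
// Minimal stand-in for ko.observable so this sketch runs without Knockout.
function observable(initial) {
  var value = initial, subscribers = [];
  function obs(newValue) {
    if (!arguments.length) return value;
    value = newValue;
    subscribers.forEach(function (fn) { fn(newValue); });
  }
  obs.subscribe = function (fn) { subscribers.push(fn); };
  return obs;
}

// Recursive change tracking, condensed from the version built up above.
function trackChanges(target) {
  var hasValueChanged = observable(false);
  var hasDirtyProperties = observable(false);
  var initialValue = target();
  if (initialValue && typeof initialValue === "object") {
    Object.keys(initialValue).forEach(function (key) {
      var child = initialValue[key];
      if (typeof child === "function" && child.subscribe) {
        trackChanges(child);
        child.hasValueChanged.subscribe(function (dirty) {
          if (dirty) hasDirtyProperties(true);
        });
        child.hasDirtyProperties.subscribe(function (dirty) {
          if (dirty) hasDirtyProperties(true);
        });
      }
    });
  }
  target.subscribe(function () { hasValueChanged(true); });
  target.hasValueChanged = hasValueChanged;
  target.hasDirtyProperties = hasDirtyProperties;
  target.isDirty = function () { return hasValueChanged() || hasDirtyProperties(); };
  return target;
}

// The ORIGINAL change log builder: it just takes the raw value of anything
// that reports itself dirty.
function getChangesFromModel(viewModel) {
  var changes = {};
  Object.keys(viewModel).forEach(function (key) {
    if (viewModel[key].isDirty()) changes[key] = viewModel[key]();
  });
  return changes;
}

var viewModel = {
  name: trackChanges(observable("Pete")),
  skills: trackChanges(observable({
    language: observable("JavaScript"),
    framework: observable("Knockout")
  }))
};
viewModel.skills().language("TypeScript");

// skills correctly reports itself dirty, but the change log contains the
// whole skills object (the untouched framework included), not just the
// one property that actually changed.
var changes = getChangesFromModel(viewModel);
```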

Obviously there’s something missing here… we know that the Skills object has been modified and we also technically know which property of the object was modified but that information isn’t being respected by getChangesFromModel.

Previously it was sufficient to pull out changes by simply returning the value of each observable. That’s no longer the case so we have to add a getChanges method to our observables at the same level as isDirty, and then use this instead of the raw value when building our change log:
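Here's a sketch of that getChanges member, exercised with hand-built stand-ins rather than real extended observables; a full version would also unwrap any nested observables when the whole value has been replaced.

```javascript
// getChanges lives at the same level as isDirty. If the observable's own
// value was replaced (hasValueChanged), every property counts as changed
// and we return the full value; otherwise we recurse, collecting only the
// children that are actually dirty.
function getChanges(target) {
  if (target.hasValueChanged()) return target();
  var value = target(), changes = {};
  Object.keys(value).forEach(function (key) {
    var child = value[key];
    if (child.isDirty && child.isDirty()) {
      changes[key] = child.getChanges ? child.getChanges() : child();
    }
  });
  return changes;
}

// getChangesFromModel now asks each dirty property for its own changes
// instead of taking the raw value.
function getChangesFromModel(viewModel) {
  var changes = {};
  Object.keys(viewModel).forEach(function (key) {
    var property = viewModel[key];
    if (property.isDirty && property.isDirty()) {
      changes[key] = property.getChanges ? property.getChanges() : property();
    }
  });
  return changes;
}

// Hand-built stand-ins for extended observables, to exercise the logic.
function fake(value, dirty, replaced) {
  var f = function () { return value; };
  f.isDirty = function () { return dirty; };
  f.hasValueChanged = function () { return replaced; };
  f.getChanges = function () { return getChanges(f); };
  return f;
}

var language = fake("TypeScript", true, true);   // leaf: value replaced
var framework = fake("Knockout", false, false);  // leaf: untouched
var skills = fake({ language: language, framework: framework }, true, false);
var model = { name: fake("Pete", false, false), skills: skills };
var changes = getChangesFromModel(model);
```

Only the modified language property makes it into the change log this time, even though it sits a level down inside skills.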

Now our getChangesFromModel will operate recursively and produce the results we’d expect. I’d like to draw your attention to this section of the above code in particular:

There’s a reason we’ve been using seperate observables to track hasValueChanged and hasDirtyProperties; in the event that we have replaced the contents of the observable wholesale, we must pull out all the values.

Here’s the change tracking complete with complex objects in action:

Summary

In this post we’ve seen how we can use a knockout extender and an isDirty observable to detect changes to individual properties within a view model. We’ve also seen some of the potential pitfalls you may encounter when dealing with nested complex objects and how we can overcome these to provide a robust change tracking system.

In the second part of this post, we’ll look at the real killer feature… tracking changes to complex objects within arrays.

You can view the full code for the finished example here: https://gist.github.com/Roysvork/8744757 or play around with the jsFiddle!

Pete

Edit: As part of the research for this post, I did come across https://github.com/ZiadJ/knockoutjs-reactor which takes a very similar approach and even handles arrays. It’s a shame I had not seen this when writing the code as it would have been quite useful.

TDD, continuous deployment and the golden number

You’re a strong supporter of the benefits of continuous integration… whenever anyone commits to your source repo, all your tests are run. You have close to 100% coverage (or as close as you desire), so if anything breaks you’re going to know about it. Once everything is pushed and all the tests pass, you’re good to ship, right?

WRONG.

Do you know how many tests you have in your solution? Does everyone in your team know? If they don’t know, do they have ready access to this figure both before and after the push? This figure is important; it’s your TDD Golden Number.

All is not what it seems

Someone you work with is trying out a new GitHub client. Sure, it’s not new in the grand scheme of things, but it’s new to YOUR TEAM. It should work fine, but just like a screw from a disassembled piece of furniture, just because it fits the hole it was removed from doesn’t mean it will fit another.

Something goes wrong with a rebase or a merge and you don’t notice. This shit should be easy and happen as a matter of course, but this time it doesn’t and changesets get lost. Not only that, but those changesets span a complete Red-Green-Refactor cycle, so you’ve lost both code and tests.

The build runs… green light. But…

UNLESS YOU KNOW YOUR GOLDEN NUMBER and have a way of verifying it against the number of tests that ran, you have no idea if this green light is a true indicator of success. All you know is that all the tests in source control pass.

Risk management

Granted, the chances of a scenario where commits get lost during a merge are slim, but if any loss is possible then the chances are that you’ll lose tests as well as code because they’re probably in the same commit. This leads us to:

POINT THE FIRST: Make sure you commit your tests separately from the code that makes them pass.

Some may argue that you can take this one step further and commit each step of Red-Green-Refactor separately; this works too, so long as you don’t push until you have a green light. This is a good starting point for minimizing the chance of loss.

POINT THE SECOND: Test your golden number

You’ll need to be careful how you deal with this one from a process and team management point of view, but why not VERIFY YOUR GOLDEN NUMBER? Write a test that checks that the number of tests matches what it was at the time of the commit.

Wait a minute…

There’s a good reason why you might not think to do this; what if the number of tests lost is equal to the number of tests added anew by the commit?

POINT THE MOST IMPORTANT: COMMIT YOUR TESTS TO A SEPARATE REPO

The chances of our worst case scenario playing out now, with so many distinct steps, are orders of magnitude lower than in our previous case. For disaster to go unnoticed, you have to lose the tests AND the accompanying golden number test AND the code that was in a separate commit to a separate repository.

All in all, a good way to think about the golden number is like this:

Seriously though

Even with the best intentions and code coverage, there’s always a chance that something may go wrong and you won’t know about it. When employed together, these three points will help you efficiently mitigate this risk.

Pete
