Date: Wednesday, 06 Nov 2013 17:42

Recently I’ve been playing around a lot with JavaScript modules. The particular use case I’ve been thinking about is the creation of a large, complex JavaScript library in a modular and sensible way. JavaScript doesn’t really do this very well. It does it so poorly, in fact, that a sizeable number of projects are all done in a single file. I did find a number that used file concatenation to assemble the output script, but this seems like a stone-age technique to me.

This led me to look at the two competing JavaScript module techniques: Asynchronous Module Definition (AMD) and CommonJS (CJS). AMD is the technique used in RequireJS, and CommonJS is the technique used by nodejs.

The RequireJS project has a script called r.js which will “compile” a set of AMD modules into a single file. There are other projects like Browserify which do the same thing for a collection of CommonJS modules. Basically, all of these figure out the ordering from the dependencies, concatenate the files, and inject a minimalistic bootstrapper to provide the require/module/exports functions. Unfortunately, this means that they all have the downside of leaving all the ‘cruft’ of the module specification in the resulting file.

To illustrate what I mean by ‘cruft’, I will use one of the examples from the browserify project. This project has three JavaScript files that use the CommonJS module syntax, and depend on each other in a chain.

bar.js
module.exports = function(n) {
	return n * 3;
}
foo.js
var bar = require('./bar');
module.exports = function(n) {
	return n * bar(n);
}
main.js
var foo = require('./foo');
console.log(foo(5));

When I run this through browserify, it produces this output:

out.js
;(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){
var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);
if(i)return i(o,!0);throw new Error("Cannot find module '"+o+"'")
}var f=n[o]={exports:{}};t[o][0].call(f.exports,function(e){
var n=t[o][1][e];return s(n?n:e)},f,f.exports,e,t,n,r)}return n[o].exports
}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o])
;return s})({1:[function(require,module,exports){
module.exports = function(n) {
	return n * 3;
}

},{}],2:[function(require,module,exports){
var bar = require('./bar');

module.exports = function(n) {
	return n * bar(n);
}

},{"./bar":1}],3:[function(require,module,exports){
var foo = require('./foo');
console.log(foo(5));

},{"./foo":2}]},{},[3])
;

When you run the resulting program in either a browser or nodejs, it does the right thing and tells you 75. But look at how much garbage there is! To deal with the require & module parts, it defines a bunch of stuff at the top, seriously bloating the file. The original program code was 9 lines with 169 characters. The “compiled” version was 14 lines and 733 characters. Minifying it with uglifyjs only shrinks it down to 713 characters.

In an ideal world, I would like a system that can produce this from those same three source files:

ideal.js
(function() {
	function bar(n) {
		return n * 3;
	}
	function foo(n) {
		return n * bar(n);
	}
	console.log(foo(5));
}());

That looks better to me and is only 114 characters long (minified it is 95). Because I used the module pattern, it doesn’t leak anything into the global namespace, just like the compiled version.

I’d read that jQuery used AMD modules, and knew that their resulting JavaScript files weren’t cluttered up, so I took a look at their build script. Interestingly, they use the optimizer from RequireJS, but then do a bunch of post-processing. Essentially they rip out all the defines and requires and such, and then wrap it in a closure. Pretty cool.

I found a number of other tools for doing single-file compiles, but none of them seemed to take the kind of aggressive approach to optimization that I think would be ideal when building a large library, so for now I’ve been using a hacked-up version of the jQuery build system.

I like modular code, but I’m still not very happy about what I’ve found for JavaScript. I can get cohesive, decoupled and testable components using either the AMD or CommonJS approach, but still haven’t found a reasonable way to bring it all together when building a single library.

Next up, I’m going to look at what I can get from TypeScript modules. Maybe it will give me more of what I want.

Date: Monday, 23 Sep 2013 10:07

Last month I wrote an article for Visual Studio Magazine called, “What Really Matters in Developer Testing” that I wanted to share with my readers here.

Note: They changed the title and made a few other tweaks here and there, so this is my original manuscript as opposed to the edited and published version. Enjoy!

If you’d prefer, you can read the published article here:
http://visualstudiomagazine.com/articles/2013/08/01/what-really-matters-in-developer-testing.aspx?m=1

What Really Matters In Developer Testing

It isn’t about the tests, it is about the feedback

by Peter Provost – Principal Program Manager Lead – Microsoft Visual Studio

Introduction

Recently someone asked me to provide guidance to someone wanting to “convince development teams to adopt TDD”. I wrote a rather long reply that was the seed leading to this article. My answer was simple: you cannot convince people to do Test-Driven Development (TDD). In fact, you should not even try because it focuses on the wrong thing.

TDD is a technique, not an objective. What really matters to developers is that their code does what they expect it to do. They want to be able to quickly implement their code, and know that it does what they want. By focusing on this, we can more easily see how developers can use testing techniques to achieve their goals.

All developers do some kind of testing when they write code. Even those who claim not to write tests actually do. Many will simply create little command line programs to validate expected outcomes. Others will create special modes for their app that allow them to confirm different behaviors. Nevertheless, all developers do some kind of testing while working on their code.

What Test-Driven Development Is (and Isn’t)

Unlike ad-hoc methods like those described above, TDD defines a regimented approach to writing code. The essence of it is to define and write a small test for what you want before you write the code that implements it. Then you make the test pass, refactor as appropriate and repeat. One can view TDD as a kind of scientific method for code, where you create a hypothesis (what you expect), define your experiment (the tests) and then run the experiment on your subject (the code).
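
To make the cycle concrete, here is a minimal, hypothetical sketch of one red-green iteration, written with xUnit.net syntax; the BoundedStack type and its members are invented purely for illustration.

A minimal red-green cycle
using Xunit;

public class BoundedStackTests
{
    // Red: write this test first, before BoundedStack exists, and watch it fail.
    [Fact]
    public void PushThenPopReturnsTheSameItem()
    {
        var stack = new BoundedStack(capacity: 10);

        stack.Push(42);

        Assert.Equal(42, stack.Pop());
    }
}

// Green: write just enough code to make the test pass, then refactor and repeat.
public class BoundedStack
{
    readonly int[] items;
    int count;

    public BoundedStack(int capacity) { items = new int[capacity]; }

    public void Push(int item) { items[count++] = item; }

    public int Pop() { return items[--count]; }
}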

TDD proponents assign additional benefits to this technique. One of the most important is that it strongly encourages a highly cohesive, loosely coupled design. Since the test defines the interface and how dependencies are provided, the resulting code typically ends up easy to isolate and with a single purpose. Object-oriented design defines high cohesion and loose coupling to be essential properties of well-designed components.

In addition, TDD provides a scaffold for when the developer is unsure how to implement something. It allows the developer to have a conversation with the code, asking it questions, getting answers and then adjusting. This makes it a great exploratory tool for understanding something new.

Test-driven development is not a panacea, however. While the tests produced can serve to protect from regressions, they are not sufficient on their own to assess and ensure quality. Integration tests that combine several units together are essential to establish a complete picture of quality. End-to-end and exploratory testing will still be required to evaluate the fitness of the entire system.

In short, TDD is another tool that the developer can use while writing code. It has benefits but it also has costs. It can help you define components with very little upfront definition. However, it will add to the time required to create the code. It has a strong, positive impact on the design of your code, but it requires practice and experience to learn, and can be frustrating for the novice.

Short-cycle Mode

When we think about test-driven development as a tool, we recognize that like all tools, it has times when it is effective and times when it is not. As my father used to tell me, “You can put in a screw with a hammer, but it probably isn’t the best choice.”

There are times when the developer begins with a clear picture of how a component should work. There is a clear mental model of what it should do and how to code it. Trying to baby-step into that design using a test-first technique would be tedious and time consuming. Nevertheless, the design benefits from the TDD approach are still desirable. In these cases, it would be nice if we could get the best of both worlds.

Test-driven development leads to these design benefits primarily by forcing the developer into a rapid back-and-forth between consuming code (tests) and the system under test. This, I believe, is where the real benefit comes from TDD. Write a small amount of code, then a small amount of tests. Cycle back and forth between them frequently. I call this short-cycle testing.

Tests are a tool that let you ask questions of your code. By asking questions frequently, you can fine-tune the thing you are building. Sometimes you will write a test first, sometimes after. Avoid staying in one mode too long. Keep the back-and-forth fluid and frequent.

Even when you know what you want, switching to a test lets you confirm your thinking. You will often find that the test tells you something unexpected, letting you correct it earlier. The tests will also help you find dependencies and coupling you never realized were there. Trying to write tests after an extended free-coding period will also tell you these things, but by then it will be significantly harder to do anything about it.

Conclusion

I said at the beginning that developers really want to be able to quickly write the correct code and know it does what they think it does. I have highlighted a few words in that sentence that I think are the key elements. Developers want to be efficient and do their work quickly. They also want to create the right thing. Moreover, they need to be sure that the code they write does what they think it does. This is the essential goal of developer testing and is what differentiates it from other kinds of testing.

Developer tests are an effective tool that deliver quality and design benefits. Test-driven development provides this result, but the biggest benefit comes not from dogmatic application of TDD, but from using a short-cycle mode where you write tests and code at almost the same time.

Date: Tuesday, 17 Sep 2013 08:29

For a couple of weeks I’ve been playing around with some of the updated tools I use to make this blog. Back in April 2012, I pulled all of my content out of a server-side ASP.NET blog engine and moved to Jekyll and Octopress. Honestly, I can’t see myself going back.

But it has been more than a year since I created the current skin, and it was time for change. Also, Jekyll has matured a lot and many of the things that Octopress brought to the table are no longer needed. So I decided to kill two birds with one stone and update the whole thing… generator and skin.

Of course I want a responsive layout, and for a long time my go-to framework has been Twitter Bootstrap. But TWBS has a few issues that have started to bug me, most notably the way it handles font-sizes. So I decided to begin an investigation of available frameworks and toolsets.

I’m not sure if you’ve tried searching for “html5 template”, but I will tell you that it results in a big list of “free, fresh web design templates”. Nothing particularly interesting or useful there. A few refining searches and clicks later landed me at the Front-end Frameworks repository owned by usabli.ca. This is the list the search engines were failing to provide for me.

You can clone the repository if you want, but since it is really just the code for the compare website, I would recommend you star it instead (so you know when changes happen) and then visit the CSS Front-end Frameworks Comparison website itself.

CSS Front-end Frameworks Compare

As you can see, it gives you a nice list of the top frameworks, annotated with useful bits like mobile/tablet support, browser support, license, etc. Great stuff and certainly a link to keep handy.

Date: Friday, 23 Aug 2013 17:31

Update 2013-08-29: We got mentioned in This Week on Channel 9 today. Woot!

This morning I did an interview on Radio TFS, hosted by Martin Woodward and Greg Duncan. The topic was “What have you been working on since Visual Studio 2012”, and we had a great time talking about all the cool stuff we’ve done in the VS2012 updates and what we’re targeting for Visual Studio 2013.

You can download & listen to the interview here:
Episode 64: Peter Provost on Visual Studio 2013 Ultimate

Many thanks to Martin and Greg for having me. It was fun and I’m looking forward to doing it again so we can talk more about developer testing.

Date: Thursday, 29 Nov 2012 12:04

This year at both TechEd North America and TechEd Europe I gave a presentation called “Testing Untestable Code with Visual Studio Fakes”. So far VS Fakes has been very well received by customers, and most people seemed to understand my feelings about when (and when not) to use Shims (see Part 2 for more on this). But one thing that has consistently come up has been questions about Behavioral Verification.

I talked about this briefly in Part 1 of this series, but let me rehash a few of the important points:

  • Stubs are dummy implementations of interfaces or abstract classes that you use while unit testing to provide concrete, predictable instances to fulfill the dependencies of your system under test.
  • Mocks are Stubs that provide the ability to verify calls on the Stub, typically including things like the number of calls made, the arguments passed in, etc.

With Visual Studio Fakes, introduced in Visual Studio 2012 Ultimate, we are providing the ability to generate fast running, easy to use Stubs, but they are not Mocks. They do not come with any kind of behavioral verification built in. But as I showed at TechEd Europe, there are hooks available in the framework that allow one to perform this kind of verification. This post will show you how they work and how to use them to create your own Mocks.

Why would you need to verify stub calls?

There is a long-running philosophical argument between the “mockists” and the “classicists” about whether mocking is good or bad. My personal take is that they can be very useful when unit testing certain kinds of code, but also that they can cause problems if overused, because rather than pinning down the external behavior of a method, they pin the implementation. But rather than dwell on that, let’s look at some of the cases where they are valuable.

Suppose you have a class whose responsibility is to coordinate calls to other classes. These might be classes like message brokers, delegators, loggers, etc. The whole purpose of this class is to make predictable, coordinated calls on other objects. That is the external behavior we want to confirm.

Consider a system like this:

System Under Test
// ILogSink will be implemented by the various targets to which
// log messages can be sent.
public interface ILogSink
{
   void LogMessage(string message, string categories, int priority);
}

// MessageLogger is a service class that will be used to log the
// messages sent by the application. Only the method signatures
// are shown, since in this case we really don't need to care
// about the implementation.
public class MessageLogger
{
   public void RegisterMessageSink(ILogSink messageSink);

   public void LogMessage(string message);
   public void LogMessage(string message, string categories);
   public void LogMessage(string message, string categories, int priority);
}

What we need to confirm is that the right calls are made into the registered sinks, and that all the parameters are passed in correctly. When we use Stubs to test a class like that, we will be providing fake versions of the ILogSink so we should be able to have the fake version tell us how it was called.

Behavioral verification using closures

I showed in my previous posts how you can combine lambda expressions and closures to pass data out of a stub’s method delegate. For this test, I will do this again to verify that the sink is being called.

Testing that the sink is called
[TestMethod]
public void VerifyOneSinkIsCalledCorrectly()
{
    // Arrange
    var sut = new MessageLogger();
    var wasCalled = false;
    var sink = new StubILogSink
    {
        LogMessageStringStringInt32 = (msg, cat, pri) => wasCalled = true
    };
    sut.RegisterMessageSink(sink);

    // Act
    sut.LogMessage("Hello there!");

    // Assert
    Assert.IsTrue(wasCalled);
}

This works, and isn’t really too verbose. But as I start to test more things, it can become a bit cluttered with complex setup code.

Testing that multiple sinks are called
[TestMethod]
public void VerifyMultipleSinksCalledCorrectly()
{
    // Arrange
    var sut = new MessageLogger();
    var wasCalled1 = false;
    var wasCalled2 = false;
    var wasCalled3 = false;
    var sink1 = new StubILogSink
    {
        LogMessageStringStringInt32 = (msg, cat, pri) => wasCalled1 = true
    };
    var sink2 = new StubILogSink
    {
        LogMessageStringStringInt32 = (msg, cat, pri) => wasCalled2 = true
    };
    var sink3 = new StubILogSink
    {
        LogMessageStringStringInt32 = (msg, cat, pri) => wasCalled3 = true
    };
    sut.RegisterMessageSink(sink1);
    sut.RegisterMessageSink(sink2);
    sut.RegisterMessageSink(sink3);

    // Act
    sut.LogMessage("Hello there!");

    // Assert
    Assert.IsTrue(wasCalled1);
    Assert.IsTrue(wasCalled2);
    Assert.IsTrue(wasCalled3);
}

As I add more tests to verify the parameters, it can get out of hand pretty quickly. Closures are great when I’m checking only a few things, but the more I want to track, the harder it gets.

What I really need is a better way to track these calls.

Introducing InstanceObserver

When we were creating the Stubs framework, we knew these situations would come up. We also knew that people can get very passionate about the syntax and form that their mocking frameworks use. So we decided to introduce an extension point into the generated Stubs that enables people to create any kind of mocking or verification API.

Every Stub that Visual Studio generates has a property on it called InstanceObserver that can take any object which implements the IStubObserver interface. When an observer is installed into a Stub, it will be called every time a method or property on the Stub is accessed. This is exactly what we need to do the kind of behavioral verification described above.

The definition of the IStubObserver interface is pretty simple:

IStubObserver interface
public interface IStubObserver
{
   void Enter(Type stubbedType, Delegate stubCall);
   void Enter(Type stubbedType, Delegate stubCall, object arg1);
   void Enter(Type stubbedType, Delegate stubCall, object arg1, object arg2);
   void Enter(Type stubbedType, Delegate stubCall, object arg1, object arg2, object arg3);
   void Enter(Type stubbedType, Delegate stubCall, params object[] args);
}

The reason there are five overloads is an optimization based on the observation that most methods in .NET have three or fewer arguments. The final overload is used for those that exceed three. This is a common pattern in the CLR and .NET BCL.

The first parameter to each call is the type of the interface that was stubbed. The second parameter is a delegate that represents the call which was made on the interface. The remaining parameters are the arguments provided to the call.

The delegate is properly typed so even if the interface or method is generic, the MethodInfo provided in the stubCall delegate will have the types that were actually used when the object was called.

Creating a custom StubObserver

Using this interface, I can create a class that will record the calls made to my stub.

CustomObserver
public class CustomObserver : IStubObserver
{
    List<MethodInfo> calls = new List<MethodInfo>();

    public IEnumerable<MethodInfo> GetCalls()
    {
        return calls;
    }

    public void Enter(Type stubbedType, Delegate stubCall, params object[] args)
    {
        calls.Add(stubCall.Method);
    }

    public void Enter(Type stubbedType, Delegate stubCall, object arg1, object arg2, object arg3)
    {
        this.Enter(stubbedType, stubCall, new[] { arg1, arg2, arg3 });
    }

    public void Enter(Type stubbedType, Delegate stubCall, object arg1, object arg2)
    {
        this.Enter(stubbedType, stubCall, new[] { arg1, arg2 });
    }

    public void Enter(Type stubbedType, Delegate stubCall, object arg1)
    {
        this.Enter(stubbedType, stubCall, new[] { arg1 });
    }

    public void Enter(Type stubbedType, Delegate stubCall)
    {
        this.Enter(stubbedType, stubCall, new object[] { } );
    }
}

Yes, I realize that this implementation spoils the whole point of the overloads on IStubObserver, but we’ll get rid of that later.

Now we can rewrite the two tests above without using closures. The observer will do the tracking for us, and we can simply check what it saw after we make our call to the system under test.

Testing with the CustomObserver
[TestMethod]
public void VerifyOneSinkIsCalledCorrectly()
{
   // Arrange
   var sut = new MessageLogger();
   var sink = new StubILogSink { InstanceObserver = new CustomObserver() };
   sut.RegisterMessageSink(sink);

   // Act
   sut.LogMessage("Hello there!");

   // Assert
   Assert.IsTrue(((CustomObserver)sink.InstanceObserver).GetCalls().Any(mi => mi.Name.Contains("ILogSink.LogMessage")));
}

[TestMethod]
public void VerifyMultipleSinksCalledCorrectly()
{
   // Arrange
   var sut = new MessageLogger();
   var sink1 = new StubILogSink { InstanceObserver = new CustomObserver() };
   var sink2 = new StubILogSink { InstanceObserver = new CustomObserver() };
   var sink3 = new StubILogSink { InstanceObserver = new CustomObserver() };
   sut.RegisterMessageSink(sink1);
   sut.RegisterMessageSink(sink2);
   sut.RegisterMessageSink(sink3);

   // Act
   sut.LogMessage("Hello there!");

   // Assert
   Assert.IsTrue(((CustomObserver)sink1.InstanceObserver).GetCalls().Any(mi => mi.Name.Contains("ILogSink.LogMessage")));
   Assert.IsTrue(((CustomObserver)sink2.InstanceObserver).GetCalls().Any(mi => mi.Name.Contains("ILogSink.LogMessage")));
   Assert.IsTrue(((CustomObserver)sink3.InstanceObserver).GetCalls().Any(mi => mi.Name.Contains("ILogSink.LogMessage")));
}

While we’re not using the closure anymore, those asserts are pretty ugly, so we will want to look at fixing that. But before we do, I want to delete some code.

Using the built-in StubObserver class

It turns out that VS2012 includes an implementation of IStubObserver in the framework that does everything my implementation above does, but it also includes all the missing stuff like the arguments, etc. This class is called StubObserver and is in the Microsoft.QualityTools.Testing.Fakes.Stubs namespace.

If we swap out my CustomObserver for the built-in StubObserver, the resulting test code is very similar, with just a few changes to handle how StubObserver provides the method call data back to us.

Testing with StubObserver
[TestMethod]
public void VerifyOneSinkIsCalledCorrectly()
{
    // Arrange
    var sut = new MessageLogger();
    var sink = new StubILogSink { InstanceObserver = new StubObserver() };
    sut.RegisterMessageSink(sink);

    // Act
    sut.LogMessage("Hello there!");

    // Assert
    Assert.IsTrue(((StubObserver)sink.InstanceObserver).GetCalls().Any(call => call.StubbedMethod.Name == "LogMessage"));
}

[TestMethod]
public void VerifyMultipleSinksCalledCorrectly()
{
    // Arrange
    var sut = new MessageLogger();
    var sink1 = new StubILogSink { InstanceObserver = new StubObserver() };
    var sink2 = new StubILogSink { InstanceObserver = new StubObserver() };
    var sink3 = new StubILogSink { InstanceObserver = new StubObserver() };
    sut.RegisterMessageSink(sink1);
    sut.RegisterMessageSink(sink2);
    sut.RegisterMessageSink(sink3);

    // Act
    sut.LogMessage("Hello there!");

    // Assert
    Assert.IsTrue(((StubObserver)sink1.InstanceObserver).GetCalls().Any(call => call.StubbedMethod.Name == "LogMessage"));
    Assert.IsTrue(((StubObserver)sink2.InstanceObserver).GetCalls().Any(call => call.StubbedMethod.Name == "LogMessage"));
    Assert.IsTrue(((StubObserver)sink3.InstanceObserver).GetCalls().Any(call => call.StubbedMethod.Name == "LogMessage"));
}

Simplifying the Assertions

While that works, the Assert calls are certainly not friendly to the eye. What we’d really like is something more like this:

Verify.MethodCalled( sink1, "LogMessage" );

It turns out that making this helper method isn’t very hard.

Verify Helper Class
public static class Verify
{
   public static void MethodCalled<T>( IStub<T> stub, string methodName )
      where T : class
   {
      // The built-in StubObserver records the calls; without one installed
      // there is nothing to verify against.
      var observer = stub.InstanceObserver as StubObserver;
      if (observer == null)
         throw new ArgumentException("No StubObserver installed into the stub.", "stub");

      var wasCalled = observer.GetCalls().Any(call => call.StubbedMethod.Name == methodName);
      if (wasCalled == false)
         throw new VerificationException(
            string.Format("Method {0} was expected, but was not called", methodName));
   }
}

That isn’t too bad. It is still not perfect, because I really don’t like using a string for the method name; it won’t be resilient to refactoring. Fixing that will take us on a trek through the world of Linq expressions however, which I will cover in a later post.
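
Just to sketch the idea (this is not the implementation from that later post, only a rough, hypothetical illustration), a helper could accept a lambda instead of a string, so a rename of LogMessage would be caught by the compiler:

Expression-based verification (hypothetical sketch)
using System;
using System.Linq;
using System.Linq.Expressions;
using Microsoft.QualityTools.Testing.Fakes.Stubs;

public static class VerifyByExpression
{
   // Hypothetical usage:
   //   VerifyByExpression.MethodCalled(sink1, (ILogSink s) => s.LogMessage(null, null, 0));
   public static void MethodCalled<T>( IStub<T> stub, Expression<Action<T>> call )
      where T : class
   {
      var methodCall = call.Body as MethodCallExpression;
      if (methodCall == null)
         throw new ArgumentException("Expected a single method call expression.", "call");

      var observer = stub.InstanceObserver as StubObserver;
      if (observer == null)
         throw new ArgumentException("No StubObserver installed into the stub.", "stub");

      var wasCalled = observer.GetCalls().Any(c => c.StubbedMethod.Name == methodCall.Method.Name);
      if (!wasCalled)
         throw new InvalidOperationException(
            string.Format("Method {0} was expected, but was not called", methodCall.Method.Name));
   }
}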

Conclusions

While the VS 2012 Fakes framework does not have a built-in verification framework, you can do verification using existing language constructs like closures and lambdas. You can also leverage the IStubObserver interface to create more customized behavioral frameworks, potentially going all the way to a full fluent API for “mockist”-style behavioral verification.

If anything, the assert statements have gotten uglier, but we have now eliminated all of the closures and moved all of the verification logic to the Assert section of the test. We’re making headway.

Date: Thursday, 11 Oct 2012 00:15

I know this post is probably going to make a lot of people say, “Holy crap, man. If you need that much of a system, you get too much email.” All I can say is “Guilty as charged, but I know I’m not the only one with this problem.” So if you find this useful, great. If not, move on.

I’ve long been a fan of the whole Inbox Zero idea. While the concept of using other kinds of task lists (e.g. Outlook Tasks, Trello or Personal Kanban) is nice, in my experience the Inbox is a much more natural place to keep track of the things I need to do. Like it or not, a large number of us in the tech sector use email as our primary personal project management system. I don’t think this is just a “Microsoft PM” thing, but certainly the amount of email that happens here, in this role, makes it more the case.

Scott Hanselman has written a few times about his rule system on his blog. In his most recent post on this topic, he classifies emails into four big buckets:

  1. Most Important - I am on the To line
  2. Kind of Important - I am CC
  3. External - From someone outside the company
  4. Meeting invites

I tried his system for a while, but I found that I still ended up with too many emails in my Inbox that weren’t high priority for me to read, but that I may need to find later via search.

This brings up a good point: My system depends on the idea that I will TRAF (Toss, Refer, Act, File) the things in my Inbox quickly. Any email which doesn’t meet certain criteria (more on that in a second) will be Filed by default, and if I need to, I can find it later with search.

Naming rules

My system leverages the power of Inbox rules. I use Outlook, but most email clients (web or local) have something like them, though you may need to adjust the specifics to your client. This system also ends up with a lot of single-purpose rules. I like this because it lets me see in the rules management UI what I have. I tried the great big “Mailing Lists” rule before and it was difficult to keep track of.

Naming the rules is an important part of keeping track of it all. My general rule naming pattern is like this:

<action> - <criteria>

Action is typically things like “Move” or “Don’t move”. Criteria is typically things like “to:(some mailing list)”. I tend to use the descriptive name of things in the rule name rather than the email address or alias name. This, again, helps me see at a glance my rules and the priority stack.

How does it work?

As I said, I do this in Outlook, so it is important to understand that Outlook will process the rules in order, from top to bottom, only stopping if a rule says explicitly to stop.

The essence of this system is the final rule in the stack, which moves everything to an archive folder, and the rules before it, which pick out what they care about and stop the processing chain.

In Outlook, I do this with the “Stop Processing More Rules” action that has no conditions (i.e. it matches all arriving email). To make this rule, you can’t use the simple rules creation UI. Instead click Advanced Options. Leave everything in the conditions section blank and click Next. It will warn you about the rule being applied to all messages, and since this is what we want, click Yes. Then in the Actions section (2nd page) near the bottom, check “stop processing more rules”. It should look something like this:

Rule Priorities

Over time I’ve learned that my rules come in a prioritized set of groups.

Each rule takes a look at the mail and makes a simple decision: keep it in the Inbox, move it somewhere, or pass and let the next rule give it a try. This means that it is worth putting a bit of thought into the priority of the rules so you get expected outcomes from the tool.

My groups, in priority order and with an example or two, are listed below. The final “group” only has the catch-all move rule I mentioned above.

1. Pre-processing

These rules are about classification and other modifications to emails as they come in. Unlike all the other rules, these typically do not end with the “Stop Processing More Rules” instruction, but apply things like Tags, Categories, Importance, etc.

Examples:

  • Tag Personal - from:(family) - I don’t use this right now, but if I wanted to color code, tag or categorize mails from my family, I would do this here.
  • Tag Important - from:(my management) - Same idea, but maybe you want emails from your bosses to be red.

2. High Priority - Always read these

These rules are the first set that actually make a decision on an email, in this case looking for emails that I will always see in my Inbox. Any criteria that I come up with which means to me “always read these, no matter what” goes in this bucket. Currently, I have only two rules in this bucket.

Examples:

  • Don’t move - to:(me) - This is the most important rule in my system, and typically is the second rule you create after the ending catch-all. If I’m on the To: line explicitly, don’t move it and stop processing more rules.
  • Don’t move - Meeting invites - I added this one after I realized that sometimes in my working group people will send meeting invites to one of the distribution or security groups I’m in, which would cause them to be caught by rules in the next category, and I would miss them.

3. Low Priority - Move mailing lists

I like to have my mailing lists taken out of my Inbox and I will scan them, search them, or read them when I have the time or inclination.

To do this, I have a root folder called “Mailing Lists” with subfolders underneath. The rules in this section typically look for something in the To line to figure out where it was sent and then move it to the appropriate folder. (Note that some lists may need you to look in the From or the Subject to identify it.)

I create one rule for each target folder, which means that sometimes they may combine different lists together because they mean the same thing (logically) to me. An example might be if there was a TDD list and a Unit Testing list. Odds are I would move those both to the same target folder.

Depending on how many mailing lists you subscribe to, this section may be big or small.

Examples:

  • Move - to:(Internal Agile Discussion Lists)
  • Move - to:(Some other discussion list)

4. Special CC handlers

If an email gets this far down in the rule chain, I was likely on the CC or BCC, or it was sent to a distribution list that I am on but don’t have a rule for. There are actually a number of cases where I do want to see an email that gets this far, so I have special rules for those, each of which results in “Stop Processing More Rules” to keep the message in the Inbox.

  • Don’t move - from:(my management chain) - This rule includes all of the people I consider in my management chain, all the way up to Mr. Ballmer. It also includes people who aren’t directly in my chain of command, but who I consider to have equivalent importance in my email triaging. I’ve often struggled with whether to put this one in the upper Don’t Move section or not. However, since we have a few managers here who can be VERY active on some of the mailing lists, putting it up there can cause it to fill your Inbox with stuff you might not really need to read. Most of these managers have learned, however, that if you really want someone to read something, you explicitly list them on the To line, so the rule I already have up there works.
  • Don’t move - from:(my direct reports) - Now that I’m a manager again, I like to see what my folks are saying (not as Big Brother, more as a coach). This rule lets me see mails where they CC me. If I wanted to see everything they sent, I would move this one up to the upper “Don’t Move” section.
  • Don’t move - from:(me) - This is a weird one, and I’m not sure I really need it, but it does sometimes help me find distribution lists that I need a rule for.
  • Don’t move - from:(family) - My wife, kids, sister, brothers- and sisters-in-law, parents, etc.
  • Don’t move - from:(friends) - Other friends and colleagues whose messages I will read even if I’m on the CC.
  • Don’t move - from:(other lists) - A few other low-frequency mailing lists used internally that don’t warrant their own folder.

5. Catch-all move

  • Move everything to archive - This is the catch-all rule discussed above. It should always be last in the list.

The results

This system has allowed me to regain control of my Inbox, and only occasionally do I miss something important. I have 32 rules in my system right now. Most of them are split evenly between #3 and #4, mostly because I like the granularity of having each different thing in its own rule.

The downside to this is that I really depend on people to know the difference between To (read this) and CC (FYI) when they send an email. A lot of people, and a lot of email clients (e.g. Gmail), really don’t get this and love to put everyone but the specific person you’re replying to on the CC line. This is really a bummer because it takes a field which has meaning and degrades it to being the same thing as the To field.

Anyway, I’m not sure if this will be useful to anyone else but those of us who regularly get hundreds of emails a day, but if you do find it useful let me know!

Date: Monday, 01 Oct 2012 10:00

Near the end of the development cycle for Visual Studio 2012, a group of folks in the VSALM team (led by my very creative manager Tracey Trewin) came up with this cool animated video introducing some of the great new features in Visual Studio 2012 Ultimate. I think it is pretty cool, and even pretty funny, so I wanted to share it with you all.


What do you think?

Date: Tuesday, 19 Jun 2012 20:21

In my last post I focused on how to unit test a new Visual Studio 2012 RC ASP.NET Web API project. In general, it was pretty straightforward, but when I had Web API methods that needed to return an HttpResponseMessage, it got a little harder.

If you recall, I decided to start with the Creating a Web API that Supports CRUD Operations tutorial and the provided solution that came with it. That project did not use any form of dependency inversion to resolve the controller’s need for a ProductRepository. My solution in that post was to use manual dependency injection and a default value. But in the real world I would probably reach for a dependency injection framework to avoid having to do all the resolution wiring throughout my code.

In this post I am going to convert the manual injection I used in the last post to one that uses the Ninject framework. Of course you can use any other framework you want, like Unity, Castle Windsor, StructureMap, etc., but the code that adapts between it and ASP.NET Web API will probably have to be different.

Getting Started

First let’s take a look at the code I had for the ProductsController at the end of the last post, focusing on the constructors and the IProductRepository field.

Manual Dependency Injection
namespace ProductStore.Controllers
{
   public class ProductsController : ApiController
   {
      readonly IProductRepository repository;

      public ProductsController()
      {
         this.repository = new ProductRepository();
      }

      public ProductsController( IProductRepository repository )
      {
         this.repository = repository;
      }

      // Everything else stays the same
   }
}

The default constructor provides the default value for the repository, while the second constructor lets us provide one. This is fine when we only have a single controller, but in a real-world system we will likely have a number of different controllers, and having the logic for which repository to use spread among all those controllers is going to be a nightmare to maintain or change.

By default, the ASP.NET Web API routing stuff will use the default constructor to create the controller. What I want to do is make that default constructor go away, and instead let Ninject be responsible for providing the required dependency.

A quick aside - Constructor injection or Property injection?

This came up in one of my talks last week at TechEd, so it probably warrants some discussion here. When Brad Wilson and I made the first version of the ObjectBuilder engine that hides inside Unity and the various P&P CAB frameworks, we got to have this argument with people all the time.

While this argument looks like another one of those “philosophical arguments” that doesn’t have a right answer, I don’t think it really is. I think the distinction between constructor injection and property injection is important, and I think you can find yourself using both depending on the circumstances.

Here’s the gist of my argument: If the class would be in an invalid state without the dependency, then it is a hard dependency and should be resolved via constructor injection. It cannot be used without it. Putting the dependency on the constructor and not providing a default constructor makes it very clear to the developer who wants to consume this class. The developer is required to provide the dependency or the class cannot be created. If you find yourself doing a null check everywhere the dependency gets used, and especially if you throw an exception when it is null, then you likely have a hard dependency.

But if the class has a dependency that either isn’t required, or that will use a default object or a null object if it is not provided, then it is a soft dependency and should not be resolved via constructor injection. If you do choose to express this optional dependency via constructor, then you should either show the default value in the constructor definition or provide an override that doesn’t require it. Then if the developer wants to provide a special implementation (logging is an example that comes to mind), they can provide it via the property setter.

Dependency injection containers hide a lot of this from you, which might take you to a “who cares” kind of place. But one thing I always look for in DI frameworks is that I don’t have to change the code itself very much. It should read and be understandable without knowing anything about the DI container. If I don’t want to use a DI framework at all, the code should still express its meaning and it should still be usable. I believe that this distinction between hard and soft dependencies makes that clear.
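
To make the distinction concrete, here is a small, hypothetical sketch (the report generator and its interfaces are invented for illustration): the data source is a hard dependency expressed on the constructor, while the logger is a soft dependency with a safe default.

Hard vs. soft dependencies (illustration)
using System;

public interface IReportDataSource { string[] GetRows(); }
public interface ILogger { void Write(string message); }

public class ReportGenerator
{
   // Hard dependency: the class is in an invalid state without a data source,
   // so it is required on the constructor and there is no default constructor.
   readonly IReportDataSource dataSource;

   public ReportGenerator(IReportDataSource dataSource)
   {
      if (dataSource == null)
         throw new ArgumentNullException("dataSource");
      this.dataSource = dataSource;
   }

   // Soft dependency: optional, checked at the point of use, so the class
   // still works if nobody (or no container) ever sets it.
   public ILogger Logger { get; set; }

   public int CountRows()
   {
      var rows = dataSource.GetRows();
      if (Logger != null)
         Logger.Write("Counted " + rows.Length + " rows");
      return rows.Length;
   }
}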

Expressing the IProductRepository dependency

In our case, the repository is a hard dependency, so I will express it on the constructor. Additionally, I don’t actually want the default object instance so I’m going to delete the default constructor. It will be invalid to create a ProductsController without providing an instance of IProductRepository.

The code now looks like this:

Manual Dependency Injection ProductsController.cs
namespace MvcApplication.Controllers
{
   public class ProductsController : ApiController
   {
      readonly IProductRepository repository;

      public ProductsController(IProductRepository repository)
      {
         this.repository = repository;
      }

      // Everything else stays the same
   }
}

If you try to run the app now, either through the web page that consumes the API or by pointing a browser at /api/products, you will get an HTTP 500 Internal Server error because the ASP.NET routing cannot create an instance of the controller.

It is time to bring Ninject to the party.

Using Ninject to resolve the repository dependency

Getting the NuGet Packages

In previous releases of ASP.NET MVC4, you were able to use the existing Ninject MVC3 NuGet package to give us all the required glue code. But with the RC build of MVC4, it no longer works because the MVC team changed the dependency resolution mechanism for Web API projects.

Since I can’t use the Ninject MVC3 package, I need to do something slightly different. Instead, I will use the Ninject.Web.Common package and then create my own wrapper class to adapt between the IDependencyResolver API and Ninject.

When you add the Ninject.Web.Common NuGet package to your project, it does a few things to help you. So that you can skip adding a bunch of code to your Global.asax.cs file, it uses a nice little package called WebActivator to call another class when certain ASP.NET events have happened.

After adding Ninject.Web.Common to the project, open the App_Start folder and you will see a new file called NinjectWebCommon. This static class will be called by WebActivator when the application starts and stops. The Ninject.Web.Common package provided it with the code required to bootstrap the Ninject kernel.

NinjectDependencyResolver

I said we needed an implementation of IDependencyResolver that knew about Ninject. Fortunately my good friend Brad Wilson came to the rescue again, and pointed me to a chunk of code he’d written to do just that. Here’s my slightly modified version of his code:

NinjectDependencyResolver.cs
using System;
using System.Web.Http.Dependencies;
using Ninject;
using Ninject.Syntax;

namespace MvcApplication.App_Start
{
   // Provides a Ninject implementation of IDependencyScope
   // which resolves services using the Ninject container.
   public class NinjectDependencyScope : IDependencyScope
   {
      IResolutionRoot resolver;

      public NinjectDependencyScope(IResolutionRoot resolver)
      {
         this.resolver = resolver;
      }

      public object GetService(Type serviceType)
      {
         if (resolver == null)
            throw new ObjectDisposedException("this", "This scope has been disposed");

         return resolver.TryGet(serviceType);
      }

      public System.Collections.Generic.IEnumerable<object> GetServices(Type serviceType)
      {
         if (resolver == null)
            throw new ObjectDisposedException("this", "This scope has been disposed");

         return resolver.GetAll(serviceType);
      }

      public void Dispose()
      {
         IDisposable disposable = resolver as IDisposable;
         if (disposable != null)
            disposable.Dispose();

         resolver = null;
      }
   }

   // This class is the resolver, but it is also the global scope
   // so we derive from NinjectDependencyScope.
   public class NinjectDependencyResolver : NinjectDependencyScope, IDependencyResolver
   {
      IKernel kernel;

      public NinjectDependencyResolver(IKernel kernel) : base(kernel)
      {
         this.kernel = kernel;
      }

      public IDependencyScope BeginScope()
      {
         return new NinjectDependencyScope(kernel.BeginBlock());
      }
   }
}

This file contains two classes. The first, NinjectDependencyScope, provides a scoping region for dependency resolution. Scopes and Blocks are beyond the scope (sic) of this post, but the gist is that you may need to provide different service resolutions for different places in your app. The second class is the actual implementation of IDependencyResolver, and since it is also the global scope, it derives from NinjectDependencyScope.

Hooking it all up

Now that we have these classes, we can hook it all up. Returning to the NinjectWebCommon static class, we only need to add one line to use our new dependency resolver. We will add this to the CreateKernel method:

CreateKernel() method
private static IKernel CreateKernel()
{
   var kernel = new StandardKernel();
   kernel.Bind<Func<IKernel>>().ToMethod(ctx => () => new Bootstrapper().Kernel);
   kernel.Bind<IHttpModule>().To<HttpApplicationInitializationHttpModule>();

   RegisterServices(kernel);

   // Install our Ninject-based IDependencyResolver into the Web API config
   GlobalConfiguration.Configuration.DependencyResolver = new NinjectDependencyResolver(kernel);

   return kernel;
}

We simply new up an instance of the NinjectDependencyResolver and assign it to the DependencyResolver property of the global HttpConfiguration object.

I’m still not 100% sure this is the best place to do this, but I tried a few others and it didn’t feel any better. If I later change my mind on this, or if someone can tell me a better place or way to do this, I will update this post. In the Unity sample that I link to at the end of this post, they did it in the Application class. One advantage to doing it here is that we can imagine it being done for us by a NuGet package (more on that at the end). Choose what feels right for you.
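
For comparison, here is a rough, hypothetical sketch of that alternative placement in the application class. It assumes the same NinjectDependencyResolver from above and relies on the Bootstrapper from Ninject.Web.Common having already created the kernel (WebActivator runs its startup code before Application_Start):

Alternative wiring from the application class (sketch)
using System.Web.Http;
using MvcApplication.App_Start;
using Ninject.Web.Common;

public class WebApiApplication : System.Web.HttpApplication
{
   protected void Application_Start()
   {
      // ... the usual area/filter/route registration calls go here ...

      // Same wiring as in CreateKernel(), just done from the application class.
      GlobalConfiguration.Configuration.DependencyResolver =
         new NinjectDependencyResolver(new Bootstrapper().Kernel);
   }
}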

Registering our services with Ninject

The NinjectWebCommon static class already had a nice method called RegisterServices where I am supposed to express my service bindings, so I will simply use that. Here is mine:

RegisterServices method
private static void RegisterServices(IKernel kernel)
{
   // This is where we tell Ninject how to resolve service requests
   kernel.Bind<IProductRepository>().To<ProductRepository>();
}

Ninject has a very friendly fluent API for binding instance types to interface types, so with that one line I am telling it that whenever someone (in this case ASP.NET) asks for an IProductRepository, it should give back a ProductRepository. Remember when I said we didn’t want these kinds of dependency resolutions spread all over our code? Now they’re not. We have them all in one place where they are easy to see and change as needed.
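
For example, as the project grows, additional services just become more one-line bindings in this same method. The extra interfaces below are hypothetical, invented only to illustrate the point; InSingletonScope is part of Ninject’s fluent API for controlling object lifetime.

RegisterServices with more bindings (hypothetical)
private static void RegisterServices(IKernel kernel)
{
   kernel.Bind<IProductRepository>().To<ProductRepository>();

   // Hypothetical additional registrations, all kept in one place.
   kernel.Bind<IOrderRepository>().To<OrderRepository>();
   kernel.Bind<IPricingService>().To<PricingService>().InSingletonScope();
}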

Next Steps

I feel pretty good about the state of this project at this point. In my previous post I focused on testability and the issues with having a Web API handler that returned an HttpResponseMessage. In this post I explored how to resolve the dependencies using Ninject. If this was a real project, I would now feel pretty good about adding additional controllers, real repository implementations, etc. I’m in a testable state, which I love, and my classes are loosely coupled while still expressing their dependencies in a clear, readable way.

I would love to see the NinjectDependencyResolver stuff become a new NuGet package for others to use, so if someone wants to grab this code and do that, I’m fine with it (and I assume Brad Wilson is too, since this was originally his code). I don’t do enough full-time web development to be a very good maintainer of such a package, so it really would be better for someone else to do that instead of me.

I mentioned at the top that you can do this kind of thing with a number of different dependency injection frameworks. If you want to see an implementation that uses the patterns & practices Unity container, there is one called Using the Web API Dependency Resolver up on www.asp.net.

If you are looking at the new ASP.NET Web API framework for building out your new REST APIs, hopefully you’ve found these two posts useful. Let me know!

Date: Saturday, 16 Jun 2012 14:00

A couple of days ago a colleague pinged me wanting to talk about unit testing an ASP.NET Web API project. In particular he was having a hard time testing the POST controller, but it got me thinking I needed to explore unit testing the new Web API stuff.

Since it is always fun to add unit tests to someone else’s codebase, I decided to start by using the tutorial called Creating a Web API that Supports CRUD Operations and the provided solution available on www.asp.net.

What should we test?

In a Web API project, one of the things you need to ask yourself is, “What do we need to test?”

Despite my passion for unit testing and TDD, you might be surprised when I answer “as little as possible.” You see, when I’m adding tests to legacy code, I believe strongly that you should only add tests to the things that need it. There is very little value-add in spending hours adding tests to things that might not need it.

I tend to follow the WELC approach, focusing on adding tests to either areas of code that I am about to work on, or areas that I know need some test coverage. The goal when adding tests for legacy code like this is to “pin” the behavior down, so you at least can make positive statements about what it does do right now. But I only really care about “pinning” those methods that have interesting code in them or code we are likely to want to change in the future. (Many thanks to my friend Arlo Belshee for promoting the phrase “pinning test” for this concept. I really like it.)

So I’m not going to bother putting any unit tests on things like BundleConfig, FilterConfig, or RouteConfig. These classes really just provide an in-code way of configuring the various conventions in ASP.NET MVC and Web API. I’m also not going to bother with any of the code in the Content or Views folders, nor will I unit test any of the JavaScript (but if this were not just a Web API, but a full web app with important JavaScript, I would certainly think more about that last one).

Since this is a Web API project, its main purpose is to provide an easy to use REST JSON API that can be used from apps or web pages. All of the code that matters is in the Controllers folder, and in particular the ProductsController class, which is the main API for the project.

This is the class we will unit test today.

Unit Tests, not Integration Tests

Notice that I said unit test in the previous sentence. For me a unit test is the smallest bit of code that I can test in isolation from other bits of code. In .NET code, this tends to be classes and methods. Defining unit test in this way makes it easy to find what to test, but sometimes the how part can be tough because of the “in isolation from other bits” part.

When we create tests that bring up large parts of our system, or of the environment, we are really creating integration tests. Don’t get me wrong, I think integration tests are useful and can be important, but I do not want to get into the habit of depending entirely on integration tests when writing code. Creating a testable, cohesive, decoupled design is important to me. It is the only way to achieve the design goal of simplicity (maximizing the amount of work not done).

But in this case we will be adding tests to an existing system. To make the point, I will try to avoid changing the system if I can. Because of this we may find ourselves occasionally creating integration tests because we have no choice. But we can (and should) use that feedback to think about the design of what we have and whether it needs some refactoring.

Analyzing the ProductsController class

The ProductsController class isn’t too complex, so it should be pretty easy to test. Let’s take a look at the code we got in the download:

ProductsController.cs
namespace ProductStore.Controllers
{
    public class ProductsController : ApiController
    {
        static readonly IProductRepository repository = new ProductRepository();

        public IEnumerable<Product> GetAllProducts()
        {
            return repository.GetAll();
        }

        public Product GetProduct(int id)
        {
            Product item = repository.Get(id);
            if (item == null)
            {
                throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
            }
            return item;
        }


        public IEnumerable<Product> GetProductsByCategory(string category)
        {
            return repository.GetAll().Where(
                p => string.Equals(p.Category, category, StringComparison.OrdinalIgnoreCase));
        }


        public HttpResponseMessage PostProduct(Product item)
        {
            item = repository.Add(item);
            var response = Request.CreateResponse<Product>(HttpStatusCode.Created, item);

            string uri = Url.Link("DefaultApi", new { id = item.Id });
            response.Headers.Location = new Uri(uri);
            return response;
        }


        public void PutProduct(int id, Product contact)
        {
            contact.Id = id;
            if (!repository.Update(contact))
            {
                throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
            }
        }


        public HttpResponseMessage DeleteProduct(int id)
        {
            repository.Remove(id);
            return new HttpResponseMessage(HttpStatusCode.NoContent);
        }

    }
}

We have three Get methods, and a method for each of Post, Put and Delete.

Straight away I see the first problem: The IProductRepository is private and static. Since I said I didn’t want to change the product code, this is an issue. As a static, readonly, private field, we really don’t have any way to replace it, so in this one case, I will need to change the product to a more testable design. This isn’t as bad as it looks, however, since in the tutorial they acknowledge that this is a temporary measure in their code:

Calling new ProductRepository() in the controller is not the best design, because it ties the controller to a particular implementation of IProductRepository. For a better approach, see Using the Web API Dependency Resolver.

In a future post I will show how to resolve this dependency with something like Ninject, but for now we will just use manual dependency injection by creating a testing constructor. First I will make the repository field non-static. Then I add a second constructor which allows me to pass in a repository. Finally I update the default constructor to initialize the field with an instance of the concrete ProductRepository class.

This approach of creating a testing constructor is a good first step, even if you are going to later add a dependency injection framework. It allows us to provide a stub value for the dependency when we need it, but existing clients of the class can continue to use the default constructor.

Now the class looks like this.

ProductsController.cs
namespace ProductStore.Controllers
{
   public class ProductsController : ApiController
   {
      readonly IProductRepository repository;

      public ProductsController()
      {
         this.repository = new ProductRepository();
      }

      public ProductsController( IProductRepository repository )
      {
         this.repository = repository;
      }

      // Everything else stays the same
   }
}

Testing the easy stuff

Now that we can use the testing constructor to provide a custom instance of the IProductRepository, we can get back to writing our unit tests.

For these tests I will be using the xUnit.net unit testing framework. I will also be using Visual Studio 2012 Fakes to provide easy-to-use Stubs for the interfaces we depend on, like IProductRepository. After using NuGet to add an xUnit.net reference to the test project, I added a project reference to the ProductStore project. Then, by right-clicking on the ProductStore reference and choosing Add Fakes Assembly, I can create the Stubs I will use in my tests.
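
For reference, here is roughly what the repository interface must look like, reconstructed from the calls ProductsController makes; treat the exact signatures as an assumption rather than a copy of the tutorial source.

IProductRepository (reconstructed from usage)
// Assumed shape of the interface, inferred from how the controller uses it.
public interface IProductRepository
{
    IEnumerable<Product> GetAll();
    Product Get(int id);
    Product Add(Product item);
    void Remove(int id);
    bool Update(Product item);
}

The generated StubIProductRepository exposes one delegate property per interface member, named for the method plus its parameter types, which is why the tests below assign to properties like GetAll, GetInt32, AddProduct, UpdateProduct and RemoveInt32.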

Testing all of the methods except for PostProduct is pretty straightforward.

GetAllProducts

This is a very simple method that just returns whatever the repository gives it. No transformations, no deep copies, it just returns the same IEnumerable it gets from the repository. Here’s the test:

Tests for GetAllProducts
[Fact]
public void GetAllProductsReturnsEverythingInRepository()
{
    // Arrange
    var allProducts = new[] {
                new Product { Id=111, Name="Tomato Soup", Category="Food", Price = 1.4M },
                new Product { Id=222, Name="Laptop Computer", Category="Electronics", Price=699.99M }
            };
    var repo = new StubIProductRepository
    {
        GetAll = () => allProducts
    };
    var controller = new ProductsController(repo);

    // Act
    var result = controller.GetAllProducts();

    // Assert
    Assert.Same(allProducts, result);
}

GetProduct

I used two tests to pin the existing behavior of the GetProduct method. The first confirms that it returns what the repository gives it, and the second confirms that it will throw if the repository returns null.

Unit Testing the GetProduct method
[Fact]
public void GetProductReturnsCorrectItemFromRepository()
{
    // Arrange
    var product = new Product { Id = 222, Name = "Laptop Computer", Category = "Electronics", Price = 699.99M };
    var repo = new StubIProductRepository { GetInt32 = id => product };
    var controller = new ProductsController(repo);

    // Act
    var result = controller.GetProduct(222);

    // Assert
    Assert.Same(product, result);
}

[Fact]
public void GetProductThrowsWhenRepositoryReturnsNull()
{
    var repo = new StubIProductRepository
    {
        GetInt32 = id => null
    };
    var controller = new ProductsController(repo);

    Assert.Throws<HttpResponseException>(() => controller.GetProduct(1));
}

GetProductsByCategory

I just used one test to pin the behavior of GetProductsByCategory.

Unit Testing GetProductsByCategory
[Fact]
public void GetProductsByCategoryFiltersByCategory()
{
    var products = new[] {
                new Product { Id=111, Name="Tomato Soup", Category="Food", Price = 1.4M },
                new Product { Id=222, Name="Laptop Computer", Category="Electronics", Price=699.99M }
            };
    var repo = new StubIProductRepository { GetAll = () => products };
    var controller = new ProductsController(repo);

    var result = controller.GetProductsByCategory("Electronics").ToArray();

    Assert.Same(products[1], result[0]);
}

PutProduct

I used three tests to pin the various aspects of the PutProduct method:

Unit Testing PutProduct
[Fact]
public void PutProductUpdatesRepository()
{
    var wasCalled = false;
    var repo = new StubIProductRepository
    {
        UpdateProduct = prod => wasCalled = true
    };
    var controller = new ProductsController(repo);
    var product = new Product { Id = 111 };

    // Act
    controller.PutProduct(111, product);

    // Assert
    Assert.True(wasCalled);
}

[Fact]
public void PutProductThrowsWhenRepositoryUpdateReturnsFalse()
{
    var repo = new StubIProductRepository
    {
        UpdateProduct = prod => false
    };
    var controller = new ProductsController(repo);


    Assert.Throws<HttpResponseException>(() => controller.PutProduct(1, new Product()));
}

[Fact]
public void PutProductSetsIdBeforeUpdatingRepository()
{
    var updatedId = Int32.MinValue;
    var repo = new StubIProductRepository
    {
        UpdateProduct = prod => { updatedId = prod.Id; return true; }
    };
    var controller = new ProductsController(repo);

    controller.PutProduct(123, new Product { Id = 0 });

    Assert.Equal(123, updatedId);
}

DeleteProduct

Like the PUT handler, there are a couple of cases to cover to correctly pin the behavior of this method.

Unit Testing DeleteProduct
[Fact]
public void DeleteProductCallsRepositoryRemove()
{
    var removedId = Int32.MinValue;
    var repo = new StubIProductRepository
    {
        RemoveInt32 = id => removedId = id
    };
    var controller = new ProductsController(repo);

    controller.DeleteProduct(123);

    Assert.Equal(123, removedId);
}

[Fact]
public void DeleteProductReturnsResponseMessageWithNoContentStatusCode()
{
    var repo = new StubIProductRepository();
    var controller = new ProductsController(repo);

    var result = controller.DeleteProduct(123);

    Assert.IsType<HttpResponseMessage>(result);
    Assert.Equal(HttpStatusCode.NoContent, result.StatusCode);
}

Testing the harder stuff: PostProduct

The PostProduct method is where things get interesting. Because the HTTP spec says that when you create a resource from a POST you are supposed to return a Created HTTP status code and include the location of the new resource, the method we want to test does some funny things to get the HttpResponseMessage assembled.

My first attempt at a test looked like this:

Failed attempt at unit testing PostProduct
[Fact]
public void PostProductReturnsCreatedStatusCode()
{
   // Arrange
   var repo = new StubIProductRepository
   {
      AddProduct = item => item
   };
   var controller = new ProductsController(repo);

   // Act
   var result = controller.PostProduct(new Product { Id = 1 });

   // Assert
   Assert.Equal(HttpStatusCode.Created, result.StatusCode);
}

Unfortunately, that didn’t work. You end up getting a NullReferenceException thrown by Request.CreateResponse because it expects a fair amount of web config stuff to have been assembled. This is a bummer, but it is what it is.

I reached out to Brad Wilson for help, and we figured out how to test this without going all the way to creating a web server/client pair, but there is clearly a lot of extra non-test code still running. We had to assemble a whole bunch of interesting configuration and routing classes to make the Request.CreateResponse method happy, but it did work.

The first test we wrote looked like this:

Successfully unit testing PostProduct
[Fact]
public void PostProductReturnsCreatedStatusCode()
{
   // Arrange
   var repo = new StubIProductRepository
   {
      AddProduct = item => item
   };

   var config = new HttpConfiguration();
   var request = new HttpRequestMessage(HttpMethod.Post, "http://localhost/api/products");
   var route = config.Routes.MapHttpRoute("DefaultApi", "api/{controller}/{id}");
   var routeData = new HttpRouteData(route, new HttpRouteValueDictionary { { "controller", "products" } });
   var controller = new ProductsController(repo);
   controller.ControllerContext = new HttpControllerContext(config, routeData, request);
   controller.Request = request;
   controller.Request.Properties[HttpPropertyKeys.HttpConfigurationKey] = config;

   // Act
   var result = controller.PostProduct(new Product { Id = 1 });

   // Assert
   Assert.Equal(HttpStatusCode.Created, result.StatusCode);
}

In a future post, I may take a look at how we might use Visual Studio 2012 Fakes to create a Shim to remove all that config stuff, but this will have to do for now.

Since I knew I needed to make a few more tests to adequately pin the behavior of PostProduct, I refactored out the ugly config code into a private method in the test class called SetupControllerForTests. I find that when I have issues like this, I really like to keep the weird setup code close to the tests. I generally prefer this over creating abstract test classes because I like it to be very obvious what is happening. I also like my tests to be easily read and understood without having to jump around in the class hierarchy.

Helper method for configuring the controller
private static void SetupControllerForTests(ApiController controller)
{
    var config = new HttpConfiguration();
    var request = new HttpRequestMessage(HttpMethod.Post, "http://localhost/api/products");
    var route = config.Routes.MapHttpRoute("DefaultApi", "api/{controller}/{id}");
    var routeData = new HttpRouteData(route, new HttpRouteValueDictionary { { "controller", "products" } });

    controller.ControllerContext = new HttpControllerContext(config, routeData, request);
    controller.Request = request;
    controller.Request.Properties[HttpPropertyKeys.HttpConfigurationKey] = config;
}

Now that I have the helper method, I can refactor the status code test and add two more: one to check the location and one to confirm that it actually calls Add on the repository with the provided product.

Final unit tests for PostProduct
[Fact]
public void PostProductReturnsCreatedStatusCode()
{
    var repo = new StubIProductRepository
    {
        AddProduct = item => item
    };
    var controller = new ProductsController(repo);
    SetupControllerForTests(controller);

    var result = controller.PostProduct(new Product { Id = 1 });

    Assert.Equal(HttpStatusCode.Created, result.StatusCode);
}


[Fact]
public void PostProductReturnsTheCorrectLocationInResponseMessage()
{
    var repo = new StubIProductRepository
    {
        AddProduct = item => item
    };
    var controller = new ProductsController(repo);
    SetupControllerForTests(controller);

    var result = controller.PostProduct(new Product { Id = 111 });

    Assert.Equal("http://localhost/api/products/111", result.Headers.Location.ToString());
}

[Fact]
public void PostProductCallsAddOnRepositoryWithProvidedProduct()
{
    var providedProduct = default(Product);
    var repo = new StubIProductRepository
    {
        AddProduct = item => providedProduct = item
    };
    var controller = new ProductsController(repo);
    SetupControllerForTests(controller);

    var product = new Product { Id = 111 };
    var result = controller.PostProduct(product);

    Assert.Same(product, providedProduct);
}

Conclusion

And now we're done. We have successfully "pinned" the behavior of the entire ProductsController class, so if we later need to refactor it, we have a way of knowing what the current behavior is. As I discussed in my previous post about VS 2012 Shims, we can't refactor without being able to confirm the current behavior, otherwise we will end up in a Catch-22. Creating "pinning" or "characterization" tests like the ones here is the first step to being able to safely and confidently refactor or add new behaviors.

Hopefully this post showed you a few new things. First, we got to see another example of using Stubs in unit tests. Also, we learned a bit about how to deal with the nastiness around the HttpRequestMessage.CreateResponse extension method.

Personally, I wish the POST handler were as easy to test as the rest of the controller. One of my favorite things about MVC has always been that the separation of concerns lets me test things in isolation. When a controller doesn't have tight dependencies on the web stack, it is a well-behaved controller. Unfortunately, when you want to create a controller that follows the HTTP spec for POST, you will find this a bit hard today. But at least we found a way around it.

Let me know what you think!

Date: Friday, 15 Jun 2012 10:43

I’ve been working on a series of posts about authoring a new unit test plugin for Visual Studio 2012, but today my friend Matthew Manela, author of the Chutzpah test plugin, sent me a post he did a few days ago that discusses the main interfaces he had to use to make his plugin.

The Chutzpah plugin runs JavaScript unit tests that are written in either the QUnit or Jasmine test frameworks. Since JavaScript files don’t get compiled into a DLL or EXE, he had to create custom implementations of what we call a test container.

A test container represents a thing that contains tests. For .NET and C++ unit testing, this is a DLL or EXE, and Visual Studio 2012 comes with a built-in test container subsystem for them. But when you need to support tests contained in other files, e.g. JavaScript files, then you need to do a bit more.

Matthew's post does a great job of going through each of these interfaces, discussing what each is for and what he did for it in his plugin.

The Chutzpah test adapter revolves around four interfaces:

1. ITestContainer – Represents a file that contains tests
2. ITestContainerDiscoverer – Finds all files that contain tests
3. ITestDiscoverer – Finds all tests within a test container
4. ITestExecutor – Runs the tests found inside the test container

You can read the entire post here:
http://matthewmanela.com/blog/anatomy-of-the-chutzpah-test-adapter-for-vs-2012-rc/

And if you want to see the source for his plugin, you can read it all here (look for the VS11.Plugin folder on the left side):
http://chutzpah.codeplex.com/SourceControl/BrowseLatest

Nice post Matthew! Thanks for providing it to the community.

Date: Tuesday, 05 Jun 2012 12:44

I’m getting ready for TechEd 2012 this month and realized I wanted to have a nice desktop wallpaper for my laptop. Well, I’d not seen any popping up on twitter or blogs yet, so I grabbed my trusty GIMP image editor and some base images of the new VS 2012 logo and got to work.

Here are a few of my favorites, but I’ve not decided which I will use on my laptop.

You can see/download them all by clicking on one of the links above, or you can browse them on my SkyDrive. They are all 1680x1050 PNG files and have been compressed with PNGOUT to be as small as possible. I may end up making more, so keep an eye on that folder if you're interested.

If you find them useful, please let me know.

Date: Friday, 01 Jun 2012 11:19

I tweeted a bit about the craziness coming up for me in June, but I realized today that I've not posted the whole schedule. So if you're interested and wondering what I'll be talking about, here's the complete list (in chronological order).

The first two are now behind me, but the twin TechEds are coming up fast. Also, I’ve got a couple that are currently “maybes” so once they are confirmed I’ll update this post. If you want to contact me about speaking at an event, please use my about.me contact page.

Update 2012-06-06 - I had to pull out of Agile.NET Columbus due to a conference date change that wasn’t compatible with my schedule. Bummer.

Microsoft TechReady 14

January 30 - February 3, 2012 – Seattle, WA

  • Introducing the New VS11 Unit Testing Experience - Ramping up the field on where we’re taking the Unit Test experience in VS11 and beyond.

Microsoft MVP Summit

February 28 - March 2, 2012 – Bellevue, WA

  • Agile Development in VS11 - Unit Testing, Fakes and Code Clone - Bringing the MVPs up to speed on the new stuff in VS11 Beta.

TechEd North America 2012

June 11-14, 2012 – Orlando, FL

  • DEV214 - Introducing the New Visual Studio 11 Unit Testing Experience - An updated version of the talk I’d previously given for internal and MVP audiences based on current bits and plans.
  • AAP401 - Real World Developer Testing with Visual Studio 11 - David Starr and I will go through a bunch of real-world unit test scenarios sharing our tips and tricks for getting them under test while ensuring they are still good unit tests.
  • DEV411 - Testing Un-testable Code with Fakes in Visual Studio 11 - A deep dive into VS 2012 Fakes, focusing on things that are either hard to unit test or that you might think aren’t unit testable at all.
  • DEV318 - Working on an Agile Team with Visual Studio 11 and Team Foundation Server 11 - Gregg Boer and I will take you through the lifecycle of an agile team using all the great new features in VS 2012 and TFS 2012.

TechEd Europe 2012

June 26-29, 2012 – Amsterdam, RAI

  • Same as TechEd North America 2012

Denver Visual Studio User Group

July 23, 2012 – Denver, CO

  • Unit Testing and TDD with Visual Studio 2012 - In Visual Studio 11, a lot of changes have happened to make unit testing more flexible and powerful. For agile and non-agile teams who are writing unit tests, these changes will let you work faster and stay focused on your code. In this session Peter Provost, long-time TDD advocate, Senior Program Manager Lead in Visual Studio ALM Tools, and designer of this new experience, will take us through the experience, focusing on what the biggest differences are and why they are important to developers.

Agile 2012

August 13-17, 2012 – Dallas, TX

  • Talk Title TBD
Date: Thursday, 31 May 2012 00:29

The simple answer: Just Say No™

Every now and then I get an email, see a forum post, or get a query at a conference about this topic. It typically goes like this:

“When you’re doing TDD, how do you test your private methods?”

The answer of course, is simple: You don’t. And you shouldn’t.

This has been written up in numerous places, by many people smarter than I, but apparently there are still people who don’t understand our point.

Private Methods and TDD

I do create private methods while doing TDD, as a result of aggressive refactoring. But from the perspective of the tests I’m writing, these private methods are entirely an implementation detail. They are, and should remain, irrelevant to the tests. As many people have pointed out over the years, putting tests on internals of any kind inhibits free refactoring.

They are internals. They are private. Keep them that way.

What I like to do every now and then, when I’m in the green phase of the TDD cycle, is stop and look at the private methods I’ve extracted in a class. If I see a bunch of them, it makes me stop and ask, “What is this code trying to tell me?”

More often than not, it is telling me there is another class needed. I look at the method parameters. I look at the fields they reference. I look for a pattern. Often, I will find that a whole new class is lurking in there.

So I refactor it out. And my old tests should STILL pass.
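
To make that concrete, here is a small, entirely hypothetical sketch of the kind of extraction I mean; the class and method names are made up for illustration.

Extracting a lurking class from private helpers (hypothetical sketch)
using System.Collections.Generic;
using System.Linq;

public class LineItem
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

// Before: private helpers that all revolve around the same data.
public class OrderService
{
    public decimal Total(IEnumerable<LineItem> items)
    {
        return Subtotal(items) + Tax(Subtotal(items));
    }

    private decimal Subtotal(IEnumerable<LineItem> items)
    {
        return items.Sum(i => i.Price * i.Quantity);
    }

    private decimal Tax(decimal subtotal)
    {
        return subtotal * 0.08m;
    }
}

// After: the hidden class is pulled out, gets its own focused tests,
// and OrderService delegates to it while its original tests still pass.
public class PriceCalculator
{
    private readonly decimal taxRate;

    public PriceCalculator(decimal taxRate)
    {
        this.taxRate = taxRate;
    }

    public decimal Total(IEnumerable<LineItem> items)
    {
        var subtotal = items.Sum(i => i.Price * i.Quantity);
        return subtotal + subtotal * taxRate;
    }
}

Nothing in the original tests knew about the private helpers, so the extraction costs nothing and the new class can now be tested directly.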

Now it is time to go back and add some new tests for this new class. Or if you feel like being really rigorous, delete that class and write it from scratch, using TDD. (But I admit, I don’t really ever do that except for practice when doing longer kata.)

At the risk of repeating myself, I’ll say it one more time: Just Say No™

What about the public API?

I most often hear this from people who’ve grown up creating frameworks and APIs. The essence of this argument is rooted in the idea that making something have private accessibility (a language construct) somehow reduces the supported public API of your product.

I would argue that this is a red herring, or the fallacy of irrelevant conclusion. The issues of "supported public API" and "public method visibility" are unfortunately related only by their shared use of the word public.

Making a class or method public does not make it a part of your public API. You can mark it as "for internal use only" with various attributes or code constructs (depending on your language). You can put it into an "internals only" package or namespace. You can simply document it in your docs as being "internal use only".
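
As a quick, hypothetical C# sketch of what that marking can look like (the namespace and type names are mine; EditorBrowsableAttribute is a real framework attribute commonly used for this):

Marking a public type as internal-use-only (illustrative)
using System.ComponentModel;

namespace MyProduct.Internal   // an "internals only" sub-namespace signals intent
{
    /// <summary>For internal use only. Not part of the supported public API.</summary>
    [EditorBrowsable(EditorBrowsableState.Never)] // hides it from IntelliSense in most consuming projects
    public class ScoringEngine
    {
        public int Score(int[] rolls)
        {
            // Publicly accessible, so tests and sibling assemblies can use it freely,
            // but clearly documented and marked as unsupported for external callers.
            var total = 0;
            foreach (var roll in rolls)
            {
                total += roll;
            }
            return total;
        }
    }
}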

All of these are at least as good, if not better, than making it private or internal. Why? Because it frees the developers to use good object-oriented design, to refactor aggressively and to TDD/unit test effectively. It makes your code more flexible and the developers are more free to do what they are asked to do.

But customers will use it anyway!

Consider the answer to these questions:

  1. You have an API that has public accessibility, and it is marked Internal use only in any of the ways I mentioned above. A customer calls you up and says “When I use this API it doesn’t work as expected.” What is your response?
  2. You have an API that has public accessibility, and it is marked Internal use only in any of the ways I mentioned above. You changed the API between product versions. The customer complains that the API changed. What is your response?

In each case, I would argue that the answer is the same. You simply say, “That API is not for public consumption. It is for internal use only, which is why it was marked & documented as such. Do not use it. Go away.”

If you feel that you wouldn't do that, consider these slightly revised versions of the same questions:

  1. You have an API that has private accessibility. A customer calls you up and says, “When I used private reflection to call this API, it didn’t work as expected.” What is your response?
  2. You have an API that has private accessibility. You changed the API between product versions. The customer complains that the API changed. What is your response?

If you are working in a language like C# (or Java or Ruby or any other that supports reflection), you know that making a method, property or field have private accessibility does not ever prevent a user from calling it. So don’t delude yourself into thinking that it will prevent them from using it. Trust me on this, I’ve been doing “privates hacking” since the good old days of C++ and pointer offset math.
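
And if you doubt that, here is a tiny, hypothetical demonstration of how little protection the private keyword actually provides in C# (SecretThing and its method are made up for the example):

Calling a private method via reflection (illustrative)
using System;
using System.Reflection;

public class SecretThing
{
    private int Calculate(int n)
    {
        return n * 2;
    }
}

public class Program
{
    public static void Main()
    {
        var target = new SecretThing();

        // NonPublic | Instance is all it takes to find the private method.
        MethodInfo method = typeof(SecretThing).GetMethod(
            "Calculate", BindingFlags.NonPublic | BindingFlags.Instance);

        var result = method.Invoke(target, new object[] { 21 });
        Console.WriteLine(result); // 42 - private accessibility stopped nobody
    }
}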

Other arguments

If you think it is about security, that argument doesn’t hold water either. If you spend just a little bit of time researching how hackers create aimbots and other cheats for popular online games, you will see that even in C++, making things private doesn’t protect you. A little pointer arithmetic and structure mapping, and you can do whatever you want.

A very interesting argument I’ve heard on this one had to do with IntelliTrace in Visual Studio. It went something like this: “When you make it public, it will show up in the statement completion drop-down, and we don’t want to confuse people with all that noise.”

When I pointed out that some simple refactoring (e.g. using a sub-namespace called Internal, and actually doing the Extract Class refactoring I mentioned above) would address that, they realized that they were designing for the wrong thing.

A final one I get at about this point goes something like this: “Information hiding and encapsulation are fundamental to good object-oriented design.” Let me say that I believe in information hiding. I think fields should be private. And I think external actors should only manipulate them through public methods. If you look at languages like Smalltalk and Ruby, this is just the way the language works. (Yes, yes, I know… private in Ruby means something different. But the point is still the same.)

Conclusion

You can make a well-factored design that doesn't require private methods. You can design an easy-to-understand API that is usable and not too noisy by applying well-targeted refactorings and design patterns. You can remove the support burden by clearly documenting what is meant to be used and what is not. And making things private doesn't buy you any security anyway.

Focus on what matters. Let the tests drive the code and let the code tell you when it needs to be refactored. You don’t need to make everything private.

Date: Friday, 04 May 2012 11:07

Good news!! Last night I got an email from Charlie Poole, the maintainer of NUnit pointing me to a blog post he’d just made:

Today, I’m releasing version 0.92 of the NUnit Adapter for Visual Studio and
creating a new project under the NUnit umbrella to hold it’s source code, track
bugs, etc.

In case you didn’t know, Visual Studio 11 has a unit test window, which
supports running tests using any test framework for which an adapter has been
installed. The NUnit adapter has been available on Code Gallery since the
initial VS Developer preview. This release fixes some problems with running
tests that target .NET 2.0/3.5 and with debugging through tests in the unit
test window.

This is great news because if you’ve tried to use NUnit with VS11 Beta before now, you probably noticed that you couldn’t actually run or debug a selected test or tests. When you tried, you ended up either getting a failed run or having all tests run. Clearly not good.

The fix was pretty simple, and I want to thank our team for helping find the issue and also of course thank Charlie for getting it out to customers to get them unblocked while using NUnit.

He’s also pushed some new content about the plugin:

So if you are experimenting with VS11 and are an NUnit user, be sure to get this update.

And of course, keep the feedback on VS11 Unit Testing coming!

Date: Thursday, 03 May 2012 20:24

I first started playing games on computers the day after we got our first computer way back in 1978 or 1979 on our venerable Ohio Scientific Challenger 4P. Man, those were good times. Tanks and mazes… that was about all we had.

In the 80s, when we lived in Egypt and had an Apple II+, I would type the games in, line-by-line, from the backs of magazines. I’m still convinced that I really learned to debug computer programs back in these days.

And I’ve been a gamer ever since.

Lately, I’ve been playing a bunch of different games, looking for something that I can really get into for a while. Although I’ve had fun playing almost all of them, I’ve still not found any with real staying power for me. But I have had a chance to play a number of different games on a few different platforms.

Here are some reviews of what I've been playing lately, and a look at what I'm waiting for.

What I’ve Been Playing

World of Warcraft - PC

Metacritic Score: 90-93

Yes, I still play Wow. I’ve been playing since the tail-end of so-called “Vanilla Wow”, but I have severely cut back on the amount of time I play it. I still raid two nights a week with my long time guildmates and friends but on non-raiding nights, I just can’t seem to bring myself to login and play.

The thing I always liked about Wow was the team & social nature of the game. Working together with 5, 10 or 25 people (or even 40 in the olden days) to figure out how to defeat one of the dungeons is a blast. Problem solving, leadership, joking, laughing and then the joy of victory. But while the non-raiding parts of the game used to keep me engaged, it has lost its luster, which is why I’ve been dabbling in so many others.

Dead Space 2 - PC

Metacritic Score: 87

This game can be pretty scary at times, as you expect from a survival horror FPS. Tons of scary shit coming at you all the time. The upgrade and leveling system was a bit weird, but I muddled my way through and seemed to maintain decent killing power.

The magnetic grip thing is a blast to kill with. You can grab steel rods, saw blades, etc. (kinda like Half-Life) and rip the monsters in half. Great fun and recommended if you like the scary stuff.

Like so many FPS/RPG games though, I didn’t actually finish it before moving on to something else.

Skyrim - PC

Metacritic Score: 94

The whole world was raving about Skyrim a few months ago, so I had to give it a try. I got it for a steal during one of Steam’s special sales, and I have to admit it is a fun game. When the modding community really got rolling that was fun too.

But I have a problem in general with single player RPGs, and it is that I just start to get bored. I don’t really “role play”, so for me the fun is in the puzzles and challenges. In every single player RPG I’ve played I find that once I figure out the basic fighting mechanics, it becomes very rote, very quickly.

I will say, this game is one of the most beautiful game worlds I've played in a long time. If you install the HD texture pack (free downloadable content) it is even better, but make sure you have a good graphics card.

I still may return to Skyrim for some more dragon killing and whatnot, but it just hasn’t been able to draw me back in more than a month.

Tom Clancy’s Splinter Cell: Conviction - PC

Metacritic Score: 83

Sneak and spy RPGs are a lot of fun, especially when you’re first getting started. This one has very nice graphics, and a pretty reasonable set of controls and mechanics. Like many games that are also console games, there aren’t as many controls and options as a big MMO, and that is good for this kind of game. Crouch, sneak, aim, shoot, stab, etc.

I did really enjoy this game but there is one thing that really annoyed me and made me stop playing. Ubisoft totally screwed the pooch on making the wireless Xbox controller for PC work. And since they don’t give you any configuration options for the controller buttons, you pretty much have to use the keyboard. (Apparently it works fine with a wired controller but that isn’t what I have. Also there are some hacks here and there and I almost got one to work, but then I gave up and started playing something else.)

A good game. Good playstyle, an interesting story line, and it is always fun to grab a bad guy from outside a window and toss him down four floors to the street below.

Deus Ex: Human Revolution - PC

Metacritic Score: 90

I am still playing this one on-and-off. Unlike Splinter Cell, this one works perfectly with my Xbox wireless controller. It has the stealthy aspects of Splinter Cell. You also can just run and gun if you want to play that way, but it is not nearly as much fun.

There is an achievement called Ghost that was fun to get. Basically you have to get through an entire level without getting seen. And given that you do have to take some people out in almost every level, that can be fun to pull off.

This one has some interesting sci-fi aspects and a recurring mini-game that they call “hacking”, but that is basically dependent on whether you spent the right talent points and get lucky with the RNG.

A good game, probably will give it some more time.

Rage - PC

Metacritic Score: 79

I grabbed this one in another Steam weekend sale and it was a lot of fun for a while. It is not-quite a rail shooter, but also not at all an open environment game. Graphically it is clearly an Id game, as it just has that “Doom feel” so many of their titles have.

What really makes this game interesting is the constant transition between vehicle fighting and dungeon-like instances. Any time you get a mission, you have to drive there. Along the way you will do combat with bad guys also in vehicles. This is fun.

There are also races you can do to get car upgrades, so if you want to just do that, you can play this as if it were a car game. An in-town “mission board” provides odd-jobs you can do, and some of those are fun. (I really liked being the sniper defender from 300 yards away while the work crew repaired something.)

As I mentioned before, the dungeon instances are pretty much rail shooters (no real path choices, just keep going forward blasting everything you see), but there are a few puzzles to solve here and there.

Call of Duty: Modern Warfare 3 - PC

Metacritic Score: 78

I really liked multi-player CoD:MW2 until the aimbotting got so bad I had to get one myself just to compete. And then it was boring. CoD: Black Ops was also pretty good, and while some people didn't like the way the servers worked (publicly controlled with various rules and configs), I actually liked it. I especially liked the so-called crouch servers where you had to always stay crouched or you'd get kicked. It slowed the game down and made it more strategic as opposed to just full-speed run and gun.

I was optimistic when Modern Warfare 3 was announced, but unfortunately I think I played it for less than a month. Like all of the CoD games, I really enjoyed the single player campaign and the small-scale coop missions, but the MP was all crazy run-and-gun. A few of the new battleground styles were fun, but I just got bored.

Honestly, I'm not sure I'll do another big online FPS for a while after this one. I know the new CoD was just announced, but unless I can get it for cheap, I will probably pass.

Old School Racer - Windows Phone

Metacritic Score: not available

I saw my son playing some side scrolling motorcycle stunt game on his iPod Touch, so I decided to look around for something similar for my phone. There are actually two versions of this game: "Old School Racer" and "Old School Racer Classic". One is a typical outdoor motorcycle trials game and the other has all kinds of weird environments.

I played both games all the way through. I found the tilt control to be passable, but it was easier to play with the left-thumb lean bar.

A good game for a small screen, but it didn’t feel as natural as I’d like. Still a pretty fun phone game for a buck. (BTW, this game is apparently known as OSR Unhinged on Xbox.)

iStunt 2 - Windows Phone & iOS

Metacritic Score: 82

I found this one purely by accident, and it is a blast. The same idea as other stunt side scrollers like Old School Racer, but this time you’re snowboarding. It has grabs, flips, rail slides, gravity flips and zero-G zones.

This one I played through in both Stunt and Time Trial mode, of course getting all stars on all levels. This is a bit of an obsession for me with games that keep score this way, and that includes things like Angry Birds and Cut the Rope. I simply can't stop until I get all the points. I will do a level for as long as it takes to get that last star, and this one held my interest long enough for me to do exactly that.

For $3 it is a bit more money than some other phone games, but it is worth it. The tilt controls are natural and work well. The stunts are fun, and often required to get that last star. My son has it on his iPod and it was fun for us to be sitting side-by-side working on levels. Recommended.

(You can play the Flash version of this game on the Miniclip site but without the tilt and shake of the phone, it feels totally different.)

Trials HD and Trials Evolution

Metacritic Score: 86-91

I knew there had to be a good stunt trials kind of game on the Xbox, and these two are it. They have good graphics and physics, easy levels and ridiculously hard levels, and rewards for achievements, even some that cross games. (I really wanted the Micro Donkey 60cc mini bike in Evolution, so I had to go back to Trials HD to get the Unyielding achievement. That one was hard!)

If your Xbox Live friends also have the game, you can see an in-game marker indicating their best time & position on that track. It can be a bit distracting though as you try to make sure you don't get "beat" by your friends.

Evolution added a bunch of new things to the classic HD motorcycle game. There are skiing courses, a very cool map editor (you can share maps), and even multiplayer races (my son always wins).

I’m still playing through these two games because, as you may have guessed, I still don’t have gold medals on all the levels. Oh, and the hard+ levels are frigging HARD!

Other Games of Interest

Star Wars: The Old Republic

Metacritic Score: 85

The newest Wow-killer certainly put a dent in it. A lot of my friends left Wow to play SW:ToR, but many of them also came back after the leveling part was over. From what people are telling me, the story line and leveling are second to none for an MMO, but the end-game raiding experience is way behind.

I have an account, but still haven’t activated it. Let’s see if it lasts longer than the other so-called Wow killers from the past two years.

Fallout: New Vegas

Metacritic Score: 84

I bought this one a long time ago when I was playing Fallout 3, but by the time I was done with that, I just couldn’t bring myself to keep going. It is still sitting there in my Steam library waiting for me and maybe I’ll go back and give it a try some day.

Just Cause 2

Metacritic Score: 84

I actually played this one three or four times and it was interesting. I like the grapple thing, but as I was playing Skyrim at the time, I gave up pretty early. It has a bunch of other interesting gameplay mechanics too, so if you can get it cheap, it might be worth a try.

We’ll see if it draws me back.

Half-Life 2: Episode 3

Metacritic Score: not yet released

I absolutely loved the entire Half-Life series and played them all through to the end, even the Lost Coast mini expansion. They were occasionally scary but always fun, and I've been eagerly waiting for any news at all from Valve about the continuation of this franchise. These are probably the only first-person RPGs I've actually stuck with end-to-end and come back to for more.

Hopefully this isn’t a dead series, because I think there is more good stuff possible here. Valve has made occasional statements about it, but still nothing concrete. And given how well Portal 2 did last year, I wonder if they will just go back to that pool to drink. (Of course, Portal 2 was a great game, and the coop was super-fun, but I want some HL2:E3!!)

Battlefield 3

Metacritic Score: 89

I sorta wish I’d bought this instead of CoD:MW3, but… well… I didn’t. And by the time I gave up MW3, I wasn’t interested in another combat & warfare FPS.

But friends have told me that the vehicle combat is great, so I may go back and get this if I can grab it for a steal.

Call of Duty: Black Ops II

Metacritic Score: not yet released

This was just announced and will ship before the holiday season this year. Apparently set in a much more futuristic setting than the classic CoD games, this one could be very interesting. Since it is a Treyarch game like the last Black Ops, I'm hoping it has the same kind of public multiplayer server community, so it doesn't become just another speed run-and-gun game. We'll see.

It is available for pre-order pretty much everywhere, but given that the release date isn’t until November, I’m in no rush to hand over my money.

Halo: Reach

Metacritic Score: 91

Like some of the others listed here at the end, I own this one but haven’t really sat down to play it yet. I did find aiming to be a bit hard with the controller but that was probably just lack of practice, since I don’t generally play FPS games on the Xbox.

Wrap up

I’m still looking for a good MMO to take the place Wow has had for me for the last few years, but there are certainly a lot of good games out there to play and I’m having fun trying them all out. I love that so many of the games now have trials you can play before committing the real money. The number of games I’ve tried but not purchased on the phone and Xbox is crazy.

Date: Wednesday, 02 May 2012 23:34

Lately I’ve been asked by more and more people inside Microsoft to help them really learn to do TDD. Sure, they’ve read the books, and probably some blog posts and articles, but they are struggling to figure it out.

But with so many people talking about how good it is they are frustrated that when they try to apply it at work, in their day job, it just doesn’t seem to work out.

My guidance to them is simple. Do a TDD kata every morning for two weeks. Limit yourself to 30 minutes each morning. Then pick another kata and do it again.

What is a TDD Kata?

Kata is a Japanese word meaning “form”, and in the martial arts it describes a choreographed pattern of movements used to train yourself to the level of muscle memory. I study Kenpo myself and have a number of kata that I practice regularly both in training and at home.

The idea of kata for software development was originally coined by Dave Thomas, co-author of one of my favorite developer handbooks The Pragmatic Programmer: From Journeyman to Master. The idea is the same as in martial arts, practice and repetition to hone the skills and lock in the patterns.

One of the reasons TDD is hard for newcomers is that it is very explicit about the baby-step nature of the changes you make to the code. You simply are not allowed to add anything more than the current tests require. And for experienced developers, this is very hard to do, especially when working on real code.

By using the small but precise nature of the kata to practice these skills, you can get over this roadblock in a safe way where you understand that the purpose for the baby steps is to learn the movements.

Every day for two weeks? Really?


Actually, I would encourage you to do one every day for the rest of your programming career, but that might be a bit extreme for some people. In my martial arts training, we warm up with forms and kata, so why not have a nice regular warm-up exercise to do while your morning coffee takes hold?

My recommendation to people is to do a 30-minute kata every morning for two weeks. Then pick another one and do it every day for two weeks. Continue on.

I don't recommend people start using it at work, on their actual work projects, until they feel like they are ready. I can't tell you when that will be, but you will know. It may be after one week or six, but until you feel like you are very comfortable with the rhythm of the TDD cycle, you should wait. To do otherwise is like entering a martial arts tournament with no skills.

Oh and did I mention that kata are actually fun?

Using a kata to learn

I still do kata all the time, although I don't do them every day. Sometimes I do one that I have already memorized and sometimes I go hunt down a new one. Sometimes I make one up and do that over and over for a few days.

Most commonly, when I do a kata these days, it is to learn a new technology. As an example, a while back I wanted to learn more about SubSpec and Shouldly, which are a nice way to do BDD style development on top of xUnit.net. I could have just played with it for five minutes, but instead I did the string calculator kata every day for a week.

By doing that I actually learned a lot more about SubSpec than I would have learned otherwise. It really helped me understand the difference between their Assert and Observe methods, for example.

I’ve also used Kata to learn new underlying technologies. When WPF and Silverlight were first getting attention, and the Model-View-ViewModel (MVVM) approach appeared, I developed a quick kata where I create a ViewModel from scratch for a simple but imaginary system. I did it the same way every day for a week and MVVM became very internalized for me.

Then when XAML-based Metro Style app development appeared for Windows 8, I did them again, but this time entirely within the constraints of Windows 8 development. I reused my existing practice form to learn a new way of developing apps. And it worked great.

What katas are there? Which should I do first?

I generally recommend one of two different kata for getting started: The Bowling Game or the String Calculator. Most recently, I’ve been demonstrating the String Calculator over the Bowling Game because for people who didn’t grow up in the US, 10-pin bowling can be a bit weird to describe.

The Bowling Game Kata

As I said, if you live in the US or Canada, this one is easy to understand. If not, you might want to look at the next one.


The essence of this kata, popularized by Uncle Bob Martin, is to create a scoring engine for 10-pin Bowling. In 10-pin bowling you have ten frames where you can roll one, two (or three) balls and score the pins that you knock down. Sounds simple right?

Except the scoring is weird. You’d think 10 pins and ten frames would yield a highest possible score of 100, but you’d be wrong. The lowest possible score (all misses) is zero, but the highest possible score is 300. For more information about 10-pin bowling scoring I suggest reading the Wikipedia page. But this is exactly why it is a fun system to model for our kata.

Uncle Bob breaks this kata down into the following five tests:

  1. Gutter game scores zero - When you roll all misses, you get a total score of zero.
  2. All ones scores 20 - When you knock down one pin with each ball, your total score is 20.
  3. A spare in the first frame, followed by three pins, followed by all misses scores 16.
  4. A strike in the first frame, followed by three and then four pins, followed by all misses, scores 24.
  5. A perfect game (12 strikes) scores 300.

He has some very explicit refactorings that you are to perform while implementing the third and fourth tests, and doing these is important. I’ve always found it weird that the final test just passes when you do the first four right (this smells a bit of over-implementing test four) but I’ve never really found a way to do #3 and then #4 that didn’t result in #5 just passing. Oh well.

You can read Uncle Bob’s wiki page on the topic for the canonical definition.
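
To give a feel for the opening moves, here is a rough sketch of the first two tests in C# with xUnit.net. The Game class with Roll and Score methods follows the shape commonly used for this kata, but treat it as my sketch rather than Uncle Bob's canonical code.

First two steps of the Bowling Game kata (sketch)
using Xunit;

public class Game
{
    private int score;

    // Just enough implementation to make tests 1 and 2 pass;
    // spares and strikes come later in the kata.
    public void Roll(int pins)
    {
        score += pins;
    }

    public int Score()
    {
        return score;
    }
}

public class BowlingGameTests
{
    [Fact]
    public void GutterGameScoresZero()
    {
        var game = new Game();
        for (var i = 0; i < 20; i++) game.Roll(0);
        Assert.Equal(0, game.Score());
    }

    [Fact]
    public void AllOnesScoresTwenty()
    {
        var game = new Game();
        for (var i = 0; i < 20; i++) game.Roll(1);
        Assert.Equal(20, game.Score());
    }
}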

The String Calculator Kata

Like the Bowling Kata, this kata, made popular by Roy Osherove, comes with a precise set of steps to follow. The essence is a method that given a delimited string, returns the sum of the values. I’ve always preferred my kata to define the tests I will follow every time through the exercise, so here are the tests I use for this one:


  1. An empty string returns zero
  2. A single number returns the value
  3. Two numbers, comma delimited, returns the sum
  4. Two numbers, newline delimited, returns the sum
  5. Three numbers, delimited either way, returns the sum
  6. Negative numbers throw an exception
  7. Numbers greater than 1000 are ignored
  8. A single char delimiter can be defined on the first line (e.g. //# for a ‘#’ as the delimiter)
  9. A multi char delimiter can be defined on the first line (e.g. //[###] for ‘###’ as the delimiter)
  10. Many single or multi-char delimiters can be defined (each wrapped in square brackets)

I rarely bother with Test #10 when I do it, because it feels like a big step to take all at once, but Roy does include it in his definition, and I have it in my kata notebook.

To read Roy’s original post describing the kata, or to see a bunch of video demonstrations (in just as many languages) visit his page on the topic.
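
As a rough illustration of how the early steps tend to unfold (my own minimal version, not Roy's reference implementation), the first three tests can go green with very little code:

First three steps of the String Calculator kata (sketch)
using System.Linq;
using Xunit;

public static class StringCalculator
{
    public static int Add(string numbers)
    {
        if (string.IsNullOrEmpty(numbers))
        {
            return 0; // test 1: an empty string returns zero
        }

        // tests 2 and 3: one or two comma-delimited numbers return their sum
        return numbers.Split(',').Select(int.Parse).Sum();
    }
}

public class StringCalculatorTests
{
    [Fact]
    public void EmptyStringReturnsZero()
    {
        Assert.Equal(0, StringCalculator.Add(""));
    }

    [Fact]
    public void SingleNumberReturnsItsValue()
    {
        Assert.Equal(7, StringCalculator.Add("7"));
    }

    [Fact]
    public void TwoCommaDelimitedNumbersReturnTheirSum()
    {
        Assert.Equal(3, StringCalculator.Add("1,2"));
    }
}

Each later test (newline delimiters, custom delimiters, negatives, and so on) forces the next small change, which is exactly the rhythm the kata is meant to drill.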

More katas to choose from

A couple of others that I’ve used when training or for my own fun and benefit are:

Most kata share a few key attributes. First, they are simple to describe (well, maybe not 10-pin bowling); the problem is intentionally simplified to one thing. Second, they should come with the set of tests you are to use to drive the design. Sometimes they will also come with specific or recommended refactorings to perform at certain steps.

There are many sites out there that have catalogs of kata to dig through, but I really do recommend doing them one at a time for at least a week before picking up another one. Here are a few of those lists, and I’m sure there are many more:

What are you waiting for?

I’ve been doing kata for a very long time and I really do recommend them as a tool for learning what TDD really feels like. Give it a try, and use my guidelines: every morning, time-boxed to 30 mins, the same kata for one or two weeks, then pick a new one and repeat.

Hopefully you will get as much value from them as I do.

Date: Wednesday, 25 Apr 2012 09:00

Let me start by saying Shims are evil. But they are evil by design. They let you do things you otherwise couldn't do, which is very powerful. They let you do things you might not want to do, and might know you shouldn't do, but that the real world of software sometimes forces you to do.

The Catch-22 of Refactoring to Enable Unit Testing

In the first part of my series on VS11 Fakes, I reviewed Stubs, which are a simple way of creating concrete implementations of interfaces and abstract classes for use in unit tests.

But sometimes it happens that you have to test a method where the dependencies can’t be simply injected via an interface. It might be that your code depends on an external system like SharePoint, or simply that the code news up or uses a concrete object inside the method, where you can’t easily replace it.

The unit testing agilista have always said, “Refactor your code to make it more testable,” but therein lies the rub. I will again refer to the esteemed Martin Fowler for a quote:

Refactoring is a disciplined technique for restructuring an existing body of
code, altering its internal structure without changing its external behavior.

How do you know if you are changing its external behavior? “Simple!” says the Agilista, “You know you didn’t change it as long as your unit tests still pass.” But wait… we don’t have unit tests yet for this code (that’s what we’re trying to fix), so I’m stuck… Catch-22.

I have to quote Joseph Heller’s masterwork just for fun:

There was only one catch and that was Catch-22, which specified that a concern
for one's safety in the face of dangers that were real and immediate was the
process of a rational mind. Orr was crazy and could be grounded. All he had to
do was ask; and as soon as he did, he would no longer be crazy and would have
to fly more missions. Orr would be crazy to fly more missions and sane if he
didn't, but if he were sane he had to fly them. If he flew them he was crazy
and didn't have to; but if he didn't want to he was sane and had to. Yossarian
was moved very deeply by the absolute simplicity of this clause of Catch-22 and
let out a respectful whistle.

Joseph Heller Catch-22 Chapter 5

He’s crazy if he wants to go fight, but if he says he really didn’t want to fight then he isn’t crazy and so needs to go fight. We have the same problem. We can’t test because it needs refactoring, but we can’t refactor because we don’t have tests.

Shim Your Way Out of the Paradox

Example 1 - Dealing with DateTime.Now

To see what this really looks like, let’s look at some code (continued from the example in Part 1).

"Untestable" System Under Test Code
namespace ShimsDemo.SystemUnderTest
{
   public class CustomerViewModel : ViewModelBase
   {
      private Customer customer;
      private readonly ICustomerRepository repository;

      public CustomerViewModel(Customer customer, ICustomerRepository repository)
      {
         this.customer = customer;
         this.repository = repository;
      }

      public string Name
      {
         get { return customer.Name; }
         set
         {
            customer.Name = value;
            RaisePropertyChanged("Name");
         }
      }

      public void Save()
      {
         customer.LastUpdated = DateTime.Now;  // HOW DO WE TEST THIS?
         customer = repository.SaveOrUpdate(customer);
      }
   }
}

There is one minor change from our previous example. We are now setting the LastUpdated property on the Customer object before passing it to the repository. (I know this might not be the best way to do this, but go with me…)

How do we test that Save sets the correct value to LastUpdated?

You might start by writing a test like this. This uses the same Stubs techniques I showed in the last article and tries hard to deal with the variable nature of the LastUpdated property.

A Not-so-good Test for LastUpdated
public class CustomerViewModelTests
{
   [Fact]
   public void SaveShouldSetTheCorrectLastUpdatedDate_WithoutShim()
   {
      // Arrange
      var savedCustomer = default(Customer); // null
      var repository = new StubICustomerRepository
            {
               SaveOrUpdateCustomer = customer => savedCustomer = customer
            };
      var actualCustomer = new Customer
            {
               Id = 1,
               Name = "Sample Customer",
               LastUpdated=DateTime.MinValue
            };
      var viewModel = new CustomerViewModel(actualCustomer, repository);

      // Act
      var now = DateTime.Now;
      viewModel.Save();

      // Assert
      Assert.NotNull(savedCustomer);

      // We will use a 10ms "window" to confirm that the date is "close enough"
      // to what we expect. Not ideal, but it should work... most of the time.
      var delta = Math.Abs((savedCustomer.LastUpdated - now).TotalMilliseconds);
      const int accuracy = 10; // milliseconds
      Assert.True(delta <= accuracy, "LastUpdated was not appx equal to expected");
   }
}

There are, of course, a few issues with this code, most notably the bit below the comment explaining the 10ms window. How small can we make the accuracy variable before the test starts to fail? How close is close enough? What about when the viewModel.Save() code does extra work eating up some of that time?

A better way to test this is to use a Shim to override DateTime.Now so that it always returns a predictable value. That test would look something like this:

A Better Test for LastUpdated
public class CustomerViewModelTests
{
   [Fact]
   public void SaveShouldSetTheCorrectLastUpdatedDate_WithShim()
   {
      using (ShimsContext.Create())
      {
         // Arrange
         var savedCustomer = default(Customer); // null
         var repository = new StubICustomerRepository
               {
                  SaveOrUpdateCustomer = customer => savedCustomer = customer
               };

         // Make DateTime.Now always return midnight Jan 1, 2012
         ShimDateTime.NowGet = () => new DateTime(2012, 1, 1);

         var actualCustomer = new Customer
               {
                  Id = 1,
                  Name = "Sample Customer",
                  LastUpdated=DateTime.MinValue
               };
         var viewModel = new CustomerViewModel(actualCustomer, repository);

         // Act
         viewModel.Save();

         // Assert
         Assert.NotNull(savedCustomer);
         Assert.Equal( new DateTime(2012, 1, 1), savedCustomer.LastUpdated );
      }
   }
}

With the ShimDateTime.NowGet assignment in the Arrange section, we use Shims to override the static Now property getter on DateTime. This replacement applies to all calls to DateTime.Now in the AppDomain, which is why we are required to provide a scoping region for our replacements with the ShimsContext.Create() block.

But now this test is no longer "flaky". It has no dependency on your system clock, or on the length of time that Save() or any other code takes to run. We have controlled the environment and isolated this test from the external dependency on DateTime.Now.

This example shows one issue that can make code difficult to test, but of course there are more.

Example 2 - The hidden object instance

Here’s a different example showing another problem.

How do I deal with that new statement?
public void StartTimer()
{
   var timer = new DispatcherTimer();
   var count = 30;
   timer.Tick += (s, e) =>
      {
         count -= 1;
         if (count == 0)
         {
            count = 30;
            RefreshData();
         }
      };
   timer.Interval = new TimeSpan(0, 0, 1);
   timer.Start();
}

In this example, we have a method that creates an instance of the WPF DispatcherTimer class. It uses the timer to track when 30 seconds have elapsed and then refreshes a property on the view model (presumably a collection bound in the XAML to an on-screen list) by calling RefreshData().

How can we test this code? There are a few things we might like to test, including:

  1. Does it start the timer with a 1 second interval?
  2. Does it refresh the list only after 30 ticks have gone by?

Let’s look at how we might use Shims to create that first test.

Shimming out the DispatcherTimer
[Fact]
public void TimerStartedWithOneSecondInterval()
{
   using (ShimsContext.Create())
   {
      // Arrange
      var customer = new Customer();
      var repository = new StubICustomerRepository();
      var sut = new CustomerViewModel(customer, repository);

      var span = default(TimeSpan);
      var startCalled = false;
      ShimDispatcherTimer.AllInstances.IntervalSetTimeSpan = (@this, ts) => span = ts;
      ShimDispatcherTimer.AllInstances.Start = (@this) => startCalled = true;

      // Act
      sut.StartTimer();

      // Assert
      Assert.Equal(TimeSpan.FromSeconds(1), span);
      Assert.True(startCalled);
   }
}

In this example we use Shims’ ability to detour all future instances of a type via the special AllInstances property on the generated Shim type. Since these are instance methods, our delegate has to include a parameter for the “this” pointer. I like to use @this for that parameter to remind me what it is, but I’ve seen other people use that or instance, which works just as well if you dislike having something that looks like a C# keyword in your code.

We override the Interval property setter (which takes a TimeSpan, hence the generated name IntervalSetTimeSpan) and the Start method. By stashing the values away in C# closures, we can call sut.StartTimer() and then verify that what we expected did in fact happen.

Example 3 - Using Shims to wrap an existing instance

One more example will wrap up my introduction to Shims.

We will now take a look at how we might implement the second test in my list above. We want to confirm that the RefreshData method is called only after 30 ticks have elapsed.

Let’s take a look at how we would test this using Shims. In this case we still need to control the DispatcherTimer, but we also need to override the implementation of RefreshData(), which is a method on the very class we’re testing.

Shimming a specific object instance
[Fact]
public void RefreshTimerCallsRefreshAfter30Ticks()
{
   using (ShimsContext.Create())
   {
      // Arrange
      var customer = new Customer();
      var repository = new StubICustomerRepository();
      var sut = new CustomerViewModel(customer, repository);

      var refreshDataWasCalled = false;
      new ShimCustomerViewModel(sut)
      {
         RefreshData = () => refreshDataWasCalled = true,
      };

      // Set up our override of the DispatcherTimer.
      // We will store the Tick handler so we can "pump" it manually,
      // and no-op Interval and Start just to be safe.
      var tick = default(EventHandler);
      ShimDispatcherTimer.AllInstances.TickAddEventHandler = (@this, h) => tick = h;
      ShimDispatcherTimer.AllInstances.IntervalSetTimeSpan = (@this, ts) => { };
      ShimDispatcherTimer.AllInstances.Start = (@this) => { };

      // Act
      sut.StartTimer();

      // Assert
      refreshDataWasCalled = false; // Clear out any calls that happened before the first tick()
      for (var i = 0; i < 29; i++) tick(this, null); // Tick 29 times
      Assert.False(refreshDataWasCalled);
      tick(this, null); // Tick one more time
      Assert.True(refreshDataWasCalled);
   }
}

While nobody would call this a pretty test, it does what we asked of it, and it has let us verify an aspect of this method that we otherwise wouldn’t have been able to test.

Resolving the Catch-22

The Catch-22 that I mentioned at the start of this article can be restated like this:

  1. You can’t test the code because it has internal dependencies that make it untestable
  2. You need to refactor it to fix the dependency
  3. Since you don’t have any tests for this code, you aren’t safe to refactor
  4. See #1

Hopefully you can see that Shims help you get out of this dilemma. With Shims we can rework that list to something like this:

  1. You can’t test the code using traditional techniques because it has dependencies that make it untestable.
  2. Before you can refactor it, you need to create a test that asserts its current behavior.
  3. You can use Shims to create this characterization test, enabling you to refactor safely.
  4. Once you have it under test, you can safely refactor and make the dependency explicit and replaceable using traditional (e.g. interfaces and stubs) methods.
  5. The characterization test should continue to pass after performing the refactoring.
  6. Once the refactoring is complete, you can create new tests that do not use Shims (but may use Stubs) to test the code.
  7. Then you can remove the Shims test.

It is a few more steps, but it lets you have a clear, intentional way to refactor code that otherwise would have required “refactoring in the dark”. The Catch-22 is resolved and you can continue to improve the design and testability of your code.
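
To make step 4 concrete, here is one common way to make the DateTime.Now dependency from Example 1 explicit and replaceable. This is only a sketch of one possible refactoring, not code from the article; the Func&lt;DateTime&gt; clock parameter and the field names are my own assumptions.

public class CustomerViewModel
{
   private readonly Customer _customer;
   private readonly ICustomerRepository _repository;
   private readonly Func<DateTime> _clock;

   // The optional clock parameter makes the time source explicit;
   // production code keeps using DateTime.Now by default.
   public CustomerViewModel(Customer customer, ICustomerRepository repository,
                            Func<DateTime> clock = null)
   {
      _customer = customer;
      _repository = repository;
      _clock = clock ?? new Func<DateTime>(() => DateTime.Now);
   }

   public void Save()
   {
      _customer.LastUpdated = _clock();
      _repository.SaveOrUpdate(_customer);
   }
}

A test can now pass () => new DateTime(2012, 1, 1) and assert an exact LastUpdated value using only a Stub, no Shims required; once that test exists and passes, the Shim-based characterization test can be deleted.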

I do recommend going all the way with this approach and refactoring the code until the Shim-based test is no longer required. Every time you need something like Shims to test a piece of code, it is telling you that you have a design problem; you probably have high coupling and low cohesion in that method.

Refactoring is the act of intentional design, and you should always take the opportunity to make your design better. Shims can be used to get out of this impasse, but if you don’t think about the problem you will end up with a Big Ball of Mud for your design.

Some final words about Shims

As the famous literary quote goes (Voltaire, Socrates and even Spider-Man’s Uncle Ben):

With great power comes great responsibility.

The examples I’ve shown in this post all illustrate design flaws around coupling and cohesion. Knowing that, we should always feel a bit “dirty” about having to use Shims to test something. Every time we do it, we should remember to go the extra mile afterwards and refactor it away if we can (my steps 4-7 above).

The Catch-22 of untestable code is real. Getting out of it is hard. Shims are designed to help you get out of this trap.

Shims may be evil from a purist design and TDD sense, but in the real world we are often faced with code we a) don’t control or b) didn’t write and which doesn’t have any tests. Use Shims to get out of that, but always continue on and fix your design issues.

Conclusion

Visual Studio 11 includes the new Fakes library for creating isolated unit tests. It includes two kinds of Fakes:

  • Stubs for creating lightweight, fast-running concrete classes for the interfaces and abstract classes your system uses. I reviewed Stubs in my last article.
  • Shims for intercepting and detouring almost any .NET call. Shims are particularly useful for removing internal dependencies of methods, and for getting you out of the “testability Catch-22”.

Hopefully you have found these two articles useful. For more information on Fakes, please take some time to look through the MSDN documentation.

Date: Monday, 23 Apr 2012 21:39

UPDATE: Stephen Toub pointed out that in .NET 4.5 you don’t need my CreatePseudoTask() helper method. See the bottom of this post for more information.

If you’ve been coding in VS11 Beta with .NET 4.5 you may have started experimenting with async and await in your programs. You’ve also probably noticed that a lot more of the APIs you consume are starting to expose asynchronous methods using Task and Task<T>.

This technology lets you specify that operations are long running and should not be expected to return quickly. You basically get to fire off asynchronous work without having to manage the threads yourself. Behind the scenes, the necessary state machine code is generated and, as they say, “it just works”.

I would really recommend reading all the great posts by Stephen Toub and others over on the PFX Team blog. And of course the MSDN Docs on the Task Parallel Library should be reviewed too.

But did you know that in VS11 Beta you can now create async unit tests? Both MS-Test and the newest version of xUnit.net support unit tests that are async and can therefore use the await keyword on calls that return a Task.

One of the interesting things about this comes up when you fake an interface that contains an async method. Consider the case where you have an interface method that returns Task<string> because some or all of its implementors are expected to be long running. Your interface definition might look like this:

public interface IDoStuff
{
   Task<string> LongRunningOperation();
}

When you are testing a class that consumes this interface, you will want to provide a fake implementation of that method. So what is the best way to do that?

(Note: I will use VS11 Fakes for this example but it really doesn’t matter.)
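
For context, the SystemUnderTest consumed in the tests below is assumed to look roughly like this. It is a hypothetical sketch (the post never shows it), written only to match the “Interface said '...'” string the assertions expect.

public class SystemUnderTest
{
   private readonly IDoStuff _doStuff;

   public SystemUnderTest(IDoStuff doStuff)
   {
      _doStuff = doStuff;
   }

   public async Task<string> DoSomething()
   {
      // Await the potentially long-running operation and wrap its result.
      var value = await _doStuff.LongRunningOperation();
      return string.Format("Interface said '{0}'", value);
   }
}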

You might write a test like this:

[TestMethod]
public async Task TestingWithRealRunningTasks()
{
   // Arrange
   var stub = new StubIDoStuff
      {
         LongRunningOperation = () => Task.Run( () => "Hello there" );
      };
   var sut = new SystemUnderTest(stub);

   // Act
   var result = await sut.DoSomething(); // This calls the interface stub

   // Assert
   Assert.AreEqual( "Interface said 'Hello there'", result );
}

Assuming DoSomething() produces the formatted string that is expected, this test will work. But there’s a bit that is unfortunate…

You actually did spin off a background thread when you called Task.Run(). You can confirm this with some well-placed breakpoints and looking at the threads.
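
If you prefer evidence over breakpoints, one quick way to see it (a sketch, not from the original post) is to capture the managed thread id inside the stubbed lambda and compare it with the test’s own thread id:

// Hypothetical variation of the stub that records which thread ran the work.
var stubThreadId = 0;
var stub = new StubIDoStuff
{
   LongRunningOperation = () => Task.Run(() =>
   {
      stubThreadId = Thread.CurrentThread.ManagedThreadId; // thread-pool thread
      return "Hello there";
   })
};
var testThreadId = Thread.CurrentThread.ManagedThreadId;
// After awaiting sut.DoSomething(), the two ids will usually differ,
// showing that Task.Run really did schedule work on another thread.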

But did you need to do that in your fake object? Not really. It probably slowed your test down by a bit and really isn’t required.

The System.Threading.Tasks namespace includes a class that can help with exactly this kind of thing: TaskCompletionSource<T> (see MSDN Docs). This very cool class can be used for lots of different things, like adapting event-based asynchronous code to the Task-based world.

Stephen Toub says a lot about TCS in his post The Nature of TaskCompletionSource<TResult> but the part most relevant to us here is where he says:

Unlike Tasks created by Task.Factory.StartNew, the Task handed out by
TaskCompletionSource<TResult> does not have any scheduled delegate associated
with it. Rather, TaskCompletionSource<TResult> provides methods that allow you
as the developer to control the lifetime and completion of the associated Task.
This includes SetResult, SetException, and SetCanceled, as well as TrySet
variants of each of those.

It “does not have any scheduled delegate associated with it.” That sounds perfect!

UPDATE: This code is only required with the .NET 4.0 version of the Task Parallel Library. See below for an updated .NET 4.5 version of this test that doesn’t require my helper method.

So what I’m going to do is use a TCS to create a task that simply contains the concrete return value, acting as if the long-running operation has already completed, and returning a Task that the consuming code can treat normally.

Rewriting that last test using a TCS, it would look like this:

[TestMethod]
public async Task TestWithHandCodedTCS()
{
   // Arrange
   var stub = new StubIDoStuff
   {
      LongRunningOperation = () => {
         var tcs = new TaskCompletionSource<string>();
         tcs.SetResult("Hello there!");
         return tcs.Task;
      }
   };
   var sut = new SystemUnderTest(stub);

   // Act
   var result = await sut.DoSomething();

   // Assert
   Assert.AreEqual("Interface said 'Hello there!'", result);
}

Now I no longer have the background thread! But that chunk of code where I create the TCS is annoying, so I can refactor it out into a reusable helper method:

internal class TaskHelpers
{
    public static Task<T> CreatePseudoTask<T>(T result)
    {
        TaskCompletionSource<T> tcs = new TaskCompletionSource<T>();
        tcs.SetResult(result);
        return tcs.Task;
    }
}

Now I can rewrite the test to this:

[TestMethod]
public async Task TestWithPseudoTask()
{
   // Arrange
   var stub = new StubIDoStuff
   {
       LongRunningOperation = () => TaskHelpers.CreatePseudoTask<string>("Hello there!")
   };
   var sut = new SystemUnderTest(stub);

   // Act
   var result = await sut.DoSomething();

   // Assert
   Assert.AreEqual("Interface said 'Hello there!'", result);
}

Nice and simple, and easy to read, without all the mess of creating real scheduled background Task delegates.

What do you think? Useful? I’ve found it to help me a bit when writing tests against async stuff.


UPDATE: Using Task.FromResult in .NET 4.5

Apparently this pattern was common enough in the .NET 4.0 version of the TPL that the team decided to just “make it so” and bake it in, so we don’t need the helper method anymore. And since it is baked in, it is probably optimized to perform even better.

Here is the updated test using the new Task.FromResult method.

[TestMethod]
public async Task TestWithFromResultHelper()
{
   // Arrange
   var stub = new StubIDoStuff
   {
       LongRunningOperation = () => Task.FromResult("Hello there!")
   };
   var sut = new SystemUnderTest(stub);

   // Act
   var result = await sut.DoSomething();

   // Assert
   Assert.AreEqual("Interface said 'Hello there!'", result);
}

Enjoy!

Date: Sunday, 22 Apr 2012 02:26

Update - I was able to get backtick code blocks working much better, and made a stab at the YAML Front Matter, but it doesn’t seem to work using syntax include. See the git repo for the updated source.

I use Vim as my day-to-day, non-IDE text editor. Yeah, I know everyone is in love with Notepad2, Notepad+ or whatever the new favorite on the block is. I’ve been a vi/vim guy for ages and am not gonna change.

Since switching my blog to Octopress, I’ve been writing all my posts in Vim. Vim does a nice job with Markdown, but it doesn’t know anything about the other things that are often used in a Jekyll markdown file.

The two big things are:

  1. It gets confused by the YAML Front Matter
  2. It can go nuts over some of the Liquid filters and tags

Fortunately Vim has a nice way of letting you add new things to an existing syntax definition. You create another syntax file, put it in the after directory under ~/.vim (for example, ~/.vim/after/syntax/markdown.vim), add the new syntax descriptors, and restart Vim.

For the first problem, I found a blog post by Christopher Sexton that had a nice regex match for the YAML Front Matter. He has it included in his Jekyll.vim plugin (which I don’t use, but it is pretty cool).

A quick catch-all regex for Liquid tags and another for the backtick code blocks, and it works pretty damn well.

Here’s the code:

markdown.vim
    " YAML front matter
    syntax match Comment /\%^---\_.\{-}---$/ contains=@Spell
    
    " Match Liquid Tags and Filters
    syntax match Statement /{[%{].*[}%]}/
    
    " Match the Octopress Backtick Code Block line
    syntax match Statement /^```.*$/

I do think it would be cool if I could do a few other things:

  1. Actually use YAML syntax coloring in the Front Matter. I’d like not to have to reimplement the YAML syntax to accomplish this, but from looking at the way the HTML syntax file handles embedded JavaScript, I may have to.
  2. Build in understanding of the Octopress codeblock tag and disable Markdown syntax processing within it.

The repo also includes a three-line ftplugin tweak that forces markdown files to use expandtab and a 3-character tabstop. Since I typically keep tabs (I don’t have expandtab in my vimrc), and since Markdown actually uses spaces to mean things, this just works better. If you don’t like that part, just delete the file.

If you want to use it, I recommend using pathogen and then cloning the GitHub repository into your bundle folder.

Date: Saturday, 21 Apr 2012 23:05

It is so much fun to go back and look at old posts. I saw Scott Hanselman mention on Twitter that he’d recently marked his 10th anniversary as a blogger. Since I just converted all my posts from SubText to Markdown, I’ve been going through the older ones. Sometimes I find one and say “Ugh, did I really say that?” but other times I find a good one and it still resonates with me as much as it did when I originally wrote it.

This is about one of those good ones.

The post I found was from January 29, 2009 and was called Rules of the Road. In that post I talked about how I found these stashed away in a OneNote file from 2006, so these have been with me for a while.

When I was growing up, my dad occasionally had issues with an ulcer in his duodenum. It was stress related. He was a very Type-A kind of person, in an occasionally stressful job, with all the stresses that one expects starting out a family. (You know… money, kids, etc.)

It was during this time that he started using three rules to help him deal with it. He told me those rules as I became older and was in college, where there can be similar but different stresses. I didn’t really take them to heart though until I became a manager and really started to experience stress.

Since then I’ve added some more to his list.

The Rules

  1. Don’t stress out about things you can’t control - ignore them
  2. Don’t stress out about things you can control - fix them
  3. If you have an issue with someone (anyone), talk to them about it immediately, do not let it fester
  4. Help people who politely and sincerely ask for help
  5. Fight for what you believe in
  6. Admit when you are wrong and don’t be afraid to apologize
  7. Reserve the right to change your mind
  8. You do not have to justify saying no to someone

Let’s look at each one

1. Don't stress out about things you can't control - ignore them
2. Don't stress out about things you can control - fix them

The first two of my dad’s original list are really the most important of the bunch. They help you manage and control your own reactions and expectations, and can play a huge part in increasing your personal happiness.

It is amazing how often people will stress out and complain about things over which they have no control. It is even more amazing how often people whine and complain about things they can control. This is related to the old adage, “Change your environment or change your environment.” Don’t like your job? Fix it or quit. You are in control over that. The same attitude can be applied to just about anything that stresses you.

3. If you have an issue with someone (anyone), talk to them about it
immediately, do not let it fester

Rule #3 is really just a special case of Rule #2, but pertaining to people. But this special case is important because we often forget that our relationships and communications with our peers, colleagues, friends and family are in our control. If you have an issue with something someone said or did, burying that down in your subconscious will only make it worse. These conversations can be hard, but they are essential to maintaining good relationships with people.

4. Help people who politely and sincerely ask for help.

This one is obvious to me, but particularly important on teams. Your family, friends and team should take priority over almost everything else. And when one of them comes to you with a request for help, help them.

5. Fight for what you believe in

This is another special case of #2 and it another example of “Change your environment or change your environment.” If you believe strongly that something needs to change, don’t leave it unchallenged. This is true at work and at home. But watch out for things that are beyond your control (Rule #1). If they really are beyond your control, then you probably need to stop caring about it so much.

6. Admit when you are wrong and don't be afraid to apologize.

I’m always amazed how many people believe that admitting a mistake and apologizing are somehow a sign of weakness. Nothing could be farther from the truth. True self-confidence and personal strength is demonstrated when you can openly and sincerely admit you made a mistake and apologize for it. It is amazing how well this works with people and how it can help establish trust and respect.

7. Reserve the right to change your mind

I always think of this one during election year because it seems that our political system doesn’t agree with it. How many times have you seen one politician accuse another of “flip flopping”?

I saw a good quote on Twitter the other day that reminded me of this rule:

Teaching kids to accept baseless assertions is damaging.
Teaching them to ignore contradictory evidence is downright dangerous.

Anyone who dogmatically sticks to a position because they’re unwilling to change is someone I simply cannot respect or work with. Everyone has to be willing to listen to evidence, and when presented with new information you have to be willing to change your mind. Of course you don’t have to change your mind; it is about the willingness to accept new evidence and adjust your position. Anything else is simply illogical and foolish.

When you know better, do better.

8. You do not have to justify saying no to someone

Many people, when presented with a request they are unable or unwilling to fulfill, will try to rationalize or explain away their reason. If you have a reason and are willing to share it, go for it, but don’t feel like you always have to justify saying no to someone. Be polite and respectful, but there is nothing wrong with simply saying, “No, I’m sorry, I can’t do that.”

Conclusion

I’m not sure if this will help you but these are rules that I use in my life to help guide my behavior. The essence is about being open and honest with yourself and with those around you. Consistent application of them can help you be happier, less stressed and be someone that people know they can trust and respect.
