Saturday, March 1, 2014

Mighty Moose and Contextual (a/k/a Hierarchical) TDD

Introduction

In my last few blog posts, I introduced Mighty Moose and advanced TDD using nested, hierarchical context classes. If you have started using Mighty Moose and tried your hand at contextual TDD (that’s my new name for it, it seems to fit), also known as hierarchical testing, you may have noticed a problem.

The Test Runner Behind Mighty Moose

Mighty Moose is compatible with the following testing frameworks:

  • MS Test
  • NUnit
  • xUnit
  • MbUnit
  • SimpleTest
  • MSpec

What may be surprising is that Mighty Moose does not necessarily use the native test runners for these frameworks. Instead, it has its own test runner, AutoTest.Net. From what I can tell, AutoTest.Net implements a set of Adapters for interacting with the various testing frameworks it supports, and those Adapters contain their own implementations rather than delegating to the native test framework engines. Unfortunately, that means the Adapters' behavior may not be 100% compatible with the test framework you're actually using to write your tests.

So, imagine my surprise when I refactored all of my tests to use nested, hierarchical classes in order to constrain the boundaries of the various setups I would need (see my last blog post on advanced TDD). Mighty Moose started reporting that all of my abstract base class tests were broken! (The actual error was that AutoTest.Net was unable to instantiate an abstract class. No, really?) So I ran my tests in Visual Studio using the CTRL+R, T shortcut, which invokes the native MS Test runner, popped over to the Test Explorer window, and what did I find? All of my tests passed.
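To make the pattern concrete, here is a minimal sketch of the same layout in Python's unittest, which I'm using as a stand-in for the MSTest code (an abstract base class holding shared setup and tests, with a concrete nested class supplying the context); all class and method names here are invented for illustration:

```python
import abc
import unittest

class WhenSumming:
    """Outer context class; nesting scopes the related fixtures together."""

    class Base(unittest.TestCase, abc.ABC):
        """Abstract base: shared setup plus tests every context inherits."""

        @abc.abstractmethod
        def operands(self):
            ...

        def setUp(self):
            self.result = sum(self.operands())

        def test_result_is_an_int(self):
            self.assertIsInstance(self.result, int)

    class WithPositiveNumbers(Base):
        """Concrete nested context: supplies the data, adds its own tests."""

        def operands(self):
            return (1, 2)

        def test_result_is_positive(self):
            self.assertGreater(self.result, 0)

# Loading the concrete class picks up the inherited test as well.
suite = unittest.TestLoader().loadTestsFromTestCase(WhenSumming.WithPositiveNumbers)
print(suite.countTestCases())  # 2: one inherited, one of its own
```

The point of the abstract base is exactly what broke: a correct runner should never try to instantiate `Base` on its own, only the concrete nested contexts derived from it.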

On the one hand, having these Adapters is great: you don't have to learn a different testing framework API in order to use Mighty Moose. On the other hand, my confidence in Mighty Moose is now lower than I would like, because its output is not 100% compatible with the native test runner. How can I be sure that in all instances the tests are actually passing or failing (or that AutoTest.Net is even reporting the right result, for that matter)?

AutoTest.Net is Broken

So obviously there’s a bug in AutoTest.Net, which is what Mighty Moose uses to run your tests, and it’s a pretty big one. AutoTest.Net recognizes that you have various test classes (à la the TestClassAttribute attribute), and it tries to instantiate any class decorated with that attribute without checking whether it’s valid to instantiate said class. But wait, it gets worse. It finds the nested inner class (which derives from the abstract base class) and only runs the tests declared directly in the concrete derived class; none of the inherited base class tests run with the derived class.

Is there a way around this? Well, yes, but you should be careful. The way to avoid this problem with Mighty Moose is to not make the base classes abstract. However, this will lead to the base class tests executing once on their own and again for every class which derives from that base class. And this won’t be true just for AutoTest.Net, but also for MS Test. (I don’t know about NUnit and xUnit since I’ve never used them.)

So what, you say? That’s OK, as long as Mighty Moose works. Well, consider this: you have 100 tests in your base class and 100 tests in a derived class. How many tests get executed? 300: 100 for the base class on its own, 100 for the derived class’s inherited base class methods, and 100 for the tests contained in the derived class itself. OK, so you probably don’t have 100 tests in a single test class. But in a production system you may have upwards of 5,000 tests, and that’s when each runs only once. Multiply a good chunk of those and pretty soon you’re pushing 15,000 test executions. (And keep in mind that assumes you’re only nesting one level deep; nest two levels and the number of tests that execute explodes.) It’s wasteful of time and computing resources.
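The multiplication is easy to verify with a small sketch. Python's unittest duplicates inherited tests the same way when the base class is concrete; the tiny classes here stand in for the 100-test classes above:

```python
import unittest

class BaseTests(unittest.TestCase):
    def test_one(self):
        pass

    def test_two(self):
        pass

class DerivedTests(BaseTests):
    def test_three(self):
        pass

loader = unittest.TestLoader()
base_count = loader.loadTestsFromTestCase(BaseTests).countTestCases()
derived_count = loader.loadTestsFromTestCase(DerivedTests).countTestCases()

# Base runs its own 2 tests; Derived re-runs the 2 it inherited plus its
# own 1, so the run executes 5 tests even though only 3 are defined.
print(base_count + derived_count)  # 5
```

Scale those numbers up to 100 tests per class and you get exactly the 300 executions described above.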

Conclusion

If you have a really small project and don’t mind tests being executed more than once, by all means, go for it. But it’s really not that good of a solution. AutoTest.Net should take into account whether or not a class is abstract and ignore it even if it’s marked with the TestClassAttribute attribute. Furthermore, AutoTest.Net should check whether a test class is a derived class and ensure that the derived class’s base class hierarchy is properly instantiated and that all inherited test methods are executed as part of the derived class.

Hopefully the authors of AutoTest.Net will fix this bug soon. Until then, if you have a small project, go for it; otherwise, it’s probably best not to use Mighty Moose. If you really need a continuous test runner, check out NCrunch. I’ve been using it at work and I really like it, too. (As for NCrunch’s cost: while I don’t want to pay for it for personal use at this time, it’s really not that expensive, and if I were doing more development personally, I’d definitely pay for it.)

2 comments:

  1. Sorry about the inconvenience. MSTest is the only adapter that does not integrate directly with the native frameworks. We gave it a shot but MSTest is clearly not made with that in mind. I just added the MSTest implementation to the repo under https://github.com/continuoustests/AutoTest.Net/tree/master/lib/celer/src. I will try to get to this but right now there are other matters with higher priority. If I on the other hand got a pull request I'd merge it right away.

    ReplyDelete
    Replies
    1. Hey, thanks for commenting on this! I really appreciate you taking the time. I also really appreciate the effort that has gone into Mighty Moose and AutoTest.Net. Bugs are bound to happen. Since these are both FOSS projects, I know you guys only have so much bandwidth--it's not like you're getting paid to work on this 100%. So I appreciate it.

      I'll try and take a look at the source code and see if I can make heads or tails of it. I was digging around in it on GitHub to see if anything stood out--might as well clone the repo and see if I can find the problem.

      Thanks again.

      Delete