
Saturday, March 1, 2014

Mighty Moose and Contextual (a/k/a Hierarchical) TDD

Introduction

In my last few blog posts, I introduced Mighty Moose and advanced TDD using nested, hierarchical context classes. If you have started using Mighty Moose and tried your hand at contextual TDD (that's my new name for it; it seems to fit), also known as hierarchical testing, you may have noticed a problem.

The Test Runner Behind Mighty Moose

Mighty Moose is compatible with the following testing frameworks:

  • MS Test
  • NUnit
  • xUnit
  • MbUnit
  • SimpleTest
  • MSpec

What may be surprising is that Mighty Moose does not necessarily use the native test runners for these frameworks. Mighty Moose has its own test runner, called AutoTest.Net. From what I can tell, AutoTest.Net implements a bunch of Adapters that it uses to interact with the various testing frameworks it supports. The Adapters don't invoke the native test framework engines themselves; each contains its own implementation. Unfortunately, that means the Adapters' behavior may not be 100% compatible with the test framework you're using to write your tests.
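From the outside, the adapter layer might look something like the sketch below. This is purely speculative: the interface name, its members, and the fake adapter are my invention for illustration, not AutoTest.Net's actual API.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Speculative sketch of a framework-adapter abstraction; these names are
// invented for illustration and are NOT AutoTest.Net's real types.
public sealed record TestResult(string TestName, bool Passed);

public interface ITestFrameworkAdapter
{
    // True if this adapter recognizes the test framework used by the assembly.
    bool Handles(string assemblyPath);

    // Discovers and executes tests with the adapter's OWN implementation,
    // rather than delegating to the framework's native engine -- which is
    // where subtle incompatibilities can creep in.
    IEnumerable<TestResult> Run(string assemblyPath);
}

// A stand-in adapter, just to show the shape of the abstraction.
public sealed class FakeMsTestAdapter : ITestFrameworkAdapter
{
    public bool Handles(string assemblyPath) => assemblyPath.EndsWith(".Tests.dll");

    public IEnumerable<TestResult> Run(string assemblyPath)
    {
        yield return new TestResult("Example_Test", true);
    }
}

public static class AdapterDemo
{
    public static int CountPassed(ITestFrameworkAdapter adapter, string assemblyPath) =>
        adapter.Handles(assemblyPath)
            ? adapter.Run(assemblyPath).Count(r => r.Passed)
            : 0;

    public static void Main() =>
        Console.WriteLine(CountPassed(new FakeMsTestAdapter(), "MyProject.Tests.dll"));
}
```

The design appeal is obvious: one runner, many frameworks. The catch, as described below, is that each adapter has to faithfully re-implement the framework's discovery and execution semantics.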

So, imagine my surprise when I refactored all of my tests to use nested, hierarchical classes in order to constrain the boundaries of the various setups I would need (see my last blog post on advanced TDD). Mighty Moose started reporting that all of my abstract base class tests were broken! (The actual error was that AutoTest.Net was unable to instantiate an abstract class. No, really?) I ran my tests in Visual Studio using the CTRL+R, T shortcut, which invokes the native MS Test runner, popped over to the Test Explorer window, and what did I find? All of my tests passed.
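For context, the shape of the pattern that triggered the failures looks roughly like this. The class and member names are hypothetical, and to keep the sample self-contained I've defined stand-ins for the MS Test attributes and `Assert` class; in real code you'd use `Microsoft.VisualStudio.TestTools.UnitTesting` instead.

```csharp
using System;

// STAND-INS for the MS Test attributes and Assert helper, so this
// sample compiles on its own without the MS Test assemblies.
public class TestClassAttribute : Attribute { }
public class TestMethodAttribute : Attribute { }
public class TestInitializeAttribute : Attribute { }

public static class Assert
{
    public static void AreEqual(decimal expected, decimal actual)
    {
        if (expected != actual) throw new Exception($"Expected {expected}, got {actual}");
    }
    public static void IsNotNull(object value)
    {
        if (value == null) throw new Exception("Expected a non-null value");
    }
}

public class Account
{
    public decimal Balance { get; private set; }
    public void Deposit(decimal amount) => Balance += amount;
}

// The nested, hierarchical ("contextual") shape: an abstract base context
// holds shared setup and observations; nested concrete classes refine it.
[TestClass]
public abstract class when_working_with_an_account
{
    protected Account Account;

    [TestInitialize]
    public virtual void Setup() => Account = new Account();

    [TestMethod]
    public void the_account_exists() => Assert.IsNotNull(Account);

    [TestClass]
    public class and_a_deposit_is_made : when_working_with_an_account
    {
        public override void Setup()
        {
            base.Setup();
            Account.Deposit(10m);
        }

        [TestMethod]
        public void the_balance_reflects_the_deposit() =>
            Assert.AreEqual(10m, Account.Balance);
    }
}

public static class Demo
{
    public static void Main()
    {
        // What a correct runner should do for the derived context: run the
        // shared setup, then BOTH the inherited and the new observations.
        var ctx = new when_working_with_an_account.and_a_deposit_is_made();
        ctx.Setup();
        ctx.the_account_exists();              // inherited observation still holds
        ctx.the_balance_reflects_the_deposit();
        Console.WriteLine("both observations pass");
    }
}
```

The base class is abstract on purpose: it exists only to be refined by the nested contexts, never to be instantiated directly.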

On the one hand, having these Adapters is great; you don't have to learn a different testing framework API in order to use Mighty Moose. On the other hand, my confidence in Mighty Moose is now lower than I would otherwise like, because its output is not 100% compatible with the native test runner's. How can I be sure that in all instances the tests are actually passing or failing (or even that AutoTest.Net is reporting the right result, for that matter)?

AutoTest.Net is Broken

So obviously, there’s a bug in AutoTest.Net, which is what Mighty Moose uses to run your tests; and it’s a pretty big one. AutoTest.Net recognizes that you have various test classes (à la the TestClassAttribute) and tries to instantiate any class decorated with that attribute, without checking whether it’s valid to instantiate said class. But wait, it gets worse. It finds the nested inner class (which derives from the abstract base class) and runs only the tests declared in the concrete derived class; none of the inherited base class tests run with the derived class.
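Conceptually, the fix on the discovery side is a one-line guard. This is a simplified sketch using plain reflection, not AutoTest.Net's actual code, and the attribute below is a stand-in for MS Test's real TestClassAttribute so the sample is self-contained:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Stand-in for MS Test's TestClassAttribute, to keep the sample self-contained.
[AttributeUsage(AttributeTargets.Class)]
public class TestClassAttribute : Attribute { }

[TestClass] public abstract class AbstractFixture { }
[TestClass] public class ConcreteFixture : AbstractFixture { }

public static class Discovery
{
    // Find every class marked as a test class that can actually be
    // instantiated. Abstract classes carry the attribute but exist only
    // to be inherited from, so they must be skipped.
    public static Type[] FindInstantiableFixtures(Assembly assembly) =>
        assembly.GetTypes()
            .Where(t => t.GetCustomAttribute<TestClassAttribute>(inherit: true) != null)
            .Where(t => !t.IsAbstract)   // <-- the guard AutoTest.Net appears to be missing
            .ToArray();
}

public static class Program
{
    public static void Main()
    {
        var fixtures = Discovery.FindInstantiableFixtures(typeof(Program).Assembly);
        // Only ConcreteFixture survives; AbstractFixture is skipped rather
        // than failing with "cannot instantiate an abstract class".
        Console.WriteLine(string.Join(", ", fixtures.Select(t => t.Name)));
    }
}
```

A full fix would also walk the concrete fixture's inheritance chain and include the inherited test methods when executing it, which is the second half of the bug.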

Is there a way around this? Well, yes, but you should be careful. The way to avoid this problem with Mighty Moose is to not make the base classes abstract. However, this will cause the base class tests to execute for every class that derives from that base class. And this isn’t true just for AutoTest.Net, but also for MS Test. (I don’t know about NUnit and xUnit, since I’ve never used them.)

So what, you say? That’s ok, as long as Mighty Moose works. Well, consider this. You have 100 tests in your base class, and 100 tests in a derived class. How many tests will get executed? 300: 100 for the base class run on its own, 100 for the derived class’s inherited base class methods, and 100 for the tests contained in the derived class itself. OK, so you don’t have 100 tests in a single test class. But in a production system, you may have upwards of 5,000 tests. (And that’s when each runs only once!) Now multiply a good chunk of those tests and pretty soon you’re pushing 15,000 tests. (And keep in mind that that assumes you’re only nesting one level deep. Nest two levels deep and the number of tests that execute explodes.) It’s wasteful of time and computing resources.
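The blow-up is easy to quantify. Here is my own back-of-the-envelope sketch of the arithmetic (not anything Mighty Moose itself reports):

```csharp
using System;

public static class TestCountMath
{
    // When base test classes are NOT abstract, the runner executes the base
    // fixture on its own AND re-executes its tests as inherited members of
    // every derived fixture. For a base with b tests and one derived class
    // adding d tests of its own:
    //   executed = b        (base fixture run standalone)
    //            + (b + d)  (derived fixture: inherited tests + its own)
    public static int OneLevel(int b, int d) => b + (b + d);

    // Nest one level deeper (a middle context adding m tests and a leaf
    // context adding l) and the duplication compounds:
    //   executed = b + (b + m) + (b + m + l)
    public static int TwoLevels(int b, int m, int l) => b + (b + m) + (b + m + l);

    public static void Main()
    {
        Console.WriteLine(OneLevel(100, 100));        // 300, though only 200 tests exist
        Console.WriteLine(TwoLevels(100, 100, 100));  // 600, though only 300 tests exist
    }
}
```

Each additional nesting level re-runs everything above it, so the waste grows with the depth of the hierarchy, exactly the structure contextual TDD encourages you to build.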

Conclusion

If you have a really small project and don’t mind tests being executed more than once, by all means, go for it. But it’s really not a good solution. AutoTest.Net should take into account whether a class is abstract and skip instantiating it, even when it’s marked with the TestClassAttribute. Furthermore, AutoTest.Net should detect when a test class derives from another, ensure the derived class’s base class hierarchy is properly instantiated, and execute all inherited test methods as part of the derived class.

Hopefully the authors of AutoTest.Net will fix this bug soon. Until then, if you have a small project, go for it. Otherwise, it’s probably best not to use Mighty Moose. If you really need a continuous test runner, check out NCrunch. I’ve been using it at work and I really like that continuous test runner, too. (Oh, and as far as the cost of NCrunch: while I don’t want to pay for it for personal use at this time, it’s really not that expensive; and if the level of development I do personally were different, I’d definitely pay for it.)

Saturday, February 1, 2014

Mighty Moose a/k/a ContinuousTests

Introduction

At my current place of employment, I was introduced to NCrunch, a continuous testing solution for .NET and Visual Studio. It’s a great tool that I really enjoy using at work. However, for at-home use, I can’t justify paying their asking price for a personal license. Don’t get me wrong, I really love the tool, and if I were doing some serious development on my own (i.e. creating products for sale as a side job), I’d probably pay for it, because it’s fast and shows exactly where an exception occurred or a test failed.

So I really like this concept of continuous testing and went on an Internet search to see what was available out there for free. I wanted something that would run in Visual Studio 2012/2013. A lot of the “free” tools use external test runners which display their output either in the browser (as an XML/HTML page) or in a console window. I don’t like those tools because they interrupt my context, forcing me to switch between Visual Studio and, say, a console window. Plus, I then have to go digging for the file and line number when a test fails.

Enter Mighty Moose a/k/a Continuous Tests

During my search, I found Mighty Moose a/k/a ContinuousTests, by Greg Young. Mighty Moose has a slightly different philosophy about continuous testing. Instead of a code coverage margin in Visual Studio, it has a risk margin. The risk margin shows a colored circle next to a method under test, with a number inside. The number tells you how many tests cover the method. The color (red, green, or yellow) indicates the risk of changing that method.

How is risk calculated? It’s based on how far away from the test the method is called. According to Greg, while code coverage tells you whether or not the code was executed by a test, that information alone is not enough to classify how risky it is to change the code. What if the code that’s “covered” was called from a method 10 frames down the call stack? By changing that code, you could be affecting not just the method under test, but every intervening method in the call stack.
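I haven't seen the exact formula published, so treat the following as a toy illustration of the idea only, entirely my own invention and not Mighty Moose's actual algorithm:

```csharp
using System;

public static class RiskSketch
{
    // Toy illustration only -- NOT Mighty Moose's real risk calculation.
    // The intuition: a method covered directly by a test (depth 1) is low
    // risk; one reached many frames down the call stack is much riskier,
    // because a change there ripples through every intermediate frame
    // between it and the asserting test.
    public static string RiskOf(int callStackDepthFromTest)
    {
        if (callStackDepthFromTest <= 2) return "green";   // called by the test, or nearly so
        if (callStackDepthFromTest <= 5) return "yellow";  // a few frames removed
        return "red";                                      // buried deep in the call stack
    }

    public static void Main()
    {
        Console.WriteLine(RiskOf(1));   // green
        Console.WriteLine(RiskOf(10));  // red
    }
}
```

The thresholds here are arbitrary; the point is only that the metric is a function of call-stack distance rather than a boolean "covered / not covered".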

In addition to the risk margin, Mighty Moose provides call-graph “overlay” graphics (for those of us who can’t afford Visual Studio Ultimate ;) ) that show all methods called during the test. This can help you gauge “how far away” a method was invoked from the test and, in general, just helps you see the entire call graph, which in itself can be enlightening.

Getting Mighty Moose to Work with Visual Studio 2013

Mighty Moose is currently at version 1.0.47. It officially supports only Visual Studio 2008, 2010, and 2012. However, there’s an excellent question over on Stack Overflow, which I contributed to, that shows you how to get Mighty Moose to work with Visual Studio 2013.