Motivation
There are many existing test frameworks, like NUnit, xUnit.net, MSpec, xBehave.net, and more. I have used a few of them, so what drove me to build a new one?
Tests can act as living documentation - many developers have already realized this is a good thing, because tests never get out of date. But still, is a test really as readable as written documentation?
In xUnit frameworks, tests are consolidated in test classes, where every test is represented as a method, and the method is named to reflect the tested aspect. Method identifiers can't contain spaces - we all know that - so we usually omit them or replace them with underscores. We end up with methods like TransferWithInsufficientFunds and TransferWithInsufficientFundsAtomicity (taken from the NUnit Quick Start), which feels like being back in the days of SMS with its limit of 160 characters - trying to save as many characters as possible. Most companies have their own naming convention, typically consisting of the method under test, a brief description of the preconditions, and optionally the expected postconditions. Depending on the aspect you're testing, the test method name can get pretty long, as in Transfer_WithInsufficientFunds_ThrowsException. This is barely readable once you need to put a lot of context information into the name, and it's also not refactoring-safe if the Transfer method is ever renamed.
Of course, there are alternative naming conventions like test fixture per method or test fixture per feature, which are very readable when using a test runner from within the IDE. The disadvantage of these approaches is the language noise they introduce (declarations, braces, and blank lines), which ultimately leads to class explosion.
Another problem with conventional test naming is that it cannot outline all the assertions made. Assertions that were written once might be hard to decode later, and their logical statement gets lost. Suppose a test Start_WithFuel looks like this:
vehicle.Start();
Assert.That(vehicle.IsRunning, Is.True);
Assert.That(vehicle.RevCounter, Is.InRange(0, 1000));
The second assertion actually verifies that the engine is idling. Even though this can be figured out within a second, there are cases that are not that simple. Especially when using fakes, assertions tend to get long, as in this example:
A.CallTo(() => Container.Register(A<object>.That.Matches(
    p => p is IDecorator &&
         ((IDecorator)p).InnerDecorator == OriginalDecorator)))
    .MustHaveHappened();
Most frameworks indeed offer the possibility to state the reason as an additional parameter, although the details vary between frameworks. Unfortunately, it is usually not mandatory, and developers tend to skip it, leading to an inconsistent style. MSpec and xBehave.net are great tools for tackling this problem. However, they have other characteristics that prevent me from using them.
Assertion helpers encapsulate functionality that is used across all tests in a code base. They can be realized as static classes or as base classes. With static classes, usages tend to be wordy, as in ControllerAssertions.AssertRedirect(viewResult, url: "http://github.com"). These helper classes also rarely follow a common naming convention, which makes them hard to discover with IDE support. Discoverability is not a problem when using base classes. However, the assertion helpers must then be maintained either in a single master base class or within a base class hierarchy, which violates the single responsibility principle or introduces the diamond problem, respectively.
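The two styles can be sketched roughly as follows (the names and the ActionResult/RedirectResult types are illustrative assumptions, not an API this project prescribes):

```csharp
// Style 1: static helper class - callers must know and remember the class name.
public static class ControllerAssertions
{
  public static void AssertRedirect (ActionResult result, string url)
  {
    var redirect = result as RedirectResult;
    Assert.That (redirect, Is.Not.Null);
    Assert.That (redirect.Url, Is.EqualTo (url));
  }
}

// Style 2: base class - helpers are discoverable via code completion on the
// test class itself, but every shared helper must live in the class hierarchy.
public abstract class ControllerTestBase
{
  protected void AssertRedirect (ActionResult result, string url)
  {
    var redirect = result as RedirectResult;
    Assert.That (redirect, Is.Not.Null);
    Assert.That (redirect.Url, Is.EqualTo (url));
  }
}
```

With the base class, typing AssertRedirect(viewResult, "http://github.com") inside a derived test class suffices - but only because the helper was placed somewhere in the inheritance chain.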
To come back to my initial question about tests being as readable as written documentation: check for yourself. Open up a random test. Examine the movement of your eyes while reading and trying to understand it. Do they go back and forth within a single line? Do they skip particular regions? I'm sure they do. Using tests as documentation comes at a price, and that price is readability. So there is a definite need for better test frameworks with lower cognitive demands, which make us faster at browsing through our documentation.
Speaking of extensibility: as far as I know, only NUnit and xUnit.net are worth talking about. Both of them allow affecting the execution of a test through attributes. Extension attributes may seem very declarative and readable at first, like the CultureAttribute or the AutoDataAttribute, but all that glitters is not gold. One of the biggest disadvantages of attributes is that they can only be parameterized with constant values. Sure, you can work around this issue by passing root values that are constants again, but it just feels inconvenient not to have full language support there. Another big disadvantage is that attributes know nothing about their order. So you cannot simply apply two attributes CreateDatabaseAttribute and CreateTableAttribute to the same test method and expect that the database is created before the tables. Note that this is extremely dangerous for developers who do not know about this characteristic of the CLR. In the worst case, the test will alternate between passing and failing without any change to the code.
Independence from inheritance is obviously a goal of test frameworks like NUnit. I have never quite understood this argument, because developers can save so much effort with a well-designed base class. One of the easiest things a generic base class Test<T> could do is declare a field of type T holding the subject instance. This is of course not the only good reason for having a test base class, but I'll leave it to your journey through the project to discover more opportunities.
You may ask why I didn't collaborate with existing projects. As of now, I've finished the major part of TestFx; it has a console runner, ReSharper support, and a so-called extension with an extensible fluent interface. Also, there are many features slumbering in my head, waiting to be realized. I can absolutely guarantee that some of these features would have required huge efforts in terms of implementation or persuasiveness (i.e., because of necessary changes) if they were brought to a different test framework; some other features may even be impossible to deliver. I cannot say that I knew this from the beginning, but as the headline states, I was uncertain.
When we were moving away from NUnit in our team, we evaluated various alternatives and noticed that there are plenty with really good ideas and intents. Unfortunately, most of them fail to offer native runner support, i.e., they rely on NUnit or xUnit.net. I saw this as a challenge: I intended to be more durable and to pursue my own ideas (often in collaboration with other developers). Maintaining your own open-source project is a great opportunity to learn about new aspects of development and to broaden your general perspective.