So I decided to add tests to my current project. I had Behaviour-Driven Development (BDD) tests in the previous Scala-based implementation, so I wanted to see whether I could get something similar for .NET.
I found Xunit.Gherkin.Quick, which uses Gherkin files to specify test cases (called features). These are natural-language-style scripts that translate to a set of function calls implemented in an accompanying test class. We’ll see how this works out.
Thus, I wrote the following feature file:
Feature: StoryNames
  As the client
  I can generate Url Names for Stories,
  so that I can use those in Urls.

  Scenario: Generate simple Url Name
    Given a scenario named "Helms Schlund"
    When I generate the urlname
    Then the urlname should be helms-schlund
And added this implementation class:
using Xunit;
using Xunit.Gherkin.Quick;

namespace GoodNight.Service.Domain.Test
{
  [FeatureFile("StoryNames.feature")]
  public sealed class StoryNames : Feature
  {
    private Domain.Story.Story story;

    private string urlname;

    [Given(@"a scenario named ""(.*)""")]
    public void CreateScenarioNamed(string name)
    {
      story = new Domain.Story.Story(name);
    }

    [When(@"I generate the urlname")]
    public void GenerateTheUrlname()
    {
      urlname = story.Urlname;
    }

    [Then(@"the urlname should be (.*)")]
    public void TheUrlnameShouldBe(string expected)
    {
      Assert.Equal(expected, this.urlname);
    }
  }
}
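The test above assumes a Domain.Story.Story class that exposes an Urlname property. The actual implementation isn’t shown here; a minimal sketch that would satisfy this one scenario might look like the following (the slug rules, lowercasing and replacing spaces with hyphens, are my assumption, not necessarily what GoodNight does):

```csharp
namespace GoodNight.Service.Domain.Story
{
  // Minimal sketch of the class under test; not the actual
  // GoodNight implementation, just enough to pass the scenario.
  public sealed class Story
  {
    public string Name { get; }

    public Story(string name)
    {
      Name = name;
    }

    // Assumed slug rules: lowercase everything, turn spaces
    // into hyphens, e.g. "Helms Schlund" -> "helms-schlund".
    public string Urlname =>
      Name.ToLowerInvariant().Replace(" ", "-");
  }
}
```

A real implementation would also have to decide what to do with umlauts, punctuation and other characters that don’t belong in a Url.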
The test runner finds the implementation class, which references the feature file through the FeatureFile annotation. Each step of a scenario is then matched, via the regular expressions in the Given/When/Then attributes, to a method on the class, with capture groups becoming method arguments.
I’m rather curious whether this works out for larger test bases. The feature files do seem to promise a better overview of existing test cases.
Addendum 1: One advantage is the reduced repetition. It is much simpler to repeat “Given an input of “[…]” When parsed by the parser” than to repeat the code that stores the input, creates and calls the parser, and checks the result for null. The savings are probably even larger for bigger code examples, I’d wager.
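To illustrate with the feature file from above: a second scenario can reuse the existing Given/When/Then steps verbatim, so only the values change and no new C# code is needed. This is a hypothetical addition, and it assumes the same simple slug rules (lowercase, spaces to hyphens):

```gherkin
  Scenario: Generate Url Name for a two-word title
    Given a scenario named "Der Zauberberg"
    When I generate the urlname
    Then the urlname should be der-zauberberg
```

All three step lines bind to the methods already defined in the StoryNames class, via the same attribute regexes.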