Reference Testing Exercise 2 (unittest flavour)
Posted on Wed 30 October 2019 in TDDA • Tagged with reference test, exercise, screencast, video, unittest
This exercise (video 3m 34s) shows a powerful way to run only a single test, or some subset of tests, by using the @tag decorator available in the TDDA library. This is useful for speeding up the test cycle, allowing you to focus on a single test or a few tests. We will also see, in the next exercise, how it can be used to update test results more easily and safely when expected behaviour changes.

(If you use pytest for writing tests, you might prefer the pytest-flavoured version of this exercise.)
Prerequisites
★ You need to have the TDDA Python library (version 1.0.31 or newer) installed; see installation. Use

tdda version

to check the version that you have.
Step 1: Copy the exercises (if you don't already have them)
You need to change to some directory in which you're happy to create three directories with data. We use ~/tmp for this. Then copy the example code.
$ cd ~/tmp
$ tdda examples # copy the example code
Step 2: Go to the exercise files and examine them:
$ cd referencetest_examples/exercises-unittest/exercise2 # Go to exercise2
As in the first exercise, you should have at least the following three files:
$ ls
expected.html generators.py test_all.py
expected.html contains the expected output from one test, generators.py contains the code to be tested, and test_all.py contains the tests.
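(For orientation, here is a plausible sketch of generators.py; the real file ships with the tdda examples and differs in detail. As in Exercise 1, the essential point is that the generated HTML contains lines, such as a version number and a copyright notice, that legitimately vary, so a naive comparison against expected.html fails.)

def generate_string():
    # Sketch only: the real example produces a fuller HTML page.
    # The Copyright and Version lines stand in for output that varies
    # from run to run and so needs to be ignored when comparing.
    return '''<!DOCTYPE html>
<html>
<head><title>Example</title></head>
<body>
<p>Copyright (c) Example Corp, 2019</p>
<p>Version 1.0.0</p>
<p>Hello, world.</p>
</body>
</html>
'''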
If you look at test_all.py, you'll see it contains two test classes with five tests between them. Only one of the tests (testExampleStringGeneration) is useful; the others make manifestly true assertions and deliberately waste time to simulate annoyingly slow tests.
import time

from tdda.referencetest import ReferenceTestCase, tag

from generators import generate_string


class TestQuickThings(ReferenceTestCase):
    def testExampleStringGeneration(self):
        actual = generate_string()
        self.assertStringCorrect(actual, 'expected.html')

    def testZero(self):
        self.assertIsNone(None)


class TestSuperSlowThings(ReferenceTestCase):
    def testOne(self):
        time.sleep(1)
        self.assertEqual(1, 1)

    def testTwo(self):
        time.sleep(2)
        self.assertEqual(2, 2)

    def testThree(self):
        time.sleep(3)
        self.assertEqual(3, 3)


if __name__ == '__main__':
    # ReferenceTestCase.main() handles the extra -1/-0/--tagged options
    # described below.
    ReferenceTestCase.main()
Step 3: Run the tests, which should be slow and produce one failure
$ python test_all.py    # This will work with Python 3 or Python 2
When you run the tests, you should get a single failure: the non-trivial test testExampleStringGeneration from the class TestQuickThings.
The output will be:
F....
[...details of test failure...]
Ran 5 tests in 6.007s
FAILED (failures=1)
We get a test failure because we haven't added the ignore_substrings parameter that, as we saw in Exercise 1, is needed for it to pass.
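For reference, the fix would look something like the sketch below; the exact substrings to ignore depend on which lines of the generated HTML vary ('Copyright' and 'Version' here are illustrative choices, not necessarily those in the shipped example). We deliberately leave the test failing in this exercise, since a failing test is just what we want to focus on.

from tdda.referencetest import ReferenceTestCase
from generators import generate_string

class TestQuickThings(ReferenceTestCase):
    def testExampleStringGeneration(self):
        actual = generate_string()
        # Lines containing these (illustrative) substrings may differ
        # from the reference file expected.html without failing.
        self.assertStringCorrect(actual, 'expected.html',
                                 ignore_substrings=['Copyright', 'Version'])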
The tests should take slightly over 6 seconds in total to run, because of the annoyingly slow tests in TestSuperSlowThings.
(If you're not annoyed by a 6-second delay, increase the sleep time in
one of the "slow" tests until you are annoyed!)
The point of this exercise is to show some simple but very useful functionality for running only tests on which we wish to focus, such as our failing test.
Step 4: Tag the failing test using @tag
If you look at the import
statements, you'll see that as well as
ReferenceTestCase
we also import tag
.
This is a decorator function[1] that we can put before individual tests, or test classes, to indicate that they are temporarily of special interest.
Edit test_all.py
to decorate the failing test by adding @tag
on the
line before it, thus:
class TestQuickThings(ReferenceTestCase):
    @tag
    def testExampleStringGeneration(self):
        actual = generate_string()
        self.assertStringCorrect(actual, 'expected.html')

    def testZero(self):
        self.assertIsNone(None)
Step 5: Run only the tagged test
Having tagged the failing test, if we run the tests again adding
-1
(the digit one, for "single", not the letter ell)
to the command, it will run only the tagged test, and
take hardly any time. The (abbreviated) output should be something like:
$ python test_all.py -1
F
[...details of test failure...]
Ran 1 test in 0.006s
FAILED (failures=1)
You can also use --tagged
instead of -1
if you like more descriptive
flags.
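(For comparison, standard unittest can also run a single test if you name it in full:

$ python -m unittest test_all.TestQuickThings.testExampleStringGeneration

but you have to retype the full dotted path on every run, whereas tagging marks the tests once in the source and -1 then re-runs exactly that set.)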
We can tag as many tests as we like, across any number of test files,
and we can also tag whole classes
by placing the @tag
decorator before a test class definition.
So if we instead use:
@tag
class TestQuickThings(ReferenceTestCase):
    def testExampleStringGeneration(self):
        actual = generate_string()
        self.assertStringCorrect(actual, 'expected.html')

    def testZero(self):
        self.assertIsNone(None)
and run the tests with -1
, we will get output more like:
$ python test_all.py -1
F.
[...details of test failure...]
Ran 2 tests in 0.006s
FAILED (failures=1)
In this case, both the tests in our first test class were run, but no others (and, in particular, not our painfully slow tests!).
Step 6: Locating @tag decorators
In a typical debugging or test development cycle in which you have
been using the @tag
decorator to focus on just a few failing tests,
you might end up with @tag
decorations scattered across several
files, perhaps in multiple directories. (We're assuming here you have
test_all.py
or similar that imports all the other test classes so
you can easily run them all together.)
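Such a top-level file can be a simple aggregator, along the lines of the following sketch (the module and class names here are hypothetical):

# Hypothetical test_all.py that gathers tests from several modules.
# Importing the test classes into this namespace is enough for them
# all to be collected when the file is run.
from test_quick import TestQuickThings
from test_slow import TestSuperSlowThings

from tdda.referencetest import ReferenceTestCase

if __name__ == '__main__':
    # ReferenceTestCase.main() also understands -1/-0/--tagged.
    ReferenceTestCase.main()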
Although it's not hard to use grep or grep -r to find them, the library can do this for you. If you use the -0 flag (the digit zero, for "no tests"), or the --istagged flag, then instead of running the tests, the library will report which test classes in which files contain tagged tests. So in our case:
$ python test_all.py -0
produces:
__main__.TestQuickThings
Here, __main__ stands for the current file; other files would be referenced by their imported name (so a tagged class in a module imported as test_slow, say, would be reported as test_slow.TestSuperSlowThings).
Recap: What we have seen
This simple exercise has shown how we can easily run subsets of tests
by tagging them and then using the -1
flag (or --tagged
)
to run only tagged tests.
In this case, the motivation was simply to save time and reduce clutter in the output, focusing on one test, or a small number of tests.
In Exercise 3, we will see how this combines with the ability to regenerate updated reference outputs automatically, making for a safe and efficient way to update tests after code changes.
[1] Decorator functions in Python are functions that are used to transform other functions: they take a function as an argument and return a new function that modifies the original in some way. Our decorator function tag is called by writing @tag on the line before a function (or class) definition, and the effect of this is that the function returned by @tag replaces the function (or class) it precedes. In our case, all @tag does is set an attribute on the function in question so that the TDDA reference test framework can identify it as a tagged function, and choose to run only tagged tests when so requested. ↩
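To make that concrete, a minimal decorator in the same spirit might look like this (an illustrative sketch, not the actual TDDA implementation; the attribute name is invented):

def tag(obj):
    # Mark the function or class so that a test runner can later check
    # getattr(obj, '_tagged', False) and decide whether to run it.
    # (The attribute name here is hypothetical.)
    obj._tagged = True
    return obj

@tag
def testSomething():
    pass

assert getattr(testSomething, '_tagged', False)   # the mark is now set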