Reference Testing Exercise 2 (pytest flavour)

Posted on Thu 31 October 2019 in TDDA

This exercise (video 2m 58s) shows a powerful way to run only a single test, or some subset of tests, by using the @tag decorator available in the TDDA library. This is useful for speeding up the test cycle and allowing you to focus on a single test, or a few tests. We will also see, in the next exercise, how it can be used to update test results more easily and safely when expected behaviour changes.

(If you do not currently use pytest for writing tests, you might prefer the unittest-flavoured version of this exercise, since unittest is in Python's standard library.)

Prerequisites

★ You need to have the TDDA Python library (version 1.0.31 or newer) installed (see installation). Use

tdda version

to check the version that you have.
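
The output is simply the installed version number; for example (yours will likely differ):

$ tdda version
1.0.31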

Step 1: Copy the exercises (if you don't already have them)

You need to change to some directory in which you're happy to create three directories with data. We use ~/tmp for this. Then copy the example code.

$ cd ~/tmp
$ tdda examples    # copy the example code

Step 2: Go to the exercise files and examine them:

$ cd referencetest_examples/exercises-pytest/exercise2  # Go to exercise2

As in the first exercise, you should have at least the following four files:

$ ls
conftest.py expected.html   generators.py   test_all.py
  • conftest.py is configuration to extend pytest with referencetest capabilities (sketched just below this list),
  • expected.html contains the expected output from one test,
  • generators.py contains the code to be tested,
  • test_all.py contains the tests.
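
For the curious, conftest.py does little more than pull in pytest hooks and the ref fixture from the TDDA library. It should contain something like the following sketch (the exact contents may vary between versions of the library):

from tdda.referencetest.pytestconfig import (pytest_addoption,
                                             pytest_collection_modifyitems,
                                             set_default_data_location,
                                             ref)

set_default_data_location('.')   # look for reference files in this directory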

If you look at test_all.py, you'll see it contains five test functions. Only one of them (testExampleStringGeneration) is useful; the others make manifestly true assertions, and most of them deliberately waste time to simulate annoyingly slow tests.

import time

from generators import generate_string

def testZero():
    assert True

def testOne():
    time.sleep(1)
    assert 1 == 1

def testExampleStringGeneration(ref):
    actual = generate_string()
    ref.assertStringCorrect(actual, 'expected.html')

def testTwo():
    time.sleep(2)
    assert 2 == 2

def testThree():
    time.sleep(3)
    assert 3 == 3

Step 3: Run the tests, which should be slow and produce one failure

$ pytest           #  This will work with Python 3 or Python 2

When you run the tests, you should get a single failure, from the one non-trivial test, testExampleStringGeneration.

The output will be something like:

============================= test session starts ==============================
test_all.py ..F..

[...details of test failure...]

====================== 1 failed, 4 passed in 6.17 seconds ======================

We get a test failure because we haven't added the ignore_substrings parameter that, as we saw in Exercise 1, is needed for this test to pass.
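
The eventual fix, covered in Exercise 1 rather than here, is to tell assertStringCorrect which varying substrings to ignore; the substrings below are illustrative:

def testExampleStringGeneration(ref):
    actual = generate_string()
    ref.assertStringCorrect(actual, 'expected.html',
                            ignore_substrings=['Copyright', 'Version'])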

The tests should take slightly over 6 seconds in total to run, because of the three annoyingly slow tests with sleep statements in them—testOne, testTwo and testThree. (If you're not annoyed by a 6-second delay, increase the sleep time in one of the "sleepy" tests until you are annoyed!)

The point of this exercise is to show some simple but very useful functionality for running only tests on which we wish to focus, such as our failing test.

Step 4: Tag the failing test using @tag

The TDDA library includes a function called tag; this is a decorator function[1] that we can put before individual tests, to mark them as being of special interest temporarily.

Edit test_all.py to add an import statement that brings in tag from the TDDA library, and then decorate the definition of testExampleStringGeneration by preceding it with @tag, as follows:

import time

from generators import generate_string
from tdda.referencetest import tag

def testZero():
    assert True

def testOne():
    time.sleep(1)
    assert 1 == 1

@tag
def testExampleStringGeneration(ref):
    actual = generate_string()
    ref.assertStringCorrect(actual, 'expected.html')

Step 5: Run only the tagged test

Having tagged the failing test, if we run the tests again adding --tagged to the command, it will run only the tagged test, and take hardly any time. The (abbreviated) output should be something like

$ pytest --tagged
============================= test session starts ==============================
test_all.py F

[...details of test failure...]

=========================== 1 failed in 0.16 seconds ===========================

We can tag as many tests as we like, across any number of test files, to run a subset of tests, rather than a single one.
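
For example, tagging testOne as well means that --tagged now runs exactly two tests:

@tag
def testOne():
    time.sleep(1)
    assert 1 == 1

@tag
def testExampleStringGeneration(ref):
    actual = generate_string()
    ref.assertStringCorrect(actual, 'expected.html')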

Step 6: Locating @tag decorators

In a typical debugging or test development cycle in which you have been using the @tag decorator to focus on just a few failing tests, you might end up with @tag decorations scattered across several files, perhaps in multiple directories.
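
It's easy enough to find them by hand: a recursive grep from the root of your test tree lists every file and line on which @tag appears.

$ grep -rn '@tag' .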

Although grep works perfectly well, the library can also do this for you: if you pass the --istagged flag instead of running the tests, it will report which tests, in which files, are tagged. So in our case:

$ pytest --istagged
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-4.4.0, py-1.8.0, pluggy-0.9.0
rootdir: /Users/njr/tmp/referencetest_examples/exercises-pytest/exercise2
collecting ...
test_all.testExampleStringGeneration
collected 5 items

========================= no tests ran in 0.01 seconds =========================

Obviously, in the case of a single test file, this is not a big deal, but if you have dozens or hundreds of source files, in a directory hierarchy, and have tagged a few functions across them, it becomes significantly more helpful.

Recap: What we have seen

This simple exercise has shown how we can easily run subsets of tests by tagging them and then using --tagged to run only tagged tests.

In this case, the motivation was simply to save time and reduce clutter in the output, focusing on one test, or a small number of tests.

In Exercise 3, we will see how this combines with the ability to automatically regenerate updated reference outputs to make for a safe and efficient way to update tests after code changes.


  1. Decorator functions in Python are functions that are used to transform other functions: they take a function as an argument and return a new function that modifies the original in some way. Our decorator function tag is invoked by writing @tag on the line before a function (or class) definition, and the effect of this is that the function returned by tag replaces the function (or class) it precedes. In our case, all @tag does is set an attribute on the function in question so that the TDDA reference test framework can identify it as a tagged function, and choose to run only tagged tests when so requested.
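
In outline, such a decorator can be as simple as the sketch below; the attribute name is invented for illustration, and the library's real implementation differs in detail:

def tag(test):
    # Mark the function (or class) so that a test framework can later
    # recognise it as tagged. '_tagged' is an invented attribute name.
    test._tagged = True
    return test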