
Pytest Tutorial


Installation

To install pytest, run the following from the terminal:

pip install -U pytest

Our first test run

Let’s create a first test file with a simple test function:

# content of test_sample.py
def func(x):
    return x + 1
def test_answer():
    assert func(3) == 5

pytest allows you to use the standard Python assert statement for verifying expectations and values in tests. If the test function and the function under test live in two different files, the function must be imported into the test file (see the sketch after the failure report below). If this assertion fails you will see the return value of the function call:

$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
test_sample.py F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________
    def test_answer():
>       assert func(3) == 5
E       assert 4 == 5
E        +  where 4 = func(3)

test_sample.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================

We got a failure report because our little func(3) call did not return 5. Running pytest -v gives a verbose report of the failure/success of each executed test. If you specify a message with the assertion like this:

assert  a % 2 == 0, "value was odd, should be even"

then no assertion introspection takes place at all and the message will simply be shown in the traceback.
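
As noted earlier, if the function under test lives in a separate file, import it into the test module. A minimal sketch, assuming func is defined in a hypothetical sample.py next to the test file:

# content of test_sample_import.py
# assumes func is defined in sample.py in the same directory (hypothetical)
from sample import func

def test_answer():
    assert func(3) == 4  # func(3) returns 3 + 1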

Running multiple tests

pytest will run all files of the form test_*.py or *_test.py in the current directory and its subdirectories. More generally, it follows standard test discovery rules. You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like:

import pytest
help(pytest)

Calling pytest through python -m pytest

New in version 2.0. You can invoke testing through the Python interpreter from the command line:

python -m pytest [...]

This is almost equivalent to invoking the command line script pytest [...] directly, except that calling via python will also add the current directory to sys.path.

Grouping multiple tests in a class

Once you start to have more than a few tests it often makes sense to group tests logically, in classes and modules. Let’s write a class containing two tests:

# content of test_class.py
class TestClass(object):
    def test_one(self):
        x = "this"
        assert 'h' in x
    def test_two(self):
        x = "hello"
        assert hasattr(x, 'check')  # deliberately fails: str has no 'check' attribute
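
pytest collects the class because its name starts with Test (and it has no __init__ method). You can run just this class by its node id:

pytest test_class.py::TestClass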

Making use of context-sensitive comparisons

New in version 2.0. pytest has rich support for providing context-sensitive information when it encounters comparisons. For example:

# content of test_assert2.py
def test_set_comparison():
    set1 = set("1308")
    set2 = set("8035")
    assert set1 == set2

If you run this module:

$ pytest test_assert2.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
test_assert2.py F [100%]
================================= FAILURES =================================
___________________________ test_set_comparison ____________________________
    def test_set_comparison():
        set1 = set("1308")
        set2 = set("8035")
>       assert set1 == set2
E       AssertionError: assert {'0', '1', '3', '8'} == {'0', '3', '5', '8'}
E         Extra items in the left set:
E         '1'
E         Extra items in the right set:
E         '5'
E         Use -v to get the full diff

test_assert2.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================

Specifying tests / selecting tests

Pytest supports several ways to run and select tests from the command-line.

Run tests in a module

pytest test_mod.py

Run tests in a directory

pytest testing/

Run tests by keyword expressions

pytest -k "MyClass and not method"

This will run tests which contain names that match the given string expression, which can include Python operators that use filenames, class names and function names as variables. The example above will run TestMyClass.test_something but not TestMyClass.test_method_simple. Each collected test is assigned a unique node id which consists of the module filename followed by specifiers like class names, function names and parameters from parametrization, separated by :: characters. To run a specific test within a module:

pytest test_mod.py::test_func
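
Node ids extend to test methods inside classes as well; to run a single method of a test class:

pytest test_mod.py::TestClass::test_method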

Calling pytest from Python code

New in version 2.0. You can invoke pytest from Python code directly:

pytest.main() 

This acts as if you were calling "pytest" from the command line. It will not raise SystemExit but return the exit code instead. You can pass in options and arguments:

pytest.main(['-x', 'mytestdir'])
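
Since pytest.main() returns the exit code rather than raising SystemExit, a wrapper script can propagate it to the shell; a minimal sketch (the directory name mytestdir is just an example):

import sys
import pytest

# run the tests in mytestdir, stopping at the first failure (-x),
# then exit with pytest's own exit code (0 means all tests passed)
sys.exit(pytest.main(['-x', 'mytestdir']))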

You can specify additional plugins to pytest.main:

# content of myinvoke.py
import pytest

class MyPlugin(object):
    def pytest_sessionfinish(self):
        print("*** test run reporting finishing")

pytest.main(["-qq"], plugins=[MyPlugin()])

Marking test functions with attributes

By using the pytest.mark helper you can easily set metadata on your test functions. There are some builtin markers, for example:

• skip - always skip a test function
• skipif - skip a test function if a certain condition is met
• xfail - produce an "expected failure" outcome if a certain condition is met
• parametrize - perform multiple calls to the same test function

Skipping test functions

A skip means that you expect your test to pass only if some conditions are met, otherwise pytest should skip running the test altogether. Common examples are skipping windows-only tests on non-windows platforms, or skipping tests that depend on an external resource which is not available at the moment (for example a database).

The simplest way to skip a test function is to mark it with the skip decorator which may be passed an optional reason:

import pytest

@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown():
    ...

Alternatively, it is also possible to skip imperatively during test execution or setup by calling the pytest.skip(reason) function:

def test_function():
    if not valid_config():
        pytest.skip("unsupported configuration")

skipif

New in version 2.0. If you wish to skip something conditionally then you can use skipif instead. Here is an example of marking a test function to be skipped when run on an interpreter older than Python 3.6:

import sys
import pytest

@pytest.mark.skipif(sys.version_info < (3, 6),
                    reason="requires python3.6")
def test_function():
    ...

If the condition evaluates to True during collection, the test function will be skipped, with the specified reason appearing in the summary when using -rs.

Skip all test functions of a class or module

You can use the skipif marker (as any other marker) on classes:

@pytest.mark.skipif(sys.platform == 'win32',
                    reason="does not run on windows")
class TestPosixCalls(object):
    def test_function(self):
        "will not be setup or run under 'win32' platform"

Skip all tests in a module if some import is missing:

pexpect = pytest.importorskip('pexpect')
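
pytest.importorskip returns the imported module when it is available, so the usual pattern is to call it at module level and keep the result; a minimal sketch (pexpect is just an example dependency):

# content of test_pexpect_feature.py
import pytest

# every test in this module is skipped if pexpect cannot be imported
pexpect = pytest.importorskip('pexpect')

def test_spawn_available():
    assert hasattr(pexpect, 'spawn')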

Custom Markers

You can create your own custom markers and mark each test function according to its characteristic features. For example, if we want to separate tests that run on Windows from tests that run on Mac, then in the tests we specify:

import pytest

@pytest.mark.windows
def test_windows_1():
    assert True

@pytest.mark.mac
def test_mac_1():
    assert True

Now if we want to run only the Mac tests, we can write:

pytest -m mac

To run all tests except the Windows ones:

pytest -m "not windows"
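
Custom markers can also be registered in pytest.ini, which documents them in pytest --markers output; a sketch with hypothetical descriptions:

# content of pytest.ini
[pytest]
markers =
    windows: tests that run only on Windows
    mac: tests that run only on Mac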

XFail

You can use the xfail marker to indicate that you expect a test to fail:

@pytest.mark.xfail
def test_function():
    ...

This test will be run but no traceback will be reported when it fails. Instead terminal reporting will list it in the “expected to fail” (XFAIL) or “unexpectedly passing” (XPASS) sections. Alternatively, you can also mark a test as XFAIL from within a test or setup function imperatively:

def test_function():
    if not valid_config():
        pytest.xfail("failing configuration (but should work)")

This will unconditionally make test_function XFAIL. Note that, unlike with the marker, no other code is executed after the pytest.xfail call; that is because it is implemented internally by raising a known exception. Here's the signature of the xfail marker (not the function), using Python 3 keyword-only arguments syntax:

def xfail(condition=None, *, reason=None, raises=None, run=True, strict=False):

Both XFAIL and XPASS don’t fail the test suite, unless the strict keyword-only parameter is passed as True. If a test should be marked as xfail and reported as such but should not be even executed, use the run parameter as False.
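
For example, a sketch combining these parameters (the reasons are illustrative):

import pytest

# report as xfail without executing the test at all
@pytest.mark.xfail(run=False, reason='crashes the interpreter')
def test_crasher():
    ...

# with strict=True an unexpected pass (XPASS) fails the test suite
@pytest.mark.xfail(strict=True, reason='known broken')
def test_known_broken():
    assert False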

Pytest Fixtures

The purpose of test fixtures is to provide a fixed baseline upon which tests can reliably and repeatedly execute. pytest fixtures offer dramatic improvements over the classic xUnit style of setup/teardown functions:

• fixtures have explicit names and are activated by declaring their use from test functions, modules, classes or whole projects.
• fixtures are implemented in a modular manner, as each fixture name triggers a fixture function which can itself use other fixtures.

Fixtures as Function arguments

Test functions can receive fixture objects by naming them as an input argument. For each argument name, a fixture function with that name provides the fixture object. Fixture functions are registered by marking them with @pytest.fixture. Let’s look at a simple self-contained test module containing a fixture and a test function using it:

# content of ./test_smtpsimple.py
import smtplib

import pytest

@pytest.fixture
def smtp():
    return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)

def test_ehlo(smtp):
    response, msg = smtp.ehlo()
    assert response == 250
    assert 0  # for demo purposes

Here, the test_ehlo needs the smtp fixture value. pytest will discover and call the @pytest.fixture marked smtp fixture function. Here is the exact protocol used by pytest to call the test function this way:

  1. pytest finds the test_ehlo because of the test_ prefix. The test function needs a function argument named smtp. A matching fixture function is discovered by looking for a fixture-marked function named smtp.
  2. smtp() is called to create an instance.
  3. test_ehlo() is called and fails in the last line of the test function.

Fixtures: a prime example of dependency injection

Fixtures allow test functions to easily receive and work against specific pre-initialized application objects without having to care about import/setup/cleanup details. It’s a prime example of dependency injection where fixture functions take the role of the injector and test functions are the consumers of fixture objects. If during implementing your tests you realize that you want to use a fixture function from multiple test files you can move it to a conftest.py file.

Scope: sharing a fixture instance across tests in a class, module or session

Fixtures requiring network access depend on connectivity and are usually time-expensive to create. Extending the previous example, we can add a scope='module' parameter to the @pytest.fixture invocation to cause the decorated smtp fixture function to only be invoked once per test module (the default is to invoke once per test function). Multiple test functions in a test module will thus each receive the same smtp fixture instance, thus saving time. The next example puts the fixture function into a separate conftest.py file so that tests from multiple test modules in the directory can access the fixture function:

# content of conftest.py
import pytest
import smtplib
@pytest.fixture(scope="module")
def smtp():
    return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)

The possible values for scope are "function" (the default), "class", "module" and "session".

Fixture finalization / executing teardown code

pytest supports execution of fixture specific finalization code when the fixture goes out of scope. By using a yield statement instead of return, all the code after the yield statement serves as the teardown code:

# content of conftest.py
import smtplib
import pytest

@pytest.fixture(scope="module")
def smtp():
    smtp = smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
    yield smtp # provide the fixture value  
    print("teardown smtp") 
    smtp.close()

The print and smtp.close() statements will execute when the last test in the module has finished execution, regardless of the exception status of the tests.

Parametrizing fixtures

Fixture functions can be parametrized, in which case they will be called multiple times, each time executing the set of dependent tests, i.e. the tests that depend on this fixture. Test functions usually do not need to be aware of their re-running. Fixture parametrization helps to write exhaustive functional tests for components which themselves can be configured in multiple ways. Extending the previous example, we can flag the fixture to create two smtp fixture instances, which will cause all tests using the fixture to run twice. The fixture function gets access to each parameter through the special request object:

# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module",
                params=["smtp.gmail.com", "mail.python.org"])
def smtp(request):
    smtp = smtplib.SMTP(request.param, 587, timeout=5)
    yield smtp
    print("finalizing %s" % smtp)
    smtp.close()

The main change is the declaration of params with @pytest.fixture, a list of values for each of which the fixture function will execute and can access a value via request.param. pytest will build a string that is the test ID for each fixture value in a parametrized fixture, e.g. test_ehlo[smtp.gmail.com] and test_ehlo[mail.python.org] in the above examples. These IDs can be used with -k to select specific cases to run, and they will also identify the specific case when one is failing.
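
For example, to run only the Gmail case by its full node id (assuming the test function test_ehlo lives in a hypothetical test_module.py, and quoting so the shell leaves the brackets alone):

pytest "test_module.py::test_ehlo[smtp.gmail.com]"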

Another way to parametrize is using @pytest.mark.parametrize

The builtin pytest.mark.parametrize decorator enables parametrization of arguments for a test function. Here is a typical example of a test function that implements checking that a certain input leads to an expected output:

# content of test_expectation.py
import pytest

@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 42),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected

Here, the @parametrize decorator defines three different (test_input,expected) tuples so that the test_eval function will run three times using them in turn:

$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_expectation.py ..F [100%]
================================= FAILURES =================================
____________________________ test_eval[6*9-42] _____________________________

test_input = '6*9', expected = 42

    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(test_input, expected):
>       assert eval(test_input) == expected
E       AssertionError: assert 54 == 42
E        +  where 54 = eval('6*9')

test_expectation.py:8: AssertionError
==================== 1 failed, 2 passed in 0.12 seconds ====================
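
parametrize decorators can also be stacked to run a test for every combination of several arguments; a minimal sketch:

import pytest

@pytest.mark.parametrize('x', [0, 1])
@pytest.mark.parametrize('y', [2, 3])
def test_combinations(x, y):
    # runs four times: (0, 2), (1, 2), (0, 3), (1, 3)
    assert x + y >= 2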

Classic xunit-style setup

Module level setup/teardown

If you have multiple test functions and test classes in a single module you can optionally implement the following fixture methods which will usually be called once for all the functions:

def setup_module(module):
    """setup any state specific to the execution of the given module."""

def teardown_module(module):
    """teardown any state that was previously setup with a setup_module
    method.
    """

Class level setup/teardown

Similarly, the following methods are called at class level before and after all test methods of the class are called:

@classmethod
def setup_class(cls):
    """setup any state specific to the execution of the given class (which
    usually contains tests).
    """

@classmethod
def teardown_class(cls):
    """teardown any state that was previously setup with a call to
    setup_class.
    """

Method and function level setup/teardown

Similarly, the following methods are called around each method invocation:

def setup_method(self, method):
    """setup any state tied to the execution of the given method in a
    class. setup_method is invoked for every test method of a class.
    """

def teardown_method(self, method):
    """teardown any state that was previously setup with a setup_method
    call.
    """

If you would rather define test functions directly at module level you can also use the following functions to implement fixtures:

def setup_function(function):
    """setup any state tied to the execution of the given function.
    Invoked for every test function in the module.
    """

def teardown_function(function):
    """teardown any state that was previously setup with a setup_function
    call.
    """

Asserting that a certain exception is raised

If you want to assert that some code raises an exception you can use the raises helper:

# content of test_sysexit.py
import pytest
def f():
    raise SystemExit(1)
def test_mytest():
    with pytest.raises(SystemExit):
        f()

Running it, this time in "quiet" reporting mode:

$ pytest -q test_sysexit.py
. [100%]
1 passed in 0.12 seconds
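
pytest.raises can also hand back the exception for further checks through the as clause; a minimal sketch building on the example above:

import pytest

def f():
    raise SystemExit(1)

def test_exit_code():
    with pytest.raises(SystemExit) as excinfo:
        f()
    # excinfo.value is the exception instance that was raised
    assert excinfo.value.code == 1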

Requesting a unique temporary directory

For functional tests one often needs to create some files and pass them to application objects. pytest provides builtin fixtures/function arguments which allow you to request arbitrary resources, for example a unique temporary directory:

# content of test_tmpdir.py
def test_needsfiles(tmpdir):
    print(tmpdir)
    assert 0

We list the name tmpdir in the test function signature and pytest will look up and call a fixture factory to create the resource before performing the test function call. Let's just run it:

$ pytest -q test_tmpdir.py
F [100%]
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________

tmpdir = local('PYTEST_TMPDIR/test_needsfiles0')

    def test_needsfiles(tmpdir):
        print(tmpdir)
>       assert 0
E       assert 0

test_tmpdir.py:3: AssertionError
--------------------------- Captured stdout call ---------------------------
PYTEST_TMPDIR/test_needsfiles0
1 failed in 0.12 seconds

Before the test ran, a unique-per-test-invocation temporary directory was created, as the traceback above shows.
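
tmpdir is a py.path.local object, so tests can create and inspect files in it directly; a minimal sketch:

# content of test_tmpdir_files.py
def test_create_file(tmpdir):
    # make a subdirectory and write a file inside the unique temp dir
    p = tmpdir.mkdir('sub').join('hello.txt')
    p.write('content')
    assert p.read() == 'content'
    assert len(tmpdir.listdir()) == 1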

Getting help on version, option names, environment variables

pytest --version # shows where pytest was imported from
pytest --fixtures # show available builtin function arguments
pytest -h | --help # show help on command line and config file options

Stopping after the first (or N) failures

To stop the testing process after the first (or N) failures:

pytest -x          # stop after first failure
pytest --maxfail=2 # stop after two failures

Dropping to PDB (Python Debugger) on failures

Python comes with a builtin debugger called PDB. pytest allows one to drop into the PDB prompt via a command line option:

pytest --pdb

This will invoke the Python debugger on every failure. Often you might only want to do this for the first failing test to understand a certain failure situation:

pytest -x --pdb # drop to PDB on first failure, then end test session
pytest --pdb --maxfail=3 # drop to PDB for first three failures

Profiling test execution duration

To get a list of the slowest 10 test durations:

pytest --durations=10

Setting capturing methods or disabling capturing

You can influence output capturing mechanisms from the command line:

pytest -s # disable all capturing
pytest --capture=sys # replace sys.stdout/stderr with in-mem files
pytest --capture=fd # also point filedescriptors 1 and 2 to temp file

Warnings Capture

Starting from version 3.1, pytest now automatically catches warnings during test execution and displays them at the end of the session:

# content of test_show_warnings.py
import warnings
def api_v1():
    warnings.warn(UserWarning("api v1, should use functions from v2"))
    return 1
def test_one():
    assert api_v1() == 1

Running pytest now produces this output:

$ pytest test_show_warnings.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
test_show_warnings.py . [100%]
============================= warnings summary =============================
test_show_warnings.py::test_one
$REGENDOC_TMPDIR/test_show_warnings.py:4: UserWarning: api v1, should use functions from v2
  warnings.warn(UserWarning("api v1, should use functions from v2"))
-- Docs: http://doc.pytest.org/en/latest/warnings.html
=================== 1 passed, 1 warnings in 0.12 seconds ===================
Pytest by default catches all warnings except for DeprecationWarning and PendingDeprecationWarning. The -W flag can be passed to control which warnings will be displayed or even turn them into errors:

$ pytest -q test_show_warnings.py -W error::UserWarning
F [100%]
================================= FAILURES =================================
_________________________________ test_one _________________________________

Asserting warnings with the warns function

You can check that code raises a particular warning using pytest.warns, which works in a similar manner to raises:

import warnings
import pytest
def test_warning():
    with pytest.warns(UserWarning):
        warnings.warn("my warning", UserWarning)
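
The test will fail if the warning in question is not raised. pytest.warns also records the warnings it captures, so they can be inspected after the with block; a minimal sketch:

import warnings
import pytest

def test_warning_message():
    with pytest.warns(UserWarning) as record:
        warnings.warn('my warning', UserWarning)
    # each recorded item wraps the original warning message
    assert len(record) == 1
    assert str(record[0].message) == 'my warning'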

Doctest integration for modules and test files

By default all files matching the test*.txt pattern will be run through the Python standard doctest module. You can change the pattern by issuing:

pytest --doctest-glob='*.rst'

You can also trigger running of doctests from docstrings in all Python modules (including regular Python test modules):

pytest --doctest-modules

You can make these changes permanent in your project by putting them into a pytest.ini file like this:

# content of pytest.ini
[pytest]
addopts = --doctest-modules
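
With --doctest-modules enabled, docstring examples in ordinary modules are collected and run as tests; a minimal sketch of a hypothetical mymodule.py:

# content of mymodule.py
def square(x):
    """Return x squared.

    >>> square(3)
    9
    """
    return x * x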
