Getting Started with PyTest: A Powerful Testing Framework

PyTest: The Best Partner for Writing Test Cases

When it comes to testing frameworks, PyTest has long since become the go-to choice in the Python world. It not only solves the verbosity problem of unittest but also comes with a bunch of super useful features. Let’s take a look at this powerhouse in the testing field!

Installation and Experience

Without further ado, just install it:

pip install pytest

Let’s write the simplest test case to give it a try:

def add(a, b):
    return a + b

def test_add():
    assert add(1, 2) == 3
    assert add(-1, 1) == 0

See? Writing test cases is that simple! As long as the function name starts with test_, pytest can automatically find it. Running the tests is even easier; just type pytest in the terminal, and you’re done.

Friendly reminder: The test file name also has to match pytest’s default discovery pattern, test_*.py or *_test.py (for example test_demo.py), otherwise pytest won’t find your test cases~
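
If you want more detail while the tests run, pytest’s standard -v flag prints each test’s name and result, and you can point it at a specific file:

pytest -v test_demo.py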

Assertions Made Easy

Writing assertions in unittest is cumbersome: you have to call methods like self.assertEqual, which gets long-winded. With pytest, you can just use Python’s built-in assert, which is super comfortable:

def test_string():
    s = "hello, pytest"
    assert "hello" in s
    assert s.startswith("hello")
    assert len(s) == 13

When assertions fail, pytest will also tell you exactly what went wrong, such as the value of variables and the differences, which is very helpful.
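
To see that in action, here is a deliberately failing test (the dictionaries are made-up values, purely for illustration); pytest’s failure report points out the item that differs between the two:

def test_report_demo():
    expected = {"name": "pytest", "major_version": 8}
    actual = {"name": "pytest", "major_version": 7}
    # pytest's assertion rewriting prints both dicts and the differing item
    assert actual == expected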

Fixtures: Say Goodbye to Repetitive Setup

Sometimes many test cases require the same setup, like connecting to a database. Pytest’s fixtures are specifically designed to solve this problem:

import pytest

@pytest.fixture
def db_conn():
    # Pretend this is a database connection
    print("Connecting to the database...")
    conn = {"connected": True}
    yield conn
    print("Disconnecting...")

def test_db(db_conn):
    assert db_conn["connected"] == True

Did you see yield? This means that after the test ends, the cleanup code following it will be executed automatically, so you no longer have to manually close the database connection. How nice!
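
By default a fixture is created once per test, but if the setup is expensive you can widen its scope. scope="module" is a built-in option that creates the fixture once per test file; here is a minimal sketch (the connection dict is still just pretend):

import pytest

@pytest.fixture(scope="module")
def module_conn():
    # set up once for the whole test module
    conn = {"connected": True}
    yield conn
    # teardown runs after the last test in this module finishes

def test_first(module_conn):
    assert module_conn["connected"]

def test_second(module_conn):
    # reuses the very same connection object as test_first
    assert module_conn["connected"]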

Parameterization is Very Useful

We often need to test the same function with different inputs, and writing a separate test case for each one gets tedious. Pytest’s parametrized tests are made for exactly this:

@pytest.mark.parametrize("text,expected", [
    ("hello", 5),
    ("pytest", 6),
    ("", 0),
])
def test_len(text, expected):
    assert len(text) == expected

A single test function can handle all scenarios, and the code looks clean.
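
When one of the cases fails, pytest identifies it by its parameter values, which can get hard to read for long inputs. pytest.param lets you attach a friendly label to each case via id (the labels below are just ones I picked for illustration):

@pytest.mark.parametrize("text,expected", [
    pytest.param("hello", 5, id="plain-word"),
    pytest.param("", 0, id="empty-string"),
    pytest.param("a" * 100, 100, id="long-string"),
])
def test_len_labeled(text, expected):
    assert len(text) == expected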

Friendly reminder: Parametrized tests are very useful, but don’t get greedy; too many parameter combinations will slow down your test runs, so keep them focused on what matters.

Skipping Tests and Expected Failures

Sometimes certain test cases can’t run for now, or you expect them to fail, and pytest has you covered:

@pytest.mark.skip(reason="This feature is not implemented yet")
def test_future():
    pass

@pytest.mark.xfail
def test_known_bug():
    assert 1 / 0  # This will raise a zero division error

The skip decorator will skip this test, while xfail tells pytest that this test is expected to fail, and it won’t show up as a failure in the test report.
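
Two close relatives are also handy: skipif skips a test only when a condition holds, and xfail accepts a raises argument so the test only counts as an expected failure if it fails with that exact exception (the Windows check below is just an arbitrary example condition):

import sys

import pytest

@pytest.mark.skipif(sys.platform == "win32", reason="not supported on Windows")
def test_unix_only():
    assert True

@pytest.mark.xfail(raises=ZeroDivisionError)
def test_known_division_bug():
    assert 1 / 0  # expected to blow up with ZeroDivisionError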

PyTest has plenty of advanced features, such as parallel test execution, test coverage reporting, and a rich plugin system. But let’s stop here for today; these basics are already enough to write decent test cases. Remember one principle: write tests, but don’t make them too complex; simplicity and practicality are key!
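
Just as a quick teaser before we wrap up: parallel runs and coverage reports come from separate plugins, pytest-xdist and pytest-cov. Assuming both are installed, the commands look roughly like this (my_package is a placeholder for your own package name):

pip install pytest-xdist pytest-cov
pytest -n auto --cov=my_package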

Get into the habit of running the tests every time you modify the code. Finding bugs early beats fixing them after deployment!
