The Python track uses a generator script and Jinja2 templates for creating test files from the canonical data.
Test generation requires a local copy of the problem-specifications repository.
The script should be run from the root `python` directory so that it can correctly access the exercises. Run `bin/generate_tests.py --help` for usage information.
Test templates support Jinja2 syntax, and have the following context variables available from the canonical data:
- `exercise`: The hyphenated name of the exercise (ex: `two-fer`)
- `version`: The canonical data version (ex: `1.2.0`)
- `cases`: A list of case objects, or a list of `cases` lists. For the exact structure for the exercise you're working on, please consult its `canonical-data.json`.
- `has_error_case`: Indicates whether any test case expects an error to be thrown (ex: `False`)
- `additional_cases`: Similar in structure to `cases`, but populated from the exercise's `.meta/additional_tests.json` file if one exists (for an example, see `exercises/word-count/.meta/additional_tests.json`)
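As a rough illustration of the shape of this context, the sketch below builds a comparable dict by hand. The field names come from the list above; the case contents are a made-up two-fer-style example, not real canonical data, and the error-detection rule (an `{"error": ...}` object in `expected`) is an assumption based on the usual canonical-data convention:

```python
# Hypothetical canonical data for illustration only -- consult the real
# canonical-data.json for the exercise you are working on.
canonical = {
    "exercise": "two-fer",
    "version": "1.2.0",
    "cases": [
        {
            "description": "no name given",
            "property": "twoFer",
            "input": {"name": None},
            "expected": "One for you, one for me.",
        }
    ],
}

def expects_error(case):
    # Canonical data typically marks error cases with {"error": ...} as "expected".
    return isinstance(case.get("expected"), dict) and "error" in case["expected"]

# The template context assembled from the canonical data above.
context = {
    "exercise": canonical["exercise"],
    "version": canonical["version"],
    "cases": canonical["cases"],
    "has_error_case": any(expects_error(case) for case in canonical["cases"]),
}
```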
Additionally, the following template filters are added for convenience:
- `to_snake`: Converts a string to snake_case (ex: `{{ "CamelCaseString" | to_snake }}` -> `camel_case_string`)
- `camel_case`: Converts a string to CamelCase (ex: `{{ "snake_case_string" | camel_case }}` -> `SnakeCaseString`)
- `error_case`: Checks whether a test case expects an error to be thrown (ex: `{% for case in cases %}{% if case is error_case %}`)
- `regex_replace`: Regex string replacement (ex: `{{ "abc123" | regex_replace("\\d", "D") }}` -> `abcDDD`)
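These filters behave like ordinary string functions. A minimal Python sketch of plausible implementations (an illustration of the behavior described above, not the generator's actual code):

```python
import re

def to_snake(string):
    # Insert an underscore before each capital that follows a lowercase
    # letter or digit, normalize hyphens/spaces, then lowercase.
    s = re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", string)
    return re.sub(r"[-\s]+", "_", s).lower()

def camel_case(string):
    # Split on underscores, hyphens, or spaces and capitalize each word.
    return "".join(word.capitalize() for word in re.split(r"[_\-\s]+", string))

def regex_replace(string, pattern, replacement):
    # Thin wrapper over re.sub, matching the filter's argument order.
    return re.sub(pattern, replacement, string)
```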
- General-use macros for highly repeated template structures are defined in `config/generator_macros.j2`. These may be imported with the following:

  ```
  {%- import "generator_macros.j2" as macros with context -%}
  ```

- All test templates should end with `{{ macros.footer() }}`.
- All Python class names should be in CamelCase (ex: `TwoFer`). Convert using `{{ "two-fer" | camel_case }}`.
- All Python module and function names should be in snake_case (ex: `high_scores`, `personal_best`). Convert using `{{ "personalBest" | to_snake }}`.
- Track-specific tests are defined in the optional file `.meta/additional_tests.json`. The JSON object defined in this file must have a single key, `cases`, which has the same structure as `cases` in `canonical-data.json`.
- Track-specific tests should be placed after the canonical tests in test files.
- Track-specific tests should be marked in the test file with the following comment: `# Additional tests for this track`.
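A minimal `.meta/additional_tests.json` might look like the following. The single case here is a made-up illustration; real entries mirror the case structure of the exercise's `canonical-data.json` (see `exercises/word-count/.meta/additional_tests.json` for a real example):

```json
{
  "cases": [
    {
      "description": "a track-specific edge case",
      "property": "twoFer",
      "input": {"name": null},
      "expected": "One for you, one for me."
    }
  ]
}
```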
Most templates will look something like this:

```
{%- import "generator_macros.j2" as macros with context -%}
{{ macros.header() }}

class {{ exercise | camel_case }}Test(unittest.TestCase):
    {% for case in cases -%}
    def test_{{ case["description"] | to_snake }}(self):
        value = {{ case["input"]["value"] }}
        expected = {{ case["expected"] }}
        self.assertEqual({{ case["property"] }}(value), expected)
    {% endfor %}

{{ macros.footer() }}
```
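For a sense of what such a template produces, here is a hand-written approximation of a generated test file for a two-fer-style case. This is an assumption about the rendered output, not literal generator output, and the `two_fer` stand-in is inlined only so the snippet is self-contained; a real generated file imports the student's solution instead:

```python
import unittest

def two_fer(name=None):
    # Stand-in for the student's solution, inlined for illustration.
    who = name if name is not None else "you"
    return "One for {}, one for me.".format(who)

# Roughly what the template above renders for a single case.
class TwoFerTest(unittest.TestCase):
    def test_no_name_given(self):
        value = None
        expected = "One for you, one for me."
        self.assertEqual(two_fer(value), expected)
```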
The names imported in `macros.header()` may be overridden by adding a list of alternate names to import (ex: `clock`):

```
{{ macros.header(["Clock"]) }}
```
On rare occasions, it may be necessary to filter out properties that are not tested in this track. The `header` macro also accepts an `ignore` argument (ex: `high-scores`):

```
{{ macros.header(ignore=["scores"]) }}
```
To implement a template for an exercise:

1. Create `.meta/template.j2` for the exercise you are implementing, and open it in your editor.
2. Copy and paste the example layout above into the template file and save it.
3. Make the appropriate changes to the template file until it produces valid test code, referencing the exercise's `canonical-data.json` for input names and case structure. Use the available macros to avoid re-writing standardized sections.
4. If you are implementing a template for an existing exercise, matching the exact structure of the existing test file is not a requirement, but minimizing differences will make PR review a much smoother process for everyone involved.

Additional resources:

- Starting with the Jinja documentation is highly recommended, but a complete reading is not strictly necessary.