Runner API

smallex.runner

Core execution pipeline for SQL expectation tests.

FailureRowsConfig dataclass

Configuration controlling how failing rows are collected and reported.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `mode` | `str` | One of `none`, `terminal`, `csv`, or `both`. |
| `terminal_limit` | `int` | Number of rows to display per failing test in the terminal. |
| `csv_limit` | `int` | Maximum number of rows to persist per failing test in CSV. |
| `csv_dir` | `Path` | Target directory for CSV outputs. |

csv_enabled() -> bool

Return whether CSV failure exports should be produced.

terminal_enabled() -> bool

Return whether terminal row previews should be produced.

FailureRowsMode

Supported modes for failure row reporting.
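The four documented mode strings and the two `*_enabled()` helpers suggest a simple mapping from mode to output channels. A minimal sketch of that mapping — the enum members, field defaults, and helper bodies below are illustrative assumptions, not the library's confirmed implementation:

```python
from dataclasses import dataclass
from enum import Enum
from pathlib import Path


class FailureRowsMode(str, Enum):
    """Sketch of the four documented reporting modes."""
    NONE = "none"
    TERMINAL = "terminal"
    CSV = "csv"
    BOTH = "both"


@dataclass
class FailureRowsConfig:
    # Defaults are assumed for illustration only.
    mode: str = "terminal"
    terminal_limit: int = 10
    csv_limit: int = 1000
    csv_dir: Path = Path("failure_rows")

    def csv_enabled(self) -> bool:
        # CSV exports are plausibly produced for "csv" and "both".
        return self.mode in (FailureRowsMode.CSV, FailureRowsMode.BOTH)

    def terminal_enabled(self) -> bool:
        # Terminal previews are plausibly produced for "terminal" and "both".
        return self.mode in (FailureRowsMode.TERMINAL, FailureRowsMode.BOTH)
```

Deriving both flags from a single `mode` field keeps the CLI surface to one option while still letting callers branch on each output channel independently.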

TestResult dataclass

Outcome for one SQL expectation.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `case` | `SQLTestCase` | Source SQL test case metadata. |
| `passed` | `bool` | True when the query returns zero rows. |
| `row_count` | `int` | Number of rows fetched for this failure context. |
| `has_more_rows` | `bool` | True when rows exist beyond the configured fetch limits. |
| `columns` | `list[str]` | Result column names for failing query rows. |
| `sample_rows` | `list[tuple[object, ...]]` | Sample rows for terminal reporting. |
| `csv_path` | `Path \| None` | Generated CSV path when export is enabled. |
| `error_message` | `str \| None` | SQL error message when query execution fails. |

message: str | None property

Return user-authored message associated with this test case.

node_id: str property

Return pytest-like node id for terminal reporting.

path: Path property

Return source path for compatibility with existing callers.
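The `node_id` property is described as pytest-like. One plausible way to derive such an identifier from a case's source path and name — the `<path>::<name>` layout and the standalone helper below are assumptions for illustration, not the library's confirmed format:

```python
from pathlib import Path


def node_id(path: Path, case_name: str) -> str:
    # Hypothetical pytest-style "<file>::<test name>" identifier.
    # POSIX separators keep ids stable across platforms.
    return f"{path.as_posix()}::{case_name}"
```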

discover_sql_cases(tests_dir: Path) -> list[SQLTestCase]

Discover SQL files and parse all SQL test cases.

discover_sql_tests(tests_dir: Path) -> list[Path]

Discover SQL test scripts recursively under a directory.
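Recursive discovery of `*.sql` scripts can be sketched with `pathlib` alone; the sorting and file filter below are plausible choices, and the real implementation may differ:

```python
from pathlib import Path


def discover_sql_tests(tests_dir: Path) -> list[Path]:
    # Walk the directory recursively, keep only regular *.sql files,
    # and sort for a deterministic run order.
    return sorted(p for p in tests_dir.rglob("*.sql") if p.is_file())
```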

load_config(config_path: Path, env: str | None = None) -> DatabaseConfig

Load and validate CLI config from TOML file.

run_all(config_path: Path, tests_dir: Path, *, env: str | None = None, failure_rows: FailureRowsConfig | None = None) -> tuple[list[TestResult], int]

Run all SQL expectations and return results plus failed count.

run_sql_case(connection: ConnectionProtocol, case: SQLTestCase, *, failure_rows: FailureRowsConfig) -> TestResult

Execute one SQL test case and classify pass/fail with diagnostics.
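The docs state that a test passes when its query returns zero rows, and that fetch limits bound how many failing rows are collected. A self-contained sketch of that classification against a `sqlite3` connection — the stand-in dataclasses, the `fetch_limit` parameter, and the fetch-one-extra overflow check are assumptions about how the real `run_sql_case` might work:

```python
import sqlite3
from dataclasses import dataclass


@dataclass
class SQLTestCase:  # minimal stand-in for the documented case metadata
    name: str
    sql: str


@dataclass
class TestResult:  # subset of the documented TestResult fields
    case: SQLTestCase
    passed: bool
    row_count: int
    has_more_rows: bool
    columns: list[str]
    sample_rows: list[tuple[object, ...]]


def run_sql_case(connection: sqlite3.Connection, case: SQLTestCase,
                 *, fetch_limit: int = 10) -> TestResult:
    # Expectation convention: the test passes when the query returns zero rows.
    cursor = connection.execute(case.sql)
    rows = cursor.fetchmany(fetch_limit + 1)  # fetch one extra to detect overflow
    has_more = len(rows) > fetch_limit
    rows = rows[:fetch_limit]
    columns = [d[0] for d in cursor.description] if cursor.description else []
    return TestResult(case=case, passed=not rows, row_count=len(rows),
                      has_more_rows=has_more, columns=columns, sample_rows=rows)
```

Fetching `fetch_limit + 1` rows and discarding the extra is one cheap way to populate `has_more_rows` without issuing a separate `COUNT(*)` query.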