Development Guide

How to work on python-checkup itself.

Prerequisites

  • Python 3.12+
  • uv

Setup

git clone https://github.com/nabroleonx/python-checkup.git
cd python-checkup
uv sync --all-extras

Running tests

uv run pytest                    # full suite
uv run pytest tests/test_ruff.py -v  # single file
uv run pytest -k "test_name"    # single test by name

Project layout

python_checkup/
  cli.py                  # Click CLI entry point
  runner.py               # Parallel analyzer runner
  plan.py                 # ScanPlan: decides what to run
  models.py               # Core data types (Diagnostic, HealthReport, etc.)
  config.py               # pyproject.toml config loader
  analysis_request.py     # AnalysisRequest passed to every analyzer
  analyzer_catalog.py     # ANALYZER_CATALOG: maps names to classes
  discovery.py            # Python file discovery
  detection.py            # Framework and Python version detection
  cache.py                # Per-file result cache
  dedup.py                # Cross-analyzer deduplication
  diff.py                 # Git diff integration

  analyzers/
    registry.py           # Entry-point plugin loader
    ruff.py               # Ruff (quality, security, complexity)
    bandit.py             # Bandit (security)
    mypy.py               # mypy (type safety)
    basedpyright.py       # basedpyright (type safety, optional)
    radon.py              # Radon (complexity)
    vulture.py            # Vulture (dead code)
    deptry.py             # deptry (dependency hygiene)
    dependency_vulns.py   # OSV vulnerability scanner + advisory cache
    detect_secrets.py     # detect-secrets (security, optional)
    typos.py              # typos (quality, optional)
    cached.py             # CachedAnalyzer wrapper

  dependencies/
    discovery.py          # Lockfile/manifest discovery and parsing

  scoring/
    engine.py             # Weight redistribution + per-category scoring

  formatters/
    human.py              # Rich terminal output
    json_fmt.py           # JSON output

  mcp/
    server.py             # MCP server for AI editor integration

Architecture

Data flow

  1. CLI (cli.py) parses args, loads config, calls build_scan_plan()
  2. ScanPlan (plan.py) decides which analyzers and categories to run
  3. Runner (runner.py) executes analyzers in parallel via asyncio
  4. Each analyzer receives an AnalysisRequest and returns list[Diagnostic]
  5. Dedup merges overlapping findings across tools
  6. Scoring engine computes per-category scores with weight redistribution
  7. Formatter renders the HealthReport to terminal or JSON
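The steps above can be condensed into a small runnable sketch. Everything here is a stand-in — `Diagnostic` is reduced to a few fields and `fake_analyzer`, `run_pipeline` are hypothetical names, not the real python-checkup API — but it shows the shape of steps 3–5: concurrent execution via asyncio, then cross-tool dedup.

```python
import asyncio
from dataclasses import dataclass

# Minimal stand-in for the real Diagnostic type (see "Key types" below).
@dataclass(frozen=True)
class Diagnostic:
    file: str
    line: int
    rule: str
    message: str

async def fake_analyzer(name: str) -> list[Diagnostic]:
    # Stands in for one analyzer run; both tools report the same location
    # so the dedup step below has something to merge.
    return [Diagnostic("app.py", 1, f"{name}-001", f"finding from {name}")]

async def run_pipeline(analyzers: list[str]) -> list[Diagnostic]:
    # Step 3: execute all analyzers concurrently.
    results = await asyncio.gather(*(fake_analyzer(a) for a in analyzers))
    # Step 5: dedup overlapping findings by (file, line), first tool wins.
    seen: set[tuple[str, int]] = set()
    merged: list[Diagnostic] = []
    for diags in results:
        for d in diags:
            key = (d.file, d.line)
            if key not in seen:
                seen.add(key)
                merged.append(d)
    return merged

findings = asyncio.run(run_pipeline(["ruff", "bandit"]))
```

The real dedup logic in dedup.py is certainly more nuanced than a (file, line) key; this only illustrates where it sits in the flow.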

Key types

  • Diagnostic -- a single finding (file, line, severity, rule, message, fix)
  • HealthReport -- the top-level result (score, label, category scores, diagnostics)
  • AnalysisRequest -- input to every analyzer (files, config, categories, project root)
  • ScanPlan -- what analyzers/categories to run, which are skipped
  • CoverageInfo -- how complete the analysis was (full/partial/limited)

Analyzer protocol

Every analyzer (built-in or plugin) must satisfy:

class MyAnalyzer:
    @property
    def name(self) -> str: ...

    @property
    def category(self) -> Category: ...

    async def is_available(self) -> bool: ...

    async def analyze(self, request: AnalysisRequest) -> list[Diagnostic]: ...

See docs/plugins.md for the full plugin development guide.
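A minimal concrete class satisfying that protocol might look like the following. This is a hypothetical analyzer, not one of the built-ins; `Category` is typed as a plain string here for self-containment, whereas the real type comes from models.py:

```python
import asyncio

class WordCountAnalyzer:
    # Hypothetical example analyzer; name and category are invented.

    @property
    def name(self) -> str:
        return "word-count"

    @property
    def category(self) -> str:  # the real return type is Category
        return "quality"

    async def is_available(self) -> bool:
        # Pure-Python logic, so always available. A wrapper around an
        # external tool would instead probe for the binary, e.g.
        # shutil.which("tool") is not None.
        return True

    async def analyze(self, request) -> list:
        # Return findings for request.files; never raise from here
        # (see Code conventions below).
        return []

analyzer = WordCountAnalyzer()
available = asyncio.run(analyzer.is_available())
```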

Scoring

Weights default to quality=25, types=20, security=20, complexity=15, dead_code=10, dependencies=10. When a category has no active analyzer, its weight is redistributed proportionally across the remaining categories, and the report notes whenever this happens.
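Proportional redistribution can be sketched as scaling the surviving weights back up to the original total. This is an illustration of the rule stated above, not the code in scoring/engine.py; the function name `redistribute` is invented:

```python
DEFAULT_WEIGHTS = {
    "quality": 25, "types": 20, "security": 20,
    "complexity": 15, "dead_code": 10, "dependencies": 10,
}

def redistribute(weights: dict[str, float], active: set[str]) -> dict[str, float]:
    # Drop inactive categories, then scale the rest so they still sum
    # to the original total (100), preserving their relative proportions.
    kept = {k: v for k, v in weights.items() if k in active}
    factor = sum(weights.values()) / sum(kept.values())
    return {k: v * factor for k, v in kept.items()}

# Example: no dependency analyzer ran, so its 10 points are shared out.
active = {"quality", "types", "security", "complexity", "dead_code"}
new = redistribute(DEFAULT_WEIGHTS, active)
```

With dependencies inactive, quality moves from 25 to 25 × 100⁄90 ≈ 27.78, and likewise for the others.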

Caching

  • Per-file cache: .python-checkup-cache/v1/ -- keyed by file content hash, skips re-analysis of unchanged files
  • Advisory cache: .python-checkup-cache/v2/advisories/ -- caches OSV vulnerability responses for 24 hours

Both are bypassed with --no-cache.
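The "keyed by file content hash" idea behind the per-file cache can be illustrated in a few lines. The key derivation below is a guess at the concept, not the actual scheme used under .python-checkup-cache/v1/:

```python
import hashlib
import tempfile
from pathlib import Path

def cache_key(path: Path, analyzer: str) -> str:
    # Key on content, not mtime: the same bytes always hit the cache,
    # and any edit produces a new key.
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return f"{analyzer}-{digest}"

with tempfile.TemporaryDirectory() as tmp:
    f = Path(tmp) / "mod.py"
    f.write_text("x = 1\n")
    k1 = cache_key(f, "ruff")
    k2 = cache_key(f, "ruff")   # unchanged file -> same key, cache hit
    f.write_text("x = 2\n")
    k3 = cache_key(f, "ruff")   # edited file -> new key, re-analysis
```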

Adding a new built-in analyzer

  1. Create python_checkup/analyzers/my_tool.py implementing the analyzer protocol
  2. Add the class to ANALYZER_CATALOG in analyzer_catalog.py
  3. Map the analyzer name to categories in scoring/engine.py _categories_from_analyzers()
  4. If the tool is optional, add it to ANALYZER_EXTRA in formatters/human.py and create a pip extra in pyproject.toml
  5. Add tests in tests/test_my_tool.py
  6. Run the full suite: uv run pytest

Code conventions

  • All analyzers are async
  • Use AnalysisRequest as the sole input -- do not pass raw file lists
  • Return list[Diagnostic] -- never raise from analyze()
  • is_available() should return False if the tool is missing, not raise
  • Tests use tmp_path fixtures and mock subprocess/HTTP calls
  • The Diagnostic dataclass is frozen -- construct new instances, don't mutate
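The "never raise from analyze()" convention typically looks like a guard around the tool invocation, so one broken tool degrades to "no findings" instead of killing the parallel run. A sketch with an invented class; whether real analyzers return an empty list or a tool-error diagnostic on failure is up to each implementation:

```python
import asyncio

class SafeAnalyzer:
    # Hypothetical example of the never-raise convention.

    async def analyze(self, request) -> list:
        try:
            return await self._run_tool(request)
        except Exception:
            # Swallow the failure: the runner keeps going and the other
            # analyzers' findings still reach the report.
            return []

    async def _run_tool(self, request) -> list:
        raise RuntimeError("tool crashed")  # simulate a failing tool

result = asyncio.run(SafeAnalyzer().analyze(None))
```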