# Troubleshooting Playbook

Film-room guide for diagnosing dependency, quality, coverage, Kaggle, and docs-artifact problems.
Use this page as the film room after a broken possession. Start with the artifact or command named in the failure, then work the shortest route back to a clean rerun.
## Dead-ball triage
| If the failure mentions… | Start here |
|---|---|
| Missing environment, lockfile, or package surface | Dependency inventory |
| Empty tables or failing checks | Data quality |
| Endpoint coverage drift | Extract completeness |
| Generated docs or docs build drift | Docs-autogen artifacts |
| Download/upload or publish misses | Kaggle and common CLI/CI failures |
Do not debug the whole pipeline by default. Start from the exact artifact, report, or command named in the error message and only widen the search if that lane stays cold.
## Use the narrowest recovery loop
- Regenerate or reopen the exact artifact named in the failure.
- Inspect the JSON or Markdown output, not just the terminal text.
- Fix the local cause first.
- Rerun the narrowest failing command before you escalate to a wider pipeline command.
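The loop above can be sketched as a small helper. The command and artifact path are whatever the failure names; nothing here is a fixed `nbadb` path.

```python
# Sketch of the recovery loop: rerun the narrowest failing command, then
# reopen the JSON artifact it writes instead of trusting terminal text alone.
import json
import subprocess
from pathlib import Path

def narrow_rerun(command: list[str], artifact_path: str) -> dict:
    """Rerun one command and return its regenerated JSON artifact."""
    subprocess.run(command, check=True)  # raises CalledProcessError if the rerun still fails
    return json.loads(Path(artifact_path).read_text())  # inspect the artifact itself
```

Only widen to a full-pipeline rerun once this narrow loop comes back clean.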
## Dependency inventory artifact (`dependency-inventory.json`)
Best for: missing lockfiles, missing docs package paths, or suspicious environment surface.
### 1) Generate

```bash
uv run python -m nbadb.core.dependency_inventory --project-root . --output artifacts/health/local/dependency-inventory.json
```

### 2) Diagnose
```bash
python - <<'PY'
import json
data = json.load(open("artifacts/health/local/dependency-inventory.json"))
print("summary:", data["summary"])
print("lockfile_present:", data["lockfile"]["present"])
print("pyproject_present:", data["pyproject"]["present"])
print("docs_package_present:", data["docs_package"]["present"])
PY
```

### 3) Remediate
- If `lockfile_present` is `false`, run `uv sync` (or provide `--lock-path`) and commit the lockfile used by CI.
- If `docs_package_present` is `false`, run from the repo root or pass `--docs-package-path /abs/path/docs/package.json`.
- If `summary.package_count` is unexpectedly low, check `--project-root` and rerun with an absolute path.
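If you want the same checks without eyeballing the JSON, a few lines will do. This is an illustrative helper, not part of `nbadb`; the field names (`summary`, `lockfile.present`, `docs_package.present`) follow the diagnose snippet above, while the messages are assumptions.

```python
# Illustrative gate over the dependency-inventory payload.
import json
from pathlib import Path

def inventory_problems(data: dict) -> list[str]:
    """Return remediation hints for a dependency-inventory payload."""
    problems = []
    if not data.get("lockfile", {}).get("present"):
        problems.append("lockfile missing: run `uv sync` and commit the lockfile")
    if not data.get("docs_package", {}).get("present"):
        problems.append("docs package missing: run from repo root or set --docs-package-path")
    if not data.get("summary", {}).get("package_count"):
        problems.append("package_count is falsy: check --project-root")
    return problems

path = Path("artifacts/health/local/dependency-inventory.json")
if path.exists():  # only meaningful after step 1 generated the artifact
    for problem in inventory_problems(json.loads(path.read_text())):
        print("FAIL:", problem)
```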
## Data-quality JSON artifact (`data-quality-report.json`)
Best for: empty tables, failing checks, or “database not found” quality runs.
### 1) Generate

```bash
uv run nbadb run-quality --report-path artifacts/health/local/data-quality-report.json
```

### 2) Diagnose
```bash
python - <<'PY'
import json
report = json.load(open("artifacts/health/local/data-quality-report.json"))
print("summary:", report["summary"])
print("failed:", [r["message"] for r in report["results"] if not r["passed"]][:10])
PY
```

### 3) Remediate
- If you see `Error: database not found. Run 'nbadb init' first.`, seed the data directory first with `nbadb init` or `nbadb download` plus a refresh command.
- If `summary.total` is `0`, verify the DuckDB file exists at the configured `--data-dir`.
- If checks fail, inspect the reported tables/columns and rerun the refresh command that matches the scope you need (`daily`, `monthly`, or `full`).
Known issue: `nbadb run-quality` reports failed checks but still exits successfully as long as the checks ran. Gate on `summary.failed > 0` in CI if strict enforcement matters.
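One way to enforce that gate is a thin wrapper run right after the quality step. The report shape (`summary.failed`) follows the diagnose snippet above; the wrapper itself is illustrative, not part of `nbadb`.

```python
# Illustrative CI gate: exit non-zero when summary.failed > 0.
import json
import sys
from pathlib import Path

def quality_exit_code(report: dict) -> int:
    """Return 1 when any quality check failed, else 0."""
    return 1 if report.get("summary", {}).get("failed", 0) > 0 else 0

report_path = Path("artifacts/health/local/data-quality-report.json")
if report_path.exists():  # run after `nbadb run-quality` in the CI job
    sys.exit(quality_exit_code(json.loads(report_path.read_text())))
```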
## Endpoint coverage artifacts (`artifacts/endpoint-coverage/*`)
Best for: extractor/staging drift and runtime coverage gaps.
### 1) Generate

```bash
uv run nbadb extract-completeness
# Source coverage gate
uv run nbadb extract-completeness --require-full
# Source + model-contract gate
uv run nbadb extract-completeness --require-full --require-model-contract
```

By default this writes:

- `artifacts/endpoint-coverage/endpoint-coverage-matrix.json`
- `artifacts/endpoint-coverage/endpoint-coverage-summary.json`
- `artifacts/endpoint-coverage/endpoint-coverage-report.md`

Use `--output-dir` to write artifacts elsewhere.
The summary distinguishes source coverage from model ownership. In practice that lets you separate "we are not extracting or staging this surface" from "we land it, but we still need a downstream transform or an explicit exclusion." It also reports `star_schema_coverage`, which answers a different question: "the transform exists, but do we have a schema-backed final-tier contract for its output?"
### 2) Diagnose

```bash
python - <<'PY'
import json
summary = json.load(open("artifacts/endpoint-coverage/endpoint-coverage-summary.json"))
print("runtime:", summary["runtime_version"], "classes:", summary["runtime_endpoint_class_count"])
print("coverage:", summary["coverage"])
print("star_schema_coverage:", summary.get("star_schema_coverage", {}))
PY
```

### 3) Remediate
- If `runtime_version` is `unknown` or the class count is `0`, make sure `nba_api` is installed in the active environment.
- If `runtime_gap` is high, validate runtime references in `src/nbadb/extract/stats/*.py` and endpoint names in `src/nbadb/orchestrate/staging_map.py`.
- If `extractor_only` rows appear, either add the endpoint to the staging map or normalize the coverage aliasing.
- If `schema_missing_transform_outputs` is non-zero, the pipeline is producing transform outputs without a matching `nbadb.schemas.star` contract. Treat that as a data-modeling gap, not an extractor/staging gap.
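The bullets above can be folded into one triage helper. `runtime_version` and `runtime_endpoint_class_count` come from the diagnose snippet; reading `schema_missing_transform_outputs` out of `star_schema_coverage` is an assumption about the summary's nesting, so adjust to the actual payload.

```python
# Illustrative triage over the endpoint-coverage summary payload.
def coverage_gaps(summary: dict) -> list[str]:
    """Map summary fields to the remediation bullets above."""
    gaps = []
    if summary.get("runtime_version", "unknown") == "unknown":
        gaps.append("runtime unknown: install nba_api in the active environment")
    if not summary.get("runtime_endpoint_class_count"):
        gaps.append("no runtime endpoint classes discovered")
    star = summary.get("star_schema_coverage", {})  # assumed nesting
    if star.get("schema_missing_transform_outputs", 0):
        gaps.append("transform outputs lack an nbadb.schemas.star contract")
    return gaps
```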
## Docs-autogen artifacts (`docs/content/docs/*`)
Best for: generated docs drift, stale auto pages, or docs build failures after schema changes.
### 1) Generate

```bash
uv run nbadb docs-autogen --docs-root docs/content/docs
```

### 2) Expect these generated files

- `schema/{raw,staging,star}-reference.mdx`
- `data-dictionary/{raw,staging,star}.mdx`
- `diagrams/er-auto.mdx`
- `lineage/lineage-auto.mdx`
- `lineage/lineage.json`
### 3) Diagnose + remediate
- If the command output does not show `updated:` or `unchanged:`, make sure you are running from the repo root and the environment has the docs-gen dependencies installed.
- If only `unchanged:` appears after schema edits, confirm that metadata actually changed in `nbadb.schemas.*`.
- If generated docs exist but the docs app still fails, verify the docs workspace dependencies and the build path used by your environment.
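A quick existence check over the expected files catches the most common drift. The relative paths come from the list above; the helper itself is just a local convenience, not an `nbadb` command.

```python
# Illustrative check: which expected generated docs are missing under docs_root?
from pathlib import Path

EXPECTED = [
    "schema/raw-reference.mdx", "schema/staging-reference.mdx", "schema/star-reference.mdx",
    "data-dictionary/raw.mdx", "data-dictionary/staging.mdx", "data-dictionary/star.mdx",
    "diagrams/er-auto.mdx", "lineage/lineage-auto.mdx", "lineage/lineage.json",
]

def missing_generated_docs(docs_root: str) -> list[str]:
    """Return the expected generated files absent under docs_root."""
    root = Path(docs_root)
    return [rel for rel in EXPECTED if not (root / rel).is_file()]
```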
## Kaggle and common CLI/CI misses
| Command or area | Symptom | Fix |
|---|---|---|
| `uv sync` | Resolver / lock mismatch | Regenerate the lock, rerun sync, and commit the lockfile CI expects |
| `uv run nbadb download` / `upload` | Download or upload failure | Ensure Kaggle credentials are available to the environment and the dataset slug is correct |
| `uv run nbadb run-quality --report-path ...` | `database not found` | Initialize or download the dataset before running quality checks |
| `actions/upload-artifact` | "No files were found with the provided path" | Verify the upstream command wrote exactly the artifact path the workflow expects |
| docs install/build step | Lockfile or dependency drift | Refresh docs dependencies and commit the matching lockfile changes |
## Best next move after a fix
- Rerun the narrowest failing command first.
- Re-open the generated artifact or report, not just the terminal output.
- Only rerun a wider pipeline command once the local symptom has actually cleared.
## Related routes
- CLI Reference for exact command behavior
- Daily Updates for recurring run-mode choices
- Kaggle Setup for publish/download flows
## Keep moving

Stay in the same possession: keep the mental model warm with adjacent pages, section hubs, and search-friendly routes into the same topic cluster.

- **Analytics Quickstart**: land quick wins fast and move from setup to analysis with intent.
- **Shot Chart Analysis**: lean into basketball-native visual storytelling on one of the best-fit pages.
- **Visual Asset Prompt Pack**: generate hero art, OG cards, icons, and texture systems without losing the docs identity.