Many teams have a bash script that extracts code blocks and runs them. It works until the runtime changes, a new language is added, or someone needs to know which exact snippet broke. DocsCI replaces these scripts with a maintained, sandboxed, observable pipeline.
This is what most teams are running today:

```bash
#!/bin/bash
# What most teams have today (test-docs.sh)
set -euo pipefail

# Extract code blocks (fragile grep)
grep -A 20 '```python' docs/quickstart.md | \
  sed '/```/d' > /tmp/test_quickstart.py

# Run (no isolation, no secret scanning)
python3 /tmp/test_quickstart.py

# Check JS examples (node version may not match)
grep -A 10 '```javascript' docs/api.md | \
  sed '/```/d' > /tmp/test_api.js
node /tmp/test_api.js

echo "All docs tests passed"
```
Replace it with this GitHub Actions workflow:
```yaml
# .github/workflows/docsci.yml
# Replace test-docs.sh with this
name: DocsCI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  docs-ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run DocsCI
        run: |
          tar czf docs.tar.gz docs/ *.md 2>/dev/null || tar czf docs.tar.gz docs/
          curl -sf -X POST https://snippetci.com/api/runs/queue \
            -H "Authorization: Bearer ${{ secrets.DOCSCI_TOKEN }}" \
            -F "docs_archive=@docs.tar.gz" \
            | jq -e '.status == "passed"'
```

What you get that the script can't give you:

- Each snippet runs in an isolated V8 or WASM sandbox: no shared state, no host contamination.
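The same queue endpoint can be exercised outside CI. Here is a minimal Python sketch of the tar-and-upload step; only the `/api/runs/queue` URL, the `docs_archive` field, the bearer token, and the `status` response key come from the workflow above — everything else (function names, the `requests` dependency) is illustrative.

```python
# Sketch: the tar + curl step from the workflow, done in Python locally.
# Assumes only what the workflow shows: POST /api/runs/queue with a bearer
# token and a docs_archive upload, returning JSON with a "status" field.
import io
import tarfile
from pathlib import Path


def build_docs_archive(root: str = ".") -> bytes:
    """Bundle docs/ plus top-level *.md files, like `tar czf docs.tar.gz docs/ *.md`."""
    buf = io.BytesIO()
    base = Path(root)
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        docs = base / "docs"
        if docs.is_dir():
            tar.add(str(docs), arcname="docs")
        for md in sorted(base.glob("*.md")):
            tar.add(str(md), arcname=md.name)
    return buf.getvalue()


def queue_run(token: str, archive: bytes) -> bool:
    """Queue a run and report pass/fail. `requests` is an assumed dependency;
    the curl call in the workflow above does the same thing."""
    import requests

    resp = requests.post(
        "https://snippetci.com/api/runs/queue",
        headers={"Authorization": f"Bearer {token}"},
        files={"docs_archive": ("docs.tar.gz", archive)},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("status") == "passed"
```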
- 40+ regex patterns scan snippets for credentials before execution. Your script has none of this.
- Exact file, line number, error, and an AI-generated fix, not just "test failed".
- Add your OpenAPI spec and DocsCI diffs it against documented parameters on every PR.
- DocsCI maintains language runtimes, so you stop caring when Node 22 breaks your script.
- Track pass rates and finding trends over time; your script has no memory.
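DocsCI's actual rule set isn't shown here, but the idea behind pre-execution credential scanning is simple to sketch. The three patterns below are illustrative stand-ins, not DocsCI's 40+ rules:

```python
# Sketch of regex-based credential scanning, run before a snippet executes.
# These patterns are illustrative examples only, not DocsCI's actual rules.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}


def scan_snippet(code: str) -> list:
    """Return (rule_name, line_number) for each match, so execution can be blocked."""
    findings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

A snippet with findings would be reported (file, line, rule) instead of executed.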
To skip examples that are intentionally incomplete or expected to fail, mark them and list the markers in your config:

```yaml
# docsci.yml
snippets:
  skip_patterns:
    - "# SKIP"
    - "# noqa"
    - "# example only"
    - "/* expected error */"
```
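The matching semantics aren't spelled out above; the natural reading — skip a snippet if any configured marker appears anywhere in it — can be sketched as follows (function name and substring matching are assumptions, not documented behavior):

```python
# Sketch: skip a snippet when it contains any configured marker.
# Mirrors the skip_patterns list in docsci.yml; plain-substring matching
# on any line is an assumption, not DocsCI's documented behavior.
SKIP_PATTERNS = ["# SKIP", "# noqa", "# example only", "/* expected error */"]


def should_skip(snippet: str, patterns=SKIP_PATTERNS) -> bool:
    return any(marker in snippet for marker in patterns)
```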