
# Quickstart

Supersigil is an open-source CLI that turns Markdown spec files into a verifiable graph. You write criteria in your repo, link them to test evidence through ecosystem plugins or annotations, and Supersigil checks that everything stays connected. It works locally, in CI, and as structured context for AI coding agents.

This is the fastest path to the core loop: write one enforceable criterion, run verify, and watch the tool tell you what is missing.

```sh
brew tap jonisavo/supersigil
brew install supersigil
# or alternatively
brew install jonisavo/supersigil/supersigil
```

Installs prebuilt binaries for macOS and Linux.

Prebuilt binaries are also available on the GitHub Releases page.

Verify the installation:

```sh
supersigil --version
```
  1. Initialize a project

    Navigate to the root of your repository and run:

    ```sh
    supersigil init
    ```

    This creates a supersigil.toml config and installs six agent skills to .agents/skills/. The skills teach AI coding agents how to work with Supersigil’s spec-driven workflow.

    supersigil.toml
    ```toml
    paths = ["specs/**/*.md"]
    ```
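    If your specs live in more than one place, paths is a TOML array and can hold several globs. A sketch (the extra docs/rfcs entry is purely illustrative):

    ```toml
    # Each entry is a glob; the second path is a hypothetical example.
    paths = ["specs/**/*.md", "docs/rfcs/**/*.md"]
    ```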

    Use --no-skills to skip skills installation, or -y to accept all defaults without prompting.

  2. Create a spec document

    Scaffold a requirements document:

    ```sh
    supersigil new requirements auth
    ```

    Then replace specs/auth/auth.req.md with one real criterion:

    specs/auth/auth.req.md
    ````md
    ---
    supersigil:
      id: auth/req
      type: requirements
      status: approved
      title: "Authentication"
    ---

    ```supersigil-xml
    <AcceptanceCriteria>
      <Criterion id="valid-creds">
        WHEN a user submits valid credentials
        THE SYSTEM SHALL issue a session token.
      </Criterion>
    </AcceptanceCriteria>
    ```

    ```supersigil-xml
    <TrackedFiles paths="src/auth/**/*.rs, tests/auth/**/*.rs" />
    ```
    ````

    The status is deliberately set to approved so that the next verification run is meaningful: Supersigil now expects real evidence.

  3. Verify the graph

    ```sh
    supersigil verify
    ```

    You should see a failing result because the criterion has no matching evidence yet:

    ```
    auth/req
    ✗ criterion "valid-creds" has no matching verification evidence
    ```

    This is the first payoff: the spec is now a contract, not just prose.

  4. Add evidence and watch it go green

    Annotate a test with a reference to the criterion: the spec id and the criterion id joined by #, e.g. auth/req#valid-creds. The approach depends on your language.

    Use the verifies attribute macro from the supersigil_rust crate. Native support is enabled by default.

    ```sh
    cargo add supersigil-rust
    ```

    tests/auth/login_test.rs
    ```rust
    use supersigil_rust::verifies;

    #[verifies("auth/req#valid-creds")]
    #[test]
    fn login_with_valid_credentials_returns_token() {
        let result = login("alice", "correct-password");
        assert!(result.token.is_some());
    }
    ```
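    The login call above comes from your own code under test. To make the snippet self-contained, here is a toy stand-in (hypothetical; a real project would import its actual auth module instead):

    ```rust
    // Hypothetical stand-ins so the example compiles on its own;
    // in a real project, `login` is the code under test.
    struct Session { token: Option<String> }

    fn login(user: &str, password: &str) -> Session {
        // Toy check for illustration only.
        let ok = user == "alice" && password == "correct-password";
        Session { token: ok.then(|| String::from("session-token")) }
    }

    fn main() {
        assert!(login("alice", "correct-password").token.is_some());
        assert!(login("alice", "wrong").token.is_none());
        println!("ok");
    }
    ```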

    Run verify again:

    ```sh
    supersigil verify
    ```

    ```
    ✓ Clean: no findings
    ```
- Supersigil does not treat criteria as decorative text. It expects evidence.
- Verification can fail for the right reason before code reaches CI.
- One spec file is enough to start; you do not need a full process to get value.
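Verification failures like the one in step 3 can also gate a pipeline. A sketch of one possible GitHub Actions job, assuming a macOS runner so the Homebrew install from this page works unchanged; the workflow itself is hypothetical, not an official integration, and Linux runners would use the GitHub Releases binaries instead:

```yaml
# Hypothetical CI job; only the install and verify commands
# come from this quickstart.
name: specs
on: [push, pull_request]
jobs:
  verify:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - run: brew install jonisavo/supersigil/supersigil
      - run: supersigil verify
```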

The supersigil init step installed agent skills to .agents/skills/. These teach AI coding agents the full spec-driven workflow, from writing specs to implementing against criteria and re-verifying the result.

Your agent can load the spec you just created as structured context:

```sh
supersigil context auth/req --format json
```

From here, an agent can read the criteria, plan implementation work, and run supersigil verify --format json to check its own output. The skills guide agents through this loop automatically. Point the agent at a spec, and it knows what to do.
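For a concrete sense of the machine-readable side, here is a minimal Python sketch that extracts unverified criteria from a verify report. The JSON shape is hypothetical; check the actual output of supersigil verify --format json before relying on field names:

```python
import json

# Hypothetical report payload; the real schema of
# `supersigil verify --format json` may differ.
sample = """
{
  "findings": [
    {
      "spec": "auth/req",
      "criterion": "valid-creds",
      "message": "no matching verification evidence"
    }
  ]
}
"""

def unverified_criteria(report_json: str) -> list[str]:
    """Return spec#criterion refs that still lack evidence."""
    report = json.loads(report_json)
    return [f'{f["spec"]}#{f["criterion"]}' for f in report.get("findings", [])]

print(unverified_criteria(sample))  # ['auth/req#valid-creds']
```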

See the AI Agents guide for the full integration details.