Verdigris Design System — Claude Code Guidelines
Project Overview
This repo (VerdigrisTech/design) is the canonical design system for all Verdigris surfaces. It contains machine-readable tokens (JSON), human-readable foundation docs (markdown), and visual rules (YAML).
Package: @verdigristech/design-tokens on GitHub Packages
Consumers: Patina (app UI), www (marketing site), evaluator pipeline, AI agents, Figma
Integration guide: CONSUMERS.md — canonical guide for any consumer (Tailwind, raw tokens, CSS imports, voice recipes, rules YAML, versioning).
Key Architecture
- OKLch is the canonical color space — all other formats (HSL, hex, RGB) are generated
- Patina is the reference implementation — www converges toward Patina, not the other way around
- W3C DTCG format for all token JSON (`$value`, `$type`, `$description`)
- Build pipeline: `tokens/*.json` → `build/config.ts` → `build/dist/` (oklch.css, hsl.css, hex/colors.json, tailwind/preset.js)
- Voice is a foundation. Lives in `voice/` (top-level, sibling to `foundations/` and `tokens/`). Before writing or generating any Verdigris content, read `voice/USE.md` first — it teaches you to identify subject + form + audience before picking a recipe. Then `voice/recipes.yaml` to pick the mix, and `voice/team/*.yaml` for individual voice profiles.
Development Commands
npm run validate # Check token JSON for broken references and missing $type
npm run validate:rules # Check visual-rules.yml (YAML syntax, test blocks, emdashes, convention, sidebar)
npm run validate:all # Run both validators
npm run build # Generate build/dist/ outputs from token source
npm run test:browser # Cross-browser smoke tests (Playwright, chromium/webkit/firefox)
npm run test:browser:install # Install Playwright browser binaries (one-time setup)
npm run audit:cohesion # Cross-cell brand + design cohesion audit
npm run audit:compliance # Per-artifact compliance audit (live LLM, requires OPENAI_API_KEY)
npm run audit:compliance:smoke # Full live pipeline against one fixture
npm run test:compliance # Fixture self-test (no live calls)
npm run test:audit # Self-test the auditor against fixtures
Cross-browser testing
Smoke tests live in tests/browser/ and run against a locally-built Jekyll site.
Local workflow (one-time):
bundle install # Jekyll + GitHub Pages deps
npm run test:browser:install # Playwright chromium, webkit, firefox
Then for each test run:
bundle exec jekyll build # Build _site/
npm run test:browser # Runs on all 3 browsers; python3 -m http.server serves _site
CI runs these automatically on every PR. See .github/workflows/build.yml cross-browser-smoke job and link-check job.
Pre-Commit Checklist
Before every commit that changes tokens:
- `npm run validate` – must pass with 0 errors
- `npm run build` – regenerate outputs
- Commit build outputs alongside token changes
- If color tokens changed: `npm run validate:wcag` – print-stylesheet contrast must still pass
Before every commit that changes rules (visual-rules.yml):
- `npm run validate:rules` – must pass with 0 errors
- Every `type: "constraint"` rule must have a `test` block
- Every `min` must have a `max` (floors need ceilings)
- Every `llm_eval` prompt must use YES = violation convention
- No emdashes anywhere in the file
Before every commit that changes content (foundations, specimens, examples):
- `npm run lint:external` – no internal content in public files
- `npm run validate:rules` – checks sidebar coverage for new pages
- Check for AI writing artifacts (emdashes, jargon, overexplaining)
- Verify cross-file consistency (values in rules must match foundations and specimens)
Before every commit that changes print stylesheets or color tokens:
- `npm run validate:wcag` – must pass with 0 contrast violations
- WCAG AA conformance is a release blocker; deviations require explicit, accurate documentation in `tokens/color/base.json` token descriptions
- Brand teal (`brand.verdigris`, ~2:1 on white) is safe as text only on dark backgrounds; for any teal text on light, use `brand.verdigris-on-light` (~5.5:1)
Before adding a NEW CELL or making MAJOR cross-cell changes:
- `/cohesion-audit` – check the system still hangs together
- Address all `critical` findings before merge
- File `should-fix` findings under the active epic
- Note `note` findings for next quarterly review
Numerical claims must be computed, not estimated
Any contrast ratio, luminance, page-count, file-size, or token-coverage claim made in a commit message, PR description, comment, or doc must be verified by running the actual computation. PR-G round 4 caught a CRITICAL bug where the “passes 3:1 large-text contrast” framing for brand teal on white was numerically wrong (actual: 2.085:1, fails even the relaxed bar). Estimates and rough numbers in repeated artifacts harden into false facts. Use npm run validate:wcag for contrast; python3 for ad-hoc luminance; wc -l / git diff --stat for size claims.
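The contrast check above can be reproduced in a few lines. A minimal sketch using the WCAG 2.x relative-luminance formula, assuming brand teal is the `#0fc8c3` hex quoted elsewhere in this doc (verify the canonical value against `tokens/color/base.json`; `npm run validate:wcag` remains the authoritative check):

```python
def srgb_to_linear(channel_8bit: int) -> float:
    # WCAG 2.x channel linearization
    c = channel_8bit / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return (0.2126 * srgb_to_linear(r)
            + 0.7152 * srgb_to_linear(g)
            + 0.0722 * srgb_to_linear(b))

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(f"{contrast_ratio('#0fc8c3', '#ffffff'):.3f}:1")  # 2.085:1, fails the 3:1 large-text bar
```

Running this reproduces the 2.085:1 figure PR-G round 4 caught, which is the point: ten lines of computation beats an estimate that hardens into a false fact.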
After applying adversarial-review fixes, run another adversarial pass before commit
PR-G shipped four cumulative adversarial-review rounds; each surfaced new issue classes the previous round didn’t see. Going forward, treat fix-set application as a new state worthy of review, not a closed loop. Default cadence: at least one focused adversarial pass on the post-fix state before commit. Stop looping when N consecutive rounds find only LOW/MEDIUM nits with no CRITICAL/HIGH.
Release Process
Releases are automatic. When a PR merges to main, the auto-release.yml workflow:
- Determines the version bump (from PR labels or commit prefixes)
- Bumps package.json, rebuilds, commits, tags, creates a GitHub Release
- Publishes to GitHub Packages
You just merge the PR. Everything else is automated.
Versioning Rules
| Bump | Trigger | When to use |
|---|---|---|
| Major | PR label `major` or `BREAKING CHANGE` in commit body | Breaking changes: renamed tokens, removed tokens, changed YAML rule ID paths, schema changes that break evaluator pipelines |
| Minor | PR label `minor` or any `feat()` commit prefix | New tokens, new rules, new composition cells, new foundation sections, new assets |
| Patch | Default (no label, no `feat` prefix) | Fixes to values, docs updates, YAML corrections, adversarial review fixes |
To control the bump, either:
- Add a `major`, `minor`, or `patch` label to the PR before merging
- Or rely on commit message prefixes: `feat()` triggers minor, everything else triggers patch
Pre-Merge Checklist
- Branch + PR – never push directly to main
- `npm run validate:all` on the branch
- Adversarial review before merge (at least 1 round for rules/composition changes)
Commit Message Format
type(scope): description
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Types: feat, fix, docs, refactor, chore
Scopes: tokens, foundations, categories, rules, build, ci
Examples:
feat(tokens): add elevation shadow tokens from Patina audit
docs(categories): add photography guidelines
fix(rules): correct heading weight constraint from 600 to 700
Linear Integration
- Team: Z2O (ID: `9e2ce699-7e73-49fe-a33a-d35c81cdb868`)
- Project: Design System: VerdigrisTech/design
- Include issue ID in commit messages: `[Z2O-XXX]`
Glossary
- Genre is the human-facing noun for the artifact-type-within-a-cell distinction (e.g., the slides cell has four genres: pilot kickoff, customer 101, partner enablement, internal team; the whitepaper cell has three: lab_tradition, policy_brief, ceo_brief). Use “genre” in all prose.
- `modes:` is the YAML field on rules in `rules/visual-rules.yml` that lists which genres a rule applies to. The two terms refer to the same concept; “modes” is the technical contract on disk, and “genre” is the producer-facing word. Only say “mode” when explicitly referencing the YAML field (e.g., “the `modes:` field accepts a list of genres”).
- cohesion-audit is a Claude Code skill at `.claude/skills/cohesion-audit/` that prosecutes the design system for cross-cell drift. Read-only. Writes reports to `audits/cohesion/`. Companion to `npm run validate:all`: validators check each file is well-formed; cohesion-audit checks the system as a whole hangs together. See `SKILL.md` for invocation, `README.md` for maintenance, `DESIGN.md` for rationale.
File Structure Rules
Tokens (tokens/)
- All values in W3C DTCG format: `{ "$value": "...", "$type": "...", "$description": "..." }`
- References use `{path.to.token}` syntax – resolved by the build pipeline
- Group by concern under `tokens/`: `tokens/color/`, `tokens/typography/`, `tokens/spacing/`, `tokens/motion/`, `tokens/elevation/`
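A minimal file in this shape might look as follows. The token names and values here are invented for illustration, not the repo's actual tokens:

```json
{
  "color": {
    "brand": {
      "example-teal": {
        "$value": "oklch(72% 0.13 190)",
        "$type": "color",
        "$description": "Illustrative only. OKLCH is canonical; hex and HSL are generated."
      }
    },
    "text": {
      "accent": {
        "$value": "{color.brand.example-teal}",
        "$type": "color",
        "$description": "Reference syntax; resolved by the build pipeline."
      }
    }
  }
}
```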
Foundation Docs (foundations/)
- Include rationale (“why”), not just specification (“what”)
- Keep token values in sync with JSON — if a value changes, update both
- If deviating from Patina, add a “Deviation from Patina” section explaining why
Category Guides (categories/)
- Use `_guide-template.md` as the starting point
- Include at least 2 good and 2 bad examples with screenshots
- Reference tokens by name — never hardcode color/size values
- Assets go in `assets/` subfolder: SVG for icons, PNG for screenshots, WebP/JPG for photos
Visual Rules (rules/)
- YAML format, machine-parseable for evaluator pipeline consumption
- Schema (v4.0.0): every rule must have `id`, `severity`, `type`, `description`
- Optional `maturity` field: `experimental` (warning, collecting signal), `convention` (warning, deviation requires justification), `rule` (default, blocks merge), `invariant` (axiomatic, cannot override)
- `type: "reference"` entries omit severity (informational, not enforced)
- Every guidance rule needs both a floor AND a ceiling — AI agents optimize toward maximums without upper bounds
- Cross-file consistency: if a value appears in rules, foundations, and specimens, all three must match
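A hypothetical rule in that shape. The rule ID, values, and the exact structure of the `test` block are illustrative assumptions, not entries from the repo's schema; check `rules/visual-rules.yml` for real examples:

```yaml
# Illustrative sketch only; ID, values, and test-block shape are invented.
- id: composition.example-cell.heading-weight
  severity: error
  type: "constraint"
  maturity: rule
  description: "Section headings render at weight 700, no lighter and no heavier."
  test:
    property: font-weight
    min: 700          # every min gets a max: floors need ceilings
    max: 700
  llm_eval: "Does any section heading render at a font weight other than 700? YES = violation."
```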
Custom YAML fields
The schema accreted several fields beyond the base set as the rules system grew. Two categories:
Rules-system canonical (validator-checked, structural meaning):
- `linear_issue` (string, e.g. `"Z2O-1318"`). The Linear ticket that conceived the rule. Required on all new rules from PR #43 onward. Use `# no linear_issue (pre-tracking)` for older rules that pre-date this convention. Example: `linear_issue: "Z2O-1318"` on `composition.persuade-slide-deck.logomark-consistency`.
- `inherits_from_sales_collateral` (list of rule IDs). Declared on a rules block to inherit slide-deck universals (logomark, confidentiality, roles, dates) into one-pager and case-study cells. The validator's `checkInheritanceIntegrity` confirms every referenced rule ID actually exists. Example: `inherits_from_sales_collateral: ["composition.persuade-slide-deck.logomark-consistency"]`.
- `modes` (list of genre names — see Glossary). Restricts a rule to specific genres within a cell. Used on slide-deck genres (`pilot_kickoff`, `internal_team`, `customer_101`, `partner_enablement`) and on one-pager genres (`solution_overview`, `comparative`). The evaluator skips the rule when the artifact's declared genre is not in this list. Example: `modes: ["pilot_kickoff", "customer_101"]`.
- `genre_metadata_field` (string). Names the HTML/CSS attribute the evaluator should read to determine the artifact's mode. Currently `data-genre on <body>` and `data-confidentiality on <body>`. Example: `genre_metadata_field: "data-genre on <body>"`.
- `applies_to_modes` (list of mode names). Functionally identical to `modes`; used on whitepaper-cover rules. Treat as a synonym; future cleanup should consolidate to `modes`.
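Assembled from the examples above, the canonical fields might sit in the file like this. This is a sketch, not a real entry; only the field values shown are taken from the documented examples:

```yaml
# On a one-pager rules block, inheriting slide-deck universals:
inherits_from_sales_collateral: ["composition.persuade-slide-deck.logomark-consistency"]

# On an individual rule:
- id: composition.persuade-slide-deck.logomark-consistency
  linear_issue: "Z2O-1318"
  modes: ["pilot_kickoff", "customer_101"]
  genre_metadata_field: "data-genre on <body>"
```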
Metadata helpers (informational, not validator-checked):
- `applies_to` (string). Free-form prose narrowing where the rule applies, when `modes` is too coarse. Example: `applies_to: "TEMPLATE artifacts only (not produced/filled-in decks)"`.
- `applies_to_examples` (list of file paths). Names example artifacts that demonstrate the rule. Used on cell-level reference blocks; mirrors what `existing_verdigris_examples` does for genres.
- `existing_verdigris_examples` (list of strings). Real-world Verdigris artifacts that approximate the genre. Each entry should explain how the artifact maps to the genre and what’s missing. Example: `existing_verdigris_examples: ["Verdigris 'Signals Overview' (Notion) — pre-cell; needs refresh"]`.
- `exemplar_archetypes` (list of strings). Industry archetypes for the genre when no Verdigris artifact yet exists. Each entry is a one-line pattern description plus public references where available. Example: `exemplar_archetypes: ["Product fact-sheet pattern: title + 3 callouts (Stripe product pages, Linear feature pages)"]`.
- `note_on_confidentiality` (string). Explains the cell’s confidentiality default and exceptions. Example on the case-study cell: `note_on_confidentiality: "Case studies default to PUBLIC tier..."`.
- `related_issues` (list of Linear IDs). Companion tickets that informed the rule but are not the originating ticket. Distinct from `linear_issue` (singular originator). Example: `related_issues: ["Z2O-1310"]` on `single-anchor-metric` (per-instance qualifier discipline came from a different ticket).
- `anti_examples` (list of strings). Concrete failure shapes the rule catches; complements `examples`. Each entry is one or two lines describing a specific failure mode an LLM evaluator can pattern-match. Generic placeholders (“doesn’t follow the rule”) are not acceptable.
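As a sketch, a few of these informational fields might sit on a rule or cell block like this. Values are copied from the examples above; the placement itself is illustrative:

```yaml
applies_to: "TEMPLATE artifacts only (not produced/filled-in decks)"
related_issues: ["Z2O-1310"]
note_on_confidentiality: "Case studies default to PUBLIC tier..."
```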
When adding a new field, document it here in the same PR. Validator drift starts as undocumented fields and ends as silent inheritance breaks.
Explorations (explorations/)
- Prototypes, portfolios, working-through-something essays. Not authoritative.
- Nothing here is a rule. Other repos should not treat exploration content as canonical.
- Ideas start here. Graduation happens when evidence accumulates (see below).
Graduation
The directory structure IS the maturity model. Promoting an artifact = moving it between directories and updating metadata.
- Exploration → Pattern: move from `explorations/` to `categories/` once used on 2+ real surfaces with positive review
- Pattern → Convention: promote to `foundations/` when rationale is stable and one adversarial review has passed
- Convention → Rule: add to `rules/visual-rules.yml` with `maturity: experimental`. Graduate to `maturity: rule` after 30 days with no surfaced violations or stakeholder objections
- Demotion: anything can move back down if evidence shifts. This is not failure; it is honest response to learning
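The Convention → Rule step is a one-field metadata change. A sketch of what that graduation might look like in `rules/visual-rules.yml` (the rule ID is invented for illustration):

```yaml
# Day 0: newly graduated convention enters as a warning.
- id: foundations.example.spacing-rhythm
  maturity: experimental   # warning-level, collecting signal

# After 30 days with no surfaced violations, edit in place:
- id: foundations.example.spacing-rhythm
  maturity: rule           # default maturity; blocks merge
```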
Guiding principle: if an artifact has no identified ideal brand-aligned use, do NOT build a system around it. Keep it as an exploration until a real use emerges. Bias toward applying existing work to real surfaces over building elaborate exploration scaffolding.
Content Guidelines
- Don’t hardcode design values in docs — reference token names (e.g., “use `color.brand.verdigris`” not “use `#0fc8c3`”)
- Asset naming: lowercase, hyphens, prefix with `good-` or `bad-` for examples
- Screenshots: 2x resolution, max 2400px wide, PNG format
- No AI writing artifacts — strip emdashes, “This means”, “In other words”, “grounded in”, “leverage”, “comprehensive”. Write short, plain sentences. If it sounds like an AI explaining, rewrite it.
- Alt text — short factual labels (“Verdigris logo — teal”), not internal documentation (“Recovered canonical SVG lockup for light surfaces”)
Information Architecture
- `index.md` — summarizes with compact visual specimens, links to details
- `foundations/*.md` — defines rules with rationale and research citations
- `specimen.html` — shows applied examples (rendered page scrolls, live demos). Never lecture — show.
- `rules/visual-rules.yml` — machine-consumable rules for evaluator/agents
- `examples/good|bad/` — isolated pattern examples with live HTML demos
Workflow
- Always branch + PR — never push directly to main, even for docs-only changes
- QA before merge — run content, rules-consistency, and HTML validation review before any PR merge
Deviation Protocol
Any design decision that differs from Patina must be:
- Explicitly documented with a “Deviation from Patina” section
- Justified (marketing-specific need, medium constraint, etc.)
- Rare — Patina has 60+ battle-tested components
Justified: display font (Lato), marketing hero patterns, ad templates, physical goods
Unjustified: changing brand teal, different component library, different dark mode strategy
GitHub Actions
- Build & Validate — runs on push to main and PRs
- Publish — publishes to GitHub Packages on release tags (e.g., `v0.2.0`)
- Pages — deploys docs site on push to main
Related Repos
- `VerdigrisTech/verdigris` — www site + evaluator pipeline (consumes this package)
- Patina source at `/tmp/patina/` — reference implementation for all design decisions