Video

How Verdigris surfaces video. Covers the invariant accessibility requirements, the production workflow that keeps render costs bounded, the technical specs we inherit from established industry standards, the integration pattern for consuming design tokens inside Remotion, and the open questions we are deliberately NOT codifying yet.

The Verdigris Remotion work lives in the Verdigris www repo. This foundation defines the constraints; the compositions themselves live there.

Invariants

These are non-negotiable. Every Verdigris video must satisfy them.

Production workflow

Iterate in the preview server. Render only for publish. This is the highest-leverage cost control available in Remotion.

Render economics reference: Remotion Lambda charges $0.0000166667/GB-s. A 60-second 1080p render is roughly $0.10-0.30 per pass. A 2-minute 4K render can exceed $2. Iterating by rendering is financially viable for a handful of videos, financially ruinous for a team’s worth of weekly work.
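The cost arithmetic above can be sketched as a quick estimator. The GB-second rate is the one quoted; the memory size and total compute seconds in the example are illustrative assumptions, not measured render profiles.

```typescript
// Hedged sketch: back-of-envelope Remotion Lambda cost estimator.
// The rate is the GB-second price quoted above; memory and compute
// time below are illustrative assumptions, not measured profiles.
const GB_SECOND_RATE_USD = 0.0000166667;

function estimateRenderCostUsd(memoryGb: number, totalComputeSeconds: number): number {
  return memoryGb * totalComputeSeconds * GB_SECOND_RATE_USD;
}

// e.g. ~2 GB of Lambda memory busy for 3600 compute-seconds in total
// across concurrent chunks lands near the low end of the quoted range:
console.log(estimateRenderCostUsd(2, 3600).toFixed(2)); // "0.12"
```

The point of the estimator is the asymmetry it makes visible: a single publish render is cents, but ten render-iterate loops on the same video is an order of magnitude more, which is why iteration belongs in the preview server.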

Technical specs

We follow established industry standards rather than inventing our own. Where multiple valid standards exist, we explicitly choose one.

Frame rates:

Aspect ratios by platform (platform canonical, not our invention):

Resolution:

Export format:

Audio levels:

Color space:

Caption sizing:

Safe areas:

Consuming design tokens in Remotion

Remotion compositions are React components. The design system package exports three useful forms:

1. Hex colors as typed JavaScript constants (@verdigristech/design-tokens):

import { hexColors } from '@verdigristech/design-tokens';

const teal = hexColors['color.brand.verdigris']; // '#0fc8c3'
const neutral950 = hexColors['color.neutral.950']; // '#09090b'

This is the primary path for programmatic color access in compositions. Remotion’s interpolateColors() and CSS prop values both accept hex.
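To make the "hex in, hex out" point concrete, here is a self-contained sketch of the kind of linear interpolation Remotion's interpolateColors() performs on these constants. lerpHex is our illustrative helper, not part of Remotion or the design-tokens package.

```typescript
// Hedged sketch: linear interpolation between two hex colors, the kind
// of operation interpolateColors() does with the typed hex constants.
// lerpHex is an illustrative helper, not a package API.
function lerpHex(from: string, to: string, t: number): string {
  // Parse '#rrggbb' into [r, g, b] channel integers.
  const parse = (hex: string) =>
    [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));
  const [a, b] = [parse(from), parse(to)];
  const channel = (i: number) => Math.round(a[i] + (b[i] - a[i]) * t);
  return '#' + [0, 1, 2]
    .map((i) => channel(i).toString(16).padStart(2, '0'))
    .join('');
}

// Midpoint between black and white:
console.log(lerpHex('#000000', '#ffffff', 0.5)); // '#808080'
```

In a real composition you would pass the hexColors constants straight to interpolateColors() with the current frame; this sketch only shows why plain hex strings are the convenient interchange format.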

2. Raw JSON tokens via subpath exports (@verdigristech/design-tokens/tokens/*):

Motion, spacing, typography, and other non-color tokens are not in the JS export today. Import them directly from the package’s tokens/ directory:

import duration from '@verdigristech/design-tokens/tokens/motion/duration.json';
import easing from '@verdigristech/design-tokens/tokens/motion/easing.json';

const fastMs = parseInt(duration.duration.fast.$value); // 150
const easingOut = easing.easing.out.$value; // 'ease-out'

Note: JSON token files use W3C DTCG format with $value wrapping. Extract the value before using it.
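A common need in compositions is turning a DTCG duration token into Remotion frames. The sketch below inlines a stand-in object with the same shape as the duration.json import above; msToFrames and the 30 fps choice are illustrative assumptions, not codified rules.

```typescript
// Hedged sketch: unwrap a DTCG $value token and convert a duration
// to Remotion frames. The inline object mirrors the duration.json
// shape shown above; msToFrames and 30 fps are illustrative choices.
type DtcgToken = { $value: string };

// Stand-in for the imported duration.json shown above.
const duration: { duration: { fast: DtcgToken } } = {
  duration: { fast: { $value: '150ms' } },
};

function msToFrames(ms: number, fps: number): number {
  return Math.round((ms / 1000) * fps);
}

const fastMs = parseInt(duration.duration.fast.$value, 10); // 150
console.log(msToFrames(fastMs, 30)); // 5 frames for a 150 ms ease at 30 fps
```

parseInt tolerates the trailing 'ms' unit, which is why the unwrap stays a one-liner; a stricter parser would be warranted if tokens ever mix units.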

3. CSS variables for styled layers (@verdigristech/design-tokens/css/oklch):

For Remotion components that use regular CSS styling, import the variables sheet once and reference tokens via var(--color-brand-verdigris) etc. Useful when a Remotion layer uses standard DOM styling rather than Canvas or programmatic color.
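A minimal sketch of that pattern, assuming the oklch variables sheet has been imported once at the app root. The endcard name and the neutral token reference are illustrative assumptions; only var(--color-brand-verdigris) is named in this document.

```typescript
// Hedged sketch: a plain style object for a DOM-styled Remotion layer,
// assuming the oklch variables sheet was imported once at the app root.
// endcardStyle and --color-neutral-950 are illustrative, not codified.
const endcardStyle: Record<string, string> = {
  background: 'var(--color-neutral-950)',
  color: 'var(--color-brand-verdigris)',
};

// Spread onto a DOM element inside the composition, e.g.
// <AbsoluteFill style={endcardStyle}>…</AbsoluteFill>
console.log(endcardStyle.color); // 'var(--color-brand-verdigris)'
```

The browser resolves the var() references at paint time, so this path only works for layers rendered through the DOM, not for Canvas or programmatic color, which is what the hex constants are for.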

Remotion-specific token gaps to track:

Storyboard convention (proposed)

Pre-production design review is an even bigger cost lever than render discipline. We propose (but do not yet enforce):

This convention is maturity: experimental for now. It will graduate when 2-3 real productions show that it reduces iteration cost.

Open questions (Verdigris-specific, not yet codified)

The following are NOT rules. They are areas where the right answer depends on Verdigris brand context that we have not yet resolved through real production. Do not prescribe. Do not codify. Experiment in real compositions, document what works, graduate the patterns that hold.

Scene rhythm. Is Verdigris a slow-burn-deliberate voice, or does “if you know, you know” reward faster pattern-recognition edits? Both interpretations are defensible. Real productions will tell.

Visual signature in motion. The Lissajous, spectrogram, and harmonic spectrum exist as static and interactive elements. Which become video-native? How do they rhyme with the brand rather than feel like decoration? Probably the answer involves real data driving the motion rather than simulated data animating stylistically. Confirm with real work.

Opening identity. Social algorithms reward fast brand recognition; the brand’s data-first principle suggests NOT leading with a logo. These tensions are real. Resolve per-video based on platform and audience; watch for patterns that emerge.

Closing identity. Previous exploration (Lissajous-to-logo) was rejected. “Data becomes brand” as an animation metaphor reads as gimmicky. Candidate direction: brand wordmark appears alongside a held data moment, not emerging from it. Or: held stillness with clean endcard. Validate in real productions before codifying.

Voice in audio. Competence-driven brand voice translates to specific audio choices: register, pace, hedging. The earlier spoken-voice framework exploration established a direction, but we have not yet produced narrated video. First 2-3 narrated pieces will reveal whether the direction holds.

Data-driven motion. Real 8 kHz data animating within a video is the strongest potential brand signal, and the hardest to deliver. Depends on data pipeline (see ClickHouse access) and on how data is sampled, filtered, and mapped to motion. Expect this to evolve substantially as we try it.

Graduation path

Work happens in the Verdigris Remotion project (www repo). When a pattern proves itself across 2-3 productions, it graduates back here:

  1. Candidate pattern noted in the Remotion repo’s README or video README
  2. After 2-3 confirmatory uses, a PR to this design system repo proposes the pattern as maturity: experimental in rules/visual-rules.yml
  3. After 30 days without surfaced violations or stakeholder objections, the rule graduates to maturity: rule
  4. The pattern also appears as a specimen in categories/video/ (when that category exists)

Demotion applies: anything can move down if a later production surfaces problems. That is not failure, just an honest response to learning.

References