# Voice Recipes — how to mix ingredients for specific content types
#
# Each recipe is a starting mix. Adjust based on the specific piece.
# Percentages are rough guides for emphasis, not exact measurements.
#
# The "reaches" field matters: content often flows to multiple audiences.
# A case study sent to an operator gets forwarded to their VP. A blog
# post shared on LinkedIn reaches engineers AND their managers. The
# recipe should work for the primary audience AND the secondary one.

recipes:

  # ── Website ────────────────────────────────────────────────────────

  homepage:
    primary_audience: first-time visitor (operator or executive)
    reaches: "their team, their boss, investors doing diligence"
    target_feelings: [curiosity, confidence, respect]
    mix:
      strategic_narrative: 35    # why this matters
      technical_precision: 25   # what we actually do (not vague)
      mission_gravity: 20       # the weight of the problem
      self_honesty: 10          # stage transparency
      warmth_and_humor: 10      # human, not corporate
    notes: |
      The homepage has ~8 seconds to answer "what do you do and why
      should I care?" Lead with the problem (strategic narrative),
      prove depth with one specific number (technical precision),
      and close with mission pull. Operators and execs both need to
      find their thread within those 8 seconds.
    voice_sources:
      primary: Mark (strategic narrative), Jon (technical precision)
      supporting: Jimit (outside-in market framing for positioning context)

  about_page:
    primary_audience: someone deciding if they trust Verdigris
    reaches: "candidates, partners, press, investors"
    target_feelings: [confidence, warmth, belonging]
    mix:
      personal_connection: 30   # people have names
      strategic_narrative: 25   # the story arc
      self_honesty: 20          # honest about the journey
      mission_gravity: 15       # why this matters
      technical_precision: 10   # grounding claims
    notes: |
      This is where Mark's storytelling voice dominates. The about page
      is a trust-building tool — people read it after they're interested,
      to decide if they believe. Self-honesty is load-bearing here.
    voice_sources:
      primary: Mark (storytelling, personal connection)
      supporting: Seren (people-first voice, humanizes individuals)

  product_page:
    primary_audience: operator evaluating whether Verdigris solves their problem
    reaches: "their boss (needs ROI), their team (needs to believe it works)"
    target_feelings: [relief, curiosity, confidence]
    mix:
      operator_empathy: 35      # "we know your problem"
      technical_precision: 30   # "here's exactly what we do"
      self_honesty: 15          # honest about what we don't do
      strategic_narrative: 10   # why this category matters
      warmth_and_humor: 10      # human touches
    notes: |
      The operator needs to feel relief ("finally, someone who gets it").
      Their boss needs confidence ("these people know what they're doing").
      Lead with the operator's problem, not Verdigris's product. Let the
      product reveal itself as the answer, not the headline.
    voice_sources:
      primary: Jon (technical precision), Thomas (operator empathy)
      accent: Josh (real-world product-in-action), Mike (pricing/manufacturing fluency)

  careers_page:
    primary_audience: potential hire evaluating Verdigris
    reaches: "their partner, their friends, their current colleagues"
    target_feelings: [belonging, curiosity, warmth]
    mix:
      mission_gravity: 30       # worth leaving their job for
      warmth_and_humor: 25      # they'd enjoy working here
      personal_connection: 20   # real people, not stock photos
      self_honesty: 15          # honest about stage and challenges
      technical_precision: 10   # the problems are real and hard
    notes: |
      Candidates share JDs with their partner and friends. The page
      needs to work for non-technical readers too ("my partner works
      on power intelligence for AI data centers" should sound
      interesting at a dinner party).
    voice_sources:
      primary: Josh (builder energy, makes people want to join)
      supporting: Mark (mission gravity)

  # ── Content ────────────────────────────────────────────────────────

  technical_blog:
    primary_audience: engineer or operator who wants to learn
    reaches: "HN, LinkedIn, engineering teams evaluating Verdigris"
    target_feelings: [curiosity, respect, warmth]
    mix:
      technical_precision: 45   # show the work
      warmth_and_humor: 15      # human asides, honest mistakes
      self_honesty: 15          # what surprised us, what we don't know
      operator_empathy: 15      # grounded in real problems
      personal_connection: 10   # named author, first person
    notes: |
      Named authorship always. "By Jon Chu" not "The Verdigris Team."
      End with what you don't know yet — that's more credible than a
      tidy conclusion. Humor comes from specificity ("this is the part
      where we realized we'd been measuring the wrong thing for three
      weeks"), not from trying to be funny.
    voice_sources:
      primary: Jon (technical depth)
      supporting: Josh (field troubleshooting stories, honest dead ends)

  case_study:
    primary_audience: operator whose situation is similar to the customer's
    reaches: "their VP (ROI), procurement (risk), board (evidence)"
    target_feelings: [relief, confidence, curiosity]
    mix:
      operator_empathy: 30      # "this customer had YOUR problem"
      technical_precision: 25   # specific outcomes with numbers
      personal_connection: 20   # named customer, named problem
      self_honesty: 15          # what was hard, what we learned
      strategic_narrative: 10   # why this matters beyond one site
    notes: |
      The customer's voice is the primary voice, not Verdigris's.
      Let the operator say "Verdigris showed us X" rather than
      Verdigris saying "we delivered X." Specific numbers always:
      "reduced unplanned downtime by 37% over 6 months" not
      "significantly improved reliability."

      Audited 2026-05-02 against the new Mark Chung and Jon Chu profile
      YAMLs (PR #43): the recipe holds. Neither founder voice belongs as
      primary in a case study because the customer is the protagonist,
      not Verdigris. Seren's diplomatic precision translates the
      customer voice without scrubbing it into marketing speak.

      See categories/case-studies/guide.md for the cell that consumes
      this recipe (single genre, dual web+PDF render targets).
    voice_sources:
      primary: Seren (customer perspective, diplomatic precision)
      supporting: Mike (field context, competitor comparison)
      accent: Jimit (market trend connection)

  # ── Communications ─────────────────────────────────────────────────

  investor_update:
    primary_audience: board and investors
    reaches: "their LPs, other portfolio companies (word of mouth)"
    target_feelings: [confidence, respect, curiosity]
    mix:
      strategic_narrative: 35   # where we are, where we're going
      self_honesty: 25          # what's working, what isn't
      technical_precision: 20   # evidence, not assertions
      mission_gravity: 15       # why this market matters
      personal_connection: 5    # team highlights
    notes: |
      Mark's voice dominates. Investors want to see clarity of thought
      and intellectual honesty more than they want to see good news.
      "We missed X because Y, and here's what we changed" is stronger
      than only reporting wins.
    voice_sources:
      primary: Mark (strategic narrative, self-honesty)
      supporting: Jimit (market fluency, structured evidence)

  partner_materials:
    primary_audience: partner's team evaluating the integration
    reaches: "their customers, their sales team"
    target_feelings: [confidence, relief, warmth]
    mix:
      technical_precision: 30   # what the integration does
      operator_empathy: 25      # shared customer problem
      warmth_and_humor: 15      # we're good to work with
      self_honesty: 15          # honest about what each brings
      strategic_narrative: 15   # why together > apart
    notes: |
      Avoid "strategic partnership" language. Say what you're actually
      doing together. "When CoreWeave operators need to see what's
      happening at the circuit level, Verdigris data shows up in their
      existing monitoring stack" beats "Verdigris and CoreWeave have
      formed a strategic partnership."
    voice_sources:
      primary: Jimit (outside-in market positioning)
      supporting: Seren (structured, respectful cross-org communication)
      accent: Mike (industry insider credibility)

  linkedin_post:
    primary_audience: industry peers, potential hires, customers
    reaches: "their network, their team, recruiters"
    target_feelings: [warmth, curiosity, belonging]
    mix:
      warmth_and_humor: 30      # human first
      personal_connection: 25   # real person posting, real story
      technical_precision: 20   # one specific insight
      mission_gravity: 15       # why this work matters
      self_honesty: 10          # honest takes
    notes: |
      Named authorship (Jon, Mark, Thomas, or team members — not
      "Verdigris"). LinkedIn rewards personal voice. The best posts
      share one specific thing the author learned, not a product
      announcement. "I spent last week in a data center in Virginia
      and here's what I saw" > "Exciting to announce our new
      partnership with..."
    voice_sources:
      primary: Josh (builder energy, informal authenticity)
      also: Any team member posting as themselves — the key is specificity

  recruiting_outreach:
    primary_audience: specific person being reached out to
    reaches: "just them (unless they forward it)"
    target_feelings: [curiosity, warmth, belonging]
    mix:
      personal_connection: 35   # why THIS person specifically
      warmth_and_humor: 20      # human, not recruiter-bot
      technical_precision: 20   # what Verdigris actually does
      mission_gravity: 15       # worth their time
      self_honesty: 10          # honest about stage
    notes: |
      Use individual founder voice profiles (Jon or Thomas), not brand
      voice. Reference specific things about the person's work. Always
      leave the referral door open. See founder voice profiles for
      detailed guidance.
    voice_sources:
      primary: Jon or Thomas (founder voice)
      supporting: Seren (honest mutual-fit evaluation)

  # ── Sales collateral (decks) ────────────────────────────────────────
  # Added 2026-05-02 from the customer pilot kickoff review (Z2O-1318 through Z2O-1323).
  # Per-genre voice mixes for the slides cell. See categories/slides/ for full specs.

  pilot_kickoff_deck:
    primary_audience: customer team executing the pilot (sponsor + working lead + engineers)
    reaches: "their VP, their leadership chain, their procurement team, future participants in the engagement"
    target_feelings: [confidence, clarity, mutual ownership]
    mix:
      operator_empathy: 35      # the customer team is busy; respect their attention
      technical_precision: 30   # this deck is the canonical record of what was agreed
      self_honesty: 15          # what's experimental, what's mature, what's still TBD
      personal_connection: 10   # named team members, role labels first
      strategic_narrative: 10   # why this pilot fits the bigger picture
    notes: |
      The pilot is signed -- the customer team does NOT need founder voice
      reassurance. They need to feel the team they'll work with for 90 days
      already understands their world. Mike's field-credibility voice
      ("we replaced RPC2 2 of 2 in Aurora", "looks like a bad power supply")
      makes the install plan, the success criteria, and the team slide read
      as "we have lived your problems." Thomas's operational voice carries
      the structure (timeline, decisions, escalation). Jon's bench-engineer
      voice grounds the data-flow and hardware-install slides.

      Avoid the pitch-deck trope of "Why Verdigris" -- the contract is signed;
      those slides erode trust at this stage.

      The deck must be re-readable a week later by someone who wasn't in the
      room. Specific numbers always: "Hardware install 2026-06-15, four sites"
      not "in mid-June across some sites." See categories/slides/pilot-kickoff.md
      for the full spec including required slide structure.
    voice_sources:
      primary: Mike (field credibility, operator empathy from actual customer-site work)
      supporting: Thomas (operational clarity, structured-not-corporate)
      accent: Jon (technical depth on data flow / hardware install slides)

  internal_team_deck:
    primary_audience: Verdigris team members coordinating an active engagement
    reaches: "engagement working group, leadership, board-prep material before it reaches the board"
    target_feelings: [clarity, speed, mutual accountability]
    mix:
      technical_precision: 35   # internal coordination requires concrete state
      self_honesty: 25          # what's slipping, what's blocked, what's working
      operator_empathy: 15      # the customer's perspective, even in internal docs
      strategic_narrative: 15   # why this account matters
      personal_connection: 10   # named owners, named risks
    notes: |
      Internal teams are heterogeneous -- engineers need technical precision,
      operators need empathy, GTM needs market context. A single-voice recipe
      turns brittle.

      - Thomas leads (operational, structured-but-transparent; self_honesty 10
        from his profile makes him the right voice for "what's slipping")
      - Mike supports (translates engineering reality into operator-readable
        status; field credibility 8 grounds the customer-side perspective even
        when the audience is internal)
      - Jon co-supports (technical_precision 9 from his profile is load-bearing
        for engineering status slides; the bench-diagnostic register 'looks like
        a firewall issue to me' is what makes internal technical coordination
        credible). Added 2026-05-02 after Loop 3 adversarial review caught the
        gap -- internal coordination cannot lead with operations-speak alone.
      - Jimit accents (connects the engagement to market signals when relevant)

      Internal jargon is fine -- the audience is Verdigris-only and faster
      reading beats translation. Week-N notation acceptable when the pilot
      timeline is the running thread; always pair with absolute dates for
      decision deadlines. The diction rules for external genres (e.g., "exit
      criteria" → "expansion criteria") do NOT apply here.

      Customer name in customer's preferred form. Always. The customer
      reading this internal deck second-hand should still feel addressed
      with care.
    voice_sources:
      primary: Thomas (operational, transparent)
      supporting: Mike (technical translation, operator-readable status)
      co_supporting: Jon (engineering status, bench-diagnostic credibility)
      accent: Jimit (market context when the engagement intersects strategy)

  customer_101_deck:
    primary_audience: prospect's initial evaluator at the first substantive meeting
    reaches: "their team, their boss, their procurement team if they advance"
    target_feelings: [curiosity, confidence, warmth]
    mix:
      operator_empathy: 30      # the prospect is busy and skeptical; show you understand
      mission_gravity: 25       # why this matters, why now
      personal_connection: 15   # named individuals, customer's name throughout
      technical_precision: 15   # one specific number, one specific evidence point
      warmth_and_humor: 15      # they should leave wanting another meeting
    notes: |
      A first-meeting prospect needs to feel HEARD, not pitched at.
      Seren's diplomatic precision and people intelligence ("from my
      perspective", "I will just quietly leave this here") makes the
      prospect feel the team understands customer constraints before the
      product is positioned. Mike's field credibility supplies the
      operator-recognizable specifics (OCP-conference observations, real
      installation patterns). Mark earns the close slide and the "why now"
      framing -- the founder voice is decisive on mission gravity but
      should NOT carry every slide; a story-led voice at 50% of the deck
      overpowers a first meeting where the audience is skeptical.

      Say "Verdigris" aloud on the title slide the first time.
      Capitalize "Signals" as a proper noun when referring to the product.
      Never use internal product-roadmap jargon ("EVD atom", "L3 evaluator",
      "site-coherence gate") -- translate to customer language.

      First-person inclusive ("we'd want to validate X with you") beats
      third-person ("customers see Y"). See categories/slides/customer-101.md
      for the full spec.
    voice_sources:
      primary: Seren (operator_empathy 8 + personal_connection 8; carries diction, framing slides, body register; ~45% of the deck)
      supporting: Mike (field credibility on evidence slides; ~25%)
      accent: Mark (strategic_narrative 9 + mission_gravity 9 on close slide and "why now" framing; ~25-30%, bounded to specific slides not woven through every one)
      note_on_mission_gravity: |
        Seren's profile does NOT have mission_gravity in her strongest_ingredients.
        The mission_gravity weight in the mix (25%) lands on the Mark-accent
        slides (close, why-now) -- not on Seren's slides. This explicit
        slide-level allocation prevents Seren from being asked to carry an
        ingredient her profile doesn't deliver. Loop 3 adversarial review
        surfaced this misallocation in the first draft.

  customer_201_deck:
    primary_audience: prospect's technical evaluator post-101, reviewing leave-behind without a Verdigris presenter
    reaches: "engineering, IT, reliability, facilities; technical evaluators reading the deck three times before recommending pilot commitment"
    target_feelings: [confidence, precision, no-surprises]
    mix:
      technical_precision: 35   # this audience trusts numbers over adjectives; imprecision is costly
      operator_empathy: 20      # integration patterns named in the customer's language
      strategic_narrative: 20   # operational structure on pilot scope + decisions
      personal_connection: 15   # named technical contacts the evaluator can call
      warmth_and_humor: 10      # human, not robotic; never glib
      mission_gravity: 0        # founder voice forbidden in 201 context
    notes: |
      The customer 201 is leave-behind by default: the technical evaluator
      reads it without a presenter, three times, in their own time. The
      register is engineering precision, not founder narrative. Mark
      deliberately stays out -- founder voice in a technical-evaluator
      context reads as "they're closing me before I've finished my diligence."

      Mike carries integration translation and evidence ("I have actually
      done this in a comparable deployment"). Jon carries data flow,
      performance, security -- technical_precision 9 is load-bearing.
      Thomas carries the pilot-scope-template and decisions framing -- the
      operational backbone of the engagement transition. Seren has a light
      accent on the "decisions you owe yourselves" slide so it lands as
      "I will just quietly leave this here" rather than homework.

      Diction: spell out MCP on first use; capitalize Signals as a proper
      noun; give numbers not adjectives ("p95 alert latency 280ms" beats
      "sub-second alerts"); anchor "pilot" with scoping specifics ("pilot
      deployment of N panels at one site for X weeks") not vague "we'll
      figure it out as we go."

      See categories/slides/customer-201.md for the full spec.
    voice_sources:
      primary: Mike (field credibility, integration translation, evidence; ~40% of the deck)
      supporting: Jon (engineering precision on data flow, performance, security; ~35%)
      accent: Thomas (operational structure on pilot scope template and decisions slides; ~15%)
      light_accent: Seren (diplomatic precision on the "decisions you owe yourselves" slide; ~10%)
      forbidden: Mark (founder voice reads as a close in technical-evaluator context; this genre lives in engineering register)

  one_pager:
    primary_audience: scanner without a presenter -- prospect, partner AE, customer team member
    reaches: "anyone who picks up the page cold; forwarded to their team"
    target_feelings: [clarity, confidence, curiosity]
    mix:
      technical_precision: 35   # the page must be specific; vague one-pagers fail
      operator_empathy: 25      # the reader is busy; respect their attention
      strategic_narrative: 20   # one clear argument carries the page
      personal_connection: 10   # named individuals where genuinely attributed
      warmth_and_humor: 10      # human, not corporate boilerplate
    notes: |
      A one-pager is read cold. There's no presenter to bridge ambiguity, no
      calendar to anchor follow-up. The voice has to land on first scan.

      Solution overview genre leans Mike (field credibility) + Jon (technical
      precision); the three callouts are operator-recognizable specifics.

      Comparative genre leans Jimit (market fluency) + Mark accent (the
      thesis block carries the argument; Mark's voice is decisive on
      mission-anchored framing). Mark is NOT primary on the body of a
      one-pager: scanned cold, the founder voice reads as a brochure;
      reserve him for the thesis block when the argument needs founder weight.

      Diction discipline applies (audience_fit_diction): never use internal
      jargon; spell out acronyms on first use; specific numbers always; CTA
      action is concrete.

      See categories/one-pagers/guide.md for full spec including the genre
      decision tree.
    voice_sources:
      primary_solution_overview: Mike (field credibility, operator-recognizable specifics on the 3 callouts)
      supporting_solution_overview: Jon (technical precision on capability claims)
      primary_comparative: >
        Mark (thesis block carries the argument; strategic_narrative 9 +
        mission_gravity 9 from his profile is load-bearing for the comparative
        thesis. Jimit was primary in the first draft; flipped 2026-05-02 after
        Loop 3 adversarial review showed the thesis is the central element of
        a comparative one-pager and Mark's argumentative voice carries it
        better than Jimit's positioning voice)
      supporting_comparative: Jimit (outside-in market positioning across the numbered list items; market_fluency translates Verdigris claims into the prospect's commercial frame)
      accent_both: Seren (operator-empathy framing on body copy; rare on a one-pager but available for the diction pass)

  partner_enablement_deck:
    primary_audience: partner's account executive (a salesperson at another company)
    reaches: "the partner's customers, the partner's sales leadership"
    target_feelings: [confidence, clarity, joint ownership]
    mix:
      technical_precision: 25   # partner AE needs to answer common objections without escalating
      operator_empathy: 25      # their customer's perspective; their AE's perspective
      personal_connection: 20   # mutual respect, not extraction
      strategic_narrative: 20   # why Verdigris x partner > either alone
      warmth_and_humor: 10      # we are good to work with
    notes: |
      Mirrors the partner_materials recipe pattern: Jimit primary (outside-in
      market positioning), Seren supporting (people-first framing makes the
      partner feel like a collaborator, not an extraction target), Mike accent
      (industry insider credibility for objection-handling slides).

      The earlier draft of this recipe demoted Seren to accent; the demotion was
      reversed after adversarial review showed her diplomatic precision is
      load-bearing for partner relationships, not decorative. A partnership
      that reads as "you'll sell our product" instead of "we'll sell together"
      fails before it starts.

      Refer to the partner's customers as "your customers" or by named
      accounts. Generic "partner" reads as boilerplate; name the partner
      explicitly when known. "Co-sell" preferred over "resell." "Margin" is
      fine; partner AEs care. "Generally available" is internal product-
      roadmap language; name the actual product status (in pilot,
      fleet-deployed, in production at N customers) instead.

      See categories/slides/partner-enablement.md for the full spec.
    voice_sources:
      primary: Jimit (outside-in market positioning)
      supporting: Seren (people intelligence, diplomatic precision on co-sell mechanics)
      accent: Mike (industry insider credibility for objection-handling slides)

# ── Audience-fit diction (Z2O-1321) ────────────────────────────────
# Cross-recipe diction rules. Apply on top of the recipe's voice mix.
# Lives here (rather than rules/visual-rules.yml) because diction is
# a voice concern, not a visual one. See categories/slides/pilot-kickoff.md
# § Audience-fit diction for the full rationale and additional examples.

audience_fit_diction:
  applies_to:
    - pilot_kickoff_deck
    - customer_101_deck
    - partner_enablement_deck
    - one_pager
    - case_study
    - investor_update    # external context
    - partner_materials
  not_applied_to:
    - internal_team_deck
    - any internal-only artifact
  rules:
    - id: "voice.audience-fit-diction.exit-criteria"
      description: |
        Replace "exit criteria" with "expansion criteria" or "graduation criteria"
        in customer-facing artifacts. "Exit" implies the customer is leaving
        Verdigris. The opposite is intended: the criteria mark the path from
        pilot to expanded engagement. "Expansion criteria" carries the right
        vector for an executive audience. "Graduation criteria" works when the
        audience is technically inclined.
      severity: "warning"
      maturity: "experimental"
      linear_issue: "Z2O-1321"
      anti_examples:
        - "Define exit criteria for the pilot"
        - "Once exit criteria are met, we'll review next steps"
      good_examples:
        - "Define expansion criteria for the pilot"
        - "Once graduation criteria are met, we move to fleet deployment"
    - id: "voice.audience-fit-diction.customer-pronoun"
      description: |
        In a deck addressed TO a customer, refer to them by name or "your team",
        not third-person "customers." "Customer" is third-person — fine for case
        studies discussing other customers, wrong for an artifact the audience
        is reading.
      severity: "warning"
      maturity: "experimental"
      anti_examples:
        - "Customers typically see a 30% reduction in unplanned outages"
      good_examples:
        - "Your team should expect to see a 30% reduction in unplanned outages"
        - "At a similar fleet scale, we'd expect to see a 30% reduction in unplanned outages"
    - id: "voice.audience-fit-diction.pilot-phase"
      description: |
        Replace "pilot phase" with "first 90 days" or "this engagement". "Phase"
        is internal Verdigris jargon for the staged engagement model. Customers
        don't think in phases.
      severity: "warning"
      maturity: "experimental"
      anti_examples:
        - "During pilot phase 1, we'll install hardware..."
      good_examples:
        - "In the first 90 days, we'll install hardware..."
        - "For this engagement, the install plan is..."
    - id: "voice.audience-fit-diction.generally-available"
      description: |
        Replace "generally available" / "GA" with concrete product status: "in
        production", "fleet-deployed", "in production at N customers". GA is
        internal product-roadmap language. Customers want to know if it works
        at their scale, not where it sits in the roadmap.
      severity: "warning"
      maturity: "experimental"
      anti_examples:
        - "Signals goes GA in Q3"
        - "This feature is currently in beta, GA target Q4"
      good_examples:
        - "Signals is fleet-deployed at 800+ rectifiers across 4 customers"
        - "This feature is in production at one large customer; broader rollout planned for Q4"
    - id: "voice.audience-fit-diction.internal-jargon-pass"
      description: |
        Final read-through before distribution: walk every slide, mark any
        Verdigris-internal jargon, replace with operator-readable language.
        Common offenders include EVD atom, L3 evaluator, site-coherence gate,
        viz placement, brand-positioning evaluator. The voice recipe sets the
        dial; the diction pass sharpens individual word choices.
      severity: "warning"
      maturity: "experimental"
      applies_during: "production checklist step 6, see workflows/sales-collateral.md"
