Deltas-only against pilot-kickoff. Graduates with the cell after 2 customer-facing decks ship.
# Customer 201 Deck
This genre is the technical follow-up to a customer 101. Audience is the prospect’s technical evaluator — an engineering manager, DevOps lead, IT director, or facilities engineer who is being asked “is this real, does it integrate, is it safe?” by their colleagues. They were either in the customer 101 meeting and want deeper detail, or they were forwarded the recap and need to make a build/buy/wait recommendation without a Verdigris presenter on the call.
The deck’s job is to land technical depth without a Verdigris technical contact in the room. It is the proof a technical evaluator brings into their internal review meeting and re-reads three times before committing to a pilot. Architecture diagrams, integration specifics, security/compliance answers, performance claims with measurement context — all the detail a customer 101 deliberately keeps thin.
This guide documents only the deltas against pilot-kickoff.md. Read that spec first.
## What changes
| Axis | Pilot kickoff (parent) | Customer 201 (this genre) |
|---|---|---|
| Length | 12-20 slides | 20-35 slides |
| Voice primary | Mike (field credibility) | Mike (field credibility on integration + evidence) |
| Voice supporting | Thomas | Jon (engineering precision on data flow, performance, security) |
| Voice accent | Jon | Thomas (operational structure on pilot scoping) |
| Voice forbidden | (none) | Mark stays out. Founder voice in a technical-evaluator context reads as “they’re closing me already” |
| CTA pattern | “Pilot scope + decision date for expansion” | “Decision date for pilot commitment + technical questions still open” |
| Confidentiality default | CUSTOMER-CONFIDENTIAL | CUSTOMER-CONFIDENTIAL (assumes MNDA in place; if not, escalate to legal before sharing) |
| Logomark | full lockup | full lockup (same as parent) |
| Date format | absolute calendar dates | absolute (same as parent) |
| Default density | dual-use | leave-behind (~150 words/slide; reader carries narrative; presenter not assumed) |
## What changes structurally
The 18-slide pilot-kickoff structure does not apply — pilot kickoff assumes a contract is signed. Customer 201 has its own canonical structure tuned to the technical-eval moment:
- Title slide (required) — Verdigris + the prospect’s specific technical use case
- Recap from customer 101 (required) — 1 slide; the takeaways from the first meeting that this deck builds on. The “you saw this” anchor.
- Architecture overview (required) — Verdigris signals to MCP to Signals Engine to outputs. One canonical architecture diagram.
- Integration model (required, 1-2 slides) — how Verdigris connects to the customer’s existing infrastructure. BMS, EMS, panels, conduit, network paths.
- Security & compliance (required) — SOC 2 status, data residency, MNDA + MNDA exceptions, what auditable controls exist
- Data flow (required) — what data leaves the facility, where it goes, retention, encryption, customer access controls
- Performance & scale (required) — sample rates (8kHz means what for their fleet), throughput, latency, supported deployment scales
- Sample outputs (required, 2-3 slides) — what they’ll actually see in production. Dashboards, alerts, reports, API responses. Real screenshots from a comparable deployment.
- Pilot scope template (required) — what a pilot looks like; scoping anchors (sites, time window, success criteria); what’s negotiable vs not
- Pricing & licensing model (required) — rate card, pilot fee, deployment-class pricing. Open about the structure; doesn’t quote a specific number unless one is anchored
- References (required, 2-3 slides) — case study summaries with anchor metrics + named technical contacts the evaluator can call
- Decisions you owe yourselves (required) — what the customer needs to confirm internally before pilot (security review, change-management approval, budget owner, vendor onboarding)
- Decisions we owe you (required) — what Verdigris will commit to during pilot scoping (response times, technical contact, escalation path)
- Open technical questions (per engagement) — explicit list of questions the customer hasn’t asked yet but typically come up at this stage
- Close: decision date for pilot commitment (required) — explicit calendar date, what gets sent in advance of that date, what happens after
Slides marked (required) appear in every customer 201 deck. Slides marked (per engagement) appear when the conversation has surfaced specific open questions worth pre-empting.
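The required/per-engagement split above lends itself to a mechanical check. A minimal sketch: the section ids below are illustrative names invented for this example, not identifiers from the template system.

```python
# Sketch: validate that a customer 201 outline contains every required slide.
# Section ids mirror the canonical structure above; the outline format
# (a flat list of section ids) is an assumption for illustration.

REQUIRED_SECTIONS = [
    "title",
    "recap-101",
    "architecture-overview",
    "integration-model",
    "security-compliance",
    "data-flow",
    "performance-scale",
    "sample-outputs",
    "pilot-scope-template",
    "pricing-licensing",
    "references",
    "decisions-you-owe-yourselves",
    "decisions-we-owe-you",
    "close-decision-date",
]

def missing_required(outline: list[str]) -> list[str]:
    """Return required sections absent from the outline, in canonical order."""
    present = set(outline)
    return [s for s in REQUIRED_SECTIONS if s not in present]
```

Note that `open-technical-questions` is deliberately absent from the required list: it is a per-engagement slide, so its absence is never flagged.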
## What stays the same
- 12-column grid; 1280×720 master; footer band 32pt
- Logomark position + size; full lockup variant
- Confidentiality marking on every slide (CUSTOMER-CONFIDENTIAL default; tier color is `--vd-conf-customer-on-lightwarm` amber)
- Role labels in templates (“Verdigris VP Engineering”, not “Jon Chu”; the produced deck adds the name alongside)
- Table dimensions, figure treatment, PDF-exportable output
## Decision framework: where to land in the bounds
| Boundary | Floor (small) | Default | Ceiling (large) |
|---|---|---|---|
| Slide count (20-35) | 20 slides when the prospect has a tightly-scoped technical question (e.g., “does this integrate with our BMS?”) and most of the depth lives in 1-2 specific sections. | 25-28 slides for a typical post-101 follow-up with mainstream technical due-diligence: integration + security + data flow + sample outputs + pilot scope. | 35 slides when the customer’s technical evaluation team is broad (architecture, security, ops, procurement) and each surface needs its own depth pass. Above 35, the deck is a binder; split into a deck + appendix or escalate to a workshop format. |
| Sample outputs (2-3) | 2 sample slides for a customer with a single deployment surface (e.g., one building type, one anchor use case). | 3 sample slides for typical evaluations: the operator dashboard, an alert/anomaly view, and the API/integration output. | 3 is the ceiling. Above 3, the deck stops feeling like proof and starts feeling like a product tour. Move additional sample views to an appendix or a live demo. |
| References (2-3 slides) | 2 reference slides when the prospect’s segment has a strong anchor case study they’ve already reviewed. | 3 reference slides for typical evals: anchor case study + segment-adjacent reference + outcome-summary roll-up. | 3 is the ceiling — beyond that, link to the case-study library rather than padding the deck. |
If a customer 201 deck pushes to the ceiling on three or more boundaries simultaneously, the engagement is no longer at “technical follow-up” stage. Either the prospect is a strategic relationship requiring its own playbook, or the deck is scope-creeping into a pilot-kickoff. Pause and check with the engagement lead.
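The ceiling rule above can be made mechanical. A minimal sketch, assuming the deck's measured values arrive as a plain dict (the key names are invented for this example):

```python
# Sketch: flag when a deck sits at the ceiling on three or more boundaries,
# per the escalation rule above. Ceiling values come from the decision table.

CEILINGS = {"slides": 35, "sample_outputs": 3, "references": 3}

def at_ceiling_count(deck: dict) -> int:
    """Count boundaries where the deck is at or above the ceiling."""
    return sum(deck.get(key, 0) >= ceiling for key, ceiling in CEILINGS.items())

def needs_escalation(deck: dict) -> bool:
    # Three or more boundaries at ceiling: pause and check with the
    # engagement lead (the deck may be scope-creeping into pilot-kickoff).
    return at_ceiling_count(deck) >= 3
```

With only three boundaries defined, "three or more" means all of them; if the table grows, the threshold stays the interesting knob.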
## Density: leave-behind by default
The customer 201 deck is not a presenter-led artifact by default. Default density is leave-behind (~150 words/slide cap; reader carries narrative; speaker notes not assumed). Captions are sentence-level on figures; bullet text reads as standalone explanations rather than presenter cues.
If the technical follow-up happens in a live call where a Verdigris engineer presents (e.g., a Jon-led architecture deep-dive), override to density: live-spoken on the deck frontmatter. The same content survives both modes; what changes is the words/slide cap, the headline:body ratio, and whether speaker notes are required.
See pilot-kickoff.md § Presentation density for the canonical density spec.
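The words/slide cap is checkable in one line. A sketch, assuming the leave-behind cap of ~150 words from above; the live-spoken cap is an invented placeholder, since the canonical number lives in pilot-kickoff.md § Presentation density:

```python
# Sketch: enforce the per-slide word cap for each density mode.
# The leave-behind cap (150) comes from this spec; the live-spoken
# cap (40) is an assumed placeholder -- see pilot-kickoff.md for
# the canonical density numbers.

WORD_CAPS = {"leave-behind": 150, "live-spoken": 40}

def over_cap(slide_text: str, density: str = "leave-behind") -> bool:
    """True when a slide's body text exceeds the cap for its density mode."""
    return len(slide_text.split()) > WORD_CAPS[density]
```

The same deck content survives both modes; only the cap (and the headline:body ratio, which this sketch ignores) changes.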
## Diction adjustments specific to customer 201
The Z2O-1321 diction rule applies (avoid internal jargon). Customer 201 has additional adjustments for the technical-evaluator audience:
- “Signals” — capitalize as a proper noun for the product. Spell out the value (waveform telemetry at 8kHz) on first use because technical evaluators care about the precision.
- “MCP” — spell out on first use: “Model Context Protocol (MCP)”. Technical evaluators may know MCP from another vendor’s marketing; clarify Verdigris’s specific implementation.
- “EVD atom / canonical claim” — never use; substitute “evidence” or the underlying number with a citation.
- “Pilot” — fine to use, but anchor it: “pilot deployment of N panels at one site for X weeks”. A vague “pilot” without scoping anchors reads as “we’ll figure it out as we go,” which is the wrong signal at this stage.
- “Latency / throughput / sample rate” — give numbers. Technical evaluators trust numbers over adjectives. “Sub-second alerts” is weaker than “p95 alert latency 280ms”.
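Several of these diction rules are lintable. A minimal sketch; the pattern set mirrors the list above but is illustrative, not exhaustive:

```python
# Sketch: a minimal diction lint for customer 201 drafts. Banned terms
# and vague-speed patterns are drawn from the adjustments above; the
# rule set here is illustrative only.
import re

BANNED = ["EVD atom", "canonical claim"]  # internal jargon, never customer-facing
VAGUE_SPEED = re.compile(r"\bsub-second\b|\bblazing(?:ly)? fast\b", re.IGNORECASE)

def diction_issues(text: str) -> list[str]:
    issues = [f"banned term: {term!r}" for term in BANNED
              if term.lower() in text.lower()]
    if VAGUE_SPEED.search(text):
        issues.append("vague speed adjective; give a number (e.g. p95 latency in ms)")
    return issues
```

A real lint would also check for an unanchored “pilot” and for “MCP” without its first-use expansion, but those need sentence context a regex pass does not have.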
## Template vs. produced
Customer 201 decks are template-driven on the architecture/integration/security frame and customer-specific on the use-case framing. Placeholders cluster around the prospect’s deployment context.
| Slot | Template stage | Produced stage |
|---|---|---|
| Title slide | Verdigris technical follow-up: <span class="vd-template">[FIELD: prospect name + deployment context, e.g. "Acme — colocation portfolio"]</span> | Verdigris technical follow-up: Acme — colocation portfolio |
| Recap | What we discussed in our 101 on <span class="vd-template">[FIELD: customer 101 date]</span>: <span class="vd-template">[FIELD: 3-4 sentence recap of the wedge + their problem framing]</span> | What we discussed in our 101 on 2026-05-13: Verdigris's 8kHz telemetry surfaces a fault class your existing 1-min metering cannot. Your team identified motor harmonic signature analysis as the first surface that would justify a pilot. |
| Architecture diagram caption | <span class="vd-template">[FIELD: caption naming the diagram and its relevance to the prospect's deployment]</span> | Verdigris signal pipeline for a colocation portfolio. Two integration points (panel CT + BMS data feed) shown. Customer's existing BMS retains primacy. |
| Pricing model | <span class="vd-template">[FIELD: pricing structure (rate card, pilot fee, deployment-class) — names structure not numbers unless an anchored quote exists]</span> | Pilot fee structure: $X/site/month for first N panels, capped pilot scope (3 sites maximum), no automatic renewal — pilot extension is a separate decision. |
| Close: decision date | Decision date for pilot commitment: <span class="vd-template">[FIELD: absolute date]</span>; <span class="vd-template">[FIELD: what gets sent before that date]</span> | Decision date for pilot commitment: 2026-06-12; Jon will send a draft pilot SOW with the Acme legal team's edits incorporated by 2026-06-05. |
The template stage is what an agent generates from the spec; the produced stage is what a human (or evidence-grounded agent) fills in. Never ship the produced stage without source evidence — the recap quotes the customer 101 meeting notes; the pricing structure references the pricing brief; the decision date is on someone’s calendar.
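The "never ship with template-stage content" rule can be enforced by scanning for leftover placeholders. A sketch, using the `vd-template` span markup and `[FIELD: ...]` convention shown in the table above:

```python
# Sketch: refuse to ship a deck that still contains template-stage
# placeholders. Any remaining [FIELD: ...] token or vd-template span
# is treated as a blocker -- the mechanical form of "never ship the
# produced stage without source evidence".
import re

PLACEHOLDER = re.compile(r'\[FIELD:[^\]]*\]|class="vd-template"')

def unfilled_placeholders(slide_html: str) -> list[str]:
    """Return every template placeholder still present in a slide."""
    return PLACEHOLDER.findall(slide_html)
```

A produced slide should return an empty list; anything else means a human (or evidence-grounded agent) has not finished the fill pass.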
## Voice recipe
The customer_201_deck recipe in voice/recipes.yaml sets:
- Mike (primary, ~40%) — field credibility, integration translation, evidence (`voice/team/mike-mahedy.yaml`). Technical evaluators trust operator-recognizable specifics over founder narrative. Mike carries the integration-model and evidence sections.
- Jon (supporting, ~35%) — engineering precision on data flow, performance, security (`voice/team/jon-chu.yaml`). Jon’s `technical_precision` (9) is load-bearing here; technical evaluators read these slides three times and notice imprecision. Carries data flow, performance & scale, security & compliance.
- Thomas (accent, ~15%) — operational structure on pilot scoping (`voice/team/thomas-chung.yaml`). Thomas’s voice carries the pilot-scope-template and decisions slides — the operational backbone of the engagement transition.
- Seren (light accent, ~10%) — diplomatic framing on the “decisions you owe yourselves” slide (`voice/team/seren-coskun.yaml`). The slide that asks the customer to do internal homework benefits from Seren’s “I will just quietly leave this here” register; without it, the slide reads as a homework assignment.
Mark deliberately stays out of customer 201. Founder voice in a technical-evaluator context reads as “they’re trying to close me before I’ve finished diligencing,” which is exactly the wrong signal. Mark’s voice belongs in customer 101 (close + why-now) and in pilot-kickoff (founder commitment). Customer 201 lives in the engineering register.
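Since the recipe is marked "to be added" in voice/recipes.yaml, here is a sketch of what it might look like. Every field name below is an assumption modeled on the weights above, not the recipe file's actual schema:

```yaml
# Sketch only: assumed schema for the customer_201_deck recipe in
# voice/recipes.yaml. Weights mirror the percentages above; field
# names are illustrative, not the canonical recipe format.
customer_201_deck:
  primary:      {voice: mike-mahedy,   weight: 0.40}   # integration, evidence
  supporting:   {voice: jon-chu,       weight: 0.35}   # data flow, perf, security
  accent:       {voice: thomas-chung,  weight: 0.15}   # pilot scoping, decisions
  light_accent: {voice: seren-coskun,  weight: 0.10}   # "decisions you owe yourselves"
  forbidden:    [mark-chung]                           # founder voice stays out
```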
## Voice at a glance
A producer reading this cell should be able to answer “what voice mix am I writing in?” without leaving the page. Pulled directly from the customer_201_deck recipe and the linked profile YAMLs.
Mike — primary (Profile: voice/team/mike-mahedy.yaml). Field credibility, integration translation. Mike’s industry-insider voice supplies the “I have actually done this in a comparable deployment” register that earns trust from technical evaluators.
“I was at [conference] and they presented…”
Carries: integration model, sample outputs, evidence + references — about 40% of the deck. Body register throughout.
Jon — supporting (Profile: voice/team/jon-chu.yaml). Engineering precision, bench-diagnostic credibility. Technical evaluators reading slides three times notice imprecision; Jon’s voice is calibrated against that.
“Looks like a firewall issue to me”
Carries: architecture overview, data flow, performance & scale, security & compliance — about 35% of the deck.
Thomas — accent (Profile: voice/team/thomas-chung.yaml). Operational, transparent. Carries the pilot-scope-template and decisions framing — the operational backbone of the engagement transition.
“people should walk away feeling good”
Carries: pilot scope template, decisions you owe yourselves, decisions we owe you, close — about 15% of the deck.
Seren — light accent (Profile: voice/team/seren-coskun.yaml). Diplomatic precision on the “homework for the customer” slides.
“I will just quietly leave this here”
Carries: decisions you owe yourselves (light touch); about 10% of the deck.
## Why this genre exists
After the customer 101, the prospect’s technical evaluator needs a deeper artifact they can review without a Verdigris presenter on the call. The customer 101 is calibrated for first-meeting register (Seren-led, story-driven, light on detail); the pilot kickoff assumes a signed contract. The gap was being filled by recycled customer-101 decks padded with extra technical slides — which lost the customer 101’s diplomatic register without gaining the depth a technical evaluator needs.
The customer 201 genre exists so the technical-eval moment gets its own calibrated artifact: leave-behind register, engineering voice, depth on the surfaces that matter for a build/buy/wait decision.
## What this cell does NOT cover
- First-meeting introductions. When the audience hasn’t seen Verdigris before, use `customer-101.md`.
- Signed-pilot scoping. When the customer has signed a pilot agreement and you’re scoping the engagement, use `pilot-kickoff.md`.
- Partner channels. When the audience is a channel partner’s account executive (not the end customer’s technical team), use `partner-enablement.md`.
- Internal team coordination. When the audience is Verdigris-only, use `internal-team.md`.
- Single-page leave-behinds. When the artifact is a one-page decision aid (not multi-page technical detail), use `categories/one-pagers/guide.md`.
- Operational diagnostic reports. When the artifact is an SE-produced finding for an existing customer, use `categories/service-reports/portfolio-diagnostic.md`.
## See also
- Pilot kickoff (primary spec)
- Slides index
- `workflows/sales-collateral` — production guide spanning all collateral types
- `voice/recipes.yaml` — `customer_201_deck` recipe (to be added)
- `voice/team/mike-mahedy.yaml`, `jon-chu.yaml`, `thomas-chung.yaml`, `seren-coskun.yaml` — voice profile sources