This is the first service-reports genre. It graduates with the cell after two SE-produced reports ship using these patterns.
Portfolio Diagnostic
A portfolio diagnostic is a survey-shaped report covering an existing customer’s full deployment. It presents multi-tier findings — whole-system patterns at the top, unit-level investigations underneath — anchored to citable thresholds, and surfaces 1-2 spotlights with figures for the highest-signal findings.
This is the load-bearing genre for the service-reports cell. It captures the shape that Solutions Engineering produces most often: a portfolio survey that lands as a leave-behind to unblock a customer-side decision (do we keep this equipment? is this a fleet pattern or a unit issue? where do we send a tech?).
When to use this genre
Use portfolio-diagnostic when:
- The customer has a multi-site or multi-unit deployment in scope
- Findings naturally split into “whole-system patterns” and “unit-level investigations” (each tier has its own remediation register)
- A citable threshold (industry standard, regulation, or internal benchmark) anchors what counts as a finding
- 1-2 specific findings warrant a deeper spotlight with figures
Do NOT use portfolio-diagnostic when:
- The report covers a single-site / single-unit deep dive — that’s a different shape (the anticipated `single-site-investigation` genre)
- The report is regulator-facing — that’s `compliance-audit-report` (also anticipated, not built)
- The audience is a prospect rather than an existing customer — use `case-studies` instead
Required structure
A portfolio-diagnostic report has six sections in this order. Sections marked (required) are non-negotiable.
- Header callouts (required) — single-row stat strip with 3-4 portfolio-scale numbers. The reader gets the scope in a single glance: how many units in the portfolio, how many flagged, how many unit-level investigations recommended. This is the elevator pitch for the report.
- What this report covers (required) — one paragraph framing scope, time window, methodology anchor. Names the citable threshold. Bucket structure (e.g., Category A vs Category B) is named here, with the remediation register implied for each.
- Methodology (required) — threshold anchor block (with citation), detection gates, data filters, peer-comparison logic. Reader’s question on first read: “do I trust how this was measured?” Methodology answers it.
- Upstream check (per engagement) — when the threshold anchor depends on a clean upstream (e.g., supply voltage clean before treating downstream-equipment harmonics as causal), include this section. Skip when the threshold anchor is independent of upstream conditions.
- Findings by category (required) — multi-tier grouping. Whole-system patterns get one block; unit-level investigations get another. Each finding has its own row in a table with peer comparison. The remediation register differs by tier: system patterns require engineering investigation; unit-level findings are physical validation candidates.
- Spotlights (1-2, required) — deep dives on the highest-signal findings. Each spotlight has at least one figure (Pro Capture waveform, IR thermal, time-series chart, or per-unit comparison plot). Suggested validation steps follow the figure. References specific equipment specs / academic papers / standards where relevant.
- Reference appendix (required) — methodology details, source documents, citable standards (full bibliographic entry), internal evidence atom IDs.
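The section contract above is machine-checkable. Below is a minimal sketch, assuming a produced report has already been parsed into an ordered list of top-level section titles; the function and constants are hypothetical, not the shipped evaluator pipeline:

```python
# Hypothetical sketch, not the shipped evaluator: check that a produced
# report contains the required sections, in the required order.
REQUIRED = [
    "Header callouts",
    "What this report covers",
    "Methodology",
    "Findings by category",
    "Spotlights",
    "Reference appendix",
]
OPTIONAL = {"Upstream check"}  # per engagement; skipped when not applicable

def check_section_order(sections: list[str]) -> list[str]:
    """Return a list of violations; an empty list means the structure passes."""
    violations = [f"missing required section: {name}"
                  for name in REQUIRED if name not in sections]
    seen = [s for s in sections if s in REQUIRED]
    if seen != [s for s in REQUIRED if s in seen]:
        violations.append("required sections are out of order")
    violations += [f"unexpected top-level section: {s}"
                   for s in sections if s not in REQUIRED and s not in OPTIONAL]
    return violations
```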
What is not in a portfolio-diagnostic:
- Equipment replacement recommendations — service reports recommend investigation, not replacement (the customer’s engineering team decides). See `composition.advise-service-report.no-replacement-recommendation`.
- Marketing language (“Verdigris detected”, “we proactively identified”) — the report is third-person diagnostic, not first-person sales.
- Findings without threshold anchors — every finding traces to a citable threshold (`composition.advise-service-report.threshold-anchor-required`).
- Single-unit findings without peer comparison (`composition.advise-service-report.peer-comparison-required`).
- Multi-tier findings using the same remediation register (`composition.advise-service-report.dual-category-discipline`).
Decision framework: where to land in the bounds
| Boundary | Floor (small) | Default | Ceiling (large) |
|---|---|---|---|
| Page count (8-25) | 8 pages for a small-scope diagnostic: single equipment class, single building, narrow time window. | 15 pages for a typical portfolio diagnostic: multi-site survey, single equipment class, 14-30 day window, 1 spotlight. | 25 pages for multi-equipment-class or multi-site reports with 2 spotlights and an extended methodology section. Above 25, the report becomes hard to navigate and signals overscope. |
| Spotlights (1-2) | 1 spotlight when one finding dominates the diagnostic interest (the highest-signal pattern or unit). | 1-2 spotlights for a typical report covering both a system-level pattern and a unit-level standout. | 2 spotlights maximum. Above 2, each one’s signal-to-noise drops; consider whether the reader can absorb the depth. |
| Header callouts (3-4) | 3 callouts when the portfolio’s scope is straightforward to summarize (units, flagged, recommended). | 3-4 callouts for a typical report: scope, what’s flagged, breakdown by category. | 4 callouts when an additional dimension materially clarifies (e.g., “X sites covered” for a portfolio where site count differs from unit count). Above 4, the callout strip stops being scannable. |
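The same bounds can be expressed as data, which is how an evaluator could flag overscope mechanically. Here is a sketch, under the assumption that page count, spotlight count, and callout count are measured upstream; the names are illustrative, not the rules/visual-rules.yml schema:

```python
# Illustrative encoding of the bounds table above; not the actual rules schema.
BOUNDS = {
    "page_count": (8, 25),
    "spotlights": (1, 2),
    "header_callouts": (3, 4),
}

def check_bounds(measured: dict[str, int]) -> list[str]:
    """Flag any measured value outside its floor/ceiling."""
    return [f"{key}={measured[key]} outside [{floor}, {ceiling}]"
            for key, (floor, ceiling) in BOUNDS.items()
            if not floor <= measured[key] <= ceiling]

# Example: a 27-page draft with 3 spotlights signals overscope on both axes.
print(check_bounds({"page_count": 27, "spotlights": 3, "header_callouts": 4}))
```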
Threshold anchor discipline
Every finding traces to a citable threshold. The threshold appears once, in the methodology section, with full citation. Findings reference the threshold by short form (“exceeds the IEEE 519 individual harmonic limit”).
Acceptable threshold sources:
- Industry standards — IEEE Std 519-1992, NEC, ASHRAE, etc. Cite section + table where applicable.
- Regulations — LL97, IEEE 1547, EU AI Act, etc. Cite the regulation + relevant article.
- Published internal benchmarks — Verdigris-published reference deployments (e.g., the 800-rectifier benchmark from the UPS Rectifier Case Study, March 2026). Cite the case study + section.
- Manufacturer specs — e.g., partial-load tolerance for a specific module. Cite manufacturer + part number + spec sheet revision.
Unacceptable threshold sources:
- “Verdigris flagged this” — no citable anchor; reads as marketing.
- “Industry best practices” without a specific standard — vague; not auditable.
- Unsourced numerical thresholds — the reader cannot verify them.
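One way to enforce the acceptable/unacceptable split is to require structured citation fields, so an unsourced number cannot be constructed at all. A minimal sketch with hypothetical types (not part of the cell’s rules); the 12% example value is the IEEE 519 individual harmonic limit this document cites later:

```python
from dataclasses import dataclass

# Hypothetical structure for a threshold anchor. The point: every field
# that makes the anchor citable is required, so "industry best practices"
# or a bare number cannot pass validation.
ACCEPTABLE_SOURCES = {"industry_standard", "regulation",
                      "published_internal_benchmark", "manufacturer_spec"}

@dataclass(frozen=True)
class ThresholdAnchor:
    source_type: str  # one of ACCEPTABLE_SOURCES
    document: str     # e.g. "IEEE Std 519-1992"
    locator: str      # section/table/article, e.g. "§ 10.3, Table 10.3"
    value: float      # the numerical threshold itself
    short_form: str   # how findings reference it in running text

    def __post_init__(self):
        if self.source_type not in ACCEPTABLE_SOURCES:
            raise ValueError(f"unacceptable threshold source: {self.source_type}")
        if not self.document or not self.locator:
            raise ValueError("not citable without document + locator")

ieee519 = ThresholdAnchor("industry_standard", "IEEE Std 519-1992",
                          "§ 10.3, Table 10.3", 12.0,
                          "the IEEE 519 individual harmonic limit")
```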
Peer comparison discipline
Every flagged unit shows a peer comparison. The peer is a same-site, same-equipment-class unit running comparable load. Peer comparison answers the question: “is this unit the problem, or do all units look like this?”
Comparison framing:
- “X is N× higher than same-site peers” with the peer-set size and the peer median value
- A table row showing flagged unit + peer median + peer set size
- A figure where applicable (per-unit comparison plot, two waveforms aligned to the 60Hz fundamental)
Single-unit findings without peer context get downgraded to “anomaly noted, peer comparison unavailable” — never elevated to investigation candidate.
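The comparison framing reduces to a small computation. Below is a sketch assuming per-unit readings are already aggregated; the function name, the minimum peer-set size of 3, and the example numbers are illustrative assumptions:

```python
import statistics

def frame_peer_comparison(unit_id: str, unit_value: float,
                          peer_values: list[float], min_peers: int = 3) -> str:
    """Produce the peer-comparison framing for one flagged unit.

    A unit without a usable peer set is downgraded, never elevated
    to investigation candidate.
    """
    if len(peer_values) < min_peers:
        return f"{unit_id}: anomaly noted, peer comparison unavailable"
    peer_median = statistics.median(peer_values)
    ratio = unit_value / peer_median
    return (f"{unit_id} is {ratio:.1f}× higher than same-site peers "
            f"(peer median {peer_median:.1f}, n={len(peer_values)})")

# Illustrative numbers: a unit at 35% THD against same-site peers near 6%.
print(frame_peer_comparison("DC2-1A", 35.0, [5.8, 6.1, 6.3, 5.9]))
```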
Dual-category discipline
When findings split into system-level patterns and unit-level investigations, the remediation register differs:
- System-level patterns (“Category B” in the telecom reference exemplar): multiple units on a shared infrastructure element show the same elevated reading. The cause is shared (input filter, isolation transformer, common DC bus, feeder conductor). Remediation register: engineering investigation, not module replacement. The fault path is in the shared element.
- Unit-level investigations (“Category A” in the telecom reference exemplar): one unit (or a single redundant pair) sits well above same-site peers. The cause is module-side. Remediation register: physical validation candidate. The fault path is on the module.
A report that uses the same remediation register for both tiers signals “we don’t know what we’re looking at.” The dual-category split is the report’s diagnostic value-add: telling the customer where to point engineering effort.
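The split is at heart a grouping decision: do the flagged units share an infrastructure element, or does one unit stand alone above its peers? A rough sketch of that decision with hypothetical inputs follows; it ignores the redundant-pair nuance, and the real call requires engineering judgment about what counts as a shared element:

```python
from collections import defaultdict

def categorize(flagged: dict[str, str]) -> dict[str, list]:
    """Split flagged units into the two remediation registers.

    flagged maps unit_id -> the shared infrastructure element it sits on
    (input filter, isolation transformer, common DC bus, feeder conductor).
    Simplification: a redundant pair would appear here as two units; the
    real discipline treats it as a single unit-level subject.
    """
    by_element = defaultdict(list)
    for unit_id, element in flagged.items():
        by_element[element].append(unit_id)

    categories = {"system_level": [], "unit_level": []}
    for element, units in by_element.items():
        if len(units) > 1:
            # Category B: several units on one shared element. The fault path
            # is in the shared element; register is engineering investigation.
            categories["system_level"].append((element, sorted(units)))
        else:
            # Category A: a standalone exceedance. The fault path is on the
            # module; register is physical validation candidate.
            categories["unit_level"].extend(units)
    return categories
```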
No replacement recommendation
Service reports recommend investigation, not equipment replacement. The remediation register is:
- “Read THD-57 on the AC input bus to confirm the elevated reading is rectifier-side, not arriving from the upstream feeder.”
- “Compare HMB4’s feeder THD against PDSB21’s; if HMB4 is elevated and PDSB21 is not, the issue is in the HMB4 feeder path.”
- “Pull the manufacturer’s (Tyco / Emerson) service-spec sheet for the RR0153 / RR0154 models and confirm the 35% input-side THD reading exceeds the unit’s tolerance for partial-load operation.”
NOT:
- “Replace the DC2-1A and DC2-1B modules.”
- “Verdigris recommends procuring new rectifiers for the affected DC plant.”
The report identifies; the customer’s engineering team decides whether to replace. This is the medical-diagnostic discipline — a clinical lab report says “your inflammatory markers are elevated”; it does not say “take ibuprofen.” The customer’s doctor (or in this case, engineering team) decides treatment.
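Both register rules (no replacement language, no first-person marketing) are pattern-checkable before human review. A rough lint sketch follows; the phrase lists are seeded from the examples above and are illustrative, not exhaustive:

```python
import re

# Illustrative phrase lists seeded from this document's examples; a real
# lint would be maintained alongside the cell's rules.
REPLACEMENT_LANGUAGE = [r"\breplace\b", r"\bprocur(e|ing|ement)\b",
                        r"\bnew rectifiers?\b"]
MARKETING_LANGUAGE = [r"\bVerdigris (detected|recommends)\b",
                      r"\bwe proactively\b"]

def lint_register(text: str) -> list[str]:
    """Flag phrases that slip out of the operational-diagnostic register."""
    hits = []
    for pattern in REPLACEMENT_LANGUAGE + MARKETING_LANGUAGE:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append(f"register violation: {match.group(0)!r}")
    return hits

print(lint_register("Verdigris recommends procuring new rectifiers."))
```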
Spotlight discipline
Each spotlight has at least one figure. A spotlight without a figure is just a paragraph-level finding; it does not earn the spotlight register. Acceptable figure types:
- Pro Capture waveforms — time-domain current samples aligned to the 60Hz fundamental zero-crossing for shape comparison; subject vs clean peer
- IR thermal images — per-unit thermal signature under comparable load
- Time-series charts — daily THD (avg + max) across the analysis window, with peer median + threshold reference lines
- Per-unit comparison plots — bar charts or scatter plots showing the flagged unit against same-site peers
The figure is named (Figure 1, Figure 2) and has a caption that explains what the reader is looking at. The caption is descriptive, not editorial: “Daily THD-57 (avg + max) on Rectifier DC2-1A while loaded, 21 days. The +0.2 percentage-points-per-day trend trips Gate B, on top of the chronic exceedance trip from Gate A. Site peer median (~6%) and IEEE 519 individual harmonic limit (12%) shown for reference.”
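Here is a minimal sketch of the time-series figure type, using synthetic stand-in data (every number below is invented for illustration; a shipped figure traces to actual telemetry):

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in: 21 days of daily THD (avg + max) for one unit, with a
# chronic exceedance plus a slow upward drift.
rng = np.random.default_rng(0)
days = np.arange(21)
avg_thd = 13.0 + 0.2 * days + rng.normal(0, 0.3, 21)
max_thd = avg_thd + rng.uniform(1.0, 2.5, 21)

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(days, avg_thd, marker="o", label="Daily avg THD-57")
ax.plot(days, max_thd, marker="^", linestyle="--", label="Daily max THD-57")
ax.axhline(6.0, color="gray", linestyle=":", label="Site peer median (~6%)")
ax.axhline(12.0, color="red", linestyle="-.",
           label="IEEE 519 individual harmonic limit (12%)")
ax.set_xlabel("Day of analysis window")
ax.set_ylabel("THD (%)")
ax.set_title("Figure 1. Daily THD-57 on Rectifier DC2-1A while loaded, 21 days")
ax.legend(loc="lower right")
fig.tight_layout()
fig.savefig("figure-1-thd-timeseries.png", dpi=200)
```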
Spacing rhythm
All values come from tokens/spacing/print.json (the print stylesheet that whitepaper-cover and case-study CSS already consume). Same floors and ceilings as the whitepaper body — service reports inherit the multi-page editorial spacing pattern.
Voice
Voice is operational-diagnostic — not advisory in the consultative-sales sense, not editorial in the whitepaper sense. The narrator is a technical operator who has run the data and is reporting findings to another technical operator.
Mike — primary. Field credibility, technical translation. Reads as “here’s what we measured, here’s what it means, here’s how to validate.”
Jon — supporting. Bench-diagnostic credibility on spotlight sections. Engineering-precision register for the figures and validation steps.
The voice is never the founder voice (Mark) — service reports are not pitches. The voice is never the people-intelligence voice (Seren) — service reports are not sales-empathy register.
Template vs. produced
The template-vs-produced contract from sales-collateral cells holds here too. The cell’s examples/ directory ships templates with placeholders; produced reports fill placeholders against real customer data.
| Slot | Template stage | Produced stage |
|---|---|---|
| Customer name (header) | <span class="vd-template">[FIELD: customer name OR redaction, e.g. "Apex Telecom"]</span> | Apex Telecom |
| Portfolio scale (header callouts) | <span class="vd-template">[FIELD: total units, e.g. "1,742 rectifiers"]</span> | 1,742 rectifiers |
| Threshold anchor (methodology) | <span class="vd-template">[FIELD: cited standard + section, e.g. "IEEE Std 519-1992 § 10.3, Table 10.3"]</span> | IEEE Std 519-1992 § 10.3, Table 10.3 |
| Spotlight subject (spotlight section) | <span class="vd-template">[FIELD: subject equipment with site context, e.g. "Elgin DC2-1 redundant pair"]</span> | Elgin DC2-1 redundant pair |
The template stage is what an agent generates from the spec; the produced stage is what a human (or evidence-grounded agent) fills in. Never ship the produced stage without source evidence — every finding traces to a measurement; every threshold traces to a cited standard; every spotlight figure traces to actual telemetry.
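The never-ship-unfilled-placeholders rule is the easiest to automate: scan produced HTML for surviving template markup. A sketch, assuming the vd-template span convention shown in the table above:

```python
import re

# Markers that indicate template-stage content leaked into produced output.
TEMPLATE_MARKERS = [
    r'class="vd-template"',  # template span survived into produced output
    r"\[FIELD:",             # unfilled placeholder text
]

def find_unfilled_placeholders(html: str) -> list[str]:
    """Return the template markers still present in a produced report."""
    return [m for m in TEMPLATE_MARKERS if re.search(m, html)]

produced = '<h1><span class="vd-template">[FIELD: customer name]</span></h1>'
assert find_unfilled_placeholders(produced)  # this draft must not ship
```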
Inheritance from sales-collateral universals
Two rules inherit from the sales-collateral system (slides cell):
- `composition.persuade-slide-deck.logomark-consistency` — full lockup in the document header; consistent variant; consistent position. (Service reports use the full lockup, not the wordmark.)
- `composition.persuade-slide-deck.confidentiality-marking` — CUSTOMER-CONFIDENTIAL by default; tier color signals the tier without requiring the reader to parse the text.
These are not service-report-specific innovations; they’re cross-cell brand discipline. Documented here for completeness; canonical definitions live in the slides cell.
See also
- `index.md` — cell overview + decision tree
- `producing.md` — producer workflow
- `examples/` — shipped reference examples
- `rules/visual-rules.yml` — `composition.advise-service-report` block: machine-consumable rules for the evaluator pipeline