Experimental

First genre in the service-reports cell. It graduates with the cell after two SE-produced reports ship using these patterns.

Portfolio Diagnostic

A portfolio diagnostic is a survey-shaped report covering an existing customer’s full deployment. It surfaces multi-tier findings — whole-system patterns at the top, unit-level investigations underneath — anchored to citable thresholds, and spotlights 1-2 of the highest-signal findings with figures.

This is the load-bearing genre for the service-reports cell. It captures the shape that Solutions Engineering produces most often: a portfolio survey that lands as a leave-behind to unblock a customer-side decision (do we keep this equipment? is this a fleet pattern or a unit issue? where do we send a tech?).

When to use this genre

Use portfolio-diagnostic when:

Do NOT use portfolio-diagnostic when:

Required structure

A portfolio-diagnostic report has seven sections in this order. Six are marked (required) and non-negotiable; the upstream check is included per engagement.

  1. Header callouts (required) — single-row stat strip with 3-4 portfolio-scale numbers. The reader gets the scope in a single glance: how many units in the portfolio, how many flagged, how many unit-level investigations recommended. This is the elevator pitch for the report.
  2. What this report covers (required) — one paragraph framing scope, time window, methodology anchor. Names the citable threshold. Bucket structure (e.g., Category A vs Category B) is named here, with the remediation register implied for each.
  3. Methodology (required) — threshold anchor block (with citation), detection gates, data filters, peer-comparison logic. Reader’s question on first read: “do I trust how this was measured?” Methodology answers it.
  4. Upstream check (per engagement) — when the threshold anchor depends on a clean upstream (e.g., supply voltage clean before treating downstream-equipment harmonics as causal), include this section. Skip when the threshold anchor is independent of upstream conditions.
  5. Findings by category (required) — multi-tier grouping. Whole-system patterns get one block; unit-level investigations get another. Each finding has its own row in a table with peer comparison. The remediation register differs by tier: system patterns require engineering investigation; unit-level findings are physical validation candidates.
  6. Spotlights (1-2, required) — deep dives on the highest-signal findings. Each spotlight has at least one figure (Pro Capture waveform, IR thermal, time-series chart, or per-unit comparison plot). Suggested validation steps follow the figure. References specific equipment specs / academic papers / standards where relevant.
  7. Reference appendix (required) — methodology details, source documents, citable standards (full bibliographic entry), internal evidence atom IDs.
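The section ordering above lends itself to a mechanical pre-ship check. A minimal sketch (hypothetical helper, not part of the cell’s tooling) that validates a report outline against the required order, treating the upstream check as skippable:

```python
# Required portfolio-diagnostic section order. "Upstream check" is
# per-engagement, so it may be absent from a valid report.
REQUIRED_ORDER = [
    "Header callouts",
    "What this report covers",
    "Methodology",
    "Upstream check",  # optional (per engagement)
    "Findings by category",
    "Spotlights",
    "Reference appendix",
]
OPTIONAL = {"Upstream check"}


def validate_section_order(sections):
    """True iff `sections` matches REQUIRED_ORDER, with optional
    sections allowed to be absent (but never out of order)."""
    expected = [s for s in REQUIRED_ORDER if s in sections or s not in OPTIONAL]
    return list(sections) == expected
```

A report that drops the upstream check still validates; one that reorders or omits a required section does not.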

What is not in a portfolio-diagnostic:

Decision framework: where to land in the bounds

| Boundary | Floor (small) | Default | Ceiling (large) |
| --- | --- | --- | --- |
| Page count (8-25) | 8 pages for a small-scope diagnostic: single equipment class, single building, narrow time window. | 15 pages for a typical portfolio diagnostic: multi-site survey, single equipment class, 14-30 day window, 1 spotlight. | 25 pages for multi-equipment-class or multi-site reports with 2 spotlights and an extended methodology section. Above 25, the report becomes hard to navigate and signals overscope. |
| Spotlights (1-2) | 1 spotlight when one finding dominates the diagnostic interest (the highest-signal pattern or unit). | 1-2 spotlights for a typical report covering both a system-level pattern and a unit-level standout. | 2 spotlights maximum. Above 2, each one’s signal-to-noise drops; consider whether the reader can absorb the depth. |
| Header callouts (3-4) | 3 callouts when the portfolio’s scope is straightforward to summarize (units, flagged, recommended). | 3-4 callouts for a typical report: scope, what’s flagged, breakdown by category. | 4 callouts when an additional dimension materially clarifies (e.g., “X sites covered” for a portfolio where site count differs from unit count). Above 4, the callout strip stops being scannable. |
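The floors and ceilings in the decision framework reduce to a simple range check. A hypothetical sketch (bounds copied from the table; dimension names are illustrative, not a shipped schema):

```python
# Floor/ceiling bounds from the decision-framework table (inclusive).
BOUNDS = {
    "page_count": (8, 25),
    "spotlights": (1, 2),
    "header_callouts": (3, 4),
}


def check_bounds(report):
    """Return the dimensions of `report` (a dict of counts) that fall
    outside their floor/ceiling bounds."""
    violations = []
    for dim, (floor, ceiling) in BOUNDS.items():
        if not floor <= report[dim] <= ceiling:
            violations.append(dim)
    return violations
```

A typical report (15 pages, 1 spotlight, 3 callouts) passes clean; a 30-page, 3-spotlight draft gets flagged on both overscoped dimensions.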

Threshold anchor discipline

Every finding traces to a citable threshold. The threshold appears once, in the methodology section, with full citation. Findings reference the threshold by short form (“exceeds the IEEE 519 individual harmonic limit”).

Acceptable threshold sources:

Unacceptable threshold sources:

Peer comparison discipline

Every flagged unit shows a peer comparison. The peer is a same-site, same-equipment-class unit running comparable load. Peer comparison answers the question: “is this unit the problem, or do all units look like this?”

Comparison framing:

Single-unit findings without peer context get downgraded to “anomaly noted, peer comparison unavailable” — never elevated to investigation candidate.
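The peer-comparison and downgrade rules above can be sketched as a small classifier. This is an illustrative sketch, assuming a single numeric metric compared against a cited threshold and a same-site peer median; the register strings follow the discipline in this section, not a shipped implementation:

```python
from statistics import median


def classify_finding(unit_value, peer_values, threshold):
    """Classify a flagged unit against same-site, same-class peers.

    unit_value  -- measured value for the unit under investigation
    peer_values -- measurements from comparable-load peers (may be empty)
    threshold   -- the citable threshold from the methodology section
    """
    if not peer_values:
        # No peer context: downgraded, never an investigation candidate.
        return "anomaly noted, peer comparison unavailable"
    if unit_value <= threshold:
        return "within limits"
    if median(peer_values) > threshold:
        # Peers exceed too: all units look like this.
        return "system-level pattern"
    # Only this unit exceeds: it is the outlier.
    return "unit-level investigation"
```

Note how the empty-peer branch enforces the downgrade rule: without peer context, a finding can only be an anomaly note.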

Dual-category discipline

When findings split into system-level patterns and unit-level investigations, the remediation register differs:

A report that uses the same remediation register for both tiers signals “we don’t know what we’re looking at.” The dual-category split is the report’s diagnostic value-add: telling the customer where to point engineering effort.

No replacement recommendation

Service reports recommend investigation, not equipment replacement. The remediation register is:

NOT:

The report identifies; the customer’s engineering team decides whether to replace. This is the medical-diagnostic discipline — a clinical lab report says “your inflammatory markers are elevated”; it does not say “take ibuprofen.” The customer’s doctor (or in this case, engineering team) decides treatment.

Spotlight discipline

Each spotlight has at least one figure. Spotlight without a figure is just a paragraph-level finding; it does not earn the spotlight register. Acceptable figure types:

The figure is named (Figure 1, Figure 2) and has a caption that explains what the reader is looking at. The caption is descriptive, not editorial: “Daily THD-57 (avg + max) on Rectifier DC2-1A while loaded, 21 days. The +0.2 percentage-points-per-day trend trips Gate B, on top of the chronic exceedance trip from Gate A. Site peer median (~6%) and IEEE 519 individual harmonic limit (12%) shown for reference.”

Spacing rhythm

All values come from tokens/spacing/print.json (the print stylesheet that whitepaper-cover and case-study CSS already consume). Same floors and ceilings as the whitepaper body — service reports inherit the multi-page editorial spacing pattern.

Voice

Voice is operational-diagnostic — not advisory in the consultative-sales sense, not editorial in the whitepaper sense. The narrator is a technical operator who has run the data and is reporting findings to another technical operator.

Mike — primary. Field credibility, technical translation. Reads as “here’s what we measured, here’s what it means, here’s how to validate.”

Jon — supporting. Bench-diagnostic credibility on spotlight sections. Engineering-precision register for the figures and validation steps.

The voice is never the founder voice (Mark) — service reports are not pitches. The voice is never the people-intelligence voice (Seren) — service reports are not sales-empathy register.

Template vs. produced

The template-vs-produced contract from sales-collateral cells holds here too. The cell’s examples/ directory ships templates with placeholders; produced reports fill placeholders against real customer data.

| Slot | Template stage | Produced stage |
| --- | --- | --- |
| Customer name (header) | `<span class="vd-template">[FIELD: customer name OR redaction, e.g. "Apex Telecom"]</span>` | Apex Telecom |
| Portfolio scale (header callouts) | `<span class="vd-template">[FIELD: total units, e.g. "1,742 rectifiers"]</span>` | 1,742 rectifiers |
| Threshold anchor (methodology) | `<span class="vd-template">[FIELD: cited standard + section, e.g. "IEEE Std 519-1992 § 10.3, Table 10.3"]</span>` | IEEE Std 519-1992 § 10.3, Table 10.3 |
| Spotlight subject (spotlight section) | `<span class="vd-template">[FIELD: subject equipment with site context, e.g. "Elgin DC2-1 redundant pair"]</span>` | Elgin DC2-1 redundant pair |

The template stage is what an agent generates from the spec; the produced stage is what a human (or evidence-grounded agent) fills in. Never ship the produced stage without source evidence — every finding traces to a measurement; every threshold traces to a cited standard; every spotlight figure traces to actual telemetry.
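The "never ship with unfilled placeholders" rule is easy to gate mechanically. A hypothetical pre-ship check (the `[FIELD: ...]` pattern matches the template markup shown above; this is a sketch, not the cell’s actual tooling):

```python
import re

# Unfilled template placeholders take the form [FIELD: ...] inside a
# vd-template span. A produced report must contain none of them.
PLACEHOLDER = re.compile(r"\[FIELD:[^\]]*\]")


def unfilled_placeholders(document):
    """Return every unfilled [FIELD: ...] placeholder left in `document`."""
    return PLACEHOLDER.findall(document)
```

Run against a produced report, an empty result means every slot was filled; any match is a ship blocker.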

Inheritance from sales-collateral universals

Two rules inherit from the sales-collateral system (slides cell):

These are not service-report-specific innovations; they’re cross-cell brand discipline. Documented here for completeness; canonical definitions live in the slides cell.

See also