AI-Enabled PMM Operating System
In Production.
Running the AI-enabled PMM operating system in production produced above-benchmark content across three brands and three channels: a 28.17% open rate on a course launch email and 2.5× subscriber acquisition on the LevelUp Labs guest post format.
Problem
Three of the four brand repos are featured here; the Atrium Academy parent brand runs on the same operating system shown in Case 1. Three brands, three voice systems, and three growth priorities, all pulling in different directions.
- Learn Prompting needed a launch send for a new course at scale: over 140,000 subscribers on the list, the highest-stakes content moment of the quarter, and a 20 to 25 percent open-rate benchmark to clear.
- LevelUp Labs needed top-of-funnel growth for the instructor's newsletter, where she had already built the established audience. The pull was on guest posts, with community members writing under their own bylines to reach audiences she did not.
- Uniswap Hook Incubator needed an applicant-pulling X thread, translated from a recorded public event into multiple formats.
The voice files, benchmarks, and QA system existed. The question was whether the system could actually produce content that landed on each channel for each brand without rewriting from scratch.
Approach
Each piece ran through the same workflow.
- Identify the brand and intent. Atrium's content routing sends the request to the right repo. A guest post for LevelUp Labs routes differently than a course launch email for Learn Prompting.
- Load the brand foundation. Voice core, channel-specific voice file, ideal customer profile (ICP), real examples. Every skill loads these before drafting.
- Draft against the brand's writing standard. Each brand's writing standard names what the brand sounds like. Skills draft into the brand's preferred register.
- Run the AI writing detector. Vendored into each brand repo, synced from the canonical Atrium copy. The workflow fails if the detector is missing.
- Ship. Output saves to `output/YYYY-MM-DD-{slug}.md` with front-matter logging channel, ICP, target KPI, and detector status.
Same workflow for three very different outputs. Each piece carries the voice rules of its brand and its channel, enforced at the gate.
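The five steps above can be sketched as a small pipeline. This is an illustrative Python sketch, not the actual repo code: the routing table, file names, and function names (`route`, `run_detector`, `ship`) are hypothetical stand-ins for the operating system's real components.

```python
from datetime import date

# Hypothetical routing table: (brand, intent) -> brand repo.
ROUTES = {
    ("learn-prompting", "course-launch-email"): "learn-prompting",
    ("levelup-labs", "guest-post"): "levelup-labs",
    ("uhi", "x-thread"): "uhi",
}

def route(brand: str, intent: str) -> str:
    """Step 1: send the request to the right brand repo."""
    repo = ROUTES.get((brand, intent))
    if repo is None:
        raise ValueError(f"no route for {brand}/{intent}")
    return repo

def run_detector(draft: str, repo_files: list[str]) -> str:
    """Step 4: the workflow fails hard if the vendored detector is missing."""
    if "ai-writing-detector" not in repo_files:
        raise RuntimeError("detector missing: refusing to ship")
    return "pass"  # placeholder for the real drift check

def ship(slug: str, channel: str, icp: str, kpi: str, detector_status: str):
    """Step 5: output path plus front-matter, per the convention above."""
    path = f"output/{date.today().isoformat()}-{slug}.md"
    front_matter = (
        "---\n"
        f"channel: {channel}\n"
        f"icp: {icp}\n"
        f"target_kpi: {kpi}\n"
        f"detector: {detector_status}\n"
        "---\n"
    )
    return path, front_matter
```

The point of the sketch is the shape, not the specifics: routing is a lookup, the detector is a hard gate rather than a warning, and every shipped asset carries machine-readable metadata about what it was trying to do.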
The three pieces, in production
Three different brands, three channels, one workflow.
Ammaar turned Logan into a song. Now it is your turn.
| Metric | This email | Learn Prompting benchmark | Industry norm |
|---|---|---|---|
| Open rate | 28.17% | 25–30% (engaged segments) | 20–25% (broad sends) |
- Engaging subject line. Specific moment hook (image-to-song), no rhetorical question, scroll-stopping at 63 characters. The rule lives in `voice-email.md`, added after this exact launch.
- Authority stamp inline. Mentions "Google AI Studio" in the preview text. Logan Kilpatrick and Ammaar Reshi (builders at Google DeepMind) are named with full role context for readers outside AI Twitter.
Build your AI Chief of Staff in 45 minutes.
| Metric | This guest post | Typical comparable post |
|---|---|---|
| New subscribers earned | 39 | 16 |
| Views | 7,350 | 8,360 |
| Open rate | 30% | 30% |
| Likes | 20 | 21 |
- Voice capture. The guest's voice from a live session was captured and translated to long-form writing, so the personality survived the translation. Typical ghostwritten technical posts read like the brand rather than the author, or are simply bland. This one differentiates through personality.
- Brand voice alignment. LevelUp Labs' voice was upheld while ghostwriting in the guest's register. Both brand voice and personal voice held simultaneously, which is a difficult constraint to enforce manually.
Zero to Hero: Uniswap API Quickstart thread.
- Multi-format thread production at speed. Eight tweets generated the day after the event, each anchored to its own clip from the public recording. The OS coordinated this as a single thread, at a production speed that would normally require a content team.
- Technical depth without losing voice. HTTP request structure, Permit2 signing, server-side routing across v2, v3, v4, and UniswapX. All in UHI's lowercase Twitter cadence, builder vocabulary, no parenthetical definitions for the technical audience.
What the operating system actually adds
Patterns observable across the pieces.
- Voice never bleeds across brands.
- Channel voice is enforced per brand.
- The system ghostwrites across voices within a brand.
- Above-benchmark performance where it is a defined objective.
- Writing standards ship as enforced rules with QA gates.
- Improvements travel to future assets.
- Speed compounds.
What I would do differently
The detector was built to catch drift from the writing standard, and the pieces above performed well on the metrics that matter (above-benchmark open rates on a course launch, 2.5× acquisition lift on the guest post). But the detector itself does not see performance data. It only sees whether the writing follows the rules, not whether the writing performs.
If I built the system again, I would add per-brand performance benchmarking from day one. Track each brand's content performance per channel (open rates, click-through, subscriber growth, engagement) and feed the data back into the voice files as updates. The detector would then have quantitative voice fit signals, not just writing standard adherence. A piece that follows the rules but historically underperforms for this brand's audience would flag at the gate.
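That feedback loop can be sketched as a gate that sits beside the detector. This is a minimal illustration of the proposed design, not existing code: the class name, the 0.8 floor ratio, and the sample open rates are all hypothetical.

```python
from statistics import mean

class PerformanceGate:
    """Flag pieces whose predicted KPI falls well below the brand's
    per-channel historical benchmark, even if they pass the style rules."""

    def __init__(self, floor_ratio: float = 0.8, min_samples: int = 3):
        self.history: dict[tuple[str, str], list[float]] = {}
        self.floor_ratio = floor_ratio   # hypothetical: flag below 80% of mean
        self.min_samples = min_samples   # don't judge on thin data

    def record(self, brand: str, channel: str, kpi_value: float) -> None:
        """Feed shipped-piece performance back into the benchmark."""
        self.history.setdefault((brand, channel), []).append(kpi_value)

    def check(self, brand: str, channel: str, predicted_kpi: float) -> str:
        """Return 'flag' when the piece underperforms the brand's own history."""
        past = self.history.get((brand, channel), [])
        if len(past) < self.min_samples:
            return "pass"  # not enough data to judge yet
        benchmark = mean(past)
        return "flag" if predicted_kpi < self.floor_ratio * benchmark else "pass"

gate = PerformanceGate()
for open_rate in (0.27, 0.29, 0.28):   # hypothetical past course-launch sends
    gate.record("learn-prompting", "email", open_rate)

gate.check("learn-prompting", "email", 0.19)  # well below history -> "flag"
gate.check("learn-prompting", "email", 0.27)  # in line with history -> "pass"
```

The design choice worth noting: the gate compares a piece to the brand's own history per channel, not to an industry norm, which is what makes a rule-compliant but historically weak format visible before it ships.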
Without that loop, the detector enforces consistency but does not enforce winning. The system catches drift, but it does not yet catch underperformance preemptively.