In production · Atrium Academy · 2026

AI-Enabled PMM Operating System
In Production.


Running the AI-enabled PMM operating system in production produced above-benchmark content across three brands and three channels: a 28.17% open rate on the Learn Prompting launch email and 2.5× subscriber acquisition on the LevelUp Labs guest post format.

Channels

Email, Substack, X

Brands

Learn Prompting, LevelUp Labs, Uniswap Hook Incubator

System

Same workflow, three voices

Result

Above-benchmark on every piece

Problem

Three of the four brand repos are featured here; the Atrium Academy parent brand runs on the same operating system shown in Case 1. That left three brands, three voice systems, and three growth priorities pulling in different directions.

  • Learn Prompting needed a launch send for a new course at scale. Over 140,000 subscribers on the list, the highest-stakes content moment of the quarter, with a 20 to 25 percent open rate to clear.
  • LevelUp Labs needed top-of-funnel growth for the instructor's newsletter, where she had already built an established audience. The pull was guest posts: community members writing under their own bylines to reach audiences she did not.
  • Uniswap Hook Incubator needed an applicant-pulling thread that translated a recorded public event into a multi-format X post.

The voice files, benchmarks, and QA system existed. The question was whether the system could actually produce content that landed on each channel for each brand without rewriting from scratch.

Approach

Each piece ran through the same workflow.

  1. Identify the brand and intent. Atrium's content routing sends the request to the right repo. A guest post for LevelUp Labs routes differently than a course launch email for Learn Prompting.
  2. Load the brand foundation. Voice core, channel-specific voice file, ideal customer profile (ICP), real examples. Every skill loads these before drafting.
  3. Draft against the brand's writing standard. Each brand's writing standard names what the brand sounds like. Skills draft into the brand's preferred register.
  4. Run the AI writing detector. Vendored into each brand repo, synced from the canonical Atrium copy. The workflow fails if the detector is missing.
  5. Ship. Output saves to output/YYYY-MM-DD-{slug}.md with front-matter logging channel, ICP, target KPI, and detector status.

Same workflow for three very different outputs. Each piece carries the voice rules of its brand and its channel, enforced at the gate.

28.17%
Open rate on the Learn Prompting course launch email. Industry benchmark: 20–25% for broad sends.
2.5×
Subscriber acquisition lift on the LevelUp Labs guest post versus a comparable standard post.
7 days → 24 hrs
Production cycle time on the Uniswap Hook Incubator multi-format thread.

The three pieces, in production

Three different brands, three channels, one workflow.

Example 01 · Learn Prompting · Email

Ammaar turned Logan into a song. Now it is your turn.

Learn Prompting course launch email titled Ammaar turned Logan into a song. Now it is your turn.
Channel

Email

Topic

Build Real Apps in Google AI Studio course launch

Key metric

Open rate met benchmark while maintaining brand guidelines

Metric      This email   Learn Prompting benchmark    Industry norm
Open rate   28.17%       25–30% (engaged segments)    20–25% (broad sends)
Operating system in the work
  1. Engaging subject line. Specific moment hook (image-to-song), no rhetorical question, scroll-stopping at 63 characters. The rule lives in voice-email.md, added after this exact launch.
  2. Authority stamp inline. Mentions "Google AI Studio" in the preview text. Logan Kilpatrick and Ammaar Reshi (builders at Google DeepMind) are named with full role context for readers outside AI Twitter.
newsletter.learnprompting.org — read the live email
Example 02 · LevelUp Labs · Substack

Build your AI Chief of Staff in 45 minutes.

LevelUp Labs guest post on Substack titled Build your AI Chief of Staff in 45 minutes
Channel

Substack newsletter

Topic

Tutorial on setting up an AI Chief of Staff, captured from a live session with a guest

Key metric

2.5× subscriber acquisition vs. a comparable standard post. Reusable pattern.

Metric                   This guest post   Comparable standard post
New subscribers earned   39                16
Views                    7,350             8,360
Open rate                30%               30%
Likes                    20                21
Operating system in the work
  1. Voice capture. The guest's voice from a live session was captured and translated into long-form writing, so the personality survived the translation. Typical ghostwritten technical posts either read like the brand or end up bland; this one differentiates through personality.
  2. Brand voice alignment. LevelUp Labs' voice was upheld while ghostwriting in the guest's register. Both brand voice and personal voice held simultaneously, which is a difficult constraint to enforce manually.
thenuancedperspective.substack.com — read the live guest post
Example 03 · Uniswap Hook Incubator · X

Zero to Hero: Uniswap API Quickstart thread.

Channel

X (Twitter)

Topic

Tight tweet thread with embedded clips from a public event

Key metric

Production cycle time cut from 7 days to under 24 hours.

Operating system in the work
  1. Multi-format thread production at speed. Eight tweets generated the day after the event, each anchored to its own clip from the public recording. The OS coordinated them as a single thread, at a production speed that would normally require a content team.
  2. Technical depth without losing voice. HTTP request structure, Permit2 signing, server-side routing across v2, v3, v4, and UniswapX. All in UHI's lowercase Twitter cadence, builder vocabulary, no parenthetical definitions for the technical audience.
x.com/AtriumAcademy — read the live thread

What the operating system actually adds

Patterns observable across the pieces.

  • Voice never bleeds across brands.
  • Channel voice is enforced per brand.
  • The system ghostwrites across voices within a brand.
  • Above-benchmark performance where it is a defined objective.
  • Writing standards ship as enforced rules with QA gates.
  • Improvements travel to future assets.
  • Speed compounds.
Voice is a range, not a personality. The system makes the range navigable.

What I would do differently

The detector was built to catch drift from the writing standard, and the pieces above performed well on the metrics that matter (above-benchmark open rates on a course launch, 2.5× acquisition lift on the guest post). But the detector itself does not see performance data. It only sees whether the writing follows the rules, not whether the writing performs.

If I built the system again, I would add per-brand performance benchmarking from day one. Track each brand's content performance per channel (open rates, click-through, subscriber growth, engagement) and feed the data back into the voice files as updates. The detector would then have quantitative voice fit signals, not just writing standard adherence. A piece that follows the rules but historically underperforms for this brand's audience would flag at the gate.

Without that loop, the detector enforces consistency but does not enforce winning. The system catches drift, but does not yet catch underperformance preemptively.

Want to talk?

If you are scaling content across multiple brands or channels, this is the kind of system I build.

LinkedIn is fastest. A line about your current channels and where the voice is drifting gets a faster reply than a generic intro.