The system · Atrium Academy · 2026

AI-Enabled PMM Operating System


Cut content production cycle time by 50% across three brands as a solo marketer by building an AI-enabled PMM operating system spanning brand voice, ideal customer profile (ICP) research, and quality review.

Role: Solo PMM, end to end

Brands: Atrium, LevelUp Labs, Learn Prompting

Built: Brand voice, skill library, QA pipeline

Scale: 4 repos · 3 brands · 1 canonical detector

Problem

Atrium Academy teaches DeFi developers through a partner program called Uniswap Hook Incubator. LevelUp Labs distills a problem-first approach for AI practitioners. Learn Prompting trains engineers and enthusiasts on AI usage. Each has its own ICP, channel mix, and objectives. Content quality varied by author. Brand voices drifted across pieces. ICP work lived in whoever had the most context that week. The team needed to ship more, and alignment was the bottleneck.

Approach

I built three layers in sequence, all version-controlled.

  1. Brand voice documentation. One foundation per brand: voice principles, tone rules, vocabulary preferences, and a writing standard naming what each brand specifically sounds like. Drift is easier to catch when preferred patterns are explicit.
  2. Claude skill library. Repeatable PMM tasks codified as reusable skills. ICP research from a subscriber sample. Positioning drafts from a product brief. Channel-specific copy adaptation that takes the same message and reshapes it for X, LinkedIn, and email.
  3. Quality review pipeline. Every draft runs through three checks before publication: brand voice alignment, anti-pattern detection, and overall quality review.
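The three-check gate can be sketched as a small script. The check names come from the pipeline above; the function wrapper, the per-check commands, and the exit-code contract are assumptions for illustration, not the production implementation:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the quality review pipeline: run a draft through
# three checks in order and refuse to pass it if any check fails.
# Check names come from the pipeline described above; each is assumed to
# be a command that exits nonzero on failure.
set -euo pipefail

qa_gate() {
  local draft="$1"
  local check
  for check in voice_alignment anti_pattern_detection quality_review; do
    if ! "$check" "$draft"; then
      echo "FAIL: $check on $draft" >&2
      return 1
    fi
  done
  echo "PASS: $draft cleared all three checks"
}
```

The ordering matters: the cheap, mechanical checks run before the broader quality review, and a single failure stops the draft from shipping.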

The system improves the longer it runs. Each new piece of content adds to the example library. Each market signal sharpens the ICP. Each rejected draft refines the detector.

Architecture

One canonical repo, three brand repos that inherit from it. Edits to the detector happen in the canonical repo and propagate to the others, so no drift between copies.

```
atrium/
├── atrium-gtm/          ← Atrium Academy (canonical AI detector)
├── learnprompting-gtm/  ← Learn Prompting
├── leveluplabs-gtm/     ← LevelUp Labs
└── uniswap-gtm/         ← Uniswap Hook Incubator
```

Uniswap operates as a partner program under Atrium Academy, but maintains its own repo because it has distinct voice, ICP, and workflows from the parent brand.

Each brand repo follows the same internal structure:

```
{brand}-gtm/
├── CLAUDE.md                     ← Brand orchestration
├── ai-writing-detector/          ← Vendored QA skill (synced from canonical)
├── {brand}-brand-foundation/     ← Reference library
│   └── references/
│       ├── voice-core.md         ← Universal voice rules
│       ├── voice-{channel}.md    ← Channel-specific voice
│       ├── icp.md                ← Ideal customer profiles
│       ├── writing-standard.md   ← This brand's writing style
│       ├── brand-kit.md
│       ├── real-examples.md
│       └── analytics-benchmarks.md
└── [content workflow skills]     ← Each invokes ai-writing-detector as the final QA gate
```

How the system works

Every piece of publishable content runs through the same pipeline.

  1. Load. The brand foundation loads: voice rules, ICP, writing standard.
  2. Identify ICP. A targeted ICP is required before drafting; generic content is disallowed.
  3. Draft. The skill drafts against the brand's writing standard.
  4. Detect. The AI writing detector runs as the final QA gate.
  5. Ship. Output saves with frontmatter logging channel, ICP, KPI, and status.
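Concretely, the frontmatter on a shipped draft might look like this sketch. The four field names come from the pipeline description; the values are hypothetical, not production output:

```yaml
---
channel: linkedin              # illustrative values throughout
icp: defi-hook-developer
kpi: incubator-applications
status: shipped
---
```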
Architecture context

Each brand repo is self-contained and carries its own copy of the detector. A teammate can clone one repo, install the vendored detector once via symlink, and the workflows in that repo work without depending on anything outside it. Atrium is canonical: edits to the detector happen in atrium-gtm/ai-writing-detector/ and propagate to the other three repos through scripts/sync-detector.sh. The gate is baked into each skill, not added on after, so the workflow fails if the detector is missing. Brand voice drift is caught at the gate, not after publication.
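A sketch of how that propagation could work: the script path scripts/sync-detector.sh and the repo names come from the text above, but the function wrapper and the copy-based sync are assumptions, not the production script.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of scripts/sync-detector.sh.
# Replaces each brand repo's vendored detector with a fresh copy of the
# canonical one in atrium-gtm/. Repo names are from the architecture
# diagram; everything else is assumed for illustration.
set -euo pipefail

sync_detector() {
  local canonical="atrium-gtm/ai-writing-detector"
  local brand
  for brand in learnprompting-gtm leveluplabs-gtm uniswap-gtm; do
    mkdir -p "$brand"
    rm -rf "$brand/ai-writing-detector"   # drop any stale copy wholesale
    cp -R "$canonical" "$brand/ai-writing-detector"
    echo "synced detector -> $brand"
  done
}
```

Run from the atrium/ root after editing the canonical detector. Because every repo carries a full copy rather than a pointer, a teammate who clones a single brand repo still gets a working gate.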

See it in code

A simplified, public version of one brand repo lives at
github.com/esmefong/demo-gtm

Uniswap is the demo brand because it is recognizable outside the Atrium ecosystem. The demo shows the structure. Production is where the lift compounds.

Where the production system goes beyond the demo
  • Cross-repo sync across four production repos, with the canonical detector propagating to each brand.
  • A reference library five to ten times the size, with deeper ICP research and continuous-interview ingest.
  • Embedding-based stylistic similarity scoring against each brand's authentic-voice corpus.
  • Weekly detector recalibration as model behavior shifts.
  • Performance loops that retrain prompts based on market signal.

Content is the test case. The same pattern applies to launch playbooks, sales enablement, win-loss synthesis, and lifecycle messaging. The operating system, not the deliverable, is the point.

The bottleneck was alignment, not capacity. The system makes alignment cheap, so capacity goes to the work.

What I would do differently

I built the brand voice documents first and the quality review pipeline second. That was the wrong order. The pipeline caught patterns the docs had missed, which meant rewriting the voice docs once they were already in use. If I built the system again, I would build the quality pipeline first against a small sample of existing content, let the patterns it surfaces inform the voice docs, then ship the docs only after they reflect what the pipeline already enforces.

Want to talk?

If you are building a system like this and want a partner to think it through, message me.

LinkedIn is fastest. A line about what you are building gets a faster reply than a generic intro.