AI-Enabled PMM Operating System
Cut content production cycle time by 50% across three brands as a solo marketer by building an AI-enabled PMM operating system spanning brand voice, ideal customer profile (ICP), and quality review.
Problem
Atrium Academy teaches DeFi developers through a partner program called Uniswap Hook Incubator. LevelUp Labs distills a problem-first approach for AI practitioners. Learn Prompting trains engineers and enthusiasts on AI usage. Each has its own ICP, channel mix, and objectives. Content quality varied by author, brand voices drifted across pieces, and ICP knowledge lived with whoever had the most context that week. The team needed to ship more, and alignment was the bottleneck.
Approach
I built three layers in sequence, all version-controlled.
- Brand voice documentation. One foundation per brand: voice principles, tone rules, vocabulary preferences, and a writing standard naming what each brand specifically sounds like. Drift is easier to catch when preferred patterns are explicit.
- Claude skill library. Repeatable PMM tasks codified as reusable skills. ICP research from a subscriber sample. Positioning drafts from a product brief. Channel-specific copy adaptation that takes the same message and reshapes it for X, LinkedIn, and email.
- Quality review pipeline. Every draft runs through three checks before publication: brand voice alignment, anti-pattern detection, and overall quality review.
The system improves the longer it runs. Each new piece of content adds to the example library. Each market signal sharpens the ICP. Each rejected draft refines the detector.
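The three-check gate in the quality review pipeline can be sketched as follows. This is a minimal illustration, not the production skill code; the check functions, heuristics, and thresholds are all hypothetical stand-ins.

```python
from dataclasses import dataclass

# Illustrative sketch of the three-check review gate: brand voice alignment,
# anti-pattern detection, and overall quality. All heuristics are toy stand-ins.

@dataclass
class ReviewResult:
    check: str
    passed: bool
    notes: str = ""

def check_voice_alignment(draft: str, voice_terms: list[str]) -> ReviewResult:
    # Toy heuristic: require at least one preferred-vocabulary term.
    hit = any(term.lower() in draft.lower() for term in voice_terms)
    return ReviewResult("voice", hit, "" if hit else "no preferred vocabulary found")

def check_anti_patterns(draft: str, banned: list[str]) -> ReviewResult:
    # Flag any known anti-pattern phrase present in the draft.
    found = [p for p in banned if p.lower() in draft.lower()]
    return ReviewResult("anti-pattern", not found, ", ".join(found))

def check_quality(draft: str, min_words: int = 50) -> ReviewResult:
    # Placeholder for the overall quality review.
    ok = len(draft.split()) >= min_words
    return ReviewResult("quality", ok, "" if ok else "draft too short")

def review(draft: str, voice_terms: list[str], banned: list[str]):
    # Every draft runs through all three checks; all must pass to publish.
    results = [
        check_voice_alignment(draft, voice_terms),
        check_anti_patterns(draft, banned),
        check_quality(draft),
    ]
    return all(r.passed for r in results), results
```

In the real system each check would call out to the brand foundation docs and the detector; the shape that matters is that publication requires every check to pass.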
Architecture
One canonical repo, three brand repos that inherit from it. Edits to the detector happen in the canonical repo and propagate to the others, so no drift between copies.
Uniswap operates as a partner program under Atrium Academy, but maintains its own repo because it has distinct voice, ICP, and workflows from the parent brand.
Each brand repo follows the same internal structure:
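A plausible sketch of that shared layout, assembled only from components named elsewhere in this case study (directory names are illustrative):

```
brand-repo/
├── foundation/            # voice principles, tone rules, writing standard
├── icp/                   # ICP research and market-signal notes
├── skills/                # reusable PMM skills: research, positioning, adaptation
├── ai-writing-detector/   # vendored copy of the canonical detector
├── scripts/               # e.g. sync-detector.sh (canonical repo only)
└── content/               # drafts and shipped pieces with frontmatter
```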
How the system works
Every piece of publishable content runs through the same pipeline.
1. Load: the brand foundation loads, including voice rules, ICP, and writing standard.
2. Identify ICP: a targeted ICP is required before drafting; generic content is disallowed.
3. Draft: the skill drafts against the brand's writing standard.
4. Detect: the AI writing detector runs as the final QA gate.
5. Ship: output saves with frontmatter logging channel, ICP, KPI, and status.
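The frontmatter from the ship step might look like this; the four logged fields come from the pipeline above, while the values shown are illustrative:

```yaml
---
brand: uniswap
channel: x              # e.g. x | linkedin | email
icp: defi-developer     # the targeted ICP from step 2
kpi: incubator-signups
status: shipped         # e.g. draft | in-review | shipped
---
```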
Each brand repo is self-contained and carries its own copy of the detector. A teammate can clone one repo, install the vendored detector once via symlink, and use that repo's workflows without depending on anything outside it. Atrium is canonical: edits to the detector happen in atrium-gtm/ai-writing-detector/ and propagate to the other three repos through scripts/sync-detector.sh. The gate is baked into each skill rather than bolted on afterward, so the workflow fails if the detector is missing. Brand voice drift is caught at the gate, not after publication.
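The fail-closed behavior can be sketched in a few lines. The function name and error handling are illustrative, assuming each repo vendors the detector in an ai-writing-detector/ directory as described above.

```python
import os

# Sketch of the "fail closed" gate: a skill refuses to run when the vendored
# detector is absent, rather than silently skipping the QA step.

DETECTOR_DIR = "ai-writing-detector"

def require_detector(repo_root: str) -> str:
    """Return the detector path, or fail the workflow if it is missing."""
    path = os.path.join(repo_root, DETECTOR_DIR)
    if not os.path.isdir(path):
        # Failing outright is the point: no detector, no shipped content.
        raise RuntimeError(
            f"detector missing at {path}; install the vendored detector (symlink) first"
        )
    return path
```

Each skill would call this before drafting, so a clone without the one-time detector install cannot produce publishable output.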
See it in code
Uniswap is the demo brand because it is recognizable outside the Atrium ecosystem. The demo shows the structure; production is where the lift compounds:
- Cross-repo sync across four production repos, with the canonical detector propagating to each brand.
- A reference library five to ten times the size, with deeper ICP research and continuous-interview ingest.
- Embedding-based stylistic similarity scoring against each brand's authentic-voice corpus.
- Weekly detector recalibration as model behavior shifts.
- Performance loops that retrain prompts based on market signal.
Content is the test case. The same pattern applies to launch playbooks, sales enablement, win-loss synthesis, and lifecycle messaging. The operating system, not the deliverable, is the point.
What I would do differently
I built the brand voice documents first and the quality review pipeline second. That was the wrong order. The pipeline caught patterns the docs had missed, which meant rewriting the voice docs once they were already in use. If I built the system again, I would build the quality pipeline first against a small sample of existing content, let the patterns it surfaces inform the voice docs, then ship the docs only after they reflect what the pipeline already enforces.