BRAND

AI Creative Systems — Current Work

PROJECT

Turning content into dozens of performing ads — on autopilot

The system I’m building right now for brands on retainer: a set of AI agents that read the brand book, the ad-account data, and raw interview footage — and propose the next round of creative based on what performs.

SUMMARY

ROLE

AI System Design
Claude Code / Agents / Prompt Engineering
Performance Creative
Own the full loop: input → output → feedback

ONE INTERVIEW. TWENTY ADS. ONE SYSTEM.

Most brands are sitting on a mountain of content — CEO interviews, product footage, athlete shoots, campaign takes — and they still can’t get more than a handful of ads out of it. The bottleneck isn’t ideas, it’s the translation layer between raw material and performing creative. That’s what I’m building.

FPO · AI Agent Screen Recording

I’m building a set of agents and tools that sit between a brand’s raw inputs and its paid-creative output. Inputs: the brand book, the ad-account’s recent reports, raw interview footage, past winning hooks. The agents transcribe, tag, and pull the strongest lines. They read what’s worked historically — what colors, what visual hooks, what messaging directions — and propose the next round. Then they spit out Figma-ready statics and DaVinci-ready XML edits with placeholder text, so the manual work is narrow and sharp.
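The "pull the strongest lines" step can be sketched in miniature. This is an illustrative stand-in, not the production agent: the real system uses an LLM to judge transcript lines against past winners, while this sketch uses simple word-overlap scoring, and all names and sample lines are hypothetical.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))

def score_line(line: str, winning_hooks: list[str]) -> float:
    """Crude proxy for 'strongest line': word overlap with past winning hooks.
    The real agents use an LLM judge; this only shows the shape of the step."""
    words = tokenize(line)
    if not words:
        return 0.0
    return max(len(words & tokenize(h)) / len(words | tokenize(h))
               for h in winning_hooks)

def propose_hooks(transcript: list[str], winning_hooks: list[str],
                  top_n: int = 3) -> list[str]:
    """Rank transcript lines by similarity to historical winners."""
    ranked = sorted(transcript,
                    key=lambda l: score_line(l, winning_hooks),
                    reverse=True)
    return ranked[:top_n]

# Hypothetical transcript lines and past winners.
transcript = [
    "I started taking creatine after my injury.",
    "Recovery is where the gains actually happen.",
    "We shot this in Boulder last spring.",
]
winners = ["Recovery is the real workout.", "Gains happen when you rest."]
print(propose_hooks(transcript, winners, top_n=1))
```

The point is the interface, not the scoring function: transcript in, ranked hook candidates out, with the winners pool supplying the signal for what "strong" means for this brand.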

The loop: upload, review, ship, measure, feed back. I started this pattern inside Momentous — turning one-hour athlete interviews into twenty paid-ad cutdowns, blog-post drafts, and retention-email hooks. Now I’m turning it into something that can run across multiple brands simultaneously.
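The measure-and-feed-back step of that loop is just a pool update. A minimal sketch, assuming a CTR metric and a 2% threshold — both illustrative, not the production values:

```python
def feed_back(proposed: list[str], ctr: dict[str, float],
              winners: list[str], threshold: float = 0.02) -> list[str]:
    """Measure step of the loop: hooks whose shipped ads clear the CTR
    threshold join the winners pool that seeds the next creative round.
    Metric and threshold are hypothetical, for illustration only."""
    return winners + [h for h in proposed if ctr.get(h, 0.0) >= threshold]

# Hypothetical results from one shipped round.
round_ctr = {"Recovery is the real workout.": 0.031,
             "Shot in Boulder last spring.": 0.004}
pool = feed_back(list(round_ctr), round_ctr,
                 winners=["Gains happen when you rest."])
print(pool)  # only the hook that cleared the threshold joins the pool
```

Each round widens the winners pool, which is what makes the next round's proposals brand-specific rather than generic.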

Built with Claude Code, agent SDKs, and a set of tools I’ve written for video transcription, brand-book ingestion, and ad-report parsing. I can also build you a custom video editor that takes 200 GB of raw footage and proposes a narrative cut.
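The "DaVinci-ready XML with placeholder text" output can be sketched as follows. The element names here are illustrative, not a faithful DaVinci Resolve or FCPXML schema — the sketch only shows the handoff: agent-selected clips and proposed overlay copy serialized into a timeline an editor can open and finish.

```python
import xml.etree.ElementTree as ET

def build_cutdown_xml(clips: list[dict]) -> str:
    """Emit a timeline fragment with text slots for each clip.
    Schema is hypothetical; a real export would target the NLE's format."""
    timeline = ET.Element("timeline", name="cutdown-v1")
    for clip in clips:
        c = ET.SubElement(timeline, "clip", src=clip["src"],
                          start=str(clip["start"]), end=str(clip["end"]))
        # Placeholder copy the editor swaps for final supers.
        ET.SubElement(c, "text").text = clip.get("overlay", "PLACEHOLDER")
    return ET.tostring(timeline, encoding="unicode")

# Hypothetical clip list produced by the selection step.
clips = [
    {"src": "interview_A.mov", "start": 12.4, "end": 19.1,
     "overlay": "Recovery is the real workout."},
    {"src": "broll_03.mov", "start": 0.0, "end": 4.0},
]
print(build_cutdown_xml(clips))
```

Keeping the output as structured XML rather than a rendered video is what makes the manual work "narrow and sharp": the editor reviews and polishes a cut instead of building one.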

MY POSITION