THE PROBLEM
Senior creatives were doing junior assembly work — at senior rates.
Every pitch started the same way: 6–8 hours of a senior creative or strategist searching the case study archive, rewriting the same paragraphs, hunting team bios. The brief sat untouched until that work was done.
When the studio scaled to 7 pitches per month, the senior bench started declining shortlist invites — the cost of saying yes was a missed weekend.
THE APPROACH
A pitch engine that drafts from the brief and the studio's own work — never generic.
Phase 1: structured the 140 past decks into a case-study index with tags (sector, deliverable, format, lead creative, year, awards).
Phase 2: tone-of-voice profile from 60 hand-picked deck pages so the model writes in the studio's voice (not generic agency-speak). We checked drafts informally against human-written pages with the studio's creative director — not a controlled blind test, but enough confidence to ship.
Phase 3: a 14-slide template that pulls sector-relevant cases automatically, drafts the rationale, fills in team bios, and surfaces the 3 questions the brief leaves open for the strategist to actually think about.
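The tag-based index and the "pulls sector-relevant cases" step can be sketched in a few lines. This is a hypothetical illustration, not the studio's actual code: the field names and the tag-overlap scoring (sector match weighted above deliverable match) are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical shape of one index entry; the real index also carries
# format, lead creative, and year tags per the studio's description.
@dataclass
class CaseStudy:
    title: str
    sector: str
    deliverable: str
    year: int
    awards: list[str] = field(default_factory=list)

def relevant_cases(index: list[CaseStudy], brief_sector: str,
                   brief_deliverable: str, k: int = 3) -> list[CaseStudy]:
    """Score each case by tag overlap with the brief; return the top k."""
    def score(c: CaseStudy) -> int:
        s = 0
        if c.sector == brief_sector:
            s += 2  # assumed weighting: sector match counts double
        if c.deliverable == brief_deliverable:
            s += 1
        return s
    ranked = sorted(index, key=score, reverse=True)
    return [c for c in ranked if score(c) > 0][:k]
```

Even this crude overlap score is enough to keep the draft grounded in the studio's own work rather than generic examples; a real version would add recency and award weighting.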
On the first two pitches the tone was off: too clean, not enough opinion. After a calibration pass on six older decks the drafts felt like the studio's own. The real win is that the strategist spends Monday morning thinking about the brief, not assembling a deck.
WHAT WAS MESSY
Where the first version of the workflow failed.
The case-study index was the bottleneck. Half the decks weren't tagged, and a quarter were near-duplicates. We spent week 1 cleaning the index before the retrieval was useful.
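Flagging the near-duplicate decks can be done with something as simple as Jaccard similarity over word n-grams. A minimal sketch, assuming the decks have been exported to plain text; the threshold is an illustrative guess:

```python
def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Lowercased word n-grams; crude, but enough to flag near-duplicates."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def near_duplicates(docs: dict[str, str],
                    threshold: float = 0.6) -> list[tuple[str, str]]:
    """Return pairs above the threshold for manual review, not auto-deletion."""
    names = list(docs)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if jaccard(docs[a], docs[b]) >= threshold]
```

Surfacing candidate pairs for a human to merge is the point: the week-1 cleanup was judgment work, and a script like this only shortens the search.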
Tone drift on the first three drafts. Generic 'we believe' phrasing crept in. Fixed with stricter few-shot examples and a per-creative voice profile.
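The "stricter few-shot examples" fix amounts to prompt assembly: exemplar pages drawn from a per-creative voice profile plus an explicit ban list. A hypothetical sketch; the banned phrases and prompt layout are assumptions, not the studio's actual prompt:

```python
# Illustrative ban list; 'we believe' is the phrasing named in the text.
BANNED_OPENERS = ("we believe", "in today's landscape")

def build_prompt(voice_examples: list[str], brief: str) -> str:
    """Assemble a few-shot prompt from one creative's exemplar pages."""
    shots = "\n\n".join(f"EXAMPLE:\n{ex}" for ex in voice_examples)
    rules = "Avoid openers such as: " + ", ".join(repr(b) for b in BANNED_OPENERS)
    return f"{shots}\n\n{rules}\n\nBRIEF:\n{brief}\n\nDRAFT:"
```

Keeping the examples per-creative rather than pooled is what stops the model regressing to averaged-out agency-speak.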
Pitches with weird formats (interactive prototypes, multi-deliverable proposals) still need senior hands. The engine is great for first-round decks; round-2 chemistry meetings remain manual.
THE OUTCOME
More pitches, better win rate, no late nights.
- First-draft assembly time: 6–8 h → 45 min (−87%)
- Pitches per month, same team: 5 → 7 (+40%)
- Pitch → shortlist (directional signal): 23% → 27.5% (small sample)
- Senior weekend hours per month: 12 h → 0 h (−100%)
HOW WE MEASURED IT
Baseline, sample and method — so the numbers above are checkable.
Baseline: the prior 6 months of pitches before the pilot (avg. 5 pitches/month, senior assembly hours from Harvest).
Pilot: 4 weeks, 7 pitches drafted through the engine.
Time saved measured from senior timesheet entries before/after and from the engine's render logs.
Shortlist rate: 23% historical baseline (rolling 12 months) versus 27.5% during the 4-week pilot. We treat this as a directional signal, not a causal win-rate claim — the sample is too small over a 4-week window.
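A back-of-envelope check, using only the numbers reported above, shows why the uplift is labelled directional:

```python
from math import sqrt

# With 7 pilot pitches, the standard error on a ~27.5% shortlist rate
# is roughly 17 percentage points.
p_pilot, n_pilot = 0.275, 7
se = sqrt(p_pilot * (1 - p_pilot) / n_pilot)
ci_low, ci_high = p_pilot - 1.96 * se, p_pilot + 1.96 * se
# The 95% interval comfortably contains the 23% baseline, so the
# observed uplift cannot be read as a causal win-rate change.
```

A normal-approximation interval is itself shaky at n = 7, which only strengthens the "directional signal" framing.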
WHAT WE DID NOT AUTOMATE
Where the human stayed in the loop on purpose.
Creative direction stays with the studio's CD. The engine drafts a deck; the CD decides which idea wins the pitch.
We did not train models on client data. Retrieval is over the studio's own past decks and case studies only.
No deck ships without a senior creative review. The system assembles; the human chooses what to send.
Chemistry meetings, round-2 presentations, and pitches with unusual formats remain fully manual.
WHAT'S NEXT
The studio is using the same retrieval layer for proposals and BD outreach.
The case-study index turned out to be the moat. Once it existed and was tagged properly, every BD artifact — proposals, intro decks, capability one-pagers, even cold emails — got faster and more relevant.
The studio is now testing the same workflow for unsolicited proposals to dream clients.