THE PROBLEM
Briefs sat in someone's inbox for a week before scope was agreed.
The agency was winning pitches but losing margin during scoping. Briefs landed in Account Director inboxes and triggered a sequence of meetings: kick-off, discipline alignment, effort estimation, approval. Each took half a day.
By the time scope was signed, the timeline had compressed by a week, the team was already behind, and Account Directors were the bottleneck.
THE APPROACH
A workflow that turns any brief format into a structured scoping doc, automatically.
Phase 1: built a requirements extractor that reads any brief format (Word, PDF, email thread, slide deck) and produces a structured object: deliverables, deadlines, budget envelope, ambiguities, stakeholders.
Phase 2: an effort estimation model calibrated on 3 years of the agency's own timesheets — so the suggested scope reflects how this specific team actually works, not benchmarks.
Phase 3: routing logic that ships the draft scope to the right discipline leads in Teams, with the specific 2–3 questions each lead needs to confirm. No more 'all-hands' scoping meetings.
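As a rough sketch of what Phase 1 hands to Phase 3, the structured object and the per-lead questions might look like this. All field names, types, and routing rules here are illustrative assumptions, not the agency's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Deliverable:
    name: str
    deadline: Optional[str] = None  # ISO date, if the brief states one

@dataclass
class ScopeDraft:
    """Output of the requirements extractor (hypothetical shape)."""
    source_format: str                       # "word" | "pdf" | "email" | "deck"
    deliverables: List[Deliverable] = field(default_factory=list)
    budget_envelope: Optional[str] = None
    ambiguities: List[str] = field(default_factory=list)
    stakeholders: List[str] = field(default_factory=list)

def questions_for(discipline: str, draft: ScopeDraft) -> List[str]:
    """Pick the 2-3 confirmation questions to route to one discipline lead.
    The rules below are placeholders for the real routing logic."""
    items = ", ".join(d.name for d in draft.deliverables)
    questions = [f"Does the {discipline} estimate cover: {items}?"]
    if draft.budget_envelope is None:
        questions.append("The brief has no budget envelope - what is the ceiling?")
    questions.extend(f"Ambiguity to confirm: {a}" for a in draft.ambiguities)
    return questions[:3]  # ship at most three questions per lead
```

The point of the structured object is that every downstream step (estimation, routing, the later brief database) consumes the same shape regardless of whether the brief arrived as a PDF or an email thread.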
In the first month we shipped two scopes that missed obvious risks because the extractor over-trusted vague brief language. We added a confidence threshold; ambiguous lines now get flagged for a human read.
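The confidence gate can be sketched as a simple split over the extractor's per-field scores. The 0.75 cutoff and the field structure are illustrative assumptions, not the production values:

```python
CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; tune against missed-risk rate

def gate(extracted_fields):
    """Split extractor output into auto-accepted fields and human-review flags.

    extracted_fields: dict of field name -> (value, confidence in [0, 1]).
    """
    accepted, flagged = {}, []
    for name, (value, confidence) in extracted_fields.items():
        if confidence >= CONFIDENCE_THRESHOLD:
            accepted[name] = value
        else:
            flagged.append((name, value, confidence))  # routed to a human read
    return accepted, flagged
```

Flagged fields go to a person before the draft scope is routed; nothing below the threshold is auto-scoped on assumption.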
WHAT WAS MESSY
Where the first version of the workflow failed.
Email-thread briefs were the hardest input. Reply chains lost the original ask. We added a 'thread reconstruction' step but it still needs human verification on long threads.
Effort estimates ran 20% optimistic in the first two weeks: the timesheet training set under-counted hidden senior-review hours. We recalibrated with explicit reviewer logs.
Discipline leads pushed back on the auto-routing initially. We added an 'Account Director approves before send' gate, which kept Account Directors in control without bringing back the all-hands meetings.
THE OUTCOME
Faster intake, less margin leakage, and every brief now lives in a queryable database.
- Time from brief received to signed scope: 3 days → 90 min (−95%)
- Internal scoping meetings per brief: 3–5 → 0–1 (−80%)
- Effort estimate variance vs. actual: ±35% → ±12% (−66% variance)
- Margin recovery on scoped work (matched cohort): +4.8 pts (n=31)
HOW WE MEASURED IT
Baseline, sample, and method, so the numbers above are checkable.
Baseline: 24 briefs scoped through the old process in the prior 6 months (Account Director timesheets + meeting calendar audit).
Pilot: 31 briefs scoped through the new workflow over 8 weeks.
Time-to-scope measured from brief-received timestamp (inbox) to signed-scope timestamp (HubSpot).
Margin recovery: +4.8 pts is the average margin delta on the 31 pilot projects versus a matched cohort from the baseline period, calculated on agency-side data after delivery closed out.
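The time-to-scope metric is just the delta between the two timestamps, aggregated as a median across briefs. A minimal sketch, with made-up example records rather than pilot data:

```python
from datetime import datetime
from statistics import median

def hours_to_scope(received_iso, signed_iso):
    """Hours between brief-received (inbox) and signed-scope (HubSpot)."""
    received = datetime.fromisoformat(received_iso)
    signed = datetime.fromisoformat(signed_iso)
    return (signed - received).total_seconds() / 3600

def median_time_to_scope(records):
    """records: list of (received_iso, signed_iso) timestamp pairs."""
    return median(hours_to_scope(r, s) for r, s in records)
```

Using the median rather than the mean keeps one stuck brief from distorting the headline number.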
WHAT WE DID NOT AUTOMATE
Where the human stayed in the loop on purpose.
Account Directors still approve every scope before it goes to a discipline lead. The workflow drafts and routes; humans sign off.
We did not train models on client data. Estimation is calibrated on the agency's own timesheets only.
Pricing and contract terms stayed manual. The engine produces scope docs, not commercial proposals.
Ambiguous briefs get escalated to a human read, not auto-scoped on assumption.
WHAT'S NEXT
The same intake layer is becoming the agency's project-memory.
An unexpected outcome: because every brief is now structured, the agency can finally answer questions like 'how many automotive briefs did we scope last year, and what was the effort delta vs. estimate?' That data was previously locked in PDFs.
The agency is using the brief database to drive better pitch decisions: declining mismatched RFPs faster, and pricing pitches with sharper margin assumptions.