← Back to field notes

PRODUCTION AGENCY · EU

Approval rounds dropped from 8 to 3 — and every version is now searchable.

A 60-person production studio was running multi-market campaigns through Frame.io and email — 8 rounds of versioning per asset on average, with feedback scattered across timelines, threads and verbal review calls. We built an approval workflow that consolidates feedback, flags routine issues (legal disclaimers, brand-safety, format spec), routes pre-approved fixes to the post team, and lets seniors focus only on the calls that actually need taste. Legal and brand final approval stayed human-owned throughout.

Client name withheld under NDA. Anonymized approval-log and flagged-issue samples available on the audit call. Routine-flag-rate is measured at delivery across the pilot's 47 assets versus a 12-asset prior baseline — directional on a small sample.

−63%
APPROVAL ROUNDS
8 → 3
ROUNDS / ASSET
+2.4×
ASSETS / WEEK
6 wk
PILOT
PRODUCTION STUDIO · EU · 2026 · DETAILS REDACTED

THE PROBLEM

Feedback was scattered, repeated, and missing the parts that mattered.

Multi-market campaigns went through 6–10 reviewers across legal, brand, market leads, and clients. Feedback lived in Frame.io timeline notes, email threads, Slack DMs, and review-call recordings.

Routine notes (typo, logo placement, missing disclaimer) were rewritten by humans every round. Critical taste feedback often got lost in the noise.

THE APPROACH

An approval workflow that handles the boring 80% and surfaces the 20% that needs a human.

Phase 1: consolidated feedback channels into a single Notion log per asset — Frame.io comments, Slack messages, email replies all stream in tagged by reviewer + timestamp.

Phase 2: a compliance pass that flags brand-guideline violations, missing disclaimers, format/spec mismatches before assets reach senior reviewers. Flagging only — final legal and brand sign-off stayed human-owned.

Phase 3: a routing layer that decides which reviewer needs to see which version — taste calls go up, routine fixes route to the post team with a one-click approval gate.
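As an illustrative sketch only (the categories, names, and thresholds here are ours, not the studio's redacted taxonomy), the routing decision in Phase 3 reduces to a small rule function: human-owned categories always go up, pre-approved routine fixes go to post, and everything ambiguous defaults to a senior.

```python
from dataclasses import dataclass

# Hypothetical note categories; the studio's real taxonomy is redacted.
ROUTINE = {"typo", "logo_placement", "missing_disclaimer", "format_spec"}
HUMAN_OWNED = {"legal", "brand_signoff"}

@dataclass
class Note:
    category: str       # e.g. "typo", "legal", "creative"
    reviewer: str
    pre_approved: bool  # fix already covered by an approved pattern

def route(note: Note) -> str:
    """Decide who sees this note next (illustrative only)."""
    if note.category in HUMAN_OWNED:
        return "senior_review"   # legal/brand sign-off never auto-routes
    if note.category in ROUTINE and note.pre_approved:
        return "post_team"       # one-click approval gate
    return "senior_review"       # taste calls and anything ambiguous go up

print(route(Note("typo", "market_lead", True)))   # post_team
print(route(Note("legal", "counsel", True)))      # senior_review
```

The important design property, mirrored in the real workflow, is that the default path is always the human one: the system can only divert a note away from seniors when it positively matches a pre-approved routine pattern.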

We were worried the AI would over-flag and waste post-team time. In the first week it did exactly that, with too many false positives on disclaimer placement. After a tuning pass it stabilised, but we still have a human spot-check on every legal flag before it goes to post.

WHAT WAS MESSY

Where the first version of the workflow failed.

The brand-compliance vision pass over-flagged in week one, with too many false positives on logo placement. We tuned the prompt with examples from approved assets and dropped the false-positive rate by ~70%.

Some legacy assets weren't in the asset API. We added a manual upload path for those, which means about 10% of routine fixes still require a human to drop the file in.

Senior reviewers initially didn't trust the routing — they wanted to see every round. We added a 'recent decisions' digest so they could audit the routing without re-reviewing every asset.

THE OUTCOME

Faster throughput, fewer rounds, and a searchable history of every decision.

  • Approval rounds per asset: 8 avg → 3 avg (−63%)
  • Assets shipped per week: 12 → 29 (+2.4×)
  • Senior reviewer hours per campaign: −58% (n=47)
  • Routine issues slipping through to delivery: −92% (directional)

HOW WE MEASURED IT

Baseline, sample and method — so the numbers above are checkable.

Baseline: 12 multi-market assets shipped through the old process (Frame.io audit + scheduler logs across the 3 months prior).

Pilot: 47 multi-market assets shipped through the new workflow over 6 weeks.

Approval-rounds figure counts version uploads in Frame.io per final-delivered asset.

Routine-flag-rate is the share of disclaimer/format/brand issues caught before final delivery, measured against the same issues found in the baseline cohort after delivery. Small sample — treat as directional.
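Assuming a simple export of version-upload events (the event format below is our invention, not Frame.io's actual API shape), the approval-rounds figure is just a group-and-count over delivered assets:

```python
from collections import Counter

def rounds_per_asset(version_uploads, delivered_assets):
    """Average version uploads per final-delivered asset.

    version_uploads: iterable of (asset_id, version_number) events
    delivered_assets: set of asset_ids that actually shipped
    """
    counts = Counter(a for a, _ in version_uploads if a in delivered_assets)
    return sum(counts.values()) / len(delivered_assets)

# Toy data: asset a1 went through 3 rounds, a2 through 2.
uploads = [("a1", 1), ("a1", 2), ("a1", 3), ("a2", 1), ("a2", 2)]
print(rounds_per_asset(uploads, {"a1", "a2"}))  # 2.5
```

Counting only final-delivered assets matters: abandoned versions would otherwise inflate the baseline and flatter the pilot.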

WHAT WE DID NOT AUTOMATE

Where the human stayed in the loop on purpose.

Legal sign-off stayed human-owned end-to-end. The system flags potential legal issues; it never approves an asset.

Final brand sign-off stayed with the brand lead. The system surfaces guideline mismatches; the brand lead decides.

Creative direction and taste calls remained fully senior-driven. The engine routes the boring stuff; humans run the actual review.

No client-facing send happened without a senior on the asset.

WHAT'S NEXT

The approval log is now the agency's institutional memory.

New hires used to take 4–6 weeks to internalise 'how we review here'. With the structured log, they're shadowing decisions on day one and seeing the rationale, not just the outcome.

The studio is adding a similar layer for music and licensing clearance — same pattern, different rulebook.

YOUR APPROVAL WORKFLOW

Multi-market campaigns going through 8 rounds of versioning? Worth a look.

20-min audit. We map one recent asset's review path and tell you which 60–80% could safely route around your senior team.

Replies within one working day · EU, UK & US time zones