An SME loan took 9 days to decide. The support queue grew 12% per quarter.
Two adjacent pains. Underwriters were spending 60–70% of their time on data-gathering and rule-checking before they even got to judgement work. SME loan decisions averaged 9 business days; the bank was losing applications to faster fintech competitors.
Meanwhile the retail support team was answering the same 80 questions on repeat — fees, eligibility, documents needed for a refi. Ticket volume was outpacing headcount.
Two problems, one data foundation: the bank's own policies and historical decisions.
We mapped both workflows in week one. The credit-scoring side needed an interpretable model — black-box wasn't an option for the regulator — and a clean training set from 8 years of decisioned applications. The chatbot side needed grounding in 340+ policy and product PDFs that were scattered across SharePoint folders.
Both shared the same need: a clean, versioned representation of the bank's policy and decisioning rules, with citations on every output. We built that foundation first.
Interpretable scoring + a chatbot that cites the policy line it's reading from.
The credit model was deliberately conservative: gradient-boosted trees with SHAP value explanations on every prediction. Underwriters saw the model's score plus the top-5 features driving it, ranked by impact, with the policy clause each feature mapped to. Override was always one click away. Every decision wrote a structured audit log.
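The explanation-plus-audit flow above can be sketched in a few lines. This is a minimal illustration, not the bank's implementation: the feature names, policy-clause mapping, and log shape are all invented for the example, and it assumes the SHAP values have already been computed upstream by the model.

```python
import json
from datetime import datetime, timezone

# Illustrative feature -> policy clause mapping (hypothetical clause IDs).
POLICY_CLAUSES = {
    "debt_service_ratio": "Credit Policy 4.2.1",
    "years_trading": "Credit Policy 3.1.4",
    "sector_risk_band": "Credit Policy 5.3.2",
    "arrears_last_24m": "Credit Policy 4.4.1",
    "collateral_coverage": "Credit Policy 6.1.3",
    "owner_credit_events": "Credit Policy 4.5.2",
}

def explain_decision(application_id, score, shap_values, top_n=5):
    """Rank features by absolute SHAP impact, attach the policy clause
    each one maps to, and return a structured, audit-ready record."""
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    top_features = [
        {
            "feature": name,
            "impact": round(value, 4),
            "policy_clause": POLICY_CLAUSES.get(name, "unmapped"),
        }
        for name, value in ranked[:top_n]
    ]
    return {
        "application_id": application_id,
        "score": score,
        "top_features": top_features,  # what the underwriter sees, ranked
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = explain_decision(
    "SME-2024-0481",  # hypothetical ID
    0.71,
    {
        "debt_service_ratio": -0.34,
        "years_trading": 0.21,
        "sector_risk_band": -0.18,
        "arrears_last_24m": -0.12,
        "collateral_coverage": 0.09,
        "owner_credit_events": -0.03,
    },
)
print(json.dumps(record, indent=2))
```

The key design point is that the explanation and the audit log are the same object: what the underwriter sees is exactly what gets written, so "show me a wrong decision" is a lookup, not a reconstruction.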
The chatbot ran a small RAG pipeline: query → retrieve top-5 policy passages → answer with citations. If the retrieval confidence was low, it routed to a human agent instead of guessing. Customers got either a sourced answer or a fast handoff — never a hallucinated rule.
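The answer-or-handoff logic is the interesting part of that pipeline, and it fits in a short sketch. The retriever below is a toy word-overlap scorer standing in for the real embedding-based retrieval, and the corpus, passage IDs, and threshold are all made up for illustration:

```python
# Toy corpus; the real system indexed 340+ policy and product PDFs.
PASSAGES = [
    ("fees-2.1", "Early repayment of a fixed-rate loan incurs a fee of 1% of the outstanding balance."),
    ("refi-3.4", "A refinance application requires proof of income, a property valuation, and the current loan statement."),
    ("elig-1.2", "Applicants must have held an account for at least six months to be eligible for an overdraft."),
]

CONFIDENCE_THRESHOLD = 0.2  # illustrative; in practice tuned during the pilot

def retrieve(query, top_n=5):
    """Score passages by word overlap with the query (stand-in for a real retriever)."""
    q = set(query.lower().split())
    scored = []
    for pid, text in PASSAGES:
        overlap = len(q & set(text.lower().split())) / max(len(q), 1)
        scored.append((overlap, pid, text))
    scored.sort(reverse=True)
    return scored[:top_n]

def answer(query):
    """Return a cited answer, or route to a human when retrieval confidence is low."""
    results = retrieve(query)
    if results[0][0] < CONFIDENCE_THRESHOLD:
        # Low confidence: hand off instead of guessing a rule.
        return {"route": "human_agent", "citations": []}
    return {
        "route": "answered",
        "answer": results[0][2],
        "citations": [pid for score, pid, _ in results if score >= CONFIDENCE_THRESHOLD],
    }
```

For example, a refinance-documents question resolves to the refinance passage with its citation attached, while a query matching nothing in the corpus comes back routed to `human_agent`. Either way the customer never gets an uncited rule.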
"The first thing the regulator asked was 'show me a wrong decision and explain why.' We could, in 30 seconds. That's the only reason this got greenlit."
6 weeks across two branches, 480 SME applications, 12,000 chatbot conversations.
The credit-scoring pilot ran on 480 SME applications across two branches. Decision time dropped from 9 days to 3.6 days; underwriter override rate stabilized at 8% (well within tolerance). Approval/decline accuracy held flat against the historical baseline — the model didn't get bolder, just faster.
The chatbot deflected 35% of support tickets in pilot, with a 4.6/5 user satisfaction score and zero policy-violation flags from the compliance team. Crucially, deflection meant agents handled fewer simple questions and could spend more time on the complex ones.
Numbers that survived the pilot.
- SME loan decision time: 9 days → 3.6 days (−60%)
- Support tickets deflected: 35%
- Underwriter override rate: 8%
- Audit-ready decisions: 100%
- Chatbot satisfaction (1–5): 4.6