Central Union Bank — CUB.ai

Designing an AI banking assistant that could actually handle everyday banking. Resolves routine queries and guides customers through common tasks — reducing call-center load and making self-service feel closer to a human conversation.

6 min read · 2024
Role: Senior Product Designer · UX Design + Design System
Modules owned: Account Management · CUB.ai Chatbot · Design System
Team: 3 designers · ~25 engineers · risk + compliance partners
Domain: Banking · Retail · Regulated (RBI)
Users: Mass-market customers · skewing older demographic
Stack: Figma, Tokens Studio, conversational design patterns, internal AI/intent layer

TL;DR

  • Designed CUB.ai — an AI assistant inside the bank's app — that resolves the most common customer service requests without routing customers to the call center.
  • Built on a quick-action chip model: Banking Summary, Last 5 Transactions, Block Card, Transfer Money, Pay Bills, Find ATM, Download Statement, Report Fraud — each backed by a specific intent + structured response.
  • Lifted NPS from 4.7 to 9.2 in a single release cycle and shifted 30%+ of customer interactions away from branch + call center.

The cold-start problem

The first decision wasn't conversational design. It was admitting that conversational interfaces have a cold-start problem — and the older audience CUB serves makes that problem worse.

In testing, when we showed users a free-form chat input with a blinking cursor and a "What can I help you with?" prompt, ~70% froze. They asked which buttons to press. They typed two words and stopped. They said "I don't know what to ask." Conversation as a medium is fine. Conversation as a prerequisite is the trap.

Layered on top: an immovable regulatory shell — RBI compliance on every flow, consent screens, two-factor confirmations, disclaimers. The product had to feel like a conversation while obeying the form gauntlet underneath.

The 8 quick-action chips

The fix: open the conversation by handing the user a menu. Not as a fallback or a tutorial — as the primary entry point. Eight chips, covering ~85% of customer-service request volume:

  • Banking Summary — accounts at a glance
  • Last 5 Transactions — the question support gets most
  • Block Card — urgent path, always surfaced
  • Transfer Money — head of the long tail
  • Pay Bills — recurring, biller-aware
  • Find ATM — location-aware, low-bandwidth fallback
  • Download Statement — paper-trail need, common with older users
  • Report Fraud — single tap, escalates immediately to a human

Why eight, not fifteen? Cognitive load. The eighth chip is the inflection — anything more and users start scrolling, which kills the "I see what's possible" moment that solves cold-start. Anything in the long tail (loan inquiries, NRI services, dispute escalation) falls through to a free-text input after the user has entered the conversation surface confidently.
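The chip model can be sketched as a small registry: each chip is a named entry point bound to exactly one intent and one structured response card. This is an illustrative sketch — the identifiers and schema are assumptions, not CUB.ai's actual implementation:

```python
# Hypothetical chip registry: each quick-action chip binds one label to one
# intent and one inline response card. All names here are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Chip:
    label: str   # text shown on the chip
    intent: str  # the single intent it resolves to
    card: str    # structured response card rendered inline in the thread

CHIPS = [
    Chip("Banking Summary",     "account.summary",     "summary_card"),
    Chip("Last 5 Transactions", "transactions.recent", "txn_list_card"),
    Chip("Block Card",          "card.block",          "confirm_card"),
    Chip("Transfer Money",      "transfer.initiate",   "transfer_form"),
    Chip("Pay Bills",           "bills.pay",           "biller_card"),
    Chip("Find ATM",            "atm.locate",          "map_card"),
    Chip("Download Statement",  "statement.download",  "statement_card"),
    Chip("Report Fraud",        "fraud.report",        "escalation_card"),
]

# Exactly eight: the whole menu fits on one screen without scrolling.
assert len(CHIPS) == 8
```

The one-chip-one-intent constraint is what keeps the menu honest: a chip never fans out into a sub-menu, so the first screen is a complete map of what the assistant can do.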

Sample conversations

The art is in what each chip resolves to. Free-form prose responses fail with older users — too much to read, hard to scan, easy to lose track of. Every chip routes to a structured response card rendered inline in the chat thread. Two worked examples:

Tap: Last 5 Transactions
Hello! I'm your AI Banking Assistant. How can I help you today?
Last 5 Transactions
Here are your last 5 transactions:
Recent activity · Acct ••• 4521
Amazon.com − ₹156.99
Starbucks − ₹800.45
Salary deposit + ₹74,500.00
Netflix − ₹15.99
Shell Gas Station − ₹45.20
Free-text in the customer's own language
I want to pay my son's college fees
I can help with that. This looks like a transfer to a new beneficiary. Should I open the transfer form, or would you like to save the college first?
Open the form
Transfer · Step 1 of 3
From account: •••• 4521 (Savings)
To: Add beneficiary →
Amount: Enter amount →

The intent classifier — trained on consented, anonymised CS transcripts — maps colloquial phrasings ("fees ka payment", "send money to my son") to the same task as the chip. Customers describe what they want; CUB.ai routes.
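The routing behaviour described above can be sketched as a thin layer over the classifier: free text maps to one of the eight supported intents, and anything low-confidence or out of scope falls through to a human rather than a guess. The function names and confidence threshold below are assumptions for illustration; the real classifier is abstracted behind `classify`:

```python
# Illustrative routing layer: classify free text, accept only high-confidence
# matches to the eight supported intents, otherwise hand off to a human.
# Names and the 0.8 threshold are assumptions, not the production values.

SUPPORTED = {
    "account.summary", "transactions.recent", "card.block",
    "transfer.initiate", "bills.pay", "atm.locate",
    "statement.download", "fraud.report",
}

def route(text: str, classify) -> str:
    intent, confidence = classify(text)
    if intent in SUPPORTED and confidence >= 0.8:
        return intent
    return "handoff.human"  # the long tail goes to a person, not a guess

# A toy classifier standing in for the real model:
def toy_classify(text):
    if "fees" in text or "send money" in text:
        return "transfer.initiate", 0.93
    return "unknown", 0.2

assert route("I want to pay my son's college fees", toy_classify) == "transfer.initiate"
assert route("what about NRI services?", toy_classify) == "handoff.human"
```

The key design choice is that the fallback is a handoff, not a best-guess reply: a misrouted banking task costs far more trust than an honest "let me connect you to someone."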

Refusal & safety

The most important design decisions in CUB.ai aren't what it does — they're what it refuses to do. In retail banking, an AI that confidently nudges customers toward a financial product is a regulatory and ethical disaster. The refusal matrix was reviewed and signed off by the bank's risk + compliance team before a single chip went live.

Will
  • Surface account balances + recent activity
  • Block a card immediately on request
  • Fetch the last N transactions or a date range
  • Initiate a transfer (customer confirms every step)
  • Locate the nearest ATM
  • Generate / email account statements
  • Escalate fraud reports to a human within seconds
  • Translate a customer's intent ("son's fees") to the right form
Won't
  • Recommend investment products or mutual funds
  • Offer financial advice ("should I invest in...?")
  • Predict or comment on market movements
  • Initiate a binding action without explicit customer confirmation
  • Discuss other customers' accounts (even with details given)
  • Pretend to be a human when asked directly
  • Make promises about loan / credit eligibility
  • Operate outside the 8 supported tasks without falling through to a human

The "Won't" column is the design. The "Will" column is just the product. Most AI products in regulated workflows fail because they've designed only the Will side and let the Won't side leak.
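One way to keep the Won't side from leaking is to enforce the matrix as a hard allow-list rather than a prompt instruction: anything outside the eight supported intents is refused before a reply is generated, and state-changing actions require explicit confirmation. A minimal sketch, with all identifiers assumed for illustration:

```python
# Minimal guardrail sketch: the refusal matrix as code, not as a prompt.
# WILL is a hard allow-list; everything else is refused up front.
# All intent names are illustrative.

WILL = {
    "account.summary", "transactions.recent", "card.block",
    "transfer.initiate", "bills.pay", "atm.locate",
    "statement.download", "fraud.report",
}

# Binding actions never execute without an explicit customer confirmation.
NEEDS_CONFIRMATION = {"card.block", "transfer.initiate", "bills.pay"}

def guard(intent: str, confirmed: bool = False) -> str:
    if intent not in WILL:
        return "refuse"            # e.g. investment advice, loan eligibility
    if intent in NEEDS_CONFIRMATION and not confirmed:
        return "ask_confirmation"  # never bind without explicit consent
    return "proceed"

assert guard("investment.advice") == "refuse"
assert guard("transfer.initiate") == "ask_confirmation"
assert guard("transfer.initiate", confirmed=True) == "proceed"
```

Encoding the Won't column as a check that runs before generation means a clever phrasing can't talk the assistant past its own boundaries.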

Trade-offs

The chip-first approach is slower for power users who memorised the legacy app's tab structure (~5% of the base). We absorbed that cost — the change was net-positive within two release cycles even for them.

Killing gamified savings goals lost us a small contingent of younger customers who liked streaks and animated mascots. The older majority's NPS lift more than paid for it — but the segmentation question is worth re-opening for v2.

Constraining to 8 quick actions left ~15% of CS volume falling through to call center. Accepted for v1 — the head of the curve was the leverage point. The long tail is v2.


Outcome

4.7 → 9.2
NPS lift in one release cycle
30%+
Shift from branch / call center to self-service
8
Quick-action chips covering ~85% of CS volume
1
Unified design system across chat + account screens

CUB.ai is now the primary self-service surface for routine banking — the bank's call-center capacity is freed up for the genuinely complex requests that need a human.

What I'd change

I'd have invested in onboarding for the legacy power user. Our launch focused on making the new experience welcoming to first-time and casual users. We didn't realise until week two that the most vocal complaints were from heavy users who couldn't find the tab structure they'd internalised. A 30-second "what changed" tour at launch would have saved a lot of support load.

And: I'd push harder on error UX in the chatbot earlier. Users are far less forgiving of an AI mistake than a form validation error. A misclassified intent feels personal in a way a dropdown error doesn't. Investing in those edges from day one would have prevented a small adoption stutter at launch.