Shell — Pacific Platform

How Shell Finance got a clear view of every deal. A unified cockpit gave finance leaders one place to track deals, resources, and budget utilization — shifting effort from manual reconciliation to real performance management.

6 min read · 2025
Role: Senior Product Designer · Secondment via PwC
Client: Shell · Finance & Deal Management workstream
Team: 3 designers · ~20 engineers · Shell finance + tech stakeholders
Domain: Energy · Enterprise B2B · Finance Operations
Users: Finance leaders, deal-makers, regional analysts
Stack: Figma, Tokens Studio, Anthropic SDK (Insight.AI), internal Shell systems

TL;DR

  • Replaced four parallel deal trackers (spreadsheets, slide packs, legacy tools) with a single workflow-aligned cockpit named the Pacific Platform.
  • Consolidated 500+ active deals into one source of truth across deal management, financial dashboard, resources/time, and budget utilization.
  • Cut review-prep time materially and surfaced deal risk earlier — finance now acts, not just reports.

The reframe

Shell's finance function oversees a global portfolio of deals across businesses and regions. Deal data, time bookings, and budget information lived in four separate trackers — multiple spreadsheets, two legacy tools, and slide packs assembled manually each month for steering committees.

Answering basic portfolio questions — which deals are at risk, where are budgets off, where is capacity constrained — required hours of reconciliation. By the time risk surfaced in a steering committee, the window to intervene had usually closed.

The trap when projects like this kick off is the framing: "build a deal-tracking dashboard." Dashboards report. We needed something that acts. So the first design move was to refuse the dashboard framing entirely and reframe the project as a cockpit:

A dashboard reports the past. A cockpit lets the pilot fly. Finance shouldn't have to assemble the picture — the picture should assemble itself, with the controls already in reach.

That single sentence shaped every subsequent decision: which views, which navigation, what to show by default, what to keep one click away. Pacific Platform is the cockpit framing, productised.

Information architecture

The platform is composed of four core views, each answering a question finance leaders ask weekly. Same navigation, same vocabulary, same data spine — different lens.

Deal Management
Every active deal as a structured record. Status, value, owners, dates, phase. The portfolio at a glance, drillable to a single deal.
Financial Dashboard
Portfolio-level KPIs with one-click drill into individual deals. The "is the spend paying off?" view.
Resources & Time
Who's on which deal, booked time, capacity hot spots. The "do we have the people?" view.
Budget Utilization
Planned vs. actual spend with clear over/under-spend signals. Where Insight.AI lives.

Across all four views, navigation is exception-first. The most important filter chips — at risk, over budget, capacity constrained — are surfaced as primary navigation, not buried in an advanced-filter drawer. Finance leaders go straight from the full portfolio (500+ deals) to the few that need attention this week, in two taps.

[ Deal Page — Project Indian Ocean: status, contributors, budget, highlights tabs ]
Deal page — 30+ formerly scattered fields, one structured record, last-updated stamp.

Insight.AI deep dive

The Budget Utilization view surfaces an AI feature called Insight.AI. Most enterprise AI features fail because they answer questions nobody asked. Insight.AI works because it answers the specific question finance teams ask every week: "is this deal off track, and how much warning do I have?"

Three concrete things Insight.AI surfaces, with worked examples:

  • Burn-rate alerts. "Burn rate has increased by 18% this month. At this pace, budget exhaustion is projected 3 weeks before deal closure." Calculated from historical burn against planned trajectory. Flags before the steering committee can.
  • Cost-by-category anomalies. "FM Staff costs are 15% above plan; project category 'Item Name 1' has consumed 70% of budget in 40% of the timeline." Calls out which line items are dragging the deal off course.
  • Spend-trend forward views. "At current pace, FY25-26 will exceed budget by ~$10K — variance pattern matches Q3 last year." Projects from current data, with the historical comparison cited.
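The burn-rate alert above reduces to simple runway arithmetic. A minimal sketch of that projection — the helper name, figures, and 30-day-month approximation are illustrative, not the production logic:

```python
from datetime import date, timedelta

def project_exhaustion(budget_total: float, spent: float,
                       monthly_burn: float, today: date) -> date:
    """Project when the remaining budget runs out at the current burn rate."""
    remaining = budget_total - spent
    months_left = remaining / monthly_burn           # fractional months of runway
    return today + timedelta(days=months_left * 30)  # ~30-day months, rough signal

# Hypothetical deal: $500K budget, $380K spent, burning $40K/month.
today = date(2025, 3, 1)
exhaustion = project_exhaustion(500_000, 380_000, 40_000, today)  # 3 months out
closure = date(2025, 6, 20)
weeks_of_warning = (closure - exhaustion).days // 7  # how early the flag fires
```

The point of the feature is the last line: the gap between projected exhaustion and deal closure is the warning window the steering committee would otherwise discover too late.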

Each Insight.AI output has a "why this" affordance one tap away — showing the underlying numbers, the comparison window, and the calculation. Finance teams trust the AI output because they can verify it, not because they take it on faith.

Stack: Anthropic SDK with structured tool-use. The model has access to deal data through a constrained schema — it can call get_burn_rate(deal_id, window) and similar functions, but cannot freely narrate. Output templates were drafted with finance leaders, not engineers, so the phrasings sound like finance, not chatbot.
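A constrained tool in the Anthropic Messages API format might look roughly like this. Only `get_burn_rate(deal_id, window)` is named in the text; the schema fields, window values, and dispatcher are assumptions for illustration:

```python
# Illustrative tool definition; schema details beyond the name are assumptions.
GET_BURN_RATE_TOOL = {
    "name": "get_burn_rate",
    "description": "Return monthly spend for a deal over a trailing window.",
    "input_schema": {
        "type": "object",
        "properties": {
            "deal_id": {"type": "string"},
            "window": {"type": "string", "enum": ["30d", "90d", "fy"]},
        },
        "required": ["deal_id", "window"],
    },
}

def dispatch(tool_name: str, tool_input: dict) -> dict:
    """Route a model tool call to a read-only data function (stubbed here)."""
    if tool_name == "get_burn_rate":
        # Production code would query deal data; a fixed value stands in.
        return {"deal_id": tool_input["deal_id"], "burn_per_month": 40_000}
    raise ValueError(f"Unknown tool: {tool_name}")
```

At request time the tool list would be passed as `tools=[GET_BURN_RATE_TOOL]` to `client.messages.create(...)`, with each `tool_use` block in the response routed through `dispatch` before its result goes back to the model. The constraint is the point: the model composes from vetted numbers rather than narrating freely.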

The boundary line

The single most important design decision for any AI feature in a regulated workflow is where AI's authority ends. Get this wrong and the feature fails internal risk review (or worse, ships and causes a bad decision). Get it right and the feature passes scrutiny on the first review pass.

The boundary in Pacific Platform, in one line:

AI surfaces signal. Humans commit decisions.

  • AI does: read deal data, compute burn rates, flag anomalies, project trajectories, draft narrative summaries, suggest categories where attention may be warranted.
  • AI does not: change deal status, approve headline-size revisions, assign contributors, commit budget reallocation, write back to source-of-truth systems without explicit human action.
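That split can be made mechanical rather than aspirational by enforcing it at the tool-dispatch layer. A hypothetical guard, not Shell's implementation — the tool names beyond `get_burn_rate` are invented for the sketch:

```python
# Hypothetical allowlist: the model can only ever reach read tools.
READ_TOOLS = {"get_burn_rate", "get_cost_by_category", "get_spend_trend"}
WRITE_TOOLS = {"set_deal_status", "assign_contributor", "reallocate_budget"}

def guard(tool_name: str) -> str:
    """Reject any model-initiated call to a write tool at the boundary."""
    if tool_name in WRITE_TOOLS:
        raise PermissionError(f"{tool_name} requires an explicit human action")
    if tool_name not in READ_TOOLS:
        raise ValueError(f"Unknown tool: {tool_name}")
    return tool_name
```

Write tools exist in the platform, but they are wired to human-pressed buttons, never to the model's tool list — so the boundary holds even if a prompt tries to cross it.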

Every action that changes something in the platform is a button a human presses, after seeing the AI's signal. Insight.AI shows that a deal is off track; the human deal owner decides whether to escalate. The platform's design language reinforces this — AI panels use a neutral grey background; human action surfaces use the brand colour.

That line was non-negotiable for finance compliance. It's also the reason the feature got past internal risk review on the first pass — when the boundary is explicit and visually reinforced, reviewers can see it instead of having to take it on faith.

Trade-offs

Going single-cockpit cost us scope ambition. There were proposals to ship forecasting, scenario analysis, and Power BI-style custom dashboards in v1. We deferred all three. Adoption depended on the cockpit being immediately understandable to a finance lead opening it cold — adding configurability would have undermined that.

Keeping data inputs lean meant standardising some data-shape decisions across business units that previously did things differently. Every finance lead lost something small they liked, in exchange for everyone gaining visibility they couldn't get before.

Outcome

500+
Active deals consolidated into one system
4 → 1
Trackers replaced with a single source of truth
↓ Significant
Reduction in manual review-prep time
↑ Earlier
Risk visibility surfacing pre-month-end

Pacific is now the starting point for monthly and quarterly portfolio discussions. Leaders use it to scan the portfolio, drill into flagged deals, and decide where to focus attention — using a single live view instead of multiple spreadsheets and slide packs.

With structured deal data and workflows in one place, Shell Finance now has a platform for scenario analysis, trend views across quarters, and future integration with forecasting and AI-driven indicators of deal health.

What I'd change

I'd have invested in the contributor experience earlier. The cockpit was designed for finance leaders first — but deal-makers and analysts are the ones who keep the data fresh, and their flows lagged the leadership flows by a release cycle. If we'd parallel-tracked them from day one, adoption would have ramped faster.

Also: I'd have shipped Insight.AI behind a clearer "explain" affordance. Early users wanted to know why the AI flagged something, not just what. We added it later. Should have been there at launch.