Project Sundial: Building AI Capability at Scale

A leading financial services organisation reached a new inflection point: AI moved from systems into everyday work.

Generative AI extended adoption beyond technical teams, shifting use and accountability to managers and knowledge workers and introducing new questions of judgement, trust, and value.

Project Sundial describes a 12-month engagement in which organisational AI capability was deliberately aligned to enterprise strategy and operational transformation—embedding AI into ways of working while preserving governance, confidence, and long-term resilience.


Executive Summary

Organisations with established AI maturity in technical teams now face a new challenge: enabling the broader workforce with GenAI and agentic AI capability.

A large financial services organisation with more than a decade of experience using AI reached a critical inflection point.

AI was already embedded in core processes, supported by established governance, and culturally understood as “safe to try.” What changed was the scale and proximity of AI to everyday work. The introduction of generative AI tools, including Microsoft Copilot, expanded AI use beyond specialist teams to knowledge workers, managers, and leaders across the enterprise.

The challenge was not technical maturity. It was the impact of AI at scale.

Project Sundial describes how AI was framed as an organisational capability and operational transformation challenge—aligning governance, culture, education, and ways of working to support trusted, responsible AI use.

Over a 12-month engagement, the organisation achieved:

  • Sustained adoption supported by capability and culture, not one-off training

  • Increased ability among managers and knowledge workers to use AI effectively and responsibly

  • Reduced risk through practical, understood guardrails

  • A shift from experimentation to delivering a return on AI investment

  • Governance experienced as an enabler of transformation rather than a constraint

  • AI ethics aligned with regulated standards and organisational values

  • A foundation for future democratised agentic AI capability

Rather than scaling activity without impact, the organisation turned AI into a coherent operational capability, supporting increased productivity, creativity, and future resilience while maintaining trust in both the organisation and the technology.


The Context: When AI Becomes Everyone’s Tool

For over 10 years, the organisation had successfully deployed AI in production environments, including decision support, automation, and analytics. Strong foundations were already in place:

  • Established AI and data governance

  • Proven technical capability

  • A culture supportive of innovation and experimentation

However, generative AI and AI-enabled collaboration tools introduced a new dynamic:

  • Thousands of non-technical staff now interacted directly with AI

  • Managers became accountable for AI-informed work products and decisions

  • AI use increasingly shaped how work was performed, not just systems

This created new questions that existing AI maturity had not been designed to answer:

  • How do non-technical roles benefit from AI?

  • How do we help all of our people understand the potential and risks of AI?

  • How do we enable our organisation to be more future-proof given the pace of AI change?

This was not a technology rollout problem.
It was a people problem, reaching well beyond the technology function.

The Mandate

The mandate was to evolve AI capability to match this new reality—without losing the trust, safety, and momentum already established.

Specifically, the organisation needed to:

  • Extend AI confidence beyond technical teams

  • Translate governance into practical guidance for everyday use

  • Support leaders and managers to exercise judgement and accountability

  • Enable widespread adoption of generative AI tools

The Approach: Aligning Governance, Culture and Education

AI was deliberately positioned as a strategic and operational transformation capability, not a standalone technology or change initiative.

The approach aligned AI adoption with enterprise strategy—ensuring that how AI was governed, adopted, and used directly supported operational priorities, risk appetite, and long-term resilience.

Governance: Enabling Operational Transformation with Confidence

Existing data and AI governance was translated into practical, everyday guardrails. Rather than adding new layers of control, governance was operationalised to align with how AI was actually used in day-to-day work.

This included:

  • Clear accountability for AI-informed decisions

  • Defined escalation pathways linked to operational risk

  • Explicit decision boundaries that enabled autonomy within agreed guardrails

  • Language and practices that resonated at every level of the organisation

As a result, responsible AI use is now embedded in everyday decision-making. It functions not only as a risk management mechanism, but as a shared way of working that protects customers and reinforces trust as generative AI use scales.

Culture: Creating the Conditions for Sustainable AI Adoption

As AI tools became widely available, adoption was driven through hands-on experimentation in everyday work. Leaders and enablement teams actively reinforced this shift, making it clear that exploring AI was both safe and expected as part of improving outcomes.

Cultural signals were strengthened through ongoing research into workforce behaviour, sentiment, and confidence, giving leadership a clear line of sight into how AI was being experienced in practice. These insights informed targeted change and capability interventions, embedded directly into priority initiatives where AI was expected to deliver operational impact.

The intent was not one-off uptake, but sustained and effective use. By systematically measuring AI sentiment across the organisation, leadership identified distinct patterns of readiness and constraint, enabling tailored interventions to address barriers, support wellbeing, and build a workforce capable of working confidently and productively with AI over time.

Education: Building Capability for Strategic Value

The workforce moved beyond AI awareness and isolated experimentation to AI embedded in everyday ways of working. The capability program focused on building AI knowledge directly into how work is done, ensuring that learning translated into consistent, practical application rather than standalone literacy.

Capability uplift was intentionally organisation-specific, concentrating on how AI should be used in this context to deliver strategic value. Education programs measurably improved AI sentiment across the enterprise, reinforcing confidence, shared expectations, and disciplined use.

Programs addressed:
  • AI practices and expectations aligned to governance and risk appetite
  • Future-ready mindsets and innovation capability
  • Value creation and benefits realisation
  • Leadership judgement, accountability, and escalation
  • Practical, role-specific application in day-to-day work

Targeted upskilling delivered direct impact in critical operational areas and evolved over time toward more advanced, agentic AI capability as tools, governance, and operational ambition matured.


Why This Matters

By aligning AI governance, culture, education, and ways of working to enterprise strategy, the organisation avoided a common failure mode: AI adoption that scales activity without real impact.

Project Sundial provided the foundation for AI to grow as a coherent operational capability, enabling transformation with confidence.


AI360 Strategic AI Advisory Program

The approach taken in Project Sundial directly informs the AI360 Strategic AI Advisory Program, a 10-month, high-impact partnership designed for organisations that need structure behind their AI transformation. As your Strategic AI Advisors, we work directly with your executive leadership team, bringing a unique combination of AI strategy, governance, and people leadership. You get a clear AI approach that supports your business strategy and a practical AI roadmap that balances innovation and risk.

Best for: Organisations that need a structured program to deliver AI value, strategic alignment, and trust.

Enquire about the AI360 Strategic AI Advisory Program