Shifting Minds: Why Trust in AI Is Not Fixed — and What Leaders Get Wrong [video]

AI adoption rarely fails because of technology alone. More often, it stalls because of how people feel about AI, and how those feelings evolve over time.

In our recent research into organisational AI adoption, we found that trust in AI is not fixed. It shifts. It develops. And critically, it can be actively cultivated. This explainer video introduces the key insights behind our research.


The Hidden Variable in AI Adoption: Attitudes

When organisations talk about trust in AI, they often default to technical controls:

  • accuracy

  • explainability

  • governance frameworks

These matter — but they don’t tell the full story.

Our research shows that people come to AI with different starting attitudes, and those attitudes strongly shape how trust develops in practice.

We identified three distinct attitudinal positions toward AI inside organisations:

1. Positive attitudes

These individuals are open, curious, and future-oriented. They tend to see AI as an enabler of better work.

Importantly, as they gain experience, their trust becomes more calibrated, not blindly optimistic. They learn where AI is strong — and where human judgement still matters.

High trust doesn’t mean unquestioning trust.

2. Negative attitudes

Negative attitudes are usually driven by perceived threat:

  • job displacement

  • loss of control

  • privacy concerns

  • decisions being “done to” people, not with them

These attitudes reduce trust and slow adoption — but they are not permanent.

When people see direct personal benefit, and when AI clearly removes low-value work rather than replacing human contribution, these attitudes often soften and shift.

3. Instrumental attitudes

This group is neither optimistic nor fearful — they are evidence-driven.

Instrumental users ask:

  • Does it work reliably?

  • Is it better than the alternative?

  • Where does it fail?

For them, trust grows through proof, performance, and repeatability. Once AI demonstrates value, their trust increases — often rapidly.

Trust Is Built Through Experience, Not Messaging

One of the strongest findings in the research is this: Trust in AI increases through use — not persuasion.

Across developers, managers, and users, attitudes shifted most when people:

  • understood what AI can and cannot do

  • saw tangible benefits in their own work

  • were supported by leaders who encouraged safe experimentation

Training alone isn’t enough. Nor are policy statements.

Trust develops when people interact with AI in meaningful, bounded, and low-risk ways, and when expectations are set honestly.

Not All AI Use Cases Are the Same: Context Matters

Trust in AI is highly context-dependent.

In creative, exploratory, or advisory work, people tolerate imperfection. AI is used as a thinking partner, not a decision authority.

In contrast, in high-stakes environments like healthcare and financial services, trust requirements are far higher. Here, reliability, transparency, and human oversight are non-negotiable — and adoption is slower as a result.

The same AI capability can be trusted in one context and rejected in another.

What This Means for Leaders

If you’re leading AI adoption, the implication is clear:

  • Stop assuming resistance is irrational

  • Stop treating trust as a switch to be turned on

  • Start designing for attitude shifts over time

Effective leaders:

  • recognise different starting positions

  • provide evidence, not hype

  • create environments where trust can be calibrated, not demanded

AI adoption is not just a technical journey.

It is a human one.

And trust, once understood properly, becomes something you can design for — not hope for.

Read the full article: Shifting Attitudes and Trust in AI


AI360 Strategic AI Advisory Program

The AI360 Strategic AI Advisory Program is a 10-month, high-impact partnership designed for organisations that need structure behind their AI transformation. As your Strategic AI Advisors, we work directly with your executive leadership team, bringing a unique combination of AI strategy, governance, and people leadership. You get a clear AI approach that supports your business strategy, and a practical AI roadmap that balances innovation and risk.

Best for: Organisations that need a structured program to deliver AI value, strategic alignment, and trust.


How Smart Leaders Navigate Fear, Trust, and Change with AI [podcast]