The silent effect of AI at work: why burnout, botshit and rising expectations are creeping in
AI in the workplace is often sold as a win-win: improved productivity, fewer errors, and more time for high-value work. Business cases highlight FTE savings, not necessarily through layoffs, but by quietly not replacing staff when they leave. This looks efficient on paper—but beneath the surface, the cost to people is building.
This isn’t just about job loss. It’s about what happens when AI begins to subtly—but significantly—reshape how we work, without giving us the time, space, or support to adapt.
The quiet squeeze on workers
Canva recently made headlines for laying off nine tech writers, suggesting their work could now be done with AI. It's one of the more visible examples of displacement. But in most organisations, the shift is subtler. Teams augmented by AI are expected to deliver more, with less. The AI doesn’t replace them—it surrounds them.
Deloitte’s research into the “silent impacts” of AI at work describes a paradox: while AI promises to remove low-value tasks, workers often report feeling more pressure, not less. Cognitive offloading is increasing—AI drafts the email, summarises the meeting, generates the proposal—but the thinking doesn’t go away. It just gets faster. We’re no longer processing ideas deeply, but reacting to a flood of shallow content.
And with AI’s help, the content just keeps coming.
Welcome to the era of ‘botshit’
One of the most overlooked and fastest-growing phenomena in today’s AI-enabled workplace is the rise of low-quality, AI-generated content—aptly termed botshit. These are Teams messages, documents, and proposals produced at lightning speed, often polished on the surface but hollow underneath. They lack the context, nuance, or critical thinking required to be genuinely useful. People are left sifting through a flood of AI-generated material, trying to extract meaning or rework it into something usable.
And. There. Is. A. Lot.
Content created without human oversight and judgement is quickly becoming a form of cognitive pollution—cluttering workflows and eroding the quality of thought.
While AI may save time for the person generating the content, it often creates more mental load for the person on the receiving end. This illustrates a broader dynamic: AI doesn’t always reduce the total amount of work. It simply shifts where the effort is felt. And when implemented poorly, it can amplify pressure rather than relieve it.
The loss of reflective space
Perhaps the most insidious cost is one we don’t measure: the erosion of time to think.
In knowledge work, deep thinking—whether strategic, creative, or ethical—is essential. The always-on digital ecosystem of email and chat started the erosion. Now, as generative AI accelerates our pace, those moments of reflection are shrinking further. The faster we go, the less time we have to question, sense-make, or pause. As a result, we risk becoming more efficient but less wise.
We’re seeing this across industries. Workers feel like they’re falling behind even as the narrative around AI tools claims to be lifting them up. In interviews I’ve conducted as part of my research, employees talk about “chasing the latest AI,” or “feeling like the pace is picking up with no time to breathe.” It’s a disconnect that’s hard for people to articulate, but it’s real.
What leaders need to do next
AI isn’t going away—nor should it. The potential for augmenting human capability is immense. But if we want sustainable, human-centred AI adoption, we need to manage the costs we’re not currently counting.
That means:
Considering cognitive and emotional load, not just productivity metrics
Giving teams time to adapt, reflect, and build new mental models
Addressing the downstream impacts of poor AI-generated content
Knowing when it is appropriate to manage for quality of work, rather than quantity
Designing for trust, transparency, and meaningful work—not just efficiency
AI is not a neutral tool. It reshapes the rhythms of our work and the expectations placed upon us. And while it may be invisible in quarterly reports, the effect on people is real.
If we want AI to support humans to be their best, we need to notice and raise these silent effects—and start designing around them.
How the AI360 Review supports people-centred AI adoption
The AI360 Review helps leaders move beyond surface-level AI adoption by uncovering the hidden impacts on people—like cognitive overload, burnout, and the rise of low-quality AI-generated content. Through our structured assessment and strategic recommendations, we support organisations to identify where AI is adding value—and where it's adding friction. We help leaders build the human capability needed to work effectively with AI, implement governance that reduces epistemic risk, and design change approaches that support sustainable, trust-based adoption.
If you're navigating AI transformation, the AI360 Review offers a clear, people-centred path forward.