Majority of Health and Safety Professionals Fear AI Over-Reliance as Adoption Increases
If you asked me five years ago whether artificial intelligence would become one of the defining talking points in our profession, I’d have raised an eyebrow. AI felt like something for the tech sector to worry about. Today, it’s on every conference agenda, in every regulator’s briefing pack, and increasingly on the desks of the health and safety professionals I speak to every week.
This year’s Voice of Our Learners survey, which gathered views from health, safety and environmental professionals across the UK, has unearthed a trend that’s more complicated than the headlines suggest. The profession is broadly positive about AI. It can see the potential. It expects AI to play a significant role in the next five years. And yet, it is deeply worried about getting it wrong and candidly admits it doesn’t have the skills to get it right.
What Is the Biggest Concern About AI in Health and Safety?
When we asked respondents to name their single biggest concern about AI in workplace health and safety, one answer towered above the rest: over-reliance on technology versus human judgment. Six in ten business leaders chose this option, and nearly half of employees agreed.
Not privacy. Not job losses. Not cost or cyber risk. The profession’s primary fear is that organisations will hand too much decision-making to algorithms and erode the human expertise that actually keeps people safe.
In nearly four decades working in this industry, I’ve seen technologies come and go. Some transformed how we work. Others created new categories of risk that nobody anticipated. What’s striking about the AI conversation is that our profession is resisting the idea that technology alone is the answer. And I think that instinct is sound.
The HSE’s own 2025 AI report, which examined over 250 real-world uses of AI across regulated sectors, flagged the same tension. Alongside genuine benefits in predictive maintenance, real-time monitoring, and risk assessment, the report identified over-reliance on AI and workforce deskilling as significant risks. When the regulator and the regulated are worried about the same thing, we should pay attention.
How Prepared Are Health and Safety Professionals for AI?
The readiness picture makes the anxiety feel entirely rational. Our survey found that nearly 40% of employees describe themselves as “not very” or “not at all” prepared to use AI tools in their work. Among business leaders, that figure is still 28%. Fewer than one in five employees, and fewer than one in six leaders, consider themselves “very prepared.”
This mirrors what the International Labour Organization found when it published its landmark report on AI and workplace safety in April 2025. AI-powered systems can reduce hazardous exposures and prevent injuries, but only when implemented with proper governance, training, and worker involvement at every stage. Without those foundations, the technology risks creating the very problems it was meant to solve.
Employees are nearly twice as likely as business leaders to report that their organisation is actively using AI for health and safety. They’re also nearly twice as likely to say their organisation isn’t even considering it. That polarisation suggests AI is being adopted in pockets, often without a coordinated strategy from the top. Some teams are getting on with it; others have been left out of the conversation entirely.
What Should Health and Safety Professionals Do About AI Now?
Despite the concerns, the profession isn’t pessimistic. Over half of both business leaders and employees expect AI to be a “significant tool alongside traditional methods” within five years. Only a tiny minority think it will have minimal impact.
That’s the right framing, and it aligns with the HSE’s own position. In January 2026, the regulator set out its approach to AI, making clear that existing health and safety legislation applies fully to AI-driven systems. The Health and Safety at Work Act’s goal-setting approach means organisations can’t outsource their duties to an algorithm. You still need the risk assessment. You still need human oversight. AI doesn’t change what the law requires.
I think there are two things every health and safety professional should be doing right now:
- Audit your AI exposure. Find out whether AI tools are already being used in your organisation, formally or informally. Our survey suggests there’s a good chance frontline teams are using tools that leadership doesn’t know about. That’s a glaring governance gap that needs to be closed.
- Keep the human at the centre. This is the one our respondents already understand instinctively. AI should augment the judgment of your well-trained safety professionals. The organisations that get this right will be the ones that treat AI as a tool in a competent professional’s hands.
The full findings from our Voice of Our Learners Report 2026 are available now, covering AI adoption, training challenges, skills priorities, and more.
To stay ahead of the trends shaping health and safety, sign up for the Astutis Quarterly Newsletter for expert analysis delivered straight to your inbox, or explore more from the This Week in Health and Safety series.