Many Professionals Are Using AI to Write Risk Assessments. We Need to Talk About It.
Somewhere in Britain this morning, a health and safety practitioner opened a tab, typed a handful of bullet points into a generative AI model, and pasted the output into a risk assessment. They saved the file. Their colleague signed it off. Nobody read it properly.
We need to stop talking about this as though the technology is driving the behaviour. It isn't. The question every practitioner using AI should be asking is simple: are we using AI to support workplace safety, or are we all too willing to give away the most important part of safety, our professional judgement?
The International Labour Organization has dedicated this year's World Day for Safety and Health at Work to AI and digitalisation in the workplace. Quite rightly too. But before we raise the alarm about algorithmic management of warehouse pickers or the psychosocial impact of always-on monitoring, we should be honest about the choices many practitioners are already making.
Why This Matters More Than the Usual Tech Panic
I have been in this profession for nearly four decades. I have watched practitioners move from paper to spreadsheets, from spreadsheets to platforms, from platforms to apps. Every shift brought the same anxiety about whether the tool would replace the person. The only thing that changed was where the person applied their judgement. The documents got faster; the thinking always stayed with the competent professional. Now, however, many practitioners using generative AI are outsourcing the production of one of the foundational elements of health and safety entirely, and they know it.
When a practitioner prompts a model to produce a hazard list, control measures, a residual risk rating, and a recommended review period, they are delegating the core intellectual work of the assessment to a tool that cannot be held accountable. If the practitioner doesn’t know enough to interrogate what comes back, they can’t tell where the output is right, where it is wrong, and where the model has confidently invented something that sounds plausible but isn’t true.
The Profession Already Knows There’s a Problem
The 2026 edition of our Voice of Our Learners Report surfaced two findings that speak directly to the choices practitioners are making right now.
The first is that nearly two in five employees in our sector (39%) told us they feel not very or not at all prepared to use AI tools in their work. Among Business Leaders, 28% said the same. Yet over half of both groups are optimistic about the technology's potential. These are people who want to use AI and are telling us, plainly, that they haven't been equipped to do so safely. Some of them are using it anyway. That is a choice about when to wait and when to press on, and right now too many people are pressing on without the skills or discretion to judge whether the output is fit for purpose.
The second is that the single biggest AI concern raised by both cohorts, outweighing cost, privacy, data security, and job displacement, was over-reliance on technology at the expense of human judgement. Six in ten Business Leaders flagged it. Nearly half of employees did too.
Put those findings next to each other and a worrying picture emerges. A workforce that knows it isn't ready. A leadership cohort that knows the main risk is people leaning too heavily on the tool. And in the meantime, many use it anyway, because it saves time against their myriad other duties.
Competence Should Govern Every Prompt
I want to be clear about where I stand. I am not against practitioners using AI. Used well, by a competent person, the technology has the potential to support health and safety in the workplace. It can accelerate document drafting. It can surface patterns in incident data that a human eye would miss. It can translate technical guidance into plain English for a shop-floor audience.
Every legitimate use case depends on a qualified person making informed judgments about what to keep, what to rewrite, and what to throw away. Someone who can read an AI-drafted risk assessment and know, immediately, that the control measure for working at height on a pitched roof is wrong. Someone who can spot that the legislation cited is either outdated or hallucinated. Someone who can tell the difference between a copy-and-paste template and a risk assessment that reflects the job being done, on the actual site, by the actual people.
What Good Looks Like
If people in your organisation are using generative AI in any part of the health and safety function (and if you're being honest with yourself, they probably are), there are three things worth doing this week.
- First, ask them. Directly, without judgement. Where are they using AI? Risk assessment drafting. Toolbox talks. Method statements. Training content. Incident report wording. The people doing the work know what they're doing with it. Managers who haven't asked don't. That gap is a governance failure, and it needs to be addressed.
- Second, make competence the condition of use. Anyone using AI to draft a safety document should be qualified to interrogate what the tool produces. Anyone signing off on that document should be qualified to interrogate it again. If your sign-off chain doesn’t have the qualifications to catch a hallucinated regulation or a wrong control measure, fix the chain before you ban the tool.
- Third, invest in training. Not AI training. Safety training. The foundational qualifications that let a practitioner look at any document, AI-generated or otherwise, and know whether it is fit for purpose. Our own data tells us the profession wants this. The appetite is there. What’s missing, in too many organisations, is the decision to fund it.
Stay ahead of the curve. Subscribe to the Astutis This Week in Health and Safety series for more expert commentary on the issues shaping the profession, or sign up for the Astutis Quarterly Newsletter for the latest in training, regulation, and industry analysis.