Time to Audit | EU Moves Toward Strict Regulation of Workplace AI Systems
The European Parliament is preparing to vote on groundbreaking legislation that could fundamentally reshape how AI and algorithmic management systems operate in workplaces across the EU. For health and safety professionals navigating an increasingly digital landscape, this regulatory shift demands immediate attention and action.
What's Happening with EU AI Regulation?
The Committee on Employment and Social Affairs has called on the European Commission to introduce comprehensive legislation regulating algorithmic technologies in European workplaces. The full Parliament is scheduled to vote on the proposal in December 2025, and a positive result could trigger a new directive specifically targeting automated monitoring and decision-making systems used to manage workers.
Here's what makes this significant: While the Platform Work Directive adopted in 2024 already regulates algorithmic management for platform workers, and the EU AI Act prohibits emotion recognition in workplaces, this new proposal goes further by specifically targeting algorithmic management (AM) systems that instruct, monitor, or evaluate employees. These systems have become remarkably prevalent across European firms: 69% use AI tools for instruction tasks, 33% deploy basic monitoring tools, and uptake continues to accelerate year-on-year.
The Core Requirements | Human Oversight and Transparency
The proposed regulations establish clear guardrails. Every decision taken or supported by algorithmic systems must have human oversight. Workers would gain the right to request explanations for decisions affecting their employment, including critical choices such as hiring, termination, contract renewals, remuneration changes, or disciplinary actions. These decisions must be made by humans, not algorithms alone.
This represents a pivotal philosophical shift: technology as a tool, not a decision-maker.
For facilities management teams overseeing multiple sites, contractors, and varied workforce arrangements, this distinction matters enormously. The cleaning crew scheduling algorithm, the automated maintenance dispatch system, and the contractor performance evaluation tool all fall under this emerging regulatory umbrella.
What This Means for Facilities Management
I've watched numerous technological waves reshape workplace practices. What distinguishes this regulatory moment is its recognition that AI systems aren't neutral; they can embed bias, erode autonomy, and create new occupational health risks even while promising efficiency gains.
Consider the data: 78% of global companies have integrated AI into at least one business function as of 2025, yet 64% of managers report that employees fear AI will make them less valuable at work. In facilities management specifically, where we're increasingly deploying predictive maintenance algorithms, automated inspection systems, and workforce optimisation platforms, this tension is palpable.
The proposed EU regulations would prohibit processing certain categories of worker data: emotional or psychological states, private communications, off-duty data, real-time geolocation outside working hours, and data relating to freedom of association. This directly impacts common FM applications, such as wearable safety monitors that track stress levels or apps that ping workers' locations throughout shifts.
The Broader Industry Shift
We're witnessing a global recalibration of workplace AI governance. The UK's Health and Safety Executive has articulated a "pro-innovation approach" that requires risk assessments for AI applications affecting health and safety. Meanwhile, recent research from the International Labour Organisation highlights that whilst digitalisation offers "immense opportunities to enhance workplace safety", the deployment of robots to handle hazardous tasks and AI to detect incidents in real time must avoid creating new risks around surveillance, algorithmic bias, and the erosion of worker autonomy.
The statistics tell a compelling story: 27% of managers acknowledge inadequate protection of workers' physical and mental health when algorithmic management tools are deployed. Conversely, companies using AI for safety monitoring report reductions in workplace incidents of up to 80% when the technology is implemented thoughtfully.
This paradox defines our challenge: capturing AI's preventive potential whilst mitigating its surveillance and control risks.
Actionable Steps for Facilities Management Professionals
Drawing from my experience establishing compliance frameworks across diverse sectors, here's what facilities management leaders should prioritise now:
Conduct an AI Systems Audit
Map every algorithmic system touching worker management, from shift scheduling platforms to PPE compliance monitoring. Document what data each system collects, how decisions are made, and where human oversight exists (or doesn't). This inventory becomes foundational for regulatory compliance and risk management.
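To make that inventory concrete, here is a minimal sketch of what one audit record might look like, assuming a simple Python dataclass; the field names, categories, and example entry are illustrative assumptions, not a prescribed compliance schema.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmicSystemRecord:
    """One entry in an algorithmic-management audit inventory (illustrative only)."""
    name: str                          # e.g. the shift-scheduling platform
    purpose: str                       # does it instruct, monitor, or evaluate workers?
    data_collected: list[str]          # categories of worker data the system processes
    makes_autonomous_decisions: bool   # does it decide, or only support a human decision?
    human_oversight_point: str         # where (or whether) a person reviews its output
    affected_groups: list[str] = field(default_factory=list)  # e.g. cleaning crews, contractors

# Hypothetical entry for the cleaning crew scheduling algorithm mentioned above
example = AlgorithmicSystemRecord(
    name="Cleaning crew scheduling algorithm",
    purpose="instruct",
    data_collected=["shift availability", "site location", "task completion times"],
    makes_autonomous_decisions=True,
    human_oversight_point="none - rotas are published without supervisor review",
    affected_groups=["cleaning crews"],
)
```

Capturing autonomy and oversight in the same record also feeds directly into the decision-pathway assessment in the next step.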
Assess Decision-Making Pathways
Identify which systems make autonomous decisions versus those that support human judgment. The proposed rules require human oversight for all consequential employment decisions. Where your systems operate autonomously, establish clear human review protocols before the regulations take effect.
Review Data Processing Practices
Evaluate what personal data your systems collect, particularly the sensitive categories the proposed directive would prohibit. Many facilities management platforms now incorporate predictive analytics drawing on extensive worker data; ensure your practices align with emerging restrictions.
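As a starting point for that review, the sketch below screens a system's data fields against the categories the proposed directive would prohibit, as summarised earlier in this article. The category labels follow that summary; the keywords, matching logic, and example fields are simplified assumptions, not legal tests.

```python
# Illustrative screen of a system's data fields against the worker-data categories
# the proposed directive would prohibit. The keywords and matching logic are
# simplified assumptions, not legal criteria.
PROHIBITED_CATEGORIES = {
    "emotional or psychological states": ["stress level", "mood", "emotion"],
    "private communications": ["private message", "personal email"],
    "off-duty data": ["off-duty", "outside working hours"],
    "real-time geolocation outside working hours": ["off-shift location", "after-hours gps"],
    "freedom of association": ["union membership", "works council"],
}

def flag_prohibited_fields(data_fields: list[str]) -> dict[str, list[str]]:
    """Return any prohibited categories matched by the system's data fields."""
    flags: dict[str, list[str]] = {}
    for category, keywords in PROHIBITED_CATEGORIES.items():
        matches = [f for f in data_fields if any(k in f.lower() for k in keywords)]
        if matches:
            flags[category] = matches
    return flags

# Hypothetical wearable safety monitor that tracks stress levels
print(flag_prohibited_fields(["heart rate", "stress level", "site entry time"]))
# -> {'emotional or psychological states': ['stress level']}
```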
Engage Workers in Technology Decisions
The proposed regulations emphasise consultation with workers on algorithmic management systems. Beyond compliance, this participation improves implementation outcomes. Workers using systems daily often identify practical issues that designers miss.
Establish Transparency Mechanisms
Workers must understand how algorithmic systems affect their working conditions, what data is collected, and how decisions are made. Develop clear, accessible documentation explaining your AI systems in plain language: not technical specifications, but genuine transparency about impacts and safeguards.
Prepare Training Programmes
Both management and workers need training on algorithmic systems deployed in your facilities. This isn't regulatory box-ticking; effective human oversight requires a genuine understanding of these tools' capabilities and limitations.
The Opportunity in Regulation
It's tempting to view emerging AI regulation as a bureaucratic burden. I'd argue the opposite: these frameworks create opportunity for competitive advantage through responsible implementation.
Organisations that get ahead of regulation, building transparent, worker-centred AI systems now, position themselves as employers of choice whilst competitors scramble for compliance. The facilities management sector, which has often led in occupational safety innovation, can again demonstrate leadership by proactively addressing algorithmic management risks.
Moreover, proper governance addresses real business risks. Algorithmic systems that undermine worker autonomy, embed bias, or create surveillance concerns generate significant liability exposure: regulatory, reputational, and operational. The proposed EU requirements essentially mandate good practice that protects both workers and organisations.
Looking Forward
The December 2025 Parliamentary vote represents just one milestone in a longer regulatory journey. The Commission will have three months to respond, potentially launching a multi-year legislative process. But smart organisations won't wait for final regulations; they'll use this lead time to audit current practices, engage stakeholders, and build compliance frameworks proactively.
For facilities management professionals, this means integrating algorithmic governance into your existing health and safety management systems. The principles aren't foreign: risk assessment, hierarchy of controls, worker consultation, and continuous improvement. We're simply extending established occupational health frameworks to encompass algorithmic risks.
The technology itself isn't the challenge; we've always adapted to new tools. The challenge is to ensure these powerful systems serve human flourishing rather than diminish it. That's fundamentally a question of governance, transparency, and values: territory where health and safety professionals have deep expertise to contribute.
Stay ahead of regulatory developments and workplace safety innovations. Sign up for the Astutis Quarterly Newsletter to have expert analysis delivered directly to your inbox, or explore more insights from our This Week in Health and Safety series, where we break down emerging regulatory changes and their practical implications for safety professionals across all sectors.
The future of work will be shaped by those who ask not just "Can AI do this?" but "Should AI do this, and if so, how can we ensure it serves everyone involved?" That's the conversation the EU is advancing, and one every workplace leader needs to join.