
Away from all the scaremongering headlines, AI is playing a valuable supporting role in health and safety on a site near you. Denise Chevin talks to some of the people making use of it.
From smart assistants like Siri and navigation apps like Google Maps to predictive text and online chatbots, artificial intelligence (AI) has become woven into our everyday lives without many of us even noticing it.
But over the past 18 months, AI has been advancing at pace with the arrival of large language models (LLMs) such as ChatGPT and Microsoft Copilot. These use deep learning across vast amounts of data taken from the internet (or internal documents) to analyse and understand text or images, then generate their own output based on prompts provided by the user. Reports can be produced in a fraction of the time.
The result is providing everyone with their own assistant – or, as one person described it, “a very smart intern that sometimes gets things wrong”. These tools are revolutionising the workplace.
AI is beginning to play a growing and transformative role in improving health and safety on construction sites, as well as assisting CDM coordinators and health and safety professionals.
It is still early days and you might not be able to trust a risk assessment from ChatGPT yet without some very careful scrutiny, but the message from experts is very clear: ignore at your peril.
So, what is AI and how is it being used to aid health and safety in construction?
Human intelligence
AI refers to the development of computer systems that can perform tasks that normally require human intelligence. These include things like understanding speech, recognising images, translating languages, making decisions and learning from experience.
It has been making its way onto construction sites in several technologies, often with the multiple aims of increasing efficiency, quality and safety – and sometimes by putting people out of harm’s way.
These functions include using robots to do surveying tasks in dangerous areas, like Boston Dynamics’ dog-like Spot. This quadrupedal robot is being used at the Sellafield nuclear site in Cumbria, for example, to assist with decommissioning and cleanup efforts.


Training is another area that is benefiting enormously (see box, p13), while wearable sensors that alerted workers if they were too close to each other proved helpful during Covid, for example.
But AI is now increasingly being used on construction sites to proactively manage health and safety risks. Gena Ibraev, a principal consultant at professional services business Shirley Parsons, recently delivered a CPD webinar for APS on the role of AI technology in construction project safety.
Enhancing safety assurance
The session focused on how AI tools can enhance safety assurance in construction projects, concentrating on two main advances – the use of LLMs such as ChatGPT for producing risk assessments, and the use of computer vision coupled with predictive analytics.
The latter is where AI-powered cameras monitor live site activity to ensure workers are wearing the correct personal protective equipment (PPE), such as hard hats, hi-vis vests and harnesses. Ibraev showed how this can work by demonstrating the software tool DeepX. Systems like this can flag when individuals enter restricted or hazardous areas, triggering real-time alerts to site managers.
Combined with predictive analytics, this type of software can suggest when and where accidents are most likely to occur, either through historical data or through forecasting.
For example, if data shows a pattern of slips and falls on wet surfaces near scaffolding during early morning shifts, site supervisors can be prompted to increase monitoring or adjust work schedules accordingly. Or forecasting the trajectory of moving vehicles on site in real time from the site camera system can be used to alert drivers if a collision looks likely.
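The pattern-spotting behind this kind of predictive analytics can be illustrated with a toy example. The incident log, field names and threshold below are invented for illustration; a real system would mine a far larger dataset from a site's accident-reporting tools:

```python
from collections import Counter

# Hypothetical incident log: (location, shift, incident_type) records.
incidents = [
    ("scaffolding", "early", "slip"),
    ("scaffolding", "early", "slip"),
    ("scaffolding", "early", "fall"),
    ("loading bay", "late", "struck-by"),
    ("scaffolding", "late", "slip"),
]

def high_risk_patterns(records, threshold=2):
    """Flag (location, shift) combinations with repeated incidents."""
    counts = Counter((loc, shift) for loc, shift, _ in records)
    return [pattern for pattern, n in counts.items() if n >= threshold]

# Flags the scaffolding / early-morning combination for extra monitoring.
print(high_risk_patterns(incidents))
```

Real deployments layer forecasting models on top of counts like these, but the principle is the same: surface where and when incidents cluster, so supervisors can act before the next one.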
Automated risk assessments
One of the emerging trends that could have the biggest impact on health and safety professionals is the use of AI software based on LLMs like ChatGPT to help generate risk assessments.
Ibraev demonstrated how uploading a picture into ChatGPT of, say, a trench, means that, with the right prompts, the software can have a good stab at providing a comprehensive risk assessment within a few seconds.

The software can be ‘trained’ to use company HSE protocols and terminology, and can apply RAG ratings. But, as Ibraev pointed out, it is not infallible, and there were a few key areas that it missed.
Another issue, he says, is that the technology does not know its own limitations and will produce a risk assessment even when it is not given the full context of the situation.
“Technology doesn’t care about ethics or limitations, so the onus is on you to know what you want to do with it,” he says.
Professional scrutiny
Ibraev’s view, and that of others interviewed, is that this technology can provide a good starting point, but it still needs a professional to scrutinise the information.
Specialist mobile phone apps that can generate risk assessments from photos taken by field workers have been available for some time. These can be quickly emailed back to specialist H&S professionals at base, who can scrutinise whether what has been generated is adequate.
“It means they save a lot of time,” Ibraev commented in the webinar.
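One crude form of automated support for that human scrutiny is a pre-check that a generated assessment at least covers the sections a company's protocol requires. The section names below are invented for illustration, and a check like this supplements professional review rather than replacing it:

```python
# Hypothetical required sections from a company HSE protocol.
REQUIRED_SECTIONS = ["hazards", "who is at risk", "control measures", "rag rating"]

def missing_sections(assessment_text):
    """Return required sections absent from a generated risk assessment."""
    text = assessment_text.lower()
    return [s for s in REQUIRED_SECTIONS if s not in text]

draft = """Hazards: trench collapse, falling materials.
Control measures: shoring, exclusion zone, edge protection.
RAG rating: amber."""

print(missing_sections(draft))  # -> ['who is at risk']
```

A keyword check like this catches only the most obvious omissions; judging whether the content of each section is adequate remains a job for a competent professional.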
Ibraev says that the arrival of systems like ChatGPT may sound scary, as professionals wonder if it is going to put them out of a job – and there may be sceptical firms that ban its use altogether.
He warns against this: “As a minimum, you need to be aware of it – and in some ways it gives more importance to your role as professionals to critically assess what you have been given, for example, by your subcontractors.”
Streamlining CDM processes
Fran Watkins-White, head of CDM services at Bureau Veritas UK, is among the early adopters championing AI in the organisation. Her focus is on using AI to streamline processes in the CDM domain and beyond, exploring how it can alleviate manual workloads and enhance technical oversight.
One of the key areas she is targeting is the collation and consolidation of risk registers. Typically, this involves gathering disparate information from architects, engineers and stakeholders to form a cohesive document.
“You spend a lot of time pulling together information from different consultants and putting it into one document,” Watkins-White explains.

“AI, in this context, acts as a valuable assistant, allowing professionals to focus on high-value tasks, ie, reviewing and technical oversight of information, rather than administrative collation.”
Bureau Veritas is using an in‑house AI solution – in preference to commercial models like ChatGPT – which provides the security required for handling data.
“It’s our in-house version, which means it’s secure, so information doesn’t leave the business systems,” Watkins-White notes. This tailored AI can be trained to perform specific tasks, such as reviewing documents and extracting key points, making it a versatile tool in her daily work.
The initiative is gradually being rolled out across the company, with all employees now having access to the AI tool from their desktops. The goal is to encourage exploration, with Watkins-White and her fellow ambassadors providing guidance.
In terms of potential, she sees AI playing a significant role in supporting CDM professionals – not replacing them. “You still need technical brains to review and provide oversight,” she emphasises. “But you use AI to support process and production… to do it quicker and more speedily.”
Reviewing design drawings
Watkins-White envisions AI evolving to help review design drawings, identify safety compliance issues and even generate key questions for design teams based on visual inputs.
She is candid about the limitations and evolving nature of AI. “It’s only as good as how it’s learned,” she cautions. “If you give it the wrong questions, it will give you the wrong answers.”
James Hymers, who runs his own consultancy Honest Safety, has also been exploring AI, using Microsoft Copilot to generate risk assessments.
He has been impressed that the software generates detailed risks even in very specialised areas – such as working with rare earth metal magnets. He has also been impressed by the reports’ structure. But, like others, he says using AI in this way must be treated with caution.
Seb Corby, principal consultant at Safetytech Accelerator, which brings together technology startups with industry partners, is working closely with HSE to understand how AI can enhance compliance without compromising accuracy – and how new technologies can be implemented in safety-critical environments.
He says that there is an acknowledgement that companies are spending huge amounts of time on paperwork but HSE needs to ensure the tools used to automate compliance are genuinely effective.
One of the key insights from the evaluations is that many AI tools simply aren’t accurate enough to be trusted with safety-critical decisions. This is especially true for LLMs, like ChatGPT, which can produce plausible but sometimes incorrect answers. The bottom line, Corby says, is: “We’re not quite there.”
Inconsistency
A major barrier is the inconsistency in how safety data is captured, logged and interpreted across the industry. Without a shared taxonomy – for example, whether a hazard is called a “risk”, “condition” or “event” – it’s nearly impossible to analyse data effectively at scale. This issue is something HSE, tech developers and construction firms must solve together.
Like others, Corby points to the shift toward AI that assists workers rather than replaces them as a promising development. This includes FYLD (see below), a platform which allows users to conduct risk assessments by filming a site and narrating what they see. The system analyses footage to generate assessments – even evaluating whether the user appears alert.
Looking ahead, he believes the biggest breakthroughs may come not from AI alone, but from improvements like greater automation. “Taking people out of dangerous environments altogether might ultimately reduce the need for reactive safety systems. But that’s still a long way off.”
“People forget electricity took 50 years to have an impact on productivity,” he notes. “We’re still early. There have been a lot of important failures – finding out what doesn’t work is progress too.”
Four ways artificial intelligence is helping to improve safety

A plethora of new AI-driven tools is appearing in construction, geared in various ways to improving safety and productivity. Here are four:
FYLD app
The FYLD app uses video analytics and AI to help operatives and managers identify and record hazards and control measures they see in their work environment. Using the app, field workers take 30-second videos of their site, talking through hazards that are present or noticeably absent.
The software’s AI-engine then reviews the video and audio data and generates a visual risk assessment (VRA) with a bullet-point list of potential risks and proposed control measures.
Field workers can assess and amend the VRA before sharing it with a remote manager for their review and input. The Kier Highways team on the National Highways Area 13 contract used FYLD to conduct risk assessments up to 85% faster.
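As a rough illustration of the narration-to-assessment step (not FYLD's actual engine), candidate hazards can be extracted from a walkthrough transcript by keyword matching, with each paired to a suggested control for the worker and manager to review. The hazard list and controls below are invented:

```python
# Illustrative hazard/control pairs -- hypothetical, not a real library.
HAZARD_CONTROLS = {
    "open trench": "install edge protection and barriers",
    "wet surface": "put out warning signage and anti-slip matting",
    "overhead cables": "mark exclusion zone and brief plant operators",
}

def draft_vra(transcript):
    """Draft a visual risk assessment from a narrated site transcript."""
    text = transcript.lower()
    return [(h, c) for h, c in HAZARD_CONTROLS.items() if h in text]

transcript = "There's an open trench by the access road and a wet surface near the cabin."
for hazard, control in draft_vra(transcript):
    print(f"- {hazard}: {control}")
```

Production systems use speech recognition and video analysis rather than keyword lookup, but the workflow is the same: machine-drafted bullets, human-checked before sign-off.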
DeepX
DeepX employs AI-driven computer vision systems to automatically detect whether workers are wearing the required PPE, such as helmets, safety vests and gloves. This real-time monitoring ensures adherence to safety protocols and helps maintain compliance with regulations.
By reducing manual oversight, these systems minimise human error and enhance overall safety on construction sites.
Through continuous video analysis, DeepX’s technology identifies potential safety hazards, such as unsafe worker behaviours, unauthorised access to restricted areas or equipment malfunctions. The system provides immediate alerts to supervisors, enabling prompt corrective actions and preventing accidents before they occur.
Schindler Robotic Installation System
Skanska has recently deployed the Schindler Robotic Installation System for Elevators (RISE) at the 105 Victoria Street project in central London. This is the first time this technology has been used in the UK.
Schindler RISE is a self-climbing robot designed to navigate elevator shafts independently while installing components with “precision and speed”.
Equipped with tools to drill holes and install anchor bolts, it significantly reduces human involvement in this part of the process. Such tasks can lead to fatigue when performed at height, but using a robot eliminates this risk. The specialist operator monitors the robot’s movements via a remote-control panel.
SafeXtend
SafeXtend is an AI-powered virtual reality (VR) training platform with adaptive learning, designed for educational and training environments.
It provides an immersive, interactive learning experience for construction workers through accurate simulations of construction sites, in which personnel can engage in realistic scenarios covering risk assessment and safety protocol training.
The system claims to evaluate trainee performance and to let employers monitor training effectiveness.