The world may run out of time to prepare for the safety risks posed by rapidly advancing artificial intelligence, a leading researcher at the UK’s government-backed science agency has warned.
David Dalrymple, programme director and AI safety specialist at the Advanced Research and Invention Agency (Aria), said people should be worried about the growing capabilities of cutting-edge systems.
“We should be concerned about systems that can perform all of the functions that humans perform to get things done in the world, but better,” he told the Guardian. “We will be outcompeted in all of the domains that we need to be dominant in, in order to maintain control of our civilisation, society and planet.”
Dalrymple highlighted what he sees as a widening gap between public-sector understanding and the private AI industry’s view of how powerful the next generation of models could be.
“Things are moving really fast and we may not have time to get ahead of it from a safety perspective,” he said. “And it’s not science fiction to project that within five years most economically valuable tasks will be performed by machines at a higher level of quality and lower cost than by humans.”
Aria, which is publicly funded but operates independently of government, directs research funding into high-risk, high-reward technologies. Dalrymple is working on methods to secure the use of AI in critical infrastructure such as energy networks.
He cautioned that governments should not treat advanced AI as inherently dependable. “We can’t assume these systems are reliable. The science to do that is just not likely to materialise in time given the economic pressure,” he said. “So the next best thing that we can do, which we may be able to do in time, is to control and mitigate the downsides.”
Dalrymple warned that allowing capabilities to advance faster than safety measures could lead to a “destabilisation of security and economy”. He called for more technical work on understanding and controlling the behaviour of powerful AI models.
“Progress can be framed as destabilising and it could actually be good, which is what a lot of people at the frontier are hoping,” he said. “I am working to try to make things go better but it’s very high risk and human civilisation is on the whole sleepwalking into this transition.”
His comments follow new findings from the UK government’s AI Safety Institute (AISI), which recently reported that the capabilities of advanced models are “improving rapidly” across multiple domains, with performance in some areas doubling roughly every eight months.
According to AISI, leading models now succeed at apprentice-level tasks about half the time on average, up from around 10% last year. The institute also found that the most capable systems can autonomously complete tasks that would take a human expert more than an hour.
The institute examined the risk of AI systems self-replicating — copying themselves across devices and becoming harder to control — and found that two state-of-the-art models achieved success rates above 60% in tests. However, AISI said such worst-case scenarios were “unlikely to succeed in real-world conditions”.
Dalrymple expects AI systems to be able to automate the equivalent of a full day of research and development work by late 2026. He argued this would further accelerate progress, because AI would increasingly be able to improve the mathematics and computer science that underpin it.
