Oxford Researchers Awarded ARIA funding to Develop Safety-first AI

23 April 2025

A team of Oxford University researchers led by Trinity’s Dave Parker is developing one of two major Oxford-based projects funded under the UK Government’s Advanced Research and Invention Agency (ARIA) Safeguarded AI programme.

The programme is backed by £59 million of funding and aims to develop novel technical approaches to the safe deployment of AI. As part of Technical Area 3 (TA3) of the programme, nine research teams across the UK will focus on developing mathematical and computational methods that provide quantitative safety guarantees for AI systems. This will help ensure that advanced AI can be deployed responsibly in safety-critical domains such as infrastructure, healthcare, and manufacturing.

Two of these projects are led by researchers at the University of Oxford. The first, Towards Large-Scale Validation of Business Process Artificial Intelligence (BPAI), is led by Trinity’s Tutorial Fellow in Computer Science, Professor Dave Parker, together with Professor Nobuko Yoshida, both of Oxford’s Department of Computer Science. The project will provide formal, quantitative guarantees for AI-based systems in Business Process Intelligence (BPI). Using probabilistic process models and the PRISM verification toolset, the team will develop a workflow to analyse automated BPI solutions and evaluate them against safety benchmarks. The project will also involve collaboration with industry to apply the methods in practical settings.
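To give a flavour of the kind of quantitative guarantee at stake, the sketch below models a toy business process as a discrete-time Markov chain and computes the probability of it eventually completing successfully. This is purely illustrative: the states, transition probabilities, and code are invented for this example and are not the project's models or the PRISM tool itself, though PRISM answers reachability queries of exactly this shape.

```python
# Illustrative only: a toy three-step business process as a discrete-time
# Markov chain, with an invented state space and probabilities.
# States: 0 = start, 1 = review, 2 = done (success), 3 = failed.
P = {
    0: {1: 0.9, 3: 0.1},            # start -> review or fail
    1: {2: 0.8, 0: 0.15, 3: 0.05},  # review -> done, retry, or fail
    2: {2: 1.0},                    # done is absorbing
    3: {3: 1.0},                    # failed is absorbing
}

def prob_reach(P, target, iters=1000):
    """Probability of eventually reaching `target` from each state,
    computed by fixed-point (value) iteration."""
    x = {s: (1.0 if s == target else 0.0) for s in P}
    for _ in range(iters):
        x = {s: (1.0 if s == target else
                 sum(p * x[t] for t, p in P[s].items()))
             for s in P}
    return x

probs = prob_reach(P, target=2)
print(round(probs[0], 4))  # probability the process eventually completes
```

A tool like PRISM evaluates such queries (e.g. "what is the probability of reaching the success state?") symbolically and at much larger scale, against models written in its own modelling language rather than hand-coded dictionaries.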

Professors Nobuko Yoshida and Dave Parker said: 'Through the Safeguarded AI programme, ARIA is creating space to explore rigorous, formal approaches to AI safety. Our project addresses the challenge of verifying AI-based business process systems using probabilistic models and automated analysis techniques. By developing scalable workflows and benchmarks, we aim to provide quantitative guarantees that support the safe deployment of these systems in real-world settings.'

Professor Parker’s research is in formal verification: rigorous techniques for checking that systems function correctly. He is particularly interested in verification methods for establishing the correctness and robustness of AI-based systems. 

The second Oxford-led project, SAGEflex: Safeguarded AI Agents for Grid-Edge Flexibility, aims to develop an AI-based framework for the scalable and adaptive coordination of a net-zero power grid in Great Britain, which will involve millions of additional grid-edge devices, including electric vehicles, heat pumps, and home and community batteries. A team of researchers at Oxford’s Department of Engineering Science will address this by developing rigorous safety specifications, a curriculum of test problems, and a software platform supporting the design, benchmarking, and scaling up of multi-agent reinforcement learning (MARL) solutions.