I am Professor of Computing Systems in the Department of Computer Science.
Before coming to Oxford, I was Professor in the School of Computer Science at the University of Birmingham, Lecturer at the University of Leicester and Assistant Professor at the Jagiellonian University in Cracow, Poland.
I am a Fellow of the Royal Society, a Fellow of the ACM, a member of Academia Europaea, a Fellow of the EATCS, a Fellow of the BCS and a Fellow of the Polish Society of Arts & Sciences Abroad.
I was the first female winner of the Royal Society Milner Award, which I received in 2018, and was awarded the BCS Lovelace Medal in 2019.
I won two ERC (European Research Council) Advanced Grants, VERIWARE (www.veriware.org) and FUN2MODEL (www.fun2model.org), and an EPSRC Programme Grant on Mobile Autonomy.
I teach undergraduates and MSc students in the Department of Computer Science and advise graduate students at Trinity. The course on ‘Probabilistic Model Checking’ that I initiated is an introduction to my research, and it includes both theory and practical work using the probabilistic model checker PRISM developed in my group. I am also involved in two EPSRC Centres for Doctoral Training, AIMS (Autonomous Intelligent Machines and Systems) and SABS (Sustainable Approaches to Biomedical Science: Responsible and Reproducible Research). In addition, I frequently give tutorials about my research at conferences and summer schools.
Computing infrastructure and Artificial Intelligence (AI) have become indispensable in our society, with examples ranging from online banking to robotic assistants, intelligent vehicles and electronic medical equipment. Software faults in such systems can have disastrous consequences.
My research is concerned with developing modelling and automated verification techniques that can guarantee the stable, safe, secure, timely, reliable and resource-efficient operation of computing systems. More recently, I have turned my attention to safety, robustness and trust in AI and autonomous robots. Model checking is a verification technique that can establish whether certain properties, usually expressed in temporal logic, hold for a system model. The models describe how systems move between states by executing actions, and are typically represented as state-transition graphs. Model checking explores all system executions and, unlike testing, amounts to a mathematical proof. This task is inherently challenging because state spaces may be infinite and the underlying problem is, in general, undecidable. In conventional model checking, the properties of interest are qualitative, such as safety (e.g. ‘the car will never stall while driving autonomously’ for intelligent vehicles). The distinctive aspect of my work is its focus on probabilistic and quantitative verification techniques.
I led the development of the leading international probabilistic symbolic model checker PRISM, which has been downloaded over 75,000 times and used to analyse over 100 case studies, discovering flaws in several of them. PRISM systematically explores a model of a system against properties such as ‘what is the worst-case time to deliver an update from the sensor monitoring distance to the car ahead?’ (for an intelligent vehicle) and ‘what is the expected power consumption during the first hour of operation?’ (in a body-sensor network). PRISM has proved its usefulness in many distinct fields, including distributed and cloud computing, wireless networks, security, robotics, quantum computing, game theory, biology and nanotechnology.
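The kind of quantitative query answered by a probabilistic model checker can be illustrated in miniature. The sketch below is not PRISM itself, and the four-state model and its probabilities are invented for illustration: it computes a probabilistic reachability property (‘what is the probability of eventually reaching the goal state?’) on a small discrete-time Markov chain using value iteration, one of the standard numerical methods such tools employ.

```python
# Illustrative sketch (not PRISM): probabilistic reachability on a
# small discrete-time Markov chain, solved by value iteration.

# Hypothetical 4-state protocol: 'start' may retry, succeed or fail.
# Transition probabilities: state -> {successor: probability}.
P = {
    "start": {"retry": 0.5, "goal": 0.4, "fail": 0.1},
    "retry": {"start": 0.3, "goal": 0.6, "fail": 0.1},
    "goal":  {"goal": 1.0},   # absorbing success state
    "fail":  {"fail": 1.0},   # absorbing failure state
}

def reach_probability(P, target, eps=1e-10):
    """Probability of eventually reaching `target` from each state,
    computed by repeatedly applying the transition probabilities
    with the value at `target` pinned to 1."""
    x = {s: (1.0 if s == target else 0.0) for s in P}
    while True:
        delta = 0.0
        for s in P:
            if s == target:
                continue
            new = sum(p * x[t] for t, p in P[s].items())
            delta = max(delta, abs(new - x[s]))
            x[s] = new
        if delta < eps:
            return x

probs = reach_probability(P, "goal")
print(round(probs["start"], 4))  # probability of eventual success from 'start'
```

Starting all non-target values at zero, the iteration increases monotonically to the least fixed point of the equations, which is exactly the reachability probability; PRISM applies the same principle at scale, with symbolic data structures and a temporal-logic front end.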
Marta Kwiatkowska, Gethin Norman, David Parker and Gabriel Santos, ‘PRISM-games 3.0: Stochastic Game Verification with Concurrency, Equilibria and Time’, International Conference on Computer Aided Verification (CAV 2020)
Amy Zhang, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Kwiatkowska, Joelle Pineau, Yarin Gal and Doina Precup, ‘Invariant Causal Prediction for Block MDPs’, International Conference on Machine Learning (ICML 2020) (forthcoming, 2020)
Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang and Marta Kwiatkowska, ‘A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees’, Theoretical Computer Science (2020)
Matthew Wicker, Luca Laurenti, Andrea Patane and Marta Kwiatkowska, ‘Probabilistic Safety for Bayesian Neural Networks’, 36th Conference on Uncertainty in Artificial Intelligence (UAI 2020), AUAI Press (forthcoming, 2020)
Xiaowei Huang, Marta Kwiatkowska and Maciej Olejnik, ‘Reasoning about Cognitive Trust in Stochastic Multiagent Systems’, ACM Transactions on Computational Logic (2019)
Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska and James Worrell, ‘On the Hardness of Robust Classification’, Conference on Neural Information Processing Systems (NeurIPS 2019)
My research addresses the safety, trust and robustness of computing and AI systems.