is currently a Research Scholar at the Centre for the Governance of Artificial Intelligence and a DPhil Candidate in the Faculty of Law at the University of Oxford. My doctoral research focuses on the application of Anglo-Canadian tort law to harms caused by machine learning systems, particularly generative AI.
I evaluate the potential for language models to create statements upon which persons will reasonably rely and the attendant potential for liability under the Hedley Byrne principle in English and Canadian law.
I evaluate the potential for language models to create defamatory statements and analyse how such statements will be treated in Canadian and English defamation law.
We argue that s 24(1) of the Canadian Charter of Rights and Freedoms permits awarding damages for legislative action that infringes Charter rights, that underlying constitutional principles limit the type of damages that should be awarded, and that Supreme Court precedent to date suggests that compensatory damages should be available only if a legislature has acted negligently. We suggest that the concomitant standard of care requires the legislature to obtain properly qualified legal advice.
I argue that the uniquely Canadian common law cause of action, termed “constructive expropriation” by the Supreme Court of Canada in Annapolis Group Inc v Halifax Regional Municipality, 2022 SCC 36, is, and has always been, a wrongly conceived tort, and that legislative intervention is required to address the matter.
I reconsider the legal nature of the Independent Assessment Process (IAP) under the Indian Residential Schools Settlement Agreement (IRSSA). I argue that the IAP should have been considered an arbitration. I construct a test for identifying when a process is an “arbitration”, show that the IAP meets that test, and evaluate how treating the IAP as an arbitration would have affected it.
D Matyas, P Wills, & B Dewitt, 48(1) Can Pub Pol'y 186
Through the lens of resilience, we explore the Canadian court system’s response to COVID-19 and the prospects for administering justice amid disasters. We propose ways in which the business of judging during shocks can become more integral to the business as usual of court systems.
A Reuel, B Bucknall, S Casper, T Fist, L Soder, O Aarne, L Hammond, L Ibrahim, A Chan, P Wills, M Anderljung, B Garfinkel, L Heim, A Trask, G Mukobi, R Schaeffer, M Baker, S Hooker, I Solaiman, A S Luccioni, N Rajkumar, N Moës, J Ladish, N Guha, J Newman, Y Bengio, T South, A Pentland, S Koyejo, M J Kochenderfer, R Trager
We develop the concept of technical AI governance to refer to technical analysis and tools that support the effective governance of AI. We argue technical AI governance can help to (a) identify areas where intervention is needed, (b) identify and assess the efficacy of potential governance actions, and (c) enhance governance options by designing mechanisms for enforcement, incentivization, or compliance. We also taxonomise and catalogue open problems in the area.
A Chan, N Kolt, P Wills, U Anwar, C Schroeder de Witt, N Rajkumar, L Hammond, D Krueger, L Heim, M Anderljung
We suggest that being able to identify AI systems (both individual instances and types of systems) could benefit society as such systems become increasingly pervasive. We propose a framework for doing so, consider why certain actors could have incentives to create identifiers, and highlight the limitations and risks of the proposed framework.
O Evans, O Cotton-Barratt, L Finnveden, A Bales, A Balwit, P Wills, L Righetti, W Saunders
We identify the potential for AI systems to produce lies and the broader consequences of AI-generated falsehoods. We propose a standard of avoiding negligent falsehoods and suggest institutions to evaluate AI systems against that standard before and after they are deployed in the real world.