Leah Davis is an Electrical and Computer Engineering PhD Student in the Responsible Autonomous Intelligent Systems Ethics Lab at McGill University. She holds an MSc in Social Data Science from the University of Oxford and a BEng in Biomedical Engineering Co-op from the University of Guelph. Drawing on diverse volunteer and industry experiences in AI ethics organizations, engineering education development, and Women in Engineering initiatives, she pursues doctoral research investigating sociotechnical algorithmic auditing practices in collaboration with l’Université du Québec à Montréal and Mila – Quebec AI Institute. She hopes to contribute to forthcoming AI regulatory and policy movements in safety engineering, particularly by creating accountability mechanisms among actors in the Canadian AI ecosystem.
Born in North Bay, Ontario, Leah grew up in a Francophone community, improving her French through local interactions and participating in an immersive Québécois program at l’Université Laval. Believing that we are shaped by those around us, Leah is excited to collaborate with recipients, academic fellows, and industry mentors to learn about the shared experiences of those her work impacts. Within the scholarship program, she looks to actively engage in collective forums such as the Public Interaction Program. Outside of research, Leah enjoys kickboxing, rowing, and making vegan desserts.
2025
Over the past decade, an ecosystem of measures has emerged to evaluate the social and ethical implications of AI systems, largely shaped by high-level ethics principles. These measures are developed and used in fragmented ways, without adequate attention to how they are situated in AI systems. In this paper, we examine how existing measures used in the computing literature map to AI system components, attributes, hazards, and harms. Our analysis draws on a scoping review resulting in nearly 800 measures corresponding to 11 AI ethics principles. We find that most measures focus on four principles – fairness, transparency, privacy, and trust – and primarily assess model or output system components. Few measures account for interactions across system elements, and only a narrow set of hazards is typically considered for each harm type. Many measures are disconnected from where harm is experienced and lack guidance for setting meaningful thresholds. These patterns reveal how current evaluation practices remain fragmented, measuring in pieces rather than capturing how harms emerge across systems. Framing measures with respect to system attributes, hazards, and harms can strengthen regulatory oversight, support actionable practices in industry, and ground future research in systems-level understanding.
Rismani, S., Shelby, R., Davis, L., Rostamzadeh, N., & Moon, A. (2025, October). Measuring What Matters: Connecting AI Ethics Evaluations to System Attributes, Hazards, and Harms. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (Vol. 8, No. 3, pp. 2199-2213).
2025
General All-Purpose Sociotechnical Algorithmic Auditing Project: A collaboration with Professor AJung Moon at McGill University, Dr. Dominic Martin and Dr. Sébastien Gambs at the Université du Québec à Montréal, and L'Observatoire international sur les impacts sociétaux de l'IA et du numérique (OBVIA). Building on an extensive scoping review of over 3,000 algorithmic evaluation articles, this work examines the complexities of, and potential solutions for, systems-level audits of general-purpose AI systems, including large language models, visual and audio models, and image-to-text models.
2025
Gaps in Robotic/AI Governance: In collaboration with Professor AJung Moon at McGill University, Dr. Pierre Larouche at the Université de Montréal, and Dr. Keri Grieman at the University of Oxford, this project sits at the intersection of engineering, regulation, and law. It examines the regulatory gaps in AI and robotics in Canada and their implications. It aims to unpack the following: a) the unique nature of the robotics ecosystem in Canada and its relationship to the AI ecosystem; b) existing regulations and standards on responsible robotics internationally and in Canada; and c) critical governance gaps for embodied AI systems in Canada. A primary objective is to bridge policymakers, robotics and AI industry actors, and academics across the two disparate but closely related fields through consultations with start-ups, SMEs, industry, academia, non-profits, and standards organizations.
2025
Responsible AI Measures Project: A collaboration with Shalaleh Rismani and Professor AJung Moon at McGill University, Dr. Renee Shelby and Dr. Negar Rostamzadeh at Google Research, and Mila – Quebec AI Institute.
2025
The Vadasz Scholars Program supports outstanding graduate students accepted into a doctoral degree program at the McGill Faculty of Engineering, enabling future engineering leaders to gain the expertise and skills needed to solve problems that matter. The program provides full funding (a stipend of approximately $128,000) to allow students to pursue their doctoral studies.
2023
Received for the MSc in Social Data Science at the University of Oxford. The award supports graduates of Canadian universities who pursue graduate study in the United States or the United Kingdom in the areas of international relations or industrial relations.
2025
Watch Leah Davis’s three-minute thesis presentation, “(Re)Pointing the Finger: Procedural Accountability in Humanizing Algorithmic System Evaluation,” given during an introductory meeting in Saint-Paulin, Quebec.
2025
Leah Davis’s research into how artificial intelligence systems interact with their social environments sits squarely at the intersection of engineering and the social sciences, blending technical expertise with ethical inquiry. That interdisciplinary focus is at the heart of her work, and it helps explain why the McGill PhD student in Electrical and Computer Engineering has won a 2025 Pierre Elliott Trudeau Foundation Scholarship, an award typically reserved for scholars in the social sciences and humanities. Davis, who began her doctoral work in January 2025, said she sees her work as bridging both worlds.