The Role of the Statistician in Ethical Artificial Intelligence
Conference
64th ISI World Statistics Congress
Format: CPS Abstract
Session: CPS 56 - Statistical estimation VI
Tuesday 18 July 4 p.m. - 5:25 p.m. (Canada/Eastern)
Abstract
The prevalence of AI systems demands that statisticians adapt foundational theory so that it remains applicable to models of increasing complexity, changing the ways in which researchers choose to characterise and quantify uncertainty. A particular example is the shift within the Bayesian paradigm towards concepts such as generalised Bayesian inference, which (arguably) fit better within the context of modern technology.
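For orientation, a common formulation of generalised Bayesian inference replaces the likelihood in Bayes' theorem with a loss function, yielding a so-called Gibbs posterior. The display below is a standard statement of this idea, included purely as background; the loss \ell and learning rate \eta are generic placeholders rather than quantities specified in this abstract:

\[ \pi_\eta(\theta \mid x) \;\propto\; \pi(\theta)\,\exp\{-\eta\,\ell(\theta, x)\}, \]

which recovers standard Bayesian updating when \ell(\theta, x) = -\log p(x \mid \theta) and \eta = 1.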
However, amid this period of transition, the lines are becoming blurred as to what these different types of uncertainty mean in practice and how they should be incorporated into the decision-making process. When the results of an AI system are presented alongside credible or confidence intervals on key quantities of interest, are these numbers that should genuinely build confidence and elicit action? Or are they merely by-products that cannot help but inherit the opacity of the model from which they were derived?
An example of an area in which such concerns arise is the ongoing effort to define viable prior distributions over Bayesian Neural Networks (BNNs); for recent examples, see Nalisnick, Gordon, and Hernández-Lobato (2021) and Tran, Rossi, Milios, and Filippone (2022). BNNs typically have many thousands of parameters, so work in this area often seeks to overcome computational challenges and enhance empirical performance, rather than addressing the principled question of what tangible meaning these priors can hold for decision-makers.
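To make the interpretability concern concrete, the following minimal Python sketch simulates the prior over functions induced by a weight-space prior on a small network. The single-hidden-layer architecture, the NumPy implementation, and the independent N(0, 1) weight prior are illustrative assumptions of ours, not a method from the cited papers:

import numpy as np

rng = np.random.default_rng(0)

def sample_prior_function(x, hidden=50, weight_scale=1.0):
    # One draw from a single-hidden-layer network whose weights and
    # biases have independent N(0, weight_scale^2) priors.
    w1 = rng.normal(0.0, weight_scale, size=(1, hidden))
    b1 = rng.normal(0.0, weight_scale, size=hidden)
    w2 = rng.normal(0.0, weight_scale, size=(hidden, 1))
    b2 = rng.normal(0.0, weight_scale)
    h = np.tanh(x[:, None] @ w1 + b1)   # hidden activations, shape (len(x), hidden)
    return (h @ w2).ravel() + b2        # network output f(x), shape (len(x),)

x = np.linspace(-3.0, 3.0, 200)
# Each draw is one prior "belief" about the unknown function.
draws = np.stack([sample_prior_function(x) for _ in range(20)])
print(draws.shape)  # (20, 200): 20 prior functions evaluated on a grid

Inspecting draws of this kind is, arguably, the closest a decision-maker can get to seeing what a weight-space prior actually says about observable quantities; the distance between a prior's parameterisation and its observable consequences is precisely the gap between computational convenience and tangible meaning described above.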
Gaps in foundational motivation are problematic when we consider the role that uncertainty quantification plays in the validity of decision-making: the subjective Bayesian school of thought maintains that decisions are only valid if the uncertainty you act under comes from "you or an expert you trust" (Smith, 2010). We argue that this foundational discussion is an issue of ethics as well as of justifiable statistical practice, and we present a framework that aims to target these concerns via the quantification of subjective uncertainty across the inferences provided by a range of AI systems.
This constitutes what we believe to be a defensible M-open approach to subjective Bayesian inference in this setting, providing a clear foundational basis for the use of AI. Furthermore, we emphasise the importance of producing theory that draws modellers further into the ethical AI discourse by making issues such as opacity accessible through actionable mathematical frameworks.
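For completeness, we recall the standard meaning of the M-open setting: the data-generating distribution P_0 is not assumed to lie within any candidate model class \{p_\theta\}. Inference is then commonly framed as targeting the best available approximation, for example

\[ \theta^{*} = \arg\min_{\theta} \, \mathrm{KL}\!\left(P_0 \,\middle\|\, p_\theta\right), \]

so that posterior uncertainty is read as uncertainty about this best approximation rather than about a "true" parameter; this is the sense in which subjective uncertainty can remain meaningful even when no AI system under consideration is believed to be correctly specified.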
Nalisnick, E., Gordon, J., & Hernández-Lobato, J. M. (2021). Predictive Complexity Priors. International Conference on Artificial Intelligence and Statistics (pp. 694-702). PMLR.
Smith, J. Q. (2010). Bayesian Decision Analysis: Principles and Practice. Cambridge: Cambridge University Press.
Tran, B.-H., Rossi, S., Milios, D., & Filippone, M. (2022). All You Need is a Good Functional Prior for Bayesian Deep Learning. Journal of Machine Learning Research, 1-56.