Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

Stakeholders in explainable AI

Preece, Alun, Harborne, Daniel, Braines, David, Tomsett, Richard and Chakraborty, 2018. Stakeholders in explainable AI. Presented at: AAAI FSS-18: Artificial Intelligence in Government and Public Sector Proceedings, Arlington, VA, USA, 18-20 October 2018.

PDF - Presentation (3MB)

Abstract

There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable. However, there is no general consensus over what is meant by 'explainable' and 'interpretable'. In this paper, we argue that this lack of consensus is due to there being several distinct stakeholder communities. We note that, while the concerns of the individual communities are broadly compatible, they are not identical, which gives rise to different intents and requirements for explainability/interpretability. We use the software engineering distinction between validation and verification, and the epistemological distinctions between knowns/unknowns, to tease apart the concerns of the stakeholder communities and highlight the areas where their foci overlap or diverge. It is not the purpose of the authors of this paper to 'take sides' (we count ourselves as members, to varying degrees, of multiple communities) but rather to help disambiguate what stakeholders mean when they ask 'Why?' of an AI.

Item Type: Conference or Workshop Item (Paper)
Status: Unpublished
Schools: Computer Science & Informatics
Crime and Security Research Institute (CSURI)
Date of First Compliant Deposit: 19 October 2018
Last Modified: 30 May 2019 21:40
URI: http://orca.cf.ac.uk/id/eprint/116031

