
DORAS | DCU Research Repository

Explore open access research and scholarly works from DCU


Scenario-based requirements elicitation for user-centric explainable AI

Cirqueira, Douglas (ORCID: 0000-0002-1283-0453), Nedbal, Dietmar (ORCID: 0000-0002-7596-0917), Helfert, Markus (ORCID: 0000-0001-6546-6408) and Bezbradica, Marija (ORCID: 0000-0001-9366-5113) (2020) Scenario-based requirements elicitation for user-centric explainable AI. In: International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE 2020): Machine Learning and Knowledge Extraction, 25–28 Aug 2020, Dublin, Ireland (Online).

Abstract
Explainable Artificial Intelligence (XAI) develops technical explanation methods that enable interpretability for human stakeholders, showing why Artificial Intelligence (AI) and machine learning (ML) models provide certain predictions. However, those stakeholders' trust in AI models and explanations remains an issue, especially for domain experts, who are knowledgeable about their domain but not about the inner workings of AI. Social and user-centric XAI research holds that it is essential to understand stakeholders' requirements in order to provide explanations tailored to their needs and to enhance their trust in working with AI models. Scenario-based design and requirements elicitation can help bridge the gap between the social and operational aspects of a stakeholder early, before the adoption of information systems, identifying their real problems and practices and generating user requirements. Nevertheless, the adoption of scenarios in XAI remains rarely explored, especially in the domain of fraud detection, for supporting experts who are about to work with AI models. We demonstrate the use of scenario-based requirements elicitation for XAI in a fraud detection context and develop scenarios derived with experts in banking fraud. We discuss how those scenarios can be adopted to identify user or expert requirements for appropriate explanations in their daily operations and in making decisions on reviewing fraudulent cases in banking. The generalizability of the scenarios for further adoption is validated through a systematic literature review in the domains of XAI and visual analytics for fraud detection.
Metadata
Item Type: Conference or Workshop Item (Paper)
Event Type: Conference
Refereed: Yes
Uncontrolled Keywords: Explainable Artificial Intelligence; Requirements Elicitation; Domain Expert; Fraud Detection
Subjects: Computer Science > Artificial intelligence
Computer Science > Machine learning
Computer Science > Visualization
DCU Faculties and Centres: DCU Faculties and Schools > Faculty of Engineering and Computing > School of Computing
Research Institutes and Centres > Lero: The Irish Software Engineering Research Centre
Published in: Machine Learning and Knowledge Extraction. Lecture Notes in Computer Science 12279. Springer.
Publisher: Springer
Official URL: http://dx.doi.org/10.1007/978-3-030-57321-8_18
Copyright Information: © 2020 IFIP. Open Access
Funders: This research was supported by the European Union Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement No. 765395, and by Science Foundation Ireland grant 13/RC/2094.
ID Code: 24978
Deposited On: 04 Sep 2020 14:57 by Douglas Da Rocha Cirqueira. Last Modified: 28 Mar 2022 12:22
Documents

Full text available as:

PDF (563kB): Post_Print_Scenario-based Requirements Elicitation for User-Centric Explainable AI.pdf
