DORAS | DCU Research Repository

To be high-risk, or not to be - semantic specifications and implications of the AI act’s high-risk AI applications and harmonised standards

Golpayegani, Delaram (ORCID: 0000-0002-1208-186X), Pandit, Harshvardhan J. (ORCID: 0000-0002-5068-3714) and Lewis, Dave (ORCID: 0000-0002-3503-4644) (2023) To be high-risk, or not to be - semantic specifications and implications of the AI act’s high-risk AI applications and harmonised standards. In: ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23), 12-15 June 2023, Chicago, USA. ISBN 979-8-4007-0192-4

Abstract
The EU’s proposed AI Act sets out a risk-based regulatory framework to govern the potential harms emanating from the use of AI systems. Within the AI Act’s hierarchy of risks, AI systems likely to pose a “high risk” to health, safety, and fundamental rights are subject to the majority of the Act’s provisions. To include uses of AI where fundamental rights are at stake, Annex III of the Act provides a list of applications describing the conditions under which a use of AI is high-risk. For high-risk AI systems, the AI Act places obligations on providers and users regarding the use of AI systems and the keeping of appropriate documentation, supported by harmonised standards. In this paper, we analyse the clauses defining the criteria for high-risk AI in Annex III to simplify the identification of potential high-risk uses of AI by making explicit the “core concepts” whose combination makes a use of AI high-risk. We use these core concepts to develop an open vocabulary for AI risks (VAIR) to represent and assist with AI risk assessments in a form that supports automation and integration. VAIR is intended to assist with the identification and documentation of risks by providing a common vocabulary that facilitates knowledge sharing and interoperability between actors in the AI value chain. Given that the AI Act relies on harmonised standards for much of its compliance and enforcement regarding high-risk AI systems, we explore the implications of current international standardisation activities undertaken by ISO, and emphasise the necessity of better risk and impact knowledge bases, such as VAIR, that can be integrated with audits and investigations to simplify the AI Act’s application.
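
To illustrate the kind of machine-readable representation the abstract describes, below is a minimal sketch of how an Annex III-style combination of "core concepts" (domain, purpose, affected subjects) could be expressed as RDF triples in Python with rdflib. The class and property names (e.g. vair:hasDomain, vair:affects) and the namespace IRI are illustrative assumptions made here, not VAIR's actual published terms, which are defined in the paper.

    # Hedged sketch: encode a hypothetical high-risk AI use as RDF triples.
    # Terms under the "vair:" prefix are assumptions for illustration only,
    # not the vocabulary's actual published terms.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    VAIR = Namespace("https://w3id.org/vair#")          # assumed namespace IRI
    EX = Namespace("https://example.org/ai-systems#")   # example namespace for systems

    g = Graph()
    g.bind("vair", VAIR)
    g.bind("ex", EX)

    # Combine the "core concepts" that together make this use high-risk:
    # an AI system used in the employment domain, for recruitment selection,
    # affecting job applicants (cf. Annex III's employment application area).
    g.add((EX.CVScreeningTool, RDF.type, VAIR.AISystem))
    g.add((EX.CVScreeningTool, VAIR.hasDomain, VAIR.Employment))
    g.add((EX.CVScreeningTool, VAIR.hasPurpose, VAIR.RecruitmentSelection))
    g.add((EX.CVScreeningTool, VAIR.affects, VAIR.JobApplicants))

    print(g.serialize(format="turtle"))  # Turtle output for sharing and integration

Because such a description is plain RDF, it can be queried with SPARQL or merged with other graphs, which is the kind of automation and interoperability between actors in the AI value chain that the vocabulary aims to support.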
Metadata
Item Type: Conference or Workshop Item (Paper)
Event Type: Conference
Refereed: Yes
Uncontrolled Keywords: Knowledge representation and reasoning; Information systems; Resource Description Framework (RDF); Governmental regulations; AI Act; high-risk AI; harmonised standards; taxonomy; semantic web
Subjects: Computer Science > Artificial intelligence
Computer Science > Information technology
DCU Faculties and Centres: DCU Faculties and Schools > Faculty of Engineering and Computing > School of Computing
Research Institutes and Centres > ADAPT
Published in: FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery (ACM). ISBN 979-8-4007-0192-4
Publisher: Association for Computing Machinery (ACM)
Official URL: https://doi.org/10.1145/3593013.3594050
Copyright Information: © 2023 Authors
Funders: European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497 (PROTECT ITN); ADAPT SFI Centre for Digital Media Technology, funded by Science Foundation Ireland through the SFI Research Centres Programme and co-funded under the European Regional Development Fund (ERDF) through Grant #13/RC/2106_P2
ID Code: 28330
Deposited On: 16 May 2023 09:16 by Harshvardhan Pandit. Last Modified: 14 Aug 2023 12:09
Documents

Full text available as:

PDF (facct23_high_risk_ai_act_standards.pdf) - Creative Commons: Attribution 4.0 - 928kB