
DORAS | DCU Research Repository


Multimodal Bias: Assessing Gender Bias in Computer Vision Models with NLP Techniques

Mandal, Abhishek (ORCID: 0000-0002-5275-4192), Little, Suzanne (ORCID: 0000-0003-3281-3471) and Leavy, Susan (ORCID: 0000-0002-3679-2279) (2023) Multimodal Bias: Assessing Gender Bias in Computer Vision Models with NLP Techniques. In: 25th International Conference on Multimodal Interaction (ICMI ’23), October 9–13, 2023, Paris. ISBN 979-8-4007-0055-2/23/10

Large multimodal deep learning models such as Contrastive Language-Image Pre-training (CLIP) have become increasingly powerful in recent years, with applications across several domains. CLIP operates on visual and language modalities and forms a part of several popular models, such as DALL-E and Stable Diffusion. It is trained on a large dataset of millions of image-text pairs crawled from the internet. Such large datasets are often used for training without filtering, so models inherit social biases present in internet data. Given that models such as CLIP are applied in a wide variety of settings ranging from social media to education, it is vital that harmful biases are detected. However, due to the unbounded nature of the possible inputs and outputs, traditional metrics such as accuracy cannot capture the range and complexity of biases present in the model. In this paper, we present an audit of CLIP using an established technique from natural language processing, the Word Embedding Association Test (WEAT), to detect and quantify gender bias in CLIP, and demonstrate that it can provide a quantifiable measure of such stereotypical associations. We detected, measured, and visualised various types of stereotypical gender associations with respect to character descriptions and occupations, and found that CLIP shows evidence of stereotypical gender bias.
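The WEAT statistic the abstract refers to can be sketched briefly. Given two target sets X and Y (e.g. embeddings of male and female terms) and two attribute sets A and B (e.g. embeddings of occupation terms), WEAT scores each target by the difference of its mean cosine similarity to A versus B, and summarises the gap between X and Y as a Cohen's-d style effect size. The sketch below uses small toy vectors in place of real CLIP embeddings; in the paper's setting the vectors would come from CLIP's text and image encoders. All names here (`assoc`, `weat_effect_size`) are illustrative, not from the paper's code.

```python
import numpy as np

def cos(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w, A, B):
    # s(w, A, B): mean similarity to attribute set A minus mean similarity to B.
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Effect size: difference of mean association scores between target sets
    # X and Y, normalised by the standard deviation over all targets.
    s = [assoc(w, A, B) for w in X + Y]
    return (np.mean(s[:len(X)]) - np.mean(s[len(X):])) / np.std(s, ddof=1)

# Toy example: X-targets align with attribute set A, Y-targets with B,
# so the effect size should be large and positive (stereotypical association).
X = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]
Y = [np.array([0.0, 1.0]), np.array([0.1, 0.9])]
A = [np.array([1.0, 0.05])]
B = [np.array([0.05, 1.0])]
d = weat_effect_size(X, Y, A, B)
```

An effect size near zero would indicate no measurable association between the target and attribute sets; values far from zero (conventionally up to about ±2) indicate strong stereotypical association.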
Item Type: Conference or Workshop Item (Paper)
Event Type: Conference
Uncontrolled Keywords: bias; fairness; multimodal models; trustworthiness
Subjects: Computer Science > Artificial intelligence
Computer Science > Machine learning
Social Sciences > Gender
DCU Faculties and Centres: UNSPECIFIED
Published in: Proceedings of the 25th International Conference on Multimodal Interaction (ICMI ’23). Association for Computing Machinery (ACM). ISBN 979-8-4007-0055-2/23/10
Publisher: Association for Computing Machinery (ACM)
Official URL: https://doi.org/10.1145/3577190.3614156
Copyright Information: © 2023 The Authors.
Funders: <A+> Alliance / Women at the Table, Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289_2, European Regional Development Fund
ID Code: 29470
Deposited On: 19 Jan 2024 11:48 by Abhishek Mandal. Last Modified: 19 Jan 2024 11:48

Full text available as: PDF (Mandal_ICMI_2023_camera_ready.pdf)

