Large multimodal deep learning models such as Contrastive Language-Image Pretraining (CLIP) have become increasingly powerful in recent years, with applications across several domains. CLIP operates on visual and language modalities and forms a part of several popular models, such as DALL-E and Stable Diffusion. It is trained on a large dataset of millions of image-text pairs crawled from the internet. Such large datasets are often used for training without filtering, leading models to inherit social biases from internet data. Given that models such as CLIP are applied in a wide variety of settings ranging from social media to education, it is vital that harmful biases are detected.
However, due to the unbounded nature of the possible inputs and outputs, traditional bias metrics such as accuracy cannot capture the range and complexity of biases present in the model. In this paper, we present an audit of CLIP using the Word Embedding Association Test (WEAT), an established technique from natural language processing, to detect and quantify gender bias, and we demonstrate that it provides a quantifiable measure of such stereotypical associations.
We detected, measured, and visualised various stereotypical gender associations with respect to character descriptions and occupations, and found that CLIP exhibits evidence of stereotypical gender bias.
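For reference, the audit relies on the standard WEAT effect size; the notation below is a generic sketch rather than the specific word lists used in this work. For target embedding sets $X$, $Y$ and attribute embedding sets $A$, $B$, the per-item association and the overall effect size are
\[
s(w, A, B) = \frac{1}{|A|}\sum_{a \in A}\cos(w, a) \;-\; \frac{1}{|B|}\sum_{b \in B}\cos(w, b),
\qquad
d = \frac{\operatorname{mean}_{x \in X} s(x, A, B) \;-\; \operatorname{mean}_{y \in Y} s(y, A, B)}{\operatorname{std}_{w \in X \cup Y} s(w, A, B)}.
\]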