
DORAS | DCU Research Repository


Auditing imbalance and bias in deep neural networks for multimedia content analytics

Mandal, Abhishek (ORCID: 0000-0002-5275-4192) (2024) Auditing imbalance and bias in deep neural networks for multimedia content analytics. PhD thesis, Dublin City University.

Abstract: This thesis introduces novel metrics and techniques to detect, measure, and mitigate gender and geographical bias in computer vision deep neural networks. It adopts an interdisciplinary approach, incorporating deep learning, feminist and decolonial theory, and ethics to address this issue. Artificial intelligence can amplify societal biases, marginalise vulnerable groups, and undermine public trust in AI, hampering its broader adoption. Bias in deep neural networks is complex: it originates in training data sourced from the internet, propagates through the machine learning pipeline, and affects real-world applications. Factors such as model training methodologies, performance metrics, and model deployment further complicate the issue. The proposed metrics quantify a complex human concept, social bias in deep learning models, and provide insight into the internal bias dynamics of these 'black-box' systems.

Part one examines geographical and gender bias and their intersection, focusing on how bias originates in training data and is reflected in trained models. Novel metrics were introduced for measuring geographical bias in dataset creation methods and intersectional geographical and gender bias in multimodal models; they revealed bias in both cases.

Part two investigates how bias is managed internally in large visual-linguistic models such as CLIP, DALL-E and Stable Diffusion. Traditional bias measures, which focus on accuracy, were found inadequate for capturing the extent of bias in multimodal vision models. Techniques from natural language processing were therefore adapted to create metrics that capture bias across multiple modalities, including non-binary gender. These metrics revealed stereotypical gender bias in the models examined, showed that model architecture plays an important role in bias amplification, and provided insight into how bias is handled inside the models.

Part three uses data augmentation to debias vision models. Overall, this thesis develops and applies interdisciplinary metrics to detect, measure, and mitigate gender and geographical bias in vision models.
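This record carries only the abstract, not the thesis's metrics. As a rough illustration of the general approach Part two describes, adapting NLP association tests such as WEAT to a joint text-image embedding space, the following Python sketch computes a WEAT-style effect size over CLIP embeddings. The checkpoint and all probe prompts are placeholder choices for illustration, not the probe sets used in the thesis:

# A minimal WEAT-style association test in CLIP's embedding space.
# Assumes: pip install torch transformers. All prompt sets below are
# illustrative placeholders, not the thesis's probe sets.
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(prompts):
    # Encode prompts into CLIP's joint embedding space, L2-normalised so
    # the dot products below are cosine similarities.
    inputs = processor(text=prompts, return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return F.normalize(feats, dim=-1)

def weat_effect_size(tx, ty, aa, ab):
    # Effect size d: how differently target sets X and Y associate with
    # attribute sets A and B, in units of the pooled standard deviation.
    assoc = lambda w: (w @ aa.T).mean(-1) - (w @ ab.T).mean(-1)
    sx, sy = assoc(tx), assoc(ty)
    return ((sx.mean() - sy.mean()) / torch.cat([sx, sy]).std()).item()

# Placeholder probes: occupation targets vs. gendered attribute phrases.
x = embed(["a photo of an engineer", "a photo of a mechanic"])
y = embed(["a photo of a nurse", "a photo of a librarian"])
a = embed(["a photo of a man", "a photo of a father"])
b = embed(["a photo of a woman", "a photo of a mother"])
print(f"WEAT-style effect size: {weat_effect_size(x, y, a, b):.3f}")

Values near zero indicate no measured association gap; running the same score with image embeddings (via model.get_image_features) on one side of the comparison is what makes the test multimodal.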
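Part three's augmentation-based debiasing is likewise only summarised in the abstract. A simple data-level mitigation in the same spirit, sketched below on assumed toy data, is to oversample the under-represented group with inverse-frequency weights and randomly augment the re-drawn images so they are not exact duplicates. The group labels, split, and transforms are all hypothetical:

# Group rebalancing with augmentation: a toy sketch, not the thesis's pipeline.
# Assumes: pip install torch torchvision.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler
from torchvision import transforms

# Stand-in data: 100 random "images" with an imbalanced protected-group
# label (80 samples of group 0, 20 of group 1).
images = torch.rand(100, 3, 224, 224)
groups = torch.cat([torch.zeros(80), torch.ones(20)]).long()
dataset = TensorDataset(images, groups)

# Inverse-frequency sample weights: each group is drawn equally often in
# expectation, so minority-group images are oversampled.
counts = torch.bincount(groups).float()
weights = (1.0 / counts)[groups]
sampler = WeightedRandomSampler(weights, num_samples=len(groups), replacement=True)

# Random augmentations give oversampled images visual variety instead of
# exact repeats in every epoch.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

loader = DataLoader(dataset, batch_size=20, sampler=sampler)
imgs, grp = next(iter(loader))
imgs = augment(imgs)  # torchvision transforms accept batched tensors here
print(f"minority share in batch: {grp.float().mean().item():.2f}")  # ~0.50, not 0.20

Rebalanced sampling is only one of several data-level strategies; the point of the sketch is the mechanism, not a claim about which augmentations the thesis applies.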
Item Type: Thesis (PhD)
Date of Award: November 2024
Refereed: No
Supervisor(s): Little, Suzanne and Leavy, Susan
Subjects: Computer Science > Artificial intelligence
          Computer Science > Machine learning
          Computer Science > Multimedia systems
          Social Sciences > Gender
DCU Faculties and Centres: DCU Faculties and Schools > Faculty of Engineering and Computing
                           DCU Faculties and Schools > Faculty of Engineering and Computing > School of Computing
Use License: This item is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 License.
Funders: Science Foundation Ireland
ID Code: 30556
Deposited On: 10 Mar 2025 11:37 by Suzanne Little. Last Modified: 10 Mar 2025 11:37

Full text available as:

PDF (PhD_e-Thesis___AbhishekMandal_20214767.pdf) - Creative Commons: Attribution-NonCommercial-No Derivative Works 4.0, 26MB
