Gosavi, Purva Prasad, Kulkarni, Vaishnavi Murlidhar and Smeaton, Alan F. ORCID: 0000-0003-1028-8389 (2025) Capturing Bias Diversity in LLMs. The 2nd International Conference on Foundation and Large Language Models (FLLM2024).
This paper presents research on enhancing Large Language Models (LLMs) by adding diversity to their generated outputs. Our study introduces a configuration of multiple LLMs which demonstrates the diversity that can be achieved with a single LLM. By developing multiple customised instances of a GPT model, each reflecting biases in specific demographic characteristics including gender, age, and race, we propose, develop and evaluate a framework for a more nuanced and representative AI dialogue which we call BiasGPT. The customised GPT models will ultimately collaborate, merging their diverse perspectives on a topic into an integrated response that captures a broad spectrum of human experiences and viewpoints. In this paper, through experiments, we demonstrate the capability of a GPT model to embed different biases which, when combined, can open up the possibility of more inclusive AI technologies.
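The abstract describes BiasGPT as a set of customised GPT instances whose differing perspectives are merged into one integrated response. As an illustration only, the following is a minimal sketch of that idea using the OpenAI Python client; the persona prompts, model name, and merging prompt are assumptions for the sake of the example and are not taken from the paper.

```python
# Illustrative sketch of a multi-persona setup in the spirit of BiasGPT.
# The persona prompts, model name and merging strategy below are assumptions,
# not the authors' actual configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # placeholder model name

# Each customised instance is approximated here by a persona system prompt.
PERSONAS = {
    "young_female": "Answer from the perspective of a young woman.",
    "older_male": "Answer from the perspective of an older man.",
}


def persona_response(persona_prompt: str, question: str) -> str:
    """Query one persona-conditioned instance of the model."""
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": persona_prompt},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content


def merged_response(question: str) -> str:
    """Collect the persona answers and ask the model to integrate them."""
    answers = {name: persona_response(p, question) for name, p in PERSONAS.items()}
    merge_prompt = "Combine the following perspectives into one balanced answer:\n\n"
    merge_prompt += "\n\n".join(f"[{name}] {text}" for name, text in answers.items())
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": merge_prompt}],
    )
    return reply.choices[0].message.content


if __name__ == "__main__":
    print(merged_response("What barriers do people face when changing careers?"))
```

The design choice here, conditioning a single underlying model with different persona prompts and then merging the outputs, mirrors the abstract's description of customised instances collaborating on an integrated response, though the paper itself may implement the customisation and merging differently.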
| Item Type: | Article (Published) |
|---|---|
| Refereed: | Yes |
| Uncontrolled Keywords: | Large Language Models, bias, gender, race, age, diversity |
| Subjects: | Humanities > Language; Humanities > Culture; Social Sciences > Social psychology |
| DCU Faculties and Centres: | DCU Faculties and Schools > Faculty of Engineering and Computing; DCU Faculties and Schools > Faculty of Engineering and Computing > School of Computing |
| Publisher: | arXiv |
| Official URL: | https://arxiv.org/abs/2410.12839 |
| Copyright Information: | Authors |
| ID Code: | 30827 |
| Deposited On: | 24 Mar 2025 15:02 by Gordon Kennedy. Last Modified 24 Mar 2025 15:02 |
Full text available as: PDF (Creative Commons: Attribution 4.0, 1MB)