Pressing Questions

How AI Eliminates Epistemic Friction and Dampens Diversity

Authors

  • Nicole Ramsoomair, Dalhousie University

Keywords:

algorithmic bias, artificial intelligence, echo chambers, epistemic friction, epistemic injustice, standpoint theory

Abstract

This article examines how large language models (LLMs) promote the homogenization of style and content and contribute to the epistemic marginalization of under-represented groups. Drawing on standpoint theory, it explains how the biased datasets underlying LLMs perpetuate testimonial and hermeneutical injustice and constrain diverse perspectives. The central argument is that LLMs dampen what José Medina calls "epistemic friction," which is essential for challenging dominant worldviews and detecting gaps in prevailing perspectives, as Miranda Fricker explains (Medina 2013, 25). This reduction fosters echo chambers, diminishes critical engagement, and reinforces communicative complacency. AI smooths over communicative disagreements, thereby reducing opportunities for clarification and the creation of knowledge. The article highlights the need for greater critical literacy and human mediation in AI-mediated communication in order to preserve a diversity of voices. By advocating critical engagement with AI outputs, this analysis aims to counter potential biases and injustices and to ensure a more inclusive technological environment. It underscores the importance of maintaining distinct voices amid rapidly evolving technology and calls for renewed efforts to preserve the epistemic richness that diverse perspectives bring to society.

Author Biography

  • Nicole Ramsoomair, Dalhousie University

    Nicole Ramsoomair is an Assistant Professor of Philosophy at Dalhousie University, a position she has held since 2021. She specializes in social and political philosophy, feminist philosophy, and applied ethics. She earned her PhD in Philosophy from McGill University in 2019, with a dissertation exploring the conditions of responsibility in cases of radical personality change. Her current research focuses on social responsibility, freedom of speech, and children's rights.

References

Alcoff, Linda Martín. 2007. “Epistemologies of Ignorance: Three Types.” In Race and Epistemologies of Ignorance, edited by Shannon Sullivan and Nancy Tuana, 39–57. Albany: State University of New York Press.

Bayruns García, Eric. 2022. “How Racial Injustice Undermines News Sources and News-Based Inferences.” Episteme 19 (3): 409–30. doi.org/10.1017/epi.2020.35.

Bolukbasi, Tolga, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. “Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings.” In Proceedings of the 30th International Conference on Neural Information Processing Systems, 4349–57.

Collins, Patricia Hill. 1986. “Learning from the Outsider Within: The Sociological Significance of Black Feminist Thought.” Social Problems 33 (6): S14–S32. www.jstor.org/stable/800672.

Crawford, Kate. 2017. “The Trouble with Bias.” Paper presented at the Neural Information Processing Systems (NIPS) Conference, Long Beach, CA, December 4–9.

Deshpande, Ameet, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, and Karthik Narasimhan. 2023. “Toxicity in ChatGPT: Analyzing Persona-Assigned Language Models.” Manuscript in preparation.

Doctorow, Cory. 2023. “The ‘Enshittification’ of TikTok.” Wired, January 23. www.wired.com/story/tiktok-platforms-cory-doctorow/.

Fitzhugh-Craig, Martha. 2023. “Grief Tech Takes End-of-Life Planning to Another Level.” Information Today 40 (6): 35–36.

Fountain, Jane E. 2022. Digital Government: Advancing E-Governance through Innovation and Leadership. Cambridge, MA: MIT Press.

Fricker, Miranda. 2007. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press.

Gebru, Timnit. 2020. “Race and Gender.” In The Oxford Handbook of Ethics of AI, edited by Markus D. Dubber, Frank Pasquale, and Sunit Das, chap. 16. Oxford: Oxford University Press. doi.org/10.1093/oxfordhb/9780190067397.013.16.

Gray, Mary L., and Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Houghton Mifflin Harcourt.

Grice, H. P. 1975. “Logic and Conversation.” In Syntax and Semantics, edited by Peter Cole and Jerry L. Morgan, Vol. 3, 41–58. New York: Academic Press.

Kelly, Mary Louise, host. 2024. “He Has Cancer—So He Made an AI Version of Himself for His Wife After He Dies.” Consider This (podcast), June 12. NPR. www.npr.org/transcripts/1198912621.

Lawrence, Halcyon M. 2021. “Siri Disciplines.” In Your Computer Is on Fire, edited by Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip, 121–35. Cambridge, MA: MIT Press.

Lee, Peter. 2024. “Synthetic Data and the Future of AI.” Cornell Law Review 110 (forthcoming). ssrn.com/abstract=4722162.

Medina, José. 2013. The Epistemology of Resistance: Gender and Racial Oppression, Epistemic Injustice, and Resistant Imaginations. New York: Oxford University Press.

Metz, Cade. 2023. “Chatbots Hallucinate Information Even in Simple Tasks, Study Finds.” New York Times, November 6. www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html.

Meuse, Matthew. 2023. “Bots Like ChatGPT Aren’t Sentient. Why Do We Insist on Making Them Seem Like They Are?” CBC Radio, March 17. www.cbc.ca/radio/spark/bots-like-chatgpt-aren-t-sentient-why-do-we-insist-on-making-them-seem-like-they-are-1.6761709.

Milligan, Ian. 2022. The Transformation of Historical Research in the Digital Age. Cambridge: Cambridge University Press.

Mitchell, Margaret, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. “Model Cards for Model Reporting.” In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19), 220–29. New York: ACM. doi.org/10.1145/3287560.3287596.

Nguyen, C. Thi. 2020. “Echo Chambers and Epistemic Bubbles.” Episteme 17 (2): 141–61.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Pariser, Eli. 2012. The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin Books.

Reed, Ronan. 2024. “Does ChatGPT Violate New York Times’ Copyrights?” Harvard Law School, March 22. hls.harvard.edu/today/does-chatgpt-violate-new-york-times-copyrights/.

Samuel, Sigal. 2023. “What Happens When ChatGPT Starts to Feed on Its Own Writing?” Vox, April 10. www.vox.com/future-perfect/23674696/chatgpt-ai-creativity-originality-homogenization.

Sun, Tony, et al. 2019. “Mitigating Gender Bias in Natural Language Processing: Literature Review.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 5459–68.

Tanksley, Tinisha. 2024. “Critical Race Algorithmic Literacies: A Framework for Black Liberation.” Journal of Media Literacy Education 16 (1): 32–48.

Wylie, Alison. 2012. “Feminist Philosophy of Science: Standpoint Matters.” Proceedings and Addresses of the American Philosophical Association 86 (2): 47–76. doi.org/10.2307/20620467.


Published

2025-03-19