Pressing Matters
How AI Irons Out Epistemic Friction and Smooths Over Diversity
Keywords: algorithmic violence, artificial intelligence, echo chambers, epistemic friction, epistemic injustice, standpoint theory
Abstract
This paper explores how Large Language Models (LLMs) foster the homogenization of both style and content, and how this homogenization contributes to the epistemic marginalization of underrepresented groups. Drawing on standpoint theory, the paper examines how biased datasets in LLMs perpetuate testimonial and hermeneutical injustices and restrict diverse perspectives. The core argument is that LLMs diminish what José Medina calls “epistemic friction,” which is essential for challenging prevailing worldviews and identifying gaps within standard perspectives, as further articulated by Miranda Fricker (Medina 2013, 25). This reduction fosters echo chambers, diminishes critical engagement, and encourages communicative complacency: AI smooths over communicative disagreements, thereby reducing opportunities for clarification and knowledge generation. The paper emphasizes the need for enhanced critical literacy and human mediation in AI communication to preserve diverse voices. By advocating for critical engagement with AI outputs, this analysis aims to address potential biases and injustices and to ensure a more inclusive technological landscape. It underscores the importance of maintaining distinct voices amid rapid technological advancements and calls for greater efforts to preserve the epistemic richness that diverse perspectives bring to society.
References
Alcoff, Linda Martín. 2007. “Epistemologies of Ignorance: Three Types.” In Race and Epistemologies of Ignorance, edited by Shannon Sullivan and Nancy Tuana, 39–57. Albany: State University of New York Press.
Bayruns García, Eric. 2022. “How Racial Injustice Undermines News Sources and News-Based Inferences.” Episteme 19 (3): 409–30. doi.org/10.1017/epi.2020.35.
Bolukbasi, Tolga, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. “Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings.” In Proceedings of the 30th International Conference on Neural Information Processing Systems, 4349–57.
Collins, Patricia Hill. 1986. “Learning from the Outsider Within: The Sociological Significance of Black Feminist Thought.” Social Problems 33 (6): S14–S32. www.jstor.org/stable/800672.
Crawford, Kate. 2017. “The Trouble with Bias.” Paper presented at the Neural Information Processing Systems (NIPS) Conference, Long Beach, CA, December 4–9.
Deshpande, Ameet, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, and Karthik Narasimhan. 2023. “Toxicity in ChatGPT: Analyzing Persona-Assigned Language Models.” Preprint, arXiv:2304.05335.
Doctorow, Cory. 2023. “The ‘Enshittification’ of TikTok.” Wired, January 23. www.wired.com/story/tiktok-platforms-cory-doctorow/.
Fitzhugh-Craig, Martha. 2023. “Grief Tech Takes End-of-Life Planning to Another Level.” Information Today 40 (6): 35–36.
Fountain, Jane E. 2022. Digital Government: Advancing E-Governance through Innovation and Leadership. Cambridge, MA: MIT Press.
Fricker, Miranda. 2007. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press.
Gebru, Timnit. 2020. “Race and Gender.” In The Oxford Handbook of Ethics of AI, edited by Markus D. Dubber, Frank Pasquale, and Sunit Das, chap. 16. Oxford: Oxford University Press. doi.org/10.1093/oxfordhb/9780190067397.013.16.
Gray, Mary L., and Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Houghton Mifflin Harcourt.
Grice, H. P. 1975. “Logic and Conversation.” In Syntax and Semantics, edited by Peter Cole and Jerry L. Morgan, Vol. 3, 41–58. New York: Academic Press.
Kelly, Mary Louise, host. 2024. “He Has Cancer—So He Made an AI Version of Himself for His Wife After He Dies.” Consider This (podcast), June 12. NPR. www.npr.org/transcripts/1198912621.
Lawrence, H. M. 2021. “Siri Disciplines.” In Your Computer Is on Fire, edited by Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip, 121–35. Cambridge, MA: MIT Press.
Lee, Peter. 2024. “Synthetic Data and the Future of AI.” Cornell Law Review 110 (forthcoming). ssrn.com/abstract=4722162.
Medina, José. 2013. The Epistemology of Resistance: Gender and Racial Oppression, Epistemic Injustice, and Resistant Imaginations. New York: Oxford University Press.
Metz, Cade. 2023. “Chatbots May ‘Hallucinate’ More Often Than Many Realize.” New York Times, November 6. www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html.
Meuse, Matthew. 2023. “Bots Like ChatGPT Aren’t Sentient. Why Do We Insist on Making Them Seem Like They Are?” CBC Radio, March 17. www.cbc.ca/radio/spark/bots-like-chatgpt-aren-t-sentient-why-do-we-insist-on-making-them-seem-like-they-are-1.6761709.
Milligan, Ian. 2022. The Transformation of Historical Research in the Digital Age. Cambridge: Cambridge University Press.
Mitchell, Margaret, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. “Model Cards for Model Reporting.” In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19), 220–29. New York: ACM. doi.org/10.1145/3287560.3287596.
Nguyen, C. Thi. 2020. “Echo Chambers and Epistemic Bubbles.” Episteme 17 (2): 141–61.
Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
Pariser, Eli. 2012. The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin Books.
Reed, Ronan. 2024. “Does ChatGPT Violate New York Times’ Copyrights?” Harvard Law School, March 22. hls.harvard.edu/today/does-chatgpt-violate-new-york-times-copyrights/.
Samuel, Sigal. 2023. “What Happens When ChatGPT Starts to Feed on Its Own Writing?” Vox, April 10. www.vox.com/future-perfect/23674696/chatgpt-ai-creativity-originality-homogenization.
Sun, Tony, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. “Mitigating Gender Bias in Natural Language Processing: Literature Review.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 1630–40.
Tanksley, Tiera. 2024. “Critical Race Algorithmic Literacies: A Framework for Black Liberation.” Journal of Media Literacy Education 16 (1): 32–48.
Wylie, Alison. 2012. “Feminist Philosophy of Science: Standpoint Matters.” Proceedings and Addresses of the American Philosophical Association 86 (2): 47–76. doi.org/10.2307/20620467.
License
Copyright (c) 2025 Nicole Ramsoomair. This work is licensed under a Creative Commons Attribution 4.0 International License.