Monday, October 3, 2022

Core Post #4 (Sebastian)

    A few weeks ago, in a comment on Dalia’s post about the “New Media Theory 1.0” readings, I raised the possibility of relationality with AI. Now that the “Data/AI” week is actually upon us, I would like to return to this idea and explore it further. My understanding of “relationality” is primarily shaped by its extensive usage in Indigenous studies. Aileen Moreton-Robinson (Goenpul) summarizes, 

Relationality is grounded in a holistic conception of the inter-connectedness and inter-substantiation between and among all living things and the earth, which is inhabited by a world of ancestors and creator beings. It informs our epistemological and ethical premise that social research should begin with an awareness of our proper relationships with the world we inhabit, and is conducted with respect, responsibility, generosity, obligation, and reciprocity. (71)


For instance, a framework of relationality might encourage us to think of land, water, oil, etc. not as “resources” – a term which can imply the inevitability of extraction and exploitation – but as “relations” which are owed the “respect, responsibility, generosity, obligation, and reciprocity” described by Moreton-Robinson. Similarly, if we apply relationality to AI, perhaps it would require us to think about AI as more than just a tool. Perhaps it would even require us to move beyond the age-old philosophical debates about whether or not AI truly constitutes a distinct sentience, instead just presupposing that it does and that it should be treated accordingly.  


    Almost all of the readings for this week express understandable concerns about the nature and uses of data/AI. In particular, several of the authors discuss how data and AI are enmeshed in histories of colonialism and ongoing extractive colonial practices. In “Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject,” Nick Couldry and Ulises A. Mejias write, “[D]ata relations enact a new form of data colonialism, normalizing the exploitation of human beings through data, just as historic colonialism appropriated territory and resources and ruled subjects for profit” (336). As such, it seems fitting to turn to Indigenous studies in search of a potential counter-response through which one might conceptualize better forms of data and AI. For instance, in “Anatomy of an AI System,” Kate Crawford and Vladan Joler observe how the function and design of Amazon Echo’s Alexa are meant to obscure the fact that it is constantly learning through every interaction with a user. They write, “[E]ven the shiny design options maintain a kind of blankness: nothing will alert the owner to the vast network that subtends and drives its interactive capacities” (Crawford and Joler). Leaving aside Crawford and Joler’s potentially problematic use of the word “owner,” it might be interesting to imagine what a more reciprocal relationship with an AI would look like. To quote Lindsey Catherine Cornum (Diné), “In Indigenous SF, authors, artists, and creators of all kinds have the ability to imagine encounters with the Other not in terms of aggression, competition, conquest, and violence but collaboration, exchange, mutual respect, and co-specificity.” What would it look like for this approach to relationality to move beyond the realm of creative work and into the “real world”? What would it mean for an AI to learn more openly and honestly from us, acknowledging the “vast network” that it mobilizes every time it interacts with us?
Likewise, what would it mean for us to learn from an AI, implementing a bidirectional or multidirectional rather than unidirectional mode of engagement? What would it mean for an AI and its users (or, perhaps, interlocutors) to share knowledge in truly generative ways?


    To be clear, I do not wish to imply that the onus is solely – or even primarily – on the users to enact relationality. As described in Crawford and Joler’s essay, it would seem that Alexa is a priori unsuitable for this sort of engagement; it is a system too heavily predicated on extraction, exploitation, and violence. Calling for relationality with AI would undoubtedly entail constructing a better version of AI from the ground up. Indeed, one would quite literally have to start from scratch, given that the materials currently used to construct AI are procured through colonialist mining practices. Crawford and Joler directly connect these practices to Indigenous epistemologies when they discuss lithium extraction in Chile and Argentina. According to Aymara stories, the great salt lake where lithium can be found is actually the tears and breast milk of a volcano whose child was stolen long ago. Crawford and Joler quote Liam Young and Kate Davies, who note, “[Y]our smart-phone runs on the tears and breast milk of a volcano.” In short, to build an AI according to the principles of relationality, one would need to think critically not just about how the AI interacts with the world but also about which materials are used to create the AI in the first place.


    Although I remain cautiously optimistic that good relationality with AI is theoretically possible, I also wonder about some of the implications of the Crawford and Joler reading, as well as the Wendy Hui Kyong Chun reading and the John Cheney-Lipold reading. In the first of these readings, Crawford and Joler discuss the Mechanical Turk, an eighteenth-century invention in which an automaton appeared to play an excellent game of chess. In reality, a human hid inside the invention, operating the automaton. Many data and AI systems work in a remarkably similar fashion, relying on human labor that is inevitably erased. Crawford and Joler write, “At every level, contemporary technology is deeply rooted in and running on the exploitation of human bodies.” In “Correlating Eugenics,” Chun discusses the data firm Cambridge Analytica, which analyzed “big data” to help Donald Trump and Ted Cruz during the 2016 presidential election. She explains that methods of analyzing data often have their roots in eugenics: “Along with linear regression and other foundational statistical methods, correlation was developed by early twentieth-century biometric eugenicists, who were eager to breed a better ‘human crop’” (Chun 36). Similarly, in “Categorization,” Cheney-Lipold explores how the data itself inherits profoundly human biases and prejudices. He notes, for instance, that data’s definition of “terrorist” often “looks and sounds very similar to whom the U.S. government has historically declared to be a terrorist. Both are most likely located in the Middle East and its neighbors” (41).


    Crawford, Joler, Chun, and Cheney-Lipold have very different interests, concerns, and methodologies, but they collectively remind us of the extent to which human biases and human labor remain enmeshed in data/AI. If one moves beyond the “shiny design” when interacting with an AI, for instance, is it entirely unreasonable to contend that – to some extent – we are actually still interacting with other humans? And, if so, is it possible to really imagine good relationality with AI without first ensuring good relationality with other humans? 


Additional Citations:

Cornum, Lindsey Catherine. “The Creation Story is a Spaceship: Indigenous Futurism and Decolonial Deep Space.” Voz-à-voz, http://www.vozavoz.ca/feature/lindsay-catherine-cornum. Accessed 29 Sept. 2022.

Moreton-Robinson, Aileen. “Relationality: A Key Presupposition of an Indigenous Social Research Paradigm.” Sources and Methods in Indigenous Studies, edited by Chris Andersen and Jean M. O’Brien, Routledge, 2017, pp. 69-77.
