This week, we read a collection of texts that outlined the relationship between algorithms and power. Taina Bucher summed it up well: "Algorithms do not work on their own but need to be understood as part of a much wider network of relations and practices" (p. 20).
Tarleton Gillespie did an excellent job of helping us understand why algorithms are significant in terms of publics, their epistemology, and their ubiquity. Algorithms are not merely commands; they are mathematical procedures that "produce and certify knowledge" (p. 2). In the age of digital misinformation and disinformation, we can see how this can lead to dangerous outcomes. Safiya Noble's work on search algorithms solidifies this idea: stereotypes, biases, and discriminatory norms are certified and reproduced as knowledge. Search engines like Google maintain a divine-like trust from a public that often believes Google, or the internet, can tell no lies, as the computer must know everything. The opaqueness of the black box allows people to hold this sort of blind trust. This reminded me of Guillermo Gómez-Peña's article on The Virtual Barrio, in which he actually did not want to know how it all works. One could draw parallels between this and some followers of religious and spiritual communities, in that it is best not to question the inner workings because they are too complex, and why ruin the mystery or magic of it all?
Gillespie also points out that we often fail to give credit to the complexity of these algorithms. As users, we refer to interfaces as algorithms, but they go far beyond that. We see Google searches, Facebook feeds, and TikTok feeds as "the algorithm," but each is really multiple algorithms at work, with constant changes that are not always noticeable visually or through the UX.
Given my fascination with surveillance technologies, I thoroughly enjoyed Juan de Lara's "Race, Algorithms, and the Work of Border Enforcement." The piece was an excellent reminder that the digitization and datafication of traditional colonial infrastructure is a frightening and violent process. Like scholars such as Lisa Nakamura, de Lara contextualizes technological tools within labor, race, and coloniality. In a refreshing way, he draws on Richard Edwards's argument that "technology can only be fully understood by taking into account the role it plays in specific societal processes" (p. 152).
De Lara also does a great job of reminding us that while technology has propelled discourses and critical analyses regarding surveillance, it is integral to understand that the logics of anti-Blackness, anti-Indigeneity, xenophobia, and racism are what truly maintain and uphold surveillance through technological and digital means.
I think your point about how people are encouraged to have a sort of "blind trust" in algorithms is interesting. As we discussed in class yesterday, I do think it is fair to say that younger generations are becoming increasingly skeptical of algorithms and what they can do (although, of course, that skepticism is undoubtedly shaped by factors related to geography, privilege, access, etc.). At the same time, skepticism can sometimes only extend so far, which might connect to your point about how "we often fail to give credit to the complexity of these algorithms."

A brief personal anecdote: on the way home from class, I listened to my music on iTunes. As I always do, I shuffled the music beforehand. At a certain point, I started to wonder how this feature determines how to shuffle the different songs I own. For instance, is there some sort of rule that dictates how many songs must play between each placement of Tiffany's "I Think We're Alone Now"? I'm sure I could search for an answer to this question on Google, but that, of course, would require me to use another algorithm. My larger point is really that this is the first time it had ever consciously occurred to me that an algorithm, one I know next to nothing about, is at work whenever I listen to shuffled music.

Per the documentary we watched in class, algorithms can be used for good just as they can be used for evil. But, especially for people like me who are distinctly non-tech-savvy, where and how do we really draw that line? At a purely selfish level, I appreciate that the shuffle function adds some variety to the way I listen to my relatively small selection of music. But, given that I had never before actively considered that the shuffle function involves an algorithm, I certainly wouldn't know whether that algorithm's structure embeds colonialist, white supremacist, and patriarchal logics. With that in mind, is a universal wariness of algorithms necessary?
Or are there ways we can be effectively discerning so as to best direct our attention to those algorithms that are most damaging?