I was really interested in Beller’s conception of the subject as fragmented in an era of extreme and expansive “distributed computation” (66) and how this type of subjectivity can be framed as a byproduct of a “computational unconscious” emerging in the context of increasingly abstracted state and corporate structures (65). In an earlier post, I expressed skepticism about Couldry and Mejias’s use of the term “data colonialism.” I felt that the term “colonialism” was expanded to accommodate a false equivalence between user-platform power relations and the power relations between colonized subject and colonizer state. In part by elaborating an observed shift in subjectivity, Beller’s framework of racial formation links computation, capitalism, and colonialism in a more compelling way. His claim at the beginning of the text that “the space-time regimes of subjectivity [...] that once were part of the digital armature of ‘the human,’ have been profitably shattered” (67) is later qualified: “Female and racialized bodies and subjects have long endured this now-universal fragmentation and forced recomposition” (87). I think this is a productive way to conceive of the exploitative reach of technology. Marginalized bodies, particularly those under regimes of colonialism, faced capitalist subjugation in extreme forms. And though the ontological fragmentation accompanying this kind of subjugation is “now-universal,” the nuanced role of contemporary encoded biases and prejudices against marginalized identities is not lost in Beller’s overarching argument. That is, Beller complicates the notion of universality, distinguishing that which is universal, or ubiquitous, from that which is merely perceived as universal, such as consequences of technology that falsely appear to be evenly distributed across demographics, or understandings of the digital world that lack specificity because they lack “historicity” (74) and context.
Roberts, too, is concerned with a lack of context in the digital realm. She speaks about the perniciousness of invisible decisions, made as part of the labor of content moderation, that are taken for granted as automated processes: “When the content contains racist, homophobic, or sexist aspects, this invisibility is particularly problematic. It can appear that such content just naturally exists, and should exist, in the digital ecosystem, rather than it often being the result of a decision-making process” (Roberts 2). The invisibility of the moderator establishes norms, or at least the impression that certain norms exist, and this can further propagate the aforementioned racism, sexism, and homophobia. Of course, discussions about invisible or hidden labor featured prominently in the other readings as well. D’Ignazio and Klein argue that, “in our capitalist society, we tend to value work that we can see” (178), and Terranova delineates the notion of “immaterial labor,” which “involves a series of activities that are not normally recognized as ‘work’” (81).
I’m interested in further teasing out the relationship between visibility/invisibility and false perceptions of universality and normality. What is perceived as universal, normal, or acceptable is often informed as much by what information is excluded from our collective understandings of tech labor machinations (the invisible) as by what is included (the visible). Turning from the machinations that enable or prevent visibility to hate content itself, and referring back to last week’s Meme Wars reading: the initial hiddenness of alt-right discourse online allowed progressives to remain unaware of the proliferation of extreme beliefs that were more widespread, perhaps terrifyingly more “universal” in certain pockets of the country, than initially believed. Obviously, hate speech should not be platformed and should rather be actively countered. But how best to prevent the spread of hate online without blinding ourselves to the fact that this hate is real, pervasive, and comes with political consequences? Roberts suggests this hateful content does not "naturally exist" in the digital ecosystem. Can hate be said to "naturally exist" anywhere? Does it "naturally exist" in the non-digital ecosystem, where there aren't assigned moderators? And what would it mean for hate to "unnaturally exist"? For example, would the "unnatural existence" framing allude to the ways in which hate can be passed down from generation to generation as a kind of "programming"? If so, does that impact how hate content should be monitored, labeled, and generally dealt with?