Saturday, October 22, 2022

Google Pixel Ads (Sebastian)

    For several weeks now, I have been bombarded with ads for the Google Pixel, the newest iteration of which I assume has either just been released or will be released very soon. I get these ads far more frequently and consistently than I get ads for any other product or service at the moment. As it relates to this class, I find this phenomenon simultaneously amusing and horrifying for a few reasons.

    Firstly, I really want to know why “the algorithm” (whichever algorithm might be at play here) has decided that I should be getting these ads multiple times a day. As Siva Vaidhyanathan observes in “The Googlization of Us,” “Because Google makes its money by using our profiles to present us with advertisements keyed to the words we search, precision is its goal…. The more Google knows about us, the more effective its advertising services can be” (83). So how has Google drawn a connection between its profile of me and these specific ads? I’m certainly not in the market for a new phone. Many of the ads highlight the Google Pixel’s supposedly very technically advanced camera, so maybe “the algorithm” assumes that my interest in cinema translates to an interest in photography. It does not. If nothing else, I am always struck by how algorithmic attempts to understand us so often seem to land simultaneously so close to and yet so far from the truth, and often in exceedingly banal ways. It is somewhat amusing, for instance, if “the algorithm” has indeed made the cinema-photography connection, partially because this reflects how little it knows about me: I am utterly uninterested in the quality of my phone’s camera. But I also have to concede that the connection is not entirely unfounded; I certainly know more than one colleague who studies cinema and is also invested in either taking photographs or studying them. In that regard, the association being made here (if that is, indeed, partially why I am receiving these ads) signals something far more unsettling.

    Secondly, the actual content of these ads is fascinating. I tend to get two variations, both of which highlight the Google Pixel’s camera. One emphasizes that the camera now allows you to erase unwanted background details in a photograph. The other asserts that the camera is now able to properly recognize and reproduce darker skin tones (to be more specific, given that this version of the ad always features a Black actress, it is implicitly about the recognition and reproduction of Blackness). These two variations are interesting enough when taken on their own terms. The first would seem to reflect Vaidhyanathan’s observation that “every major marketing campaign these days is likewise framed as being about ‘you’” (83). Indeed, this version of the Google Pixel ad frequently tells the story of a woman taking an imperfect selfie, only to discover that she can erase everything about the image aside from herself. The ostensible magic of the Google Pixel is that it truly does allow you to take photographs that are just of you. Of course, as Vaidhyanathan notes, the rhetoric of “you” is always just an attempt to obfuscate how companies like Google engage in surveillance practices. Meanwhile, the second ad variation recalls Simone Browne’s observations in Dark Matters about “epidermalization.” Drawing on the work of Frantz Fanon, she argues that epidermalization is “the imposition of the race on the body” (7). By being able to better recognize Blackness, the Google Pixel improves its capacity for epidermalization and, in turn, further envelops Black bodies within its surveillance regimes.

    When put in conversation with each other, though, these two variations of the Google Pixel ad reveal the extent to which legibility has been framed as a consumer benefit. In “Becoming NULL: Queer Relations in the Excluded Middle,” Jacob Gaboury extensively discusses how surveillance systems require legibility to function effectively. Examining the way that Facebook allows its users to declare their “identity,” he writes, “[Facebook] is more than willing to accommodate the particularity of your unique user profile, but in order for that profile to be useful it must be legible” (Gaboury). Similarly, the Google Pixel ads may claim that the phone’s camera primarily benefits the user, but it seems like no accident that both of the advertised new features (the erase function and the capacity to recognize Blackness) are about enhancing the user’s legibility to the phone. In fairness, I do not know how the Google Pixel’s camera may or may not be attached to a larger surveillance infrastructure, although the readings for the surveillance week incline me to suspect that it probably somehow is. Regardless, Gaboury’s assertion that “all representation is essentially recuperable [for surveillance]” resonates here. How the photographs taken by the Google Pixel’s camera ultimately do (or, optimistically, do not) get used for the purposes of surveillance is perhaps less relevant than the fact that they can be. And, if the ads I keep receiving are any indication, they may even be optimized for the logics of surveillance.  

1 comment:

  1. Hi Sebastian,

    Thanks for such a clear approach to applying the different readings to the experience of getting targeted with Google Pixel ads. Advertising is such an important part of surveillance, one that our readings didn’t really touch on. I think you’ve highlighted a limitation in Vaidhyanathan’s analysis of what “Googlization” entails that Srnicek captured really well in Platform Capitalism. Google’s business model is not just about ensuring that every ad campaign has the highest efficiency possible. Instead, it’s about erecting an infrastructure to extract value at scale from all users—this includes ourselves as consumers, certainly, but also advertisers.

    What if we think about the Pixel ad re-targeting you as a data collection exercise by the ad tech platform rather than Pixel as an advertiser? With that goal, I can imagine why the ad tech algorithms are delivering the same ad to you multiple times: to see if you’ll engage with it in one way or another—by hovering, watching the video to completion, etc. Every time you engage—or don’t engage—with that ad, the network collects another data point that helps refine your measurable type, to use Cheney-Lippold’s term. (The fact that you wrote about it in this blog post, and potentially searched for it in the process, could actually be a very positive engagement signal, unfortunately.) Behaviorally, there’s no way to opt out—your engagement is tracked as either true or false. (Unless you have an ad blocker. Not AdBlock Plus!) As long as you’re sending data to the ad network about your interest in the ad—even if you’re just confirming your sustained *lack* of interest—that’s a useful data point it can use to enhance its targeting algorithms for you and others. This is why I found Andrejevic’s focus on population-level subject formation compelling as opposed to individual-level targeting: the ad system just wants to earn higher margins from the population, not from you specifically.
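    As a toy illustration (every name here is hypothetical—this is not any real ad-tech API, just a sketch of the logic described above), the “no way to opt out” dynamic might look something like this: each time the ad is served, the network records a boolean, so non-engagement is just as much a data point as engagement.

    ```python
    # Hypothetical sketch: an ad network's per-user record where every
    # impression yields a True/False signal, whether or not the user engages.
    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        impressions: int = 0
        engagements: int = 0
        signals: list = field(default_factory=list)  # one boolean per impression

        def record_impression(self, engaged: bool) -> None:
            # Serving the ad at all produces a data point; "no engagement"
            # is itself recorded, so there is no behavioral opt-out.
            self.impressions += 1
            self.engagements += int(engaged)
            self.signals.append(engaged)

        def engagement_rate(self) -> float:
            # A crude stand-in for refining the user's "measurable type."
            return self.engagements / self.impressions if self.impressions else 0.0

    profile = UserProfile()
    for engaged in [False, False, True, False]:  # e.g. hover, video completion
        profile.record_impression(engaged)

    print(profile.impressions)        # 4 data points collected
    print(profile.engagement_rate())  # 0.25
    ```

    The point of the sketch is only that ignoring the ad still sends a signal: all four impressions end up in the profile, engaged or not.
    
    
    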

    At the same time, there are other, more mundane—and realistic, frankly—reasons why you might be getting re-targeted with this ad. One is that the ads are served on Google’s ad tech and there aren’t any live campaigns with higher margins, so Google is flooding its own ad supply with ads for internal products for brand awareness. Another is that it’s truly an inefficient algorithm—perhaps because the ad tech is owned by Yahoo. A third is that it’s truly erroneous. In my experience, people who run ad tech know next to nothing about how their systems work. They just plug in campaigns and hope for the best.
