Industry vs. Machine: Canonization, Localization, and the Algorithm
Julia Weist
In 2013 Google reported that only 10 percent of searches required more than a quick, fact-based answer. Put another way, 90 percent of Google searches require definitive, easily digestible results. Algorithmically speaking, content should be homogeneous, popular, recent, shallow, and short.
The company revealed these statistics during the launch of a feature called “In-depth articles,” a special block of search results that attempts to provide more balanced material for select queries. In-depth articles, per Google’s description, are longer in form, written across a larger time span, and drawn from a more diverse list of publications than the usual results. (It’s worth noting, however, that in one Forbes study, the New York Times represented a quarter of all In-depth articles, and a majority of the content was less than two years old.) Google has released some information about the suite of signals used to select deep content, but no official details have been shared about how the search engine determines whether a search receives the In-depth treatment. A disclosure of this kind is extremely unlikely, as the information would reveal specific variables of Google’s proprietary algorithm, which is a protected trade secret.
It’s almost impossible to imagine: an engine that selects the world’s topics of deepest and greatest interest. What could this mean for art? I’d like to think that an art history major turned software engineer is having a private chuckle somewhere in California. Because how could someone at Google not have realized? With the In-depth feature, the company has essentially created a math machine for determining canonization.
Jeff Koons has In-depth results; Janine Antoni does not. Cindy Sherman, yes; Robert Gober, no. I Googled a few more names, which turned into making a list, which turned into creating a database. I wanted to know what the Google canon looked like. Does it resemble that of the art world? Could the two canons be compared to reveal a secret data structure underpinning all artistic legacies? Here’s the answer, in depth.
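At its core, each entry in that database reduces to a name and two yes/no answers: does Google grant the artist In-depth results, and does the art world already count them as canonical? The sketch below illustrates the idea with hypothetical field names; it is not the project’s actual CollectiveAccess schema.

```python
from dataclasses import dataclass

@dataclass
class ArtistRecord:
    name: str            # e.g. "Jeff Koons"
    has_in_depth: bool   # did the Google search return an In-depth block?
    in_art_canon: bool   # does a conventional art-world reference include the artist?

def compare_canons(records: list[ArtistRecord]) -> dict[str, list[str]]:
    """Group artists by how the two 'canons' agree or disagree."""
    buckets: dict[str, list[str]] = {
        "both": [], "google_only": [], "art_world_only": [], "neither": []
    }
    for r in records:
        if r.has_in_depth and r.in_art_canon:
            buckets["both"].append(r.name)
        elif r.has_in_depth:
            buckets["google_only"].append(r.name)
        elif r.in_art_canon:
            buckets["art_world_only"].append(r.name)
        else:
            buckets["neither"].append(r.name)
    return buckets

# Example records reflecting the observations above:
sample = [
    ArtistRecord("Jeff Koons", has_in_depth=True, in_art_canon=True),
    ArtistRecord("Janine Antoni", has_in_depth=False, in_art_canon=True),
    ArtistRecord("Cindy Sherman", has_in_depth=True, in_art_canon=True),
    ArtistRecord("Robert Gober", has_in_depth=False, in_art_canon=True),
]
print(compare_canons(sample))
```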
⇗ View the project full screen ⇗
Julia Weist is an artist and information scientist who works in New York. Her new media and sculptural work explores cultural informatics and collection theory, as well as the emotional, cognitive, and manifest aspects of data modeling and presentation. Since 2010 Weist has been a senior consultant at the software development firm Whirl-i-Gig, where she has modeled digital infrastructures and metadata schemata for such organizations as the New Museum, e-flux, and the Mattress Factory. In 2014 Weist exploited search engine optimization techniques to temporarily control the search results returned for the artist Haim Steinbach. This project is on view January 18–March 8, 2015, at Pioneer Works in Brooklyn and in Weist’s artist book After, About, With, published by Arpia Books.
Technical note:
This information visualization was built in collaboration with Seth Kaufman, using CollectiveAccess and the D3 JavaScript library. We also used the Mechanical Turk marketplace to determine that In-depth articles are almost certainly not personalized to the searcher (the way many general Google searches are). For example, Google knows that Janine Antoni is one of my Gmail contacts and that I’m an artist (through a variety of personal signals), but that doesn’t change Janine’s In-depth status in my search, or my In-depth results in general. The results are, however, localized by country: currently the Google.com and Google.co.uk platforms include the feature.
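The logic of that crowd-sourced check can be summarized roughly as follows. This is a minimal sketch with made-up field names and sample data, not the actual Mechanical Turk task design: if workers within the same country disagree about an artist’s In-depth status, the feature may be personalized; if countries differ from one another but are internally consistent, it is merely localized.

```python
from collections import defaultdict

# Each report is (worker_country, artist_queried, worker_saw_in_depth).
reports = [
    ("US", "Cindy Sherman", True),
    ("US", "Cindy Sherman", True),
    ("UK", "Cindy Sherman", True),
    ("US", "Janine Antoni", False),
    ("UK", "Janine Antoni", False),
]

def personalization_check(reports):
    """Return (country, artist) pairs whose workers disagreed with one another.

    An empty result means no evidence of per-user personalization; any
    variation that remains is between countries, i.e. localization.
    """
    answers_by_key = defaultdict(set)
    for country, artist, saw_in_depth in reports:
        answers_by_key[(country, artist)].add(saw_in_depth)
    return [key for key, answers in answers_by_key.items() if len(answers) > 1]

print(personalization_check(reports))  # -> []
```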
The Red Hook Journal has received generous support from the New York State Council on the Arts (NYSCA).