Faculty Spotlight: Research Reflections with Alexander Monea

Cultural Studies faculty member Prof. Alexander Monea was the guest speaker at the Mason Libraries' Research Reflection Series on March 29, 2018. He presented a talk entitled "'I Know It When I See It': An Overview of Google's SafeSearch and the Politics of Automating Judgment." His talk covered the classificatory logic of contemporary machine learning applications and examined Google's work to filter Not Safe For Work (NSFW) images.

Please find the abstract of Prof. Monea's talk below.

In his 1964 concurrence in Jacobellis v. Ohio, Potter Stewart noted that while he could not define hard-core pornography, he knew it when he saw it. In this presentation, I refer to such I-know-it-when-I-see-it concepts as extra-linguistic concepts because they contain an intuitive, inductive, and/or felt component in the classificatory logic that affords their generalization. This paper argues that contemporary machine learning applications have successfully operationalized this classificatory logic at mass scale, and looks to Google's work to filter Not Safe For Work (NSFW) images as a particularly compelling success story. I argue that this constitutes not only the computational production of extra-linguistic concepts, but the automatic mediation of the visual world. This presentation traces the history of SafeSearch, with particular attention paid to the introduction of Cloud Vision in 2016, which Google promised would leverage machine learning for the detection of labels, logos, landmarks, optical characters, faces, image attributes, and explicit content in images. The resulting machine learning apparatus was composed not only of material technologies and communications infrastructures, but also of scientists, engineers, and programmers conducting research and development, the diverse bodies of international laborers sitting in cubicles reviewing flagged and reported content, and the hordes of citizen surveillance agents reporting offensive content as they browse the web. In a sense, this machine learning apparatus automates the production of subjective constructs, though it produces three problematic operations: (1) the big data paradigm is probabilistic, and thus designed to tolerate a certain percentage of misclassification without adequate adjudication mechanisms for redress; (2) the extra-linguistic nature of computational concepts makes them an opaque medium for supporting human judgment, and thus delimits our capacity to adequately assess their accuracy and critique their parameters; and (3) the awe we feel at such technological feats makes it easy to simultaneously fetishize and depoliticize machine learning apparatuses, and thus obscures their increasingly prominent role in shaping our subjectivities and communities.
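
For readers curious how the explicit-content detection discussed in the abstract is exposed to developers, the sketch below (not part of Prof. Monea's talk) shows one way to query Google's Cloud Vision SafeSearch detection from Python. It assumes the google-cloud-vision client library is installed and credentials are configured; the image path is a hypothetical example. Note that the API returns graded likelihoods rather than a yes/no judgment, which reflects the probabilistic tolerance for misclassification the abstract critiques.

```python
# Minimal sketch: scoring an image with Cloud Vision SafeSearch detection.
# Assumes `pip install google-cloud-vision` and configured application
# default credentials; "example.jpg" is a hypothetical local file.
from google.cloud import vision

# Cloud Vision reports each category as a graded likelihood, not a binary flag.
LIKELIHOOD_NAMES = (
    "UNKNOWN", "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY",
)

def safe_search_likelihoods(path: str) -> dict:
    """Return SafeSearch likelihood labels for the image at `path`."""
    client = vision.ImageAnnotatorClient()

    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    # Ask the API only for the explicit-content (SafeSearch) annotation.
    annotation = client.safe_search_detection(image=image).safe_search_annotation

    return {
        "adult": LIKELIHOOD_NAMES[annotation.adult],
        "racy": LIKELIHOOD_NAMES[annotation.racy],
        "violence": LIKELIHOOD_NAMES[annotation.violence],
    }

if __name__ == "__main__":
    print(safe_search_likelihoods("example.jpg"))
```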