Eva & Franco Mattes: Not Safe for Work

By Nadim Samman

Since the 1990s, the New York-based duo Eva & Franco Mattes have kept a close eye on the Internet and its IRL entanglements, establishing a provocative body of work that runs the gamut from virtual reality to sculpture. Throughout, the pair have probed a tension between what is newly visible and what is made hidden within online culture. Their recent work explores how the threshold between an ecstasy of communication and a censorial apparatus is established today, in the age of social media—a practice located at the intersection of poetry, politics and trauma. The Bots is a collision at this intersection, drawing together the macabre and the mundane.

Everything posted to social media is screened and surveilled, subject to ‘community guidelines’ and ‘content restrictions’. While algorithms do some of the work policing content, human input remains important. Somewhere, people sit behind screens in drab office buildings, deciding what stays and what goes. The Bots (2020) is a deep dive into this world of content moderation, based on true confessions by ghost workers.

When it comes to content moderation, algorithms operate by identifying material that appears to violate the policies of a given platform. However, automatic processes are not able to interpret the context of a post, and often flag ambiguous cases for further review. This is where human input comes in. Through assessing and removing actionable items, living moderators supply a front-line service, keeping digital space somewhat free from inappropriate, offensive, and/or illegal material, while simultaneously protecting free speech from accidental censorship. They also play a part in machine learning, with their tracked input proving critical to the algorithm’s future accuracy, feeding back into the system and fine-tuning it. At least, on the surface…

Of course, there is more to consider. Content moderation is psychologically taxing, as it requires workers to view material that can range from offensive to disturbing. Maintaining detachment from a relentless stream of questionable content is an arduous task, with mental health issues including stress, depression, anxiety, and insomnia awaiting those who cannot successfully repress the experience. Workers based in developing countries are subject to further pressures, including racism, poor labor conditions and low pay. One can readily imagine that such moderators require clinical and social support. And yet, ‘The Bots’ (as these persons are termed in the eponymous series) are forbidden to talk about what they have experienced outside the workplace. As a consequence, a clear picture of labor conditions inside this growing sector, insight into how decisions are made, and the true extent of social media’s underbelly all remain obscure.

Against this background, Eva & Franco Mattes managed to speak to workers based at a moderation center for Facebook in Berlin. The resulting videos, performed by actors, draw directly from these interviews. With scripts prepared in collaboration with the writer Adrian Chen, the clips are delivered in the style of online make-up tutorials—a format that has been successfully used to bypass censorship or ‘moderation’ of political speech on Chinese social media platforms.1 Speaking to their smartphones, the actors stage a tension between “the mundane levity of dressing up for one’s followers [and] the everyday horror of online hate”, according to the artists. Throughout, serious discourse—addressing topics such as violence, sexual abuse, hate speech and terrorism—is constantly interrupted by makeup tips.

Each video is a personal report on an individual’s work attending to a specific geographical territory or ‘market’. Naturally, workers describe different issues depending on the region that they serve. In this manner, The Bots explores the specificity of various markets, and how content moderation may affect different communities. Along the way, the stories offer a comparative peek behind the curtain—uncovering the darker aspects of cultural variation. In keeping with the series’ conceptual focus on activity behind the scenes, videos from The Bots are normally displayed on monitors mounted on the underside of a customized office desk—of the same brand used at the Berlin moderation center: a material metaphor for the unrecognized nature of the moderators’ work, and for the repression of content that must be handled ‘under the table’, so to speak.

In a sense, The Bots indicates that the moderators are, themselves, repressed content: “You would say to people ‘I work for Facebook’, but not really. Facebook, in the Silicon Valley, I’m sure it’s an amazing employer. But I had to go to the middle of nowhere to get to this sweatshop type place […] The building was very small, but there were a lot of people working there. At the peak there were some 1000 people crammed into two buildings. The floors were only rows of desks with computers. It was like an assembly line, like a conveyor belt with content instead of auto parts. It would smell quite bad sometimes…”2

But Eva and Franco’s restaging of the interviews has further implications. By having actors perform the manifestly ‘normal’ activity of cosmetic application, the setup conveys—on the one hand—a sense of ubiquity concerning exposure to extreme material. On the other, it conveys desensitization: “[…] She begged for mercy and tried to stop them, but two women restrained her arms. It was the most brutal thing I had ever seen, I was horrified. I cried at that. Another video I saw was of a teenage girl in my city talking about how she wanted to kill herself, and that made me cry as well. After a while I just sort of became numb to it. Nothing sort of affected me. I was just like ‘oh, delete, delete’.”3 The combination of self-care tropes—lipstick and concealer—and references to graphic violence creates an unsettling atmosphere; one that speaks, on the individual level, to the normalization of workplace trauma, and, more widely, to a context of degradation attending internet-enabled information sharing. The metaphor even suggests a crossover between concealing (facial) imperfections and beautifying the surface of the internet by removing unwanted content. Throughout, a conceptual tension between exterior effects and hidden depths plays out through harrowing narratives and lip-gloss.

Clearly, the series title serves to emphasize the dehumanizing and often robotic nature of the job—a repetitive and inconspicuous task, delivered without recognition or significant reward. It is an assessment tied into the concept of ghost work, elaborated in a 2019 book of the same name, which outlined how Silicon Valley covers up a great deal of human labor behind the scenes, while making surface products appear to be the result of full automation.4 Reality is far messier than companies would have us believe. “In the past, before I had the job, I knew that content moderation existed, but I thought it was done by a computer, not by real people. In fact, when working, we see that people think we’re robots.”5 Another moderator relates how the psychological burden of the role demands formal social recognition that is not forthcoming: “In my previous work, I did work in the humanitarian field, on the border with Syria. I have a certain expertise in handling real world harm. But it’s difficult when you are not valued, when you don’t exist. At least if you are a social worker, you know what you do, if you are a law enforcement agent or a teacher, people know what you do. We do act in all these roles, but we don't exist.”6

From another perspective, and in a more disturbing fashion, the title outlines a novel form of discipline: a reality starkly drawn by the writer Laura Preston in an account of her own experience working as “HUMAN_FALLBACK” for a chatbot: “Months of impersonating Brenda had depleted my emotional resources. […] All I wanted was to glide through my shifts in a stupor. It occurred to me that I wasn’t really training Brenda to think like a human, Brenda was training me to think like a bot, and perhaps that had been the point all along.”7 Whether disguising oneself as a bot—acting like a machine—or being trained to think like one, the specter of dehumanization looms large. In this respect, ‘putting on a face’ takes on a different meaning—with the makeup tutorial serving as an analogy for the manner in which a certain industrial logic (and algorithmic processing) programs the scope of action officially available to workers. This is, of course, a limited frame that excludes the messiness of their emotions from its logic. As one person speaking in The Bots relates: “The way things are presented make you kind of forget that these posts belong to people. You kinda sleep on the human side a bit and when you’re rushing to reach a target—so your boss doesn’t have to ‘talk with you about your performance’—you stop caring about who posted it and you just remove it.”8

Like makeup, the operation is an external application whose protocol addresses only the ‘Data Body’, that “fascist sibling of the real body”—a concept first outlined by Critical Art Ensemble in 1995, decades before the complex of massive information capture and deep learning arrived on the scene.9 In a prescient diagnosis, their account proposes that “the most frightening thing about the data body is that it is the center of an individual’s social being. It tells the members of officialdom what our cultural identities and roles are. We are powerless to contradict the data body. Its word is the law. One’s organic being is no longer a determining factor, from the point of view of corporate and government bureaucracies. Data have become the center of social culture, and our organic flesh is nothing more than a counterfeit representation of original data.”10 The data body is thus a person opened up to technical inspection and programming; a performer whose auditorium is not so much a playhouse as an operating theatre, or extraction machine. In The Bots, putting on makeup may be viewed as alluding to the moral grotesque latent in this kind of performance culture at work.

Moving on, might the makeup also suggest moderators’ efforts to disguise their true selves, somehow, in order to cope with their burden—attempts to ‘mask’ the distress that comes with the job? It is a reading compatible with the camp performance style of some of the actors. Characterized by elements of irony, absurdity, and exaggeration, camp is often used to satirize social norms or to emphasize the absurdity of a situation. Sometimes flippant, sometimes intensely laconic—to the point of vapidity—in their descriptions of egregious user uploads, the performances in The Bots dramatize the point where personal strategies of detachment intersect with industrial forms of alienation.

The Bots is unique in setting issues surrounding online content moderation into a complex artistic format—one that bridges reportage, performance, sculptural installation, and drama/creative writing. As a talking point, the work is a catalyst for conversations on new forms of labor attending digital change in society. But do we need content moderation at all? While it may ensure that online platforms stay ‘safe’ and civil—rather than being overrun with spam, scams, or hate speech—the makeup tutorial format indicates a flip side: content moderation has the potential to limit freedom of expression, as material may be removed from platforms if deemed politically or ideologically inappropriate. Moderation can also be used to discriminate against certain users or demographics.

The Bots goes some way to demonstrating just how difficult it is to gain access to a fuller picture of who (and what) gets censored, and who (and what) does the censoring. A related work, Abuse Standards Violations (2016), further contextualizes the issue. It features a series of wall-mounted insulation panels, printed with corporate guidelines that were leaked to the artists in the course of their investigations. The guidelines refer to images that moderators need to classify, usually in order to remove them. They are like “filters”: letting some things through while blocking others. The companies that produced these guidelines are anonymous. Often, the moderators do not even know who their employer is.

Why should artists care? Not least, because content moderation potentially affects their ability to reach audiences. On the most obvious level, they risk finding their artwork on the wrong side of said ‘moderation’, judged inappropriate or otherwise, their freedom limited by narrowly defined terms of service. This concern is not speculative, as one moderator relates: “There was also a famous artist from Russia, and she had an exhibition with a real man and a woman in a gallery, and they were naked. Unfortunately we had to delete that because they were naked.”11 As private goods that amount to de facto public spaces (classically figured in the hoary suggestion that Twitter is ‘the world’s town square’), social media platforms leave speech caught between corporate and constitutional regulation. For those who make an art out of communication and grey areas, the dead hand of market-as-law threatens to smother.

But a deeper concern must be the fact that content moderation may premediate the artistic imagination. Receiving only secondary access to what has been uploaded—a literal feed—artists form an impression of what social media really is that runs on corporate guardrails. Under the circumstances, how can an artist claim to provide an informed perspective on the most prominent communication tool of our age? One of the most serious suggestions of Eva & Franco Mattes’ series is that artists might endeavor to supply corrective ‘moderation’ of what social media claims to be in the broader cultural space.

But there is more to say. ‘The Bots’ may not only be the moderators. Social media has had a significant impact on the way people communicate and access information. It can be used to amplify certain voices, to spread misinformation, and to influence opinion. Many believe that it conditions programmed responses from users, making them more robotic—a phenomenon that has contributed to the wave of political instability in the US and Europe, where the majority of voters get their news from social media. Algorithms have been shown to serve people content that they are most likely to engage with or agree with, which can create a ‘filter bubble’ around a person and limit their exposure to different viewpoints. This can make people more prone to echo chamber effects and less likely to engage in critical thinking.12 But the notion that all such bubbles are merely the outcome of laissez-faire design, or mundane group-think, is politically naïve. In fact, it has been documented that covert work by the social-media influencing group Cambridge Analytica helped to sway the election of Donald Trump to the US presidency. In this context, the curation of filter bubbles (by a combination of algorithmic processes and human direction) produces bot-like or programmed attitudes in users. Thus the promise of identity creation through social media is more of a minefield than commonly supposed.

Self-fashioning, the process of constructing and presenting one’s identity through clothing, appearance, and other forms of self-presentation, has a long history in art. Some would argue that our online profiles extend the self-portrait genre—carrying its ‘modern’ program of subject formation into the contemporary media-scape. Perhaps everyone is an artist now. The culture of social media certainly conveys an impression that it is simple to direct the construction of one’s identity in a public forum. Yet, as the Cambridge Analytica scandal indicates, the process of ‘self-fashioning’ a political identity through social media is hugely problematic. Quite apart from outright manipulation, the very concept of individual self-fashioning neglects the ‘social’ dimension of the media. Clearly, norms pressure users into presenting polished and curated images of themselves. But what is the palpable difference between a norm and an echo chamber effect? Filter bubbles may lead users to file away ‘human’ edges (and emotions) in pursuit of a better outline—a living imitation of data bodies.

Such a reflection opens onto the important issue of social media as an epistemological frame whose impacts cannot but be political. Indeed, what is exhibited on social media is not just individual personas, or even communities, but a sense of what makes up the world as a whole. In an age of relentless information accumulation, one recalls Baudrillard’s deployment of an image from Jorge Luis Borges—of a map so large that it comes to cover the whole Earth, becoming a new kind of terrain. Online, mediation and the territory itself have collapsed into something amorphous. In as much as the world is re-formed through the internet, the task of moderation knows no end. Indeed, the graphic figures that adorn computer screens, so perfectly termed ‘icons’, stand to be analyzed for the manner in which they figure disciplinary regimens for the self and the world at large. To the extent that they are displayed on screens, these portraits can also be described as screening-off certain human(e) values in the service of instrumental foci. The true-life accounts that underpin The Bots make this clear.

Don’t let the makeup fool you. Eva and Franco’s series is a work about how modern-day sweatshops, beheadings, rape, mutilation and worse are repressed. But it is about more than this, too: for a cultural field governed by a professed ethos of disclosure, social media’s hidden truths are subject to a distributed mechanism akin to a border patrol, keeping what Freud called the unconscious at bay. But whereas the Freudian ego constantly consolidates itself into a unified figure—against its own tendencies to fragment—no coherent (moral) identity is being forged through the work of content moderation. The disturbing import of The Bots is that, in the digital age, there appears to be no unified agency or consciousness that can serve as a court of moral last resort—to give meaning to moderation, and by extension its mediation of ‘social’ media. Social media’s internal censors barely hold together a conglomerate of economic imperatives, nefarious censors, propaganda, and so on. The fantasy of a possible moral core to the system is moot. It is not just that there is no good in social media, but that there is also no answer concerning its evil or inhumanity. There is no necessity or coherence at all, just a shifting play of masks.

1 rollingstone.com/culture/culture-features/china-tiktok-uyghur-protest-censorship-918757
2 The Bots (Greek Market 2)
3 The Bots (English Market)
4 Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Boston and New York, 2019).
5 The Bots (Italian Market)
6 The Bots (Turkish Market)
7 theguardian.com/technology/2022/dec/13/becoming-a-chatbot-my-life-as-a-real-estate-ais-human-backup
8 The Bots (Arab Market)
9 See Critical Art Ensemble, “Appendix: Utopian Promises—Net Realities,” in Flesh Machine: Cyborgs, Designer Babies, & New Eugenic Consciousness (1998), as republished on www.critical-art.net, http://www.critical-art.net/books/flesh/flesh7.pdf, p. 146 (accessed October 27, 2022).
10 Ibid.
11 The Bots (English Market)
12 Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (New York, 2011).