Conceived as a response to the 2016 exhibit “Witness” at New Westminster’s New Media Gallery (Joyce and Duggan 2016), “Witnessing” is a series of three projects developed by students at Simon Fraser University’s School of Interactive Arts and Technology (SIAT).
The projects all address the thematic issues raised in the Witness exhibit: human-machine communication and mutual surveillance. Each work presented here demonstrates the uneasy marriage of our shared technological fears and desires. Paired with this web archive, the downloadable zine is imagined as a future archive of humanity’s attempts to communicate with an all-seeing machine. The zine includes the analog counterparts to the digital projects, and encourages the reader to contemplate how a future machine might learn of our thoughts, memories, languages, and visions.
The 2016 “Witness” exhibit showed five works of electronic or media art that all addressed the powerful and potentially omniscient gaze of machines. Adam Basanta’s work, “A Truly Magical Moment” (2016), displays two cell phones affixed to selfie sticks that spin around each other when called, so that each caller experiences the other through multiple and disorienting lenses. Rafael Lozano-Hemmer’s “Surface Tension” (1992/2001) uses an infrared camera mounted above the gallery to monitor visitor movements, which are translated to a video screen displaying an eye that follows viewers as they move through the space. This watchful eye demands the viewer’s full attention, and its mechanistic movement and human-like appearance cause us to question our own cyborg realities. France Cadet’s “Do Robotic Cats Dream of Electric Fish?” takes us to a future where cyborg cats do exist, as the hacked robot cat appears to watch an impossible-to-catch electronic fish swimming on screen. Björn Schülke’s “Vision Machine” (2014) presents an alternative view of vision and watchfulness as the computerized device, reminiscent of a dental tool, slowly spins to catch a glimpse of itself in an awkward machine self-portrait. Lastly, Stanza’s “The Agency at the End of Civilization” (2014) is a massive array of networked cameras and screens displaying real video links from CCTV cameras trained on UK car number plates; out of these visions the machine creates narratives and stories that distort our perception of what is real and what is speculative.
Jo Shin’s project, “You.got.a.pic?”, addresses the question of machine vision and self-imagination, and draws upon themes of immediacy in a hyper-mediated world. When sent a “selfie” image, the AI developed for this project responds by email with a modified “machine vision” version of that image: a distorted and pixelated view of the human in question. In the corresponding records in the zine, “Project Canary” imagines this process as a way to teach an AI how to “see,” and gives a paper-based lesson on pixel sorting, an analog experiment with the digital process used in the “You.got.a.pic?” processing tool. Because the correspondence between the AI and the human takes place over email and takes time, Shin demands that we question our obsession with instantaneous communication and hypermediacy (Bolter and Grusin 1999). Likewise, slowly sorting pixels by hand is the only way humans might begin to understand how these processing components allow the computer to virtually “see” us.
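For readers curious about the digital half of that paper lesson, a minimal pixel-sorting sketch might look like the following. This is an illustrative assumption rather than Shin’s actual processing tool: it simply reorders each row of pixels by brightness using the Pillow library, producing the kind of smeared, distorted “machine vision” image described above.

```python
# A minimal pixel-sorting sketch (an assumption, not the actual
# "You.got.a.pic?" tool): each row of pixels is reordered by brightness.
from PIL import Image


def pixel_sort_rows(path_in: str, path_out: str) -> None:
    img = Image.open(path_in).convert("RGB")
    width, height = img.size
    pixels = list(img.getdata())

    sorted_pixels = []
    for y in range(height):
        row = pixels[y * width:(y + 1) * width]
        # Sort each row by perceived brightness (sum of RGB channels).
        row.sort(key=lambda px: px[0] + px[1] + px[2])
        sorted_pixels.extend(row)

    out = Image.new("RGB", (width, height))
    out.putdata(sorted_pixels)
    out.save(path_out)


if __name__ == "__main__":
    # Hypothetical filenames for illustration only.
    pixel_sort_rows("selfie.jpg", "selfie_machine_vision.jpg")
```

Sorting by brightness within rows is only one of many possible pixel-sorting rules; the zine’s analog lesson invites the reader to perform the same reordering by hand.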
Xavier Wu’s project, “Sentinel #002,” imagines an AI capable of recording and learning from human language. Instead of drawing upon written texts or carefully worded posts, this machine would collect data from social media sites like Twitter. Language could be learnt from real-time conversational and emotional utterances, not unlike recent AI advancements in Google Translate (Lewis-Kraus 2016). By analyzing these posts through keywords, it would be possible to calculate and enumerate the average emotional status of humans at any given time. By teaching an AI to attribute emotional values like “happiness” to certain words, sentences, and conversations, might we be able to teach it something about the world and history? About the affectual relationships between people? The Sentinel #002 records included in the zine demonstrate what such a governmental project might look like; the graphs they contain expose how a machine might measure these relations with text-based processors.
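In its simplest form, the keyword scoring described above might resemble the following sketch. The lexicon, values, and function names are hypothetical illustrations, not Wu’s Sentinel #002 code: each post is scored by the emotional values of the words it contains, and the scores are averaged into a single “emotional status” for the batch.

```python
# A hypothetical keyword-scoring sketch, not the actual Sentinel #002 system.

# Illustrative lexicon; a real system would need a far richer vocabulary.
EMOTION_LEXICON = {
    "happy": 1.0, "love": 0.8, "great": 0.6,
    "sad": -0.8, "angry": -1.0, "tired": -0.4,
}


def score_post(post: str) -> float:
    """Average the emotional values of the lexicon words found in one post."""
    words = post.lower().split()
    hits = [EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0


def average_emotion(posts: list[str]) -> float:
    """Average emotional status across a batch of posts."""
    if not posts:
        return 0.0
    return sum(score_post(p) for p in posts) / len(posts)


if __name__ == "__main__":
    sample = ["so happy about the weekend", "tired and angry at the bus"]
    print(f"average emotional status: {average_emotion(sample):+.2f}")
```

A score like this, plotted over time, is one way the graphs in the Sentinel #002 records could be imagined being produced.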
“Human Memory Database,” by Frederico Machuca, presents an ominous vision of a future database of human memories. As the user clicks through the interface, text memories appear, with associated images flickering behind them. The memories are real, and the work allows us a glimpse into what a machine-readable database of memories might be. What would be at stake if such a system were real as well? How would we teach a machine to remember? The records of Human Memory Database included in the zine address this question as a bureaucratic document: a machine-readable text entry field is provided for new memories to be added by hand.
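A machine-readable memory record of the kind that bureaucratic form gestures toward might be structured along these lines. The field names and example entry are speculative assumptions for illustration, not Machuca’s actual database schema.

```python
# A speculative sketch of a machine-readable memory record; the fields and
# example entry are assumptions, not Machuca's actual database schema.
import json
from dataclasses import dataclass, asdict


@dataclass
class MemoryRecord:
    text: str              # the memory as written by hand and transcribed
    contributor: str       # who submitted the memory
    associated_image: str  # image that flickers behind the text


records = [
    MemoryRecord(
        text="The smell of rain on the school parking lot.",
        contributor="anonymous",
        associated_image="rain_lot.jpg",
    ),
]

# Serialize the database so a machine could "remember" it.
print(json.dumps([asdict(r) for r in records], indent=2))
```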
The accompanying zine asks the viewer to imagine a world where media technologies as we know them have come and gone. In an era of reverse-bureaucratic “paper” knowledge work (Gitelman 2014), where an all-seeing eye has gained intelligence, human communications have become low-tech and actively evade the computer’s graphical screen and vision. The zine is a provocation and a media archaeology (Huhtamo and Parikka 2011) artifact: a relic of the past viewed from 3,000 years in the future. It imagines a world in which a typeface (ZXX) was developed to prevent machine surveillance and the optical character recognition of documents (Mun 2013). Picking up the zine in your hands, you are to imagine that you are seeing something you should not, and that you are also uniquely able to read the text that this future dystopian AI cannot. In this future, the humans are spying on the computer, not the other way around. Drawing on theoretical issues of media and format specificity (Sterne 2012) and the role bureaucracy plays in the development and use of media technologies (Eichhorn 2016), the zine is reminiscent of soon-to-be-outdated media technologies: paper, the pen, the typewriter, and the Xerox machine.
What is compelling about a cat robot that watches a fish? The question is not whether or not the cat sees the fish, but what makes it frightening to imagine that it might. This imaginative future, and the knowledge that the eye, for example, is not actually watching you, presents a dilemma. What if it were following you? Learning from you, repeating your actions, intentions, and emotions? Is that a future too frightening not to be prepared for?
Witnessing was recently exhibited as part of the symposium “Under Super Vision,” hosted by the Department of Art History, Visual Art and Theory (AHVA) at the University of British Columbia (UBC) in the Audain Art Centre, and curated by Laurie White, Sherena Razek, Whitney Brennan, and Paula Booker.
– Hannah Turner, Vancouver 2017
A note on the typeface: “The name ZXX comes from the Library of Congress’ Alpha-3 ISO 639-2 – codes for the representation of names of languages. ZXX is used to declare No Linguistic Content; Not Applicable” (Mun 2012). ZXX is a disruptive typeface developed by Sang Mun (www.sang-mun.com), a former contractor with the US National Security Agency (NSA) (Mun 2013). He designed the typeface as a way to “conceal our fundamental thoughts from artificial intelligences,” and the typeface itself is unreadable by text-scanning software. As Mun notes, ZXX is intended to raise questions about privacy and the protection and codification of human emotions, thoughts, and affective experiences. It is a censor for humanity in the information age.
The criticalmediartstudio (cMAS) explores how old and new technologies have shaped, and continue to shape, historical narratives and practices of media arts and design. Our research outputs can take the shape of an art exhibition, a scholarly publication, a public performance, an experimental video, a generative artwork, an interactive digital graphic novel, a printed zine, or an artist book. Located in the School of Interactive Arts and Technology at Simon Fraser University’s Surrey Campus, cMAS is directed by Dr. Gabriela Aceves Sepulveda.