Human ID / CounterPulse Combustible Artist in Residence
2020 - 2021
A four-channel installation using dance, cinema, and AI deepfakes to consider the edges of intimacy and our ability to know one another when our relationships are mediated by technology.
StratoFyzika invited me to join them for their residency at CounterPulse shortly before COVID-19 lockdowns were announced in early 2020. The travel and work restrictions that followed forced our team, split between Berlin and the Bay Area, to work entirely remotely, in place of the extensive in-person rehearsals we had planned the residency around. The conditions of lockdown changed the work, from the removal of live performance to the imprint of constant teleconferencing on our process.
We started Human ID before lockdown by questioning how deepfakes and AR alter our ability to know one another and our perceptions of reality. Not only do these technologies create opportunities for propaganda and control; their mere existence, the mere possibility that any media one encounters has been altered by an AI, changes how we see the reality they represent and the people we know through them.
The installation presented for the residency comprised four channels of video and an audio composition by Danishta Rivero. Both the video and audio were built from a mixture of “real” media and AI-generated synthetic content. The line between real and synthetic is ambiguous and difficult to ascertain, even for a trained eye or ear. Despite losing the opportunity to incorporate live performance into the presentation, the installation remained startlingly intimate, with larger-than-life projections of the dancers’ bodies (Daria Kaufman and Hen/i) and a tactile vocal score.
Videos were generated from dozens of hours of slow-motion footage of the dancers, processed with a custom computer vision pipeline, and used to train a generative adversarial network. The resulting models allowed us to create deepfakes of dance movements, synthesizing choreography that appeared to be performed by the dancers, even if they had never performed it. Likewise, portions of the audio were based on text generated using a GPT-2 model fine-tuned with the writings of Roland Barthes. The synthetic text was read aloud using a model trained on Danishta’s voice.
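The text-generation step begins, as fine-tuning always does, with corpus preparation. As a minimal, hypothetical sketch (the actual pipeline is not shown here, and the function and parameter names are illustrative), a fine-tuning corpus might be split into overlapping fixed-length chunks, each of which becomes one training example:

```python
# Hypothetical sketch of one preprocessing step for fine-tuning a language
# model such as GPT-2: splitting a source text (standing in for the
# Barthes corpus) into overlapping word-level chunks.

def chunk_corpus(text, chunk_words=128, overlap=32):
    """Split text into overlapping word-level chunks for fine-tuning."""
    words = text.split()
    step = chunk_words - overlap
    chunks = []
    for start in range(0, max(len(words) - overlap, 1), step):
        chunks.append(" ".join(words[start:start + chunk_words]))
    return chunks

# The overlap preserves some context across chunk boundaries, so the
# model sees each transition between passages more than once.
```

A real pipeline would then tokenize each chunk and feed it to a training loop; that part is omitted here.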
Deepfake Studies
2021
Sketches with neural networks and the body.
A series of short, looping video studies working with neural networks and the body. These studies consider the neural network as a kind of camera film, a residue of something that has happened. Unlike film, however, the networks are designed for manipulation and falsification, opening the potential for memory and the record of history to bifurcate.
The neural networks of Deepfake Studies are propositions of devices for digital memory. On the one hand, the network retains and learns something about light that once reflected off a body in a certain place at a certain time. The network is an intangible residue of a past presence, or tens of thousands of presences, but it is also a device for manipulating this very record. After training the network, I can inject new data, and cause the network to infer an image of an event that never occurred.
These studies were generated using a custom high-resolution AI video pipeline.
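The pipeline itself isn't documented here, but one common technique in GAN video work for inferring images of events that never occurred is to interpolate between latent vectors before handing them to the generator: intermediate points produce smooth frames that no source footage ever contained. A minimal sketch of spherical interpolation (slerp), assuming latent codes are plain lists of floats; this is a standard technique, not necessarily the one used in these studies:

```python
import math

def slerp(t, a, b):
    """Spherically interpolate between latent vectors a and b at t in [0, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # Angle between the two vectors, clamped against float error.
    omega = math.acos(max(-1.0, min(1.0, dot / (na * nb))))
    if omega < 1e-6:
        # Nearly parallel: fall back to linear interpolation.
        return [x + t * (y - x) for x, y in zip(a, b)]
    so = math.sin(omega)
    return [
        (math.sin((1 - t) * omega) / so) * x + (math.sin(t * omega) / so) * y
        for x, y in zip(a, b)
    ]
```

Feeding a generator a sequence of slerp points between two latent codes yields a video that glides between two remembered moments through moments that never happened.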
Abstract Camera No. 1
2019 - 2020
Abstract Camera No. 1 is a camera of movement and color, a tool for expression, and a collaboration between the camera, the subject, and the camera’s maker.
A camera is a perspective, a pinhole through which the shadows of our desires and fears flicker, like a dream, as much a mirror of the viewer as a window on its subject. The threads of a camera’s construction are secret, its rules absurd, its interpretation deceitful, always concealing even as it reveals.
Abstract Cameras is a series of fictional cameras, each a proposition and a glimpse of a potential future. The series questions the assumed objectivity of the camera, in an effort to expose the subjectivity and emotion that are encoded in technological objects and in how they shape our perspectives on reality. The fictional cameras supplant a phone’s built-in camera with a poetic one, installed from the Apple App Store.
This camera, the first in the series, is a camera of movement and color, as though the camera’s film had been replaced with an impossible material that is sensitive to the gesture of a hand and the turn of a dancer more than the fall of light upon its subject. As movement etches the surface of this impossible film it bleeds outward like ink in water, leaving a tracery of memories that layer upon one another. Even as Abstract Camera No. 1 reveals the dancing of its subject, it conceals their image. Abstract Camera No. 1 is a tool for the user’s expression and a collaboration between the camera, the subject, and the camera’s maker.
Gestures #2 - #4
2018
A series of three generative dance films that research how history is encoded in the body.
Gestures #2 - #4 continue my study of gesture-based choreography and computer-generated video. Work on the piece began with a series of workshops with Sherwood Chen and Gabriel Christian that investigated physical gesture as an encoding of power and history. These workshops found and developed three gestures based on the dancers’ personal experiences. The gestures were rehearsed exhaustively, and performed for camera roughly two dozen times each. Each performance of a gesture lasts a minute or two and, filmed with a high-speed camera and replayed, takes about ten minutes to watch. Each gesture, then, has several hours of footage from dozens of performances. Custom software sifts through these hours of footage and generates multi-hour compositions that are presented on television screens.
The computer-generated compositions are meditative and continually surprising. Using partially stochastic decision trees, the computer finds unusual juxtapositions of movements–sometimes finding unison choreographies, sometimes contrasting postures that seem to hang in time. Overlaid performances expose multiple concurrent memories of an experience that may agree or disagree, confounding the viewer’s desire to resolve their own understanding of the performers’ experiences.
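The actual decision trees aren't published; the sketch below illustrates, with hypothetical names and probabilities, how a partially stochastic rule might choose between a unison juxtaposition (two takes of the same gesture) and a contrasting one (takes of two different gestures):

```python
import random

# Illustrative sketch, not the production software: a partially stochastic
# decision rule that pairs recorded gesture performances for side-by-side
# playback.

def next_pair(clips, rng, p_unison=0.4):
    """clips maps gesture name -> list of take ids; returns two
    (gesture, take) selections to play concurrently."""
    if rng.random() < p_unison:
        # Unison branch: two different takes of the same gesture.
        gesture = rng.choice(sorted(clips))
        a, b = rng.sample(clips[gesture], 2)
        return (gesture, a), (gesture, b)
    # Contrast branch: one take each from two different gestures.
    g1, g2 = rng.sample(sorted(clips), 2)
    return (g1, rng.choice(clips[g1])), (g2, rng.choice(clips[g2]))
```

Seeding the random generator differently for each run is one simple way such a system can remain "continually surprising" while still obeying its rules.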
297 Gestures Upon the Body
2016
297 Gestures Upon the Body is a video installation that examines pedestrian physical vocabularies as encoded expressions of structures of race, gender, class, wealth, and power.
297 Gestures Upon the Body is a video installation that examines pedestrian physical vocabularies as encoded expressions of structures of race, gender, class, wealth, and power. As two people pass on the street a curl of a lip may denote contempt at the other’s misfortune, while a lingering glance may imply perceived familiarity. Our gazes, gestures, postures, etc. are traces of our histories, telling stories of displacement, oppression, kinship, and resistance. Gestures materializes these unseen structures first in the bodies of the performers, and then in video.
The installation is presented as two large video projections, each displaying an infinite stream of imagery, composed by a computer at the moment of observation. The computer sifts through a corpus of videos of 297 semi-improvised performances. The gestures are sourced from the performers’ lives through a workshopping process, then rehearsed and performed with dancers constantly changing roles and searching for new perspectives on the material. An awkward, possibly sexual encounter becomes sinister when roles are reversed, while an uncomfortable moment between a couple becomes abstracted into geometries of bodies and negative space.
The performances are overlaid, contrasted, and juxtaposed across the video projections as the computer engages in a semi-improvised performance of its own. The computer’s re-performance of the source material encourages unpredictable third-order interpretations through chance and rule-based operations.
The process of 297 Gestures Upon the Body is exhaustive and rigorous. It is an alloy of processes from big data, data visualization, and artificial intelligence, as well as from Anna Halprin and Merce Cunningham. Gestures applies analytical and creative rigor to researching the weight of progress upon the body, and how history replays itself in our most intimate moments.
Rondo Variation
2016
Interactive projections for a dance that brings sculptures to life.
Based on Bruce Beasley’s sculpture series Rondo, Rondo Variation connects each of the sculpture’s rings to a dancer’s movements. Multiple projections create a church-like space out of form, color, and light.
The performance system uses wireless sensors worn by each of the six dancers to determine the orientation of the rings, while spatial information is provided by six off-stage “puppeteers” who enter into virtual duets with the dancers. The sensor data is then processed by a custom rendering engine into four high-definition projections, which are mapped onto three screens: upstage, downstage left, and downstage right.
0⏎
2015 - 2016
Real-time generated absurdist choreography for any number of dancers.
Pulling from previous work with Disappearing Acts, 0⏎ experiments with using projection, audio, and a computer system to generate choreography in real-time. The choreography is influenced by classic algorithmic structures, forcing the dancers into machine-like loops and glitches, all the while creating short phrases of movement that are recombined in unexpected ways.
0⏎ was performed in 2015 at Codame Art + Tech in San Francisco, and in 2016 at aMID Festival in Chicago.
#0
2013 - 2015
A generative dance performance invoking a near future where humans have lost control of their technology.
Created in collaboration with Lisa Wymore and Sheldon Smith, #0 (a space opera) is a generatively choreographed dance performed within an autonomous installation.
The comedic and dystopian world presented in #0 blends the absurd with the profound, taking the viewer on an unexpected journey to a fictionalized future where humans have lost control of their technology. Playing mercilessly with the tropes of science fiction, the performers in #0 (a space opera) can no longer distinguish between the system and themselves.
The show is partly improvised in response to an actual computer system governing when and how certain parts of the piece will unfold. While the basic premise will always be the same, no two performances will be alike.
Using projection and audio, the installation system directs the dancers in various physical tasks. In one score, the system directs the dancers to develop snippets of movement through challenging, frustrating loops. In another, it attempts to re-teach the dancers human intimacy from its own abstract and ill-informed understanding of it.
The scores themselves are complex algorithms that enact real-time decision-making while producing generative graphics, creating movements and video that are different with every performance. The results vary greatly, consistently exposing new and unexpected dynamics between the dancers, the audience, and the system.
#0 was performed in 2014 at Motion Pacific in Santa Cruz; at CounterPULSE in San Francisco; and in 2015 at Joe Goode Annex in San Francisco.
Grains
May, 2013
A live audiovisual performance that expands the sonic energy residing in a single grain of sound.
Grains explores the visual and sonic amplification of domestic food grains, as well as their transformation from solid to liquid as they multiply.
In this first-time collaboration with Surabhi Saraf, I accompanied Surabhi’s audio performance with a visual performance of my own. Together we transformed the space into one pulsating with the sound of magnified granular energy.
The video was heavily influenced by Surabhi’s use of granular synthesis, looping, and mixing of live and pre-recorded sources. A simple interface to the software let me compose the video live, creating a bidirectional relationship between the audio and video and exposing creative possibilities that were unique to each performance.
The visuals written for this performance allow projection mapping of multiple live and pre-recorded video streams. A number of effects are applied to the video in real-time, giving the video a sometimes gauzy, meditative, repetitive feeling, and other times a glitchy, staccato, and grainy feel.
Grains was performed in 2013 at The Asian Art Museum in San Francisco, and again in 2015 at Wood Street Galleries in Pittsburgh, PA.
Parades and Changes
February, 2013
A revival of Anna Halprin’s pioneering performance.
First performed in 1965, Anna Halprin’s Parades and Changes pioneered the use of everyday movements and domestic rituals in dance, marking the onset of postmodern choreography. The dance revolves around a set of mundane tasks—unrolling giant sheets of plastic, stomping, interacting with the audience, handling objects, tearing paper, dressing and undressing. (From BAM)
Anna revived Parades and Changes in early 2013 with the original composer Morton Subotnick and Assistant Director Shinichi Iova-Koga. The development of the 2013 performance started in Anna’s Performance Laboratory long before we commenced rehearsals. In her usual fashion, the process started with open scores that had the performers generating resources for the choreography. As the performance matured, the score became increasingly specific and refined, codifying the raw materials found earlier in the process into fully realized choreography. The result was not just a rote re-performance of the original 1965 piece, but a largely new piece adapted to the new performers and to contemporary dance culture and expectations.
All photos are by Pak Han.
Threshold Hack
2011
A generative video installation processing a dance performance into an infinite stream of new compositions.
Threshold Hack is a generative video installation, projected at a scale that fills the viewer’s field of vision and at a resolution that allows for close inspection of details.
The video is presented alongside a series of still images selected from it. These stills freeze interstitial moments that would otherwise be missed by the viewer as they flicker by at a rate barely perceptible to the human eye.
The score for the computer (the software) was developed in a recursive process with the score for the dancer (the choreography)—each iteration of the dance suggesting new algorithms, and each new algorithm proposing new resources for the performance.
The core of the software is an optical flow algorithm that analyzes pixel-by-pixel differences between video frames in an effort to perceive the dance. The understanding of dance gleaned by this process is utterly superficial and provides a useful anti-pattern to work against in a search for human dance under the debris of progress.
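As a rough illustration (not the production code, which uses a full optical flow algorithm that also estimates the direction of motion), the pixel-by-pixel comparison underlying that analysis can be reduced to simple frame differencing:

```python
# Minimal sketch of per-pixel frame comparison: the total absolute
# difference between consecutive grayscale frames serves as a crude
# scalar measure of how much movement just occurred.

def motion_energy(prev, curr):
    """Sum of absolute per-pixel differences between two grayscale
    frames, given as nested lists of 0-255 intensity values."""
    return sum(
        abs(p - c)
        for prow, crow in zip(prev, curr)
        for p, c in zip(prow, crow)
    )
```

A measure this superficial registers any change in light as "dance", which is exactly the kind of anti-pattern the text describes working against.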