RGBFAQ (2020) 27’38” video essay, subtitled. Multimedia installation, 3D projection mapping, sculpture.
RGBFAQ traces the trajectory of computer graphics from WW2 to Bell Labs in the 1960s, and from the visual effects studios of the 1990s to the GPU-assisted algorithms of the latest machine learning models. The story culminates with the emergence of the synthetic dataset: computer-generated images used as ‘ground truth’ for training computer vision algorithms. Synthetic data is increasingly sought after as a ‘clean’ alternative to real-world datasets, which are often biased, unethically sourced or expensive to create. And while CGI data seems to avoid many of these pitfalls, my argument asks from the outset whether the virtual world is as clean and stable as we think. I try to catalogue the ‘hacks’ used to construct simulated worlds and suggest that the solutions of early computer graphics create a technical debt: less-than-ideal material on which to build the foundations of yet another generation of technology.
RGBFAQ excavates these foundations, bringing forth a battery of forensic evidence that undermines what we might think of as the image, supplying instead the far more unpredictable, colourful concept of the ‘exploded image’: a mode of seeing that, as I try to demonstrate, originates in the tricky render economics of the early 2000s but, like many new technologies, has unexpected applications in surveillance, entertainment and behavioural science.
The concept of the exploded image (or the alternative ‘hyperimage’) articulates the slipperiness inherent in all discussion of digital aesthetics. The image discussed in RGBFAQ bridges XYZ and RGB, space and colour, data and aesthetics, machine and human, weapon and tool. I try to make it clear that while many might mistake a contemporary image for a plain, traditional photograph, it has long been something far more than that: a decoy, a classically Baudrillardian simulacrum.
If RGBFAQ concludes with a sense of the power of digital imaging technologies (and perhaps the feeling of an accident waiting to happen), it is also an attempt to speak of wonder and empowerment: the kaleidoscopic possibilities for interpretation, intervention and synthesis that the exploded image allows.
CREDITS
Commissioned by arebyte, with support from Arts Council England
Research, Script, Animation, Exhibition Design: Alan Warburton
Audio: David Kamp
Animation Support: Tom Pounder & Nikolas Diamant
Additional (intro) Music: Jacob Samuel
Projection Mapping and Lighting: Gorka Cortazar & Blanca Regina
Installation photography: Max Colson
Research Support: Frank-Ratchye STUDIO for Creative Inquiry, Carnegie Mellon University; CHASE; Vasari Research Centre for Art and Technology, Birkbeck.
Thanks: Golan Levin, Joel McKim, Katrina Sluis, Catherine Grant, Jacob Gaboury.
RESPONSES TO THE SHOW
CLICK HERE for full script and a catalogue essay by Deborah Levitt.
CLIP CREDITS
All clips, where known, are credited and linked. Extra thanks go to Jeff Quitney’s invaluable archive, which you can see at quickfound.net or on Vimeo. This video essay falls under the guidelines of fair use. Fair use, free use and fair practice are frameworks designed to allow the lawful use or reproduction of work without having to seek permission from the copyright owner(s) or creator(s), and without infringing their interests. As such, this video may contain copyrighted content not authorized for use by the owner. If you think I’ve missed something or would like your video to be credited differently, please get in touch and I’d be glad to amend these credits.
Chapter 000: Origins
Computer History: ENIAC becomes a reality - origins of a Giant Computer (1946)
Courtesy of Jeff Quitney
Largest Computer Ever Built: “On Guard - The Story of SAGE” 1956 IBM
Stay Safe, Stay Strong: The Facts About Nuclear Weapons (1960)
Courtesy of Jeff Quitney
Ballistics: “Fundamentals of Ballistics” 1948 US Army Training Film TF9-1512
Courtesy of Jeff Quitney
Basic Mechanisms In Fire Control Computers: Solvers, Integrators, Multipliers 1953 US Navy Training Film MN-6783b
Courtesy of Jeff Quitney
Basic Mechanisms In Fire Control Computers: Shafts, Gears, Cams 1953 US Navy Training Film MN-6783a and Doppler Radar Navigation: “Eyes of Flight” ~1959 Ryan AN-APN-122(V)
Above the horizon, by American Meteorological Society
Eulerian and Lagrangian Descriptions in Fluid Mechanics, MIT, National Science Foundation, uploaded by Barry Belmont
Solar radiation 2: The earth's atmosphere, by American Meteorological Society
British Pathé: Two New Wind Tunnels (1960)
Courtesy of Jeff Quitney
Radar and Its Applications 1962 US Army Training Film; Uses of Radar
Thor - the IRBM (1959), Air Force Space & Missile Museum Foundation
Rise of the Terminators - Military Artificial Intelligence (AI) | Weapons that think for Themselves
Progressive Growing of GANs for Improved Quality, Stability, and Variation & Augmenting Radiology with AI
Here comes a new era of AI detection! (Facebook detectron Open source)
How To Use EXR Passes Like The Pros | After Effects Tutorial | ActionVFX Quick Tips
Computer Games Empower Deep Learning Research | Two Minute Papers #105
Machine Learning - Part 2 - Detecting the Human Face - Flame 2020
G-Buffer recording with ReShade custom shader | Watch_Dogs | Ray Tracing | 4K
Chapter XYZ: A New Worldspace
Chapter VFX: The Render Gap
Turning Human Motion-Capture into Realistic Apes in Dawn of the Planet of the Apes | WIRED
Avengers: Endgame | Professor Hulk VFX
Chapter GPU: Back to Black
Mobile object detection - TensorFlow Lite SSD MobileNet (Samsung Note 9)
NVGaze: An Anatomically-Informed Dataset for Low-Latency, Near-Eye Gaze Estimation
Research at NVIDIA: Medical Image Synthesis for Data Augmentation and Anonymization Using GANs
[CVPR'17] SURREAL synthetic training results on Youtube Pose
Realtime Multi-Person 2D Human Pose Estimation using Part Affinity Fields
Computer Games Empower Deep Learning Research | Two Minute Papers #105
G-Buffer recording with ReShade custom shader | Watch_Dogs | Ray Tracing | 4K