Michael McCrea – Rover (falling)

Rover is an iterative series of artworks exploring the potential and peculiarities of machine perception through a peripatetic imaging device. [1] Through light field photography, machine listening, and machine vision, Rover creates a variable representation of place that is at once familiar to the human viewer, yet estranged. Throughout its travels, Rover captures dense sets of data to be computationally explored and reimagined, offering new perspectives at the edges of our collective schemes of meaning-making.

Whereas previous iterations of the project drew solely on data internal to Rover’s sensory faculties (on-site image and sound capture), the introduction of semantic image captioning in Rover (falling) expands its perceptual set to intersect with our own. A prescribed understanding of where meaning lies in an image is now imposed on the system’s logic through pre-trained machine learning networks infused with the labours and observations of people. However, the results are far from accurate, sensible, or even “useful”. This expands the space where the sense-making of Rover and that of the human interlocutor become entangled, and where new forms and strategies for understanding must emerge.

Contributors:

Sound and video composed and arranged by Michael McCrea.

The text was given voice by Laura Cemin.
www.lauracemin.com

Rover is an ongoing exploration by Robert Twomey and Michael McCrea.
www.roberttwomey.com

Tools used:

Rover’s imaging device consists of a Raspberry Pi, a custom motor gantry driven by an Arduino, the open-source CNC motion controller Grbl (https://github.com/grbl/grbl), and SuperCollider (https://supercollider.github.io).
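As a rough sketch of how such a motion-control layer can work, the following Python fragment streams G-code to a Grbl-flashed Arduino over serial, stepping a camera through a capture grid. The port name, grid dimensions, feed settings, and dwell time are illustrative assumptions, not details of Rover itself:

    import time
    import serial  # pyserial

    PORT = "/dev/ttyACM0"  # hypothetical device path

    def send_line(grbl, line):
        # Send one G-code line and block until Grbl acknowledges it.
        grbl.write((line + "\n").encode("ascii"))
        while True:
            resp = grbl.readline().decode("ascii").strip()
            if resp == "ok":
                return
            if resp.startswith("error"):
                raise RuntimeError(f"Grbl rejected {line!r}: {resp}")

    with serial.Serial(PORT, 115200, timeout=1) as grbl:
        grbl.write(b"\r\n\r\n")      # wake Grbl
        time.sleep(2)                # allow the controller to initialise
        grbl.reset_input_buffer()
        send_line(grbl, "G21")       # units: millimetres
        send_line(grbl, "G90")       # absolute positioning
        # Step through a coarse 5 x 5 grid, pausing at each node so a
        # frame can be captured for later light field reconstruction.
        for y in range(5):
            for x in range(5):
                send_line(grbl, f"G0 X{x * 10} Y{y * 10}")
                time.sleep(0.5)      # stand-in for triggering the camera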

Light fields are reconstructed using VisualSFM (Visual Structure from Motion, http://ccwu.me/vsfm/), re-synthesized using openFrameworks (https://openframeworks.cc), and temporally structured with SuperCollider.
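One basic operation a light field re-synthesis stage can perform is synthetic-aperture refocusing: each grid view is shifted in proportion to its camera offset and the results are averaged (“shift-and-add”). The Python sketch below illustrates the idea on synthetic data; the array shapes and focal parameter are assumptions, and the project’s actual openFrameworks renderer is not reproduced here:

    import numpy as np

    def refocus(views, alpha):
        # views: (gy, gx, H, W) array of grayscale frames captured on a
        # regular gy x gx camera grid; alpha: pixels of shift per unit
        # of grid offset, which sets the synthetic focal plane.
        gy, gx, h, w = views.shape
        cy, cx = (gy - 1) / 2, (gx - 1) / 2
        out = np.zeros((h, w))
        for v in range(gy):
            for u in range(gx):
                dy = int(round(alpha * (v - cy)))
                dx = int(round(alpha * (u - cx)))
                out += np.roll(views[v, u], shift=(dy, dx), axis=(0, 1))
        return out / (gy * gx)

    # Toy usage: a 5 x 5 grid of random 64 x 64 frames; varying alpha
    # sweeps the plane of focus through the reconstructed scene.
    grid = np.random.rand(5, 5, 64, 64)
    image = refocus(grid, alpha=1.5)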

Machine-vision text was generated using DenseCap (https://cs.stanford.edu/people/karpathy/densecap/), a fully convolutional localization network for natural-language dense captioning of images by Justin Johnson, Andrej Karpathy, and Li Fei-Fei.
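A caption-driven piece then has to read the network’s output back into its own logic. The sketch below filters per-region captions by confidence score; the JSON shape assumed here (parallel “captions” and “scores” lists per image) is an illustrative assumption, not a documented contract of the DenseCap repository:

    import json

    def top_captions(path, threshold=1.0):
        # Yield (caption, score) pairs above a confidence threshold.
        with open(path) as f:
            data = json.load(f)
        for result in data["results"]:
            for caption, score in zip(result["captions"], result["scores"]):
                if score >= threshold:
                    yield caption, score

    # "results.json" is a hypothetical output file name.
    for caption, score in top_captions("results.json"):
        print(f"{score:5.2f}  {caption}")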

Algorithmic sound processing was performed using SuperCollider.
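The SuperCollider processing itself is not reproduced here, but as a minimal illustration of the kind of audio feature extraction the project describes [1], the following Python sketch computes a per-frame spectral centroid, the magnitude-weighted “centre of mass” of the spectrum. The window and hop sizes and the test signal are illustrative:

    import numpy as np

    def spectral_centroid(signal, sr, n_fft=2048, hop=512):
        # Per-frame spectral centroid in Hz.
        window = np.hanning(n_fft)
        freqs = np.fft.rfftfreq(n_fft, d=1 / sr)
        centroids = []
        for start in range(0, len(signal) - n_fft, hop):
            frame = signal[start:start + n_fft] * window
            mag = np.abs(np.fft.rfft(frame))
            total = mag.sum()
            centroids.append((freqs * mag).sum() / total if total > 0 else 0.0)
        return np.array(centroids)

    # Toy usage: a 440 Hz sine yields centroids near 440 Hz.
    sr = 44100
    t = np.arange(sr) / sr
    print(spectral_centroid(np.sin(2 * np.pi * 440 * t), sr)[:4])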

[1] R. Twomey and M. McCrea, “Transforming the Commonplace through Machine Perception: Light Field Synthesis and Audio Feature Extraction in the Rover Project,” Leonardo, vol. 50, no. 4, pp. 400–408, Aug. 2017, doi: 10.1162/LEON_a_01458.
