{"id":226,"date":"2020-11-19T11:20:59","date_gmt":"2020-11-19T09:20:59","guid":{"rendered":"https:\/\/artxai.uwas.aalto.fi\/?page_id=226"},"modified":"2020-11-19T18:12:30","modified_gmt":"2020-11-19T16:12:30","slug":"michael-mccrea","status":"publish","type":"page","link":"https:\/\/artxai.uwas.aalto.fi\/michael-mccrea\/","title":{"rendered":"Michael McCrea – Rover (falling)"},"content":{"rendered":"\n

Rover is an iterative series of artworks exploring the potential and peculiarities of machine perception through a peripatetic imaging device [1]. Through light field photography, machine listening, and machine vision, Rover creates a variable representation of place that is at once familiar to the human viewer, yet estranged. Throughout its travels, Rover captures dense sets of data to be computationally explored and reimagined, offering new perspectives at the edges of our collective schemes of meaning-making.

Whereas previous iterations of the project consisted solely of data internal to Rover’s sensory facilities (on-site image and sound capture), the introduction of semantic image captioning in Rover, falling expands its perceptual set to intersect with our own. A prescribed understanding of where meaning lies in an image is now imposed on the system’s logic through pre-trained machine learning networks infused with the labours and observations of people. The results, however, are far from accurate, sensible, or even “useful”. This expands the space where the sense-making of Rover and the human interlocutor become entangled, and where new forms and strategies for understanding must emerge.
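The text does not document Rover's actual captioning pipeline. As a point of reference only, the sketch below shows how semantic image captioning with a pre-trained network is commonly run, assuming the Hugging Face transformers library and the BLIP captioning model; the input filename is illustrative, not part of the work.

```python
# A minimal captioning sketch, assuming Hugging Face transformers and the
# pre-trained BLIP model. This is NOT the artist's pipeline, only an
# illustration of the general technique the text describes.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

MODEL = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(MODEL)
model = BlipForConditionalGeneration.from_pretrained(MODEL)

# Hypothetical on-site capture from the imaging device.
image = Image.open("rover_frame.jpg").convert("RGB")

# Encode the image, generate a caption, and decode it back to text.
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(output_ids[0], skip_special_tokens=True)
print(caption)
```

Because such models are trained on large corpora of human-labelled images, their captions carry the "labours and observations of people" the text refers to, and they often misfire on unfamiliar scenes, which is precisely the estrangement the work foregrounds.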
