Sep 29, 2018

Project description:
The piece presents 5 minutes 35 seconds of neural network-rendered video, at 6K resolution, exploring image histories as continuous spaces. It begins by sampling from Chicago art & design history (O'Keeffe, Paschke, Marshall, etc.) and ends with internet vernacular (cats, cheeseburgers, celebrity faces). The hope is to synthesize the incredible work coming out of AI research into an independent, standalone artwork requiring little explanation. This projection-mapped video (with audio track) was commissioned by the Terra Foundation for the inauguration of a public art program called Art on theMART. It was projected on the Merchandise Mart's 2.5-acre south wall in downtown Chicago from Sept 29, 2018 to Dec 31, 2018.

Assisted by: Esme Bajo, Kyle Bojanek, Jan Brugger, Ellie Hogeman, Isaac Nicholas

Tech notes: Almost the entirety of the piece was rendered by an in-house neural network architecture. We combine a progressively-trained GAN with a high-resolution autoencoder. The autoencoder's bottleneck provides a Z-vector prior, which also serves as a source of reconstruction loss. We call our network yGAN. Additionally, we built a versatile system (called Genmo) to decompose input video/animation into varied tilings, render those tiles into a new domain with our trained models, and recompose them. In this work, all rendered imagery is derived from carefully determined input video & animation. We ultimately trained 17 distinct models to render it.
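For flavor, here is a minimal sketch of that GAN/autoencoder coupling in PyTorch. Everything here - layer sizes, module names, the L1 reconstruction term - is a hypothetical illustration of the idea described above, not the studio's actual yGAN implementation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Autoencoder encoder; its bottleneck supplies the Z-vector prior."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, z_dim),                        # bottleneck
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """GAN generator, doubling as the autoencoder's decoder."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.fc = nn.Linear(z_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),   # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

enc, gen = Encoder(), Generator()
x = torch.rand(8, 3, 64, 64) * 2 - 1           # a toy batch of images in [-1, 1]
z = enc(x)                                     # bottleneck code = Z-vector prior
x_hat = gen(z)                                 # reconstruction through the generator
recon_loss = nn.functional.l1_loss(x_hat, x)   # added alongside the usual
                                               # adversarial (GAN) loss
```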
Dec 06, 2017

Project description:
Check out our iOS app! Also, check out our beta web live demo.

What is this? Genmo (short for Generative Mosaic) is a visual effects process that reconstructs any video (or photo) input using an entirely separate set of images. You upload a video, select a dataset to render it with, and get your recreated clip back in seconds.

How does it work? We use our own deep neural network architecture to process your video. No original video content remains after processing - and no filters - the new clip is completely synthesized by an AI model. Our architecture hybridizes a generative adversarial network (GAN) with other neural network models. It is feed-forward (contributing to its speed) and, unlike other popular AI-based image manipulation tools, it does not use a classifier network. Patent pending.

Have I seen this before? Not exactly. You've seen photomosaics. You may have seen style transfer and deep dream. You might even know the great Arcimboldo. This differs from those. You haven't seen video redrawn in near real-time from an arbitrarily chosen image set. You haven't seen total flexibility in mosaic tile size, arrangement, and shape - all selected at runtime. You definitely haven't seen the potential to recreate any input video with any other image dataset on demand. And much more is yet to come, including true real-time rendering.

Who are you guys? This is a project of LATENTCULTURE, an AI research & production venture based within Jason Salavon Studio on the University of Chicago campus. We are artists, programmers, and researchers investigating how deep learning and other generative systems can impact creative production. We welcome opportunities & partnerships to collaborate in this endeavor. Contact us at hey@latentculture.com.
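As a rough illustration of the decompose/render/recompose idea, here is a hypothetical sketch in Python/NumPy. The real Genmo pipeline, its tilings, and its model interface are not public; render_tile below is a stand-in for a trained model.

```python
import numpy as np

def mosaic(frame: np.ndarray, render_tile, tile: int = 32) -> np.ndarray:
    """Rebuild `frame` (H x W x 3) tile by tile, rendering each tile
    into a new visual domain with `render_tile` and recomposing."""
    h = (frame.shape[0] // tile) * tile   # crop to a whole number of tiles
    w = (frame.shape[1] // tile) * tile
    out = np.empty((h, w, 3), dtype=frame.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y + tile, x:x + tile] = render_tile(frame[y:y + tile, x:x + tile])
    return out

# Example with an identity "model": the frame comes back unchanged.
frame = np.random.rand(128, 192, 3).astype(np.float32)
result = mosaic(frame, render_tile=lambda t: t)
```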
Sep 13, 2016

Project description:
A process transforms images and photographs of patterns (rugs, tile work, etc.) into cartoonish, gloopy, painterly abstraction. There are 300; each will be printed only once.
Sep 13, 2016

Project description:
This video amalgamates every single frame from the first 26 seasons of The Simpsons (1989-2015, 574 episodes, 17.7M frames). The composition includes algorithmically mixed audio and combines both standard- and high-definition seasons. The project reconsiders a promise to stop making these types of pieces (begun in 1997) and belongs to a large suite of works (including murals, prints, books & video) reorganizing and manipulating episodes of The Simpsons - as data - in related but varied ways. The overall project represents the synthesis and unification of my amalgamation work (Playboys, Homes, etc.) with my color-averaged frame work (Titanic, EAO, etc.). By varying parameters, a single software process produces compositions spanning a huge breadth (all the ways) of data representation. Most importantly, it maps a contiguous space inhabited by these previously distinct styles. We're currently exploring these frames and audio tracks with deep networks... more to come.
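For a sense of the two poles of that space, here is a minimal sketch assuming OpenCV and NumPy. The file name is a placeholder, and this illustrates frame amalgamation and per-frame color averaging in general, not the actual production software.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("episode.mp4")   # placeholder path
acc, frame_colors, n = None, [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    f = frame.astype(np.float64)
    acc = f if acc is None else acc + f        # running sum -> amalgamation
    frame_colors.append(f.mean(axis=(0, 1)))   # mean BGR color of this frame
    n += 1
cap.release()
assert n > 0, "no frames decoded"

amalgam = (acc / n).astype(np.uint8)               # "every frame" averaged together
colors = np.array(frame_colors).astype(np.uint8)   # one color per frame, in order
cv2.imwrite("amalgam.png", amalgam)
```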
Sep 13, 2016

Project description:
These volumes present color information for every single episode from the first 26 seasons of The Simpsons (1989-2015, 574 episodes, 17.7M frames). Each page is a single episode, in a unique data representation, produced by a software process with varied input parameters. The project belongs to a large suite of works (including murals, prints, books & video) reorganizing and manipulating episodes of The Simpsons - as data - in related but varied ways. The overall project represents the synthesis and unification of my amalgamation work (Playboys, Homes, etc.) with my color-averaged frame work (Titanic, EAO, etc.). By varying parameters, the software process produces compositions spanning a huge breadth (all the ways) of data representation. Most importantly, it maps a contiguous space inhabited by these previously distinct styles. We're currently exploring these frames and audio tracks with deep networks... more to come.
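One plausible page layout, continuing the earlier sketch's per-frame colors: lay an episode's frame colors out as a grid, one cell per frame, row-major in time. Again hypothetical - the books' actual representations vary with the process's input parameters.

```python
import numpy as np
from PIL import Image

def page_from_colors(colors: np.ndarray, cols: int = 160, cell: int = 4) -> Image.Image:
    """colors: (n_frames, 3) uint8 RGB -> one grid image, one cell per frame."""
    rows = -(-len(colors) // cols)                     # ceil(n_frames / cols)
    grid = np.zeros((rows, cols, 3), dtype=np.uint8)
    grid.reshape(-1, 3)[:len(colors)] = colors         # fill cells in frame order
    big = np.kron(grid, np.ones((cell, cell, 1), dtype=np.uint8))  # enlarge cells
    return Image.fromarray(big)

# Roughly one episode's worth of frames (17.7M frames / 574 episodes ~ 30k)
colors = (np.random.rand(30000, 3) * 255).astype(np.uint8)
page_from_colors(colors).save("episode_page.png")
```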
Sep 13, 2016

Project description:
This print amalgamates every single "couch gag" from the first 26 seasons of The Simpsons. The project reconsiders a promise to stop making these types of pieces (begun in 1997) and belongs to a large suite of works (including murals, prints, books & video) that reorganize and manipulate episodes of The Simpsons - as data (1989-2015, 574 episodes, 17.7M frames) - in related but varied ways. The overall project represents the synthesis and unification of my amalgamation work (Playboys, Homes, etc.) with my color-averaged frame work (Titanic, EAO, etc.). By varying parameters, a single software process produces compositions spanning a huge breadth (all the ways) of data representation. Most importantly, it maps a contiguous space inhabited by these previously distinct styles. We're currently exploring these frames and audio tracks with deep networks... more to come.