Genmo & yGAN   2017
Neural network software architecture

Check out our iOS app.


What is this?

Genmo (short for Generative Mosaic) is a visual effects process that reconstructs any video (or photo) input using an entirely separate set of images. You upload a video, select a dataset to render it with, and get your recreated clip back in seconds.

How does it work?

We use our own deep neural network architecture to process your video. No original video content remains after processing, and no filters are applied; the new clip is synthesized entirely by an AI model.

Our architecture hybridizes a generative adversarial network (GAN) with other neural network models. It is feed-forward (which contributes to its speed), and, unlike other popular AI-based image-manipulation tools, it does not rely on a classifier network. Patent pending.
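The architecture itself is proprietary, so the sketch below is purely illustrative: it shows what "feed-forward" means in this context. A feed-forward generator produces its output in a single pass through the layers, with no per-frame optimization loop (unlike classic style transfer), which is why single-pass synthesis is fast. The `conv2d` helper, layer count, and kernel sizes here are hypothetical, not Genmo's.

```python
import numpy as np

def conv2d(x, w):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = w.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def generator(frame, weights):
    """One feed-forward pass: each layer is convolution + ReLU.

    There is no iterative optimization per frame -- the output falls out
    of a single pass, so cost is fixed and known in advance.
    """
    x = frame
    for w in weights:
        x = np.maximum(conv2d(x, w), 0.0)  # ReLU nonlinearity
    return x

rng = np.random.default_rng(0)
frame = rng.random((16, 16))                       # a tiny stand-in "video frame"
weights = [rng.standard_normal((3, 3)) for _ in range(2)]  # two hypothetical layers
out = generator(frame, weights)
print(out.shape)  # (12, 12): each 3x3 valid conv trims 2 pixels per side
```

In a real system the generator would be trained adversarially against a discriminator and run on a GPU; the point of the sketch is only the single forward pass per frame.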

Have I seen this before?

Not exactly. You’ve seen photomosaics. You may have seen style transfer and Deep Dream. You might even know the great Arcimboldo. Genmo differs from all of them.

You haven’t seen video redrawn at near real-time from an arbitrarily chosen image set. You haven’t seen total flexibility in mosaic tile size, arrangement, and shape, all selected at runtime. You definitely haven’t seen the ability to recreate any input video with any other image dataset on demand. And much more is yet to come, including true real-time rendering.

Who are you guys?

This is a project of LATENTCULTURE, an AI research & production venture based within Jason Salavon Studio on the University of Chicago campus. We are artists, programmers, and researchers investigating how deep learning and other generative systems can impact creative production. We welcome opportunities & partnerships to collaborate in this endeavor. Contact us at