
Deep generative models for image and text generation

Description


Can we program a computer to be creative? Many of us dream of painting a portrait or writing poetry, and even if you are not a skilled painter or writer, you can now rely on generative modeling, just like us. Thanks to recent advances in generative modeling, computers can paint, write, produce movies, and compose music. In this workshop we will demonstrate the principles and architectures of selected generative models for text and images.

In the first part of the workshop, we will focus on image generation and manipulation using Variational Autoencoders (VAEs). The VAE is one of the most fundamental and well-established deep learning architectures for generative modeling. We will describe the principles of the VAE architecture and its advantages over a simple autoencoder, then guide the audience through building a VAE from scratch to generate images and to morph between them. A small sketch of what such a model looks like follows below.
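To give a flavor of what we will build, here is a minimal VAE sketch in PyTorch. The framework, layer sizes, and the 784-pixel (28x28) input are our assumptions for illustration, not necessarily what the workshop notebooks use.

```python
# Minimal VAE sketch (illustrative; framework and sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder maps an image to the parameters of a Gaussian in latent space.
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder maps a latent sample back to pixel space.
        self.fc2 = nn.Linear(latent_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.fc1(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients flow through the sampling step.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = F.relu(self.fc2(z))
        return torch.sigmoid(self.fc3(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior.
    bce = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# Generating a new image is just decoding a random latent vector;
# morphing interpolates between the latent codes of two encoded images.
model = VAE()
with torch.no_grad():
    sample = model.decode(torch.randn(1, 20))  # one generated 784-pixel image
```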

In the second part of the workshop, we will generate our own stories. We will first cover the principles and architecture of Recurrent Neural Networks (RNNs) and look at how these models can be used to generate text. We will then describe the Long Short-Term Memory (LSTM) network, another type of RNN, observe its advantages over a basic RNN, and see how it improves our generated text. A short sketch of this kind of model appears below.
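As a preview, here is a minimal character-level LSTM text generator in PyTorch. Again, the framework, vocabulary size, and sampling loop are illustrative assumptions rather than the exact workshop code.

```python
# Minimal character-level LSTM sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The LSTM's gated cell state retains long-range context
        # that a plain RNN tends to forget.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, hidden=None):
        emb = self.embed(x)                   # (batch, seq_len, embed_dim)
        out, hidden = self.lstm(emb, hidden)  # (batch, seq_len, hidden_dim)
        return self.fc(out), hidden           # logits over the next character

def sample(model, start_idx, length):
    # Generate text one character at a time, feeding each prediction back in.
    model.eval()
    idx = torch.tensor([[start_idx]])
    hidden = None
    generated = [start_idx]
    with torch.no_grad():
        for _ in range(length):
            logits, hidden = model(idx, hidden)
            probs = torch.softmax(logits[:, -1, :], dim=-1)
            idx = torch.multinomial(probs, num_samples=1)
            generated.append(idx.item())
    return generated

model = CharLSTM(vocab_size=128)
print(sample(model, start_idx=65, length=20))  # untrained model, so random characters
```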

We will provide notebooks. All you need is a laptop and basic Python knowledge!

