The Neural Aesthetic @ ITP-NYU, Fall 2018
Lecture 7: Conditional generative models [10/30/2018]
[Slides]
- Review of generative models (5:58)
- Conditioning generative models (18:05)
- Image-to-image translation (pix2pix) (25:34)
- pix2pix projects (29:30)
- Conditioning on face landmarks (37:54)
- Conditioning on pose (44:13)
- Drawing interface for pix2pix (50:54)
- pix2pix ping-ponging and feedback loops (51:50)
- Interactive interfaces and edge2landscapes (57:47)
- Unpaired image translation and CycleGAN (1:00:37)
- CycleGAN projects (1:04:08)
- Object detection (YOLO) and dense captioning (1:14:52)
- Image-to-text & text-to-image (1:19:16)
- Installing dataset-utils & pix2pix/CycleGAN (1:20:54)
- Extracting faces from a movie (1:32:32)
- Making a pix2pix edge-to-photo dataset (1:46:50)
- Training pix2pix on faces (1:55:17)
- Scraping a dataset for CycleGAN (2:04:31)
- Training CycleGAN to turn faces into clowns (2:16:00)
- Installing densecap (2:17:51)
- Short pix2pixHD tutorial & CycleGAN results (2:21:30)
- Captioning images with densecap (2:33:01)
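The conditioning idea running through the lecture (18:05 onward) can be sketched in a few lines. This is an illustrative toy, not the lecture's code: the "generator" below is just a random linear map, but it shows the plumbing that pix2pix and CycleGAN scale up to whole images — the condition is fed in alongside the noise, so the same noise yields different outputs for different conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(label, num_classes):
    """Encode an integer class label as a one-hot condition vector."""
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def generator(z, condition, W):
    """Toy generator: a linear map applied to [noise ; condition].
    (Hypothetical stand-in for a real conditional generator network.)"""
    x = np.concatenate([z, condition])
    return W @ x

noise_dim, num_classes, out_dim = 8, 3, 4
W = rng.standard_normal((out_dim, noise_dim + num_classes))

z = rng.standard_normal(noise_dim)
sample_for_class_0 = generator(z, one_hot(0, num_classes), W)
sample_for_class_1 = generator(z, one_hot(1, num_classes), W)

# Same noise, different condition -> different output: the condition
# steers generation. In pix2pix the "condition" is an entire input image
# (e.g. an edge map), and the output is the translated photo.
print(sample_for_class_0.shape)
```

In pix2pix the conditioning signal is a full image (edges, landmarks, pose skeletons), and the label vector above is replaced by image channels concatenated to the generator's input.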