The Neural Aesthetic @ ITP-NYU, Fall 2018
Lecture 3: How neural nets are trained [9/18/2018]
[Slides]
- Introduction, announcements (0:00)
- Review of supervised learning pipeline (5:41)
- Why training is hard (10:06)
- Linear regression (15:33)
- Gradient descent (20:18); a minimal code sketch follows this list
- Calculating the gradient, backpropagation (31:32)
- The problem of non-convexity, SGD, and mini-batches (38:31)
- Momentum and adaptive optimizers (44:29)
- Overfitting and regularization, dropout (48:52)
- Further reading & questions (56:01)
- Overview of ml4a-ofx (1:03:40)
- Demo of ConvnetPredictor (webcam transfer learning) (1:08:12)
- Communicating between ConvnetPredictor and Processing (1:19:36); an OSC listener sketch follows this list
- Controlling a generative art sketch with regression (1:32:53)
- Controlling Ableton Live with ConvnetPredictor (1:40:24)
- Demo of DoodleClassifier (1:48:48)
- Demo of AudioClassifier and controlling the keyboard with sound (offline) (1:59:27)
- Summary and comparison of tools (2:08:35)
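
To make the gradient descent portion of the lecture concrete, here is a minimal sketch of fitting a line y = wx + b to toy data with mini-batch gradient descent on a mean squared error loss. It illustrates the general technique rather than reproducing code from the lecture; the data, learning rate, and batch size are arbitrary choices.

```python
import numpy as np

# Toy data: points scattered around y = 2x + 1 (arbitrary choice for illustration)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=200)

w, b = 0.0, 0.0          # parameters to learn
lr = 0.1                 # learning rate
batch_size = 32

for epoch in range(100):
    # Shuffle and split into mini-batches (the SGD-with-mini-batches idea)
    idx = rng.permutation(len(x))
    for start in range(0, len(x), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = x[batch], y[batch]

        # Forward pass: predictions and error against the targets
        err = (w * xb + b) - yb

        # Gradients of the mean squared error with respect to w and b
        grad_w = 2.0 * np.mean(err * xb)
        grad_b = 2.0 * np.mean(err)

        # Gradient descent step: move the parameters against the gradient
        w -= lr * grad_w
        b -= lr * grad_b

print(f"learned w={w:.3f}, b={b:.3f}  (target was w=2, b=1)")
```

The gradients here are worked out by hand for the two-parameter case; backpropagation is the same chain-rule computation organized so it scales to networks with many layers.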
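
For the ConvnetPredictor-to-Processing segment, the ml4a-ofx apps pass their outputs to other programs as OSC messages. The receiver in the lecture is a Processing sketch; the snippet below sketches an equivalent listener in Python using the python-osc package, just to show the shape of the exchange. The port and address pattern are placeholders and should be matched to the OSC settings shown in the app.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Placeholder values: match these to the OSC settings displayed in the app.
PORT = 8000
ADDRESS = "/prediction"

def on_message(address, *values):
    # values carries whatever the app sends, e.g. a class label or
    # regression outputs, which can then drive a sketch or a synth.
    print(address, values)

dispatcher = Dispatcher()
dispatcher.map(ADDRESS, on_message)

server = BlockingOSCUDPServer(("127.0.0.1", PORT), dispatcher)
print(f"Listening for OSC messages on port {PORT} ...")
server.serve_forever()
```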