Looking inside convnets

todo: header=dog conv filters

Visualizing activations

Review from previous convnet chapter
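As a concrete reminder of what an "activation" is, here is a minimal NumPy sketch (the toy image and vertical-edge filter are illustrative stand-ins, not weights from a trained net) of the convolution-plus-ReLU step that produces the activation maps we want to visualize:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as convnets use)."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    return np.maximum(0, x)

# toy 8x8 image with a vertical edge, and a vertical-edge filter
image = np.zeros((8, 8))
image[:, 4:] = 1.0
kernel = np.array([[-1., 0., 1.],
                   [-1., 0., 1.],
                   [-1., 0., 1.]])

# the resulting 6x6 activation map responds strongly along the edge;
# this 2D array is exactly what gets rendered as a grayscale image
activation = relu(conv2d(image, kernel))
```

Displaying `activation` with an image viewer (e.g. matplotlib's `imshow`) gives the familiar activation-map pictures: bright where the filter's pattern appears in the input.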

Visualizing weights

todo: alexnet-firstlayer-filters.jpg, alexnet-conv5-filters.jpg

todo: weights images from umontreal
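The weight images above are usually assembled the same way: normalize each filter to [0, 1] independently and tile the filter bank into one grid. A hedged sketch, with a hypothetical `tile_filters` helper and random stand-in weights in place of learned conv1 filters:

```python
import numpy as np

def tile_filters(weights, pad=1):
    """Tile a bank of filters (n, h, w) into one image grid for display.

    Each filter is normalized to [0, 1] on its own, as is commonly done
    when plotting first-layer weights, so low-contrast filters stay visible.
    """
    n, h, w = weights.shape
    cols = int(np.ceil(np.sqrt(n)))
    rows = int(np.ceil(n / cols))
    grid = np.zeros((rows * (h + pad) - pad, cols * (w + pad) - pad))
    for idx in range(n):
        f = weights[idx]
        f = (f - f.min()) / (f.max() - f.min() + 1e-8)  # per-filter normalize
        r, c = divmod(idx, cols)
        grid[r*(h+pad):r*(h+pad)+h, c*(w+pad):c*(w+pad)+w] = f
    return grid

filters = np.random.randn(16, 5, 5)   # stand-in for learned conv1 weights
grid = tile_filters(filters)          # one displayable image of all 16 filters
```

For first-layer filters this works directly because the weights live in pixel space; deeper layers need the indirect methods discussed later in the chapter.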

Retrieving images which maximally activate neurons

todo: zeiler/fergus visualizing what neurons learn, image ROIs which maximally activate neurons
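The retrieval step itself is simple once each neuron has a scalar response per image (e.g. the max over its activation map): rank the dataset by that response and keep the top k. A sketch under that assumption, with random stand-in responses and a hypothetical helper name:

```python
import numpy as np

# Pretend responses: one scalar per (image, neuron), e.g. the max of each
# neuron's activation map computed over a whole dataset.
rng = np.random.default_rng(0)
responses = rng.standard_normal((1000, 64))   # 1000 images, 64 neurons

def top_images_for_neuron(responses, neuron, k=9):
    """Indices of the k dataset images that most strongly activate a neuron."""
    order = np.argsort(responses[:, neuron])[::-1]   # descending by response
    return order[:k]

top9 = top_images_for_neuron(responses, neuron=0)
```

Showing the image crops (or receptive-field ROIs) at those indices, neuron by neuron, gives the Zeiler/Fergus-style grids of what each neuron responds to.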

Occlusion experiments

todo: occlusion experiments, zeiler/fergus visualizing/understanding convnets https://cs231n.github.io/understanding-cnn/
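The occlusion experiment reduces to a sliding window: gray out a patch at each position, re-score the image, and plot the scores as a heatmap — regions where the score collapses are the ones the classifier depends on. A minimal sketch with a toy stand-in scorer (a real version would call the network's class probability instead):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=3, stride=1, fill=0.5):
    """Slide a gray patch over the image and record the classifier score
    at each position. Low scores mark regions the classifier relies on."""
    h, w = image.shape
    heat = np.zeros(((h - patch)//stride + 1, (w - patch)//stride + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i*stride:i*stride+patch, j*stride:j*stride+patch] = fill
            heat[i, j] = score_fn(occluded)
    return heat

# toy "classifier": the score is just the brightness of the image center,
# so the heatmap should dip exactly when the center is occluded
image = np.zeros((9, 9))
image[3:6, 3:6] = 1.0
score = lambda im: im[3:6, 3:6].mean()

heat = occlusion_map(image, score)
```

The same loop works unchanged for a real convnet by replacing `score` with the softmax probability of the true class, which is how the Zeiler/Fergus figures are produced.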

Deconv / guided backprop

todo: deconv, guided backprop, deepvis toolbox

todo: inceptionism class viz, deepdream

deconvnets: http://cs.nyu.edu/~fergus/drafts/utexas2.pdf
Zeiler talk: https://www.youtube.com/watch?v=ghEmQSxT6tw
Saliency Maps and Guided Backpropagation on Lasagne: https://github.com/Lasagne/Recipes/blob/master/examples/Saliency%20Maps%20and%20Guided%20Backpropagation.ipynb
multifaceted feature visualization: https://arxiv.org/pdf/1602.03616v1.pdf
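The entire difference between vanilla backprop and guided backprop is in how the gradient crosses each ReLU. Vanilla backprop passes gradient wherever the forward input was positive; guided backprop additionally zeroes negative incoming gradients, so only signals that would increase the activation flow back to the image. A sketch of just that rule (function names are illustrative):

```python
import numpy as np

def relu_backward_vanilla(grad_out, x):
    """Ordinary backprop through ReLU: pass gradient where the input was positive."""
    return grad_out * (x > 0)

def relu_backward_guided(grad_out, x):
    """Guided backprop: additionally zero out negative incoming gradients,
    keeping only signals that *increase* the activation."""
    return grad_out * (x > 0) * (grad_out > 0)

x = np.array([-1.0, 2.0, 3.0, 0.5])      # forward-pass inputs to the ReLU
grad = np.array([0.7, -0.3, 0.9, 0.2])   # gradients arriving from above

vanilla = relu_backward_vanilla(grad, x)  # keeps the -0.3
guided = relu_backward_guided(grad, x)    # drops it
```

Applying the guided rule at every ReLU while backpropagating a single neuron's activation down to pixel space is what produces the much cleaner saliency images in the papers linked above.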

Visualizing classes

auduno, visualizing GoogLeNet classes: http://www.auduno.com/2015/07/29/visualizing-googlenet-classes/
auduno, peeking inside convnets: http://www.auduno.com/2016/06/18/peeking-inside-convnets/
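Class visualizations like these come from gradient ascent on the *input*: start from noise (or zeros), repeatedly nudge the image in the direction that raises the target class score, and regularize to keep it plausible. A sketch of the loop with a toy linear classifier standing in for a trained net (all names and constants are illustrative):

```python
import numpy as np

# Toy stand-in for a trained net: a linear classifier over 8x8 "images".
rng = np.random.default_rng(1)
W = rng.standard_normal((10, 64))   # 10 classes, 64 input pixels

def class_score(x, c):
    return W[c] @ x

x = np.zeros(64)      # start from a blank image
target = 3
lr = 0.1
for _ in range(100):
    # for this linear model the gradient of the class score w.r.t.
    # the input is just W[target]; a real net would use backprop here
    x += lr * W[target]
    x -= 0.01 * x     # L2 decay regularizes the image, keeping it bounded
```

With a deep net in place of `W`, the same loop (plus stronger regularizers such as blurring and jitter) yields the dream-like class images in the posts above and in DeepDream.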

Neural nets are easily fooled

todo: neural nets are easily fooled
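The fooling construction is the class-visualization loop run adversarially: perturb a correctly classified image along the gradient of a *wrong* class's score until the label flips. A sketch on a toy two-class linear model, where the minimal max-norm step can be computed exactly (all names are illustrative; a deep net would use backprop for the gradient):

```python
import numpy as np

# Toy two-class linear "classifier" standing in for a trained net.
rng = np.random.default_rng(2)
W = rng.standard_normal((2, 256))

def predict(x):
    return int(np.argmax(W @ x))

x = 0.1 * W[0]                 # an input the model labels as class 0
scores = W @ x
gap = scores[0] - scores[1]    # distance from the decision boundary

# FGSM-style perturbation: step along the sign of the gradient of the
# (class-1 minus class-0) score. For a linear model the smallest
# max-norm step that flips the label is known in closed form:
grad = W[1] - W[0]
eps = 1.01 * gap / np.abs(grad).sum()
x_adv = x + eps * np.sign(grad)   # now classified as class 1
```

In a toy model the required `eps` is not tiny, but for real high-dimensional deep nets the label-flipping perturbation is small enough to be imperceptible, which is what makes the fooling results so striking.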

Etc

todo: notes on performance

todo: attention, localization

Further reading

https://cs231n.github.io/understanding-cnn/

Keras, how convolutional neural networks see the world: https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html

Yosinski, Deep Visualization Toolbox: http://yosinski.com/deepvis (video: https://www.youtube.com/watch?v=AgkfIQ4IGaM)

https://youtu.be/XTbLOjVF-y4?t=12m48s