Chapters and accompanying artworks:
- Introduction: Harold Cohen
- Machine learning: 00000
- Neural networks: Lovelace
- Looking inside neural nets: Brainbow
- How neural nets are trained: Topographic map
- Convnets: Convnets are everywhere
- Looking inside convnets: Visualization / Deepdream: Mike Tyka
- Style transfer: Mona Lisa series
- Generative models: Tom White VRAE
- Recurrent neural nets: MC Escher
"Artificial intelligence is anything computers can't do yet" - Douglas Hofstadter, paraphrasing Tesler's theorem
"Machine learning is applied computational statistics." - Chris Wiggins
Evidently, the meanings of the terms artificial intelligence and machine learning depend on whom you ask.
The term artificial intelligence has been in use since the 1950s, and originally referred to ___. The field has cycled between periods of great popular interest and periods of pessimism and decreased funding (the so-called "AI winters"), due in part to its failure to quickly deliver the techno-utopianism prognosticated by opportunistic speculators.
Pamela McCorduck attributes these periodic busts to the observation that "Practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures.'"
Similarly, common usage of the term "machine learning" goes back to the 1980s, but most of its core concepts trace back much further than the term itself. It became widely recognized as a field in its own right when approaches from statistics - especially data-driven computational statistics - began being applied to problems in AI.
ML has become so predominant in AI that the terms are often used interchangeably, both inside and outside of academia.
Emergence of deep learning
In the 2010s, many machine learning researchers who were working with multilayer ("deep") architectures - especially multilayer neural networks - began to use the term "deep learning."
Deep neural networks set new benchmarks in speech recognition in 2009 (citation) and image recognition in 2012 (citation) - beating the previous year's top systems by an unprecedented margin - and have since defined the state of the art in most areas of ML. In the last five years, most of the major scientific research centers have increased their funding of ML research, particularly deep learning. Much of this research has shifted away from university research centers and toward the major tech industry titans, including Facebook, Google, Microsoft, and Amazon. Apple, owing to its notorious secrecy, has largely fallen behind.
In 2015 Jürgen Schmidhuber wrote "A Critique of Paper by 'Deep Learning Conspiracy'", recasting the recent advances of deep learning as building on top of prior work with multilayer neural networks going back decades. His post provoked a great deal of commentary from numerous people, including very well-known researchers, sparking arguably the most academically rigorous internet flamewar of all time.
Deep learning shmeep learning
None of these terms have well-defined meanings, and connotations are inconsistently associated with them, especially within the academic community itself. We are interested in machines which do interesting things, and can do them now or may do so in the near future.
Core principles of this book
Plenty of academic courses exist already; this book is not trying to replace them. Rather, it tries to find a middle ground between science and magic, requiring as little background in computer science, engineering, and mathematics as possible. It aims to be methodical and easy to set up, and to come with as many documented examples as possible.
Machine learning is a fast-moving field and a quarter of the first draft of this book was obsolete by the time I finished writing it. I will try to keep up with the winds of change and welcome tips and contributions [github][twitter][email].
If you have suggestions, corrections, or links to relevant projects, post them to the issue tracker or chat with me.
In general, this book will try to minimize the use of math and rely on visual aids more than equations, both because neural networks can be well understood this way, and because it helps reduce the need for other qualifications. Nevertheless, you may still find it helpful to review the relevant math, in order to attach probabilistic and geometric interpretations to the concepts presented. Francis Tseng has a nice guide to AI and ML which contains a concise review of the math needed. [ some more links ]
One of the things we can largely do away with is the fine detail of training deep networks, allowing us to focus on applications instead (a vindication for me, as that's all I used to do).
- machine learning as a subject for artistic practice and interdisciplinary research
- These notes and the accompanying course will be less rigorous than CS231n and others. Those courses assume a certain level of mathematical proficiency, and the majority of their students have some background, if not a degree, in computer science. In contrast, this course will make as few assumptions as possible about background preparation. In practice, this necessarily entails some sacrifice; nothing beats having a ___. But many important aspects of machine learning can be well understood with nothing more than high school-level mathematics and good analogies and abstractions. It is both possible and desirable to engage with the subject at this level, for those who might otherwise not approach it at all. One example of this in practice is backpropagation.
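As a taste of how far high school-level math can go, here is a minimal, hypothetical sketch (plain Python, no libraries, with made-up numbers) of training a single neuron by gradient descent. The gradients are derived by hand using nothing beyond the chain rule; full backpropagation is essentially this same idea applied layer by layer.

```python
# Fit a single neuron y = w*x + b to points sampled from the line y = 2x + 1,
# using gradient descent with hand-derived gradients (the chain rule).
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0   # start from arbitrary parameters
lr = 0.01         # learning rate (step size)

for epoch in range(1000):
    dw = db = 0.0
    for x, target in data:
        y = w * x + b        # forward pass: the neuron's prediction
        error = y - target   # how far off the prediction is
        # chain rule on the squared error (y - target)^2:
        #   d/dw = 2 * error * x,   d/db = 2 * error
        dw += 2 * error * x
        db += 2 * error
    # step both parameters downhill along the averaged gradient
    w -= lr * dw / len(data)
    b -= lr * db / len(data)

print(round(w, 2), round(b, 2))  # approaches w=2, b=1
```

Running this for a few hundred epochs recovers the slope and intercept of the underlying line; nothing in the loop is beyond introductory calculus.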
Further resources:
- Linear algebra online class: https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab
- Eigenvectors and eigenvalues, interactive: http://setosa.io/ev/eigenvectors-and-eigenvalues/
- A 'Brief' History of Neural Nets and Deep Learning, Part 1: www.andreykurenkov.com/writing/a-brief-history-of-neural-nets-and-deep-learning/
- A visual introduction to machine learning: http://www.r2d3.us/visual-intro-to-machine-learning-part-1/
- A primer on deep learning: https://www.datarobot.com/blog/a-primer-on-deep-learning/
- DeepLearning.tv: https://www.youtube.com/channel/UC9OeZkIwhzfv-_Cb7fCikLQ
- Yann LeCun's introduction videos: https://code.facebook.com/pages/1902086376686983