Introduction

Introduction: Harold Cohen

Over the last two years, deep learning has made inroads into domains of interest to artists, designers, musicians, and the like. Combined with the appearance of powerful open source frameworks and the proliferation of public educational resources, this once esoteric subject has become accessible to far more people, facilitating numerous innovative hacks and art works. The result has been a virtuous circle, wherein public art works help motivate further scientific inquiry, in turn inspiring ever more creative experimentation.

[Image: Harold Cohen's drawing turtle - http://diccan.com/Images/Cohen_tortue.jpg (see http://diccan.com/History_1970.html)]

"Artificial intelligence is anything computers can't do yet" - Douglas Hofstadter, re: Tesler theorem

"Machine learning is applied computational statistics." - Chris Wiggins

Evidently, the meaning of the terms artificial intelligence and machine learning depends on who you ask.

The term artificial intelligence has been in use since the 1950s, and originally referred to ___. It has been prone to alternating periods of great popular interest and of pessimism and declining interest, in part due to its failure to quickly achieve the techno-utopianism prognosticated by opportunistic speculators.

Pamela McCorduck attributes these periodic busts to the pattern in which "Practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures.'"

Similarly, common usage of the term "machine learning" goes back to the 1980s, but most of its core concepts trace back much further than the term itself. It became widely recognized as a field in its own right when approaches from statistics - especially data-driven computational statistics - began being applied to problems in AI.

ML has become so predominant in AI that the terms are often used interchangeably, both inside and outside of academia.

Emergence of deep learning

In the 2010s, many machine learning researchers who were working with multilayer ("deep") architectures - especially multilayer neural networks - began to use the term "deep learning."

Deep neural networks set new benchmarks in speech recognition in 2009 (citation) and image recognition in 2012 (citation) - beating the previous year's top systems by an unprecedented margin - and have since come to define the state of the art in most areas of ML. In the last 5 years, most of the major scientific research centers have increased their funding of ML research, particularly deep learning. Much of the research has also shifted away from university research centers and toward the major tech industry titans, including Facebook, Google, Microsoft, and Amazon. Owing to its notorious secrecy, Apple has largely fallen behind.

In 2015, Jürgen Schmidhuber wrote "A Critique of Paper by 'Deep Learning Conspiracy'", recasting the recent advances of deep learning as building on prior work with multilayer neural networks going back decades. His post provoked a great deal of commentary from numerous people, including several very well-known researchers, sparking arguably the most academically rigorous internet flame war of all time.

Deep learning shmeep learning

None of these terms has a well-defined meaning, and the connotations associated with them are inconsistent, even within the academic community itself. For our purposes, we are simply interested in machines which do interesting things - which can do them now, or may be able to do them in the near future.

Core principles of this book

Mathemagics

Plenty of academic courses on machine learning already exist. This book is not trying to replace them; rather, it tries to find a middle ground between science and magic, assuming as little background in computer science, engineering, and math as possible.

Software

The software accompanying this book tries to find a middle way as well: it aims to be methodical and easy to install, and to come with as many documented examples as possible.

Upkeep

Machine learning is a fast-moving field, and a quarter of the first draft of this book was obsolete by the time I finished writing it. I will try to keep up with the winds of change, and I welcome tips and contributions [github][twitter][email].

If you have suggestions, corrections, or links to relevant projects, post them to the issue tracker or chat with me.

Math

In general, this book will try to minimize the use of math and rely on visual aids more than equations, both because neural networks can be well understood this way and because it helps reduce the prerequisites. Nevertheless, you may still find it helpful to review the underlying math in order to attach probabilistic and spatial, geometric interpretations to what you see. Francis Tseng has a nice guide to AI and ML which contains a concise review of the math needed. [ some more links ]

One of the things we can do away with is much of the detail of training deep networks, focusing instead on applications (a vindication for me, as that is all I used to do).

etc

  • machine learning as a subject for artistic practice and interdisciplinary research
  • https://docs.google.com/document/d/1wmJtZ8QTeE6nBCT17iWWVwNvhfOm0mPA8smNNGG2Agw/edit

  • These notes and the accompanying course will be less rigorous than CS231n and similar courses. Those courses assume a certain level of mathematical proficiency, and the majority of their students have some background, if not a degree, in computer science. In contrast, this course will make as few assumptions as possible about background preparation. In practice, this necessarily entails some sacrifice; nothing beats having a ___. But many important aspects of machine learning can be well understood with nothing more than high school-level mathematics and good analogies and abstractions. It is both possible and desirable to engage with the subject at this level, whereas many readers might otherwise not approach it at all. One example of this in practice is backpropagation, sketched just below this list.
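
To make that concrete, here is a minimal sketch of backpropagation for a single sigmoid neuron, using nothing beyond arithmetic and the chain rule. The toy dataset (an OR-like function), the initial weights, the learning rate, and the number of steps are all arbitrary choices for illustration; a real deep network applies the same chain rule through many layers of such neurons.

```python
# A minimal sketch of backpropagation for one sigmoid neuron, using only
# arithmetic and the chain rule. Data, weights, learning rate, and step
# count are arbitrary illustrative choices, not a real training recipe.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: inputs (x1, x2) and targets, roughly the OR function.
data = [((0.0, 0.0), 0.0),
        ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0),
        ((1.0, 1.0), 1.0)]

w1, w2, b = 0.1, -0.2, 0.0   # initial weights and bias (arbitrary)
learning_rate = 1.0

for step in range(2000):
    for (x1, x2), target in data:
        # Forward pass: weighted sum, then squashing nonlinearity.
        z = w1 * x1 + w2 * x2 + b
        y = sigmoid(z)

        # Squared error loss: E = 0.5 * (y - target)^2
        # Backward pass: apply the chain rule step by step.
        dE_dy = y - target        # derivative of the loss w.r.t. the output
        dy_dz = y * (1.0 - y)     # derivative of the sigmoid
        dE_dz = dE_dy * dy_dz     # chain them together
        dE_dw1 = dE_dz * x1       # z = w1*x1 + w2*x2 + b, so dz/dw1 = x1
        dE_dw2 = dE_dz * x2
        dE_db = dE_dz

        # Gradient descent: nudge each parameter against its gradient.
        w1 -= learning_rate * dE_dw1
        w2 -= learning_rate * dE_dw2
        b -= learning_rate * dE_db

# After training, the neuron's outputs sit close to the targets.
for (x1, x2), target in data:
    y = sigmoid(w1 * x1 + w2 * x2 + b)
    print((x1, x2), round(y, 2), "target:", target)
```

Every step above is high school math: multiply, add, take a derivative of a simple function, and chain the derivatives together.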

Linear algebra online class: https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab ; eigenvectors and eigenvalues, explained visually: http://setosa.io/ev/eigenvectors-and-eigenvalues/

A ‘Brief’ History of Neural Nets and Deep Learning, Part 1: www.andreykurenkov.com/writing/a-brief-history-of-neural-nets-and-deep-learning/

Visual intro: http://www.r2d3.us/visual-intro-to-machine-learning-part-1/

A primer on deep learning (img): https://www.datarobot.com/blog/a-primer-on-deep-learning/
DeepLearning.tv: https://www.youtube.com/channel/UC9OeZkIwhzfv-_Cb7fCikLQ

http://www.computervisionblog.com/2015/01/from-feature-descriptors-to-deep.html

Yann LeCun's introduction videos: https://code.facebook.com/pages/1902086376686983

http://deeplearninggallery.com/

http://frnsys.com/ai_notes/