Introduction to Deep Learning is the complete guide to writing deep learning programs with the widely used Python language and TensorFlow programming environment. Building on his pioneering undergraduate and graduate courses, Brown University professor Eugene Charniak covers every key concept and technique, including feed-forward neural nets, convolutional neural nets, word embeddings, recurrent neural nets, sequence-to-sequence learning, deep reinforcement learning, unsupervised models, and more. Each chapter contains a full programming project and a set of exercises carefully crafted to help readers build mastery, as well as additional readings and references for even deeper insight.
This review is written from the perspective of an intermediate machine learning student. Specifically, I have taken a year of rigorous university machine learning coursework and a semester of general AI coursework. The last month of my most recent ML course focused on deep learning, but implementation was limited to writing a feed-forward neural net with NumPy; we did not discuss or use TensorFlow, Keras, or any other deep learning library.

This book picked up roughly where my class left off. It follows a logical sequence of major domains in deep learning and introduces TensorFlow from the first chapters at a level approachable to a complete novice (like me). The figures were largely helpful for synthesizing the ideas, and the author's own "takes" on the algorithms and how they function were often, though not always, useful.

My foremost complaints concern the quality of the provided code and the editing of the book overall. First, the code no longer reflects current recommended TensorFlow usage. I know this is difficult from the author's perspective, but it took me hours of reading the TensorFlow documentation to learn just how much of the provided code, including sessions, variable scopes, etc., is staged for obsolescence. Second, the code itself is incomplete and inconsistent in places, occasionally even with mismatched variable names. The worst section of the book in this regard was chapter 4, covering RNNs: the code provided is NOT sufficient to reconstruct the author's example, especially because he never explicitly discusses the dimensions of the tensors involved (perhaps the most crucial detail). I will say the book improves on both counts in the later chapters, which include more complete code and discussion.
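To illustrate the kind of dimension bookkeeping I found missing, here is a minimal NumPy sketch (my own, not the book's code, with hypothetical sizes) of a single vanilla RNN step with every tensor shape written out:

```python
import numpy as np

# Hypothetical sizes, not taken from the book: a batch of 2 sequences,
# 5 time steps, 4 input features, and a hidden state of size 3.
batch, steps, n_in, n_hid = 2, 5, 4, 3

x = np.zeros((batch, steps, n_in))   # input:  (batch, time, features)
W_x = np.zeros((n_in, n_hid))        # input-to-hidden weights
W_h = np.zeros((n_hid, n_hid))       # hidden-to-hidden weights
b = np.zeros(n_hid)                  # hidden bias
h = np.zeros((batch, n_hid))         # hidden state: (batch, hidden)

for t in range(steps):
    # One vanilla-RNN step; the shapes must line up as
    # (batch, n_in) @ (n_in, n_hid) + (batch, n_hid) @ (n_hid, n_hid).
    h = np.tanh(x[:, t, :] @ W_x + h @ W_h + b)

print(h.shape)  # (2, 3)
```

Spelling out even this much would have made the chapter 4 example reconstructible.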
Third, there are typos and poorly edited passages throughout, including lines to the effect of "I can't remember this [easily searched] detail off the top of my head, so we will skip it," followed a few lines later by "I actually did look it up and [this] was the answer." In summary, this was a useful introduction to TensorFlow and to many of these deep learning subfields, but I found myself turning to the official TensorFlow tutorials and GitHub for answers more often than I'd like. I wish the code were current and ready for TF 2.0.
An excellent book for getting an introduction to deep learning. I picked up this book to help me understand deep learning within AI.
This might serve as a small scaffold for my PhD thesis. If you're new to this topic, I think this would be a good introduction. The references and citations are useful for identifying the researchers and scholars involved in this field.