This book is intended for students finishing a bachelor's degree or enrolled in a master's program in computer science or applied mathematics, as well as engineering students. Deep learning has revolutionized artificial intelligence and has spread very rapidly across many fields of activity. Through a project-oriented approach, this book aims to explain the fundamentals of deep learning, from feedforward neural networks to unsupervised networks. Designed as a concise textbook with lessons and exercises, it draws on examples from domains such as computer vision, natural language understanding, and reinforcement learning. These examples are worked through with the TensorFlow software. The theoretical concepts are illustrated and complemented by about forty exercises, half of which come with solutions.
This review is written from the perspective of an intermediate machine learning student. Specifically, I have taken a year of rigorous university machine learning coursework and a semester of general AI coursework. The last month of my most recent ML course focused on deep learning, but implementation was limited to writing a feedforward neural network with NumPy. We did not discuss or use TF, Keras, or any other deep learning libraries.

This book picked up roughly where my class left off. It follows a logical sequence of major domains in deep learning and introduces TF from the first chapters at a level approachable to a complete novice (like me). The figures were largely helpful for synthesizing the ideas, and the author's own "takes" on the algorithms and how they function were often, but not always, useful.

My main complaints concern the quality of the code provided and the editing of the book overall. First, the code provided is no longer the recommended TF usage. I know this is difficult from the author's perspective. However, it took me hours of reading the TF documentation to learn just how much of the provided code, including sessions, variable scopes, and so on, is staged for obsolescence. Second, the code itself is incomplete and inconsistent in places, occasionally even with mismatched variables. The worst section of the book in these respects was chapter 4, covering RNNs. The code provided is NOT sufficient to reconstruct the author's example, especially because he does not explicitly discuss the dimensions of the tensors involved (perhaps the most crucial of details). I will say the book improves in these respects in later chapters and includes more complete code and discussion toward the end. Third, there are typos and poorly edited sections throughout, including lines similar to "I can't remember this [easily searched] detail off the top of my head, so we will skip it," followed a few lines later by, "I actually did look it up and [this] was the answer."

In summary, it was a useful introduction to TF and many of these deep learning disciplines, but I ended up running to the official TF tutorials and GitHub for answers more often than I'd like. I wish the code were current and ready for TF 2.0.
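To illustrate the kind of API shift I mean, here is a minimal sketch (my own example, not code from the book) contrasting the session-based TF 1.x pattern the book relies on with its TF 2.x eager equivalent. The variable names are purely illustrative.

```python
# TF 1.x style, as used in the book (deprecated in TF 2.x):
# build a graph, then run it inside a Session with feed_dict.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 3))          # symbolic input
w = tf.get_variable("w", shape=(3, 1))                    # variable created via variable scope machinery
y = tf.matmul(x, w)                                       # node in the static graph

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    result = sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]})  # explicit execution step
```

```python
# TF 2.x style: eager execution, no Session or placeholders needed.
import tensorflow as tf

w = tf.Variable(tf.random.normal((3, 1)))   # plain Variable object
x = tf.constant([[1.0, 2.0, 3.0]])          # concrete tensor
y = tf.matmul(x, w)                         # runs immediately and returns a value
```

Almost every example in the book follows the first pattern, which is why so much of it needs rewriting before it is useful against current TF.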
An excellent book for an introduction to deep learning. I picked up this book to help me understand deep learning within AI.
This might serve as a small piece of scaffolding for my PhD thesis. If you're new to this topic, I think this would be a good introduction. I also think the references and citations are useful for getting to know the researchers and scholars involved in this field.