Looking for one central source where you can learn key findings on machine learning? Deep Learning: The Definitive Guide provides developers and data scientists with the most practical information available on the subject, including deep learning theory, best practices, and use cases.
Authors Adam Gibson and Josh Patterson present the latest relevant papers and techniques in a nonacademic manner, and implement the core mathematics in their DL4J library. If you work in the embedded, desktop, and big data/Hadoop spaces and really want to understand deep learning, this is your book.
I just finished reading the "early release" — that is, the raw and unedited text, as the O'Reilly website describes it. That is no lie: there are typos and minor errors, as well as several missing sections.
I am disappointed. I found this book sits halfway between theory and practice but excels at neither. For instance, I found the presentations of Boltzmann machines (BM), recurrent neural networks (RNN), and convolutional neural networks (CNN) too brief and shallow, and I had to go through other resources to understand them. I don't know how helpful the DL4J examples are for the practitioner, as I did not try to run them. As the technology evolves quickly, I expect this book will soon be out of date. I still liked the general guidance about when to use which algorithm (optimisation, weight initialization, regularisation, etc.).
If, like me, you are looking for a resource to help you learn about deep learning, you may want to keep searching.
I expected to read this over a much longer period of time than I did -- I ended up tearing through most of it in one night. But even as a very high-level overview of the state of deep learning, I feel like I learned a lot -- in fact, by not having the opportunity to really mess with the DL4J APIs and instead focusing on the high-level approach, I might have gotten more out of this book than I would have otherwise. (I have no shortage of detailed papers and books gathering dust, because I stalled out on understanding particular details.) At the same time, its focus on being 'for practitioners' gave some useful tips that I will know to look back to (e.g., when it says that almost any value for the dropout probability away from the extremes tends to work about the same, so it is most often just left at 0.5 -- that's exactly the sort of rule of thumb that I really value).
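For anyone curious what that 0.5 rule of thumb means mechanically, here is a minimal self-contained Java sketch of inverted dropout (my own illustration, not code from the book or from DL4J): each activation is zeroed with probability 1 - keepProb, and survivors are scaled by 1/keepProb so the expected total activation is unchanged.

```java
import java.util.Random;

public class DropoutSketch {
    // Inverted dropout: zero each activation with probability (1 - keepProb),
    // scaling the survivors by 1/keepProb so the expected sum is preserved.
    public static double[] dropout(double[] activations, double keepProb, Random rng) {
        double[] out = new double[activations.length];
        for (int i = 0; i < activations.length; i++) {
            if (rng.nextDouble() < keepProb) {
                out[i] = activations[i] / keepProb;
            } // else the unit stays 0.0 for this forward pass
        }
        return out;
    }

    public static void main(String[] args) {
        double[] acts = new double[1000];
        java.util.Arrays.fill(acts, 1.0);
        double[] dropped = dropout(acts, 0.5, new Random(42));
        double sum = 0;
        int zeros = 0;
        for (double v : dropped) { sum += v; if (v == 0.0) zeros++; }
        System.out.printf("kept sum ~ %.1f, zeroed %d of 1000%n", sum, zeros);
    }
}
```

Running it shows why the exact probability matters little: with keepProb anywhere in a broad middle range, roughly that fraction of units survive and the rescaling keeps the layer's expected output the same.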
I would recommend this book to anybody who has some basic familiarity with AI techniques/NN, but has not confronted the breadth of development in the past decade.
This book reads more like an outline and the author's personal notes - it feels as though you are reading assorted Wikipedia articles rather than a cohesive book. Beyond that, the Russian translation is somewhat detached from the subject area: "верность" (as a rendering of accuracy), "мини-пакеты" (mini-batches), "ОЕЯ" (NLP), and "СНС" (CNN) sound rather strange, though perhaps some people do use these variants.
As other readers have pointed out, it's a practitioner's guide. There is not much depth in the theory behind the DL algorithms, but it contains some useful tips for models. I'd say this book is at a very high level, and readers should read Deep Learning by Ian Goodfellow and Yoshua Bengio for the theory.
I felt it too keenly avoided showing the mathematical basis of the concepts (you can't really understand many of these things properly without it; perhaps others would disagree). It was interesting to see how Java frameworks operate; I may revisit it.
Good for a basic overview of the concepts, but the code is done in Java. If they implemented the concepts in Python, it would be more helpful to most developers and researchers.
This book covers a lot of ground. That fact is both a blessing and a flaw. It dips your toes into a vast number of topics related to deep learning that are reasonably up to date. It's ideal for a person who already understands calculus and has maybe used an API like PyTorch or TensorFlow to do basic things.