Chapter 1: What is Deep Learning? Review the history of deep learning, describe where the field stands today, and discuss the general goals the book has for the reader's progression. No. of pages: 10
Chapter 2: A Review of Notation, Vectors and Matrices. Establish in the reader an understanding of these topics so that they can follow the models described later. Topics discussed include notation, vectors, matrices, inner products, norms, and linear equations. No. of pages: 50
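As an illustrative sketch of the material Chapter 2 reviews, the inner product and the Euclidean norm can be written in a few lines. This is plain Python chosen for illustration; the proposal does not specify the book's implementation language:

```python
import math

def dot(u, v):
    # Inner product: sum of elementwise products; the vectors must
    # have the same dimension.
    assert len(u) == len(v), "dimension mismatch"
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    # Euclidean (L2) norm: the square root of the inner product of v
    # with itself.
    return math.sqrt(dot(v, v))

u = [1.0, 2.0, 2.0]
print(dot(u, u))   # 9.0
print(norm(u))     # 3.0
```

The dimension check matters: the inner product is only defined for vectors of equal length, which is exactly the kind of bookkeeping the chapter's notation section is meant to instill.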
Apress Proposal Submission Form for Prospective Authors
Chapter 3: A Review of Optimization. Discuss and review optimization concepts and how they are used in deep learning models. Topics discussed include constrained and unconstrained optimization, gradient descent, and Newton's method. No. of pages: 60
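Gradient descent, one of the topics this chapter covers, can be sketched in a few lines for a one-dimensional objective. This is a minimal illustration in plain Python, not the book's own code; the function f(x) = (x - 3)^2 and the learning rate are chosen here purely for demonstration:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly step against the gradient: x <- x - lr * grad(x).
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f(x) = (x - 3)^2 has gradient f'(x) = 2(x - 3) and its minimum at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)  # converges toward 3.0
```

The same update rule, applied coordinate-wise to a vector of weights, is the workhorse behind training every network in the later chapters.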
Chapter 4: Single-Layer Artificial Neural Networks (ANNs). Introduce readers to ANNs, their uses, and the math that powers the model, and discuss their limitations. No. of pages: 10
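The math behind a single artificial neuron, which this chapter introduces, is a weighted sum plus a bias, passed through an activation function. A minimal sketch in plain Python, with the sigmoid chosen here as one common activation (the chapter may use others):

```python
import math

def sigmoid(z):
    # Squashes any real input into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, w, b):
    # A single neuron: activation of the weighted sum of inputs plus bias.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

# Example inputs, weights, and bias (arbitrary values for illustration).
out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
print(out)  # a value strictly between 0 and 1
```

A single layer of such neurons can only draw linear decision boundaries, which is precisely the limitation the chapter discusses and the multi-layer networks of Chapter 5 overcome.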
Chapter 5: Deep Neural Networks (Multi-Layer ANNs). Establish the difference between single- and multi-layer ANNs, and discuss the nuances that arise from having multiple hidden layers. No. of pages: 10
Chapter 6: Convolutional Neural Networks (CNNs). Build on the neural network foundations described earlier and branch into other models, such as CNNs. Here we establish what a convolutional layer is and what this model is used for, such as computer vision and processing visual data. No. of pages: 10
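The core operation of the convolutional layer this chapter defines can be sketched in one dimension: slide a small kernel along the input and take a dot product at each position. A minimal "valid"-mode illustration in plain Python (real CNN layers work on 2-D images with many channels, but the principle is the same):

```python
def conv1d(signal, kernel):
    # 'Valid' 1-D convolution (cross-correlation, as used in most
    # deep learning libraries): one dot product per kernel position.
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A [1, 0, -1] kernel acts as a simple edge detector.
print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # [-2, -2]
```

Because the same small kernel is reused at every position, a convolutional layer has far fewer parameters than a fully connected one, which is what makes these models practical for visual data.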
Chapter 7: Recurrent Neural Networks (RNNs). Describe the mathematics and intuition behind RNNs and their use cases, such as handwriting recognition and speech recognition. Also describe how their unique structure differentiates them from feedforward networks. No. of pages: 10
Chapter 8: Deep Belief Networks and Deep Boltzmann Machines. Discuss the similarities between these two models, and their advantages and disadvantages in contrast to the deep learning models described earlier. No. of pages: 20
Chapter 9: Tuning and Training Deep Network Architectures. Establish an understanding of how to properly train deep network models and tune their parameters so as to avoid common pitfalls such as overfitting. No. of pages: 20
Chapter 10: Experimental Design and Variable Selection. Now that the reader understands various deep learning models and the concepts that power them, establish how to properly design and perform experiments, including the examples given in the later part of the text. Topics discussed include Fisher's principles, Plackett-Burman designs, statistical control, and variable selection techniques.
This is probably the worst textbook I have ever had the displeasure of reading; I gave up on page 30. It is just awful. The language is horrible, and the math is plainly and simply wrong more often than not. If you write about linear algebra and you are incapable of defining addition and subtraction for vectors or multiplication for matrices, and your examples would never make it through a computer because you get the dimensions wrong in more than half the cases (despite the title mentioning a language that does support linear algebra), then you just shouldn't be writing about linear algebra.
No, this book is just horrible.
I don't trust for a second that any of the material that might be new to me in later chapters would be correct if all the simple stuff, that I do know quite well, is wrong more often than not.
If you are interested in a laugh, though, please do read the mathematical review in chapter 2. I'll sell you my copy of the book if you are quick. But you have to be quick because at some point I will be out of kindling and then I can find a use for this piece of...